# FeatureOnto: A Schema on Textual Features for Social Data Analysis

Sumit Dalal ${}^{1 * }$, Sarika Jain ${}^{1}$ and Mayank Dave ${}^{2}$

${}^{1}$ Department of Computer Applications, National Institute of Technology, Kurukshetra, India
${}^{2}$ Computer Engineering, National Institute of Technology, Kurukshetra, India

sumitdala19050@gmail.com

Abstract. Social media is a valuable information source that presents researchers with a wealth of data. This information is mainly analyzed with machine learning and deep learning methods, whose outputs lack semantics and interpretability, and much of the effort there goes into feature engineering. We present a taxonomy of the feature categories learned during training when analyzing textual information, specifically the information available on social platforms. This ontological view of the data represents knowledge in a more understandable form and helps interpret machine learning results for various social data analysis tasks. We chose depression as the use case. The ontology is designed with the Web Ontology Language (OWL) and the Resource Description Framework (RDF) in Protégé, and it is validated against a set of designed competency questions.

Keywords: Deep Learning, Depression, Knowledge Graph, Machine Learning, Ontology, Social Data, Twitter.

## 1 Introduction

Mental health is an essential aspect of a productive and energetic human life. People tend to ignore their mental health for various reasons, such as inaccessible health services and limited time for themselves. Nevertheless, technological advancements give researchers an opportunity for pervasive monitoring that includes users' social data in their mental health assessment without interfering with their daily life. People share their feelings, emotions, and daily activities related to work and family on social media platforms (Facebook, Twitter, Reddit). These posts can be mined for features, or for particular words and phrases, that help assess whether a user has depression.

Various machine learning and deep learning methods have been devised and applied for mental health assessment from users' social data. These techniques mainly consider correlational or structural information of the text for classification and miss the contextual information of the domain. Analyzing social data with traditional statistical and machine learning approaches has limitations such as poor big-data handling capacity and the lack of semantics and contextual or background knowledge. Deep learning approaches have recently become widespread, but interpretability remains a significant issue. A hybrid approach that handles both semantics and big data should therefore be considered for better results.

Contextual information can be represented by a logic-based model [McCarthy, J. 1993], key-value pairs [Schilit, B., 1993], an object-oriented model [Schmidt, A., 1999], UML diagrams [Sheng, Q. Z., & Benatallah, B. 2005], or a markup schema. Nevertheless, these models have limited capacity for representing real-world situations. We propose to develop an ontology to represent the domain information. An ontology is a formalization of a domain's knowledge [Gruber, T. R. 1995]; its main purpose is to share reusable domain knowledge between agents (users or software) in a language understandable to them. Ontologies have been developed and used in many application domains. [Konjengbam A. 2018, Wang D. 2018] design ontologies for analyzing user reviews in social media. [Malik S. & Jain S. 2021, Allahyari M. 2014] employ ontologies for text document classification, while [Taghva, K., 2003] uses an ontology for email classification. [Dutta B. & DeBellis M. 2020, Patel A. 2021] developed ontologies for collecting and analyzing COVID-19 data. [Magumba, M. A., & Nabende, P. 2016] develop an ontology for disease event detection from Twitter. [Chowdhury, S., & Zhu, J. 2019] use topic modeling to extract essential topics from transportation planning documents in order to construct an intelligent transportation infrastructure planning ontology. However, ontology-based techniques for depression classification and monitoring from social data remain insufficiently studied.

Machine learning and statistical approaches consider limited contextual information, and their results are not easy to interpret; for this reason, deep learning models are considered black boxes. Personalization of the system is another issue that needs attention. For implementation purposes, we chose depression as the domain. We aim to develop the underlying ontology for a personalized, disease-specific knowledge graph to monitor a depressive user through his publicly available textual social data. The ontology is designed with the Web Ontology Language (OWL) and the Resource Description Framework (RDF) in Protégé and is validated against designed competency questions.

Our contributions in this paper are as follows:

1. We develop the FeatureOnto ontology for analyzing social media posts. The features of social posts manipulated by machine learning and deep learning techniques are arranged in a taxonomy; this structured view helps interpret the output produced.

2. We write competency questions to describe the scope of FeatureOnto for detecting and monitoring depression through social media posts.

The remainder of the paper is organized into four sections. Section 2 discusses the related literature. Section 3 presents the FeatureOnto development approach and its scope. Section 4 discusses the conceptual design of FeatureOnto and its evaluation. The conclusion and future work are discussed in the last section.

## 2 Literature

This section discusses previous research that develops depression ontologies from various sources or employs available ontologies for depression detection and monitoring.

## a. Ontology-Based Sentiment Analysis.

Sentiment analysis is a crucial aspect of detecting depression from social posts, but it is also useful in applications beyond mental health assessment. Sentiment extraction from user posts and reviews is a popular application that considers the affective features of the posts. [Sykora, M., 2013] consider eight emotion categories to develop an emotion ontology for sentiment classification. [Saif, H., 2012] employ entity extraction tools to extract entities and map semantic concepts from user reviews, and use the extracted semantic features together with unigrams for Twitter sentiment analysis. [Kardinata E. A. 2021] apply an ontology-based approach to sentiment analysis.

## b. Ontology in the Healthcare Domain.

Ontologies have been employed in the healthcare domain for quite a long time. [Batbaatar, E., & Ryu, K. H. 2019] employ the Unified Medical Language System (UMLS) ontology to extract health-related named entities from user tweets. [Krishnamurthy, M. 2016] use the DBpedia, Freebase, and YAGO2 ontologies for determining the behavioral addiction categories of social users. In [Kim, J., & Chung, K. Y. 2014], the authors develop an ontology as a bridge between device- and space-specific ontologies for a ubiquitous and personalized healthcare service environment. [Lokala, U., 2020] build an ontology as a catalog of drug abuse, use, and addiction concepts for social data investigation. [On, J., 2019] extract concepts and their relations from clinical practice guidelines, literature, and social posts to build an ontology for social media sentiment analysis on childhood vaccination. [Alamsyah, A., 2018] build an ontology with personality traits and their facets as classes and sub-classes, respectively, for personality measurement from Twitter posts. [Ali, F., 2021] design a monitoring framework for diabetes and blood-pressure patients that considers various available ontologies of the medical domain along with patients' medical records, wearable sensor data, and social data.

## c. Depression Monitoring & Ontology.

Authors employ ontologies in depression diagnosis and monitoring, either building an ontology or using available ones. [Martín-Rodilla, Patricia 2020] propose adding a temporal dimension to an ontology for analyzing a depressed user's linguistic patterns and the ontology's evolution over time in his social data. [Benfares, C. 2018] represent explicitly defined patient data, self-questionnaires, and diagnosis results in a semantic network for preventing and detecting depression among cancer patients. [Birjali, M. 2017] construct a vocabulary of suicide-related themes and divide them into subclasses according to the degree of threat; WordNet is further used for semantic analysis of machine learning predictions of suicide sentiment on Twitter. Some works that build an ontology for depression diagnosis are discussed below. We assign each ontology a unique id such as O1, O2, etc.; these ids are used in Table 2 to refer to the particular ontology.

O1. [Petry, M. M. 2020] provide a ubiquitous ontology-based framework to assist the treatment of people suffering from depression. The ontology consists of concepts related to the user's depression, person, activity, and depression symptoms. Activity has subclasses related to social network, email, and geographical activities. Person has a subclass PersonType, which further divides a person into User, Medical, and Auxiliary. We are not sure whether a patient's history is considered.

O2. [Kim, H. H., 2018] extract concepts and their relationships from posts on DailyStrength to develop the OntoDepression ontology for depression detection, using posts of family caregivers of Alzheimer's patients. OntoDepression has four main classes: Symptoms, Treatments, Feelings, and Life. Symptoms are categorized into general, medical, physical, and mental. Feelings represent positive and negative aspects. The Life class captures what the family caregivers talk about, and Treatments represents concepts of medical treatment.

O3. [Jung, H., 2016/2017] develop an ontology from clinical practice guidelines and related literature to detect depression in adolescents from their social data. The ontology consists of five main classes, including measurement, diagnostic result & management care, risk factors, and sign & symptoms.

O4. [Chang, Y. S., 2013] build an ontology for depression diagnosis using Bayesian networks. The ontology consists of three main classes: Patient, Disease, and Depression_Symptom. Depression symptoms are categorized into 36 symptoms.

O5. [Hu, B., 2010] develop an ontology based on Cognitive Behavioral Therapy (CBT) to diagnose depression among online users; their focus is to lower the threshold of access to online CBT. The ontology consists of patient, doctor, patient record, and treatment diary concepts.

The work in [Cao, L., et al. 2020] creates an ontology for social media users to detect suicidal ideation from their knowledge graphs. Their work is similar to ours, but they consider a limited feature taxonomy; moreover, we focus on depression detection from a personalized knowledge graph.

Table 2 compares the distinct ontologies built in different research papers to detect and monitor depression on four parameters (Main Classes, Dimensions Covered, Entities Source Considered, and Availability & Re-usability). We extracted seven dimensions (Activity, Clinical Record, Patient Profile, Physician Profile, Sensor Data, Social Posts, and Social Profile) from the related literature; a description of each dimension, along with its dimension ID, is given in Table 1. We could not find the ontologies built by other authors online and are not sure whether they are available for reuse, so their Availability & Re-usability entries are left blank. The O1 ontology has scope over almost all the dimensions we have considered.

Table 1. Description of the different dimensions considered.

| Dimension | Dimension ID | Description |
|---|---|---|
| Activity | D1 | This facet covers physical movements, social platforms, daily life activities, etc. |
| Clinical Record | D2 | Related to the patient profile; provides historical context and covers clinical tests, physician observations, treatment diary, schedules, etc. |
| Patient Profile | D3 | Covers disease symptoms, education, work condition, economic and relationship status, family background, etc. |
| Physician Profile | D4 | Describes a physician in terms of his expertise, experience, etc. |
| Sensor Data | D5 | Related to smartphone, body, and background sensors. |
| Social Posts | D6 | Affiliated with the content of a user's posts on SNSs. |
| Social Profile | D7 | A social media profile provides an essential aspect of user personality. |

Table 2. Comparison of FeatureOnto and the depression ontologies used in the literature.

| | Main Classes | Dimensions Covered | Entities Source Considered | Availability & Re-usability |
|---|---|---|---|---|
| O1 | Depression, Symptom, Activity | D1, D2, D3, D4, D5, D6 | Literature | ... |
| O2 | Symptoms, Treatments, Life, Feelings | D1, D6 | SNSs | ... |
| O3 | Diagnostics, Subtypes, Risk Factors, Sign & Symptoms, Intervention | D6 | CPG, Literature, SNSs, FAQs | ... |
| O4 | Patient, Disease, Symptom | D2, D3 | Literature | ... |
| O5 | Patient, Doctor, Activity, Diagnosis, Treatment Diary | D1, D2, D3, D4 | General Scenario | ... |
| Our Approach | Patient, Symptom, Posts, User Profile, Feature | D2, D3, D6, D7 | Literature | Yes |

## 3 Designing FeatureOnto Ontology

The focus of the ontology development is to analyze social textual data and interpret the results produced by machine learning or deep learning models. Authors mainly focus on n-gram features of social media posts, but FeatureOnto also considers other features. We follow the 'Ontology Development 101' methodology for FeatureOnto development [Noy, N. F., & McGuinness, D. L. 2001], applying an iterative process over the ontology lifecycle.

## Step 1. Determining the Domain and Scope of the Ontology

We create a list of competency questions to determine the ontology's domain and scope [Grüninger, M., & Fox, M. S. 1995]; FeatureOnto should be able to answer these questions (e.g., "What are the textual features of social media posts?"), and it will be evaluated against them. Tables 3a and 3b provide a sample of competency questions: the questions in 3a check the ontology schema, i.e., the ontology without any instances, while those in 3b are derived with the use case of depression monitoring of a social user in mind. The queries of Table 3b are out of scope for this paper, as we only present the schema here.

Table 3a. Schema-based competency questions.

1. Retrieve the labels of every subclass of the class Content.

2. "Topics" is the subclass of which class?

3. What type of feature is "Anger"?

Table 3b. Knowledge-graph-based competency questions.

1. What is the sleeping pattern of a user/patient (the user can be a normal patient)?

2. In which hour does the user message frequently?

3. How many posts have low valence in a week?

4. What is the emotional behavior pattern, taking a week as the unit?

5. What is the daily/weekly average frequency of negative emotions?

6. Compare the daily/weekly/overall average number of first-person pronouns with second/third-person pronouns.

7. What are the topics of interest for a depressed user?

8. Are anger-related words used frequently or not?

9. Find the pattern of psycholinguistic features.

## Step 2. Re-using Existing Ontologies

We searched for available conceptual frameworks and ontologies on social data analysis at BioPortal [Musen, M. A. 2012], the OBO Foundry, and the LOD cloud. Ontologies representing sentiment analysis, depression classification, or other social media analysis tasks were searched on the web (Google Scholar, PubMed) and in the literature for the required concepts and relationships. Despite a comprehensive search, we could not find a suitable ontology that could be fully re-used, but we found several ontologies from which one or more classes can be inherited; most of the inherited classes are given attributes as per our requirements. Table 4 shows our efforts toward implementing the reusability principle of the semantic web. Figure 2, presented in the next section, gives a diagrammatic representation of the inherited entities: different colors define each schema, and the solid and dotted lines show immediate and remote child-parent relations between classes. Most inherited entities belong to Schema, MFOEM, and HORD, while APAONTO and Obo are the least inherited ontologies. We did not find suitable classes for UniGrams, BiGrams, Emoticon, and POSTags, so we use our own schema to represent these classes.

Table 4. Entities and namespaces considered in FeatureOnto.

| Entity | Sub-Entities | Schema Selected | Available Schemas |
|---|---|---|---|
| Content | UniGrams, BiGrams, POSTags | ... | ... |
| Emoticon | ... | ... | ... |
| Emotion | Arousal, Positive, Negative | MFOEM | MFOEM, SIO, VEO |
| Emotion | Dominance | APAONTO | APAONTO, FB-CV |
| GenderType | ... | Schema | Schema, GND |
| Person | Patient | Schema | FOAF, Schema, Wikidata, DUL |
| Person | User | HORD | NCIT, SIO, HORD |
| Post | ... | HORD | HORD |
| Psycholinguistic | Anger, Anxiety, Sad | MFOEM | MFOEM, SIO, VEO, NCIT |
| Psycholinguistic | Pronoun | ... | ... |
| Symptoms | ... | Obo | NCIT, SYMP, RADLEX, Obo |
| Topic | ... | ... | EDAM, ITO |

## Step 3. Extracting Terms and Concepts

Keeping our use case in mind, we read the literature on detecting depression and mental disorders from social data using machine learning or lexicon-based approaches and extracted terms related to the features considered for classification. Different textual features are extracted and learned in the machine learning or deep learning training phase [Dalal, S., 2019, Dalal, S., & Jain, S. 2021], e.g., bigrams, unigrams, and positive or negative sentiment words. Table 4 shows the entities and sub-entities present in the FeatureOnto ontology, along with the various schemas available for each entity and the schema used for inheritance. We also searched social networking data to extract additional terms. The extracted terms are used for describing the class concepts.

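As a concrete illustration of the n-gram features named above, the extraction step can be sketched in a few lines of Python; the whitespace tokenizer and the sample post are illustrative only and not part of FeatureOnto:

```python
from collections import Counter

def ngram_features(post, n=2):
    """Count n-grams in a social media post (toy whitespace tokenizer)."""
    tokens = post.lower().split()
    grams = zip(*(tokens[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

post = "i feel sad and i feel alone"
unigrams = ngram_features(post, n=1)  # e.g., {"i": 2, "feel": 2, ...}
bigrams = ngram_features(post, n=2)   # e.g., {"i feel": 2, "feel sad": 1, ...}
```

In a real pipeline these counts would be produced per post and attached to the corresponding UniGrams/BiGrams concepts of the ontology.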
## Step 4. Developing the Ontology and Terminology
|
| 104 |
+
|
| 105 |
+
We have defined the classes and the class hierarchy using the top-down approach. The ontology is developed using Protégé [Musen, M. A. 2015]. The ontology is uploaded on BioPortal.
|
| 106 |
+
|
| 107 |
+
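As an illustrative sketch (not the released schema), the subclass axioms behind the schema-based competency questions of Table 3a could be written in Turtle as follows; the `sf:` namespace is the same placeholder used in the queries, and the placement of Content directly under Feature is an assumption:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix sf:   <http://www.domain.com/your/namespace/> .

sf:Feature          a owl:Class .
sf:Content          a owl:Class ; rdfs:subClassOf sf:Feature .          # assumed placement
sf:UniGrams         a owl:Class ; rdfs:subClassOf sf:Content ; rdfs:label "UniGrams" .
sf:BiGrams          a owl:Class ; rdfs:subClassOf sf:Content ; rdfs:label "BiGrams" .
sf:POSTags          a owl:Class ; rdfs:subClassOf sf:Content ; rdfs:label "POSTags" .
sf:Psycholinguistic a owl:Class ; rdfs:subClassOf sf:Feature .
sf:Anger            a owl:Class ; rdfs:subClassOf sf:Psycholinguistic .
sf:Topics           a owl:Class ; rdfs:subClassOf sf:Feature .
```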
## Step 5. Evaluating the Scope of the Ontology
|
| 108 |
+
|
| 109 |
+
A set of competency questions is given in Tables 3a and 3b. For scope evaluation of the FeatureOnto, answers to the SPARQL queries built on the questions from table 3a are considered. Results of the queries are discussed in the coming sections.
|
| 110 |
+
|
| 111 |
+
## 4 FeatureOnto Ontology Model
|
| 112 |
+
|
| 113 |
+
Following the steps discussed in the previous section, we design FeatureOnto ontology. A high-level view of the FeatureOnto ontology is represented in Figure no. 1 Complete FeatureOnto structure (at the current stage) has five dimensions (Patient, Symptom, Posts, User Profile, and Feature) covered by various classes in the figure. Most of the entities in our ontology belong to the Social Post dimension. The solid and the dotted line represent the property and the subclass relationship between two entities. Figure 1 gives a conceptual schema of the proposed model. FeatureOnto uses existing ontologies to pursue the basic principle of ontology implementation. Figure 2 represents the terms inherited by FeatureOnto from available schemas. Different colors define each schema. The solid and the dotted line show immediate and remote child-parent relations between classes. Most inherited entities belong to Schema, and MFOEM, while FOAF, and APAONTO are the least inherited ontologies.
|
| 114 |
+
|
| 115 |
+
## Scope Evaluation of the FeatureOnto.
|
| 116 |
+
|
| 117 |
+
Table 3a and 3b presents the competency questions related to the schema and instances. This work is related to the building of the schema only, and hence we executed queries on schema only. Below, queries are built on questions from table 3a. 9
Question 1. Retrieve the labels of every subclass of sf:Content.

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX sf: <http://www.domain.com/your/namespace/>

SELECT ?subClass ?label
WHERE { ?subClass rdfs:subClassOf sf:Content .
        ?subClass rdfs:label ?label . }
```

Results: POSTags, UniGrams, BiGrams.

Question 2. "Topics" is the subclass of which class (find the immediate parent)?

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ns: <http://www.domain.com/your/namespace/>

SELECT ?superClass
WHERE { ns:Topics rdfs:subClassOf ?superClass . }
```

Results: Feature.

Question 3. What type of feature is "Anger" (find all parents)?

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ns: <http://www.domain.com/your/namespace/>

SELECT ?superClass
WHERE { ns:Anger rdfs:subClassOf+ ?superClass . }
```

Results: Psycholinguistic, Feature.

The ontology is still under construction, but a prototype is available at https://github.com/sumitnitkkr. For generality, we have not used a concrete namespace for our own entities here.

Figure 2. Classes inherited from the available ontologies.

## 5 Conclusion

We developed the FeatureOnto ontology to provide a taxonomy of the features of social media posts, taking mental health assessment (depression classification and monitoring) as the use case. Posts carry a great deal of information about many aspects of a user, and this information can be placed into different feature categories. These features are widely used in sentiment analysis, mental health assessment, event detection, user profiling, document classification, and other natural language and image processing tasks. The ontology will be used to create a personalized depression knowledge graph in the future; for this reason, it does not focus on concepts from clinical practice guidelines and the depression literature at the current stage. We will also extend the ontology to include other depression-related concepts in the future.

## References

|
| 156 |
+
|
| 157 |
+
1. Alamsyah, A., Putra, M. R. D., Fadhilah, D. D., Nurwianti, F., & Ningsih, E. (2018, May). Ontology modelling approach for personality measurement based on social media activity. In 2018 6th International Conference on Information and Communication Technology (ICoICT) (pp. 507-513). IEEE.
|
| 158 |
+
|
| 159 |
+
2. Ali, F., El-Sappagh, S., Islam, S. R., Ali, A., Attique, M., Imran, M., & Kwak, K. S. (2021). An intelligent healthcare monitoring framework using wearable sensors and social networking data. Future Generation Computer Systems, 114, 23-43.
|
| 160 |
+
|
| 161 |
+
3. Allahyari, M., Kochut, K. J., & Janik, M. (2014, June). Ontology-based text classification into dynamically defined topics. In 2014 IEEE international conference on semantic computing (pp. 273-278). IEEE.
|
| 162 |
+
|
| 163 |
+
4. Batbaatar, E., & Ryu, K. H. (2019). Ontology-based healthcare named entity recognition from twitter messages using a recurrent neural network approach. International journal of environmental research and public health, 16(19), 3628.
|
| 164 |
+
|
| 165 |
+
5. Benfares, C., Idrissi, Y. E. B. E., & Hamid, K. (2018, July). Personalized healthcare system based on ontologies. In International Conference on Advanced Intelligent Systems for Sustainable Development (pp. 185-196). Springer, Cham.
|
| 166 |
+
|
| 167 |
+
6. Birjali, M., Beni-Hssane, A., & Erritali, M. (2017). Machine learning and semantic sentiment analysis based algorithms for suicide sentiment prediction in social networks. Procedia Computer Science, 113, 65-72.
|
| 168 |
+
|
| 169 |
+
7. Cao, L., Zhang, H., & Feng, L. (2020). Building and using personal knowledge graph to improve suicidal ideation detection on social media. IEEE Transactions on Multimedia.
|
| 170 |
+
|
| 171 |
+
8. Ceusters, W., & Smith, B. (2010). Foundations for a realist ontology of mental disease. Journal of biomedical semantics, 1(1), 1-23.
|
| 172 |
+
|
| 173 |
+
9. Chang, Y. S., Fan, C. T., Lo, W. T., Hung, W. C., & Yuan, S. M. (2015). Mobile cloud-based depression diagnosis using an ontology and a Bayesian network. Future Generation Computer Systems, 43, 87-98.
|
| 174 |
+
|
| 175 |
+
10. Chowdhury, S., & Zhu, J. (2019). Towards the ontology development for smart transportation infrastructure planning via topic modeling. In ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction (Vol. 36, pp. 507-514). IAARC Publications.
|
| 176 |
+
|
| 177 |
+
11. Dalal, S., Jain, S., & Dave, M. (2019, December). A systematic review of smart mental healthcare. In Proceedings of the 5th International Conference on Cyber Security & Privacy in Communication Networks (ICCS).
|
| 178 |
+
|
| 179 |
+
12. Dalal, S., & Jain, S. (2021). Smart mental healthcare systems. In Web Semantics (pp. 153- 163). Academic Press.
|
| 180 |
+
|
| 181 |
+
13. Dutta, B., & DeBellis, M. (2020). CODO: an ontology for collection and analysis of COVID-19 data. arXiv preprint arXiv:2009.01210.
|
| 182 |
+
|
| 183 |
+
14. Gruber, T. R. (1995). Toward principles for the design of ontologies used for knowledge sharing?. International journal of human-computer studies, 43(5-6), 907-928.
|
| 184 |
+
|
| 185 |
+
15. Grüninger, M., & Fox, M. S. (1995). Methodology for the design and evaluation of ontologies.
|
| 186 |
+
|
| 187 |
+
16. Gyrard, A., Gaur, M., Shekarpour, S., Thirunarayan, K., & Sheth, A. (2018). Personalized health knowledge graph.
|
| 188 |
+
|
| 189 |
+
17. Hadzic, M., Chen, M., & Dillon, T. S. (2008, November). Towards the mental health ontology. In 2008 IEEE International Conference on Bioinformatics and Biomedicine (pp. 284-288). IEEE.
|
| 190 |
+
|
| 191 |
+
18. Huang, Z., Yang, J., Harmelen, F. V., & Hu, Q. (2017, October). Constructing knowledge graphs of depression. In International conference on health information science (pp. 149- 161). Springer, Cham.
|
| 192 |
+
|
| 193 |
+
19. Hu, B., Hu, B., Wan, J., Dennis, M., Chen, H. H., Li, L., & Zhou, Q. (2010). Ontology-based ubiquitous monitoring and treatment against depression. Wireless communications and mobile computing, 10(10), 1303-1319.
|
| 194 |
+
|
| 195 |
+
20. Jung, H., Park, H., & Song, T. M. (2016). Development and evaluation of an adolescents' depression ontology for analyzing social data. In Nursing Informatics 2016 (pp. 442-446). IOS Press.
|
| 196 |
+
|
| 197 |
+
21. Jung, H., Park, H. A., & Song, T. M. (2017). Ontology-based approach to social data sentiment analysis: detection of adolescent depression signals. Journal of medical internet research, 19(7), e7452.
|
| 198 |
+
|
| 199 |
+
22. Kardinata, E. A., Rakhmawati, N. A., & Zuhroh, N. A. (2021, April). Ontology-Based Sentiment Analysis on News Title. In 2021 3rd East Indonesia Conference on Computer and Information Technology (EIConCIT) (pp. 360-364). IEEE.
|
| 200 |
+
|
| 201 |
+
23. Kim, J., & Chung, K. Y. (2014). Ontology-based healthcare context information model to implement ubiquitous environment. Multimedia Tools and Applications, 71(2), 873-888.
|
| 202 |
+
|
| 203 |
+
24. Kim, H. H., Jeong, S., Kim, A., & Shin, D. (2018). Analyzing Twitter Data of Family Caregivers of Alzheimer's Disease Patients Based on the Depression Ontology. In Advances in Computer Science and Ubiquitous Computing (pp. 30-35). Springer, Singapore.
|
| 204 |
+
|
| 205 |
+
25. Krishnamurthy, M., Mahmood, K., & Marcinek, P. (2016, August). A hybrid statistical and semantic model for identification of mental health and behavioral disorders using social network analysis. In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 1019-1026). IEEE.
|
| 206 |
+
|
| 207 |
+
26. Konjengbam, A., Dewangan, N., Kumar, N., & Singh, M. (2018). Aspect ontology based review exploration. Electronic Commerce Research and Applications, 30, 62-71.
|
| 208 |
+
|
| 209 |
+
27. Lokala, U., Daniulaityte, R., Lamy, F., Gaur, M., Thirunarayan, K., Kursuncu, U., & Sheth, A. P. (2020). Dao: An ontology for substance use epidemiology on social media and dark web. JMIR Public Health and Surveillance.
|
| 210 |
+
|
| 211 |
+
28. Lytvyn, V., Vysotska, V., Veres, O., Rishnyak, I., & Rishnyak, H. (2017). Classification methods of text documents using ontology based approach. In Advances in Intelligent Systems and Computing (pp. 229-240). Springer, Cham.
|
| 212 |
+
|
| 213 |
+
29. Magumba, M. A., & Nabende, P. (2016). Ontology Driven Disease Incidence Detection on Twitter. arXiv preprint arXiv:1611.06671.
|
| 214 |
+
|
| 215 |
+
30. Malik, S., & Jain, S. (2021, February). Semantic ontology-based approach to enhance text classification. In International Semantic Intelligence Conference, Delhi, India. 25-27 Feb 2021. CEUR Workshop Proceedings (Vol. 2786, pp. 85-98).
|
| 216 |
+
|
| 217 |
+
31. Martin-Rodilla, P. (2020). Adding temporal dimension to ontology learning models for depression signs detection from social media texts. In ENASE (pp. 323-330).
|
| 218 |
+
|
| 219 |
+
32. McCarthy, J. (1993). Notes on formalizing context.
|
| 220 |
+
|
| 221 |
+
33. Musen, M. A. (2015). The protégé project: a look back and a look forward. AI matters, 1(4), 4-12.
|
| 222 |
+
|
| 223 |
+
34. Musen, M. A., Noy, N. F., Shah, N. H., Whetzel, P. L., Chute, C. G., Story, M. A., ... & NCBO team. (2012). The national center for biomedical ontology. Journal of the American Medical Informatics Association, 19(2), 190-195.
|
| 224 |
+
|
| 225 |
+
35. Noy, N. F., & McGuinness, D. L. (2001). Ontology development 101: A guide to creating your first ontology.
|
| 226 |
+
|
| 227 |
+
36. On, J., Park, H. A., & Song, T. M. (2019). Sentiment analysis of social media on childhood vaccination: development of an ontology. Journal of medical Internet research, 21(6), e13456.
|
| 228 |
+
|
| 229 |
+
37. Patel, A., Debnath, N. C., Mishra, A. K., & Jain, S. (2021). Covid19-IBO: a Covid-19 impact on Indian banking ontology along with an efficient schema matching approach. New Generation Computing, 39(3), 647-676.
|
| 230 |
+
|
| 231 |
+
38. Petry, M. M., Barbosa, J. L. V., Rigo, S. J., Dias, L. P. S., & Büttenbender, P. C. (2020). Toward a ubiquitous model to assist the treatment of people with depression. Universal Access in the Information Society, 19(4), 841-854.
|
| 232 |
+
|
| 233 |
+
39. Rastogi, N., & Zaki, M. J. (2020). Personal Health Knowledge Graphs for Patients. arXiv preprint arXiv:2004.00071.
|
| 234 |
+
|
| 235 |
+
40. Saif, H., He, Y., & Alani, H. (2012, November). Semantic sentiment analysis of twitter. In International semantic web conference (pp. 508-524). Springer, Berlin, Heidelberg.
|
| 236 |
+
|
| 237 |
+
41. Schilit, B., Adams, N., & Want, R. (1994, December). Context-aware computing applications. In 1994 first workshop on mobile computing systems and applications (pp. 85-90). IEEE.
|
| 238 |
+
|
| 239 |
+
42. Schmidt, A., Beigl, M., & Gellersen, H. W. (1999). There is more to context than location. Computers & Graphics, 23(6), 893-901.
|
| 240 |
+
|
| 241 |
+
43. Sheng, Q. Z., & Benatallah, B. (2005, July). ContextUML: a UML-based modeling language for model-driven development of context-aware web services. In International Conference on Mobile Business (ICMB'05) (pp. 206-212). IEEE.
|
| 242 |
+
|
| 243 |
+
44. Singla, S. (2020). Role of Ontology in Health Care. Ontology-Based Information Retrieval for Healthcare Systems, 1-18.
|
| 244 |
+
|
| 245 |
+
45. Sykora, M., Jackson, T., O'Brien, A., & Elayan, S. (2013). Emotive ontology: Extracting fine-grained emotions from terse, informal messages.
|
| 246 |
+
|
| 247 |
+
46. Taghva, K., Borsack, J., Coombs, J., Condit, A., Lumos, S., & Nartker, T. (2003, April). Ontology-based classification of email. In Proceedings ITCC 2003. International Conference on Information Technology: Coding and Computing (pp. 194-198). IEEE.
|
| 248 |
+
|
| 249 |
+
47. Wang, D., Xu, L., & Younas, A. (2018, July). Social Media Sentiment Analysis Based on Domain Ontology and Semantic Mining. In International Conference on Machine Learning and Data Mining in Pattern Recognition (pp. 28-39). Springer, Cham.
|
| 250 |
+
|
| 251 |
+
48. Wei, D. H., Kang, T., Pincus, H. A., & Weng, C. (2019). Construction of disease similarity networks using concept embedding and ontology. Studies in health technology and informatics, 264, 442.
|
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HL9lo3_Ft-9/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,274 @@
| 1 |
+
§ FEATUREONTO: A SCHEMA ON TEXTUAL FEATURES FOR SOCIAL DATA ANALYSIS
|
| 2 |
+
|
| 3 |
+
Sumit Dalal ${}^{1 * }$ , Sarika Jain ${}^{1}$ and Mayank Dave ${}^{2}$
|
| 4 |
+
|
| 5 |
+
${}^{1}$ Department of Computer Applications, National Institute of Technology, Kurukshetra, India ${}^{2}$ Computer Engineering, National Institute of Technology, Kurukshetra, India
|
| 6 |
+
|
| 7 |
+
sumitdala19050@gmail.com
|
| 8 |
+
|
| 9 |
+
Abstract. Social media is a valuable information source that offers researchers a large amount of data. This information is mainly analyzed with machine learning and deep learning methods, which lack semantics and interpretability in their outputs and demand considerable feature-engineering effort. We present a taxonomy of the different feature categories learned during training when analyzing textual information, specifically the information available on social platforms. This ontological view of the data represents knowledge in a more understandable form and helps interpret machine learning results for various social data analysis tasks. We chose depression as the use case. The ontology is designed with the Web Ontology Language (OWL) and the Resource Description Framework (RDF) in Protégé, and it is validated with designed competency questions.
|
| 10 |
+
|
| 11 |
+
Keywords: Deep Learning, Depression, Knowledge Graph, Machine learning, Ontology, Social Data, Twitter.
|
| 12 |
+
|
| 13 |
+
§ 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Mental health is essential to living a productive and energetic life, yet people tend to ignore it for various reasons, such as inaccessible health services or limited time for themselves. Nevertheless, technological advancements give researchers an opportunity for pervasive monitoring that includes users' social data in their mental health assessment without interfering with their daily life. People share their feelings, emotions, and daily activities related to work and family on social media platforms (Facebook, Twitter, Reddit). These posts can be used to extract features, or to look for particular words and phrases, that help assess whether a user has depression.
|
| 16 |
+
|
| 17 |
+
Various machine learning and deep learning methods have been devised and applied for mental health assessment from users' social data. These techniques mainly consider correlational or structural information of the text for classification and miss contextual information about the domain. Analyzing social data with traditional statistical and machine learning approaches has limitations such as poor big-data handling capacity and the lack of semantics and contextual/background knowledge. Deep learning approaches have recently become widespread, but interpretability is a significant issue. A hybrid approach that handles both semantics and big data should therefore be considered for better results.
|
| 18 |
+
|
| 19 |
+
Contextual information can be represented by a logic-based model [McCarthy, J. 1993], key-value pairs [Schilit, B., 1994], an object-oriented model [Schmidt, A., 1999], a UML diagram [Sheng, Q. Z., & Benatallah, B. 2005], or a markup schema. Nevertheless, these models have limited capacity for representing real-world situations. We propose to develop an ontology to represent the domain information. An ontology is a formalization of a domain's knowledge [Gruber, T. R. 1995]; its main principle is to share and reuse domain knowledge among agents (users or software) in a language understandable to them. Ontologies have been developed and used in many application domains. [Konjengbam A. 2018, Wang D. 2018] design ontologies for analyzing user reviews in social media. [Malik S. & Jain S. 2021, Allahyari M. 2014] employ ontologies for text document classification, while [Taghva, K., 2003] uses an ontology for email classification. [Dutta B. & DeBellis M. 2020, Patel A. 2021] develop ontologies for collecting and analyzing COVID-19 data. [Magumba, M. A., & Nabende, P. 2016] develop an ontology for disease event detection from Twitter. [Chowdhury, S., & Zhu, J. 2019] use topic modeling to extract essential topics from transportation planning documents for constructing an intelligent transportation infrastructure planning ontology. However, ontology-based techniques for depression classification and monitoring from social data have been insufficiently studied.
|
| 20 |
+
|
| 21 |
+
Machine learning and statistical approaches consider limited contextual information, and it is not easy to interpret their results; for this reason, deep learning models are considered complete black boxes. Personalization of the system is another issue that needs attention. For implementation purposes, we chose depression as the domain. We aim to develop an underlying ontology for a personalized, disease-specific knowledge graph to monitor a depressive user through their publicly available textual social data. The ontology is designed with the Web Ontology Language (OWL) and the Resource Description Framework (RDF) in Protégé, and it is validated with designed competency questions.
|
| 22 |
+
|
| 23 |
+
Our contributions in this paper are as follows:
|
| 24 |
+
|
| 25 |
+
1. We develop the FeatureOnto ontology for analyzing social media posts. The features of social posts manipulated by machine learning and deep learning techniques are arranged in a taxonomy; this structured data helps interpret the output produced.
|
| 26 |
+
|
| 27 |
+
2. We write competency questions to describe the scope of FeatureOnto for detecting and monitoring depression through social media posts.
|
| 28 |
+
|
| 29 |
+
The remainder of the paper is organized into four sections. Section 2 discusses the related literature. The FeatureOnto development approach and its scope are discussed in Section 3. Section 4 discusses the conceptual design of FeatureOnto and its evaluation. Conclusions and future work are discussed in the last section.
|
| 30 |
+
|
| 31 |
+
§ 2 LITERATURE
|
| 32 |
+
|
| 33 |
+
This section discusses previous research that develops depression ontologies from various sources or employs available ontologies for depression detection or monitoring.
|
| 34 |
+
|
| 35 |
+
§ A. ONTOLOGY BASED SENTIMENT ANALYSIS.
|
| 36 |
+
|
| 37 |
+
Sentiment analysis is a crucial aspect of detecting depression from social posts, although it is also useful in applications beyond mental health assessment. Sentiment extraction from user posts/reviews is a popular application that considers the affective features of the posts. [Sykora, M., 2013] consider eight emotion categories to develop an emotion ontology for sentiment classification. [Saif, H., 2012] employ entity extraction tools to extract entities and map semantic concepts from user reviews; they use the extracted semantic features together with unigrams for Twitter sentiment analysis. [Kardinata, E. A., 2021] also apply an ontology-based approach to sentiment analysis.
|
| 38 |
+
|
| 39 |
+
§ B. ONTOLOGY IN HEALTHCARE DOMAIN.
|
| 40 |
+
|
| 41 |
+
In the healthcare domain, ontologies have been employed for quite a long time. [Batbaatar, E., & Ryu, K. H. 2019] employ the Unified Medical Language System (UMLS) ontology to extract health-related named entities from user tweets. [Krishnamurthy, M. 2016] use the DBpedia, Freebase, and YAGO2 ontologies to determine behavioral addiction categories of social users. In [Kim, J., & Chung, K. Y. 2014], the authors develop an ontology as a bridge between device- and space-specific ontologies for a ubiquitous, personalized healthcare service environment. [Lokala, U., 2020] build an ontology as a catalog of drug abuse, use, and addiction concepts for social data investigation. [On, J., 2019] extract concepts and their relations from clinical practice guidelines, literature, and social posts to build an ontology for social media sentiment analysis on childhood vaccination. [Alamsyah, A., 2018] build an ontology with personality traits and their facets as classes and subclasses, respectively, for personality measurement from Twitter posts. [Ali, F., 2021] design a monitoring framework for diabetes and blood pressure patients that considers various available ontologies in the medical domain along with the patient's medical records, wearable sensor data, and social data.
|
| 42 |
+
|
| 43 |
+
§ C. DEPRESSION MONITORING & ONTOLOGY.
|
| 44 |
+
|
| 45 |
+
Authors employ ontologies in depression diagnosis and monitoring, either building an ontology or using available ones. [Martin-Rodilla, Patricia 2020] propose adding a temporal dimension to an ontology for analyzing a depressed user's linguistic patterns and the ontology's evolution over time in their social data. [Benfares, C. 2018] represent explicitly defined patient data, self-questionnaires, and diagnosis results in a semantic network for preventing and detecting depression among cancer patients. [Birjali, M. 2017] construct a vocabulary of suicide-related themes and divide them into subclasses according to the degree of threat; WordNet is further used for semantic analysis of machine learning predictions of suicide sentiment on Twitter. Some works that build an ontology for depression diagnosis are discussed below. We assign each ontology a unique id (O1, O2, etc.), which is used in Table 2 to refer to the particular ontology.
|
| 46 |
+
|
| 47 |
+
O1. [Petry, M. M., 2020] provide a ubiquitous ontology-based framework to assist the treatment of people suffering from depression. The ontology consists of concepts related to the user's depression, person, activity, and depression symptoms. Activity has subclasses related to social network, email, and geographical activities. Person has a subclass PersonType, which further divides a person into User, Medical, and Auxiliary. It is unclear whether patient history is considered.
|
| 48 |
+
|
| 49 |
+
O2. [Kim, H. H., 2018] extract concepts and their relationships from posts on DailyStrength to develop the OntoDepression ontology for depression detection, using tweets of family caregivers of Alzheimer's patients. OntoDepression has four main classes: Symptoms, Treatments, Feelings, and Life. Symptoms are categorized into general, medical, physical, and mental; Feelings represent positive and negative aspects; Life captures what the family caregivers talk about; and Treatments represents concepts of medical treatment.
|
| 50 |
+
|
| 51 |
+
O3. [Jung, H., 2016/2017] develop an ontology from clinical practice guidelines and related literature to detect depression in adolescents from their social data. The ontology consists of five main classes, including measurement, diagnostic result & management care, risk factors, and signs & symptoms.
|
| 52 |
+
|
| 53 |
+
O4. [Chang, Y. S., 2015] build an ontology for depression diagnosis using Bayesian networks. The ontology consists of three main classes: Patient, Disease, and Depression_Symptom, with depression symptoms categorized into 36 symptoms.
|
| 54 |
+
|
| 55 |
+
O5. [Hu, B., 2010] develop an ontology based on Cognitive Behavioral Theory (CBT) to diagnose depression among online users; at the current stage, their focus is to lower the access threshold of online CBT. The ontology consists of patient, doctor, patient record, and treatment diary concepts.
|
| 56 |
+
|
| 57 |
+
The work in [Cao, L., et al. 2020] creates an ontology for social media users to detect suicidal ideation from their knowledge graphs. Their work is similar to ours, but they considered a limited feature taxonomy; moreover, we focus on depression detection from a personalized knowledge graph.
|
| 58 |
+
|
| 59 |
+
Table 2 compares the ontologies built in different research papers to detect and monitor depression on four parameters: Main Classes, Dimensions Covered, Entities Source Considered, and Availability & Re-usability. We extracted seven dimensions (Activity, Clinical Record, Patient Profile, Physician Profile, Sensor Data, Social Posts, and Social Profile) from the related literature; a description of each dimension, along with its ID, is given in Table 1. We could not find the ontologies built by other authors online and are not sure whether they are available for reuse, so the Availability & Re-usability column is left blank. The O1 ontology has scope over almost all the dimensions we have considered.
|
| 60 |
+
|
| 61 |
+
Table 1. Description of the different dimensions considered.

Activity (D1): Covers physical movements, social platform activity, daily-life activities, etc.
Clinical Record (D2): Related to the patient profile; provides historical context and covers clinical tests, physician observations, treatment diary, schedules, etc.
Patient Profile (D3): Covers disease symptoms, education, work condition, economic and relationship status, family background, etc.
Physician Profile (D4): Describes a physician in terms of expertise, experience, etc.
Sensor Data (D5): Related to smartphone, body, and background sensors.
Social Posts (D6): The content of the posts a user makes on SNSs.
Social Profile (D7): The social media profile, an essential aspect of user personality.
|
| 89 |
+
|
| 90 |
+
Table 2. Comparison of FeatureOnto and the depression ontologies used in the literature.

Ontology | Main Classes | Dimensions Covered | Entities Source Considered | Availability & Re-usability
O1 | Depression, Symptom, Activity | D1, D2, D3, D4, D5, D6 | Literature | ...
O2 | Symptoms, Treatments, Life, Feelings | D1, D6 | SNSs | ...
O3 | Diagnostics, Subtypes, Risk Factors, Sign & Symptoms, Intervention | D6 | CPG, Literature, SNSs, FAQs | ...
O4 | Patient, Disease, Symptom | D2, D3 | Literature | ...
O5 | Patient, Doctor, Activity, Diagnosis, Treatment Diary | D1, D2, D3, D4 | General Scenario | ...
Our Approach | Patient, Symptom, Posts, User Profile, Feature | D2, D3, D6, D7 | Literature | Yes
|
| 115 |
+
|
| 116 |
+
§ 3 DESIGNING FEATUREONTO ONTOLOGY
|
| 117 |
+
|
| 118 |
+
The focus of the ontology development is to analyze social textual data and interpret the results produced by machine learning or deep learning models. Authors mainly focus on n-gram features of social media posts, but FeatureOnto also considers other features. We follow the 'Ontology Development 101' methodology for FeatureOnto development [Noy, N. F., & McGuinness, D. L. 2001], following an iterative process throughout the ontology lifecycle.
|
| 119 |
+
|
| 120 |
+
§ STEP 1. DETERMINING DOMAIN AND SCOPE OF THE ONTOLOGY
|
| 121 |
+
|
| 122 |
+
We create a list of competency questions to determine the ontology's domain and scope [Grüninger, M., & Fox, M. S. 1995]; the FeatureOnto ontology should be able to answer these questions, e.g., "What are the textual features of social media posts?" The ontology will be evaluated with these questions. Tables 3a and 3b provide samples of competency questions: the questions in 3a check the ontology schema, i.e., the ontology without any instances, whereas the questions in 3b are derived with the use case of depression monitoring of a social user in mind. The queries of Table 3b are out of scope for this paper, as we present only the schema here.
|
| 123 |
+
|
| 124 |
+
Table 3a. Schema-based competency questions.

1. Retrieve the labels of every subclass of the class Content.
2. Of which class is "Topics" a subclass?
3. What type of feature is "Anger"?
|
| 133 |
+
|
| 134 |
+
Table 3b. Knowledge-graph-based competency questions.

1. What is the sleeping pattern of a user/patient (the user can be normal or a patient)?
2. In which hours does the user message frequently?
3. How many posts have low valence in a week?
4. What is the emotional behavior pattern, taking a week as the unit?
5. What is the daily/weekly average frequency of negative emotions?
6. Compare the daily/weekly/overall average number of first-person pronouns with that of second/third-person pronouns.
7. What are the topics of interest for a depressed user?
8. Are anger-related words used frequently?
9. Find the pattern of psycholinguistic features.
|
| 171 |
+
|
| 172 |
+
§ STEP 2. RE-USING THE EXISTING ONTOLOGIES
|
| 173 |
+
|
| 174 |
+
We searched for available conceptual frameworks and ontologies on social data analysis at BioPortal [Musen, M. A. 2012], the OBO Foundry, and the LOD cloud. We also searched the web (Google Scholar, PubMed) and the literature for ontologies representing sentiment analysis, depression classification, or other social media analysis tasks, looking for the required concepts and relationships. Despite a comprehensive search, we could not find a suitable ontology that could be fully reused, but we found several ontologies from which one or more classes could be inherited; most inherited classes are given attributes as per our requirements. Table 4 shows our efforts toward implementing the reusability principle of the semantic web, and Figure 2, presented in the next section, gives a diagrammatic representation of the inherited entities: each schema is shown in a different color, and the solid and dotted lines show immediate and remote child-parent relations between classes. Most inherited entities belong to Schema, MFOEM, and HORD, while APAONTO and Obo are the least inherited ontologies. We did not find suitable classes for UniGrams, BiGrams, Emoticon, and POSTags, so we represent these classes in our own schema.
|
| 175 |
+
|
| 176 |
+
Table 4. Entities and namespaces considered in FeatureOnto.

Entity | Sub-entities | Schema Selected | Available Schemas
Content | UniGrams, BiGrams, POSTags | ... | ...
Emoticon | ... | ... | ...
Emotion | Arousal, Positive, Negative | MFOEM | MFOEM, SIO, VEO
Emotion | Dominance | APAONTO | APAONTO, FB-CV
GenderType | ... | Schema | Schema, GND
Person | Patient | Schema | FOAF, Schema, Wikidata, DUL
Person | User | HORD | NCIT, SIO, HORD
Post | ... | HORD | HORD
Psycholinguistic | Anger, Anxiety, Sad | MFOEM | MFOEM, SIO, VEO, NCIT
Psycholinguistic | Pronoun | ... | ...
Symptoms | ... | Obo | NCIT, SYMP, RADLEX, Obo
Topic | ... | X | EDAM, ITO
|
| 219 |
+
|
| 220 |
+
§ STEP 3. EXTRACTING TERMS AND CONCEPTS

Keeping our use case in mind, we read the literature on detecting depression and mental disorders from social data using machine learning or lexicon-based approaches, and extract terms related to the features considered for classification. We found that different textual features are extracted and learned in the machine learning or deep learning training phase [Dalal, S., 2019, Dalal, S., & Jain, S. 2021], e.g., bigrams, unigrams, and positive or negative sentiment words. Table 4 shows the different entities and sub-entities present in the FeatureOnto ontology. It also lists the schemas available for each entity and the schema used for inheritance. We also search social networking data to extract additional terms. The extracted terms are used for describing the class concepts.
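As a minimal, hypothetical sketch (not the authors' extraction pipeline), the snippet below shows how UniGram and BiGram features of the kind modelled under the Content entity could be extracted from a post:

```python
import re

def extract_ngram_features(post):
    """Extract UniGram and BiGram features from a social media post,
    mirroring the Content sub-entities (UniGrams, BiGrams) of FeatureOnto."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return {
        "UniGrams": tokens,
        # Adjacent token pairs form the BiGram features.
        "BiGrams": [f"{a} {b}" for a, b in zip(tokens, tokens[1:])],
    }

features = extract_ngram_features("I feel hopeless and alone")
print(features["BiGrams"])  # ['i feel', 'feel hopeless', 'hopeless and', 'and alone']
```

POSTag features would additionally require a part-of-speech tagger; they are omitted here to keep the sketch dependency-free.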
§ STEP 4. DEVELOPING THE ONTOLOGY AND TERMINOLOGY

We defined the classes and the class hierarchy using a top-down approach. The ontology is developed with Protégé [Musen, M. A. 2015] and has been uploaded to BioPortal.
§ STEP 5. EVALUATING THE SCOPE OF THE ONTOLOGY

A set of competency questions is given in Tables 3a and 3b. For scope evaluation of the FeatureOnto, we consider the answers to SPARQL queries built from the questions in Table 3a. The results of these queries are discussed in the following sections.
§ 4 FEATUREONTO ONTOLOGY MODEL

Following the steps discussed in the previous section, we design the FeatureOnto ontology. A high-level view of FeatureOnto is given in Figure 1. The complete FeatureOnto structure (at the current stage) has five dimensions (Patient, Symptom, Posts, User Profile, and Feature), covered by the various classes in the figure; most of the entities in our ontology belong to the Social Post dimension. In Figure 1, which gives a conceptual schema of the proposed model, the solid and dotted lines represent the property and subclass relationships between two entities. FeatureOnto reuses existing ontologies to pursue the basic principles of ontology implementation. Figure 2 represents the terms inherited by FeatureOnto from available schemas: each schema is shown in a different color, and solid and dotted lines show immediate and remote child-parent relations between classes. Most inherited entities belong to Schema and MFOEM, while FOAF and APAONTO are the least inherited ontologies.
§ SCOPE EVALUATION OF THE FEATUREONTO.

Tables 3a and 3b present the competency questions related to the schema and the instances. This work concerns building the schema only, and hence we executed queries on the schema only. The queries below are built from the questions in Table 3a.

Question 1. Retrieve the labels of every subclass of sf:Content.

Query.

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX sf: <http://www.domain.com/your/namespace/>
SELECT ?subClass ?label WHERE { ?subClass rdfs:subClassOf sf:Content . ?subClass rdfs:label ?label . }
Results. POSTags, UniGrams, BiGrams

Question 2. "Topics" is a subclass of which class (find the immediate parent)?

Query.

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ns: <http://www.domain.com/your/namespace/>
SELECT ?superClass WHERE { ns:Topics rdfs:subClassOf ?superClass . }
Results. Feature.

Question 3. What type of feature is "Anger" (find all parents)?

Query.

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ns: <http://www.domain.com/your/namespace/>

SELECT ?superClass WHERE { ns:Anger rdfs:subClassOf* ?superClass . }
Results. Psycholinguistic, Feature.
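The three competency questions above can be sanity-checked against a toy in-memory version of the hierarchy. The sketch below simulates the rdfs:subClassOf and rdfs:subClassOf* semantics of the queries in plain Python (class names are taken from Table 4; a real evaluation would run the SPARQL queries against the ontology itself):

```python
# Toy fragment of the FeatureOnto class hierarchy (child -> immediate parent).
SUBCLASS_OF = {
    "UniGrams": "Content", "BiGrams": "Content", "POSTags": "Content",
    "Content": "Feature", "Topics": "Feature", "Psycholinguistic": "Feature",
    "Anger": "Psycholinguistic", "Anxiety": "Psycholinguistic", "Sad": "Psycholinguistic",
}

def subclasses(parent):
    """Question 1: immediate subclasses (rdfs:subClassOf)."""
    return sorted(c for c, p in SUBCLASS_OF.items() if p == parent)

def superclasses(cls):
    """Question 3: all transitive parents (rdfs:subClassOf*)."""
    parents = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        parents.append(cls)
    return parents

print(subclasses("Content"))   # ['BiGrams', 'POSTags', 'UniGrams']
print(SUBCLASS_OF["Topics"])   # Feature  (Question 2)
print(superclasses("Anger"))   # ['Psycholinguistic', 'Feature']
```

The outputs match the query results reported above.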

The ontology is still under construction, but a prototype is available at https://github.com/sumitnitkkr. For generality, we have not specified a namespace for our own entities here.
Figure 2. Classes Inherited from the available Ontologies.
§ CONCLUSION

We developed the FeatureOnto ontology to provide a taxonomy of the features of social media posts (the use case considered is mental health assessment and depression classification/monitoring). Posts carry a great deal of information about many aspects of their authors, and this information can be placed into different feature categories. These features are widely used in sentiment analysis, mental health assessment, event detection, user profiling, document classification, and other natural language and image processing tasks. The ontology will be used to create a personalized depression knowledge graph in the future; for this reason, it does not yet focus on concepts from clinical practice guidelines and the depression literature. We will also extend the ontology to include other depression-related concepts in the future.
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,163 @@
# Devising Mapping Interoperability with Mapping Translation

Ana Iglesias-Molina ${}^{1}$, Andrea Cimmino ${}^{1}$ and Oscar Corcho ${}^{1}$

${}^{1}$ Ontology Engineering Group, Universidad Politécnica de Madrid

## Abstract

Nowadays, Knowledge Graphs are extensively created using very different techniques, mapping languages among them. The wide variety of use cases, data peculiarities, and potential uses has had a substantial impact on how these languages have been created, extended, and applied. This situation is closely related to the global adoption of these languages and their associated tools. The large number of languages and compliant tools, and the frequent lack of information about how the two combine, leads users to turn to other techniques to construct Knowledge Graphs. Often, users choose to create their own ad hoc programming scripts that suit their needs. This choice is normally less reproducible and maintainable, which ultimately affects the quality of the generated RDF data, particularly in long-term scenarios. With mapping translation, we devise an enhancement to the interoperability of existing mapping languages. This position paper analyses the possible language translation approaches, presents the scenarios in which mapping translation is being applied, and discusses how it can be implemented.

## Keywords

Mapping languages, Ontology Description, Mapping Translation

## 1. Introduction

Knowledge Graphs (KG) are increasingly used in academia and industry to represent and manage the increasing amount of data on the Web [1]. A large number of techniques to create KGs have been proposed. These techniques follow, broadly, two approaches: RDF materialization, which consists of translating data from one or more heterogeneous sources into RDF; or virtualization (Ontology Based Data Access) [2], which consists of translating a SPARQL query into one or more equivalent queries that are distributed and executed over the original data source(s), whose results are then transformed back into the SPARQL results format [3]. Both approaches rely on an essential element, a mapping document, which is the key enabler for performing the translations.
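As an illustration of the materialization approach, the sketch below applies toy mapping rules (a subject IRI template plus a predicate per source column) to non-RDF rows and emits RDF-like triples. All IRIs and names are illustrative, not taken from any concrete mapping language:

```python
# Toy materialization: source rows + mapping rules -> triples.
rows = [{"id": "1", "name": "Alice"}, {"id": "2", "name": "Bob"}]

# Each rule: (subject IRI template, predicate IRI, source column). Illustrative only.
RULES = [("http://example.org/person/{id}", "http://xmlns.com/foaf/0.1/name", "name")]

def materialize(rows, rules):
    """Translate every source row into triples according to the mapping rules."""
    return [
        (subj.format(**row), pred, row[col])
        for row in rows
        for subj, pred, col in rules
    ]

for triple in materialize(rows, RULES):
    print(triple)
```

A virtualization engine would instead rewrite an incoming SPARQL query against the same rules, leaving the source data in place.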

Mapping languages represent the relationships between the structure or model of heterogeneous data and an RDF version following an ontology, i.e., the rules on how to translate non-RDF data into RDF. This data can be originally expressed in a variety of formats, such as tabular, JSON, or XML. Due to the heterogeneous nature of data, the wide corpus of techniques, and the specific requirements that some scenarios may impose, an increasing number of mapping languages have been proposed [4, 5]. The differences among them are usually based on three aspects: (a) the focus on one or more particular data formats, e.g., the W3C Recommendation R2RML focuses on SQL tabular data [6]; (b) a specific feature addressed, e.g., SPARQL-Generate [7] allows the definition of functions in the mapping for cleaning or linking the generated RDF data; or (c) whether they are designed for a particular technique or scenario with special requirements, e.g., the WoT-mappings [8], which were designed as an extension of the WoT standard [9].

---

Third International Workshop On Knowledge Graph Construction, Co-located with the ESWC 2022, Crete - 30th May 2022

ana.iglesiasm@upm.es (A. Iglesias-Molina); andreajesus.cimmino@upm.es (A. Cimmino); oscar.corcho@upm.es (O. Corcho)

ORCID: 0000-0001-5375-8024 (A. Iglesias-Molina); 0000-0002-1823-4484 (A. Cimmino); 0000-0002-9260-0753 (O. Corcho). Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org).

---

As a result, the diversity of mapping languages allows the construction of KGs from heterogeneous data sources in many different scenarios. Current mapping languages may be categorized by their schema: RDF-based (e.g., R2RML [6] and its extensions, CSVW [10]), SPARQL-based (e.g., SPARQL-Generate [7], SPARQL-Anything [11]), or based on other schemas (e.g., ShExML [12], Helio mappings${}^{1}$). Nevertheless, existing techniques usually implement just one mapping language, and sometimes not even the whole language specification [13]. Deciding which language and technique should be used in each scenario becomes a costly task, since the choice of one language may not cover all needed requirements [14]. Some scenarios require a combination of mapping languages because of their differential features, which entails using different techniques. In many cases, this diversity leads to ad hoc solutions that reduce reproducibility, maintainability, and reusability [15].

The increasing and heterogeneous emergence of new use cases still motivates the community to keep developing solutions that are, more commonly than desired, not compatible with existing ones. This position paper develops the concept of mapping translation, proposed by Corcho et al. [16], which can enhance interoperability among existing mapping languages and thus improve the user experience of these technologies by enabling communication and understanding among them. This paper presents some approaches for language translation, shows the current situations in which mapping translation is being applied and their benefits, and proposes different techniques to extend it to more languages.

The remainder of this article is structured as follows: Section 2 provides some insights about language translation and the situations in which it is being applied. Section 3 proposes three different techniques to address mapping translation at a larger scale. Finally, Section 4 draws some conclusions from the concepts presented in the paper.

## 2. Mapping translation: Context

In this section, we introduce mapping translation, describing some approaches to language translation, and present a set of scenarios in which mapping translation has been applied. We assume the reader is familiar with current mapping languages and their general characteristics.

---

${}^{1}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv--xml-and-json-files-using-rml

---



Figure 1: Types of language translations (Adapted from [18]).

### 2.1. Approaches to language translation


In the context of language translation, there are several approaches that carry out translations among a set of languages. Depending on the situation at hand, one approach can be advantageous with respect to the others. We highlight the following [17]:

Peer-to-peer translation (Fig. 1a) supports ad hoc translation solutions between pairs of languages. This may seem the most straightforward approach, requiring the development of only the translator services needed for the situation at hand, with the possibility of adjusting each one ad hoc. However, it becomes decreasingly feasible as the number of required translations increases.

Common interchange language (Fig. 1b) uses a language that serves as an intermediary among several languages. This approach reduces the number of translator services that need to be developed and is the most scalable of the three as the number of languages grows. It involves creating (or, with luck, already having) a language able to represent the expressiveness of all languages, to avoid information loss. Additionally, this implies that there are common patterns shared by the languages independently of their representation, and that an abstract manner of gathering them is possible, which may not be the case for highly heterogeneous languages.

Family of languages (Fig. 1c) considers sets of languages and translations between the representatives of each set. This approach stands out in situations where there are clear subgroups of languages that are similar among themselves but different from the languages in other groups.
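The scalability difference between the first two approaches can be made concrete by counting the unidirectional translator services each one requires for n languages (a back-of-the-envelope sketch; the count for the family-of-languages approach depends on how the groups are formed):

```python
def translators_needed(n):
    """Unidirectional translator services required for n mapping languages."""
    return {
        "peer-to-peer": n * (n - 1),   # one translator per ordered pair of languages
        "common-interchange": 2 * n,   # one to and one from the pivot language
    }

for n in (3, 5, 10):
    print(n, translators_needed(n))
# For 10 languages: 90 peer-to-peer translators vs. 20 with a pivot language.
```

The quadratic growth of the peer-to-peer approach is what makes it "decreasingly feasible" as languages accumulate.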

### 2.2. Mapping translation scenarios

Regarding mapping languages, there are currently some implementations that unidirectionally translate between pairs of mapping languages. ShExML and YARRRML, in their respective online editors ${}^{2,3}$, enable translation to RML. Another case is when tools implement RML/R2RML mapping translation into the language they are designed to parse; such is the case of Helio ${}^{4}$ and SPARQL-Generate ${}^{5}$, which translate from RML to their respective languages, and Ontop [19], which translates R2RML into its proprietary language, OBDA mappings [20]. These translations make it possible to extend the outreach of a tool, since they allow using it without the need to learn its specific language, instead using one that is widely adopted, such as R2RML or RML.

---

${}^{2}$ http://shexml.herminiogarcia.com/editor/

${}^{3}$ https://rml.io/yarrrml/matey/#

---

Another case we want to present is Mapeathor [21], a tool that takes mapping rules specified in spreadsheets and transforms them into a mapping in either R2RML, RML or YARRRML. It aims to lower the learning curve of those languages for new users and ease the mapping writing process. Finally, we remark on the case where tools provide a set of optimizations for the construction of RDF graphs by exploiting the translation of mapping rules; this is the case of Morph-CSV [22] and FunMap [23]. Morph-CSV first performs a transformation over the tabular data with RML+FnO mappings and CSVW annotations, and outputs a database and R2RML mappings ready to be processed by an R2RML-compliant tool. FunMap takes an RML+FnO mapping, performs the indicated transformation functions, outputs the parsed data, and generates a function-free RML mapping.

The approaches presented are mainly examples of peer-to-peer translation for specific uses. The exception is Mapeathor, which abstracts the rules of R2RML, RML and YARRRML into a spreadsheet-based representation, aligning with the common interchange language approach. Even though most of these translation examples involve R2RML or RML, there is no holistic, general translation framework.

## 3. Mapping translation: Techniques

This section presents three proposals to implement a mapping translator service general enough to enable translation among several languages. These proposals are, namely, (1) software-based, (2) construct query-based, and (3) executable mapping-based. These implementations can be applied to any of the language translation approaches presented in Section 2.1.

Software-based translation. It consists of an ad hoc software implementation for each pair of languages to perform bidirectional translations between them. As with any ad hoc solution, it benefits from adjusting specifically to each situation, with the (almost) unlimited possibilities that programming languages provide. This is the approach that all the situations presented in Section 2.2 have applied, although with unidirectional translations.

Construct query-based translation. This approach takes advantage of the SPARQL query language with CONSTRUCT queries, which return an RDF graph. These particular queries extract the data by matching the graph patterns of the query (the WHERE clause) and build the output graph based on a template (the CONSTRUCT clause). Since many languages are RDF-based, that is, they follow the schema of an ontology and are usually written in the Turtle syntax (e.g., R2RML and its extensions), this approach is applicable to them. It benefits from relying on a well-established standard, as SPARQL is nowadays, and its compliant engines. However, it would leave out languages with other schemas, such as ShExML and the SPARQL-based ones, without falling back on software-based solutions.
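The idea can be sketched even without a SPARQL engine: the function below mimics what a CONSTRUCT query does, matching a triple pattern (the WHERE part) and re-emitting the bindings under another vocabulary (the CONSTRUCT template). The target vocabulary IRI is hypothetical:

```python
RR = "http://www.w3.org/ns/r2rml#"      # R2RML vocabulary
EX = "http://example.org/other-lang#"   # hypothetical target mapping vocabulary

# A toy R2RML fragment as (subject, predicate, object) triples.
source = [
    ("_:tm1", RR + "logicalTable", "_:lt1"),
    ("_:lt1", RR + "tableName", "STUDENT"),
]

def translate(triples):
    """CONSTRUCT-style rewrite: WHERE { ?s rr:tableName ?o } -> { ?s ex:source ?o }."""
    return [(s, EX + "source", o) for s, p, o in triples if p == RR + "tableName"]

print(translate(source))  # [('_:lt1', 'http://example.org/other-lang#source', 'STUDENT')]
```

A real implementation would hand the equivalent CONSTRUCT query to a SPARQL engine over the parsed mapping graph; this sketch only illustrates the match-and-rebuild pattern.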

---

${}^{4}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv-xml-and-json-files-using-rml

${}^{5}$ https://github.com/sparql-generate/rml-to-sparql-generate

---

Executable mapping-based translation. This last approach makes use of executable mappings, automatically generated from ontology alignments, to perform data translation between two ontologies [24]. Similarly to the previous approach, this one also makes use of SPARQL construct queries in the executable mappings. While the previous one relied on manual effort to build queries, this one takes advantage of the ontologies that define RDF-based mapping languages. In addition to the benefits and setbacks of the previous approach, this one may be hindered by the language constructs used to build mappings. That is to say, single one-to-one correspondences of ontology entities may not be enough to capture and translate their expressiveness and capabilities, especially for considerably different languages.

The techniques proposed are presented in decreasing order of the manual effort required. The first one is completely ad hoc, and even though it could reuse some modules of the solutions presented in Section 2.2, many more would be needed to provide a complete set of bidirectional translations covering a good number of languages. The second one requires considerable effort to build queries for RDF-based languages, assuming no extra help from software implementations is needed. The third one could ideally be fully automated, from ontology alignment creation to executable mapping generation. However, the success rate of this approach without manual intervention is not expected to be high, especially for the ontology alignment part when the input ontologies differ considerably from one another or present different constructs (with different numbers of elements or different structures).

## 4. Conclusions

This paper develops the concept of mapping translation, proposed by Corcho et al. [16]. It analyses the possible language translation approaches, updates the scenarios in which it is being applied, and proposes some implementation techniques to perform it.

There are several possibilities for fully developing a complete solution that achieves mapping translation while ensuring information preservation, as described in the previous sections. It not only requires choosing the technical implementation according to the available efforts and resources, but, more importantly, it involves wisely deciding on the language translation approach that best suits this particular case of mapping languages. As presented previously, we categorize current mapping languages by their schema: RDF-based, SPARQL-based, and based on other schemas. All of them have been designed for one basic purpose: describing non-RDF data to allow either materialization or virtualization. Intuitively, we can assume that the rules that the different mappings express can be represented in an abstract, language-independent manner. However, the sometimes large differences among these languages may question this assumption. Some languages, within their categories, are similar to each other, R2RML and its extensions, for instance. Languages from different groups can be related, such as ShExML and RML, despite some inevitable differences in their features. Others are more unique, such as CSVW. Lastly, the SPARQL-based group is more isolated from the others due to the great possibilities that relying on SPARQL provides. This scenario poses challenges for every language translation approach. Peer-to-peer translation would require a substantial amount of effort for divergent languages. Using families of languages would improve on the previous approach, but it would still face several challenges in language representation and in the number of translator services required. Meanwhile, using a common interchange language would reduce efforts the most, but there is no absolute certainty that a common interchange language can represent them all. Still, some steps have been taken to draft this language ${}^{6}$, with the base idea that mapping rules can be abstracted and represented in an ontology-based language.

Even though it is not an easy task, mapping translation is a concept that can only benefit the current landscape of heterogeneous mapping languages. After years of KG construction, the increasing and heterogeneous emergence of new use cases still motivates the community to keep developing solutions, sometimes ad hoc, sometimes as extensions of standards or widely used languages. Mapping translation has the potential to build bridges between past (but still used) and new solutions to improve interoperability.

## Acknowledgments

The work presented in this paper is partially funded by Knowledge Spaces project (Grant PID2020-118274RB-I00 funded by MCIN/AEI/ 10.13039/501100011033); and partially funded by the European Union's Horizon 2020 Research and Innovation Programme through the AURORAL project, Grant Agreement No. 101016854.

## References

[1] A. Hogan, E. Blomqvist, M. Cochez, C. d'Amato, G. D. Melo, C. Gutierrez, S. Kirrane, J. E. L. Gayo, R. Navigli, S. Neumaier, et al., Knowledge graphs, ACM Computing Surveys (CSUR) 54 (2021) 1-37.

[2] A. Poggi, D. Lembo, D. Calvanese, G. De Giacomo, M. Lenzerini, R. Rosati, Linking data to ontologies, Journal on Data Semantics X (2008) 133-173.

[3] A. Chebotko, S. Lu, F. Fotouhi, Semantics preserving SPARQL-to-SQL translation, Data & Knowledge Engineering 68 (2009) 973-1000.

[4] A. Dimou, M. V. Sande, P. Colpaert, R. Verborgh, E. Mannens, R. Van De Walle, RML: A generic language for integrated RDF mappings of heterogeneous data, in: LDOW, 2014.

[5] ShExML: improving the usability of heterogeneous data mapping languages for first-time users, PeerJ Computer Science 6 (2020) e318. URL: https://peerj.com/articles/cs-318.

[6] S. Das, S. Sundara, R. Cyganiak, R2RML: RDB to RDF Mapping Language, W3C Recommendation 27 September 2012, www.w3.org/TR/r2rml (2012).

[7] M. Lefrançois, A. Zimmermann, N. Bakerally, A SPARQL extension for generating RDF from heterogeneous formats, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10249 LNCS (2017) 35-50.

---

${}^{6}$ https://oeg-upm.github.io/Conceptual-Mapping/index.html

---

[8] A. Cimmino, M. Poveda-Villalón, R. García-Castro, eWoT: A semantic interoperability approach for heterogeneous IoT ecosystems based on the Web of Things, Sensors 20 (2020) 822.

[9] M. Kovatsch, R. Matsukura, M. Lagally, T. Kawaguchi, K. Kajimoto, Web of Things (WoT) Architecture, W3C Recommendation 9 April 2020, https://www.w3.org/TR/wot-architecture/ (2020).

[10] J. Tennison, G. Kellogg, I. Herman, Model for tabular data and metadata on the web, W3C Recommendation (2015).

[11] E. Daga, L. Asprino, P. Mulholland, A. Gangemi, Facade-X: an opinionated approach to SPARQL Anything, arXiv preprint arXiv:2106.02361 (2021).

[12] H. García-González, A ShExML perspective on mapping challenges: already solved ones, language modifications and future required actions, in: Proceedings of the 2nd International Workshop on Knowledge Graph Construction, 2021.

[13] D. Chaves-Fraga, F. Priyatna, A. Cimmino, J. Toledo, E. Ruckhaus, O. Corcho, GTFS-Madrid-Bench: A benchmark for virtual knowledge graph access in the transport domain, Journal of Web Semantics 65 (2020) 100596.

[14] B. De Meester, W. Maroy, A. Dimou, R. Verborgh, E. Mannens, Declarative data transformations for linked data generation: The case of DBpedia, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10250 LNCS (2017) 33-48.

[15] A. Iglesias-Molina, D. Chaves-Fraga, F. Priyatna, O. Corcho, Enhancing the maintainability of the Bio2RDF project using declarative mappings, in: SWAT4HCLS, 2019.

[16] O. Corcho, F. Priyatna, D. Chaves-Fraga, Towards a new generation of ontology based data access, Semantic Web 11 (2020) 153-160.

[17] J. Euzenat, H. Stuckenschmidt, The 'family of languages' approach to semantic interoperability, Knowledge Transformation for the Semantic Web 95 (2003) 49.

[18] O. Corcho, A. Gómez-Pérez, A layered approach to ontology translation with knowledge representation, Ph.D. thesis, UPM, 2004.

[19] D. Calvanese, B. Cogrel, S. Komla-Ebri, R. Kontchakov, D. Lanti, M. Rezk, M. Rodriguez-Muro, G. Xiao, Ontop: Answering SPARQL queries over relational databases, Semantic Web 8 (2017) 471-487.

[20] M. Rodriguez-Muro, M. Rezk, Efficient SPARQL-to-SQL with R2RML mappings, Journal of Web Semantics 33 (2015) 141-169.

[21] A. Iglesias-Molina, L. Pozo-Gilo, D. Doña, E. Ruckhaus, D. Chaves-Fraga, O. Corcho, Mapeathor: Simplifying the specification of declarative rules for knowledge graph construction, in: ISWC (Demos/Industry), 2020.

[22] D. Chaves-Fraga, E. Ruckhaus, F. Priyatna, M.-E. Vidal, O. Corcho, Enhancing virtual ontology based access over tabular data with Morph-CSV, Semantic Web (2021) 1-34.

[23] S. Jozashoori, D. Chaves-Fraga, E. Iglesias, M.-E. Vidal, O. Corcho, FunMap: Efficient execution of functional mappings for knowledge graph creation, in: International Semantic Web Conference, Springer, 2020, pp. 276-293.

[24] C. R. Rivero, I. Hernández, D. Ruiz, R. Corchuelo, Generating SPARQL executable mappings to integrate ontologies, in: International Conference on Conceptual Modeling, Springer, 2011, pp. 118-131.
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,91 @@
§ DEVISING MAPPING INTEROPERABILITY WITH MAPPING TRANSLATION

Ana Iglesias-Molina ${}^{1}$, Andrea Cimmino ${}^{1}$ and Oscar Corcho ${}^{1}$

${}^{1}$ Ontology Engineering Group, Universidad Politécnica de Madrid

§ ABSTRACT

Nowadays, Knowledge Graphs are extensively created using very different techniques, mapping languages among them. The wide variety of use cases, data peculiarities, and potential uses has had a substantial impact on how these languages have been created, extended, and applied. This situation is closely related to the global adoption of these languages and their associated tools. The large number of languages and compliant tools, and the frequent lack of information about how the two combine, leads users to turn to other techniques to construct Knowledge Graphs. Often, users choose to create their own ad hoc programming scripts that suit their needs. This choice is normally less reproducible and maintainable, which ultimately affects the quality of the generated RDF data, particularly in long-term scenarios. With mapping translation, we devise an enhancement to the interoperability of existing mapping languages. This position paper analyses the possible language translation approaches, presents the scenarios in which mapping translation is being applied, and discusses how it can be implemented.

§ KEYWORDS

Mapping languages, Ontology Description, Mapping Translation

§ 1. INTRODUCTION

Knowledge Graphs (KG) are increasingly used in academia and industry to represent and manage the increasing amount of data on the Web [1]. A large number of techniques to create KGs have been proposed. These techniques follow, broadly, two approaches: RDF materialization, which consists of translating data from one or more heterogeneous sources into RDF; or virtualization (Ontology Based Data Access) [2], which consists of translating a SPARQL query into one or more equivalent queries that are distributed and executed over the original data source(s), whose results are then transformed back into the SPARQL results format [3]. Both approaches rely on an essential element, a mapping document, which is the key enabler for performing the translations.
|
| 18 |
+
|
| 19 |
+
Mapping languages represent the relationships between the structure or model of heterogeneous data and an RDF version following an ontology, i.e., the rules on how to translate non-RDF data into RDF. This data can be originally expressed in a variety of formats, such as tabular, JSON, or XML. Due to the heterogeneous nature of data, the wide corpus of techniques, and the specific requirements that some scenarios may impose, an increasing number of mapping languages have been proposed [4, 5]. The differences among them are usually based on three aspects: (a) the focus on one or more particular data formats, e.g., the W3C Recommendation R2RML focuses on SQL tabular data [6]; (b) a specific feature they address, e.g., SPARQL-Generate [7] allows the definition of functions in the mapping for cleaning or linking the generated RDF data; or (c) whether they are designed for a particular technique or scenario with special requirements, e.g., the WoT-mappings [8], which were designed as an extension of the WoT standard [9].
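To make these mapping rules concrete, the following minimal R2RML sketch (table, column, and vocabulary names are illustrative, not from the paper) turns rows of a SQL table into RDF resources:

```turtle
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.com/ns#> .

# Illustrative mapping: each row of table PERSON becomes an ex:Person resource.
<#PersonMap> a rr:TriplesMap ;
    rr:logicalTable [ rr:tableName "PERSON" ] ;
    rr:subjectMap [ rr:template "http://example.com/person/{ID}" ;
                    rr:class ex:Person ] ;
    rr:predicateObjectMap [ rr:predicate ex:name ;
                            rr:objectMap [ rr:column "NAME" ] ] .
```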
Third International Workshop on Knowledge Graph Construction, co-located with ESWC 2022, Crete, 30th May 2022

ana.iglesiasm@upm.es (A. Iglesias-Molina); andreajesus.cimmino@upm.es (A. Cimmino); oscar.corcho@upm.es (O. Corcho)

ORCID: 0000-0001-5375-8024 (A. Iglesias-Molina); 0000-0002-1823-4484 (A. Cimmino); 0000-0002-9260-0753 (O. Corcho)
© Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
As a result, the diversity of mapping languages allows the construction of KGs from heterogeneous data sources in many different scenarios. Current mapping languages may be categorized by their schema: RDF-based (e.g., R2RML [6] and its extensions, CSVW [10]), SPARQL-based (e.g., SPARQL-Generate [7], SPARQL-Anything [11]), or based on other schemas (e.g., ShExML [12], Helio mappings${}^{1}$). Nevertheless, existing techniques usually implement just one mapping language, and sometimes not even the whole language specification [13]. Deciding which language and technique should be used in each scenario becomes a costly task, since the choice of one language may not cover all needed requirements [14]. Some scenarios require a combination of mapping languages because of their differential features, which entails using different techniques. In many cases, this diversity leads to ad hoc solutions that reduce reproducibility, maintainability, and reusability [15].
The increasing and heterogeneous emergence of new use cases still motivates the community to keep developing solutions that are, more often than desired, not compatible with existing ones. This position paper develops the concept of mapping translation, proposed by Corcho et al. [16], which can enhance the interoperability among existing mapping languages and thus improve the user experience of these technologies by allowing communication and understanding among them. This paper presents some approaches for language translation, shows the current situations in which mapping translation is being applied and their benefits, and proposes different techniques to extend it to more languages.
The remainder of this article is structured as follows: Section 2 provides some insights into language translation and the situations in which it is being applied. Section 3 proposes three different techniques to address mapping translation at a larger scale. Finally, Section 4 draws some conclusions from the concepts presented in the paper.
§ 2. MAPPING TRANSLATION: CONTEXT
In this section, we introduce mapping translation, describing some approaches to language translation, and present a set of scenarios in which mapping translation has been applied. We assume the reader is familiar with current mapping languages and their general characteristics.
${}^{1}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv–xml-and-json-files-using-rml
Figure 1: Types of language translations (Adapted from [18]).

§ 2.1. APPROACHES TO LANGUAGE TRANSLATION
In the context of language translation, there are several approaches for carrying out translations among a set of languages. Depending on the situation at hand, one approach can be advantageous with respect to the others. We highlight the following [17]:
Peer-to-peer translation (Fig. 1a) supports ad hoc translation solutions between pairs of languages. This may seem the most straightforward approach, requiring the development of only the translator services needed for the situation at hand, with the possibility of adjusting each one ad hoc. However, it becomes decreasingly feasible as the number of required translations increases.
Common interchange language (Fig. 1b) uses a language that serves as an intermediary among several languages. This approach reduces the number of translator services that need to be developed, and it is the most feasible of the three to scale to many languages. It involves creating (or, with luck, already having) a language able to represent the expressiveness of all languages, to avoid information loss. Additionally, this implies that there are common patterns shared by the languages independently of their representation, and that an abstract manner of gathering them is possible, which may not hold for highly heterogeneous languages.
Family of languages (Fig. 1c) considers sets of languages and translations between the representatives of each set. This approach stands out in situations where there are clear subgroups of languages that are similar to each other but different from the languages of other groups.
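The scalability trade-off between these approaches can be sketched with a back-of-the-envelope count of directed translator services. The family formula below is our own accounting, not the paper's: it assumes family members translate to and from a representative, and representatives are connected peer-to-peer.

```python
def peer_to_peer(n):
    # every ordered pair of distinct languages needs its own translator
    return n * (n - 1)

def common_interchange(n):
    # each language only needs a translator to and from the interchange language
    return 2 * n

def families(sizes):
    # assumption: members translate to/from their family's representative,
    # and the representatives are connected peer-to-peer among themselves
    return sum(2 * (m - 1) for m in sizes) + len(sizes) * (len(sizes) - 1)

print(peer_to_peer(10))        # 90 directed translators for 10 languages
print(common_interchange(10))  # 20
print(families([4, 3, 3]))     # 20 for the same 10 languages in 3 families
```

The quadratic growth of the peer-to-peer count is exactly why it "becomes decreasingly feasible as the number of required translations increases".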
§ 2.2. MAPPING TRANSLATION SCENARIOS
Regarding mapping languages, there are currently some implementations that unidirectionally translate between pairs of mapping languages. ShExML and YARRRML, in their respective online editors ${}^{2,3}$, enable translation to RML. Another case is when tools implement RML/R2RML mapping translation into the language they are designed to parse; such is the case of Helio ${}^{4}$ and SPARQL-Generate ${}^{5}$, which translate from RML to their respective languages, and Ontop [19], which translates R2RML into its proprietary language, OBDA mappings [20]. These translations make it possible to extend the reach of these tools, since they can be used without the need to learn their specific language, using instead one that is widely adopted, such as R2RML or RML.
${}^{2}$ http://shexml.herminiogarcia.com/editor/

${}^{3}$ https://rml.io/yarrrml/matey/#
Another case we want to present is Mapeathor [21], a tool that takes mapping rules specified in spreadsheets and transforms them into a mapping in either R2RML, RML, or YARRRML. It aims to lower the learning curve of those languages for new users and ease the mapping writing process. Finally, we highlight the case where tools provide a set of optimizations for the construction of RDF graphs by exploiting the translation of mapping rules; this is the case of Morph-CSV [22] and FunMap [23]. Morph-CSV first performs a transformation over the tabular data with RML+FnO mappings and CSVW annotations, and outputs a database and R2RML mappings ready to be processed by an R2RML-compliant tool. FunMap takes an RML+FnO mapping, performs the transformation functions indicated, outputs the processed data, and generates a function-free RML mapping.
The approaches presented are mainly examples of peer-to-peer translation for specific uses. The exception is Mapeathor, which abstracts the rules of R2RML, RML, and YARRRML into a spreadsheet-based representation, aligning with the common interchange language approach. Even though most of these translation examples involve R2RML or RML, there is no holistic approach towards a general translation framework.
§ 3. MAPPING TRANSLATION: TECHNIQUES
This section presents three proposals for implementing a mapping translator service general enough to enable translation among several languages. These proposals are (1) software-based, (2) construct-query-based, and (3) executable-mapping-based. These implementations can be applied to any of the language translation approaches presented in Section 2.1.
Software-based translation. This consists of an ad hoc software implementation for each pair of languages that performs bidirectional translations between them. As with any ad hoc solution, it benefits from adjusting specifically to each situation, with the (almost) unlimited possibilities that programming languages provide. This is the approach that all the situations presented in Section 2.2 have applied, although with unidirectional translations.
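A minimal sketch of what such a software-based service could look like (function and translator names are illustrative, not from any of the cited tools): a registry keyed by (source, target) language pairs, where each supported pair contributes its own translator function.

```python
# Hypothetical registry of peer-to-peer translators, keyed by language pair.
translators = {}

def register(src, dst):
    """Decorator that registers a translator for one (source, target) pair."""
    def wrap(fn):
        translators[(src, dst)] = fn
        return fn
    return wrap

@register("YARRRML", "RML")
def yarrrml_to_rml(doc: str) -> str:
    # a real implementation would parse the YARRRML rules and emit RML triples
    return "# RML mapping translated from YARRRML\n" + doc

def translate(doc: str, src: str, dst: str) -> str:
    if (src, dst) not in translators:
        raise ValueError(f"no translator registered for {src} -> {dst}")
    return translators[(src, dst)](doc)
```

Every new language pair adds another entry to the registry, which is exactly why this approach scales poorly as the number of supported languages grows.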
Construct query-based translation. This approach takes advantage of the SPARQL query language with CONSTRUCT queries, which return an RDF graph. These queries extract the data by matching the graph patterns of the query (in the WHERE clause) and build the output graph based on a template (in the CONSTRUCT clause). Since many languages are RDF-based, that is, they follow the schema of an ontology and are usually written in the Turtle syntax (e.g., R2RML and its extensions), this approach is applicable to them. It benefits from relying on a well-established standard, as SPARQL is nowadays, and its compliant engines. However, it would leave out languages with other schemas, such as ShExML and SPARQL-based languages, without relying on software-based solutions.
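As an illustration of this technique (a hedged sketch, not a query from the paper), a single CONSTRUCT query can rewrite one construct of R2RML into its RML counterpart; a full translator would consist of a set of such queries:

```sparql
PREFIX rr:  <http://www.w3.org/ns/r2rml#>
PREFIX rml: <http://semweb.mmlab.be/ns/rml#>

# Rewrite every R2RML column-based object map into its RML counterpart:
# rr:column and rml:reference play the same role in their respective languages.
CONSTRUCT {
  ?om rml:reference ?col .
}
WHERE {
  ?om rr:column ?col .
}
```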
${}^{4}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv-xml-and-json-files-using-rml

${}^{5}$ https://github.com/sparql-generate/rml-to-sparql-generate
Executable mapping-based translation. This last approach makes use of executable mappings automatically generated from ontology alignments to perform data translation between two ontologies [24]. Like the previous approach, this one also makes use of SPARQL CONSTRUCT queries in the executable mappings. While the previous approach relies on manual effort to build queries, this one takes advantage of the ontologies that define RDF-based mapping languages. In addition to the benefits and setbacks of the previous approach, this one may be hindered by the language constructs used to build mappings. That is to say, single one-to-one correspondences between ontology entities may not be enough to capture and translate their expressiveness and capabilities, especially for considerably different languages.
The proposed techniques are presented in decreasing order of required manual effort. The first one is completely ad hoc, and even though it could reuse some modules of the solutions presented in Section 2.2, many more would be needed to provide a complete set of bidirectional translations covering a good number of languages. The second one requires considerable effort to build queries for RDF-based languages, assuming no extra help from software implementations is needed. The third one could ideally be fully automated, from the creation of ontology alignments to the generation of executable mappings. However, the success rate of this approach without manual intervention is not expected to be high, especially for the ontology alignment step when the input ontologies differ considerably from one another or present different constructs (with a different number of elements or structured differently).

§ 4. CONCLUSIONS
This paper develops the concept of mapping translation, proposed by Corcho et al. [16]. It analyses the possible language translation approaches, updates the scenarios in which it is being applied, and proposes some implementation techniques to perform it.
There are several possibilities for developing a complete solution to mapping translation that ensures information preservation, as described in the previous sections. It not only requires choosing the technical implementation according to the available efforts and resources, but, more importantly, it involves carefully choosing the language translation approach that best suits this particular case of mapping languages. As presented previously, we categorize current mapping languages by their schema: RDF-based, SPARQL-based, and based on other schemas. All of them have been designed for a basic purpose: describing non-RDF data to allow either materialization or virtualization. Intuitively, we can assume that the rules that the different mappings create can be represented in an abstract, language-independent manner. However, the sometimes large differences among these languages may call this assumption into question. Some languages within their categories are similar to each other, for instance R2RML and its extensions. Languages from different groups can be related, such as ShExML and RML, despite some inevitable differences in their features. Others are more unique, such as CSVW. Lastly, the SPARQL-based group is more isolated from the others due to the great possibilities that relying on SPARQL provides. This scenario poses challenges for every language translation approach. Peer-to-peer translation would require a substantial amount of effort for divergent languages. Using families of languages would improve on the previous approach, but it would still face several challenges in language representation and in the number of translator services required. Meanwhile, a common interchange language would reduce effort the most, but there is no absolute certainty that a common interchange language could represent them all. Still, some steps have been taken to draft this language ${}^{6}$, with the base idea that mapping rules can be abstracted and represented in an ontology-based language.
Even though it is not an easy task, mapping translation is a concept that can only benefit the current landscape of heterogeneous mapping languages. After years of KG construction, the increasing and heterogeneous emergence of new use cases still motivates the community to keep developing solutions, sometimes ad hoc, sometimes as extensions of standards or widely used languages. Mapping translation has the potential to build bridges between past (but still used) and new solutions to improve interoperability.
§ ACKNOWLEDGMENTS
The work presented in this paper is partially funded by Knowledge Spaces project (Grant PID2020-118274RB-I00 funded by MCIN/AEI/ 10.13039/501100011033); and partially funded by the European Union's Horizon 2020 Research and Innovation Programme through the AURORAL project, Grant Agreement No. 101016854.
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_md/Initial_manuscript.md
# Implementation-independent Knowledge Graph Construction Workflows using FnO Composition

Gertjan De Mulder and Ben De Meester

IDLab, Department of Electronics and Information Systems,
Ghent University - imec, Technologiepark-Zwijnaarde 122, 9052 Ghent, Belgium
\{firstname.lastname\}@ugent.be
Abstract. Knowledge Graph construction is typically a task within larger workflows, with a tight coupling between the abstract workflow and its execution. Mapping languages increase the interoperability and reproducibility of the mapping process; however, this should be extended over the entire Knowledge Graph construction workflow. In this paper, we introduce an interoperable and reproducible solution for defining Knowledge Graph construction workflows leveraging Semantic Web technologies. We describe how a data flow workflow can be described interoperably (i.e., independently from the underlying technology stack) and reproducibly (i.e., with detailed provenance) by composing semantic abstract function descriptions, and how such a semantic workflow can be automatically executed across technology stacks. We demonstrate that composing functions using the Function Ontology allows for functional descriptions of entire workflows, automatically executable using a Function Ontology Handler implementation. The semantic descriptions allow for interoperable workflows, the alignment with P-PLAN and PROV-O allows for reproducibility, and the mapping to concrete implementations allows for automatic execution.
## 1 Introduction
Knowledge Graph (KG) construction - i.e., RDF graph construction - involves computational tasks on data, and is typically a task within larger (business or scientific) workflows. The construction of a KG itself can also be considered an overarching and more complex task that is composed of smaller tasks, e.g., extracting data from a database, mapping it to RDF, and publishing it using a web API (i.e., Extract-Transform-Load or ETL). Such a process - i.e., a set of tasks that can be automated - can be facilitated using a workflow system.
When a tight coupling between the abstract workflow and its execution exists, interoperability diminishes, and composing tasks into a workflow introduces challenges in connecting the tools that implement each task. Similar issues arise when integrating a KG construction task into a larger workflow, for example, connecting a mapping tool implemented in Java to a web API tool implemented in JavaScript.
Mapping languages increase the interoperability and reproducibility of the mapping process; however, this should be extended to the entire KG construction workflow. The lack of interoperability inhibits the use of different tools for a task, making it harder to adapt to changing requirements and constraints. For example, Tool A might initially suffice for the RDF-generation task given the size of the source data. Later on, the data size might become unmanageable for Tool A. A Tool B that can handle larger data sets is available; however, the lack of interoperability prevents the flexibility of switching from one tool to the other.
In this paper, we represent tasks within a workflow through the composition of implementation-independent semantic function descriptions. By providing interoperability between tasks and the tools that execute them, users can focus on the overarching task for which the workflow was created, for example, managing the KG construction life cycle using different mapping processors that generate RDF and different endpoints on which the RDF is published.
Section 2 presents related work. In Section 3, we show how interoperability between tasks and tools within a workflow can be achieved through the composition of declarative function descriptions. We showcase this in Section 4 by leveraging the Function Ontology (FnO) [7] to obtain a data flow workflow that is decoupled from the tools that are used, thereby illustrating the flexibility of choosing the technology to be used for each task. In Section 5, we demonstrate the resulting workflow composition in FnO. We conclude in Section 6 and give additional pointers for future work.
## 2 Related work
In this section, we discuss existing RDF graph construction workflows, and workflow systems' interoperability and reproducibility characteristics.
Compared to scripting, using a mapping language improves the interoperability of the KG construction process [6]. Mapping languages can provide features to cover many steps within the KG construction process, i.e., not only specifying how to map to RDF, but also how to extract data from different data sources [8], and how to publish using various methods [16]. Even when mapping languages provide enough features to be deemed end-to-end, executing a KG construction exists within a wider context, e.g., being part of a Knowledge Graph lifecycle [4], or as a collection of subtasks to allow for optimization [13]. As such, even though KG construction rules can be described interoperably using, e.g., a mapping language, its position within the wider and narrower tasks makes it interpretable as being (a part of) a workflow.
Flexible workflows are needed, as requirements and constraints are subject to change. Thus, interoperability is essential for tasks designed in one system to be used by another [14]. The state of the art puts forward the following characteristics for interoperability: 1) a declarative paradigm, 2) separation of description and implementation, and 3) a standardized language.
Statements within an imperative paradigm are exact instructions of what needs to be done and inherently define the control flow: the exact order in which a program must be executed. An imperative paradigm is suitable for processes that are unlikely to change, however, a declarative approach is recommended when workflows resemble processes with changing requirements and constraints that require them to be executed in different ways. Declarative paradigms can be used to represent data flow, i.e., the data dependencies between tasks, and are more robust to change as they describe what needs to be done, instead of how [1].
Interoperability diminishes when there is a tight coupling between tasks and implementations [12], e.g., when using ad hoc approaches. Thus, the separation of description and implementation is crucial to interoperability [15].
The use of standards is essential to achieve interoperability in heterogeneous environments. Several workflow specifications exist, and they can be divided into two groups. On the one hand, there are executable specifications, such as the Common Workflow Language (CWL); on the other hand, there are descriptive specifications, such as P-PLAN and the Open Provenance Model for Workflows (OPMW). CWL allows for describing a computational workflow and the command-line tools used for executing its tasks [3], with a tight coupling between tasks and implementations. P-PLAN extends the W3C standard PROV. It allows for describing workflow steps and linking them to execution traces, and was applied in projects that focus on interoperability [10] and reproducibility [11]. OPMW is an extension of P-PLAN [10]: a simple interchange format for representing workflows at different levels of granularity (i.e., abstract model, instances, executions). These specifications are either focused on being executable or on being descriptive. To the best of our knowledge, however, no specification exists that supports both.
The Function Ontology (FnO) [7] presents a similar approach towards interoperable data transformations using Semantic Web technologies. An implementation-independent function description allows for a decoupled architecture that separates the definition from its execution, and the inputs and outputs of a function are explicitly described. Furthermore, a recent update to FnO includes composition: composing a new function from other functions.
Reproducibility is another key characteristic of workflows, as it requires the tasks to be described in sufficient detail so that they can be reproduced in different environments [11]. In order to be reproducible by other scientists, provenance information including the execution details is required [2].
## 3 Method and Implementation
In this paper, we put forward our approach towards interoperable and reproducible workflows through implementation-independent and declarative descriptions, allowing the flexibility of tasks being implemented by different tools. We discussed several existing description languages for defining workflows. The complexity of a language increases with the constructs that it supports; however, simplicity often pays greater dividends when considering interoperability. In that regard, we decided to look for lightweight, yet flexible and interoperable, solutions.
The previous section shows that to have interoperable and reproducible workflows, we need a declarative paradigm that separates description from implementation in a standardized language, and that allows for generating provenance information for individual tasks. In this section, we elaborate on the decisions that were made to accommodate these characteristics.
We represent a workflow as a composition of tasks, and a task as a function which can have zero or more inputs and zero or more outputs. Being uniquely identifiable and unambiguously defined increases the reusability of tasks across workflows, as they are universally discoverable and linkable [7].
We make the simplification that tasks can only be executed sequentially and currently do not consider control flow constructs other than a sequence. The data flow between tasks within a composition is represented by input and output mappings between functions. Such a composition mapping describes how an input or output of one function is linked to the input or output of another function. For example, within a KG construction workflow this is needed to connect the output of an RDF generation task to the input of the subsequent publishing task.
We consider the Function Ontology (FnO) as a model to describe functions and function compositions to represent tasks and workflows. Its simple model aligns with our goal without preventing us from adding additional complexity, such as mapping to concrete implementations and composition of functions. Both additions are part of the Function Ontology specification ${}^{1}$.
The addition of composition to the FnO specification allows us to align function compositions with workflows as defined in P-PLAN [9], complementary to the existing alignment between FnO and PROV-O [5]. Several related works used or extended P-PLAN and led to the creation of several applications. Consequently, by aligning with P-PLAN we benefit from existing work that provides interoperability with several prominent workflow systems [10]. We use FnO because it allows for linking functions to actual implementations, hence, providing sufficient detail to be directly executed.
Therefore, by mapping the workflows defined as function compositions to workflow descriptions in P-PLAN, we can benefit from those applications, such as the workflow mining, browsing, and provenance visualization solutions discussed in [10].
The following shows how FnO and P-PLAN align, and Listing 1.1 shows how to construct P-PLAN descriptions from FnO compositions:

- fno:Execution is-a p-plan:Step
- fnoc:Composition is-a p-plan:Plan
- fno:Parameter is-a p-plan:Variable
- fno:Output is-a p-plan:Variable
- fno:expects is-a p-plan:isInputVarOf
- fno:returns is-a p-plan:isOutputVarOf
${}^{1}$ https://w3id.org/function/spec/
```sparql
PREFIX p-plan: <http://purl.org/net/p-plan#>
PREFIX fnoc: <https://w3id.org/function/vocabulary/composition#>
PREFIX fno: <https://w3id.org/function/ontology#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
CONSTRUCT {
  ?s a p-plan:Plan .
  ?exX a p-plan:Step ; p-plan:isStepOfPlan ?s .
  ?exY a p-plan:Step ; p-plan:isStepOfPlan ?s ; p-plan:isPrecededBy ?exX .
}
WHERE {
  ?s rdf:type fnoc:Composition ;
     fnoc:composedOf [ fnoc:mapFrom [ fnoc:constituentFunction ?fx ;
                                      fnoc:functionOutput ?fxOut ] ;
                       fnoc:mapTo [ fnoc:constituentFunction ?fy ;
                                    fnoc:functionParameter ?fyParameter ] ] .
  ?exX fno:executes ?fx .
  ?exY fno:executes ?fy .
}
```

Listing 1.1. Pseudo-SPARQL query for constructing the precedence relations in P-PLAN from the CompositionMappings in FnO.
## 4 Use case
In this section, we discuss POSH (Predictive Optimized Supply Chain): a motivating use case showcasing the need for an interoperable KG construction workflow.
POSH is an imec.icon research project that researches methods and software solutions that leverage data to optimize integrated procurement and inventory management strategies. A data integration and quality framework is deemed necessary to increase the accuracy and reliability of supply chain data collected from heterogeneous data sources (suppliers, customers, service providers, etc.). Within POSH, we developed a semantically-enhanced knowledge integration framework that uses various data repositories and external (meta)data to provide a clear overview of the current state of the supply chain and the necessary inputs for the prediction, optimization, and decision support methods.
To this end, a KG is generated from the heterogeneous supply chain data and subsequently exposed through a triple store endpoint. This enables our partners to take advantage of running queries against a uniform data model without being burdened by the heterogeneous sources from which it is constituted, and to focus on designing algorithms for optimizing the supply chain. However, not all data was made available from the start; rather, it was added progressively, and the requirements, together with the mapping rules that satisfy them, changed in parallel. Hence, the KG generation tasks need to be executed iteratively to incorporate the changes, which can become time-consuming when done manually. To iteratively accommodate changing requirements and constraints, an implementation-independent workflow system was needed. Within POSH, we applied our method to provide a workflow system flexible enough to adapt to different technology stacks.
## 5 Demonstration

In this section we demonstrate a working example of an ETL workflow comprising two tasks: i) generating RDF; and ii) publishing the generated RDF. Due to space restrictions, only excerpts of the descriptions are shown.

First, we define the task of generating RDF as a function that takes the URI of a mapping and the URI to which the result should be written. We make use of the RML mapping language to have an interoperable RDF generation step. Secondly, we define the publishing task as a function which takes the URI of the generated RDF data as input parameter and outputs a URI of the endpoint through which it is published. These descriptions are shown in Listing 1.2.
---

@prefix fno: <https://w3id.org/function/ontology#> .
@prefix fns: <http://example.com/functions#> .

fns:generateRDF a fno:Function ;
    fno:expects ( fns:fpathMappingParameter fns:fpathOutputParameter ) ;
    fno:returns ( fns:returnOutput ) .

fns:publish a fno:Function ;
    fno:expects ( fns:inputRDFParameter ) ;
    fno:returns ( fns:returnOutput ) .

Listing 1.2. Task descriptions in FnO

---
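A handler that executes such descriptions must bind each declared parameter before dispatching to a registered implementation. The following is a rough Python sketch of that idea only: it is not the function-handler-js API, and the registry structure and stand-in implementations are invented for illustration; only the FnO identifiers come from Listing 1.2.

```python
# Sketch: FnO-style declarations are kept separate from implementations,
# so a task can be re-bound to a different tool without touching the
# workflow description.

# Declarations mirroring Listing 1.2: expected parameters and the output.
DESCRIPTIONS = {
    "fns:generateRDF": {"expects": ["fns:fpathMappingParameter",
                                    "fns:fpathOutputParameter"],
                        "returns": "fns:returnOutput"},
    "fns:publish":     {"expects": ["fns:inputRDFParameter"],
                        "returns": "fns:returnOutput"},
}

# Implementations registered independently; either entry could be swapped
# for a different tool (e.g., another RML processor) without changing
# DESCRIPTIONS. Both bodies are trivial stand-ins.
IMPLEMENTATIONS = {
    "fns:generateRDF": lambda mapping_uri, out_uri: out_uri,
    "fns:publish":     lambda rdf_uri: f"http://endpoint.example/{rdf_uri}",
}

def execute(function_id, **bindings):
    """Check bindings against the declaration, then dispatch to the implementation."""
    desc = DESCRIPTIONS[function_id]
    args = [bindings[p] for p in desc["expects"]]  # KeyError if an input is missing
    return {desc["returns"]: IMPLEMENTATIONS[function_id](*args)}

out = execute("fns:generateRDF",
              **{"fns:fpathMappingParameter": "mapping.ttl",
                 "fns:fpathOutputParameter": "out.ttl"})
```

Because the declarations are data rather than code, validating a workflow (are all expected inputs bound?) needs no knowledge of the tools behind it.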
We describe an overarching ETL task as the composition of these two functions, illustrated in Listing 1.3. We define how the data flows between the composed functions using fnoc:CompositionMapping: the fnoc:Composition links the output of the first task to the input of the second task by means of a fnoc:CompositionMapping. Note that, using composition, we are able to describe the workflow at multiple levels of abstraction. In analogy with an ETL workflow, for example, the highest level of abstraction represents the three Extract, Transform, and Load tasks. The second level can contain more specific, yet still abstract, tasks that are required to fulfill each of the three Extract, Transform, and Load tasks. Depending on the complexity of each task, it can be described further at a lower level of abstraction.
---

@prefix fno: <https://w3id.org/function/ontology#> .
@prefix fnoc: <https://w3id.org/function/vocabulary/composition#> .
@prefix fns: <http://example.com/functions#> .

fns:ETL a fno:Function ;
    fno:expects ( fns:fpathMappingParameter fns:fpathOutputParameter ) ;
    fno:returns ( fns:returnOutput ) .

fns:ETLComposition a fnoc:Composition ;
    fnoc:composedOf
        [ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
                         fnoc:functionParameter fns:fpathMappingParameter ] ;
          fnoc:mapTo   [ fnoc:constituentFunction fns:generateRDF ;
                         fnoc:functionParameter fns:fpathMappingParameter ] ] ,
        [ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
                         fnoc:functionParameter fns:fpathOutputParameter ] ;
          fnoc:mapTo   [ fnoc:constituentFunction fns:generateRDF ;
                         fnoc:functionParameter fns:fpathOutputParameter ] ] ,
        [ fnoc:mapFrom [ fnoc:constituentFunction fns:generateRDF ;
                         fnoc:functionOutput fns:returnOutput ] ;
          fnoc:mapTo   [ fnoc:constituentFunction fns:publish ;
                         fnoc:functionParameter fns:inputRDFParameter ] ] ,
        [ fnoc:mapFrom [ fnoc:constituentFunction fns:publish ;
                         fnoc:functionOutput fns:returnOutput ] ;
          fnoc:mapTo   [ fnoc:constituentFunction fns:ETL ;
                         fnoc:functionOutput fns:returnOutput ] ] .

Listing 1.3. ETL Workflow description using FnO composition

---
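Since the composition mappings between constituent functions are data dependencies, an execution order can be derived from them instead of being stated explicitly. A hedged Python sketch of that derivation follows; the tuple encoding of the mappings is invented for illustration, and only the function names come from Listing 1.3.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each composition mapping from one constituent function's output to another
# constituent's parameter is a data dependency. Mappings to or from the
# composed function itself (fns:ETL) only bind the workflow's own inputs and
# outputs, so they create no ordering constraint between tasks.
mappings = [
    ("fns:ETL", "fns:generateRDF"),      # workflow input -> generateRDF parameter
    ("fns:ETL", "fns:generateRDF"),      # workflow input -> generateRDF parameter
    ("fns:generateRDF", "fns:publish"),  # generateRDF output -> publish parameter
    ("fns:publish", "fns:ETL"),          # publish output -> workflow output
]

composed = "fns:ETL"
deps = {}  # task -> set of tasks whose outputs it consumes
for src, dst in mappings:
    if composed in (src, dst):
        continue  # input/output binding of the composition itself
    deps.setdefault(dst, set()).add(src)
    deps.setdefault(src, set())

# Predecessors come first, mirroring p-plan:isPrecededBy in Listing 1.1.
order = list(TopologicalSorter(deps).static_order())
```

For this two-task workflow the derived order is simply generateRDF before publish, but the same derivation would hold for compositions with more constituent functions.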
We created a proof-of-concept Function Handler that automatically executes these descriptions using different implementations, available at https://github.com/FnOio/function-handler-js/tree/kgc-etl. Furthermore, we provide tests ${}^{2}$ in which we verify the execution sequence of a function composition, and demonstrate the interoperability through function compositions that resemble a KG construction workflow in which the RDF-generation task can be implemented by different tools.
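The kind of check those tests perform can be illustrated with a small, self-contained sketch. The real tests are written in TypeScript against the Function Handler API; here every name and URI is invented, and the two implementations are trivial stand-ins that only record when they run.

```python
# Record the order in which the task implementations run, then assert that
# generateRDF precedes publish, as required by the composition mappings.
calls = []

def generate_rdf(mapping_uri, out_uri):
    calls.append("generateRDF")
    return out_uri  # stand-in for an RML engine writing to out_uri

def publish(rdf_uri):
    calls.append("publish")
    return "http://endpoint.example/sparql"  # stand-in endpoint URI

def run_etl(mapping_uri, out_uri):
    """Execute the composition of Listing 1.3 sequentially."""
    rdf = generate_rdf(mapping_uri, out_uri)  # fns:generateRDF
    return publish(rdf)                       # fns:publish consumes its output

endpoint = run_etl("mapping.ttl", "out.ttl")
assert calls == ["generateRDF", "publish"]
```

Swapping `generate_rdf` for a different tool leaves the check unchanged, which is exactly the interoperability property the test suite demonstrates.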
## 6 Conclusion

Declarative function descriptions, and compositions thereof, allow us to define workflows that are decoupled from the execution environment. The explicit semantics allow for the unambiguous definition of inputs, outputs and implementations; hence, the functions that can be used to execute a task can be determined automatically. The alignment with PROV allows for a reproducible workflow, as both the tasks and the execution details are provided, which makes it possible to determine exactly which functions were applied throughout the execution of the workflow.

Defining a workflow through compositions allows for different levels of abstraction. When rapid prototyping is required, only high-level tasks need to be described. As requirements become more concrete, a high-level task can be described in greater detail as a composition of more fine-grained tasks.

These various levels of abstraction also allow for various levels of provenance information, and thus various levels of reproducibility. For example, at one end of the spectrum, a function can be implemented by a command-line tool: no provenance information is available about the transformations that have been applied to produce the output. At the other end of the spectrum, a task can be described as a (nested) composition of fine-grained functions: provenance information is available down to the level of atomic functions.

For future work, we see a mapping language as a way to describe compositions of transformation tasks. By representing, e.g., a Triples Map in RML as a composition of data and schema transformation tasks, we can provide insights into what a mapping does, and in what order. These insights could help to provide optimization strategies for such engines.
## References

1. van der Aalst, W.M.P., Pesic, M., Schonenberg, H.: Declarative workflows: Balancing between flexibility and support. Computer Science - Research and Development (2), 99-113 (2009)
---

${}^{2}$ https://github.com/FnOio/function-handler-js/blob/kgc-etl/src/FunctionHandler.test.ts

---
2. Barker, A., van Hemert, J.: Scientific workflow: A survey and research directions. In: Parallel Processing and Applied Mathematics. pp. 746-753 (2008)

3. Crusoe, M., Abeln, S., Iosup, A., Amstutz, P., Chilton, J., Tijanić, N., Ménager, H., Soiland-Reyes, S., Goble, C.: Methods included: Standardizing computational reuse and portability with the Common Workflow Language. arXiv.org pp. 1-11 (2021)

4. Şimşek, U., Angele, K., Kärle, E., Opdenplatz, J., Sommer, D., Umbrich, J., Fensel, D.: Knowledge Graph Lifecycle: Building and Maintaining Knowledge Graphs. In: Proceedings of the 2nd International Workshop on Knowledge Graph Construction co-located with the 18th Extended Semantic Web Conference (ESWC 2021) (2021)

5. De Meester, B., Dimou, A., Verborgh, R., Mannens, E.: Detailed Provenance Capture of Data Processing. In: Proceedings of the First Workshop on Enabling Open Semantic Science (SemSci). pp. 31-38 (2017)

6. De Meester, B., Heyvaert, P., Verborgh, R., Dimou, A.: Mapping language analysis of comparative characteristics. In: Joint Proceedings of the 1st International Workshop on Knowledge Graph Building and 1st International Workshop on Large Scale RDF Analytics co-located with the 16th Extended Semantic Web Conference (ESWC). pp. 37-45 (2019)

7. De Meester, B., Seymoens, T., Dimou, A., Verborgh, R.: Implementation-independent Function Reuse. Future Generation Computer Systems pp. 946-959 (2020)

8. Dimou, A., Verborgh, R., Sande, M.V., Mannens, E., de Walle, R.V.: Machine-interpretable dataset and service descriptions for heterogeneous data access and retrieval. In: Proceedings of the 11th International Conference on Semantic Systems - SEMANTICS '15 (2015)

9. Garijo, D., Gil, Y.: The P-PLAN Ontology. Tech. rep., Ontology Engineering Group (2014), http://purl.org/net/p-plan#

10. Garijo, D., Gil, Y., Corcho, O.: Towards workflow ecosystems through semantic and standard representations. In: 2014 9th Workshop on Workflows in Support of Large-Scale Science. pp. 94-104 (2014)

11. Gil, Y., Garijo, D., Knoblock, M., Deng, A., Adusumilli, R., Ratnakar, V., Mallick, P.: Improving Publication and Reproducibility of Computational Experiments through Workflow Abstractions. In: K-CAP Workshops (2017)

12. Goble, C., Cohen-Boulakia, S., Soiland-Reyes, S., Garijo, D., Gil, Y., Crusoe, M.R., Peters, K., Schober, D.: FAIR computational workflows. Data Intelligence (1-2), 108-121 (2020)

13. Jozashoori, S., Vidal, M.E.: MapSDI: A scaled-up semantic data integration framework for knowledge graph creation. In: On the Move to Meaningful Internet Systems: OTM 2019 Conferences. pp. 58-75 (2019)

14. Plankensteiner, K., Montagnat, J., Prodan, R.: IWIR: A language enabling portability across grid workflow systems. In: Proceedings of the 6th Workshop on Workflows in Support of Large-Scale Science - WORKS '11. pp. 97-106 (2011)

15. Ferreira da Silva, R., et al.: Workflows Community Summit: Bringing the Scientific Workflows Community Together. Tech. rep. (2021)

16. Van Assche, D., Haesendonck, G., De Mulder, G., Delva, T., Heyvaert, P., De Meester, B., Dimou, A.: Leveraging Web of Things W3C Recommendations for Knowledge Graphs Generation. In: Web Engineering. pp. 337-352. Lecture Notes in Computer Science, Springer (2021)
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,203 @@
§ IMPLEMENTATION-INDEPENDENT KNOWLEDGE GRAPH CONSTRUCTION WORKFLOWS USING FNO COMPOSITION

Gertjan De Mulder and Ben De Meester

IDLab, Department of Electronics and Information Systems,

Ghent University - imec, Technologiepark-Zwijnaarde 122, 9052 Ghent, Belgium

{firstname.lastname}@ugent.be

Abstract. Knowledge Graph construction is typically a task within larger workflows, with a tight coupling between the abstract workflow and its execution. Mapping languages increase the interoperability and reproducibility of the mapping process; however, this should be extended over the entire Knowledge Graph construction workflow. In this paper, we introduce an interoperable and reproducible solution for defining Knowledge Graph construction workflows leveraging Semantic Web technologies. We describe how a data flow workflow can be described interoperably (i.e., independently from the underlying technology stack) and reproducibly (i.e., with detailed provenance) by composing semantic abstract function descriptions, and how such a semantic workflow can be automatically executed across technology stacks. We demonstrate that composing functions using the Function Ontology allows for functional descriptions of entire workflows that are automatically executable using a Function Ontology Handler implementation. The semantic descriptions allow for interoperable workflows, the alignment with P-PLAN and PROV-O allows for reproducibility, and the mapping to concrete implementations allows for automatic execution.
§ 1 INTRODUCTION

Knowledge Graph (KG) construction - i.e., RDF graph construction - involves computational tasks on data, and is typically a task within larger (business or scientific) workflows. The construction of a KG itself can also be considered an overarching and more complex task that is composed of smaller tasks, e.g., extracting data from a database, mapping it to RDF, and publishing it using a web API (i.e., Extract-Transform-Load or ETL). Such a process - i.e., a set of tasks that can be automated - can be facilitated using a workflow system.

When a tight coupling between the abstract workflow and its execution exists, interoperability diminishes and composing tasks into a workflow introduces challenges in connecting the tools that implement each task. Similar issues arise when integrating a KG construction task into a larger workflow, for example, connecting a mapping tool implemented in Java with a web API tool implemented in JavaScript.

Mapping languages increase the interoperability and reproducibility of the mapping process; however, this should be extended to the entire KG construction workflow. The lack of interoperability inhibits the use of different tools for a task, making it harder to adapt to changing requirements and constraints. For example, Tool A might initially suffice for the RDF-generation task given the size of the source data. Later on, the data size might become unmanageable for Tool A. Tool B is available and can handle larger data sets, but the lack of interoperability prevents flexibly switching from one tool to the other.
In this paper, we represent tasks within a workflow through the composition of implementation-independent semantic function descriptions. By providing interoperability between tasks and the tools that execute them, users can focus on the overarching task for which the workflow was created, for example, managing the KG construction life cycle using different mapping processors that generate RDF, and different endpoints on which the RDF is published.

Section 2 presents related work. In Section 3, we show how interoperability between tasks and tools within a workflow can be achieved through the composition of declarative function descriptions. We showcase this in Section 4 by leveraging the Function Ontology (FnO) [7] to obtain a data flow workflow that is decoupled from the tools that are used, therefore illustrating the flexibility in choosing the technology to be used for each task. In Section 5, we demonstrate the resulting workflow composition in FnO. We conclude in Section 6 and give additional pointers for future work.
§ 2 RELATED WORK

In this section, we discuss existing RDF graph construction workflows, and workflow systems' interoperability and reproducibility characteristics.

Compared to scripting, using a mapping language improves the interoperability of the KG construction process [6]. Mapping languages can provide features to cover many steps within the KG construction process, i.e., not only specifying how to map to RDF, but also how to extract data from different data sources [8], and how to publish using various methods [16]. Even when mapping languages provide enough features to be deemed end-to-end, executing a KG construction exists within a wider context, e.g., being part of a Knowledge Graph Lifecycle [4], or as a collection of subtasks to allow for optimization [13]. As such, even though KG construction rules can be described interoperably using, e.g., a mapping language, their position within the wider and narrower tasks makes them interpretable as being (a part of) a workflow.

Flexible workflows are needed, as requirements and constraints are subject to change. Thus, interoperability is essential for tasks designed in one system to be used by another [14]. The state of the art puts forward the following characteristics for interoperability: 1) a declarative paradigm, 2) separation of description and implementation, and 3) a standardized language.
Statements within an imperative paradigm are exact instructions of what needs to be done and inherently define the control flow: the exact order in which a program must be executed. An imperative paradigm is suitable for processes that are unlikely to change; however, a declarative approach is recommended when workflows resemble processes with changing requirements and constraints that require them to be executed in different ways. Declarative paradigms can be used to represent data flow, i.e., the data dependencies between tasks, and are more robust to change as they describe what needs to be done instead of how [1].

Interoperability diminishes when there is a tight coupling between tasks and implementations [12], e.g., when using ad hoc approaches. Thus, the separation of description and implementation is crucial to interoperability [15].

The use of standards is essential to achieve interoperability in heterogeneous environments. Several workflow specifications exist, and they can be divided into two groups. On the one hand, there are executable specifications, such as the Common Workflow Language (CWL); on the other hand, there are descriptive specifications, such as P-PLAN and the Open Provenance Model for Workflows (OPMW). CWL allows for describing a computational workflow and the command-line tools used for executing its tasks [3], with a tight coupling between tasks and implementations. P-PLAN extends the W3C standard PROV. It allows for describing workflow steps and linking them to execution traces, and was applied in projects that focus on interoperability [10] and reproducibility [11]. OPMW is an extension of P-PLAN [10]: a simple interchange format for representing workflows at different levels of granularity (i.e., abstract models, instances, executions). These specifications are focused either on being executable or on being descriptive. To the best of our knowledge, however, no specification exists that supports both.

The Function Ontology (FnO) [7] presents a similar approach towards interoperable data transformations using Semantic Web technologies. An implementation-independent function description allows for a decoupled architecture that separates the definition from its execution, and the inputs and outputs of a function are explicitly described. Furthermore, a recent update to FnO includes composition: composing a new function from other functions.

Reproducibility is another key characteristic of workflows, as it requires the tasks to be described in sufficient detail so that they can be reproduced in different environments [11]. In order to be reproducible by other scientists, provenance information including the execution details is required [2].
§ 3 METHOD AND IMPLEMENTATION

In this paper we put forward our approach towards interoperable and reproducible workflows through implementation-independent and declarative descriptions, allowing the flexibility of tasks being implemented by different tools. We discussed several existing description languages for defining workflows. The complexity of a language increases with the constructs that it supports. However, it appears that simplicity often pays greater dividends when considering interoperability. In that regard, we decided to look for lightweight - yet flexible and interoperable - solutions.

The previous section shows that to have interoperable and reproducible workflows, we need a declarative paradigm that separates description from implementation in a standardized language, and allows for generating provenance information for individual tasks. In this section we elaborate on the decisions that were made to accommodate these characteristics.

We represent a workflow as a composition of tasks, and a task as a function which can have zero or more inputs and zero or more outputs. Being uniquely identifiable and unambiguously defined increases the reusability of tasks across workflows, as they are universally discoverable and linkable [7].

We make the simplification that tasks can only be executed sequentially and currently do not consider control flow constructs other than a sequence. The data flow between tasks within a composition is represented by input and output mappings between functions. Such a composition mapping describes how an input or output of one function is linked to the input or output of another function. For example, within a KG construction workflow this is needed to connect the output of an RDF generation task to the input of the subsequent publishing task.
We consider the Function Ontology (FnO) as a model to describe functions and function compositions to represent tasks and workflows. Its simple model aligns with our goal without preventing us from adding additional complexity such as mapping to concrete implementations and composition of functions. Both additions are part of the Function Ontology specification ${}^{1}$ .

The addition of composition to the FnO specification allows us to align function compositions with workflows as defined in P-PLAN [9], complementary to the existing alignment between FnO and PROV-O [5]. Several related works used or extended P-PLAN and led to the creation of several applications. Consequently, by aligning with P-PLAN we benefit from existing work that provides interoperability with several prominent workflow systems [10]. We use FnO because it allows for linking functions to actual implementations, hence providing sufficient detail to be directly executed.

Therefore, by mapping the workflows defined as function compositions to workflow descriptions in P-PLAN, we can benefit from those applications, such as the workflow mining, browsing, and provenance visualization solutions discussed in [10].

The following shows how FnO and P-PLAN align, and Listing 1.1 shows how to construct P-PLAN descriptions from FnO compositions:
* fno:Execution is-a p-plan:Step

* fnoc:Composition is-a p-plan:Plan

* fno:Parameter is-a p-plan:Variable

* fno:Output is-a p-plan:Variable

* fno:expects is-a p-plan:isInputVarOf

* fno:returns is-a p-plan:isOutputVarOf

${}^{1}$ https://w3id.org/function/spec/
PREFIX p-plan: <http://purl.org/net/p-plan#>
PREFIX fnoc: <https://w3id.org/function/vocabulary/composition#>
PREFIX fno: <https://w3id.org/function/ontology#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
CONSTRUCT {
    ?s a p-plan:Plan .
    ?exX a p-plan:Step ; p-plan:isStepOfPlan ?s .
    ?exY a p-plan:Step ; p-plan:isStepOfPlan ?s ; p-plan:isPrecededBy ?exX .
}
WHERE {
    ?s rdf:type fnoc:Composition ;
       fnoc:composedOf [ fnoc:mapFrom [ fnoc:constituentFunction ?fx ;
                                        fnoc:functionOutput ?fxOut ] ;
                         fnoc:mapTo   [ fnoc:constituentFunction ?fy ;
                                        fnoc:functionParameter ?fyParameter ] ] .
    ?exX fno:executes ?fx .
    ?exY fno:executes ?fy .
}

Listing 1.1. Pseudo-SPARQL query for constructing the precedence relations in P-PLAN from the CompositionMappings in FnO.
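Because the class alignment consists of is-a statements, rewriting the typing triples of an FnO workflow into P-PLAN terms is a direct substitution. The sketch below illustrates this with triples as plain Python tuples; the class names come from the alignment above, while the triple encoding and the example resources are invented (precedence relations and the property alignments are handled separately, e.g., by the CONSTRUCT query of Listing 1.1).

```python
# Class alignment from the list above: FnO/FnO-composition classes are
# subclasses of their P-PLAN counterparts.
CLASS_ALIGNMENT = {
    "fno:Execution":    "p-plan:Step",
    "fnoc:Composition": "p-plan:Plan",
    "fno:Parameter":    "p-plan:Variable",
    "fno:Output":       "p-plan:Variable",
}

def to_pplan(triples):
    """Map rdf:type objects through the class alignment; other triples pass through."""
    return [(s, p, CLASS_ALIGNMENT.get(o, o)) if p == "rdf:type" else (s, p, o)
            for s, p, o in triples]

pplan = to_pplan([("fns:ETLComposition", "rdf:type", "fnoc:Composition"),
                  ("ex:exec1", "rdf:type", "fno:Execution")])
# The composition is now typed as a p-plan:Plan and the execution as a p-plan:Step.
```

Running such a rewrite (or the SPARQL query) over an FnO workflow yields a P-PLAN description that existing P-PLAN tooling can consume unchanged.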
|
| 114 |
+
|
| 115 |
+
§ 4 USE CASE
|
| 116 |
+
|
| 117 |
+
In this section we discuss POSH (Predictive Optimized Supply Chain): a motivating use case showcasing the need for an interoperable KG construction work-flow.
|
| 118 |
+
|
| 119 |
+
POSH is an imec.icon research project in which methods and software solutions are researched that leverage data to optimize integrated procurement and inventory management strategies. A data integration and quality framework is deemed necessary to increase the accuracy and reliability of supply chain data that has been collected from heterogeneous data sources (suppliers, customers, service providers, etc.). Within POSH, we developed a semantically-enhanced knowledge integration framework that uses various data repositories and external (meta)data to provide a clear overview of the current state of the supply chain and the necessary inputs for the prediction, optimization and decision support methods.
|
| 120 |
+
|
| 121 |
+
To this end, a KG is generated from the heterogeneous supply chain data and consequently exposed through a triple store endpoint. This enables our partners to take advantage of running queries against a uniform data model without being burdened with heterogeneous sources from which it constitutes, and focus on the designing algorithms for optimizing the supply chain. However, not all data was made available from the start but rather added progressively, and the requirements together with the mappings rules that satisfy them changed in parallel. Hence, the KG generation tasks need to be executed iteratively to incorporate the changes, which can become time-consuming when done manually. To iteratively accommodate for changing requirements and constraints, an implementation-independent workflow system was needed. Within POSH, we applied our method to provide workflow system flexibly enough to adapt to different technology stacks.
|
| 122 |
+
|
| 123 |
+
§ 5 DEMONSTRATION
|
| 124 |
+
|
| 125 |
+
In this section we demonstrate a working example of an ETL workflow comprising two tasks: i) generating RDF; and ii) publishing the generated RDF. Due to space restrictions only excerpts of the descriptions are shown.
|
| 126 |
+
|
| 127 |
+
First, we define the task of generating RDF as a function that takes the URI to a mapping, and the URI to which the result should be written. We make use of the RML mapping language to have an interoperable RDF generation step. Secondly, we define the publishing task as a function which takes the URI to the generated RDF data as input parameter and outputs a URI to the endpoint through which it is published. These descriptions are shown in Listing 1.2
|
| 128 |
+
|
| 129 |
+
@prefix fno: <https://w3id.org/function/ontology#> .
|
| 130 |
+
|
| 131 |
+
@prefix fns: <http://example.com/functions#> .
|
| 132 |
+
|
| 133 |
+
fns:generateRDF a fno:Function ;
|
| 134 |
+
|
| 135 |
+
fno:expects ( fns:fpathMappingParameter ) ; fno:returnOutput ) .
|
| 136 |
+
|
| 137 |
+
fns:publish a fno:Function ;
|
| 138 |
+
|
| 139 |
+
fno:expects ( fns:inputRDFParameter ) ; fno:returns ( fns:returnOutput ) .
|
| 140 |
+
|
| 141 |
+
Listing 1.2. Task descriptions in FnO
|
| 142 |
+
|
| 143 |
+
We describe an overarching ETL task as the composition of these two functions, illustrated in Listing 1.3. We define how the data flows between the composed functions using fnoc:CompositionMapping. fnoc:Composition links the output of the first task to the second task by means of a fnoc:CompositionMapping. Note that, using composition, we are able to describe the workflow at multiple levels of abstraction. In analogy with an ETL workflow, for example, the highest level of abstraction represents the three Extract, Transform, and Load tasks. The second level can contain more specific, yet abstract, tasks that are required to fulfill each of the three Extract, Transform, and Load tasks. Depending on the complexity of each task, it can be described further in a lower level of abstraction.
|
| 144 |
+
|
| 145 |
+
@prefix fno: <https://w3id.org/function/ontology#> .
|
| 146 |
+
|
| 147 |
+
@prefix fnoc: <https://w3id.org/function/vocabulary/composition#> .
|
| 148 |
+
|
| 149 |
+
@prefix fns: <http://example.com/functions#> .
|
| 150 |
+
|
| 151 |
+
fns:ETL a fno:Function ;
|
| 152 |
+
|
| 153 |
+
fno:expects ( fns:fpathMappingParameter fns:fpathOutputParameter ) ; fno:returnOutput ) .
|
| 154 |
+
|
| 155 |
+
fns:ETLComposition a fnoc:Composition ;
|
| 156 |
+
|
| 157 |
+
fnoc:composedOf
|
| 158 |
+
|
| 159 |
+
[ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
|
| 160 |
+
|
| 161 |
+
fnoc:functionParameter fns:fpathMappingParameter ] ;
|
| 162 |
+
|
| 163 |
+
fnoc:mapTo [ fnoc:constituentFunction fns:generateRDF ;
|
| 164 |
+
|
| 165 |
+
fnoc:functionParameter fns:fpathMappingParameter ] ] ,
|
| 166 |
+
|
| 167 |
+
[ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
|
| 168 |
+
|
| 169 |
+
fnoc:functionParameter fns:fpathOutputParameter ] ;
|
| 170 |
+
|
| 171 |
+
fnoc:mapTo [ fnoc:constituentFunction fns:generateRDF ;
|
| 172 |
+
|
| 173 |
+
fnoc:functionParameter fns:fpathOutputParameter ] ] ,
|
| 174 |
+
|
| 175 |
+
[ fnoc:mapFrom [ fnoc:constituentFunction fns:generateRDF ;
|
| 176 |
+
|
| 177 |
+
fnoc:functionOutput fns:returnOutput ] ;
|
| 178 |
+
|
| 179 |
+
fnoc:mapTo [ fnoc:constituentFunction fns:publish ;
|
| 180 |
+
|
| 181 |
+
fnoc:functionParameter fns:inputRDFParameter ] ] ,
|
| 182 |
+
|
| 183 |
+
[ fnoc:mapFrom [ fnoc:constituentFunction fns:publish ;
|
| 184 |
+
|
| 185 |
+
fnoc:functionOutput fns:returnOutput ] ;
|
| 186 |
+
|
| 187 |
+
fnoc:mapTo [ fnoc:constituentFunction fns:ETL ;
|
| 188 |
+
|
| 189 |
+
fnoc:functionOutput fns:returnOutput ] ] .
|
| 190 |
+
|
| 191 |
+
Listing 1.3. ETL Workflow description using FnO composition
|
| 192 |
+
|
| 193 |
+
We created a proof-of-concept Function Handler that automatically executes these descriptions using different implementations, available at https://github.com/FnOio/function-handler-js/tree/kgc-etl.Furthermore, we provide tests ${}^{2}$ in which we verify the execution sequence of a function composition, and demonstrate the interoperability through function compositions that resemble a KG construction workflow in which the RDF-generation task can be implemented by different tools.
|
| 194 |
+
|
| 195 |
+
§ 6 CONCLUSION
|
| 196 |
+
|
| 197 |
+
Declarative function descriptions, and compositions thereof, allow us to define workflows that are decoupled from the execution environment. The explicit semantics allow for the unambiguous definition of inputs, outputs and implementations. Hence, allowing for automatically determine the functions that can be used to execute a task. Alignment with PROV allows for a reproducible workflow as both tasks and execution details are provided, which enables to exactly determine which functions were applied throughout the execution of the workflow.
|
| 198 |
+
|
| 199 |
+
Defining a workflow through compositions allows for different levels of abstraction. When rapid prototyping is required, it suffices to describe only high-level tasks. As requirements become more concrete, a high-level task can be described in greater detail as a composition of more fine-grained tasks.
|
| 200 |
+
|
| 201 |
+
These various levels of abstraction also allow for various levels of provenance information and thus various levels of reproducibility. For example, at one end of the spectrum, a function can be implemented by a command-line tool: no provenance information is available about the transformations that were applied to produce the output. At the other end of the spectrum, a task can be described as a (nested) composition of fine-grained functions: provenance information is available down to the level of atomic functions.
|
| 202 |
+
|
| 203 |
+
For future work, we see a mapping language as a way to describe compositions of transformation tasks. By representing, e.g., a Triples Map in RML as a composition of data and schema transformation tasks, we can provide insights into what a mapping does, and in what order. These insights could help to provide optimization strategies for such engines.
|
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,205 @@
| 1 |
+
# A Human-in-the-Loop Approach for Personal Knowledge Graph Construction from File Names
|
| 2 |
+
|
| 3 |
+
Markus Schröder, Christian Jilek, and Andreas Dengel
|
| 4 |
+
|
| 5 |
+
${}^{1}$ Smart Data & Knowledge Services Dept., DFKI GmbH, Kaiserslautern, Germany
|
| 6 |
+
|
| 7 |
+
${}^{2}$ Computer Science Dept., TU Kaiserslautern, Germany
|
| 8 |
+
|
| 9 |
+
\{markus.schroeder, christian.jilek, andreas.dengel\}@dfki.de
|
| 10 |
+
|
| 11 |
+
Abstract. Knowledge workers' personal and work-related concepts (e.g. persons, projects, topics) are usually not sufficiently covered by knowledge graphs. Yet handmade classification schemes, most prominently folder structures, already mention several of these concepts in file names. Thus, such data could be a promising source for constructing personal knowledge graphs. However, this idea poses several challenges: file names are usually noisy, non-grammatical text snippets, while folder structures do not clearly define how concepts relate to each other. To cope with this semantic gap, we include knowledge workers as humans-in-the-loop to guide the building process with their feedback. Our semi-automatic personal knowledge graph construction approach consists of four major stages: domain term extraction, ontology population, and taxonomic and non-taxonomic relation learning. We conduct a case study with four expert interviews from different domains in an industrial scenario. Results indicate that file systems are promising sources and, combined with our approach, already yield useful personal knowledge graphs with moderate effort.
|
| 12 |
+
|
| 13 |
+
Keywords: Knowledge Graph Construction · Personal Knowledge Graph · Human-in-the-Loop · File System
|
| 14 |
+
|
| 15 |
+
## 1 Introduction
|
| 16 |
+
|
| 17 |
+
Knowledge graphs (KGs) have become a popular technology to support knowledge workers in various applications (for a survey see [8]). Since such KGs are constructed from domain-specific document corpora, personal concepts of knowledge workers in these domains are usually not sufficiently covered. To fill this gap, there is the emerging concept of Personal Knowledge Graphs (PKGs), which focus on resources users are personally related to (also in their professional life). The population and maintenance of such graphs is still an open research question [1], especially when knowledge is not modeled yet (cold start problem). Various sources in a user's personal information sphere may be worth considering to kick-start a population [12].
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
|
| 21 |
+
Fig. 1: A file system (left) with file names containing relevant words (green) and irrelevant words (red). They form a personal knowledge graph (right) with non-taxonomic and taxonomic relations. For readability, some edges are omitted.
|
| 22 |
+
|
| 23 |
+
When users self-organize diverse documents in daily business, they often manage them in a form of classification schema, prominently in file systems [4]. Here, documents are hierarchically arranged and freely named according to aspects such as projects, organizations, persons, topics and task-related concepts. Such concepts are typically mentioned in file and folder names in order to let users guess their contents. Because file systems impose almost no naming restrictions ${}^{3}$, users tend to label files and folders with their own vocabulary, which can contain technical terms, made-up words or even puns [2]. Thus, we hypothesize that file names could be a promising source for constructing PKGs.
|
| 24 |
+
|
| 25 |
+
This idea poses several challenges due to the nature of the data source. The literature already showed that users have a large variety of file naming strategies [5,3]. File names are usually short, ungrammatical (sometimes noisy) text snippets and contain differently ordered and concatenated keywords. These circumstances make it difficult to discover and extract relevant named entities from them. Besides labeling, users can also assemble files in hierarchically structured folders [14]. Yet, this "folder contains file" structure typically does not explicitly define how named entities relate to each other.
|
| 26 |
+
|
| 27 |
+
To give a visual example, Figure 1 depicts a small file system (left) and a possible personal knowledge graph (right). Because some keywords in the file names are too general (images) or have a technical meaning (Thumbs), they may be irrelevant for the user (underlined in red). Relevant keywords (green) become resources in the PKG, while a foaf:topic property keeps track of the file resource in which each is mentioned (only one is shown due to readability). Named individuals (Zenphase, Parker, Mercurtainment) are assigned to their classes (Project, Person, Organization) and are connected meaningfully (:hasProject, :worksFor). The remaining ones are rather abstract ideas and thus become skos:Concepts according to the Simple Knowledge Organization System (SKOS). A taxonomy tree is formed (top-right side) by adding broader concepts (:DocumentType, :DocumentState). Since WIP is an abbreviation, its skos:prefLabel contains the long form. Synonyms and other spellings are captured in skos:hiddenLabels: for the user, the term Drawing is a synonym of treeDiagram, and docs in file names indicates the concept Document. Due to the lack of space, labels and some other properties are not visualized.
|
| 28 |
+
|
| 29 |
+
---
|
| 30 |
+
|
| 31 |
+
${}^{3}$ Restricted only by illegal characters and maximum file name length.
|
| 32 |
+
|
| 33 |
+
---
|
| 34 |
+
|
| 35 |
+
In this paper, we present a semi-automatic personal knowledge graph construction approach which is able to build such a graph from a classification schema, in this case, a file system and expert feedback. A graphical user interface (GUI) assists a knowledge engineer (KE) in performing several tasks during construction: the discovery of concepts in file names, ontology population of concepts and the learning of taxonomic as well as non-taxonomic relations. In an interview setting, an expert describes his or her personal view on their files to the KE, who translates the explanations into suitable knowledge graph statements using the GUI. To reduce the manual effort for the KE, we make use of machine learning models which learn from feedback and predict new statements during usage. The proposed method raises several research questions (RQs), for which first answers are reported in this work.
|
| 36 |
+
|
| 37 |
+
- RQ1: Are file systems promising sources for knowledge graph construction?
|
| 38 |
+
|
| 39 |
+
- RQ2: Can our system suggest helpful statements during usage?
|
| 40 |
+
|
| 41 |
+
- RQ3: How efficient is the construction in our approach?
|
| 42 |
+
|
| 43 |
+
The rest of this paper is structured as follows: related approaches are covered in the next section (Sec. 2). This is followed by the presentation of our approach in Section 3 and a prototypical implementation in Section 3.6. The above research questions are then addressed in a case study with expert interviews in Section 4. Section 5 closes the paper with a conclusion and future work.
|
| 44 |
+
|
| 45 |
+
## 2 Related Work
|
| 46 |
+
|
| 47 |
+
To personally assist knowledge workers in their tasks, knowledge services benefit from personal information models about users [12]. For building such a model, personal concepts can be acquired from various texts in a user's personal information sphere [13]. Thus, folder structures could be useful for this purpose which is also investigated by other related works.
|
| 48 |
+
|
| 49 |
+
Magnini et al. [10] also consider hierarchical classifications and analyze the implicit knowledge hidden in the labeled nodes. They use logic formulas expressed in description logic, and word senses discovered and disambiguated in labels, to make knowledge explicit. Contextual interpretations such as implicit disjunctions and negations are performed by exploiting the hierarchy. In contrast to our work, their goal is the definition of an ontology with classes and properties (TBox) by relying on external language repositories containing word senses. For us, the usage of such resources is limited, since word senses of personal concepts (like projects) are usually not contained in them. Moreover, they present a fully automatic approach without integrating domain experts in cases where labels do not match any entry in dictionaries.
|
| 50 |
+
|
| 51 |
+
More closely related is the work on knowledge extraction from classification schemes by Lamparter et al. [9]. Following the same motivation, the authors would like to acquire explicit semantic descriptions from legacy information such as local folder structures. To achieve this, their processing pipeline includes the identification of concept candidates, word sense disambiguation, taxonomy construction and the identification of non-taxonomic relations. They distinguish between ontology and instance layer by checking with dictionaries whether terms are rather general (concepts) or specific (instances). In our approach, we only consider instances, but classify general ideas as skos:Concepts (e.g. Diagram). They also build a taxonomy by utilizing hyponym and hyperonym information. In the case of non-taxonomic relations, their work reuses domain-specific ontologies, while the classification hierarchy as well as its labels are consulted to guess appropriate relations. Our procedure is similar, but additionally considers user feedback to train machine learning models in order to predict such relations.
|
| 52 |
+
|
| 53 |
+
In conclusion, to the best of our knowledge, there is no approach like ours that constructs personal knowledge graphs from folder structures and at the same time includes experts with their feedback.
|
| 54 |
+
|
| 55 |
+
## 3 Approach
|
| 56 |
+
|
| 57 |
+

|
| 58 |
+
|
| 59 |
+
Fig. 2: Components of our approach from left to right.
|
| 60 |
+
|
| 61 |
+
Our approach enables knowledge engineers (KEs) to construct personal knowledge graphs from a classification schema, for example, a folder structure as shown in Figure 1. In this process, we support them in four tasks which are depicted in Figure 2 and explained in individual sections: Domain Terminology Extraction (Section 3.2), Management of Named Individuals (Section 3.3), Taxonomy Creation (Section 3.4) and Non-Taxonomic Relation Learning (Section 3.5). During modeling using a dedicated GUI (Section 3.6) the KE is assisted by an artificial intelligence (AI) system which proactively makes statements on its own. For ontology population and non-taxonomic relations, machine learning models predict statements. To correctly store and distinguish these assertions, we first designed an appropriate data model.
|
| 62 |
+
|
| 63 |
+
### 3.1 Knowledge Graph Model
|
| 64 |
+
|
| 65 |
+
Our knowledge graph model is an RDF graph consisting of statements in the form of subject-predicate-object triples. However, in our scenario, we have to store additional feedback information for each statement. We consider exactly two agents in our system who are able to give feedback about statements: a knowledge engineer (KE) and an artificial intelligence (AI). Both contribute to the same personal knowledge graph with assertions which can be true, but also false (negative statements). To keep track of the provenance, we store the following meta data for each statement: (a) which agent stated it, (b) the date and time it was stated, (c) how the statement is rated (true, false or undecided) and (d) how confident the agent is (a real value between 0 and 1). Additionally, we use foaf:topic statements to state that a classification schema node (subject) mentions a certain knowledge graph resource (object) (see an example in Figure 1). Regarding the rating, since natural intelligence is usually more reliable than artificial intelligence, the KE always outvotes suggestions from the AI. Yet, assertions of the AI are assumed to be true as long as the KE does not disagree.
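The per-statement feedback meta data (a)-(d) and the voting rule can be sketched as a small data model. The following Python sketch is ours, not taken from the prototype; all names (`Statement`, `effective_rating`, etc.) are illustrative assumptions:

```python
# Hypothetical sketch of the per-statement feedback model; names are
# illustrative, not the authors' implementation.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Agent(Enum):
    KE = "knowledge engineer"
    AI = "artificial intelligence"

class Rating(Enum):
    TRUE = "true"
    FALSE = "false"
    UNDECIDED = "undecided"

@dataclass
class Statement:
    subject: str
    predicate: str
    obj: str
    agent: Agent          # (a) which agent stated it
    stated_at: datetime   # (b) date and time it was stated
    rating: Rating        # (c) true, false or undecided
    confidence: float     # (d) a real value between 0 and 1

def effective_rating(assertions: list[Statement]) -> Rating:
    """The KE always outvotes the AI; AI assertions hold until the KE disagrees."""
    ke = [a for a in assertions if a.agent is Agent.KE]
    pool = ke if ke else assertions
    # take the most recent assertion of the winning agent
    return max(pool, key=lambda a: a.stated_at).rating
```

For example, an AI assertion rated true stays in effect until the KE later rates the same triple false, at which point the KE's rating wins.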
|
| 66 |
+
|
| 67 |
+
### 3.2 Domain Terminology Extraction
|
| 68 |
+
|
| 69 |
+
Our extraction method uses heuristics to make a first guess at relevant terms in the user's domain. Since word boundaries are often not evident in rather messy file names, we tokenize their basenames (without considering file extensions) by character type and camel case. In addition, the acquired tokens are rated based on some simple rules: stop words and tokens containing a single letter or only symbols are negatively rated. This also applies to tokens which only contain digits, unless they look like years (e.g. $n \in [1980, 2030]$). Applying these rules, the following example is tokenized (indicated by a pipe symbol '|') and rated (indicated by color) in the following way: WIP|_____|for|2007|-|tree|Diagram|!|(|28|)|A|.jpg. Thus, the rules let us assume that the tokens WIP, 2007, tree and Diagram are relevant. In case of multi-word terms, the KE is able to merge separated tokens into a single term again, like for the latter two (i.e. Tree Diagram).
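The tokenization and rating heuristics above can be sketched as follows. The regular expression, the stop word list and the exact rule order are our own simplifications (for instance, this sketch groups adjacent symbols into one token), not the prototype's actual implementation:

```python
import re

STOP_WORDS = {"for", "the", "and", "of"}  # illustrative subset

def tokenize(basename: str) -> list[str]:
    # split by character type (letters / digits / symbols) and camel case
    return re.findall(r"[A-Z][a-z]+|[a-z]+|[A-Z]+(?![a-z])|\d+|[^A-Za-z0-9]+",
                      basename)

def is_relevant(token: str) -> bool:
    if token.lower() in STOP_WORDS:
        return False                        # stop words are negatively rated
    if len(token) == 1:
        return False                        # single letters are negatively rated
    if not any(c.isalnum() for c in token):
        return False                        # only-symbol tokens are negatively rated
    if token.isdigit():
        return 1980 <= int(token) <= 2030   # digits only pass if they look like years
    return True
```

Applied to the example above, `tokenize` splits treeDiagram into tree and Diagram, and `is_relevant` keeps WIP and 2007 while rejecting 28, for and A.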
|
| 70 |
+
|
| 71 |
+
After adjusting the rating according to feedback from a domain expert, other occurrences of accepted terms are automatically searched using a regular expression, since they may occur in a classification scheme more than once. If the term contains multiple words, we also search for all possible word concatenations using the separators "-" (hyphen), "_" (underscore) and " " (space), as well as no separator at all. To give an example, for the term treeDiagram our system also checks the variations tree-Diagram, tree_Diagram and tree Diagram. Finally, the collected term variations are associated with a named individual (i.e. an owl:NamedIndividual according to OWL).
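A minimal sketch of this variation search, using the separators listed above (function names are illustrative assumptions):

```python
import re

SEPARATORS = ["-", "_", " ", ""]  # hyphen, underscore, space, no separator

def variations(term_tokens: list[str]) -> list[str]:
    # all concatenations of the term's words with each separator
    return [sep.join(term_tokens) for sep in SEPARATORS]

def find_occurrences(term_tokens: list[str], names: list[str]) -> list[str]:
    # case-insensitive regular-expression search for any variation in names
    pattern = re.compile("|".join(re.escape(v) for v in variations(term_tokens)),
                         re.IGNORECASE)
    return [n for n in names if pattern.search(n)]
```

For the term treeDiagram this generates tree-Diagram, tree_Diagram, tree Diagram and treeDiagram, and matches them in other file and folder names regardless of case.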
|
| 72 |
+
|
| 73 |
+
### 3.3 Management of Named Individuals
|
| 74 |
+
|
| 75 |
+
After retrieving all found term variations $T$, we have to decide whether they (a) resemble an already existing named individual or (b) define a new one. Regarding the first case, each newly discovered term may be a variation that refers to an already created named individual. Thus, we calculate the Jaccard similarity coefficient [7] between the terms $T$ and the candidates' labels $L$. The named individual with the highest overlap between its labels and the given terms is picked. If we cannot find such a resource above a sufficient similarity threshold, a new one is created. The longest term is used to give the resource a preferred label (skos:prefLabel) after some conversions are performed: German umlaut spellings are corrected (e.g. "ae" $\rightarrow$ "ä"), underscores are replaced with spaces, if available a lemma version is used (diagrams $\rightarrow$ diagram) and proper case is applied (Tree Diagram). The remaining terms form the named individual's synonym and differently spelled labels (skos:hiddenLabel). In both cases, we keep track of the file resources in which the named individuals are mentioned by using a foaf:topic relation.
|
| 76 |
+
|
| 77 |
+
Unification. If two or more named individuals have the same meaning, we can unify them into one resource. This is done by correctly substituting URIs and at the same time removing the source triples. The AI automatically detects potential individuals with the same meaning by looking at their labels and applying some rules: for hidden labels, it checks whether they overlap or whether there is a prefix or postfix dependency, while preferred labels are compared with the Levenshtein distance and token-based equality. For example, for the following label pairs our procedure would suggest that their individuals are equal: ("Peter Parker", "Parker Peter"); ("Tree Diagram", "Diagram") and ("diagram", "diagramm").
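A rough sketch of the preferred-label comparison; the distance threshold `max_dist` is an assumption, as the paper does not state the exact value:

```python
def token_equal(a: str, b: str) -> bool:
    # token-based equality: same words regardless of order and case
    return sorted(a.lower().split()) == sorted(b.lower().split())

def levenshtein(a: str, b: str) -> int:
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def maybe_same(pref_a: str, pref_b: str, max_dist: int = 2) -> bool:
    # suggest unification when tokens match or the labels are nearly identical
    return (token_equal(pref_a, pref_b)
            or levenshtein(pref_a.lower(), pref_b.lower()) <= max_dist)
```

Under these assumptions, "Peter Parker" / "Parker Peter" match by token equality and "diagram" / "diagramm" by edit distance.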
|
| 78 |
+
|
| 79 |
+
Ontology Population. The KE manually creates ontology classes and types named individuals with them. To support the KE in this assignment, a random forest model [6] is trained with positive examples from feedback in order to predict classes for individuals without a type. To acquire training features, we follow a gazetteer-based embedding technique by looking up words from several gazetteer lists in the preferred labels of named individuals. Remaining characters are counted per character class, such as spaces, quotes and digits. The coverage proportions of words and characters in the label serve as the final feature vector. To give some examples, "Tree Diagram 27" receives the vector ${v}_{1} = (\text{English Noun} = 0.73, \text{Space} = 0.13, \text{Digit} = 0.13)$, while "WIP" has ${v}_{2} = (\text{Uppercase Letter} = 1.0)$. Having such feature vectors, the random forest model is able to learn decision trees which predict the same type for named individuals whose preferred labels are very similar in content. For instance, since the individual Tree Diagram 27 is assigned to skos:Concept and another individual Diagram 3 has a similar feature vector, our model predicts the same class for it.
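The gazetteer-based feature extraction can be sketched as below. The gazetteer content and feature names are our own illustrative assumptions, but the resulting proportions reproduce the example vectors ${v}_{1}$ and ${v}_{2}$ given above:

```python
import re

# Illustrative gazetteer; the prototype uses several larger lists.
GAZETTEERS = {"english_noun": {"tree", "diagram", "report", "invoice"}}

def features(label: str) -> dict:
    """Coverage proportions of gazetteer words and character classes in a label."""
    n = len(label)
    feats = {}
    covered = [False] * n
    for name, words in GAZETTEERS.items():
        total = 0
        for m in re.finditer(r"[A-Za-z]+", label):
            if m.group().lower() in words:
                total += len(m.group())
                for i in range(m.start(), m.end()):
                    covered[i] = True
        feats[name] = total / n
    # count the remaining (uncovered) characters per character class
    rest = [c for i, c in enumerate(label) if not covered[i]]
    feats["space"] = sum(c == " " for c in rest) / n
    feats["digit"] = sum(c.isdigit() for c in rest) / n
    feats["upper"] = sum(c.isupper() for c in rest) / n
    return feats
```

For "Tree Diagram 27", the words Tree and Diagram cover 11 of 15 characters (≈ 0.73), with two spaces and two digits remaining (≈ 0.13 each); for "WIP", all characters are uppercase letters (1.0).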
|
| 80 |
+
|
| 81 |
+
### 3.4 Taxonomy Creation
|
| 82 |
+
|
| 83 |
+
Our intended taxonomy uses broader and narrower relations to structure concepts (skos:Concept) found in file names according to the Simple Knowledge Organization System (SKOS). Since we see these concepts as leaves in a taxonomy tree, our motivation is to find broader concepts for them. For this, our approach utilizes a language resource of synsets and hypernym relations. The concepts in the PKG are mapped via their labels to synsets of the lexical-semantic net. By traversing hypernym relations for all found synsets, two or more of them may share the same ancestor along their hypernym paths. If the average distance from the synsets to the ancestor is below a configurable threshold, it is suggested as a broader concept for them. This constraint avoids the recommendation of too general concepts (e.g. near the root node). To give an example, given the hypernym paths diagram $\rightarrow$ depiction and timetable $\rightarrow$ overview $\rightarrow$ depiction, our procedure would suggest the broader concept depiction for both leaves. Of course, the KE may at any time create concepts manually and link them accordingly. Besides such taxonomic relations, our system also considers non-taxonomic ones between instances.
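A minimal sketch of the common-ancestor suggestion, assuming hypernym paths are given as lists leading from each leaf concept towards the root (the threshold value is an assumption):

```python
from collections import defaultdict

def suggest_broader(hypernym_paths: dict[str, list[str]],
                    max_avg_dist: float = 2.0) -> dict[str, list[str]]:
    """Suggest a shared ancestor as broader concept when the average distance
    from the leaf concepts to that ancestor stays below a threshold."""
    dists = defaultdict(dict)  # ancestor -> {leaf: distance along hypernym path}
    for leaf, path in hypernym_paths.items():
        for d, ancestor in enumerate(path, start=1):
            dists[ancestor][leaf] = d
    suggestions = {}
    for ancestor, leaf_d in dists.items():
        if len(leaf_d) >= 2:  # shared by two or more leaf concepts
            if sum(leaf_d.values()) / len(leaf_d) <= max_avg_dist:
                suggestions[ancestor] = sorted(leaf_d)
    return suggestions
```

For the example above (diagram → depiction; timetable → overview → depiction), depiction has an average distance of 1.5 and is suggested as broader concept for both leaves.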
|
| 84 |
+
|
| 85 |
+
### 3.5 Non-Taxonomic Relation Learning
|
| 86 |
+
|
| 87 |
+
To predict non-taxonomic relations, we perform link prediction by training a model on positive examples from feedback and by exploiting the structure of the classification schema (CS). Our idea is that the same non-taxonomic predicate could be suggested between other resources (subjects and objects) which have a similar neighborhood in the CS. For this, we only consider class instances, i.e. named individuals that have been assigned to an ontology class. Since instances are annotated on files via a foaf:topic relation, we know in which places of the CS they are mentioned. This annotated CS is transformed into an undirected graph of connected instances in order to perform link prediction on it. We add an edge from an instance $i$ mentioned in a given node to another instance $j$ whenever $j$ is mentioned in (a) the node itself, (b) the node's parent, (c) one of the node's children or (d) one of the node's siblings (i.e. children of the parent). In other words, instances are connected in the graph if they are mentioned close to each other in the CS. With the given graph, we are able to calculate local similarity measures for links (for a survey see [11, Table 1]). Values of the calculated measures form feature vectors in a training set. The test set is acquired by iterating over all possible combinations of instances and properties, using their domain and range information as a filter. A triple in the test set is considered promising when the Euclidean distance between its test vector and a training vector is small (below a given threshold).
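The graph construction rule (a)-(d) can be sketched as follows, assuming the classification schema is given as a parent-to-children map and a per-node set of mentioned instances (both representations are our assumptions):

```python
from itertools import combinations

def build_instance_graph(tree: dict, topics: dict) -> set:
    """tree: node -> list of child nodes; topics: node -> set of instances.
    Connect instances mentioned in the same node, in parent/child nodes,
    or in sibling nodes of the classification schema."""
    edges = set()

    def connect(nodes):
        # connect all instances mentioned across the given nodes
        instances = set().union(*(topics.get(n, set()) for n in nodes))
        for i, j in combinations(sorted(instances), 2):
            edges.add((i, j))

    for node, children in tree.items():
        connect([node] + children)  # (a) same node, (b)/(c) parent-child pairs
        connect(children)           # (d) siblings: children of the same parent
    return edges
```

Local link-similarity measures (e.g. common neighbors) can then be computed on this undirected graph for the feature vectors mentioned above.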
|
| 88 |
+
|
| 89 |
+
### 3.6 Prototypical Implementation
|
| 90 |
+
|
| 91 |
+
To test our approach in a case study, we implemented a prototype. A demo video ${}^{4}$ and its source code ${}^{5}$ are publicly available. To assist the KE in entering feedback and constructing the PKG, a graphical user interface (GUI) in the form of a web application is provided (see Figure 3). Throughout the interface, we make heavy use of thumbs-up and thumbs-down buttons as well as green and red colored elements to visualize positive and negative feedback (true and false assertions). The three-column layout presents tabs for individual components which provide dedicated views for the tasks we have discussed.
|
| 92 |
+
|
| 93 |
+
A typical Explorer view (top left) lists the files contained in the currently browsed folder (/User/Downloads). For each file, the view presents (from top to bottom) its file name, rated terms from the file name and annotated named individuals. To distinguish individuals from terms, the well-known hashtag symbol is added to their preferred labels. In a separate Named Individuals view in the top middle, we itemize them together with their types. Two side-by-side views enable a Drag&Drop mechanism on individuals to let the KE define triples with a selected predicate (drop-down list in the middle). On the top right, classes and properties can be manually created, renamed and rated in an Ontology view. For each property, domain and range classes can be defined too. In separate tabs (bottom left), our GUI also presents suggestions for Unification, Typing, Taxonomic and Non-Taxonomic Relations (the screenshot shows an opened Typing tab). A list of proposals from the AI can be reviewed by the KE, who can accept or reject them individually or in bulk. Decisions are shown below and can always be undone either way. In a detail view (bottom middle), the KE is able to change a selected individual's preferred label, type, hidden labels and file attachment. A Status view (bottom right) visualizes the current PKG construction state in four sections: the progress in tagging, typing, the taxonomy tree and the non-taxonomic graph, as well as an overall assessment score. These estimations give the KE hints where more feedback from the expert is necessary.
|
| 94 |
+
|
| 95 |
+
---
|
| 96 |
+
|
| 97 |
+
${}^{4}$ https://www.dfki.uni-kl.de/~mschroeder/demo/kecs
|
| 98 |
+
|
| 99 |
+
${}^{5}$ https://github.com/mschroeder-github/kecs
|
| 100 |
+
|
| 101 |
+
---
|
| 102 |
+
|
| 103 |
+

|
| 104 |
+
|
| 105 |
+
Fig. 3: Our graphical user interface in a three-column layout with many feedback possibilities and components (top). Dedicated components are provided to perform certain tasks (bottom).
|
| 106 |
+
|
| 107 |
+
## 4 Case Study: Expert Interviews
|
| 108 |
+
|
| 109 |
+
A case study was conducted with expert interviews in which personal knowledge graphs (PKGs) were built with their feedback. The setup for these interviews is covered in Section 4.1. This is followed by a detailed description of all collected results (Section 4.2) which are then discussed with regard to our stated research questions (Section 4.3).
|
| 110 |
+
|
| 111 |
+
Table 1: Four datasets with their meta data which are used in interviews with four experts.
|
| 112 |
+
|
| 113 |
+
<table><tr><td>Dataset</td><td>Expert</td><td>Branches</td><td>Leaves</td><td>Max. Depth</td><td>Avg. Depth</td><td>Avg. Name Length</td></tr><tr><td>SS1</td><td>E1</td><td>103</td><td>198</td><td>3</td><td>2.98 ± 0.16</td><td>8.84 ± 9.86</td></tr><tr><td>FS1</td><td>E2</td><td>25,988</td><td>95,760</td><td>17</td><td>9.49 ± 1.93</td><td>23.30 ± 16.88</td></tr><tr><td>FS2</td><td>E3</td><td>8,939</td><td>64,571</td><td>17</td><td>9.18 ± 1.68</td><td>32.43 ± 16.77</td></tr><tr><td>FS3</td><td>E4</td><td>54,933</td><td>325,476</td><td>22</td><td>10.08 ± 2.22</td><td>24.24 ± 14.57</td></tr></table>
|
| 114 |
+
|
| 115 |
+
### 4.1 Expert Interview Setup
|
| 116 |
+
|
| 117 |
+
Since our institute runs industry projects with several departments of a large power supply company, we had the great opportunity to get in contact with four individual experts from four departments (guideline management, property management, license management and accounting). Three of them work separately on individual shared-drive file systems (FS), while one primarily manages spreadsheet (SS) data. Before the interviews, we received dumps of their data, which are listed in Table 1. Each dataset is assigned to an expert (E), and meta data about the asset is presented.
|
| 118 |
+
|
| 119 |
+
Since spreadsheets may also contain work-related concepts, but are not a form of classification schema, we had to convert the SS1 dataset into a tree structure in the following way: table names become root folders, while column names are added as their subfolders. In the subfolders, we add files with distinct names taken from the column's rather short cell values. This way, potential work-related concepts could be contained in this generated classification schema.
|
| 120 |
+
|
| 121 |
+
Our system automatically captures several data points during usage. To make the construction process reproducible, we keep a history of all stated assertions with their meta data as described in Section 3.1. By observing GUI inputs, including mouse clicks, Drag&Drop operations and certain keystrokes, we quantify the KE's effort with the system. At a fixed interval (every 10 inputs), snapshots of the construction metrics (Status view) are saved to record the PKG's evolution over time. Additionally, memory consumption and time performance of certain system modules are monitored.
|
| 122 |
+
|
| 123 |
+
Each one-hour interview between the knowledge engineer (KE) and an expert had the same setting. One fixed author of this paper took over the role of the KE and met the expert in a virtual telephone conference. The KE shared the screen and presented the GUI of our system (see Section 3.6), where the expert's data was already loaded. After a brief introduction, the KE started to ask questions about files and folders by traversing the file system. The explanations of the participant enabled the KE to model the expert's personal knowledge as discussed in our approach (Section 3). Whenever the AI made predictions, the expert was asked whether they were correct, and feedback was entered accordingly. Every 10 minutes, the KE reviewed the current construction state by opening the Status view and shifted the focus to parts which needed more attention. After about 50 minutes the session ended, and the remaining time was used to let the expert complete a questionnaire about the data source and the modeled knowledge graph. In the next section, we present the questionnaire and the results in detail, as well as the data which was logged by our prototype during the interviews.
|
| 124 |
+
|
| 125 |
+
Table 2: The seven questions from the questionnaire with the answers of the four experts and their average values.
|
| 126 |
+
|
| 127 |
+
<table><tr><td>Question</td><td>E1</td><td>E2</td><td>E3</td><td>E4</td><td>Avg. & SD</td></tr><tr><td>Q1: How many years have you been working with the data?</td><td>13</td><td>7</td><td>4</td><td>0</td><td>6 ± 5.48</td></tr><tr><td>Q2: How much do words in the file names reflect your language use (vocabulary) at work (scale: 1-10)?</td><td>9</td><td>8</td><td>9</td><td>9</td><td>8.75 ± 0.50</td></tr><tr><td>Q3: Estimate how much your language use (vocabulary) at work is represented by the established tags (percentage).</td><td>50</td><td>15</td><td>10</td><td>10</td><td>21.25 ± 19.3</td></tr><tr><td>Q4: The established tags meaningfully reflect the language use (vocabulary) at your work (scale: 1-7).</td><td>7</td><td>6</td><td>4</td><td>6</td><td>5.75 ± 1.26</td></tr><tr><td>Q5: The established tags are assigned to meaningful classes (scale: 1-7).</td><td>6</td><td>7</td><td>6</td><td>7</td><td>6.50 ± 0.58</td></tr><tr><td>Q6: The established tags are meaningfully structured in a taxonomy (scale: 1-7).</td><td>7</td><td>6</td><td>5</td><td>4</td><td>5.50 ± 1.29</td></tr><tr><td>Q7: The established tags meaningfully relate to each other (scale: 1-7).</td><td>5</td><td>7</td><td>6</td><td>7</td><td>6.25 ± 0.96</td></tr></table>
|
| 128 |
+
|
| 129 |
+
### 4.2 Interview Results
|
| 130 |
+
|
| 131 |
+
The questionnaire at the interview's end consists of seven questions (Q), which are presented in Table 2 together with the experts' answers (E), their average value and standard deviation (Avg. & SD). We stated the first question (Q1) to check how familiar the participants are with the data. The second question (Q2) was asked to figure out whether the experts think that the given data actually contains work-related words. While Q3 tries to give a rough estimation of the PKG's recall in percent, Q4 gives an approximate measure of its precision with regard to the created named individuals ${}^{6}$ in the PKG. From the third question on, we are interested in the experts' opinions about the final result that was modeled during the interview. A seven-point Likert scale is used for our opinion-based questions, ranging from 1 ("fully disagree") to 7 ("fully agree"). The remaining questions aim at estimating the meaningfulness of the populated ontology (Q5) and the taxonomic (Q6) as well as non-taxonomic relations (Q7).
---
${}^{6}$ The questions refer to "established tags", since we presented tags in the GUI for the named individuals in the personal knowledge graph (PKG).
---
Besides qualitative data, we also captured quantitative data points during the interviews, which are presented in Table 3. Measurements are listed per row, while dataset-expert pairs are ordered in columns. After the number of resources in the PKG (#Resources) and the counts regarding the knowledge engineer's (KE) effort in the GUI, we list the number of true and false assertions ${}^{7}$ made by KE and AI in the individual construction phases. Furthermore, we calculate the AI's accuracy by counting how often the expert agrees (true positive and true negative) with reviewed predictions. The section about Management of Named Individuals is further split into Unification and Ontology Population. While the management includes assertions about types, preferred/hidden labels and foaf:topic-relations, the latter two only consider owl:sameAs and ontology-related assertions. Due to a software error in the taxonomy module during the first two interviews, unfortunately, no broader concepts could be predicted. At the bottom of the table, all assertions by the KE (whether true or false) and all inputs (clicks, enter keys, drag&drop operations) are aggregated to calculate an assertions-per-inputs ratio. The Management of Named Individuals does not have an accuracy value (N/A), since each term automatically turns into a named individual and no suggestions for preferred and hidden labels are made.
Since we continuously recorded measurements, we are able to examine the evolution of the PKG with respect to the inputs performed in the GUI. The development of the taxonomic and non-taxonomic parts of the PKG is presented in several plots in Figure 4. We consider named individuals of type skos:Concept as taxonomy concepts (Figure 4a) and the remaining typed ones as non-taxonomic instances (Figure 4d). By looking at the number of graph components (Fig. 4b and 4e), one gets an idea of the connectedness over time. In addition, Figure 4c plots the number of concepts which are connected to at least one broader concept. Similarly, Figure 4f shows the average diameter (the greatest distance between any pair of instances) of non-taxonomic components to visualize the closeness among them.
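Both graph measures can be computed with plain breadth-first search. The following sketch (the toy graph, node names and function names are illustrative, not taken from our implementation) counts connected components and determines a component's diameter:

```python
from collections import deque

def components(adj):
    """Connected components of an undirected graph given as an adjacency dict."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(n for n in adj[node] if n not in comp)
        seen |= comp
        comps.append(comp)
    return comps

def diameter(adj, comp):
    """Greatest shortest-path distance between any pair of nodes in one component
    (BFS from every node of the component)."""
    best = 0
    for src in comp:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for n in adj[node]:
                if n not in dist:
                    dist[n] = dist[node] + 1
                    queue.append(n)
        best = max(best, max(dist.values()))
    return best

# Toy non-taxonomic subgraph: a chain of three instances plus an isolated one.
graph = {"Zenphase": ["Parker"], "Parker": ["Zenphase", "Mercurtainment"],
         "Mercurtainment": ["Parker"], "WIP": []}
```

Tracking these values after every GUI input yields curves like those in Figure 4b, 4e and 4f.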
The next section will discuss the results with regard to our research questions.
### 4.3 Discussion
Since file names are a rather unusual source to build PKGs from, we asked the following question at the beginning of the paper (RQ1): Are file systems promising sources for knowledge graph construction? Our experts agree that the words they saw in the file names reflect their language use at work, with an average value of 8.75 out of 10 (Q2 in Table 2). Having a higher-level management background, expert E4 did not come in touch with file system FS3 in daily work (see Q1 in Table 2), but was still able to recognize and explain the terms. Answers to questions Q4 to Q7 in our questionnaire (Table 2) indicate that we modeled all individual PKGs in a way that was meaningful to the experts. For these reasons, we conclude that file systems are promising sources for building PKGs.
---
${}^{7}$ False assertions by the AI are assertions that it initially made as true but later rejected because of human feedback.
---
Table 3: Quantity of true and false assertions stated by the knowledge engineer (KE) and the AI for the individual construction tasks. Additionally, the KE's GUI effort and the AI's accuracy are given.
<table><tr><td>Measurement</td><td>SS1 (E1)</td><td>FS1 (E2)</td><td>FS2 (E3)</td><td>FS3 (E4)</td></tr><tr><td>#Resources</td><td>88</td><td>50</td><td>39</td><td>32</td></tr><tr><td>KE Clicks</td><td>599</td><td>602</td><td>359</td><td>356</td></tr><tr><td>KE Enter-Key</td><td>60</td><td>56</td><td>30</td><td>47</td></tr><tr><td>KE Drag&Drop</td><td>26</td><td>34</td><td>21</td><td>18</td></tr><tr><td colspan="5">Domain Terminology Extraction (Section 3.2)</td></tr><tr><td>KE True</td><td>82</td><td>50</td><td>33</td><td>26</td></tr><tr><td>KE False</td><td>48</td><td>44</td><td>14</td><td>72</td></tr><tr><td>AI True</td><td>400</td><td>270168</td><td>242149</td><td>948405</td></tr><tr><td>AI False</td><td>286</td><td>220285</td><td>106573</td><td>617366</td></tr><tr><td>AI Accuracy</td><td>0.67 = 45/67</td><td>0.72 = 59/82</td><td>0.83 = 35/42</td><td>0.31 = 25/80</td></tr><tr><td colspan="5">Management of Named Individuals* (Section 3.3)</td></tr><tr><td>KE True</td><td>102</td><td>68</td><td>39</td><td>58</td></tr><tr><td>KE False</td><td>30</td><td>24</td><td>15</td><td>25</td></tr><tr><td>AI True</td><td>462</td><td>32161</td><td>8223</td><td>37159</td></tr><tr><td>AI False</td><td>4</td><td>1</td><td>23</td><td>155</td></tr><tr><td>AI Accuracy</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td colspan="5">Unification* (Section 3.3)</td></tr><tr><td>KE True</td><td>10</td><td>2</td><td>2</td><td>0</td></tr><tr><td>KE False</td><td>6</td><td>18</td><td>12</td><td>4</td></tr><tr><td>AI True</td><td>8</td><td>10</td><td>7</td><td>2</td></tr><tr><td>AI False</td><td>0</td><td>0</td><td>0</td><td>2</td></tr><tr><td>AI Accuracy</td><td>0.57 = 4/7</td><td>0.10 = 1/10</td><td>0.14 = 1/7</td><td>0.00 = 0/2</td></tr><tr><td colspan="5">Ontology Population* (Section 3.3)</td></tr><tr><td>KE True</td><td>105</td><td>78</td><td>61</td><td>55</td></tr><tr><td>KE False</td><td>73</td><td>29</td><td>22</td><td>19</td></tr><tr><td>AI True</td><td>134</td><td>102</td><td>92</td><td>85</td></tr><tr><td>AI False</td><td>1</td><td>8</td><td>6</td><td>2</td></tr><tr><td>AI Accuracy</td><td>0.23 = 18/78</td><td>0.65 = 30/46</td><td>0.66 = 23/35</td><td>0.48 = 12/25</td></tr><tr><td colspan="5">Taxonomy Creation (Section 3.4)</td></tr><tr><td>KE True</td><td>21</td><td>19</td><td>14</td><td>12</td></tr><tr><td>KE False</td><td>0</td><td>0</td><td>4</td><td>8</td></tr><tr><td>AI True</td><td>N/A</td><td>N/A</td><td>9</td><td>10</td></tr><tr><td>AI False</td><td>N/A</td><td>N/A</td><td>0</td><td>0</td></tr><tr><td>AI Accuracy</td><td>N/A</td><td>N/A</td><td>0.56 = 5/9</td><td>0.20 = 2/10</td></tr><tr><td colspan="5">Non-Taxonomic Relation Learning (Section 3.5)</td></tr><tr><td>KE True</td><td>5</td><td>23</td><td>33</td><td>7</td></tr><tr><td>KE False</td><td>0</td><td>42</td><td>20</td><td>0</td></tr><tr><td>AI True</td><td>0</td><td>52</td><td>42</td><td>0</td></tr><tr><td>AI False</td><td>4</td><td>11</td><td>5</td><td>0</td></tr><tr><td>AI Accuracy</td><td>0/0</td><td>0.19 = 10/52</td><td>0.52 = 22/42</td><td>0/0</td></tr><tr><td colspan="5">Aggregated</td></tr><tr><td>All KE Assertions</td><td>482</td><td>397</td><td>269</td><td>286</td></tr><tr><td>All KE Inputs</td><td>685</td><td>692</td><td>410</td><td>421</td></tr><tr><td>KE Assertions/Inputs</td><td>0.70</td><td>0.57</td><td>0.66</td><td>0.68</td></tr></table>
Fig. 4: Plots of the taxonomic and non-taxonomic parts of the PKG with respect to the number of inputs made in the GUI. Each dataset is assigned a symbol to recognize it: SS1 ( $\square$ ), FS1 ( $\circ$ ), FS2 ( $\times$ ) and FS3 ( $\bigtriangleup$ ).
Because a completely manual construction can be time-consuming and AI could thus help in this process, we asked the next question (RQ2): Can our system suggest helpful statements during usage? In our approach, we apply AI in several tasks: (a) the initial selection of domain relevant terms, (b) unification suggestions, (c) recommendation of class memberships, (d) suggestion of broader concepts and (e) prediction of non-taxonomic relations. How they performed can be obtained from Table 3 in the form of accuracy values, which capture how often an expert agreed to suggestions stated by the AI. (a) Since we do not consider multi-word terms in the extraction of domain relevant words, such terms had to be corrected frequently, which led to a drop in performance. (b) Our unification rules tend to suggest more false positives, leading to low accuracy scores, since they are designed with a high recall in mind. (c) The prediction of class assignments shows mediocre results, since only preferred labels in combination with gazetteer lists are used to extract features. (d) For the taxonomy creation, our language resource GermaNet tended to suggest too general concepts, which is why they were often considered unsuitable by our experts. (e) Regarding non-taxonomic relation learning, far too few examples were provided in the case of SS1 and FS3 to be able to predict similar relations. All in all, there is a tendency that in certain cases helpful statements can be automatically suggested, but more research has to be done to further improve the AI.
Concerned about the approach's practicability, we stated the third question (RQ3): How efficient is the construction in our approach? Effort measurements in Table 3 indicate that one input operation results in 0.6 to 0.7 assertions, thus already two inputs lead to a true or false statement. We assume that a value below 1.0 comes from non-negligible GUI navigation and search efforts. Still, many clickable (bulk) feedback buttons combined with suggestions from the AI seem to yield this positive outcome. Especially the drag&drop feature turns out to be a simple and fast way to relate resources to each other. Figure 4 visualizes how taxonomies and graphs evolve over entered inputs ${}^{8}$ . In comparison, the maintenance of taxonomies seems to require less effort than that of the non-taxonomic graphs, probably because only skos:Concepts and the skos:broader-relation need to be considered. The high diameter values of non-taxonomic graphs further indicate that resources in subgraphs are rather loosely connected. In summary, with moderate effort our KE was able to create, accept and also reject many assertions that eventually formed a meaningful personal knowledge graph. Still, efficiency could be further improved by better supporting the construction of the graph's non-taxonomic part.
## 5 Conclusion and Outlook
In this paper, we investigated the construction of personal knowledge graphs from file names with a human-in-the-loop approach. A case study with four independent expert interviews showed that the file system is a promising source, while suggestions by AI help to build such graphs with moderate effort.
Since we could not examine all aspects in detail, future work may further investigate these challenges. For instance, there is potential for improvements in the machine learning models, especially for the prediction of non-taxonomic relations. More sophisticated solutions could be applied in the extraction of domain terminology, including disambiguation and the discovery of multi-word terms.
Acknowledgements This work was funded by the BMBF project SensAI (grant no. 01IW20007).
## References
1. Balog, K., Kenter, T.: Personal knowledge graphs: A research agenda. In: Proc. of the 2019 ACM SIGIR International Conf. on Theory of Information Retrieval, ICTIR 2019, Santa Clara, CA, USA, October 2-5, 2019. pp. 217-220. ACM (2019)
2. Carroll, J.M.: Creative names for personal files in an interactive computing environment. International Journal of Man-Machine Studies 16(4), 405-438 (1982)
${}^{8}$ It has to be noted that the clearly visible outlier SS1 (e.g. Figure 4d) comes from a bulk-import of several resources (categories) found in a spreadsheet column.
3. Crowder, J.W., Marion, J.S., Reilly, M.: File naming in digital media research: Examples from the humanities and social sciences. Journal of Librarianship and Scholarly Communication 3(3) (2015)
4. Dinneen, J.D., Julien, C.: The ubiquitous digital file: A review of file management research. J. Assoc. Inf. Sci. Technol. 71(1), E1-E32 (2020)
5. Hicks, B.J., Dong, A., Palmer, R., McAlpine, H.C.: Organizing and managing personal electronic files: A mechanical engineer's perspective. ACM Trans. Inf. Syst. 26(4), 23:1-23:40 (2008). https://doi.org/10.1145/1402256.1402262
6. Ho, T.K.: Random decision forests. In: Proceedings of 3rd international conference on document analysis and recognition. vol. 1, pp. 278-282. IEEE (1995)
7. Jaccard, P.: Lois de distribution florale dans la zone alpine. Bull Soc Vaudoise Sci Nat 38, 69-130 (1902)
8. Ji, S., Pan, S., Cambria, E., Marttinen, P., Yu, P.S.: A survey on knowledge graphs: Representation, acquisition and applications. CoRR abs/2002.00388 (2020)
9. Lamparter, S., Ehrig, M., Tempich, C.: Knowledge extraction from classification schemas. In: On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE, OTM Conf. Int'l. Conf., Agia Napa, Cyprus, October 25-29, 2004, Proceedings, Part I. LNCS, vol. 3290, pp. 618-636. Springer (2004)
10. Magnini, B., Serafini, L., Speranza, M.: Making explicit the hidden semantics of hierarchical classifications. In: AI*IA 2003: Advances in AI, 8th Congress of the Italian Association for Artificial Intelligence, Pisa, Italy, September 23-26, 2003. Lecture Notes in Computer Science, vol. 2829, pp. 436-448. Springer (2003)
11. Samad, A., Qadir, M., Nawaz, I., Islam, M.A., Aleem, M.: A comprehensive survey of link prediction techniques for social network. EAI Endorsed Trans. Ind. Networks Intell. Syst. 7(23), e3 (2020). https://doi.org/10.4108/eai.13-7-2018.163988
12. Sauermann, L., Dengel, A., Van Elst, L., Lauer, A., Maus, H., Schwarz, S.: Personalization in the epos project. In: Proceedings of the Semantic Web Personalization Workshop at the ESWC Conference (2006)
13. Schröder, M., Jilek, C., Dengel, A.: Interactive concept mining on personal data - bootstrapping semantic services. CoRR abs/1903.05872 (2019)
14. Whitham, R., Cruickshank, L.: The function and future of the folder. Interact. Comput. 29(5), 629-647 (2017). https://doi.org/10.1093/iwc/iww042
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_tex/Initial_manuscript.tex
§ A HUMAN-IN-THE-LOOP APPROACH FOR PERSONAL KNOWLEDGE GRAPH CONSTRUCTION FROM FILE NAMES
Markus Schröder, Christian Jilek, and Andreas Dengel
${}^{1}$ Smart Data & Knowledge Services Dept., DFKI GmbH, Kaiserslautern, Germany
${}^{2}$ Computer Science Dept., TU Kaiserslautern, Germany
{markus.schroeder, christian.jilek, andreas.dengel}@dfki.de
Abstract. Knowledge workers' personal and work-related concepts (e.g. persons, projects, topics) are usually not sufficiently covered by knowledge graphs. Yet, handmade classification schemes, most prominently folder structures, already mention several of these concepts naturally in file names. Thus, such data could be a promising source for constructing personal knowledge graphs. However, this idea poses several challenges: file names are usually noisy non-grammatical text snippets, while folder structures do not clearly define how concepts relate to each other. To cope with this semantic gap, we include knowledge workers as humans-in-the-loop to guide the building process with their feedback. Our semi-automatic personal knowledge graph construction approach consists of four major stages: domain term extraction, ontology population, taxonomic and non-taxonomic relation learning. We conduct a case study with four expert interviews from different domains in an industrial scenario. Results indicate that file systems are promising sources and, combined with our approach, already yield useful personal knowledge graphs with moderate effort.
Keywords: Knowledge Graph Construction - Personal Knowledge Graph - Human-in-the-Loop - File System
§ 1 INTRODUCTION
Knowledge graphs (KGs) have become a popular technology to support knowledge workers in various applications (for a survey see [8]). Since such KGs are constructed from domain-specific document corpora, personal concepts of knowledge workers in these domains are usually not sufficiently covered. To fill this gap, there is the emerging concept of Personal Knowledge Graphs (PKGs), which focus on resources users are personally related to (also in their professional life). The population and maintenance of such graphs is still an open research question [1], especially when no knowledge has been modeled yet (cold start problem). Various sources in a user's personal information sphere may be worth considering to kick-start a population [12].
Fig. 1: A file system (left) with file names containing relevant words (green) and irrelevant words (red). They form a personal knowledge graph (right) with non-taxonomic and taxonomic relations. For readability, some edges are omitted.
When users self-organize diverse documents in daily business, they often manage them in some form of classification schema, most prominently in file systems [4]. Here, documents are hierarchically arranged and freely named according to aspects such as projects, organizations, persons, topics and task-related concepts. Such concepts are typically mentioned in file and folder names in order to let users guess their contents. Because file systems allow mostly free naming ${}^{3}$ , users tend to label files and folders with their own vocabulary, which can contain technical terms, made-up words or even puns [2]. Thus, we hypothesize that file names could be a promising source for constructing PKGs.
This idea poses several challenges due to the nature of the data source. The literature has already shown that users have a large variety of file naming strategies [5,3]. File names are usually short, ungrammatical (sometimes noisy) text snippets and contain differently ordered and concatenated keywords. These circumstances make it difficult to discover and extract relevant named entities from them. Besides labeling, users can also assemble files in hierarchically structured folders [14]. Yet, this "folder contains file" structure typically does not explicitly define how named entities relate to each other.
To give a visual example, Figure 1 depicts a small file system (left) and a possible personal knowledge graph (right). Because some keywords in the file names are too general (images) or have a technical meaning (Thumbs), they may be irrelevant for the user (underlined in red). Relevant keywords (green) become resources in the PKG, while a foaf:topic property keeps track of the file resource in which each of them is mentioned (only one is shown for readability). Named individuals (Zenphase, Parker, Mercurtainment) are assigned to their classes (Project, Person, Organization) and are connected meaningfully (:hasProject, :worksFor). The remaining ones are rather abstract ideas and thus become skos:Concepts according to the Simple Knowledge Organization System (SKOS). A taxonomy tree is formed (top-right side) by adding broader concepts (:DocumentType, :DocumentState). Since WIP is an abbreviation, its skos:prefLabel contains the long form. Synonyms and other spellings are captured in skos:hiddenLabels: for the user, the term Drawing is synonymous with treeDiagram, and docs in file names indicates the concept Document. For lack of space, labels and some other properties are not visualized.
${}^{3}$ Restricted only by illegal characters and maximum file name length.
In this paper, we present a semi-automatic personal knowledge graph construction approach which is able to build such a graph from a classification schema, in this case a file system, and expert feedback. A graphical user interface (GUI) assists a knowledge engineer (KE) in performing several tasks during construction: the discovery of concepts in file names, ontology population of concepts and the learning of taxonomic as well as non-taxonomic relations. In an interview setting, an expert can describe their personal view of their files to the KE, who translates the explanations into suitable knowledge graph statements using the GUI. To reduce the manual effort for the KE, we make use of machine learning models which learn from feedback and predict new statements during usage. This proposed method yields several research questions (RQs), for which first answers are reported in this work.
* RQ1: Are file systems promising sources for knowledge graph construction?
* RQ2: Can our system suggest helpful statements during usage?
* RQ3: How efficient is the construction in our approach?
The rest of this paper is structured as follows: related approaches are covered in the next section (Sec. 2). This is followed by the presentation of our approach in Section 3 and a prototypical implementation in Section 3.6. The above research questions are then addressed in a case study with expert interviews in Section 4. Section 5 closes the paper with a conclusion and future work.
§ 2 RELATED WORK
To personally assist knowledge workers in their tasks, knowledge services benefit from personal information models about users [12]. For building such a model, personal concepts can be acquired from various texts in a user's personal information sphere [13]. Thus, folder structures could be useful for this purpose, which is also investigated by other related works.
Magnini et al. [10] also consider hierarchical classifications and analyze the implicit knowledge hidden in the labeled nodes. They make this knowledge explicit using logic formulas expressed in description logic and word senses discovered and disambiguated in labels. Contextual interpretations such as implicit disjunctions and negations are performed by exploiting the hierarchy. In contrast to our work, their goal is the definition of an ontology with classes and properties (TBox) by relying on external language repositories containing word senses. For us, the usage of such resources is limited, since word senses of personal concepts (like projects) are usually not contained. Moreover, they present a fully automatic approach without integrating domain experts in cases where labels do not match any dictionary entry.
More closely related is the work on knowledge extraction from classification schemes by Lamparter et al. [9]. Following the same motivation, the authors would like to acquire explicit semantic descriptions from legacy information such as local folder structures. To achieve this, their processing pipeline includes the identification of concept candidates, word sense disambiguation, taxonomy construction and the identification of non-taxonomic relations. They distinguish ontology and instance layers by checking with dictionaries whether terms are rather general (concepts) or specific (instances). In our approach, we only consider instances, but classify general ideas as skos:Concepts (e.g. Diagram). They also build a taxonomy by utilizing hyponym and hyperonym information. In the case of non-taxonomic relations, the work reuses domain-specific ontologies, while the classification hierarchy as well as its labels are consulted to guess appropriate relations. Our procedure is similar, but additionally considers user feedback to train machine learning models in order to predict such relations.
In conclusion, to the best of our knowledge, there is no approach like ours that constructs personal knowledge graphs from folder structures and at the same time includes experts with their feedback.
§ 3 APPROACH
Fig. 2: Components of our approach from left to right.
Our approach enables knowledge engineers (KEs) to construct personal knowledge graphs from a classification schema, for example, a folder structure as shown in Figure 1. In this process, we support them in four tasks which are depicted in Figure 2 and explained in individual sections: Domain Terminology Extraction (Section 3.2), Management of Named Individuals (Section 3.3), Taxonomy Creation (Section 3.4) and Non-Taxonomic Relation Learning (Section 3.5). During modeling with a dedicated GUI (Section 3.6), the KE is assisted by an artificial intelligence (AI) system which proactively makes statements on its own. For ontology population and non-taxonomic relations, machine learning models predict statements. To correctly store and distinguish these assertions, we first designed an appropriate data model.
§ 3.1 KNOWLEDGE GRAPH MODEL
Our knowledge graph model is an RDF graph consisting of statements in the form of subject-predicate-object triples. However, in our scenario, we have to store additional feedback information for each statement. We consider exactly two agents in our system who are able to give feedback about statements: a knowledge engineer (KE) and an artificial intelligence (AI). Both contribute to the same personal knowledge graph with assertions which can be true, but also false (negative statements). To keep track of the provenance, we store the following metadata for each statement: (a) which agent stated it, (b) the date and time it was stated, (c) how the statement is rated (true, false or undecided) and (d) how confident the agent is (a real value between 0 and 1). Additionally, we use foaf:topic-statements to state that a classification schema node (subject) mentions a certain knowledge graph resource (object) (see an example in Figure 1). Regarding the rating, since natural intelligence is usually more reliable than an artificial one, the KE always outvotes suggestions from the AI. Yet, assertions of the AI are assumed to be true as long as the KE does not disagree.
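As an illustration of this feedback model, the following sketch stores the per-statement metadata and resolves the effective rating so that KE feedback outvotes the AI; class and function names are our own hypothetical choices, not taken from the paper's implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Assertion:
    triple: tuple      # (subject, predicate, object)
    agent: str         # "KE" or "AI"
    stated_at: datetime
    rating: str        # "true", "false" or "undecided"
    confidence: float  # real value between 0 and 1

def effective_rating(assertions):
    """The KE always outvotes the AI: if any KE feedback exists, the most
    recent KE rating wins; otherwise the most recent AI rating is assumed."""
    ke = [a for a in assertions if a.agent == "KE"]
    pool = ke or [a for a in assertions if a.agent == "AI"]
    if not pool:
        return "undecided"
    return max(pool, key=lambda a: a.stated_at).rating
```

For example, an AI-suggested :worksFor triple stays "true" until the KE records a "false" rating for it, at which point the KE's rating takes precedence.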
§ 3.2 DOMAIN TERMINOLOGY EXTRACTION
Our extraction method uses heuristics to make a first guess at relevant terms in the user's domain. Since word boundaries are often not evident in rather messy file names, we tokenize their basenames (without considering file extensions) by character type and camel case. In addition, the acquired tokens are rated based on some simple rules: stop words and tokens containing a single letter or only symbols are negatively rated. This also applies to tokens which only contain digits, unless they look like years (e.g. $n \in [1980, 2030]$ ). Applying these rules, the following example is tokenized (indicated by a pipe symbol '|') and rated (indicated by color) in the following way: WIP|_____|for|2007|-|tree|Diagram|!|(|28|)|A|.jpg. Thus, the rules let us assume that the tokens WIP, 2007, tree and Diagram are relevant. In the case of multi-word terms, the KE is able to merge separated tokens into a single term again, as for the latter two (i.e. Tree Diagram).
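A simplified sketch of such tokenization and rating heuristics is shown below; the regular expression and the stop-word list are illustrative stand-ins, not the exact rules of our implementation:

```python
import re

STOP_WORDS = {"for", "the", "and"}  # illustrative subset

def tokenize(basename):
    """Split a file basename on character-type changes and camel case,
    e.g. 'treeDiagram' -> 'tree', 'Diagram'."""
    return re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+|[^\w\s]+|_+",
                      basename)

def relevant(token):
    """Heuristic rating: reject stop words, single letters, pure symbols
    and digit-only tokens that do not look like years."""
    if token.lower() in STOP_WORDS or len(token) == 1:
        return False
    if not any(c.isalnum() for c in token):
        return False
    if token.isdigit():
        return 1980 <= int(token) <= 2030
    return True
```

Applied to the basename from the running example, the rules keep WIP, 2007, tree and Diagram while discarding separators, the stop word and the non-year number.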
After adjusting the rating according to feedback from a domain expert, other occurrences of accepted terms are automatically searched for using a regular expression, since they may occur in a classification scheme more than once. If the term contains multiple words, we also search for all possible word concatenations using the separators "-" (minus), "_" (underscore) and " " (space), and also no separator at all. To give an example, for the term treeDiagram our system also checks the variations tree-Diagram, tree_Diagram and tree Diagram. Finally, the collected term variations are associated with a named individual (i.e. owl:NamedIndividual according to OWL).
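Such a variation search can be sketched with a single regular expression that allows one optional separator between the term's words; the helper name is a hypothetical choice for illustration:

```python
import re

def variation_pattern(term):
    """Build a regex matching all concatenations of a multi-word term
    with '-', '_', ' ' or no separator at all (case-insensitive)."""
    words = re.findall(r"[A-Z][a-z]*|[a-z]+|\d+", term)
    return re.compile("[-_ ]?".join(re.escape(w) for w in words),
                      re.IGNORECASE)
```

For the term treeDiagram, the compiled pattern finds occurrences like tree-Diagram, tree_Diagram, tree Diagram and treediagram in other file names.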
§ 3.3 MANAGEMENT OF NAMED INDIVIDUALS
After retrieving all found term variations $T$ , we have to decide if they (a) resemble an already existing named individual or (b) define a new one. Regarding the first case, each newly discovered term may be a variation that refers to an already created named individual. Thus, we calculate the Jaccard similarity coefficient [7] between the terms $T$ and the candidates' labels $L$ . The named individual with the highest overlap between its labels and the given terms is picked. If we cannot find such a resource above a sufficient similarity threshold, a new one is created. The longest term is used to give the resource a preferred label (skos:prefLabel) after some conversions are performed: German umlaut spellings are corrected (e.g. "ae" $\rightarrow$ "ä"), underscores are replaced with spaces, if available a lemmatized version is used (diagrams $\rightarrow$ diagram) and proper case is applied (Tree Diagram). The remaining terms form the named individual's synonymous and differently spelled labels (skos:hiddenLabel). In both cases, we keep track of the file resources in which the named individuals are mentioned by using a foaf:topic-relation.
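The linking step can be sketched as follows, with the Jaccard coefficient computed over lower-cased label sets; the threshold value and all names are illustrative assumptions, not the exact parameters of our implementation:

```python
def jaccard(a, b):
    """Jaccard similarity coefficient between two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def link_or_create(terms, individuals, threshold=0.5):
    """Pick the existing named individual whose labels overlap most with the
    found term variations; below the threshold, None signals that a new
    individual should be created."""
    best, score = None, 0.0
    for individual, labels in individuals.items():
        s = jaccard({t.lower() for t in terms},
                    {l.lower() for l in labels})
        if s > score:
            best, score = individual, s
    return best if score >= threshold else None
```

A newly found variation set either resolves to an existing resource or, when no candidate passes the threshold, triggers the creation of a fresh named individual with a preferred label.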
Unification. If two or more named individuals have the same meaning, we can unify them into one resource. This is done by correctly substituting URIs and at the same time removing the source triples. The AI automatically detects potential individuals with the same meaning by looking at their labels and applying some rules: it checks hidden labels for overlap and for prefix or postfix dependency, while preferred labels are compared with the Levenshtein distance and token-based equality. For example, for the following label pairs our procedure would suggest that their individuals are equal: ("Peter Parker", "Parker Peter"); ("Tree Diagram", "Diagram") and ("diagram", "diagramm").
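The unification rules might be approximated as follows; the edit-distance cutoff is an assumed value, and note that the real system applies the prefix/postfix check to hidden labels rather than preferred ones:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def maybe_same(label_a: str, label_b: str) -> bool:
    a, b = label_a.lower(), label_b.lower()
    if frozenset(a.split()) == frozenset(b.split()):
        return True                      # token-based equality (word reorderings)
    if a.startswith(b) or b.startswith(a) or a.endswith(b) or b.endswith(a):
        return True                      # prefix/postfix dependency
    return levenshtein(a, b) <= 2        # small edit distance (assumed cutoff)
```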
Ontology Population. The KE manually creates ontology classes and types named individuals with them. To support the KE in this assignment, a random forest model [6] is trained with positive examples from feedback to predict classes for individuals without a type. In order to acquire training features, we follow a gazetteer-based embedding technique by looking up words from several gazetteer lists in the preferred labels of named individuals. Remaining characters are counted per character class, such as spaces, quotes and digits. The coverage proportions of words and characters in the label serve as the final feature vector. To give some examples, "Tree Diagram 27" receives the vector ${v}_{1} = \left( {\text{ English Noun } = {0.73},\text{ Space } = {0.13},\text{ Digit } = {0.13}}\right)$ , while "WIP" has ${v}_{2} =$ (Uppercase Letter $= {1.0}$ ). Having such feature vectors, the random forest model is able to learn decision trees which predict the same type for named individuals whose preferred labels are very similar in content. For instance, since the individual Tree Diagram 27 is assigned to skos:Concept and another individual Diagram 3 has a similar feature vector, our model predicts the same class for it.
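A sketch of the gazetteer-based feature extraction (the gazetteer content and class names are illustrative; the resulting vectors would feed the random forest, which is omitted here):

```python
GAZETTEERS = {"English Noun": {"tree", "diagram", "overview"}}  # illustrative lists

def feature_vector(label: str) -> dict[str, float]:
    """Per-character coverage proportions of a preferred label: gazetteer
    words are attributed as whole words, remaining characters per class."""
    counts: dict[str, int] = {}
    for word in label.split():
        matched = next((name for name, words in GAZETTEERS.items()
                        if word.lower() in words), None)
        if matched:                       # gazetteer word: count all its characters
            counts[matched] = counts.get(matched, 0) + len(word)
        else:                             # otherwise count characters per class
            for ch in word:
                cls = ("Digit" if ch.isdigit() else
                       "Uppercase Letter" if ch.isupper() else
                       "Lowercase Letter" if ch.islower() else "Symbol")
                counts[cls] = counts.get(cls, 0) + 1
    if " " in label:
        counts["Space"] = label.count(" ")
    return {k: round(v / len(label), 2) for k, v in counts.items()}
```

This reproduces the two example vectors from the text: "Tree Diagram 27" yields (English Noun = 0.73, Space = 0.13, Digit = 0.13) and "WIP" yields (Uppercase Letter = 1.0).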
§ 3.4 TAXONOMY CREATION
Our intended taxonomy uses broader and narrower relations to structure concepts (skos:Concept) found in file names according to the Simple Knowledge Organization System (SKOS). Since we see these concepts as leaves in a taxonomy tree, our motivation is to find broader concepts for them. For this, our approach utilizes a language resource of synsets and hypernym relations. The concepts in the PKG are mapped via their labels to synsets of the lexical-semantic net. By traversing hypernym relations for all found synsets, two or more of them may share the same ancestor along their hypernym paths. If the average distance from the synsets to the ancestor is below a configurable threshold, it is suggested as a broader concept for them. This constraint avoids the recommendation of too general concepts (e.g. near the root node). To give an example, given the hypernym paths diagram $\rightarrow$ depiction and timetable $\rightarrow$ overview $\rightarrow$ depiction, our procedure would suggest the broader concept depiction for both leaves. Of course, the KE may at any time create concepts manually and link them accordingly. Besides such taxonomic relations, our system also considers non-taxonomic ones between instances.
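The broader-concept suggestion over hypernym paths can be illustrated as follows, with paths given as plain lists and an assumed threshold value:

```python
from collections import defaultdict

def suggest_broader(hypernym_paths: dict[str, list[str]],
                    max_avg_dist: float = 2.0) -> dict[str, list[str]]:
    """hypernym_paths: leaf concept -> ancestors along its hypernym path,
    nearest first. Suggest ancestors shared by two or more leaves whose
    average distance stays below the (configurable, assumed) threshold."""
    dist: dict[str, dict[str, int]] = defaultdict(dict)
    for leaf, path in hypernym_paths.items():
        for d, ancestor in enumerate(path, 1):
            dist[ancestor][leaf] = d
    return {anc: sorted(leaves)
            for anc, leaves in dist.items()
            if len(leaves) >= 2 and sum(leaves.values()) / len(leaves) <= max_avg_dist}
```

On the example from the text (diagram → depiction, timetable → overview → depiction), depiction is suggested as the shared broader concept with an average distance of 1.5.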
§ 3.5 NON-TAXONOMIC RELATION LEARNING
To predict non-taxonomic relations, we perform link prediction by training a model on positive examples from feedback and by exploiting the structure of the classification schema (CS). Our idea is that the same non-taxonomic predicate could be suggested between other resources (subjects and objects) which have a similar neighborhood in the CS. For this, we only consider class instances which are named individuals that have been assigned to an ontology class. Since instances are annotated on files via a foaf:topic relation, we know in which places of the CS they are mentioned. This annotated CS needs to be transformed into an undirected graph of connected instances to perform link prediction on it. We make an edge from an instance $i$ mentioned in a given node to another instance $j$ , whenever $j$ is mentioned in (a) the node itself, (b) the node's parent, (c) one of the node's children or (d) one of the node's siblings (i.e. children of the parent). In other words, instances are connected in the graph if they are closely mentioned in the CS. With the given graph, we are able to calculate local similarity measures for links (for a survey see [11, Table 1]). Values of the calculated measures form feature vectors in a training set. The test set is acquired by iterating over all possible combinations of instances and properties, using their domain and range information as a filter. A triple in the test set is considered promising when the Euclidean distance between its test vector and a training vector is small (below a given threshold).
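The graph construction over the annotated CS, together with one simple local similarity measure, might look like this sketch (the data layout is assumed for illustration):

```python
def build_instance_graph(mentions: dict[str, set[str]],
                         parent: dict[str, str]) -> dict[str, set[str]]:
    """mentions: CS node -> instances annotated on it (via foaf:topic);
    parent: CS node -> its parent node. Instances are connected when they
    are mentioned in (a) the same node, (b) the parent, (c) a child or
    (d) a sibling node."""
    children: dict[str, set[str]] = {}
    for node, p in parent.items():
        children.setdefault(p, set()).add(node)
    graph: dict[str, set[str]] = {}

    def connect(xs, ys):
        for i in xs:
            for j in ys:
                if i != j:
                    graph.setdefault(i, set()).add(j)
                    graph.setdefault(j, set()).add(i)

    for node, insts in mentions.items():
        connect(insts, insts)                                  # (a) same node
        p = parent.get(node)
        if p is not None:
            connect(insts, mentions.get(p, set()))             # (b) parent
            for sibling in children.get(p, set()) - {node}:    # (d) siblings
                connect(insts, mentions.get(sibling, set()))
        for child in children.get(node, set()):                # (c) children
            connect(insts, mentions.get(child, set()))
    return graph

def common_neighbors(graph: dict[str, set[str]], i: str, j: str) -> int:
    """One local similarity measure usable as a link-prediction feature."""
    return len(graph.get(i, set()) & graph.get(j, set()))
```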
§ 3.6 PROTOTYPICAL IMPLEMENTATION
To test our approach in a case study, we implemented a prototype. A demo video ${}^{4}$ and its source code ${}^{5}$ are publicly available. To assist the KE in entering feedback and constructing the PKG, a graphical user interface (GUI) in the form of a web application is provided (see Figure 3). Throughout the interface, we make heavy use of thumbs-up and thumbs-down buttons as well as green and red colored elements to visualize positive and negative feedback (true and false assertions). The three-column layout presents tabs for individual components which give dedicated views for the tasks we have discussed.
A typical Explorer view (top left) lists the files contained in the currently browsed folder (/User/Downloads). The view presents for each file (from top to bottom) its file name, rated terms from the file name and annotated named individuals. To distinguish individuals from terms, the well-known hashtag symbol is added to their preferred labels. In a separate Named Individuals view in the top middle, we itemize them together with their type. Two side-by-side views enable a Drag&Drop mechanism on individuals to let the KE define triples with a selected predicate (drop-down list in the middle). On the top right, classes and properties can be manually created, renamed and rated in an Ontology view. For each property, domain and range classes can be defined too. In separate tabs (bottom left) our GUI also presents suggestions for Unification, Typing, Taxonomic and Non-Taxonomic Relations (the screenshot shows an opened Typing tab). A list of proposals from the AI can be reviewed by the KE, who can accept or reject them individually or in bulk. Decisions are shown below and can always be undone. In a detail view (bottom middle), the KE is able to change a selected individual's preferred label, type, hidden labels and file attachment. A Status view (bottom right) visualizes the current PKG construction state in four sections: the progress in tagging, typing, the taxonomy tree and the non-taxonomic graph, as well as an overall assessment score. These estimations give the KE hints where more feedback from the expert is necessary.
${}^{4}$ https://www.dfki.uni-kl.de/~mschroeder/demo/kecs
${}^{5}$ https://github.com/mschroeder-github/kecs
Fig. 3: Our graphical user interface in a three-column layout with many feedback possibilities and components (top). Dedicated components are provided to perform certain tasks (bottom).
§ 4 CASE STUDY: EXPERT INTERVIEWS
A case study was conducted with expert interviews in which personal knowledge graphs (PKGs) were built with their feedback. The setup for these interviews is covered in Section 4.1. This is followed by a detailed description of all collected results (Section 4.2) which are then discussed with regard to our stated research questions (Section 4.3).
Table 1: The four datasets with their metadata, used in the interviews with four experts.

| Dataset | Expert | Branches | Leaves | Max. Depth | Avg. Depth | Avg. Name Length |
|---|---|---|---|---|---|---|
| SS1 | E1 | 103 | 198 | 3 | 2.98 ± 0.16 | 8.84 ± 9.86 |
| FS1 | E2 | 25,988 | 95,760 | 17 | 9.49 ± 1.93 | 23.30 ± 16.88 |
| FS2 | E3 | 8,939 | 64,571 | 17 | 9.18 ± 1.68 | 32.43 ± 16.77 |
| FS3 | E4 | 54,933 | 325,476 | 22 | 10.08 ± 2.22 | 24.24 ± 14.57 |

§ 4.1 EXPERT INTERVIEW SETUP
Since our institute has industry projects with several departments of a large power supply company, we had the great opportunity to get in contact with four individual experts from four departments (guideline management, property management, license management and accounting). Three of them work separately on individual shared drive file systems (FS), while one primarily manages spreadsheet (SS) data. Before the interviews, we received dumps of their data which are listed in Table 1. For each dataset an expert (E) is assigned and meta data about the asset is presented.
Since spreadsheets may also contain work-related concepts but are not a form of classification schema, we had to convert the SS1 dataset into a tree structure in the following way: table names become root folders, while column names are added as their subfolders. In the subfolders, we add files with distinct names taken from the column's rather short cell values. This way, potential work-related concepts could be contained in this generated classification schema.
Our system automatically captures several data points during usage. To reproduce the construction process, we keep a history of all stated assertions with their metadata as described in Section 3.1. By observing GUI inputs, including mouse clicks, Drag&Drop operations and certain keystrokes, we quantify the KE's effort with the system. At a fixed interval (every 10 inputs), snapshots of the construction metrics (Status view) are saved to record the PKG's evolution over time. Additionally, memory consumption and time performance of certain system modules are monitored.
Each one-hour interview between the knowledge engineer (KE) and an expert had the same setting. One fixed author of this paper took over the role of the KE and met the expert in a virtual telephone conference. The KE shared the screen and presented the GUI of our system (see Section 3.6), where the expert's data was already loaded. After a brief introduction, the KE started to ask questions about files and folders by traversing the file system. The explanations of the participant enabled the KE to model the expert's personal knowledge as discussed in our approach (Section 3). Whenever the AI made predictions, the expert was asked whether they were correct and feedback was entered accordingly. Every 10 minutes, the KE reviewed the current construction state by opening the Status view and shifted the focus to parts which needed more attention. After about 50 minutes the session ended and the remaining time was used to let the expert complete a questionnaire about the data source and the modeled knowledge graph. In the next section, we present the questionnaire and the results in detail, as well as the data which was logged by our prototype during the interviews.
Table 2: The seven questions from the questionnaire with the answers of the four experts and their average values.

| Question | E1 | E2 | E3 | E4 | Avg. & SD |
|---|---|---|---|---|---|
| Q1: How many years have you been working with the data? | 13 | 7 | 4 | 0 | 6 ± 5.48 |
| Q2: How much do words in the file names reflect your language use (vocabulary) at work (scale: 1-10)? | 9 | 8 | 9 | 9 | 8.75 ± 0.50 |
| Q3: Estimate how much your language use (vocabulary) at work is represented by the established tags (percentage). | 50 | 15 | 10 | 10 | 21.25 ± 19.3 |
| Q4: The established tags meaningfully reflect the language use (vocabulary) at your work (scale: 1-7). | 7 | 6 | 4 | 6 | 5.75 ± 1.26 |
| Q5: The established tags are assigned to meaningful classes (scale: 1-7). | 6 | 7 | 6 | 7 | 6.50 ± 0.58 |
| Q6: The established tags are meaningfully structured in a taxonomy (scale: 1-7). | 7 | 6 | 5 | 4 | 5.50 ± 1.29 |
| Q7: The established tags meaningfully relate to each other (scale: 1-7). | 5 | 7 | 6 | 7 | 6.25 ± 0.96 |

§ 4.2 INTERVIEW RESULTS
The questionnaire at the interview's end consists of seven questions (Q) which are presented in Table 2 together with the experts' answers (E), their average value and standard deviation (Avg. & SD). We stated the first question (Q1) to check how familiar the participants are with the data. The second question (Q2) was asked to figure out whether the experts think that the given data actually contains work-related words. From the third question on, we are interested in the experts' opinions about the final result that was modeled during the interview. While Q3 tries to give a rough estimation of the PKG's recall as a percentage, Q4 gives an approximate measurement of its precision with regard to the created named individuals ${}^{6}$ in the PKG. A seven-point Likert scale ranging from 1 ("fully disagree") to 7 ("fully agree") is used for our opinion-based questions. The remaining questions aim at estimating the meaningfulness of the populated ontology (Q5) and of the taxonomic (Q6) as well as non-taxonomic relations (Q7).
${}^{6}$ The questions refer to "established tags", since we presented tags in the GUI for the named individuals in the personal knowledge graph (PKG).
Besides qualitative data, we also captured quantitative data points during the interviews, which are presented in Table 3. Measurements are listed per row, while dataset-expert pairs are ordered in columns. After the number of resources in the PKG (#Resources) and the counts regarding the knowledge engineer's (KE) effort in the GUI, we list the number of true and false assertions ${}^{7}$ made by the KE and the AI in the individual construction phases. Furthermore, we calculate the AI's accuracy by counting how often the expert agrees (true positive and true negative) with reviewed predictions. The section about Management of Named Individuals is further split into Unification and Ontology Population. While the management includes assertions about types, preferred/hidden labels and foaf:topic relations, the latter two only consider owl:sameAs and ontology related assertions. Due to a software error in the taxonomy module during the first two interviews, unfortunately, no broader concepts could be predicted. At the table's bottom, all assertions by the KE (whether true or false) and all inputs (clicks, enter keys, Drag&Drop operations) are aggregated to calculate an assertions-per-inputs ratio. The Management of Named Individuals does not have an accuracy value (N/A), since each term automatically turns into a named individual and no suggestions for preferred and hidden labels are made.
Since we continuously recorded measurements, we are able to examine the evolution of the PKG with respect to the inputs performed in the GUI. The development of the taxonomic and non-taxonomic part of the PKG is presented through several plots in Figure 4. We consider named individuals of type skos:Concept as taxonomy concepts (Figure 4a) and the remaining typed ones as non-taxonomic instances (Figure 4d). By looking at the number of graph components (Fig. 4b and 4e), one gets an idea of the connectedness over time. In addition, Figure 4c plots the number of concepts which are connected to at least one broader concept. Similarly, Figure 4f shows the average diameter (the greatest distance between any pair of instances) of non-taxonomic components to visualize the closeness among them.
The next section will discuss the results with regard to our research questions.
§ 4.3 DISCUSSION
Since file names are rather unusual sources to build PKGs from, we asked at the beginning of the paper the following question (RQ1): Are file systems promising sources for knowledge graph construction? Our experts agree that the words they saw in the file names reflect their language use at work, with an average value of 8.75 out of 10 (Q2 in Table 2). Having a higher-level management background, expert E4 did not come in touch with file system FS3 in daily work (see Q1 in Table 2), but was still able to recognize and explain the terms. Answers to questions Q4 to Q7 in our questionnaire (Table 2) indicate that we modeled all individual PKGs in a way that is meaningful to the experts. For these reasons, we conclude that file systems are promising sources for building PKGs.
${}^{7}$ False assertions by AI mean that it later rejected initially true ones because of human feedback.
Table 3: Quantity of true and false assertions stated by the knowledge engineer (KE) and the AI for individual construction tasks. Additionally, the KE's GUI effort and the AI's accuracy are given.

| Measurement | SS1 (E1) | FS1 (E2) | FS2 (E3) | FS3 (E4) |
|---|---|---|---|---|
| #Resources | 88 | 50 | 39 | 32 |
| KE Clicks | 599 | 602 | 359 | 356 |
| KE Enter-Key | 60 | 56 | 30 | 47 |
| KE Drag&Drop | 26 | 34 | 21 | 18 |
| **Domain Terminology Extraction (Section 3.2)** | | | | |
| KE True | 82 | 50 | 33 | 26 |
| KE False | 48 | 44 | 14 | 72 |
| AI True | 400 | 270168 | 242149 | 948405 |
| AI False | 286 | 220285 | 106573 | 617366 |
| AI Accuracy | 0.67 = 45/67 | 0.72 = 59/82 | 0.83 = 35/42 | 0.31 = 25/80 |
| **Management of Named Individuals\* (Section 3.3)** | | | | |
| KE True | 102 | 68 | 39 | 58 |
| KE False | 30 | 24 | 15 | 25 |
| AI True | 462 | 32161 | 8223 | 37159 |
| AI False | 4 | 1 | 23 | 155 |
| AI Accuracy | N/A | N/A | N/A | N/A |
| **Unification\* (Section 3.3)** | | | | |
| KE True | 10 | 2 | 2 | 0 |
| KE False | 6 | 18 | 12 | 4 |
| AI True | 8 | 10 | 7 | 2 |
| AI False | 0 | 0 | 0 | 2 |
| AI Accuracy | 0.57 = 4/7 | 0.10 = 1/10 | 0.14 = 1/7 | 0.00 = 0/2 |
| **Ontology Population\* (Section 3.3)** | | | | |
| KE True | 105 | 78 | 61 | 55 |
| KE False | 73 | 29 | 22 | 19 |
| AI True | 134 | 102 | 92 | 85 |
| AI False | 1 | 8 | 6 | 2 |
| AI Accuracy | 0.23 = 18/78 | 0.65 = 30/46 | 0.66 = 23/35 | 0.48 = 12/25 |
| **Taxonomy Creation (Section 3.4)** | | | | |
| KE True | 21 | 19 | 14 | 12 |
| KE False | 0 | 0 | 4 | 8 |
| AI True | N/A | N/A | 9 | 10 |
| AI False | N/A | N/A | 0 | 0 |
| AI Accuracy | N/A | N/A | 0.56 = 5/9 | 0.20 = 2/10 |
| **Non-Taxonomic Relation Learning (Section 3.5)** | | | | |
| KE True | 5 | 23 | 33 | 7 |
| KE False | 0 | 42 | 20 | 0 |
| AI True | 0 | 52 | 42 | 0 |
| AI False | 4 | 11 | 5 | 0 |
| AI Accuracy | 0/0 | 0.19 = 10/52 | 0.52 = 22/42 | 0/0 |
| **Aggregated** | | | | |
| All KE Assertions | 482 | 397 | 269 | 286 |
| All KE Inputs | 685 | 692 | 410 | 421 |
| KE Assertions/Inputs | 0.70 | 0.57 | 0.66 | 0.68 |

Fig. 4: Plots about the taxonomic and non-taxonomic parts of the PKG with respect to the number of inputs made in the GUI. For each dataset a symbol is assigned to recognize them: SS1 ( $\square$ ), FS1 ( $\circ$ ), FS2 ( $\times$ ) and FS3 ( $\bigtriangleup$ ).
Because a completely manual construction can be time-consuming and AI could thus help in this process, we asked the next question (RQ2): Can our system suggest helpful statements during usage? In our approach, we consider the application of AI in several tasks, ranging from (a) the initial selection of domain relevant terms, over (b) unification suggestions, (c) the recommendation of class memberships and (d) the suggestion of broader concepts, to (e) the prediction of non-taxonomic relations. How well they performed can be seen in Table 3 in the form of accuracy values which capture how often an expert agreed to suggestions stated by the AI. (a) Since we do not consider multi-word terms in the extraction of domain relevant words, such terms had to be corrected frequently, which leads to a drop in performance. (b) Our unification rules tend to suggest more false positives, leading to low accuracy scores, since they are designed with high recall in mind. (c) The prediction of class assignments shows mediocre results, since only preferred labels in combination with gazetteer lists are used to extract features. (d) For the taxonomy creation, our language resource GermaNet tended to suggest too general concepts, which is why they were often considered unsuitable by our experts. (e) Regarding non-taxonomic relation learning, far too few examples were provided in the case of SS1 and FS3 to be able to predict similar relations. All in all, there is a tendency that in certain cases helpful statements can be automatically suggested, but more research has to be done to further improve the AI.
Concerned about the approach's practicability, we stated the third question (RQ3): How efficient is the construction in our approach? Effort measurements in Table 3 indicate that one input operation results in 0.6 to 0.7 assertions, thus already two inputs lead to a true or false statement. We assume that a value below 1.0 comes from non-negligible GUI navigation and search efforts. Still, many clickable (bulk) feedback buttons combined with suggestions from the AI seem to yield this positive outcome. Especially the Drag&Drop feature turned out to be a simple and fast way to relate resources to each other. Figure 4 visualizes how taxonomies and graphs evolve over the entered inputs ${}^{8}$ . In comparison, the maintenance of taxonomies seems to require less effort than that of the non-taxonomic graphs, probably because only skos:Concepts and the skos:broader relation need to be considered. The high diameter values of non-taxonomic graphs further indicate that resources in subgraphs are rather loosely connected. In summary, with moderately spent effort our KE was able to create, accept and also reject many assertions that eventually formed a meaningful personal knowledge graph. Still, efficiency could be further improved by better supporting the construction of the graph's non-taxonomic part.
§ 5 CONCLUSION AND OUTLOOK
In this paper, we investigated the construction of personal knowledge graphs from file names with a human-in-the-loop approach. A case study with four independent expert interviews showed that the file system is a promising source, while suggestions by AI help to build such graphs with moderate effort.
Since we could not examine all of the aspects in detail, future work may further investigate these challenges. For instance, there is potential for improvement in the machine learning models, especially for the prediction of non-taxonomic relations. More sophisticated solutions could be applied in the extraction of domain terminology, including disambiguation and the discovery of multi-word terms.
Acknowledgements This work was funded by the BMBF project SensAI (grant no. 01IW20007).
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Declarative Description of Knowledge Graphs Construction Automation: Status & Challenges
David Chaves-Fraga ${}^{1,2}$ , Anastasia Dimou ${}^{1}$
${}^{1}$ KU Leuven, Department of Computer Science, Sint-Katelijne-Waver, Belgium
${}^{2}$ Universidad Politécnica de Madrid, Campus de Montegancedo, Boadilla del Monte, Spain
## Abstract
Nowadays, Knowledge Graphs (KG) are among the most powerful mechanisms to represent knowledge and integrate data from multiple domains. However, most of the available data sources are still described in heterogeneous data structures, schemas, and formats. The conversion of these sources into the desirable KG requires manual and time-consuming tasks, such as programming translation scripts, defining declarative mapping rules, etc. In this vision paper, we analyze the trends regarding the automation of KG construction as well as the use of mapping languages for the same process, and align the two by analyzing their tasks and a few exemplary tools. Our aim is not to present a complete study but to investigate whether there is potential in this direction and, if so, to discuss what challenges we need to address to guarantee the maintainability, explainability, and reproducibility of KG construction.
## Keywords
Knowledge Graphs, Automation, Explainable AI, Declarative Rules
## 1. Introduction
Much work on knowledge graph (KG) construction focuses on defining mapping languages to declaratively describe the transformation process, and on optimizing the execution of such declarative rules. The mapping languages rely either on dedicated syntaxes, such as the family of languages around the W3C recommended R2RML ${}^{1}$ (e.g., RML [1] or R2RML-F [2]), or on re-purposing existing specifications, such as query languages like the W3C recommended SPARQL ${}^{2}$ (e.g., SPARQL-Generate [3] or SPARQL-Anything [4]), or constraint languages like ShEx ${}^{3}$ (e.g., ShExML [5,6]).
Despite the plethora of mapping languages and the increasing number of optimizations for the execution of the declarative rules, these rules are still defined through a manual and time-consuming process, which negatively affects their adoption. Different solutions were proposed to automate the definition of the mapping rules that describe how a KG should be constructed. On the one hand, MIRROR [7], D2RQ [8] and Ontop [9] follow a similar approach, extracting a target ontology and the mapping correspondences from the RDB schema. On the other hand, AutoMap4OBDA [10] and BootOX [11] consider an input ontology and generate actual R2RML mappings from the RDB. However, these solutions focus on declarative approaches only for relational databases, while recent solutions investigate the non-declarative automation of KG construction.
---
KGCW'22: International Workshop on Knowledge Graph Construction, May 30, 2022, Crete, Greece
✉ david.chaves@upm.es (D. Chaves-Fraga); anastasia.dimou@kuleuven.be (A. Dimou)
ORCID: 0000-0003-3236-2789 (D. Chaves-Fraga); 0000-0003-2138-7972 (A. Dimou)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
${}^{1}$ http://www.w3.org/TR/r2rml/
${}^{2}$ https://www.w3.org/TR/sparql11-overview/
${}^{3}$ https://shex.io/
---
Beyond relational databases, the recent SemTab challenge ${}^{4}$ presents a set of tabular datasets [12] with the aim of matching them automatically to external KGs, such as DBpedia and Wikidata. The proposed solutions [13, 14, 15] address the problem using different techniques, such as heuristic rules, fuzzy searching over the KGs, or knowledge graph embeddings. Although their final objective is the same (to obtain high precision and recall) and they perform similar procedures, each solution implements its own workflow and addresses each task proposed by SemTab in a different way. Hence, making a fair and fine-grained comparison among the different solutions to understand how they obtain their actual results is not an easy task.
In this vision paper, we align the tasks followed by solutions for the automation of the semantic table annotation with concepts of existing declarative solutions. We indicatively select and analyze a few tools for the automation of KG construction and identify common steps. We discuss if they can be declaratively described relying on existing mapping languages, and what the challenges are to proceed in this direction. We consider the RDF Mapping Language (RML) [1] as a high-level and general representation to describe the schema transformations and its extension, the Function Ontology (FnO) [16] to describe the data transformations.
Our objective is not to present a complete study, but to investigate if there is potential in this direction. By describing the steps followed by different solutions in a more fine-grained and standard manner, we make the steps comparable, and we can better discuss what challenges we need to address to guarantee the maintainability, explainability and reproducibility of the KG construction, as well as to ensure the provenance of each performed task.
## 2. Task alignment with mapping languages

We analyze the different steps of the SemTab challenge, inspect the relation between the SemTab challenge tasks, and align them with concepts from the declarative construction of RDF graphs (Figure 1). To achieve this, we include the relationship between each of the tasks and their potential declarations within a mapping language. We considered the RML mapping language because it is commonly used and the authors are more familiar with it, but we are confident that other mapping languages could express the same concepts. Before we proceed with the alignment, we give a short introduction to the SemTab challenge and RML:

**SemTab challenge.** The SemTab challenge consists of three tasks: (i) cell to KG entity matching (CEA), which matches cells to individuals; (ii) column to KG class matching (CTA), which matches columns to classes; and (iii) column pair to KG property matching (CPA), which captures the relationships between pairs of columns.

**RML.** The RDF Mapping Language (RML), a superset of the W3C recommended R2RML, expresses schema transformations from heterogeneous data to RDF. An RML mapping contains one or more Triples Maps which in turn contain a Subject Map to generate the subjects of the RDF triples, and zero or more Predicate Object Maps with pairs of Predicate and Object Maps to generate the predicates and the objects respectively for each incoming data record. RML was aligned with the Function Ontology (FnO) [16] to describe the data transformations which are required to construct the desired RDF graph, ensuring that the functions are independent from any implementation.
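Conceptually, an RML processor applies these maps to every incoming record. The following minimal Python sketch mimics that behavior; the dictionary-based rule format, example URIs, and row are our own illustrations, not actual RML syntax:

```python
# Sketch of how an RML-style processor applies one Triples Map:
# a subject template plus predicate-object pairs are instantiated
# for each incoming data record, yielding RDF triples.

def apply_triples_map(record, subject_template, predicate_object_maps):
    """Generate (subject, predicate, object) triples for one record."""
    subject = subject_template.format(**record)
    triples = []
    for predicate, object_template in predicate_object_maps:
        triples.append((subject, predicate, object_template.format(**record)))
    return triples

# Toy usage with an illustrative row and templates.
row = {"id": "R1", "name": "Central Park Casino"}
triples = apply_triples_map(
    row,
    "http://ex.org/restaurant/{id}",
    [("http://schema.org/name", "{name}")],
)
```

A real engine would additionally resolve logical sources, term types, and FnO function calls; the sketch only shows the per-record instantiation at the core of the process.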
---
${}^{4}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/
---
Figure 1: Alignment of automation tasks with a declarative mapping language. Example extracted from the SemTab 2021 challenge, where the CEA, CTA and CPA tasks are aligned with a declarative construction of a knowledge graph using the RML mapping language (YARRRML serialisation).

We analyze how the different tasks of the challenge contribute to constructing a part of an RDF triple, and we align these tasks with the corresponding concepts of the RML mapping language that construct the same part of an RDF triple.

Cell-Entity Annotation (CEA): This task identifies the URI of an entity from a cell. In the target RDF graph, this is the subject or the object of the RDF triple. In Fig. 1, the Col0 values are used to obtain the subjects of the triples while the Col3 values generate the objects (both green colored in the RDF extract of Fig. 1). If a declarative approach is considered to generate these triples, for example in RML, the rr:subjectMap property is used (line 5 of the RML doc in Fig. 1), which declares how the subjects of the triples are generated, and the rr:objectMap property (line 8 of the RML doc in Fig. 1), when the expected objects are in the form of URIs.

Column-Type Annotation (CTA): This task predicts the common class of a set of items given a column from the table. SemTab assumes that a table only generates one kind of entity (i.e., the first column is used for CTA). In Figure 1, we can observe that the URIs retrieved using Col0 are considered for obtaining the corresponding shared concept (i.e., restaurant) (red colored in the RDF extract of Fig. 1). Declaring the class in RML can be done through the shortcut rr:class property within the rr:subjectMap (line 7 of the RML doc in Fig. 1) or using an rr:predicateObjectMap with a fixed rdf:type predicate.

Columns-Property Annotation (CPA): This task aims to predict the property that relates the CTA column (subjects) to the rest of the columns. Fig. 1 shows a CPA task that relates Col0 with Col3 through the property architectural style (wdt:P149, yellow colored in the RDF extract). In RML, the predicates of the triples are declared using the rr:predicateMap property (line 8 of the RML doc in Fig. 1), and unlike typical mapping rules, where it is usually assumed that predicates are constants (as they are declared in the input ontology), the predicates depend on the data, hence they are dynamically defined.
Based on the aforementioned analysis, we conclude that the tasks performed to automate the KG construction can be aligned with concepts from declarative mapping languages. The CEA task is aligned with the RDF term construction for the subject or the object of the RDF triple, the CTA task assigns the class and the CPA task aligns with the Predicate and Object Map.
## 3. Comparing semantic tabular matching systems

In this section, we analyze in detail the steps performed by some of the tools proposed for solving the SemTab challenge. The comparative analysis among the three selected engines (summarized in Table 1) is not meant to be exhaustive. We aim to identify whether there are common steps and functions that the engines perform to accomplish the challenge's tasks and, ultimately, whether it is possible and desirable to declaratively describe them with mapping languages.
### 3.1. Selected Systems

We indicatively selected the systems that: (i) obtained good results in the SemTab 2021 challenge ${}^{5}$ ; and (ii) have their source code openly available. Therefore, we included in this comparison JenTab [14], MTab [13] and MantisTable V [17]. The use of different terminologies for describing similar tasks (e.g., majority vote in Mantis V is referred to as frequency) and the complexity of the proposed workflows, where the results from one task influence the others in an iterative way, make it difficult to compare the approaches and reproduce their results.

JenTab ${}^{6}$ participated in SemTab 2020 and 2021, and it was positioned among the top five solutions in most rounds. It follows a heuristic-based approach, proposing the CFS (Create, Filter, Select) procedure for all tasks, with different configurations and workflows.

MTab ${}^{7}$ participated in all SemTab editions, winning the first prize in 2019 and 2020. Apart from supporting multilingual datasets, MTab implements several approaches for performing the entity search (i.e., CEA): keyword search, fuzzy search, and aggregation search ${}^{8}$ .

MantisTable V ${}^{9}$ is an extended and improved version of MantisTable [18]. Similarly to JenTab, MantisTable V also participated in the SemTab 2020 and 2021 editions. It implements a set of heuristic rules (similar to JenTab) and complex string similarity functions for the entity recognition task (like MTab). Additionally, it provides a general and efficient tool (LamAPI) to fetch the necessary data for all SemTab tasks, independently of the target KG.
---
${}^{5}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/2021
${}^{6}$ https://github.com/fusion-jena/JenTab
${}^{7}$ https://github.com/phucty/mtab_tool
${}^{8}$ https://mtab.app/mtabes/docs
${}^{9}$ https://bitbucket.org/disco_unimib/mantistable-v/
---
### 3.2. Observations

The systems we inspected follow the same steps: they perform a preprocessing step, and set up lookup and datatype prediction services. Then the CEA task is performed, followed by the CTA and CPA tasks which depend on the CEA task. Given that the systems follow the same steps, we could map the three main tasks (CEA, CPA, CTA) to the Create-Filter-Select (CFS) procedure proposed by JenTab (see Table 1).
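The CFS procedure can be sketched generically; in the hypothetical Python sketch below, the create, filter, and select functions, as well as the toy lookup table, are placeholders of our own, not any engine's actual implementation:

```python
# Sketch of the Create-Filter-Select (CFS) pattern: generate candidate
# annotations, prune them, then pick the final one.

def cfs(cell, create, filters, select):
    candidates = create(cell)            # CREATE: candidate KG entities
    for keep in filters:                 # FILTER: prune unlikely candidates
        candidates = [c for c in candidates if keep(cell, c)]
    return select(cell, candidates)      # SELECT: final annotation (or None)

# Toy usage: candidates from a fixed lookup, keep Q-identifiers,
# select the shortest (a stand-in for a real scoring function).
lookup = {"paris": ["Q90", "Q167646"], "berlin": ["Q64"]}
result = cfs(
    "paris",
    create=lambda c: lookup.get(c, []),
    filters=[lambda c, cand: cand.startswith("Q")],
    select=lambda c, cands: min(cands, key=len) if cands else None,
)
```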

We observe similarities in most tasks among the engines. The subtasks performed in the preprocessing step are very similar in the three engines. The preprocessing tasks include several functions, such as fixing encoding issues, removing HTML tags or special characters, and detecting missing white spaces (see Table 1), and the engines usually delegate them to third-party libraries (e.g., ftfy ${}^{10}$ ). We observe that similar tasks are performed when declarative solutions are used for cleaning and preparing the data. These preprocessing tasks are described with FnO in the case of RML and executed either together with the schema transformations or as a separate preprocessing step.
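As an illustration of such preprocessing functions, here is a minimal Python sketch; the exact normalization rules are our own simplifications, and real systems delegate encoding repair to libraries such as ftfy, for which Unicode normalization below is only a stand-in:

```python
import re
import unicodedata

# Sketch of typical cell preprocessing: tame encoding variants,
# strip HTML tags, drop special characters, collapse whitespace.

def preprocess(cell: str) -> str:
    cell = unicodedata.normalize("NFC", cell)    # stand-in for encoding repair
    cell = re.sub(r"<[^>]+>", " ", cell)         # strip HTML tags
    cell = re.sub(r"[^\w\s.,'-]", " ", cell)     # drop special characters
    cell = re.sub(r"\s+", " ", cell).strip()     # collapse whitespace
    return cell
```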

The same occurs for the datatype prediction, where regular expressions are often used to detect whether cell values are entities or literals, and what type of literals (string, date, or number). In the case of declarative solutions, this datatype inspection task is performed manually. However, adjusting the datatype is possible by relying on functions for data transformations.
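A minimal sketch of such regex-based datatype prediction follows; the patterns are illustrative and far less complete than those used by the actual systems:

```python
import re

# Sketch of regex-based datatype prediction for a cell value: decide
# whether it looks like a number or a date, falling back to string
# (i.e., a candidate entity label).

PATTERNS = [
    ("number", re.compile(r"^-?\d+([.,]\d+)?$")),
    ("date",   re.compile(r"^\d{4}-\d{2}-\d{2}$")),
]

def predict_datatype(cell: str) -> str:
    cell = cell.strip()
    for name, pattern in PATTERNS:
        if pattern.match(cell):
            return name
    return "string"  # fallback: plain string / candidate entity
```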

Most of them also incorporate a lookup step to retrieve the necessary data from the KGs (e.g., using SPARQL queries), including similarity functions or fuzzy search. The search engine for the KG lookups in JenTab and Mantis V is ElasticSearch, although the former implements the Jaro-Winkler distance [19] while the latter embeds it in a more efficient engine and exploits its query capabilities. Lookups were also incorporated in the case of declarative solutions [20], where lookup services retrieve a URI to identify an entity instead of assigning a new one.
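For reference, the Jaro-Winkler similarity mentioned above can be sketched as follows; this is a compact textbook-style implementation for illustration, not the engines' actual code:

```python
# Jaro similarity: fraction of matching characters (within a sliding
# window) adjusted by transpositions; Jaro-Winkler boosts pairs that
# share a common prefix (up to 4 characters).

def jaro(s: str, t: str) -> float:
    if s == t:
        return 1.0
    if not s or not t:
        return 0.0
    window = max(len(s), len(t)) // 2 - 1
    s_matched = [False] * len(s)
    t_matched = [False] * len(t)
    matches = 0
    for i, c in enumerate(s):
        for j in range(max(0, i - window), min(len(t), i + window + 1)):
            if not t_matched[j] and t[j] == c:
                s_matched[i] = t_matched[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    k = transpositions = 0
    for i in range(len(s)):          # count half-transpositions
        if s_matched[i]:
            while not t_matched[k]:
                k += 1
            if s[i] != t[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len(s) + matches / len(t)
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s: str, t: str, p: float = 0.1) -> float:
    j = jaro(s, t)
    prefix = 0
    for a, b in zip(s, t):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```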

As far as the actual tasks are concerned, each engine follows its own approach for the CEA, CTA, and CPA tasks, although we also find some similarities. The most important ones, implemented in all three engines, are: (i) the Levenshtein distance [21] for filtering candidates, and (ii) the majority vote (called frequency in Mantis V) for selecting the final annotations. We believe that the use of declarative approaches, such as the Function Ontology [16] for describing common functions (e.g., Levenshtein), could make the solutions more comparable. It would also be clearer whether they perform the same function, and the solutions would be more explainable, as current solutions for the automation of KG construction act like black boxes: neither are their implementations always open sourced nor are declarative descriptions of what they execute available. Providing at least declarative descriptions of the performed tasks would enhance the transparency of these solutions.
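The two shared building blocks can be sketched as follows; this is a simplified illustration of the general techniques, not the engines' actual code:

```python
from collections import Counter

# (i) Levenshtein edit distance, used for filtering candidate labels,
# computed with the standard two-row dynamic programming scheme.

def levenshtein(s: str, t: str) -> int:
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (cs != ct)))  # substitution
        prev = curr
    return prev[-1]

# (ii) Majority vote over per-cell annotations, used for selecting the
# final column annotation (called "frequency" in MantisTable V).

def majority_vote(annotations):
    return Counter(annotations).most_common(1)[0][0] if annotations else None
```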
---
${}^{10}$ https://pypi.org/project/ftfy/
---
Table 1
Tasks comparison among different SemTab solutions

<table>
<tr><td colspan="2"/><td>JenTab</td><td>MTab</td><td>Mantis V</td></tr>
<tr><td colspan="2">KG Lookup</td><td>ElasticSearch on top of KG; SPARQL queries</td><td>WikiGraph generation; ad-hoc API</td><td>LamAPI (ElasticSearch, Mongo and Python)</td></tr>
<tr><td rowspan="5">Preprocessing</td><td>Fix encoding</td><td>Y</td><td>Y</td><td>N</td></tr>
<tr><td>Special characters</td><td>Y</td><td>N</td><td>Y</td></tr>
<tr><td>Restore missing spaces</td><td>Y</td><td>N</td><td>Y</td></tr>
<tr><td>Remove HTML tags</td><td>N</td><td>Y</td><td>N</td></tr>
<tr><td>Remove non-cell-values</td><td>N</td><td>Y</td><td>N</td></tr>
<tr><td colspan="2">Datatype</td><td>REGEX; type-based cleaning</td><td>Cell value identification (literal, entity); SpaCy models for potential types; majority vote to define column type</td><td>REGEX for datatypes exceeding a threshold; entity columns for those that do not exceed the threshold</td></tr>
<tr><td rowspan="3">CEA</td><td>CREATE</td><td>Different query rewriting techniques</td><td>Keyword search (BM25); fuzzy search (Levenshtein distance)</td><td>LamAPI lookup with IB similarity</td></tr>
<tr><td>FILTER</td><td>Levenshtein distance (among others)</td><td>Filter and hashing (Symmetric Delete); context similarities by row</td><td>Levenshtein confidence score for entities; Literals XXX</td></tr>
<tr><td>SELECT</td><td>Levenshtein distance</td><td>Highest context similarity</td><td>xxxx</td></tr>
<tr><td rowspan="3">CTA</td><td>CREATE</td><td>Types from CEA</td><td>Types from CEA</td><td>Types from CEA</td></tr>
<tr><td>FILTER</td><td>Remove the less popular types</td><td>-</td><td>-</td></tr>
<tr><td>SELECT</td><td>Majority vote</td><td>Majority vote</td><td>Majority vote</td></tr>
<tr><td rowspan="3">CPA</td><td>CREATE</td><td>Cell annotations (CEA) and fuzzy match for data properties</td><td>Aggregate all properties from CEA by row</td><td>Properties from CEA lookups</td></tr>
<tr><td>FILTER</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>SELECT</td><td>Majority vote</td><td>Majority vote</td><td>Majority vote</td></tr>
</table>
## 4. Challenges for a declarative automation of KG Construction
We identify a set of challenges that need to be addressed to declaratively describe solutions for automatic KG construction. These challenges can be divided into two main categories: technical challenges and conceptual challenges.

On the technical side, there is a major difference between the solutions for the automation of KG construction and the execution of declarative KG construction solutions: the solutions for automatic KG construction rely on iterative processes that continuously refine and improve a task, while the different tasks influence each other. On the contrary, declarative KG construction is a linear process that is executed only once. Not all declarative rules are executed linearly; solutions that restructure [6] or parallelize [22, 23] them are increasingly encountered. Thus, if the solutions for automatic KG construction are declaratively described, their iterative execution needs to be described as well. How do we do that with the mapping languages?
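The contrast can be illustrated with a fixed-point loop; in this hypothetical sketch, step stands for one pass over the interdependent annotation tasks, and the toy state narrows candidate sets each round:

```python
# Sketch of iterative refinement: re-run the task pipeline until the
# joint annotation state stops changing (a fixed point), as opposed to
# the single linear pass of declarative rule execution.

def run_until_stable(state, step, max_rounds=10):
    for _ in range(max_rounds):
        new_state = step(state)
        if new_state == state:   # converged: no task changed its output
            return new_state
        state = new_state
    return state

# Toy usage: each round keeps only the top candidate per task.
result = run_until_stable(
    {"cea": ["Q1", "Q2"], "cta": ["city"]},
    step=lambda s: {task: cands[:1] for task, cands in s.items()},
)
```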

Besides the overall execution process, the iteration patterns are different. The solutions for automatic KG construction are applied in all directions, both per column and per row, and even combined. On the contrary, the declarative solutions are applied only per row, and the mapping languages are designed under this assumption. Should the mapping languages be extended to support more iteration patterns? If so, would the rml:iterator for RML and the relevant constructs in the other mapping languages be sufficient, or are more adjustments required?
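The two iteration patterns can be contrasted in a few lines; the toy table below is illustrative only:

```python
# Declarative engines iterate a tabular source per row; annotation
# systems additionally iterate per column (and combinations thereof).

table = [["Louvre", "Paris"],
         ["Prado",  "Madrid"]]

rows = [row for row in table]                 # per-row iteration
columns = [list(col) for col in zip(*table)]  # per-column iteration
```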

The solutions for automatic KG construction rely on interrelated tasks which may produce intermediate representations, and their results impact the rest of the tasks. Thus, the declarative KG construction solutions need to deal with dynamic and recursive steps (e.g., intermediate representations of the input data sources and mapping rules, multiple function executions, etc.) that can negatively impact the generation process. Hence, declaratively describing them is a challenge. Should the mapping languages be further extended then?

On the conceptual side, there are two main differences with respect to the training data and target KG. In most real projects that declarative solutions tackle, only the input data and sometimes the target ontology are provided; there is neither similar data to train the solutions nor an existing KG that can be used to find entities or to predict the relationships. While relying on ontology matching techniques between existing KGs (e.g., DBpedia, Wikidata) and the target ontology, or exploiting NLP approaches between the ontology and the input sources' documentation, could be a solution for the latter, would it be realistic given that most ontologies are not aligned and not all of them provide documentation?
## 5. Conclusions and Future Work

In this paper, we analyze KG construction solutions and compare the automatic ones with the declarative ones. While the tasks can be aligned with respect to what they achieve, their execution is fundamentally different and a direct alignment is not feasible.

Automatic solutions for KG construction are required to facilitate the adoption of KGs, but there are also merits when the automation tasks are declaratively described, with respect to maintainability, sustainability, and reproducibility. However, directly aligning the automatic solutions with the declarative solutions might be technically and conceptually challenging considering their different execution and iteration patterns. Extending the existing mapping languages would be a solution, but it would also require addressing the identified challenges, and not only those. Would such extensions be feasible and desired, or would they lead the languages beyond their purpose? Moreover, mapping languages are not the only approach to declarative descriptions; declarative descriptions of workflows emerge as well. Would that be a more viable solution? If so, would the automatic and declarative solutions keep growing in different directions? These are questions we would like to reflect on and discuss during the workshop.
## References

[1] A. Dimou, M. Vander Sande, P. Colpaert, R. Verborgh, E. Mannens, R. Van de Walle, RML: a generic language for integrated RDF mappings of heterogeneous data, in: LDOW, 2014.
[2] C. Debruyne, D. O'Sullivan, R2RML-F: towards sharing and executing domain logic in R2RML mappings, in: LDOW@ WWW, 2016.
[3] M. Lefrançois, A. Zimmermann, N. Bakerally, A SPARQL extension for generating RDF from heterogeneous formats, in: European Semantic Web Conference, Springer, 2017, pp. 35-50.
[4] E. Daga, L. Asprino, P. Mulholland, A. Gangemi, Facade-X: an opinionated approach to SPARQL anything, Studies on the Semantic Web 53 (2021) 58-73.
[5] E. Iglesias, S. Jozashoori, D. Chaves-Fraga, D. Collarana, M.-E. Vidal, SDM-RDFizer: An RML Interpreter for the Efficient Creation of RDF Knowledge Graphs, in: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 3039-3046.
[6] S. Jozashoori, D. Chaves-Fraga, E. Iglesias, M.-E. Vidal, O. Corcho, FunMap: Efficient execution of functional mappings for knowledge graph creation, in: International Semantic Web Conference, Springer, 2020, pp. 276-293.
[7] L. F. d. Medeiros, F. Priyatna, O. Corcho, MIRROR: Automatic R2RML mapping generation from relational databases, in: International Conference on Web Engineering, Springer, 2015, pp. 326-343.

[8] C. Bizer, A. Seaborne, D2RQ – treating non-RDF databases as virtual RDF graphs, in: Proceedings of the 3rd International Semantic Web Conference (ISWC2004), volume 2004, Springer, Hiroshima, 2004.
[9] D. Calvanese, B. Cogrel, S. Komla-Ebri, R. Kontchakov, D. Lanti, M. Rezk, M. Rodriguez-Muro, G. Xiao, Ontop: Answering SPARQL queries over relational databases, Semantic Web 8 (2017) 471-487.
[10] Á. Sicilia, G. Nemirovski, AutoMap4OBDA: Automated generation of R2RML mappings for OBDA, in: European Knowledge Acquisition Workshop, Springer, 2016, pp. 577-592.
[11] E. Jiménez-Ruiz, E. Kharlamov, D. Zheleznyakov, I. Horrocks, C. Pinkel, M. G. Skjæveland, E. Thorstensen, J. Mora, Bootox: Bootstrapping OWL 2 ontologies and R2RML mappings from relational databases, in: International Semantic Web Conference (P&D), 2015.

[12] E. Jiménez-Ruiz, O. Hassanzadeh, V. Efthymiou, J. Chen, K. Srinivas, SemTab 2019: Resources to benchmark tabular data to knowledge graph matching systems, in: European Semantic Web Conference, Springer, 2020, pp. 514-530.

[13] P. Nguyen, I. Yamada, N. Kertkeidkachorn, R. Ichise, H. Takeda, SemTab 2021: Tabular Data Annotation with MTab Tool, SemTab@ISWC (2021) 92-101.

[14] N. Abdelmageed, S. Schindler, JenTab Meets SemTab 2021's New Challenges, in: SemTab@ISWC, 2021, pp. 42-53.

[15] V.-P. Huynh, J. Liu, Y. Chabot, F. Deuzé, T. Labbé, P. Monnin, R. Troncy, DAGOBAH: Table and Graph Contexts For Efficient Semantic Annotation Of Tabular Data, in: SemTab@ISWC, 2021, pp. 19-31.
[16] B. De Meester, T. Seymoens, A. Dimou, R. Verborgh, Implementation-independent function reuse, Future Generation Computer Systems 110 (2020) 946-959.

[17] R. Avogadro, M. Cremaschi, MantisTable V: a novel and efficient approach to Semantic Table Interpretation, SemTab@ISWC (2021) 79-91.
[18] M. Cremaschi, F. De Paoli, A. Rula, B. Spahiu, A fully automated approach to a complete semantic table interpretation, Future Generation Computer Systems 112 (2020) 478-500.
[19] W. E. Winkler, String comparator metrics and enhanced decision rules in the Fellegi-Sunter model of record linkage (1990).
[20] S. Jozashoori, A. Sakor, E. Iglesias, M.-E. Vidal, EABlock: A Declarative Entity Alignment Block for Knowledge Graph Creation Pipelines, in: Proceedings of the 37th ACM/SIGAPP Symposium On Applied Computing, 2022.
[21] V. I. Levenshtein, et al., Binary codes capable of correcting deletions, insertions, and reversals, in: Soviet physics doklady, volume 10, Soviet Union, 1966, pp. 707-710.
[22] G. Haesendonck, W. Maroy, P. Heyvaert, R. Verborgh, A. Dimou, Parallel RDF generation from heterogeneous big data, in: Proceedings of the International Workshop on Semantic Big Data, 2019, pp. 1-6.

[23] J. Arenas-Guerrero, D. Chaves-Fraga, J. Toledo, M. S. Pérez, O. Corcho, Morph-KGC: Scalable knowledge graph materialization with mapping partitions, Semantic Web (2022).
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,181 @@
§ DECLARATIVE DESCRIPTION OF KNOWLEDGE GRAPHS CONSTRUCTION AUTOMATION: STATUS & CHALLENGES
David Chaves-Fraga ${}^{1,2}$ , Anastasia Dimou ${}^{1}$
${}^{1}$ KU Leuven, Department of Computer Science, Sint-Katelijne-Waver, Belgium
${}^{2}$ Universidad Politécnica de Madrid, Campus de Montegancedo, Boadilla del Monte, Spain
§ ABSTRACT

Nowadays, Knowledge Graphs (KG) are among the most powerful mechanisms to represent knowledge and integrate data from multiple domains. However, most of the available data sources are still described in heterogeneous data structures, schemas, and formats. The conversion of these sources into the desirable KG requires manual and time-consuming tasks, such as programming translation scripts, defining declarative mapping rules, etc. In this vision paper, we analyze the trends regarding the automation of KG construction but also the use of mapping languages for the same process, and align the two by analyzing their tasks and a few exemplary tools. Our aim is not to have a complete study but to investigate if there is potential in this direction and, if so, to discuss what challenges we need to address to guarantee the maintainability, explainability, and reproducibility of the KG construction.
§ KEYWORDS
Knowledge Graphs, Automation, Explainable AI, Declarative Rules
§ 1. INTRODUCTION

A lot of work on knowledge graph (KG) construction is focused on defining mapping languages to declaratively describe the transformation process, and on optimizing the execution of such declarative rules. The mapping languages rely on either dedicated syntaxes, such as the family of languages around the W3C recommended R2RML ${}^{1}$ (e.g., RML [1] or R2RML-F [2]), or on re-purposing existing specifications, such as query languages like the W3C recommended SPARQL ${}^{2}$ (e.g., SPARQL-Generate [3] or SPARQL-Anything [4]), or constraints languages like ShEx ${}^{3}$ (e.g., ShExML [5,6]).

Despite the plethora of mapping languages and the increasing number of optimizations for the execution of the declarative rules, these rules are still defined through a manual and time-consuming process, negatively affecting their adoption. Different solutions were proposed to automate the definition of mapping rules that describe how a KG should be constructed. On the one hand, MIRROR [7], D2RQ [8] and Ontop [9] follow a similar approach, extracting from the RDB schema a target ontology and the mapping correspondences. On the other hand, AutoMap4OBDA [10] and BootOX [11] consider an input ontology and generate actual R2RML mappings from the RDB. However, these solutions provide declarative automation only for relational databases, while recent solutions investigate non-declarative automation of KG construction.

KGCW'22: International Workshop on Knowledge Graph Construction, May 30, 2022, Crete, Greece

david.chaves@upm.es (D. Chaves-Fraga); anastasia.dimou@kuleuven.be (A. Dimou)

ORCID: 0000-0003-3236-2789 (D. Chaves-Fraga); 0000-0003-2138-7972 (A. Dimou)

© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
${}^{1}$ http://www.w3.org/TR/r2rml/
${}^{2}$ https://www.w3.org/TR/sparql11-overview/
${}^{3}$ https://shex.io/
|
| 38 |
+
|
| 39 |
+
Beyond relational databases, the recent SemTab challenge ${}^{4}$ present a set of tabular datasets [12] with the aim of matching them automatically to external KGs, such as DBpedia and Wikidata. The proposed solutions $\left\lbrack {{13},{14},{15}}\right\rbrack$ address the problem using different techniques, such as heuristic rules, fuzzy searching over the KGs, or knowledge graph embeddings. Although their final objective is the same (obtain high precision and recall results) and they perform similar procedures, each solution implements its own workflow and addresses each proposed task by SemTab in different ways. Hence, making a fair and fine-grained comparison among the different solutions to understand how they obtain the actual results is not an easy task.
|
| 40 |
+
|
| 41 |
+
In this vision paper, we align the tasks followed by solutions for the automation of the semantic table annotation with concepts of existing declarative solutions. We indicatively select and analyze a few tools for the automation of KG construction and identify common steps. We discuss if they can be declaratively described relying on existing mapping languages, and what the challenges are to proceed in this direction. We consider the RDF Mapping Language (RML) [1] as a high-level and general representation to describe the schema transformations and its extension, the Function Ontology (FnO) [16] to describe the data transformations.
|
| 42 |
+
|
| 43 |
+
Our objective is not to present a complete study, but to investigate if there is potential in this direction. By describing the steps followed by different solutions in a more fine-grained and standard manner, we make the steps comparable, and we can better discuss what challenges we need to address to guarantee the maintainability, explainability and reproducibility of the KG construction, as well as to ensure the provenance of each performed task.
|
| 44 |
+
|
| 45 |
+
§ 2. TASK ALIGNMENT WITH MAPPING LANGUAGES
|
| 46 |
+
|
| 47 |
+
We analyze the different steps of the SemTab challenge, inspect the relation between the SemTab challenge tasks and align them with concepts from the declarative construction of RDF graphs (Figure 1). To achieve this, we include the relationship between each of the tasks and their potential declarations within a mapping language. We considered the RML mapping language because it is commonly used and the authors are more familiar with, but we are confident that the other mapping languages could express the same concepts. Before we proceed with the alignment, we give a small introduction on the SemTab challenge and RML:
|
| 48 |
+
|
| 49 |
+
SemTab challenge The SemTab challenge consists of three tasks: (i) cell to KG entity matching (CEA), which matches cells to individuals; (ii) column to KG class matching (CTA), which matches cells to classes; and (iii) column pair to KG property matching (CPA), which captures the relationships between pairs of columns.
|
| 50 |
+
|
| 51 |
+
RML The RDF mapping language (RML), a superset of the W3C recommended R2RML, expresses schema transformations from heterogeneous data to RDF. An RML mapping contains one or more Triple Maps which on their own turn contain a Subject Map to generate the subjects of the RDF triples, and zero or more Predicate Object Maps with pairs of Predicate and Object Maps to generate the predicates and the objects respectively for each incoming data record. RML was aligned with the Function Ontology (FnO) [16] to describe the data transformations which are required to construct the desired RDF graph, ensuring that the functions are independent from any implementation.
|
| 52 |
+
|
| 53 |
+
${}^{4}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/
|
| 54 |
+
|
| 55 |
+
< g r a p h i c s >
|
| 56 |
+
|
| 57 |
+
Figure 1: Automation tasks alignment within declarative mapping language. Example extracted from SemTab 2021 challenge, where the CEA, CTA and CPA tasks are aligned with a declarative construction of a knowledge graph using the RML mapping language (YARRRML serialisation).
|
| 58 |
+
|
| 59 |
+
We analyze how the different tasks of the challenge contribute in constructing a part of an RDF triple, and we align these tasks with the corresponding concepts of the RML mapping language that construct the same part of an RDF triple.
|
| 60 |
+
|
| 61 |
+
Cell-Entity Annotation (CEA): This task identifies the URI of an entity from a cell. In the target RDF graph, this is the subject or the object of the RDF triple. In Fig. 1, the Col0 values are used to obtain the subjects of the triples while the Col3 values generate the objects (both green colored in the RDF extract of Fig. 1). If a declarative approach is considered to generate these triples, for example in RML, the rr:subjectMap property is used (line 5 of the RML document in Fig. 1), which declares how the subjects of the triples are generated, as well as the rr:objectMap property (line 8 of the RML document in Fig. 1) when the expected objects are in the form of URIs.

Column-Type Annotation (CTA): This task predicts the common class of a set of items given a column from the table. SemTab assumes that a table only generates one kind of entity (i.e., the first column is used for CTA). In Fig. 1, we can observe that the URIs retrieved using Col0 are considered for obtaining the corresponding shared concept (i.e., restaurant; red colored in the RDF extract of Fig. 1). Declaring the class in RML can be done through the rr:class shortcut property within the rr:SubjectMap (line 7 of the RML document in Fig. 1) or using an rr:predicateObjectMap with a fixed rdf:type predicate.

Columns-Property Annotation (CPA): This task aims to predict the property that relates the CTA column (subjects) to the rest of the columns. Fig. 1 shows a CPA task that relates Col0 with Col3 through the property architectural style (wdt:P149, yellow colored in the RDF extract). In RML, the predicates of the triples are declared using the rr:predicateMap property (line 8 of the RML document in Fig. 1). Unlike typical mapping rules, where it is usually assumed that predicates are constants (as they are declared in the input ontology), here the predicates depend on the data and hence are dynamically defined.
Based on the aforementioned analysis, we conclude that the tasks performed to automate KG construction can be aligned with concepts from declarative mapping languages: the CEA task is aligned with the construction of the RDF term for the subject or the object of the RDF triple, the CTA task assigns the class, and the CPA task aligns with the Predicate and Object Maps.
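For illustration, the alignment above can be sketched in a few lines of Python. This is a hypothetical sketch, not an actual SemTab or RML API: the input dicts standing in for the CEA, CTA, and CPA outputs and the resulting rule layout are assumptions, chosen to mirror the structure of an RML Triples Map.

```python
# Hypothetical sketch: assembling SemTab-style annotations (CEA, CTA, CPA)
# into an RML-like triples-map structure. Inputs and output layout are
# illustrative assumptions, not a real SemTab or RML interface.

def annotations_to_triples_map(cea, cta, cpa, subject_column):
    """Build an RML-like rule dict from per-column annotations.

    cea: {column: term template} -- how CEA builds a URI per column
    cta: class URI predicted for the subject column
    cpa: {column: predicate URI} -- property relating subject_column to column
    """
    triples_map = {
        # CEA on the subject column aligns with rr:subjectMap; CTA with rr:class
        "subjectMap": {"reference": cea[subject_column], "class": cta},
        "predicateObjectMaps": [],
    }
    for column, predicate in cpa.items():
        # CPA yields the predicate (rr:predicateMap); CEA on the other
        # column yields the object term (rr:objectMap)
        triples_map["predicateObjectMaps"].append(
            {"predicateMap": predicate, "objectMap": {"reference": cea[column]}}
        )
    return triples_map

tm = annotations_to_triples_map(
    cea={"Col0": "wd:{Col0}", "Col3": "wd:{Col3}"},
    cta="wd:Q11707",           # e.g. "restaurant"
    cpa={"Col3": "wdt:P149"},  # architectural style
    subject_column="Col0",
)
```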
§ 3. COMPARING SEMANTIC TABULAR MATCHING SYSTEMS
In this section, we analyze in detail the steps performed by some of the tools proposed for solving the SemTab challenge. The comparative analysis among the three selected engines (summarized in Table 1) is not meant to be exhaustive. We aim to identify whether there are common steps and functions that the engines perform to accomplish the challenge's tasks and, ultimately, whether it is possible and desirable to describe them declaratively with mapping languages.

§ 3.1. SELECTED SYSTEMS
As an indicative sample, we selected the systems that: (i) obtained good results in the SemTab 2021 challenge ${}^{5}$ ; and (ii) have their source code openly available. Therefore, we included in this comparison JenTab [14], MTab [13] and MantisTable V [17]. The use of different terminologies for describing similar tasks (e.g., majority vote in Mantis V is referred to as frequency) and the complexity of the proposed workflows, where the results of one task influence the others in an iterative way, make it difficult to compare the approaches and reproduce their results.

JenTab ${}^{6}$ participated in SemTab 2020 and 2021, and was positioned among the top five solutions in most rounds. It follows a heuristic-based approach, proposing the CFS (Create, Filter, Select) procedure for all tasks, with different configurations and workflows.

${\mathbf{{MTab}}}^{7}$ participated in all SemTab editions, winning the first prize in 2019 and 2020. Apart from supporting multilingual datasets, MTab implements several approaches for performing the entity search (i.e., CEA): keyword search, fuzzy search, and aggregation search ${}^{8}$ .

MantisTable ${\mathrm{V}}^{9}$ is an extended and improved version of MantisTable [18]. Similarly to JenTab, MantisTable has participated in the SemTab 2020 and 2021 editions. It implements a set of heuristic rules (similar to JenTab's) and complex string similarity functions for the entity recognition task (like MTab). Additionally, it provides a general and efficient tool (LamAPI) to fetch the necessary data for all SemTab tasks, independently of the target KG.
${}^{5}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/2021
${}^{6}$ https://github.com/fusion-jena/JenTab

${}^{7}$ https://github.com/phucty/mtab_tool

${}^{8}$ https://mtab.app/mtabes/docs

${}^{9}$ https://bitbucket.org/disco_unimib/mantistable-v/
§ 3.2. OBSERVATIONS
The systems we inspected follow the same steps: they perform a preprocessing step, and set up lookup and datatype prediction services. Then the CEA task is performed, followed by the CTA and CPA tasks, which depend on the CEA task. Given that the systems follow the same steps, we could map the three main tasks (CEA, CPA, CTA) to the Create-Filter-Select (CFS) procedure proposed by JenTab (see Table 1).
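The shared Create-Filter-Select structure can be expressed generically. The following Python sketch is illustrative only, not JenTab's actual code: the concrete create, keep, and score functions are hypothetical stand-ins for each engine's own heuristics.

```python
# Illustrative sketch of the Create-Filter-Select (CFS) pattern shared by
# the three tasks. The create/keep/score callables are hypothetical
# stand-ins for each engine's heuristics.

def cfs(cell, create, keep, score):
    """Generic CFS pipeline for one annotation task.

    create(cell) -> candidate list; keep(candidate) -> bool;
    score(candidate) -> comparable value used for the final selection.
    """
    candidates = create(cell)                        # CREATE
    candidates = [c for c in candidates if keep(c)]  # FILTER
    return max(candidates, key=score) if candidates else None  # SELECT

# Toy usage: candidate entities with similarity scores for one cell value
pick = cfs(
    "Eiffel Tower",
    create=lambda cell: [("wd:Q243", 0.95), ("wd:Q1440", 0.40)],
    keep=lambda c: c[1] > 0.5,
    score=lambda c: c[1],
)
```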
We observe similarities in most tasks among the engines. The subtasks performed in the preprocessing step are very similar in the three engines. The preprocessing tasks include several functions, such as fixing encoding issues, removing HTML tags or special characters, and detecting missing white spaces (see Table 1), and the engines usually delegate them to third-party libraries (e.g., ftfy ${}^{10}$ ). Similar tasks are performed when declarative solutions are used for cleaning and preparing the data: these preprocessing tasks are described with FnO in the case of RML and executed either together with the schema transformations or as a separate preprocessing step.
The same occurs for datatype prediction, where regular expressions are often used to detect whether cell values are entities or literals, and which type of literal (string, date, or number). In the case of declarative solutions, this datatype inspection is performed manually; however, adjusting the datatype is possible by relying on functions for data transformations.
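A regex-based datatype predictor of the kind described above can be sketched as follows. The patterns and labels are simplified assumptions; the actual engines use richer rules (and, in MTab's case, SpaCy models).

```python
import re

# Illustrative datatype prediction via regular expressions. The patterns
# and labels are simplified assumptions, not an engine's actual rules.

DATATYPE_PATTERNS = [
    ("number", re.compile(r"^-?\d+([.,]\d+)?$")),
    ("date", re.compile(r"^\d{4}-\d{2}-\d{2}$")),
]

def predict_datatype(value):
    """Return 'number' or 'date' when a pattern matches; otherwise fall
    back to 'string', i.e. a candidate entity mention."""
    for label, pattern in DATATYPE_PATTERNS:
        if pattern.match(value.strip()):
            return label
    return "string"
```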
Most of them also incorporate a lookup step to retrieve the necessary data from the KGs (e.g., using SPARQL queries), including similarity functions or fuzzy search. The search engine for the KG lookups in both JenTab and Mantis V is ElasticSearch, although the former implements the Jaro-Winkler distance [19] while the latter embeds it in a more efficient engine and exploits its query capabilities. Lookups were also incorporated in declarative solutions [20], where lookup services retrieve a URI to identify an entity instead of assigning a new one.
As far as the actual tasks are concerned, each engine implements its own approach for the CEA, CTA, and CPA tasks, although we also find some similarities. The most important ones, implemented in all three engines, are: (i) the Levenshtein distance [21] for filtering candidates, and (ii) the majority vote (called frequency in Mantis V) for selecting the final annotations. We believe that the use of declarative approaches, such as the Function Ontology [16], for describing common functions (e.g., Levenshtein) could make the solutions more comparable. It would also be clearer whether they perform the same function, and more explainable, as current solutions for the automation of KG construction act like black boxes: neither are their implementations always open source, nor are declarative descriptions of what they execute available. Providing at least declarative descriptions of the performed tasks would enhance the transparency of these solutions.
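The two functions shared by all three engines are standard and compact enough to sketch directly. These are plain-Python illustrations of the Levenshtein edit distance and the majority vote, not the engines' implementations.

```python
from collections import Counter

# Illustrative versions of the two functions shared by the three engines:
# Levenshtein edit distance (candidate filtering) and majority vote
# (final annotation selection).

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def majority_vote(annotations):
    """Select the most frequent candidate annotation."""
    return Counter(annotations).most_common(1)[0][0]
```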
${}^{10}$ https://pypi.org/project/ftfy/
Table 1

Tasks comparison among different SemTab solutions

| Task | Subtask | JenTab | MTab | Mantis V |
|---|---|---|---|---|
| KG Lookup | | ElasticSearch on top of KG, SPARQL queries | WikiGraph generation, ad-hoc API | LamAPI (ElasticSearch, Mongo and Python) |
| Preprocessing | Fix encoding | Y | Y | N |
| | Special characters | Y | N | Y |
| | Restore missing spaces | Y | N | Y |
| | Remove HTML tags | N | Y | N |
| | Remove non-cell-values | N | Y | N |
| Datatype | | REGEX, type-based cleaning, cell value identification (literal, entity) | SpaCy models for potential types, majority vote to define column type | REGEX for datatypes exceeding a threshold, entity columns that do not exceed the threshold |
| CEA | CREATE | Different query rewriting techniques | Keyword search (BM25), fuzzy search (Levenshtein distance) | LamAPI lookup with IB similarity |
| | FILTER | Levenshtein distance (among others) | Filter and hashing (Symmetric Delete), context similarities by row | Levenshtein confidence score for entities; literals: XXX |
| | SELECT | Levenshtein distance | Highest context similarity | xxxx |
| CTA | CREATE | Types from CEA | Types from CEA | Types from CEA |
| | FILTER | Remove the less popular types | - | - |
| | SELECT | Majority vote | Majority vote | Majority vote |
| CPA | CREATE | Cell annotations (CEA) and fuzzy match for data properties | Aggregate all properties from CEA by row | Properties from CEA lookups |
| | FILTER | - | - | - |
| | SELECT | Majority vote | Majority vote | Majority vote |
§ 4. CHALLENGES FOR A DECLARATIVE AUTOMATION OF KG CONSTRUCTION
We identify a set of challenges that need to be addressed to declaratively describe solutions for automatic KG construction. These challenges can be divided into two main categories: technical challenges and conceptual challenges.
On the technical side, there is a major difference between the solutions for the automation of KG construction and the execution of declarative KG construction solutions: the former rely on iterative processes that continuously refine and improve a task, with the different tasks influencing each other, whereas declarative KG construction is a linear process that is executed only once. Admittedly, not all declarative rules are executed linearly; solutions that restructure [6] or parallelize [22, 23] them are increasingly encountered. Still, if the solutions for automatic KG construction are declaratively described, their iterative execution needs to be described as well. How do we do that with the mapping languages?
Besides the overall execution process, the iteration patterns are different. The solutions for automatic KG construction are applied in all directions, both per column and per row, and even combined. In contrast, the declarative solutions are applied only per row, and the mapping languages are designed under this assumption. Should the mapping languages be extended to support more iteration patterns? If so, would the rml:iteration property for RML and the relevant constructs in the other mapping languages be sufficient, or are more adjustments required?
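The contrast between the two iteration patterns can be made concrete with a small sketch. The table representation below is an assumption for illustration only.

```python
# Illustrative contrast between the iteration patterns discussed above.
# Declarative engines iterate per row; automation engines also need
# per-column passes (e.g. a CTA majority vote over a whole column).

table = [
    {"Col0": "Eiffel Tower", "Col3": "wrought iron"},
    {"Col0": "Tokyo Tower", "Col3": "steel"},
]

# Per-row iteration (the assumption baked into mapping languages):
rows = [(row["Col0"], row["Col3"]) for row in table]

# Per-column iteration (needed for column-wide decisions such as CTA):
columns = {key: [row[key] for row in table] for key in table[0]}
```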
The solutions for automatic KG construction rely on interrelated tasks that may produce intermediate representations, and their results impact the rest of the tasks. Thus, declarative KG construction solutions need to deal with dynamic and recursive steps (e.g., intermediate representations of the input data sources and mapping rules, multiple function executions, etc.) that can negatively impact the generation process. Declaratively describing these steps is therefore a challenge. Should the mapping languages be further extended then?
On the conceptual side, there are two main differences with respect to the training data and the target KG. In most real projects that declarative solutions tackle, only the input data and sometimes the target ontology are provided; there is neither similar data to train the solutions on, nor are there existing KGs that can be used to find entities or predict relationships. While relying on ontology matching techniques between existing KGs (e.g., DBpedia, Wikidata) and the target ontology, or exploiting NLP approaches between the ontology and the input sources' documentation, could be a solution for the latter, would it be realistic given that most ontologies are not aligned and not all of them provide documentation?
§ 5. CONCLUSIONS AND FUTURE WORK
In this paper, we analyzed KG construction solutions and compared the automatic ones with the declarative ones. While the tasks can be aligned with respect to what they achieve, their execution is fundamentally different and a direct alignment is not feasible.

Automatic solutions for KG construction are required to facilitate the adoption of KGs, but there are also merits to declaratively describing the automation tasks, with respect to maintainability, sustainability, and reproducibility. However, directly aligning the automatic solutions with the declarative ones might be technically and conceptually challenging considering their different execution and iteration patterns. Extending the existing mapping languages would be one solution, but it would require addressing the identified challenges, and more. Would such extensions be feasible and desirable, or would they take the languages beyond their purpose? Mapping languages are, moreover, not the only approach for providing declarative descriptions: declarative descriptions of workflows are emerging as well. Would that be a more viable solution? If so, would the automatic and declarative solutions keep growing in different directions? These are questions we would like to reflect on and discuss during the workshop.
# Supporting Relational Database Joins for Generating Literals in R2RML
Christophe Debruyne ${}^{1}$
${}^{1}$ University of Liège - Montefiore Institute, 4000 Liège, Belgium
## Abstract
Since its publication, R2RML has provided us with a powerful tool for generating RDF from relational data, not necessarily manifested as relational databases. R2RML has its limitations, which are being recognized by W3C's Knowledge Graph Construction Community Group. That same group is currently developing a specification that supersedes R2RML in terms of its functionalities and the types of resources it can transform into RDF, primarily hierarchical documents. The community has a good understanding of the problems of relational data and documents, even if they might need to be approached differently because of their different formalisms. In this paper, we present a challenge that has not yet been addressed for relational databases: generating literals based on (outer-)joins. We propose a simple extension of the R2RML vocabulary and extend the reference algorithm to support the generation of literals based on (outer-)joins. Furthermore, we implemented a proof-of-concept and demonstrated it using a dataset built for benchmarking joins. While it is not (yet) an extension of RML, this contribution informs us how to include such support and how it allows us to create self-contained mappings rather than relying on less elegant solutions.
## Keywords
R2RML, Knowledge Graph Generation, Outer-joins, Joins
## 1. Introduction
R2RML [1] is a powerful technique for transforming relational data into RDF and was published almost a decade ago. R2RML was conceived for relational databases, but can be applied to relational data. Since then, it inspired many initiatives to generalize this approach for other types of data such as RML [2] and xR2RML [3]. Others looked at extending aspects of (R2)RML not pertaining to the sources being transformed, but to tackle unaddressed challenges and requirements such as RDF Collections [3, 4] and functions [5, 6].
The R2RML Recommendation specified a reference algorithm in which relational joins (natural joins or equi-joins, to be specific) can be used to relate resources. The implementation can be broken into two parts: (1) the generation of triples based on a triples map $t{m}_{1}$ related to a logical source, and (2) the generation of triples relating subjects from $t{m}_{1}$ with those of another triples map $t{m}_{2}$ . While (2) does not use an outer-join, the combination of both (1) and (2) ensures that the data being transformed "behaves" as the result of an outer-join. The problem, however, is that support for such outer-joins is only limited to resources; there is no convenient way to do something similar for literals.
---

Third International Workshop On Knowledge Graph Construction, co-located with the ESWC 2022, 30th May 2022, Crete, Greece

c.debruyne@uliege.be (C. Debruyne)

ORCID: 0000-0003-4734-3847 (C. Debruyne)

© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

CEUR Workshop Proceedings (CEUR-WS.org)

---
Figure 1: The tables title and aka_title of the database. A title may be related to one or more aka_titles, and an aka_title may be related to one title.
This paper proposes a simple extension of R2RML to support joins for the generation of literals. It furthermore proposes how the reference algorithm should be extended. We demonstrate this extension using a fairly big relational database developed for benchmarking joins [7]. This benchmark also provides us with a realistic case, motivating the need for such an extension. This paper furthermore positions this contribution with respect to other initiatives developed by the Knowledge Graph Generation community, with the aim of opening a discussion.
## 2. The Problem
We framed the problem in the previous section. In this section, we will rephrase the problem and discuss several approaches to achieve the desired result that one can observe in practice. To this end, we will be using a running example based on the database developed by [7].
To benchmark the performance of joins, [7] developed a database based on the Internet Movie Database ${}^{1}$ (IMDb). In short, their motivation was that existing synthetic benchmarks may be biased, and real and "messy" data provided better grounds for comparison. While that aspect is not important for this paper, the database they developed contains two big tables: title, containing information about movies and their titles, and aka_title, containing variations in titles (either alternative titles or titles in different languages). Figure 1 depicts the relation between the two tables and their attributes. ${}^{2}$
There are two approaches to solving this problem with R2RML:
Sol1 The first is the creation of two triples maps with one dedicated to the generation of triples for the outer-join. The problem with this approach is that the mapping is not self-contained and that there are two distinct triples maps which need to be maintained. One also needs to document that this construct was necessary to facilitate this outer-join. The advantage is that there are two distinct processes for querying the underlying database and thus less overhead.
---
${}^{1}$ https://www.imdb.com/

${}^{2}$ The files were loaded into a MySQL database, but required some minor pre-processing: there were a handful of encoding issues in the files, and NULL values in aka_title were represented with the number 0. We also introduced a foreign key constraint that was not present in the SQL schema provided by [7], since the foreign key constraint optimizes joins on these two tables. The tables contain 2528312 and 361472 records, respectively. There are 93 records in aka_title not referring to a record in title, and 2322682 records in title have no alternative titles.
---
Sol2 The second, more naïve, approach is the use of one triples map with an (outer-)join in its logical table. While this makes the triples map self-contained, unlike the approach above, it may require the processor to process many logical rows that generate the same triples.
We may observe, in the wild, cases of the first approach also being used for referencing object maps, especially when the processor uses the reference algorithm. The problems with respect to the self-containedness of triples maps still hold. An R2RML processor may internally "rewrite" referencing object maps as triples maps to optimize the process.
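The overhead Sol2 incurs can be demonstrated with a small in-memory example. The following Python sketch uses sqlite3 with a toy schema and toy data (both assumptions, scaled down from the IMDb tables) to show why the outer-join produces repeated logical rows that would generate the same subject triples.

```python
import sqlite3

# Toy demonstration of Sol2's overhead: the outer-join repeats a title
# row once per alternative title, so the same subject/title triple would
# be generated repeatedly. Schema and data are illustrative assumptions.

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE title (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE aka_title (movie_id INTEGER, title TEXT);
    INSERT INTO title VALUES (1, 'Movie A'), (2, 'Movie B');
    INSERT INTO aka_title VALUES (1, 'Film A'), (1, 'Pelicula A');
""")
rows = con.execute("""
    SELECT t.id, t.title, a.title
    FROM title t LEFT OUTER JOIN aka_title a ON t.id = a.movie_id
""").fetchall()
# Movie A appears twice (one logical row per alternative title);
# Movie B appears once, with a NULL alternative title.
```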
In the next section, we propose a small extension of R2RML to provide support for joins on literal values.
## 3. Proposed solution
In Listing 1, we demonstrate the extension. It introduces the predicate rrf:parentLogicalTable. ${}^{3}$ The domain of that predicate is rr:RefObjectMap and the range is rr:LogicalTable. Our extension requires that a rr:RefObjectMap must have either a rrf:parentLogicalTable or a rr:parentTriplesMap. A referencing object map may now also generate literals. Where necessary, we will refer to referencing object maps with a parent triples map as "regular" referencing object maps.
```turtle
<#title>
    rr:logicalTable [ rr:tableName "title" ] ;
    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
    rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] ;
    rr:predicateObjectMap [
        rr:predicate ex:title ;
        rr:objectMap [
            rr:column "title" ;
            rrf:parentLogicalTable [ rr:tableName "aka_title" ] ;
            rr:joinCondition [ rr:child "id" ; rr:parent "movie_id" ] ;
        ] ;
    ] .
```
Listing 1: Using parent-logical tables for managing joins
The reference algorithm ${}^{4}$ is extended as follows: step 6 now iterates over all referencing object maps with a rr:parentTriplesMap, and we add a seventh step for each referencing object map that uses a parent-logical table. The steps for generating triples are mostly the same. The two differences are: 1) such an object map may generate any term type, and 2) the column names referred to by the object map are those of the parent. In other words, if both logical tables share a column X, then a reference to X refers to that of the parent. This behavior is consistent with that of regular referencing object maps. An implementation of this algorithm is made available. ${}^{5}$
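The added step can be sketched as follows. This is an illustrative Python rendering of the idea, not the actual implementation: the row dicts, the join helper, and the subject_of callable are assumptions standing in for the processor's internals.

```python
# Illustrative sketch of the added step 7: for each referencing object map
# with a parent-logical table, join child and parent rows on the
# rr:joinCondition and emit one literal triple per matching parent row.
# Data layout and helper names are assumptions for this example only.

def generate_join_literals(child_rows, parent_rows, join, parent_column,
                           subject_of, predicate):
    """Yield (subject, predicate, literal) triples from an equi-join.

    join: (child_column, parent_column) pair, as in rr:joinCondition.
    parent_column: the column the object map refers to; per the extension,
    column references resolve against the parent logical table.
    """
    child_col, parent_join_col = join
    for c in child_rows:
        for p in parent_rows:
            if c[child_col] == p[parent_join_col]:
                yield (subject_of(c), predicate, p[parent_column])

triples = list(generate_join_literals(
    child_rows=[{"id": 1, "title": "Movie A"}],
    parent_rows=[{"movie_id": 1, "title": "Film A"},
                 {"movie_id": 2, "title": "Film B"}],
    join=("id", "movie_id"),
    parent_column="title",
    subject_of=lambda row: f"http://data.example.com/movie/{row['id']}",
    predicate="ex:title",
))
```

Note that, as in the reference algorithm's handling of regular referencing object maps, no triple is produced for child rows without a matching parent row; the child's own predicate-object maps already cover those, which is what makes the combination behave like an outer-join.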
---
${}^{3}$ The namespace rrf refers to the namespace used in [6].
${}^{4}$ https://www.w3.org/TR/r2rml/#generated-rdf
---
## 4. Demonstration
We now present a limited experiment comparing the performance of Sol1, Sol2, and our proposal using the relational database introduced in Section 2. The mappings for Sol1 and Sol2 are in Appendix A. In this experiment, we join using the tables as a whole. As R2RML requires result sets to have unique names for each column, we created a third table aka_title2 where each column received the suffix '2'. We also created a foreign key from aka_title2 to title. We wanted to avoid using subqueries to rename the columns, as these may become materialized and thus have an unfairly negative impact on the outcome.
The experiment was run on a MacBook Pro with a 2.3 GHz Dual-Core Intel Core i5 processor and 16 GB of 2133 MHz LPDDR3 RAM. The database was stored in a MySQL 8.0 database in a Docker container. The code for the experiment was written in Java and executed each mapping 11 times, of which the first run was removed to avoid bias from a cold start. The code calls upon the extension of R2RML-F and registers timestamps before and after executing each mapping. We have not registered the time for writing the graph onto the hard disk.
From Figure 2, which shows the average run times in seconds, it is clear that the approach of using two different triples maps (Sol1) is much faster than the two other approaches, which comes as no surprise. The problem, however, is that we have two distinct triples maps whose relationship is not explicit. Placing the outer-join in the logical table (Sol2) has the worst performance: the outer join yields a result set with 155749 more records than the referred table and contains twice the number of attributes. The overhead can be significantly reduced by only selecting the columns of interest, but the three mappings refer to the logical tables as a whole. Unsurprisingly, our solution is less efficient than Sol1 but considerably more efficient than Sol2.
We may conclude from these initial results that the proposed solution is not only viable; it also ensures that the mappings remain self-contained. While performance is crucial in knowledge graph generation, we argue that even the vocabulary alone is a contribution, and that an R2RML processor can rewrite referencing object maps (of both types) into distinct triples maps.
## 5. Discussion
In this paper, we extended the concept of rr: RefObjectMap to support joins for literal values. The reference algorithm for R2RML processes these in a separate loop for the generation of relations between subjects of two triples maps. Our approach added a similar step to the generation of literals based on a join. One may ask whether this approach may be adopted for term maps in general. The generation of subjects, predicates, and graphs for relational databases is based on a logical row. Generalizing this approach for such term maps may require a join per row, which is not efficient and is thus best done in the logical table of a triples map.
---
${}^{5}$ https://github.com/chrdebru/r2rml/tree/r2rml-join
---
Figure 2: Time taken to process the three mappings: Sol1 (two triples maps for the outer-join), Sol2 (one triples map with the outer-join in the logical table), and our proposed solution.
As we can generate resources with our approach, one can question whether the notion of parent triples maps is still necessary. The reference algorithm uses both logical tables, even though a processor can select only those columns used by the subject maps. The question arises: do we refer to (data in) sources, or do we refer to triples maps?
Related to this work is the approach proposed by [8], where "fields" were proposed to manipulate and even combine sources prior to generating RDF. Their work, demonstrated with hierarchical data, aimed to address the problem that references may yield multiple results and that sources may contain data of mixed formats. They also introduced an abstraction allowing one to retrieve information via a reference that does not depend on the underlying reference formulation. To the best of my knowledge, support for relational databases and the addition of fields from different tables has not yet been published. However, as they declare fields on the logical source, such an approach may boil down to a situation similar to Sol2 mentioned in Section 2.
## 6. Conclusions
We addressed the problem of generating literals from an outer-join, which R2RML does not support. While interesting initiatives are proposed for mostly hierarchical documents, we wanted to address this problem for relational databases by extending R2RML. We proposed a small extension with few implications regarding the R2RML vocabulary. We also extended the reference algorithm and provided an implementation that we have analyzed in an experiment.
From this paper, we can conclude that, for relational databases, our approach is a viable solution. While not as efficient as using disjoint triples maps, it may nonetheless be worth considering. It is essential not to regard this vocabulary extension as syntactic sugar, as that would imply it is shorthand for something semantically equivalent. In our approach, the mappings are self-contained, and the relationship between the two logical tables is thus explicit.
We have addressed this problem for relational databases and R2RML. We could envisage that such an approach could be part of RML, which has the ambition to supersede R2RML. How this approach would work for non-relational data is to be studied.
## A. Mappings Used in the Experiment
---
|
| 150 |
+
|
| 151 |
+
###MAPPING USED FOR SOL1 IN THE EXPERIMENT
|
| 152 |
+
|
| 153 |
+
<#title_tm>
rr:logicalTable [ rr:tableName "title" ] ;
rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] .

<#aka_title_tm>
rr:logicalTable [ rr:tableName "aka_title" ] ;
rr:subjectMap [ rr:template "http://data.example.com/movie/{movie_id}" ; rr:class ex:Movie ; ] ;
rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] .

### MAPPING USED FOR SOL2 IN THE EXPERIMENT
<#title_tm>
rr:logicalTable [
rr:sqlQuery "SELECT * FROM title t LEFT OUTER JOIN aka_title2 a ON t.id = a.movie_ID2" ] ;
rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ;
rr:objectMap [ rr:column "title2" ] ;
] .

---
## References

[1] S. Das, R. Cyganiak, S. Sundara, R2RML: RDB to RDF Mapping Language, 2012. URL: https://www.w3.org/TR/r2rml/.

[2] A. Dimou, M. V. Sande, P. Colpaert, R. Verborgh, E. Mannens, R. V. de Walle, RML: A Generic Language for Integrated RDF Mappings of Heterogeneous Data, in: C. Bizer, T. Heath, S. Auer, T. Berners-Lee (Eds.), Proceedings of the Workshop on Linked Data on the Web co-located with the 23rd International World Wide Web Conference (WWW 2014), Seoul, Korea, April 8, 2014, volume 1184 of CEUR Workshop Proceedings, CEUR-WS.org, 2014. URL: http://ceur-ws.org/Vol-1184/ldow2014_paper_01.pdf.

[3] F. Michel, L. Djimenou, C. Faron-Zucker, J. Montagnat, Translation of relational and non-relational databases into RDF with xR2RML, in: V. Monfort, K. Krempels, T. A. Majchrzak, Z. Turk (Eds.), WEBIST 2015 - Proceedings of the 11th International Conference on Web Information Systems and Technologies, Lisbon, Portugal, 20-22 May, 2015, SciTePress, 2015, pp. 443-454. URL: https://doi.org/10.5220/0005448304430454. doi:10.5220/0005448304430454.

[4] C. Debruyne, L. McKenna, D. O'Sullivan, Extending R2RML with support for RDF collections and containers to generate MADS-RDF datasets, volume 10450 LNCS, 2017. doi:10.1007/978-3-319-67008-9_42.

[5] B. D. Meester, W. Maroy, A. Dimou, R. Verborgh, E. Mannens, Declarative data transformations for linked data generation: The case of DBpedia, in: E. Blomqvist, D. Maynard, A. Gangemi, R. Hoekstra, P. Hitzler, O. Hartig (Eds.), The Semantic Web - 14th International Conference, ESWC 2017, Portoroz, Slovenia, May 28 - June 1, 2017, Proceedings, Part II, volume 10250 of Lecture Notes in Computer Science, 2017, pp. 33-48. URL: https://doi.org/10.1007/978-3-319-58451-5_3. doi:10.1007/978-3-319-58451-5_3.

[6] C. Debruyne, D. O'Sullivan, R2RML-F: towards sharing and executing domain logic in R2RML mappings, in: S. Auer, T. Berners-Lee, C. Bizer, T. Heath (Eds.), Proceedings of the Workshop on Linked Data on the Web, LDOW 2016, co-located with 25th International World Wide Web Conference (WWW 2016), volume 1593 of CEUR Workshop Proceedings, CEUR-WS.org, 2016. URL: http://ceur-ws.org/Vol-1593/article-13.pdf.

[7] V. Leis, A. Gubichev, A. Mirchev, P. A. Boncz, A. Kemper, T. Neumann, How good are query optimizers, really?, Proc. VLDB Endow. 9 (2015) 204-215. URL: http://www.vldb.org/pvldb/vol9/p204-leis.pdf. doi:10.14778/2850583.2850594.

[8] T. Delva, D. V. Assche, P. Heyvaert, B. D. Meester, A. Dimou, Integrating nested data into knowledge graphs with RML fields, in: D. Chaves-Fraga, A. Dimou, P. Heyvaert, F. Priyatna, J. F. Sequeda (Eds.), Proceedings of the 2nd International Workshop on Knowledge Graph Construction co-located with 18th Extended Semantic Web Conference (ESWC 2021), Online, June 6, 2021, volume 2873 of CEUR Workshop Proceedings, CEUR-WS.org, 2021. URL: http://ceur-ws.org/Vol-2873/paper9.pdf.
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/Hzx73hzWzq/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,161 @@
§ SUPPORTING RELATIONAL DATABASE JOINS FOR GENERATING LITERALS IN R2RML

Christophe Debruyne ${}^{1}$

${}^{1}$ University of Liège - Montefiore Institute, 4000 Liège, Belgium

§ ABSTRACT

Since its publication, R2RML has provided us with a powerful tool for generating RDF from relational data, not necessarily manifested as relational databases. R2RML has its limitations, which are being recognized by W3C's Knowledge Graph Construction Community Group. That same group is currently developing a specification that supersedes R2RML in terms of its functionalities and the types of resources it can transform into RDF, primarily hierarchical documents. The community has a good understanding of the problems of relational data and documents, even if they might need to be approached differently because of their different formalisms. In this paper, we present a challenge that has not been addressed yet for relational databases: generating literals based on (outer-)joins. We propose a simple extension of the R2RML vocabulary and extend the reference algorithm to support the generation of literals based on (outer-)joins. Furthermore, we implemented a proof-of-concept and demonstrated it using a dataset built for benchmarking joins. While it is not (yet) an extension of RML, this contribution informs us how to include such support and how it allows us to create self-contained mappings rather than relying on less elegant solutions.

§ KEYWORDS

R2RML, Knowledge Graph Generation, Outer-joins, Joins

§ 1. INTRODUCTION

R2RML [1] is a powerful technique for transforming relational data into RDF and was published almost a decade ago. R2RML was conceived for relational databases, but can be applied to relational data in general. Since then, it has inspired many initiatives to generalize this approach for other types of data, such as RML [2] and xR2RML [3]. Others looked at extending aspects of (R2)RML not pertaining to the sources being transformed, but to tackle unaddressed challenges and requirements such as RDF Collections [3, 4] and functions [5, 6].
The R2RML Recommendation specified a reference algorithm in which relational joins (natural joins or equi-joins, to be specific) can be used to relate resources. The implementation can be broken into two parts: (1) the generation of triples based on a triples map $tm_{1}$ related to a logical source, and (2) the generation of triples relating subjects from $tm_{1}$ with those of another triples map $tm_{2}$. While (2) does not use an outer-join, the combination of both (1) and (2) ensures that the data being transformed "behaves" as the result of an outer-join. The problem, however, is that support for such outer-joins is limited to resources; there is no convenient way to do something similar for literals.
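The outer-join-like behavior of combining steps (1) and (2) can be illustrated with a minimal sketch, using an in-memory SQLite database. Only the table and column names follow the running example used later in this paper; the toy rows are made up for illustration:

```python
import sqlite3

# Toy instance (data made up): movie 2 has no alternative title in aka_title.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE title(id INTEGER, title TEXT);
    CREATE TABLE aka_title(movie_id INTEGER, title TEXT);
    INSERT INTO title VALUES (1, 'Movie A'), (2, 'Movie B');
    INSERT INTO aka_title VALUES (1, 'Film A');
""")

# (1) triples generated from the triples map over "title" alone
part1 = [(f"movie/{i}", "ex:title", t)
         for i, t in con.execute("SELECT id, title FROM title")]

# (2) triples generated from the equi-join relating the two triples maps
part2 = [(f"movie/{i}", "ex:title", t)
         for i, t in con.execute(
             "SELECT t.id, a.title FROM title t, aka_title a "
             "WHERE t.id = a.movie_id")]

# The inner join alone would drop movie 2; combined with (1), every title row
# still contributes a subject, which is the behavior of a LEFT OUTER JOIN.
subjects = {s for s, _, _ in part1 + part2}
assert subjects == {"movie/1", "movie/2"}
assert {s for s, _, _ in part2} == {"movie/1"}
```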

Third International Workshop On Knowledge Graph Construction Co-located with the ESWC 2022, 30th May 2022, Crete, Greece

c.debruyne@uliege.be (C. Debruyne)

ORCID: 0000-0003-4734-3847 (C. Debruyne)

© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

CEUR Workshop Proceedings (CEUR-WS.org)

Figure 1: The tables title and aka_title of the database. A title may be related to one or more aka_titles, and an aka_title may be related to one title.

This paper proposes a simple extension of R2RML to support joins for the generation of literals. It furthermore proposes how the reference algorithm should be extended. We demonstrate this extension using a fairly big relational database developed for benchmarking joins [7]. This benchmark also provides us with a realistic case, motivating the need for such an extension. This paper furthermore positions this contribution with respect to other initiatives developed by the Knowledge Graph Generation community with the aim of opening a discussion.

§ 2. THE PROBLEM

We framed the problem in the previous section. In this section, we rephrase the problem and discuss several approaches one can observe in practice to achieve the desired result. To this end, we will use a running example based on the database developed by [7].

To benchmark the performance of joins, [7] developed a database based on the Internet Movie Database ${}^{1}$ (IMDb). In short, their motivation was that existing synthetic benchmarks may be biased, and real and "messy" data provided better grounds for comparison. While that aspect is not important for this paper, the database they developed contains two big tables: title, containing information about movies and their titles, and aka_title, containing variations in titles (either an alternative title or titles in different languages). Figure 1 depicts the relation between the two tables and their attributes. ${}^{2}$

There are two approaches to solving this problem with R2RML:

Sol1 The first is the creation of two triples maps, with one dedicated to the generation of triples for the outer-join. The problem with this approach is that the mapping is not self-contained and that there are two distinct triples maps which need to be maintained. One also needs to document that this construct was necessary to facilitate this outer-join. The advantage is that there are two distinct processes for querying the underlying database and thus less overhead.

${}^{1}$ https://www.imdb.com/

${}^{2}$ The files were loaded into a MySQL database, but required some minor pre-processing: there were a handful of encoding issues in the files, and NULL values in aka_title were represented with the number 0. We also introduced a foreign key constraint that was not present in the SQL schema provided by [7], as the foreign key constraint optimizes joins on these two tables. The tables contain 2528312 and 361472 records, respectively. There are 93 records in aka_title not referring to a record in title, and 2322682 records in title have no alternative titles.

Sol2 The second, more naïve approach is the use of one triples map with an (outer-)join in its logical table. While this makes the triples map self-contained, unlike the approach above, it may require the processor to process many logical rows that generate the same triples.

We may observe, in the wild, cases of the first approach also being adopted for referencing object maps, especially when the processor uses the reference algorithm. The problems with respect to the self-containedness of triples maps still hold. An R2RML processor may internally "rewrite" referencing object maps as triples maps to optimize the process.

In the next section, we propose a small extension of R2RML to provide support for joins on literal values.

§ 3. PROPOSED SOLUTION
In Listing 1, we demonstrate the extension. It introduces the predicate rrf:parentLogicalTable. ${}^{3}$ The domain of that predicate is rr:RefObjectMap and the range is rr:LogicalTable. Our extension requires that a rr:RefObjectMap must have either a rrf:parentLogicalTable or a rr:parentTriplesMap. A referencing object map may now also generate literals. Where necessary, we will refer to object maps with a parent-triples map as "regular" referencing object maps.

<#title>
rr:logicalTable [ rr:tableName "title" ] ;
rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] ;
rr:predicateObjectMap [
rr:predicate ex:title ;
rr:objectMap [
rr:column "title" ;
rrf:parentLogicalTable [ rr:tableName "aka_title" ] ;
rr:joinCondition [ rr:child "id" ; rr:parent "movie_id" ] ;
] ;
] .

§ LISTING 1: USING PARENT-LOGICAL TABLES FOR MANAGING JOINS
The reference algorithm ${}^{4}$ is extended as follows: step 6 will now iterate over all referencing object maps with a rr:parentTriplesMap, and we add a 7th step for each referencing object map that uses a parent-logical table. The steps for generating triples are mostly the same. The two differences are: 1) it may generate any term type, and 2) the column names referred to by the object map are those of the parent. In other words, if both logical tables share a column $\mathrm{X}$, then a reference to $\mathrm{X}$ would be to that of the parent. This behavior is consistent with that of regular referencing object maps. An implementation of this algorithm is made available. ${}^{5}$
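A processor implementing this 7th step could, for instance, evaluate a query modeled on R2RML's joint SQL query for referencing object maps. The following sketch is an assumption about one possible implementation, not the published one; the `joint_query` helper and its parameters are hypothetical, while the table, column, and join names are those of Listing 1:

```python
import sqlite3

# Hypothetical helper: build a joint query in the style of R2RML's joint SQL
# query for referencing object maps. The object's column reference resolves
# against the parent, mirroring regular referencing object maps.
def joint_query(child_table, parent_table, child_col, parent_col, object_col):
    return (f"SELECT child.*, parent.{object_col} AS parent_{object_col} "
            f"FROM (SELECT * FROM {child_table}) AS child, "
            f"(SELECT * FROM {parent_table}) AS parent "
            f"WHERE child.{child_col} = parent.{parent_col}")

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE title(id INTEGER, title TEXT);
    CREATE TABLE aka_title(movie_id INTEGER, title TEXT);
    INSERT INTO title VALUES (1, 'Movie A');
    INSERT INTO aka_title VALUES (1, 'Film A');
""")

q = joint_query("title", "aka_title", "id", "movie_id", "title")
rows = list(con.execute(q))
# one row per match; the literal is taken from the parent's "title" column
```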

${}^{3}$ The namespace rrf refers to the namespace used in [6].

${}^{4}$ https://www.w3.org/TR/r2rml/#generated-rdf

§ 4. DEMONSTRATION

We now present a limited experiment comparing the performance of Sol1, Sol2, and our proposal using the relational database introduced in Section 2. The mappings for Sol1 and Sol2 are in Appendix A. In this experiment, we join using the tables as a whole. As R2RML requires result sets to have unique names for each column, we created a third table aka_title2 where each column received the suffix '2'. We also created a foreign key from aka_title2 to title. We wanted to avoid using subqueries to rename the columns, as these may become materialized and thus have an unfairly negative impact on the outcome.
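The preparation of aka_title2 can be sketched as follows. This is a stand-in using SQLite rather than the MySQL setup of the experiment, and the column list is abbreviated to the attributes discussed in this paper, so the full IMDb schema is not reproduced:

```python
import sqlite3

# Sketch (abbreviated schema): aka_title2 is a copy of aka_title whose columns
# carry the suffix '2', so the outer-join's result set has unique column names.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE title(id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE aka_title(id INTEGER, movie_id INTEGER, title TEXT);
    CREATE TABLE aka_title2(
        id2 INTEGER,
        movie_id2 INTEGER REFERENCES title(id),  -- foreign key to title
        title2 TEXT
    );
    INSERT INTO aka_title2 SELECT id, movie_id, title FROM aka_title;
""")

# With the renamed columns, "SELECT * FROM title t LEFT OUTER JOIN aka_title2 a
# ON t.id = a.movie_id2" is a valid R2RML logical table (no duplicate names).
cols = [row[1] for row in con.execute("PRAGMA table_info(aka_title2)")]
```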

The experiment was run on a MacBook Pro with a 2.3 GHz Dual-Core Intel Core i5 processor and 16 GB 2133 MHz LPDDR3 RAM. The database was stored in a MySQL 8.0 database in a Docker container. The code for the experiment was written in Java and ran each mapping 11 times, of which the first run was removed to avoid bias from a cold start. The code calls upon the extension of R2RML-F and registers timestamps before and after executing the mapping. We did not register the time for writing the graph onto the hard disk.
From Figure 2, which shows the average run times in seconds, it is clear that the approach of using two different triples maps (Sol1) is much faster than the two other approaches, which comes as no surprise. The problem, however, is that we have two distinct triples maps and their relationship is not explicit. Placing the outer-join in the logical table (Sol2) has the worst performance. The outer join yields a result set with 155749 more records than the referred table and contains twice the number of attributes. The overhead can be significantly reduced by only selecting the columns of interest, but the three mappings refer to the logical tables as a whole. Unsurprisingly, our solution is less efficient than Sol1 but considerably more efficient than Sol2.

We may conclude from these initial results that the proposed solution is not only viable; it also ensures that the mappings remain self-contained. While performance is crucial in knowledge graph generation, we argue that the vocabulary extension itself is a contribution and that an R2RML processor can rewrite referencing object maps (of both types) into distinct triples maps.
§ 5. DISCUSSION

In this paper, we extended the concept of rr:RefObjectMap to support joins for literal values. The reference algorithm for R2RML processes these in a separate loop for the generation of relations between subjects of two triples maps. Our approach added a similar step for the generation of literals based on a join. One may ask whether this approach may be adopted for term maps in general. The generation of subjects, predicates, and graphs for relational databases is based on a logical row. Generalizing this approach for such term maps may require a join per row, which is not efficient and is thus best done in the logical table of a triples map.

${}^{5}$ https://github.com/chrdebru/r2rml/tree/r2rml-join

Figure 2: Time taken to process three mappings: Sol1 - two triples maps for the outer-join, Sol2 - one triples map with the outer-join in the logical table, and our proposed solution.

As we can generate resources with our approach, one can question whether the notion of parent-triples maps is still necessary. The reference algorithm uses both logical tables, even though a processor could select only those columns used by the subject maps. The question arises: do we refer to (data in) sources, or do we refer to triples maps?

Related to this work is the approach proposed by [8], where "fields" were proposed to manipulate and even combine sources prior to generating RDF. Their work, demonstrated with hierarchical data, aimed to address the problem that references may yield multiple results and that sources may contain data of mixed formats. They also introduced an abstraction allowing one to retrieve information via a reference that does not depend on the underlying reference formulation. To the best of my knowledge, support for relational databases and the addition of fields from different tables has not yet been published. However, as they declare fields on the logical source, such an approach may boil down to a situation similar to Sol2 mentioned in Section 2.
§ 6. CONCLUSIONS

We addressed the problem of generating literals from an outer-join, which R2RML does not support. While interesting initiatives have been proposed, mostly for hierarchical documents, we wanted to address this problem for relational databases by extending R2RML. We proposed a small extension with few implications for the R2RML vocabulary. We also extended the reference algorithm and provided an implementation that we analyzed in an experiment.

From this paper, we can conclude that, for relational databases, our approach is a viable solution. While not as efficient as disjoint triples maps, it is nonetheless worth considering as an approach. It is essential not to consider this vocabulary extension as syntactic sugar, as that would imply it is shorthand for something semantically equivalent. In our approach, the mappings are self-contained, and the relationship between the two logical tables is thus explicit.

We have addressed this problem for relational databases and R2RML. We could envisage that such an approach could be part of RML, which has the ambition to supersede R2RML. How this approach would work for non-relational data remains to be studied.

§ A. MAPPINGS USED IN THE EXPERIMENT

### MAPPING USED FOR SOL1 IN THE EXPERIMENT

<#title_tm>
rr:logicalTable [ rr:tableName "title" ] ;
rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] .

<#aka_title_tm>
rr:logicalTable [ rr:tableName "aka_title" ] ;
rr:subjectMap [ rr:template "http://data.example.com/movie/{movie_id}" ; rr:class ex:Movie ; ] ;
rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] .

### MAPPING USED FOR SOL2 IN THE EXPERIMENT

<#title_tm>
rr:logicalTable [
rr:sqlQuery "SELECT * FROM title t LEFT OUTER JOIN aka_title2 a ON t.id = a.movie_ID2" ] ;
rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ;
rr:objectMap [ rr:column "title2" ] ;
] .
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,301 @@
# What is needed in a Knowledge Graph Management Platform? A survey and a proposal

Samira Babalou* ${}^{1,2}$, Franziska Zander ${}^{1,2}$, Erik Kleinsteuber ${}^{1}$, Badr El Haouni ${}^{1}$, David Schellenberger Costa ${}^{2}$, Jens Kattge ${}^{2}$, Birgitta König-Ries ${}^{1,2,3}$

${}^{1}$ Heinz-Nixdorf Chair for Distributed Information Systems, Institute for Computer Science, Friedrich Schiller University Jena, Germany

${}^{2}$ German Center for Integrative Biodiversity Research (iDiv), Halle-Jena-Leipzig, Germany

${}^{3}$ Michael-Stifel-Center for Data-Driven and Simulation Science, Jena, Germany

${}^{4}$ Institute of Biology/Geobotany and Botanical Garden, Martin Luther University, Halle, Germany

Corresponding author: samira.babalou@uni.jena.de

Abstract. Knowledge Graphs (KGs) play a significant and growing role for semantics-based support of a wide variety of applications. Until recently, creating and maintaining such knowledge graphs was done in a one-off manner requiring significant manual effort and expertise. Over the last few years, the first KG management platforms supporting the lifecycle of KGs from their creation to their maintenance and use have appeared. In this paper, we first survey these platforms. We then take a step further and identify common functionalities across such platforms. We discuss nineteen such functionalities categorized into four groups: creating, extending, using, and maintaining KGs. Based on the findings of this analysis, we present our proposed KG management platform for the biodiversity domain, iKNOW. We focus on the architecture and the KG creation workflow, but also touch on other aspects.

Keywords: Semantic Web · Knowledge Graph · Knowledge Graph Platform · Data Services and Functionality

## 1 Introduction

Increasingly, Knowledge Graphs (KGs) form the semantic data management backbone for a wide variety of applications. A KG [1] consists of nodes connected by edges. It is built from a set of data sources via different techniques. Besides the instances, KGs can also contain schema information, which can be refined or augmented, e.g., by using a reasoner. Assigning unique identifiers to a KG's entities can accelerate the interlinking with other resources on the web. The underlying structure of KGs opens a door for further functionalities such as visualization, supporting keyword search, and complex queries via a SPARQL endpoint.

Although KGs have widely gained attention in industry and academia, developing and managing their lifecycle requires a huge effort, expertise, and different functionalities. While, in the beginning, KGs were typically one-off manual efforts, there is a growing awareness that to exploit the capabilities of Knowledge Graph technologies to the maximal extent, support for their creation, access, update, and maintenance is needed. Many of these functionalities are not specific to any given KG but can be provided rather generically. KG platforms aim to do just that.

As our contribution, in this paper, we survey existing KG management platforms and compare them in a general way. We then take a step further and analyze nineteen functionalities in four categories: creating, extending, using, and maintaining KGs. To the best of our knowledge, this is the first survey about KG platforms. Based on the findings of this survey and the needs in our domain, biodiversity research, we have designed our own KG platform. We present this platform, iKNOW, in the second part of the paper.

The rest of the paper is organized as follows. Section 2 surveys existing KG management platforms. The common functionalities of platforms are discussed in Section 3. Our proposal for a KG management platform focused on biodiversity, iKNOW, is presented in Section 4. The paper is concluded in Section 5.

## 2 Literature Review

In this paper, we define a Knowledge Graph Platform as a web-based platform for creating, managing, and making use of KGs. Such platforms mostly cover the whole lifecycle of KG applications and include relevant services or functionalities for the interaction with and management of KGs.

We contrast these with efforts to build an individual, specific KG. There have been many such efforts in different domains: e.g., Ozymandias [2] in the biodiversity domain, BCKG [3] in the biomedical domain, and I40KG [4] in the industrial domain. These KGs were built one time, and now their associated websites provide KG access and usage. Such approaches are out of the scope of this paper. Rather, we focus on KG management platforms, which offer a set of operations such as generation and updates on the KG.

In the following subsections, we first present the survey methodology used in this paper, then we briefly summarize the existing KG management platforms and compare them in a general way.

### 2.1 Survey Methodology

In this subsection, we describe our systematic approach to finding publications on KG platforms: We queried for the keyword "Knowledge Graph Platform" in the Google Scholar search engine ${}^{1}$. At the time of querying, this resulted in 162 papers (including citations and patents). We used the Publish or Perish 8 tool ${}^{2}$ to save the result of the query. The result is available in our GitHub repository ${}^{3}$. Among the list of papers, we selected the relevant papers manually. We aimed to select papers that focus on KG management platforms. Some papers appeared in the Google Scholar results because our keyword exists in their texts (e.g., in the literature review section), but those papers do not propose a new KG platform. We did not include such cases. Moreover, we did not consider survey papers and papers written in a language other than English. In our repository, we specified which papers have been selected, and for non-selected ones, we clarified the reason. As a result, we came up with 11 KG platforms, briefly detailed in the following subsection.

---

${}^{1}$ https://scholar.google.com/, accessed on 09.02.2022

${}^{2}$ https://harzing.com/blog/2021/10/publish-or-perish-version-8

---

### 2.2 Existing KG Management Platforms

In this section, we give a brief overview of existing platforms:

- BBN (Blue Brain Nexus) [5] is an open-source platform. The KG in this platform can be built from datasets generated from heterogeneous sources and formats. BBN has three main components: i) Nexus Delta, a set of services targeting developers for managing data and the knowledge graph lifecycle; ii) Nexus Fusion, a web-based user interface enabling users to store, view, query, access, and share (meta)data and manage knowledge graphs; and iii) Nexus Forge, a Python user interface enabling data and knowledge engineers to build knowledge graphs from various data sources and formats using data mappings, transformations, and validations.

- CPS (Corpus Processing Service) [6] is a cloud platform to create and serve Knowledge Graphs over a set of corpora. It uses state-of-the-art natural language understanding models to extract entities and relationships from documents.

- HAPE (Heaven Ape) [7] is a programmable KG platform. The architecture of HAPE is designed in three parts: the client side, which provides various kinds of services to the users; the server side, which provides different knowledge management and processing services; and the third part, which is the KG's knowledge base. The applicability of the platform has been shown over DBpedia data. Moreover, the quality of the created KG has been evaluated via metrics introduced in [8]. Although the authors claimed in their published paper that the platform is open to the public, to the best of our knowledge, there is no link to the platform's source code or an online web portal.

- Metaphactory [9] is an enterprise platform for building Knowledge Graph management applications. This platform supports different categories of users (end-users, expert users, and application developers), has a customizable UI, and enables the rapid building of use case-specific applications. Metaphactory allows configuring and managing connections to many data repositories. In this platform, data sources are virtually integrated with an ontology-based data access engine, i.e., on-the-fly integration of diverse data sources. The platform is assessed via assessment parameters introduced in [10].

---

${}^{3}$ https://github.com/fusion-jena/iKNOW

---

- Meng et al. [11] proposed a power marketing KG platform. The authors used a Machine Learning (ML) method to extract knowledge from unstructured text. The knowledge instances are stored in a relational database. The relationships between knowledge items are stored in a graph database.

- MONOLITH [12] is a KG platform combined with Ontology-based Data Management (OBDM) capabilities over relational and non-relational databases, resulting in one (virtual) data source. The functionalities provided by MONOLITH can be split into two groups: one dedicated to managing OWL ontologies and providing OBDM services, exploiting the mappings between ontology and database; the other to managing KGs and providing services over them. These two groups are linked together, allowing one to build KGs through semantic data access from the results of the ontology queries.

- News Hunter [13] is geared towards supporting journalism by aggregating and semantically integrating news from a variety of sources. It is based on a microservices architecture and consists of a number of independent services: First, an extensible set of harvesters aggregates information from individual sources or existing news. Harvested news items and relevant metadata are deduplicated and stored in a source database. A translator converts items into a canonical language; this allows for cross-language news linking and the application of a broad range of existing NLP (Natural Language Processing) tools. The next step, called Lifting in the paper, runs the extracted news items through an NLP pipeline which performs named-entity recognition as well as sentiment and topic analysis. Results of this step are stored in a graph database. ML-based classifiers are used to assign labels to news items, thereby annotating them with terms from a common ontology. Via an enricher, the KG can be augmented by information from external sources, e.g., DBpedia Spotlight.
|
| 76 |
+
|
| 77 |
+
- TCMKG [14] is a KG platform for Traditional Chinese Medicine (TCM) based on the deep learning method. First, an ontology layer represents the knowledge-based diagnosis and treatment process. It includes core entities of the domain with their associated relations. Then, with the help of a named entity recognition (NER) model, TCM entities from unstructured data are extracted.
|
| 78 |
+
|
| 79 |
+
- UWKGM [15] is a modular web-based platform for KG management. It enables users to integrate different functionalities as RESTful API services into the platform to help different user roles customize the platform as needed. The platform consists of three main components: the backend (API), the frontend (UI), and the system manager (for installation, upgrading, and deployment). The embedded entity suggestion module enables automatic triple extraction and maintains human involvement for quality control.
|
| 80 |
+
|
| 81 |
+
- YABKO [16] is the successor of HAPE and aims to support the life cycle research on KGs. Researchers can upload their KGs and tools to the YABKO platform that can be free of use for other researchers' experiments. For any requested experiment, YABKO assigns necessary resources (space, time, KGs, tools) to it. After finishing an experiment, the short-term experiment will be dissolved, while the long-term ones can continue to exist on the condition of publishing their results. The core motivation of building YABKO is to help visitors use open-source techniques and resources to perform experiments on KGs and share experiences with other researchers.
|
| 82 |
+
|
| 83 |
+
- Yang et al., [17] proposed a cloud computing cultural knowledge platform over multiple data sources such as Chinese Wikis, lexical databases, and cultural websites. The platform restricts the knowledge in the field of Chinese public cultural services instead of common sense knowledge. The platform has a set of services for building, updating, and maintaining the KG. It uses rule-based reasoning methods to analyze the existing KG relations to predict the new possible relations.
|
| 84 |
+
|
| 85 |
+
### 2.3 Comparing Existing KG Management Platforms

In Table 1, we summarize general information about the introduced KG platforms with respect to their Name, the Year of release (based on the published paper), the Source Data Type used to build KGs, whether they target industry or Academia, their Open-Source accessibility, the availability of an Online Demo, a test with a Use Case Study, and, finally, the KG Construction Method supported by the platform. Looking at the table, one can observe that:

- most platforms have been introduced in the last three years. This shows that the field is still young and most likely still evolving. This observation is confirmed by our analysis of provided functionality (see below).

- the platforms are very heterogeneous with respect to the number and type of data sources they support.

- for KG construction, basically all platforms follow an ETL (Extract, Transform, Load) process along with Machine Learning (ML) approaches. They differ in how adaptable this process is and, partially depending on the type of supported data sources, in the concrete steps involved in this process.

- a (to us) surprisingly high percentage of platforms are designed for use within industry (as opposed to academia). This may be one of the reasons why quite a few of these platforms are not open source.

- all platforms come with a use case study that shows the platform's capabilities by describing a specific KG's usage in a selected application domain.

## 3 Common Functionalities in KG Management Platforms

In this section, we take a closer look at the KG platforms, extract what functionalities they offer, and compare them with respect to these functionalities. We attribute a functionality to a platform if it is mentioned in the respective paper. Platforms may possess other functionalities not mentioned in the papers, so a missing entry does not necessarily mean a platform does not offer a certain functionality. Overall, many of the papers were surprisingly vague about what functionality the platforms offer, so that a clear decision was not always possible. From our analysis, we identified nineteen different functionalities, which can be grouped into four categories as follows:

Table 1: Comparing existing KG management platforms concerning their names, the year of release, the type of source data used to build KGs, targeting academia or not, being open-source, availability of an online demo, testing in a use case study, and the KG construction method. ✓* means currently not available and - means not mentioned.

<table><tr><td>no.</td><td>Platform</td><td>Year</td><td>Source Data Type</td><td>Academia</td><td>Open-Source</td><td>Online Demo</td><td>Use Case Study</td><td>KG Construction Method</td></tr><tr><td>1</td><td>BBN [5]</td><td>2021</td><td>different types</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>customized ETL process</td></tr><tr><td>2</td><td>CPS [6]</td><td>2020</td><td>text</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>Machine Learning</td></tr><tr><td>3</td><td>HAPE [7]</td><td>2020</td><td>different types</td><td>✓</td><td>✘</td><td>✘</td><td>✓</td><td>-</td></tr><tr><td>4</td><td>Metaphactory [9]</td><td>2019</td><td>different types</td><td>✘</td><td>✘</td><td>✓</td><td>✓</td><td>customized ETL process</td></tr><tr><td>5</td><td>Meng et al. [11]</td><td>2021</td><td>unstructured text</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>Machine Learning</td></tr><tr><td>6</td><td>MONOLITH [12]</td><td>2019</td><td>-</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td><td>customized ETL process</td></tr><tr><td>7</td><td>News Hunter [13]</td><td>2020</td><td>text</td><td>-</td><td>✘</td><td>✘</td><td>✓</td><td>Machine Learning</td></tr><tr><td>8</td><td>TCMKG [14]</td><td>2020</td><td>different types</td><td>-</td><td>✘</td><td>✘</td><td>✓</td><td>Machine Learning</td></tr><tr><td>9</td><td>UWKGM [15]</td><td>2020</td><td>unstructured text</td><td>-</td><td>✓</td><td>✓</td><td>✓</td><td>customized ETL process</td></tr><tr><td>10</td><td>YABKO [16]</td><td>2021</td><td>different types</td><td>✓</td><td>✘</td><td>✘</td><td>✓</td><td>-</td></tr><tr><td>11</td><td>Yang et al. [17]</td><td>2017</td><td>different types</td><td>-</td><td>✘</td><td>✘</td><td>✓</td><td>Machine Learning</td></tr></table>

- Functionalities for creating a KG: The platform can support different functionalities to build the KG with the desired quality:

- Data preprocessing [5,7,14,17]: Before information from a data source can be used in a KG, several preprocessing steps may be needed. These include data cleaning and transforming the data into a format suitable for ingestion.

- Entity and relation extraction [6,7,9,13-15,17]: In particular when creating KGs out of unstructured information like documents, entity and relation extraction can require complex processing. But even for structured data, this step is often necessary.

- Schema generation [7,9,12-14,17]: If a KG is supposed to contain not just a set of instances but also type information about them, a schema needs to be created.

- KG validation [5,7,9,12,16,17]: When a KG combines data from different sources, the initial data cleaning step, which happens at the level of an individual source, may not be sufficient to ensure that the integrated KG is consistent. Thus, the platform may take a further step of quality checking and validating the KG.

- Functionalities for extending and augmenting KGs: This group of functionalities allows extending KGs with additional information from other sources or from within the KG itself. While cross-linking extends a KG with information provided elsewhere, a variety of techniques are used to extend KGs "from within". They include reasoning to infer hidden knowledge, KG refinement, and the computation of KG embeddings as a basis for link prediction and similarity determination.

- Cross-linking [5,9,13,17]: This functionality enables the cross-linking of KG entities to other resources or KGs like Wikidata or DBpedia. According to the linked open data (LOD) principles [18], each knowledge resource on the web receives a stable, unique, and resolvable identifier.

- KG embedding [7,9,14-17]: This is a popular method, in particular for link prediction and similarity detection, and can help to uncover hidden information in a KG.

- KG refinement [5,15-17]: In some cases, after checking the quality of the generated KG, a refinement process (e.g., validating the KG to identify errors and correcting inconsistent statements) can take place.

- Reasoning [7,12,13,16,17]: The reasoning functionality helps infer additional knowledge in a KG, mainly with the help of a reasoner. We consider this as KG augmentation, too.

- Functionalities for using KGs: Depending mostly on the targeted user group, platforms can support one or several ways to interact with the created KG:

- GUI (Graphical User Interface) [5-7,9,11-17]: A GUI eases user interaction with the platform.

- Visualization [5,7,9,11,14,15,17]: The platform can provide different types of visualization of the KG to support better understanding. CPS [6] provides visualization only for building queries.

- Keyword search [5,7,9,11,12,15-17]: This functionality enables searching for a keyword over the KG developed in the platform.

- Query endpoint [5-7,9,11-14,16,17]: Through a query endpoint, the information in the KG can be queried, mostly via SPARQL or graph queries.

- Query catalog [9,12]: A query catalog enables the use of pre-determined (customized) queries or storing queries for future reuse.

- Functionalities for maintaining and updating KGs: Once a KG has been built, it may be desirable to manage access, keep track of provenance, update the KG with new or additional sources, and curate it.

- Provenance tracking [5,6,9,13]: The platform can track the provenance of the KG's entities. Such functionality eases maintaining and updating the KG.

- Update KG [5,9,12,14,15]: A KG management platform can offer functionality to update and edit a previously generated KG. After this process, KG validation might be required.

- KG curation [5,9,15,17]: The platform can have KG curation functionality, which mostly relies on human curation.

- Different user roles [5,7,9,11,12,15-17]: The platform can distinguish different user roles, such as end-users or expert users, supporting different user groups with different access to the platform's functionalities.

- User management and security [5-7,9,11,12,15-17]: This functionality manages user access based on roles and checks the access level and security over the KG in the platform.

- Workflow management [5]: The platform can allow storing the creation workflow so that it can be replayed and re-executed.

Table 2 shows the distribution of the functionalities across the KG management platforms. The functionalities are ordered from top to bottom based on how frequently they are available in the existing platforms. In the last row, we show the total number of supported functionalities of each platform. From this table, our lessons learned are:

- the functionalities in the "KG creation" category are a necessity; thus, they are covered by most platforms. However, one needs to keep in mind that the platforms differ significantly in what exactly they offer here. Partly, this depends on the supported source data types (e.g., platforms geared towards building KGs from text typically provide NLP-based entity extraction).

- little effort has been spent on functionalities in the KG maintenance category.

- the graphical user interface is the most widely supported functionality across all platforms.

- workflow management is the least supported functionality among the existing platforms.

Overall, the table quite clearly shows that this is a still young and immature field, where so far no clear set of commonly offered functionality has evolved. We believe that this will happen over time. Meanwhile, potential users of a platform need to carefully check what their requirements are and whether a given platform meets them.

## 4 Our Proposal: a KG Management Platform in the Biodiversity Domain

Our work is motivated by a strong need for KGs in the Biodiversity Domain identified, e.g., by Page [2] and OpenBiodiv [19]. So far, in biodiversity as in many other domains, the few existing KGs have been created largely manually in one-off efforts. If the potential of KGs is to be leveraged for this important domain, it is our conviction that a KG management platform is needed that provides both generic and discipline-specific (e.g., dealing with species) functionality and allows Low-Code (or even No-Code) development, maintenance, and usage of KGs. Such technologies will lower the barriers for non-semantic-web experts to use and finally benefit from KGs to explore new exciting findings.

The iKNOW project [20] aims to create such a platform, built around a semantic-based toolbox. The project is a joint effort by computer scientists and domain experts from the German Centre for Integrative Biodiversity Research

Table 2: Distribution of functionalities with respect to existing KG management platforms. The functionalities are ordered from top to bottom based on how frequently they are available in the existing platforms. The last row shows the number of supported functionalities of each platform.

(iDiv) 4. The work benefits from the wealth of well-curated data sources and expert knowledge about their creation, cleaning, and harmonization available at iDiv. Thus, for now, iKNOW focuses on the (semi-)automatic, reproducible transformation of tabular biodiversity data into RDF statements. It also includes provenance tracking to ensure reproducibility and updatability. Further, options for visualization, search, and query are planned. Once established, the platform will be open-source and available to the biodiversity community. Thus, it can significantly contribute to making biodiversity data widely available, easily discoverable, and integrable.

### 4.1 Workflow in the KG Creation Scenario

After the quite abstract high-level description of iKNOW above, let us now take a closer look at one key functionality, the creation of a new KG. In this paper, we view Knowledge Graph generation as a construction process from scratch, i.e., using a set of operations on one or more data sources to create a Knowledge Graph.

---

4 https://www.idiv.de/en/index.html

---

Fig. 1: Workflow in the KG Creation Scenario at iKNOW.

Figure 1 shows the planned iKNOW workflow for the KG creation scenario. It generalizes the workflows of the existing platforms and shows the data flow between the steps towards KG generation. Not all steps are mandatory; optional processes in each step can add further value to the KG based on the user's needs.

For every uploaded dataset, we build a sub-KG, which becomes a subgraph of the main KG in iKNOW. In the first step, users go through the authentication process. Verified users can upload their datasets. If required, a data cleaning process takes place. We offer different tools for this step, which users can select and adjust based on their needs. As we observed, most data uploaded to iKNOW is well-curated, so not all datasets require this step; for this reason, we consider it optional.
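The optional cleaning step can be sketched as follows. This is a hypothetical illustration, not the actual iKNOW code; the column names and the choice of cleaning operations (trimming, dropping incomplete rows, deduplication) are invented for the example.

```python
# Hypothetical sketch of an optional tabular cleaning step (not the actual
# iKNOW implementation): trim whitespace, drop incomplete rows, deduplicate.
def clean_rows(rows, required=("species",)):
    seen, cleaned = set(), []
    for row in rows:
        row = {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
        if any(not row.get(col) for col in required):
            continue  # skip rows missing a required field
        key = tuple(sorted(row.items()))
        if key in seen:
            continue  # skip exact duplicates
        seen.add(key)
        cleaned.append(row)
    return cleaned

raw = [
    {"species": " Quercus robur ", "trait": "height", "value": "23.0"},
    {"species": "", "trait": "height", "value": "12.1"},              # incomplete
    {"species": "Quercus robur", "trait": "height", "value": "23.0"},  # duplicate
]
print(clean_rows(raw))
```

In the platform, each such operation would be one of the selectable, adjustable tools mentioned above.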

In the Entity Extraction step, we map the entities of the dataset to the corresponding real-world concepts (which become the instances of the sub-KG). This mapping is the basis for interlinking entities with external KGs like Wikidata or domain-specific ones. Each mapped entity is a node in the KG. For this process, we have embedded different tools in iKNOW, from which users can select the desired tool along with the desired external KGs.
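In its simplest form, this mapping is a lookup from a normalized label to an identifier in an external KG. The sketch below uses a hardcoded table with illustrative Wikidata IDs; a real extraction tool would instead query a service such as the Wikidata API or a domain-specific taxonomy.

```python
# Minimal sketch of entity linking against an external KG (hypothetical data;
# a real tool would query e.g. the Wikidata API instead of a local table).
WIKIDATA_IDS = {
    "quercus robur": "Q165145",    # illustrative mappings, hardcoded
    "fagus sylvatica": "Q146155",  # for this sketch only
}

def link_entity(label, index=WIKIDATA_IDS):
    """Return a (label, external-IRI) pair, or None if the entity is unmatched."""
    key = " ".join(label.split()).lower()  # normalize case and whitespace
    qid = index.get(key)
    return (label, f"http://www.wikidata.org/entity/{qid}") if qid else None

print(link_entity("Quercus  robur"))
```

Unmatched labels would be handed back to the user for correction in the Data Authoring step.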

In the Relation Extraction step, the relations between the KG's nodes are extracted via the user-selected tool. Note that in the entity and relation extraction steps, the tools return the extracted entities and relations to the user, who can edit them through our GUI (Data Authoring step).

Each column of the relational dataset refers to a category in the real world. We consider the types of the columns as classes in the KG. Together with the relations extracted in the previous step, the schema of the sub-KG is created in the Schema Generation step.

In the Triple Generation step, (subject, predicate, object) triples are created from the information extracted in the previous steps: nodes of the KG become subjects and objects, and relationships become predicates. The triples are generated for both classes and instances of the sub-KG.
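The step above can be sketched for a single tabular row as follows. The namespace and the column-to-predicate mapping are placeholders invented for the example, not the vocabulary iKNOW actually uses.

```python
# Sketch of triple generation from one tabular row (hypothetical namespace and
# column-to-predicate mapping; not the actual iKNOW implementation).
EX = "https://example.org/iknow/"  # placeholder namespace for the sub-KG

def row_to_ntriples(row_id, row):
    """Emit N-Triples lines: one rdf:type triple plus one triple per column."""
    subject = f"<{EX}obs/{row_id}>"
    lines = [f"{subject} <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <{EX}Observation> ."]
    for column, value in row.items():
        predicate = f"<{EX}{column}>"
        lines.append(f'{subject} {predicate} "{value}" .')
    return lines

triples = row_to_ntriples(1, {"species": "Quercus robur", "heightInM": "23.0"})
print("\n".join(triples))
```

Here the rdf:type triple corresponds to the class-level schema information, and the per-column triples to the instance data.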

After these processes, the generated sub-KG can be used directly. However, one can take further steps such as Triple Augmentation (generating new triples and extra relations towards KG completion), Schema Refinement (refining the schema, e.g., via logical reasoning, for KG completeness and correctness), Quality Checking (checking the quality of the generated sub-KG), and Query Building (creating customized SPARQL queries for the generated sub-KG).
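A minimal sketch of the Query Building step is shown below: a template function assembling a customized SPARQL query for a sub-KG. The namespace and property names are the same invented placeholders as above, not the actual iKNOW query builder.

```python
# Sketch of the Query Building step: assembling a customized SPARQL query for
# a sub-KG (hypothetical namespace and properties; not the iKNOW query builder).
EX = "https://example.org/iknow/"

def build_species_query(species, limit=10):
    """Return a SPARQL query selecting observations of one species."""
    return f"""PREFIX ex: <{EX}>
SELECT ?obs ?height WHERE {{
  ?obs a ex:Observation ;
       ex:species "{species}" ;
       ex:heightInM ?height .
}} LIMIT {limit}"""

print(build_species_query("Quercus robur"))
```

Such generated queries could then be stored in a query catalog for reuse, as seen in some of the surveyed platforms.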

In the Pushing step of our platform, the generated sub-KGs are first saved in a temporary repository (shown as "non-curated repository" in Figure 1). After manual curation by domain experts in the Curation step, the KG is published in the main repository of our platform. With this step, we aim to increase the trust in and correctness of the information in the KG.

In every step of our platform, all information regarding the user-selected tools, with their parameters and settings, is saved along with the initial dataset and the intermediate results. With the help of this, users can redo previous steps (shown by bidirectional arrows). Moreover, this enables us to track the provenance of the created sub-KG. For each of the steps mentioned above, we plan to have a tool-recommendation service that helps the user select the right tool for every process. For that, we will consider different parameters, such as the characteristics of the dataset and the tools.
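The per-step provenance record described above could look like the following sketch. The field names are invented for illustration; iKNOW persists comparable information in its database rather than in an in-memory list.

```python
# Sketch of per-step provenance recording (invented field names; the platform
# stores comparable records in PostgreSQL rather than an in-memory list).
import datetime
import json

provenance_log = []

def record_step(step, tool, params):
    """Append one provenance entry describing a workflow step execution."""
    entry = {
        "step": step,
        "tool": tool,
        "params": params,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    provenance_log.append(entry)
    return entry

record_step("entity_extraction", "tool-A", {"external_kg": "Wikidata"})
print(json.dumps(provenance_log[-1]["params"]))
```

Replaying a workflow then amounts to re-executing the logged steps with their recorded parameters.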

### 4.2 iKNOW Architecture

Figure 2 shows the planned architecture of iKNOW in five layers:

- In the User Administration layer, access levels and security are controlled. Authorized users can generate or update the KG; all end-users can search and visualize it. The platform's admin can add new tools or functionalities and approve user registrations. The KG curator curates recent changes to the KG (newly added sub-KGs or updates to previous information).

- The Web-based UI layer covers the different KG management scenarios: building a KG, updating the KG, visualizing the KG's triples, and keyword and SPARQL search.

- The Platform Services layer provides the set of services required for the KG management functionalities.

- The Data Access Infrastructure manages the communication between services and data storage.

- At the bottom level of the iKNOW platform, the Data Storage layer contains the graph database repository (triple management), provenance information, and user information management.

Fig. 2: Architecture of iKNOW in five layers.

### 4.3 Implementation

The iKNOW platform is currently under development (https://planthub.idiv.de/iknow). The Python web framework Django 5 is used for the backend, with a PostgreSQL 6 database to maintain users, services, tools, datasets, and the KG generation parameters (used for provenance tracking). We use the compiler Svelte 7 with SvelteKit as the framework for building the user-friendly web interface. For security, maintenance, and provenance reasons, all tools from external providers used within the workflow are executed in a sandbox using Docker 8. For managing the triplestore, we use the graph database Blazegraph 9. Any sub-KG created by an end-user is first placed in the non-curated triplestore; after curation by domain experts, it is added to the curated triplestore. The curated triplestore also serves as the base for SPARQL queries and for the keyword search via the search engine Elasticsearch 10.
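Loading a generated sub-KG into a Blazegraph namespace could be sketched as below. The endpoint URL and namespace name are hypothetical, and the sketch only builds the HTTP request (a SPARQL `INSERT DATA` update against Blazegraph's standard SPARQL endpoint) instead of sending it, so no running server is assumed.

```python
# Sketch of loading triples into a Blazegraph namespace via its SPARQL
# endpoint (hypothetical URL/namespace; the request is built but not sent).
import urllib.request

ENDPOINT = "http://localhost:9999/blazegraph/namespace/noncurated/sparql"

def build_insert_request(ntriples):
    """Build a POST request carrying a SPARQL INSERT DATA update."""
    update = "INSERT DATA {\n" + "\n".join(ntriples) + "\n}"
    return urllib.request.Request(
        ENDPOINT,
        data=update.encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
        method="POST",
    )

req = build_insert_request(
    ['<https://example.org/s> <https://example.org/p> "o" .']
)
print(req.get_method())
```

Promotion from the non-curated to the curated triplestore would issue an analogous update against the curated namespace after expert approval.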

---

5 https://www.djangoproject.com

6 https://www.postgresql.org/

7 https://svelte.dev/

8 https://www.docker.com/

9 https://blazegraph.com/

10 https://www.elastic.co/elasticsearch/

---

iKNOW is a modular platform, which increases its flexibility and allows adding new tools. Our ultimate goal is to provide a large set of tool choices for the end-user. Although only a few tools are embedded so far, we plan to add more tools for each functionality of the platform, so that users have a variety of choices for different needs and use cases. Our open-source code and the modular design of our platform make both the frontend and the backend easily extendable. We encourage users (new developers) to use or extend our reusable UI components to speed up their development.

## 5 Outlook

In this paper, we surveyed eleven KG management platforms and provided a general view of their differences regarding the data sources used, KG construction approaches, and availability. Taking a closer look, we identified nineteen functionalities offered by one, several, or all of these platforms and categorized them into four groups along the lifecycle of a KG. We observed that none of the surveyed platforms supports all of the functionalities. The only category that all platforms strongly support is the creation of KGs. Beyond that, so far, there seems to be no agreement on a core set of functionalities. Even within the "creation" category, approaches vary a lot. Partly, this can be attributed to the data source types or user groups targeted by a platform. This, together with the fact that many of the platforms are not open source and/or not available so far, limits potential users' choice of platform. They need to check very carefully whether a specific platform matches their needs.

We did this analysis for our domain, biodiversity research. As a result, we presented our proposed platform, iKNOW.

We conclude that further domain-specific platforms (or domain-specific extensions of general platforms) are needed to fully leverage the power of KGs across domains. We also recommend that platform developers strive to support KGs along their whole lifecycle, beyond just the creation stage. We do believe that both developments will occur as the field matures.

## Acknowledgements

The work described in this paper is conducted in the iKNOW Flexpool project of iDiv, the German Centre for Integrative Biodiversity Research, funded by DFG (project number 202548816). It is supported by iBID, iDiv's Biodiversity Data and Code Support unit. We thank our colleague Sven Thiel for comments on the manuscript.

## References
|
| 262 |
+
|
| 263 |
+
1. M. Nickel, K. Murphy, V. Tresp, and E. Gabrilovich, "A review of relational machine learning for knowledge graphs," Proceedings of the IEEE, vol. 104, no. 1, pp. 11-33, 2015.
|
| 264 |
+
|
| 265 |
+
2. R. D. Page, "Ozymandias: a biodiversity knowledge graph," PeerJ, vol. 7, p. e6739, 2019.
|
| 266 |
+
|
| 267 |
+
3. M. Manica, C. Auer, V. Weber, F. Zipoli, M. Dolfi, P. Staar, T. Laino, C. Bekas, A. Fujita, H. Toda, et al., "An information extraction and knowledge graph platform for accelerating biochemical discoveries," arXiv preprint arXiv:1907.08400, 2019.
|
| 268 |
+
|
| 269 |
+
4. S. R. Bader, I. Grangel-Gonzalez, P. Nanjappa, M.-E. Vidal, and M. Maleshkova, "A knowledge graph for industry 4.0," in European Semantic Web Conference, pp. 465-480, Springer, 2020.
|
| 270 |
+
|
| 271 |
+
5. M. F. Sy, B. Roman, S. Kerrien, D. M. Mendez, H. Genet, W. Wajerowicz, M. Dupont, I. Lavriushev, J. Machon, K. Pirman, et al., "Blue brain nexus: An open, secure, scalable system for knowledge graph management and data-driven science,"
|
| 272 |
+
|
| 273 |
+
6. P. W. Staar, M. Dolfi, and C. Auer, "Corpus processing service: A knowledge graph platform to perform deep data exploration on corpora," Applied AI Letters, vol. 1, no. 2, p. e20, 2020.
|
| 274 |
+
|
| 275 |
+
7. L. Ruqian, F. Chaoqun, W. Chuanqing, G. Shunfeng, Q. Han, S. Zhang, and C. Cungen, "Hape: A programmable big knowledge graph platform," Information Sciences, vol. 509, pp. 87-103, 2020.
|
| 276 |
+
|
| 277 |
+
8. R. Lu, X. Jin, S. Zhang, M. Qiu, and X. Wu, "A study on big knowledge and its engineering issues," IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 9, pp. 1630-1644, 2018.
|
| 278 |
+
|
| 279 |
+
9. P. Haase, D. M. Herzig, A. Kozlov, A. Nikolov, and J. Trame, "metaphactory: A platform for knowledge graph management," Semantic Web, vol. 10, no. 6, pp. 1109-1125, 2019.
|
| 280 |
+
|
| 281 |
+
10. M. Galkin, S. Auer, M.-E. Vidal, and S. Scerri, "Enterprise knowledge graphs: A semantic approach for knowledge management in the next generation of enterprise information systems.," in ICEIS (2), pp. 88-98, 2017.
|
| 282 |
+
|
| 283 |
+
11. W. Meng, D. Zhang, T. Guo, Z. Zong, Y. Liu, Y. Wang, J. Li, and W. Zhu, "Design and implementation of knowledge graph platform of power marketing," in 2021 International Conference on Computer Engineering and Application (ICCEA), pp. 295-298, IEEE, 2021.
|
| 284 |
+
|
| 285 |
+
12. L. Leporea, M. Namicia, G. Ronconia, M. Ruzzia, V. Santarellia, and D. F. Savoc, "Monolith: an obdm and knowledge graph management platform," in ISWC 2019 Satellites: Satellite Tracks (Posters & Demonstrations, Industry, and Outrageous Ideas) co-located with 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, 26-30 October 2019, vol. 2456, pp. 173-176, CEUR-WS.
|
| 286 |
+
|
| 287 |
+
13. A. Berven, O. A. Christensen, S. Moldeklev, A. L. Opdahl, and K. J. Villanger, "A knowledge-graph platform for newsrooms," Computers in Industry, vol. 123, p. 103321, 2020.
|
| 288 |
+
|
| 289 |
+
14. Z. Zheng, Y. Liu, Y. Zhang, and C. Wen, "Tcmkg: A deep learning based traditional chinese medicine knowledge graph platform," in 2020 IEEE International Conference on Knowledge Graph (ICKG), pp. 560-564, IEEE, 2020.
|
| 290 |
+
|
| 291 |
+
15. N. Kertkeidkachorn, R. Nararatwong, and R. Ichise, "Uwkgm: A modular platform for knowledge graph management," in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 3421-3424, 2020.
|
| 292 |
+
|
| 293 |
+
16. R. Lu, C. Fei, C. Wang, Y. Huang, and S. Zhang, "Yabko-yet another big knowledge organization," in 2021 IEEE International Conference on Big Knowledge (ICBK), pp. 245-252, IEEE, 2021.
|
| 294 |
+
|
| 295 |
+
17. Y. Yang, G. Zhang, J. Wang, S. Ye, and J. Hu, "Public cultural knowledge graph platform," in 2017 IEEE 11th International Conference on Semantic Computing (ICSC), pp. 322-327, IEEE, 2017.
|
| 296 |
+
|
| 297 |
+
18. C. Bizer, "The emerging web of linked data," IEEE intelligent systems, vol. 24, no. 5, pp. 87-92, 2009.
|
| 298 |
+
|
| 299 |
+
19. L. Penev, M. Dimitrova, V. Senderov, G. Zhelezov, T. Georgiev, P. Stoev, and K. Simov, "Openbiodiv: a knowledge graph for literature-extracted linked open data in biodiversity science," Publications, vol. 7, no. 2, p. 38, 2019.
|
| 300 |
+
|
| 301 |
+
20. S. Babalou, D. Schellenberger Costa, J. Kattge, C. Römermann, and B. König-Ries, "Towards a semantic toolbox for reproducible knowledge graph generation in the biodiversity domain-how to make the most out of biodiversity data," INFORMATIK 2021, 2021.
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_tex/Initial_manuscript.tex
§ WHAT IS NEEDED IN A KNOWLEDGE GRAPH MANAGEMENT PLATFORM? A SURVEY AND A PROPOSAL

Samira Babalou ${}^{1,2,*}$, Franziska Zander ${}^{1,2}$, Erik Kleinsteuber ${}^{1}$, Badr El Haouni ${}^{1}$, David Schellenberger Costa ${}^{2}$, Jens Kattge ${}^{2}$, Birgitta König-Ries ${}^{1,2,3}$
${}^{1}$ Heinz-Nixdorf Chair for Distributed Information Systems, Institute for Computer Science, Friedrich Schiller University Jena, Germany

${}^{2}$ German Center for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Germany

${}^{3}$ Michael-Stifel-Center for Data-Driven and Simulation Science, Jena, Germany

${}^{4}$ Institute of Biology/Geobotany and Botanical Garden, Martin Luther University, Halle, Germany

* Corresponding author: samira.babalou@uni-jena.de
Abstract. Knowledge Graphs (KGs) play a significant and growing role in the semantics-based support of a wide variety of applications. Until recently, creating and maintaining such knowledge graphs was done in a one-off manner, requiring significant manual effort and expertise. Over the last few years, the first KG management platforms supporting the lifecycle of KGs, from their creation to their maintenance and use, have appeared. In this paper, we first survey these platforms. We then take a step further and identify common functionalities across them. We discuss nineteen such functionalities, categorized into four groups: creating, extending, using, and maintaining KGs. Based on the findings of this analysis, we present our proposed KG management platform for the biodiversity domain, iKNOW. We focus on the architecture and the KG creation workflow, but also touch on other aspects.

Keywords: Semantic Web · Knowledge Graph · Knowledge Graph Platform · Data Services and Functionality
§ 1 INTRODUCTION

Increasingly, Knowledge Graphs (KGs) form the semantic data management backbone for a wide variety of applications. A KG [1] consists of nodes connected by edges. It is built from a set of data sources via different techniques. Besides the instances, KGs can also contain schema information, which can be refined or augmented, e.g., by using a reasoner. Assigning unique identifiers to a KG's entities facilitates interlinking with other resources on the web. The underlying structure of KGs opens the door to further functionality such as visualization, keyword search, and complex queries via a SPARQL endpoint.
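This triple-based structure can be made concrete with a minimal Python sketch; all entity and property names below are illustrative, not taken from any real KG:

```python
# A KG as a set of (subject, predicate, object) triples. Instance-level and
# schema-level statements live in the same graph.
kg = {
    ("ex:Quercus_robur", "rdf:type", "ex:Species"),
    ("ex:Quercus_robur", "ex:growsIn", "ex:Temperate_forest"),
    ("ex:Species", "rdfs:subClassOf", "ex:Taxon"),  # schema-level statement
}

def neighbors(kg, node):
    """Return all (predicate, object) edges leaving a node."""
    return {(p, o) for (s, p, o) in kg if s == node}

print(sorted(neighbors(kg, "ex:Quercus_robur")))
```

Functionality such as visualization or keyword search is then a matter of traversing and filtering this edge set.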
Although KGs have gained wide attention in industry and academia, developing and managing their lifecycle requires huge effort, expertise, and diverse functionalities. While, in the beginning, KGs were typically one-off manual efforts, there is a growing awareness that, to exploit the capabilities of Knowledge Graph technologies to the fullest extent, support for their creation, access, update, and maintenance is needed. Many of these functionalities are not specific to any given KG but can be provided generically. KG platforms aim to do just that.

As our contribution, in this paper, we survey existing KG management platforms and compare them in a general way. We then take a step further and analyze nineteen functionalities in four categories: creating, extending, using, and maintaining KGs. To the best of our knowledge, this is the first survey of KG platforms. Based on the findings of this survey and the needs in our domain, biodiversity research, we have designed our own KG platform. We present this platform, iKNOW, in the second part of the paper.

The rest of the paper is organized as follows. Section 2 surveys existing KG management platforms. The common functionalities of these platforms are discussed in Section 3. Our proposal for a KG management platform focused on biodiversity, iKNOW, is presented in Section 4. The paper is concluded in Section 5.
§ 2 LITERATURE REVIEW

In this paper, we define a Knowledge Graph Platform as a web-based platform for creating, managing, and making use of KGs. Such platforms mostly cover the whole lifecycle of a KG application and include relevant services or functionalities for interacting with and managing KGs.

We contrast these with efforts to build an individual, specific KG. There have been many such efforts in different domains: e.g., Ozymandias [2] in the biodiversity domain, BCKG [3] in the biomedical domain, and I40KG [4] in the industrial domain. These KGs were built once, and their associated websites now provide access to and usage of the KG. Such approaches are out of the scope of this paper. Rather, we focus on KG management platforms, which offer a set of operations such as generation of and updates to the KG.

In the following subsections, we first present the survey methodology used in this paper; then we briefly summarize the existing KG management platforms and compare them in a general way.

§ 2.1 SURVEY METHODOLOGY

In this subsection, we describe our systematic approach to finding publications on KG platforms: We queried for the keyword "Knowledge Graph Platform" in the Google Scholar search engine ${}^{1}$. At the time of querying, this resulted in 162 papers (including citations and patents). We used the Publish or Perish 8 tool ${}^{2}$ to save the results of the query. The results are available in our GitHub repository ${}^{3}$. From this list, we selected the relevant papers manually, aiming for papers that focus on KG management platforms. Some papers appeared in the Google Scholar results only because our keyword occurs in their text (e.g., in the literature review section) without proposing a new KG platform; we did not include such cases. Moreover, we did not consider survey papers or papers written in a language other than English. In our repository, we specify which papers were selected and, for the non-selected ones, the reason for exclusion. As a result, we arrived at 11 KG platforms, briefly described in the following subsection.
${}^{1}$ https://scholar.google.com/, accessed on 09.02.2022

${}^{2}$ https://harzing.com/blog/2021/10/publish-or-perish-version-8

§ 2.2 EXISTING KG MANAGEMENT PLATFORMS

In this section, we give a brief overview of the existing platforms:
* BBN (Blue Brain Nexus) [5] is an open-source platform. The KG in this platform can be built from datasets generated from heterogeneous sources and formats. BBN has three main components: i) Nexus Delta, a set of services targeting developers for managing data and the knowledge graph lifecycle; ii) Nexus Fusion, a web-based user interface enabling users to store, view, query, access, and share (meta)data and manage knowledge graphs; and iii) Nexus Forge, a Python user interface enabling data and knowledge engineers to build knowledge graphs from various data sources and formats using data mappings, transformations, and validations.

* CPS (Corpus Processing Service) [6] is a cloud platform to create and serve Knowledge Graphs over a document corpus. It uses state-of-the-art natural language understanding models to extract entities and relationships from documents.

* HAPE (Heaven Ape) [7] is a programmable KG platform. The architecture of HAPE is designed in three parts: the client side, which provides various kinds of services to the users; the server side, which provides different knowledge management and processing services; and the third part, the KG's knowledge base. The applicability of the platform has been demonstrated on DBpedia data. Moreover, the quality of the created KG has been evaluated via the metrics introduced in [8]. Although the authors claim in their paper that the platform is open to the public, to the best of our knowledge, there is no link to the platform's source code or an online web portal.

* Metaphactory [9] is an enterprise platform for building Knowledge Graph management applications. This platform supports different categories of users (end-users, expert users, and application developers), has a customizable UI, and enables the rapid building of use-case-specific applications. Metaphactory allows configuring and managing connections to many data repositories. In this platform, data sources are virtually integrated with an ontology-based data access engine, i.e., diverse data sources are integrated on the fly. The platform is assessed via the assessment parameters introduced in [10].

${}^{3}$ https://github.com/fusion-jena/iKNOW
* Meng et al. [11] proposed a power marketing KG platform. The authors used a Machine Learning (ML) method to extract knowledge from unstructured text. The knowledge instances are stored in a relational database; the relationships between them are stored in a graph database.

* MONOLITH [12] is a KG platform combined with Ontology-based Data Management (OBDM) capabilities over relational and non-relational databases, resulting in one (virtual) data source. The functionalities provided by MONOLITH can be split into two groups: one dedicated to managing OWL ontologies and providing OBDM services, exploiting the mappings between ontology and database; the other to managing KGs and providing services over them. These two groups are linked, allowing KGs to be built through semantic data access from the results of ontology queries.

* News Hunter [13] is geared towards supporting journalism by aggregating and semantically integrating news from a variety of sources. It is based on a microservices architecture and consists of a number of independent services: First, an extensible set of harvesters aggregates information from individual sources or existing news. Harvested news items and relevant metadata are deduplicated and stored in a source database. A translator converts items into a canonical language; this allows for cross-language news linking and the application of the broad range of existing NLP (Natural Language Processing) tools. The next step, called Lifting in the paper, runs the extracted news items through an NLP pipeline that performs named-entity recognition as well as sentiment and topic analysis. The results of this step are stored in a graph database. ML-based classifiers are used to assign labels to news items, thereby annotating them with terms from a common ontology. Via an enricher, the KG can be augmented with information from external sources, e.g., DBpedia Spotlight.

* TCMKG [14] is a KG platform for Traditional Chinese Medicine (TCM) based on deep learning methods. First, an ontology layer represents the knowledge-based diagnosis and treatment process. It includes the core entities of the domain with their associated relations. Then, with the help of a named entity recognition (NER) model, TCM entities are extracted from unstructured data.

* UWKGM [15] is a modular web-based platform for KG management. It enables users to integrate different functionalities as RESTful API services into the platform, helping different user roles customize the platform as needed. The platform consists of three main components: the backend (API), the frontend (UI), and the system manager (for installation, upgrading, and deployment). The embedded entity suggestion module enables automatic triple extraction while maintaining human involvement for quality control.

* YABKO [16] is the successor of HAPE and aims to support lifecycle research on KGs. Researchers can upload their KGs and tools to the YABKO platform, where other researchers can use them free of charge in their experiments. For any requested experiment, YABKO assigns the necessary resources (space, time, KGs, tools). After an experiment finishes, short-term experiments are dissolved, while long-term ones can continue to exist on the condition of publishing their results. The core motivation for building YABKO is to help visitors use open-source techniques and resources to perform experiments on KGs and share experiences with other researchers.

* Yang et al. [17] proposed a cloud computing cultural knowledge platform over multiple data sources such as Chinese wikis, lexical databases, and cultural websites. The platform restricts the knowledge to the field of Chinese public cultural services instead of common-sense knowledge. The platform has a set of services for building, updating, and maintaining the KG. It uses rule-based reasoning methods to analyze the existing KG relations in order to predict possible new relations.
§ 2.3 COMPARING EXISTING KG MANAGEMENT PLATFORMS

In Table 1, we summarize general information about the introduced KG platforms with respect to: their Name, the Year of release (based on the published paper), the Source Data Type used to build KGs, their target applications in industry or Academia, their Open-Source accessibility, the availability of an Online Demo, a test with a Use Case Study, and, finally, the KG Construction Method supported by the platform. Looking at the table, one can observe that:

* most platforms have been introduced in the last three years. This shows that the field is still young and most likely still evolving. This observation is confirmed by our analysis of the provided functionality (see below).

* the platforms are very heterogeneous with respect to the number and type of data sources they support.

* for KG construction, basically all platforms follow an ETL (Extract, Transform, Load) process along with Machine Learning (ML) approaches. They differ in how adaptable this process is and, partially depending on the type of supported data sources, in the concrete steps involved.

* a (to us) surprisingly high percentage of platforms are designed for use within industry (as opposed to academia). This may be one of the reasons why many of these platforms are not open source.

* all platforms had a use case study showing the capabilities of the platform by describing a specific KG's usage in a selected application domain.
§ 3 COMMON FUNCTIONALITIES IN KG MANAGEMENT PLATFORMS

In this section, we take a closer look at the KG platforms, extract what functionalities they offer, and compare them with respect to these functionalities. We credit a platform with a functionality if the functionality is mentioned in the respective paper. Platforms may possess other functionalities not mentioned in the papers, so a missing entry does not necessarily mean a platform does not offer a certain functionality. Overall, many of the papers were surprisingly vague about what functionality the platforms offer, so that a clear decision was not always possible. From our analysis, we identified nineteen different functionalities, which can be grouped into four categories as follows:

Table 1: Comparing existing KG management platforms concerning their names, the year of release, the type of source data used to build KGs, targeting academia or not, being open-source, availability of an online demo, testing in a use case study, and the KG construction method. ✓* means currently not available and '-' indicates not mentioned.
| no. | Platform | Year | Source Data Type | Academia | Open-Source | Online Demo | Use Case Study | KG Construction Method |
|-----|----------|------|------------------|----------|-------------|-------------|----------------|------------------------|
| 1 | BBN [5] | 2021 | different types | ✓ | ✓ | ✓ | ✓ | customized ETL process |
| 2 | CPS [6] | 2020 | text | ✘ | ✘ | ✘ | ✓ | Machine Learning |
| 3 | HAPE [7] | 2020 | different types | ✓ | ✘ | ✘ | ✓ | - |
| 4 | Metaphactory [9] | 2019 | different types | ✘ | ✘ | ✓ | ✓ | customized ETL process |
| 5 | Meng et al. [11] | 2021 | unstructured text | ✘ | ✘ | ✘ | ✓ | Machine Learning |
| 6 | MONOLITH [12] | 2019 | - | ✘ | ✘ | ✘ | ✓ | customized ETL process |
| 7 | News Hunter [13] | 2020 | text | - | ✘ | ✘ | ✓ | Machine Learning |
| 8 | TCMKG [14] | 2020 | different types | - | ✘ | ✘ | ✓ | Machine Learning |
| 9 | UWKGM [15] | 2020 | unstructured text | - | ✓ | ✓ | ✓ | customized ETL process |
| 10 | YABKO [16] | 2021 | different types | ✓ | ✘ | ✘ | ✓ | - |
| 11 | Yang et al. [17] | 2017 | different types | - | ✘ | ✘ | ✓ | Machine Learning |
* Functionalities for creating a KG: The platform can support different functionalities to build the KG with the desired quality:

* Data preprocessing [5,7,14,17]: Before information from a data source can be used in a KG, several preprocessing steps may be needed. These include data cleaning and transforming the data into a format suitable for ingestion.

* Entity and relation extraction [6,7,9,13-15,17]: In particular when creating KGs out of unstructured information like documents, entity and relation extraction can require complex processing. But even for structured data, this step is often necessary.

* Schema generation [7,9,12-14,17]: If a KG is supposed to contain not just a set of instances but also type information about them, a schema needs to be created.

* KG validation [5,7,9,12,16,17]: When a KG combines data from different sources, the initial data cleaning step, which happens at the level of an individual source, may not be sufficient to ensure that the integrated KG is consistent. Thus, the platform may take a further step of quality checking and validation of the KG.
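One simple validation check of this kind, flagging subjects that carry conflicting values for a property assumed to be single-valued after integrating several sources, can be sketched in Python. Property and entity names are illustrative, not taken from any surveyed platform:

```python
from collections import defaultdict

# Properties we assume (for illustration) may hold only one value per subject.
FUNCTIONAL = {"ex:acceptedName"}

def find_conflicts(triples):
    """Return (subject, property) pairs that received more than one value."""
    values = defaultdict(set)
    for s, p, o in triples:
        if p in FUNCTIONAL:
            values[(s, p)].add(o)
    return {k: v for k, v in values.items() if len(v) > 1}

triples = [
    ("ex:taxon1", "ex:acceptedName", "Quercus robur"),        # from source A
    ("ex:taxon1", "ex:acceptedName", "Quercus pedunculata"),  # from source B
]
print(find_conflicts(triples))
```

Per-source cleaning would not catch this, since each source is internally consistent; only the integrated view reveals the conflict.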
* Functionalities for extending and augmenting KGs: This group of functionalities allows extending KGs with additional information from other sources or from within the KG itself. While cross-linking extends a KG with information provided elsewhere, a variety of techniques are used to extend KGs "from within". They include reasoning to infer hidden knowledge, KG refinement, and the computation of KG embeddings as a basis for link prediction and similarity determination.

* Cross-linking [5,9,13,17]: This functionality enables the cross-linking of KG entities to other resources or KGs like Wikidata or DBpedia. According to the linked open data (LOD) principles [18], each knowledge resource on the web receives a stable, unique, and resolvable identifier.

* KG embedding [7,9,14-17]: This is a popular method, in particular for link prediction and similarity detection, and can help to uncover hidden information in a KG.

* KG refinement [5,15-17]: In some cases, after checking the quality of the generated KG, a refinement process (e.g., validating the KG to identify errors and correcting inconsistent statements) can take place.

* Reasoning [7,12,13,16,17]: The reasoning functionality allows additional knowledge to be inferred in a KG, mainly with the help of a reasoner. We consider this as KG augmentation, too.
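A minimal sketch of what such a reasoner does, assuming just two RDFS entailment rules (transitivity of rdfs:subClassOf and inheritance of types along subclass edges), implemented as a naive fixpoint computation; the example entities are illustrative:

```python
def rdfs_closure(triples):
    """Naive fixpoint over two RDFS rules:
    (a) rdfs:subClassOf is transitive;
    (b) instances of a class are also instances of its superclasses."""
    g = set(triples)
    while True:
        sub = {(s, o) for s, p, o in g if p == "rdfs:subClassOf"}
        new = set()
        for a, b in sub:                      # rule (a)
            for c, d in sub:
                if b == c:
                    new.add((a, "rdfs:subClassOf", d))
        for s, p, o in g:                     # rule (b)
            if p == "rdf:type":
                for c, d in sub:
                    if o == c:
                        new.add((s, "rdf:type", d))
        if new.issubset(g):
            return g
        g |= new

closure = rdfs_closure([
    ("ex:oak1", "rdf:type", "ex:Oak"),
    ("ex:Oak", "rdfs:subClassOf", "ex:Tree"),
    ("ex:Tree", "rdfs:subClassOf", "ex:Plant"),
])
print(("ex:oak1", "rdf:type", "ex:Plant") in closure)
```

Production reasoners implement many more rules and far more efficient evaluation strategies, but the augmentation effect (new triples derived from existing ones) is the same.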
* Functionalities for using KGs: Depending mostly on the targeted user group, platforms can support one or several ways to interact with the created KG:

* GUI (Graphical User Interface) [5-7,9,11-17]: A GUI eases user interaction with the platform.

* Visualization [5,7,9,11,14,15,17]: The platform can provide different types of visualization of the KG to support better understanding. CPS [6] provides visualization only for building queries.

* Keyword search [5,7,9,11,12,15-17]: This functionality enables searching for a keyword over the KG developed on the platform.

* Query endpoint [5-7,9,11-14,16,17]: A query endpoint allows the information in the KG to be queried, mostly via SPARQL or graph queries.

* Query catalog [9,12]: A query catalog enables using pre-determined (customized) queries or storing queries for future reuse.
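At its core, a query endpoint evaluates graph patterns against the stored triples. The following Python sketch shows basic-graph-pattern matching, the heart of a SPARQL SELECT; variables start with '?', and the triples are illustrative:

```python
def match(pattern, triple, binding):
    """Try to extend a variable binding so that pattern matches triple."""
    b = dict(binding)
    for p_term, t_term in zip(pattern, triple):
        if p_term.startswith("?"):
            if b.get(p_term, t_term) != t_term:
                return None          # variable already bound to something else
            b[p_term] = t_term
        elif p_term != t_term:
            return None              # constant terms must match exactly
    return b

def query(triples, patterns):
    """Join all patterns, returning the list of consistent bindings."""
    bindings = [{}]
    for pat in patterns:
        bindings = [b2 for b in bindings for t in triples
                    if (b2 := match(pat, t, b)) is not None]
    return bindings

triples = [
    ("ex:oak1", "rdf:type", "ex:Tree"),
    ("ex:oak1", "ex:height", "21"),
    ("ex:birch1", "rdf:type", "ex:Tree"),
]
# Analogous to: SELECT ?x ?h WHERE { ?x rdf:type ex:Tree . ?x ex:height ?h }
print(query(triples, [("?x", "rdf:type", "ex:Tree"),
                      ("?x", "ex:height", "?h")]))
```

Real endpoints add parsing, indexing, join optimization, and result serialization on top of exactly this matching logic.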
* Functionalities for maintaining and updating KGs: Once a KG has been built, it may be desirable to manage access, keep track of provenance, update the KG with new or additional sources, and curate it.

* Provenance tracking [5,6,9,13]: The platform can track the provenance of the KG's entities. Such functionality can ease maintaining and updating KGs.

* Update KG [5,9,12,14,15]: A KG management platform can have the functionality to update and edit a previously generated KG. After this process, KG validation might be required.

* KG curation [5,9,15,17]: The platform can have KG curation functionality, which mostly relies on human curation.

* Different user roles [5,7,9,11,12,15-17]: The platform can distinguish different user roles, such as end-users or expert users. This functionality can support different user groups with different access to the platform's other functionalities.

* User management and security [5-7,9,11,12,15-17]: This functionality manages user access based on roles and checks the access level and security over the KG in the platform.

* Workflow management [5]: The platform can allow storing the creation workflow so that it can be replayed and re-executed.
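Provenance tracking and workflow management share a common building block: logging which tool ran with which parameters on which input. A hedged sketch, where the field names, tool names, and sample input are illustrative assumptions rather than any platform's actual schema:

```python
import datetime
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """Short, stable fingerprint of a step's input, for provenance records."""
    return hashlib.sha256(data).hexdigest()[:12]

def record_step(log, tool, params, input_bytes):
    """Append one workflow step to the provenance log."""
    log.append({
        "tool": tool,
        "params": params,
        "input_sha256": fingerprint(input_bytes),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

log = []
record_step(log, "csv-cleaner", {"drop_empty_rows": True},
            b"species,height\noak,21\n")
record_step(log, "triple-generator", {"base_iri": "https://example.org/kg/"},
            b"cleaned table bytes")
print(json.dumps(log, indent=2))
```

Replaying the workflow then amounts to re-running each logged tool with the recorded parameters and verifying the input fingerprints still match.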
Table 2 shows the distribution of the functionalities across the KG management platforms. The functionalities are ordered from top to bottom by how frequently they are supported by the existing platforms. In the last row, we show the total number of supported functionalities of each platform. From this table, our lessons learned are:

* the functionalities in the "KG creation" category are a necessity; thus, they are covered by most platforms. However, one needs to keep in mind that the platforms differ significantly in what exactly they offer here. Partly, this depends on the supported source data types (e.g., platforms geared towards building KGs from text typically provide NLP-based entity extraction).

* comparatively little effort has gone into functionalities in the KG maintenance category.

* the graphical user interface is the functionality most widely supported across platforms.

* workflow management is the functionality least supported by the existing platforms.

Overall, the table quite clearly shows that this is a still young and immature field, in which no clear set of commonly offered functionality has evolved so far. We believe that this will happen over time. Meanwhile, potential users of a platform need to carefully check what their requirements are and whether a given platform meets them.
§ 4 OUR PROPOSAL: A KG MANAGEMENT PLATFORM IN THE BIODIVERSITY DOMAIN

Our work is motivated by a strong need for KGs in the biodiversity domain, identified, e.g., by Page [2] and OpenBiodiv [19]. So far, in biodiversity as in many other domains, the few existing KGs have been created largely manually in one-off efforts. If the potential of KGs is to be leveraged for this important domain, it is our conviction that a KG management platform is needed that provides both generic and discipline-specific (e.g., dealing with species) functionality and allows Low-Code (or even No-Code) development, maintenance, and usage of KGs. Using such technologies will reduce the barriers for non-Semantic-Web experts to use and ultimately benefit from KGs to explore new, exciting findings.

The iKNOW project [20] aims to create such a platform, built around a semantic toolbox. The project is a joint effort by computer scientists and domain experts from the German Centre for Integrative Biodiversity Research (iDiv) ${}^{4}$. The work benefits from the wealth of well-curated data sources and expert knowledge on their creation, cleaning, and harmonization available at iDiv. Thus, for now, iKNOW focuses on the (semi-)automatic, reproducible transformation of tabular biodiversity data into RDF statements. It also includes provenance tracking to ensure reproducibility and updatability. Further, options for visualization, search, and querying are planned. Once established, this platform will be open source and available to the biodiversity community. Thus, it can significantly contribute to making biodiversity data widely available, easily discoverable, and integrable.

Table 2: Distribution of functionalities with respect to existing KG management platforms. The functionalities are ordered from top to bottom by how frequently they are supported by the existing platforms. The last row shows the number of supported functionalities of each platform. [graphics not included in this extraction]
§ 4.1 WORKFLOW IN THE KG CREATION SCENARIO

After the quite abstract high-level description of iKNOW above, let us now take a closer look at one key functionality: the creation of a new KG. In this paper, we view Knowledge Graph generation as a construction process from scratch, i.e., using a set of operations on one or more data sources to create a Knowledge Graph.

${}^{4}$ https://www.idiv.de/en/index.html

Fig. 1: Workflow in the KG Creation Scenario at iKNOW. [graphics not included]

Figure 1 shows the planned iKNOW workflow for the KG creation scenario. It is a generalization of the workflows found in the existing platforms. The workflow shows the data flow between the steps towards KG generation. Not all steps are mandatory; optional processes in each step can add further value to the KG, depending on the user's needs.
For every uploaded dataset, we build a sub-KG, which becomes a subgraph of the main KG in iKNOW. In the first step, users go through the authentication process. Verified users can upload their datasets. If required, a data cleaning process takes place. We offer different tools for this step, which users can select and adjust based on their needs. In our observation, most data uploaded to iKNOW is already well curated, so not every dataset requires this step; we therefore consider it optional.

In the Entity Extraction step, we map the entities of the dataset to the corresponding concepts in the real world (which form the instances of the sub-KG). This mapping is the basis for interlinking entities with external KGs like Wikidata or domain-specific ones. Each mapped entity is a node in the KG. For this process, we have embedded different tools in iKNOW, from which users can select the desired tool along with the desired external KGs.

In the Relation Extraction step, the relations between the KG's nodes are extracted via the user-selected tool. Note that in the entity and relation extraction steps, the tools return the extracted entities and relations to the user. Through our GUI, the user can edit them (Data Authoring step).

Each column of the tabular dataset refers to a category in the real world; we consider the column types as classes in the KG. Together with the relations extracted in the previous step, the schema of the sub-KG is created in the Schema Generation step.

In the Triple Generation step, (subject, predicate, object) triples are created based on the information extracted in the previous steps. Note that nodes in the KG are subjects and objects, and relationships are predicates. Triples are generated for both classes and instances in the sub-KG.
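The column-to-class mapping and the Triple Generation step can be illustrated with a small, self-contained sketch. The base IRI, the input table, and the naming scheme are our own illustrative assumptions, not iKNOW's actual implementation:

```python
import csv
import io
import re

BASE = "https://example.org/kg/"  # assumed base IRI, for illustration only

def iri(label):
    """Mint a simple IRI from a label by replacing non-word characters."""
    return BASE + re.sub(r"\W+", "_", label.strip())

def table_to_triples(csv_text, entity_column):
    """Turn each row into triples: one rdf:type triple (the entity column's
    header acts as the class) plus one triple per remaining column."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = iri(row[entity_column])
        triples.append((subject, "rdf:type", iri(entity_column)))
        for col, value in row.items():
            if col != entity_column and value:
                triples.append((subject, iri(col), value))
    return triples

csv_text = "species,habitat\nQuercus robur,temperate forest\n"
for t in table_to_triples(csv_text, "species"):
    print(t)
```

In the actual workflow, the entity and relation extraction steps would replace these naively minted IRIs and predicates with the user-confirmed mappings to external KGs.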
After these processes, the generated sub-KG can be used directly. However, one can take further steps such as: Triple Augmentation (generating new triples and extra relations to ease KG completion), Schema Refinement (refining the schema, e.g., via logical reasoning, for KG completeness and correctness), Quality Checking (checking the quality of the generated sub-KG), and Query Building (creating customized SPARQL queries for the generated sub-KG).

In the Pushing step of our platform, the generated KGs are first saved in a temporary repository (shown as "non-curated repository" in Figure 1). After manual data curation by domain experts in the Curation step, the KG is published in the main repository of our platform. With this step, we aim to increase the trust in and correctness of the information in the KG.

All information regarding the user-selected tools, with their parameters and settings, along with the initial dataset and intermediate results, is saved in every step of our platform. This allows users to redo previous steps (shown by bidirectional arrows in Figure 1). Moreover, it enables us to track the provenance of each created sub-KG. For each step mentioned above, we plan to provide a tool-recommendation service to help the user select the right tool for every process. For that, we will consider different parameters, such as the characteristics of the datasets and tools.
§ 4.2 IKNOW ARCHITECTURE
Figure 2 shows the planned architecture of iKNOW in five layers:
* In the User Administration layer, access level and security are controlled. Authorized users can generate or update the KG. All end-users can search and visualize the KG. The platform's admin can add new tools or functionalities and approve user registrations. The KG curator reviews recent changes to the KG (newly added sub-KGs or updates to existing information in the KG).
* The Web-based UI layer supports different scenarios for KG management: building a KG, updating the KG, visualizing the KG's triples, and keyword and SPARQL search.
* The Platform Services layer provides the set of services required for the KG management functionalities.
* The Data Access Infrastructure layer manages the communication between services and data storage.
* At the bottom level of the iKNOW platform, the Data Storage layer contains the graph database repository (triple management), provenance information, and user information management.
Fig. 2: Architecture of iKNOW in five layers.
§ 4.3 IMPLEMENTATION
The iKNOW platform is currently under development (https://planthub.idiv.de/iknow). The Python web framework Django is used for the backend, with a PostgreSQL database to maintain users, services, tools, datasets, and the KG generation parameters (used in provenance tracking). We use the compiler Svelte with SvelteKit, a framework for building web applications, to create a user-friendly web interface. For security, maintenance, and provenance reasons, all tools from external providers used within the workflow will be executed in a sandbox using Docker. For managing the triplestore, we use the graph database Blazegraph. Any sub-KG created by an end-user is first placed in the non-curated triplestore. After curation by domain experts, the new sub-KG is added to the curated triplestore. The curated triplestore also serves as the base for SPARQL queries and for keyword search via the search engine Elasticsearch.
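As an illustration of the sandboxing idea, a hedged sketch of how an external tool's `docker run` invocation could be assembled. The image name, mount paths, and resource limits are assumptions for illustration, not the actual iKNOW configuration:

```python
# Minimal sketch of sandboxed tool execution: the command isolates an
# external tool from the network, mounts its inputs read-only, and caps
# memory. The resulting argv list would be passed to subprocess.run().
def sandboxed_command(image, tool_args, workdir):
    """Build a `docker run` command for an untrusted external tool."""
    return [
        "docker", "run", "--rm",
        "--network", "none",          # no outbound access from the tool
        "--memory", "2g",             # bound resource usage
        "-v", f"{workdir}:/data:ro",  # inputs mounted read-only
        image, *tool_args,
    ]

cmd = sandboxed_command("iknow/mapping-tool:latest", ["--input", "/data/plants.csv"], "/tmp/job42")
```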

Django: https://www.djangoproject.com

PostgreSQL: https://www.postgresql.org/

Svelte: https://svelte.dev/

Docker: https://www.docker.com/

Blazegraph: https://blazegraph.com/

Elasticsearch: https://www.elastic.co/elasticsearch/

iKNOW is a modular platform, which increases flexibility and allows new tools to be added. Our ultimate goal is to provide a large set of tool choices for the end-user. Although only a few tools are embedded so far, we plan to add more tools for each functionality of the platform, giving users a variety of choices for different needs and use cases. The open-source code and modular design make both the frontend and backend of our platform easily extendable. We encourage users (new developers) to use or extend our reusable UI components to speed up their development.
§ 5 OUTLOOK
In this paper, we surveyed eleven KG management platforms and provided a general view of their differences in data sources, KG construction approaches, and availability. Taking a closer look, we identified nineteen functionalities offered by one, several, or all of these platforms and categorized them into four groups along the lifecycle of a KG. We observed that none of the surveyed platforms supports all of the functionalities. The only category that all platforms strongly support is the creation of KGs. Beyond that, there seems to be no agreement so far on a core set of functionalities. Even within the creation category, approaches vary considerably. Partly, this can be attributed to the data source types or user groups targeted by a platform. This, together with the fact that many of the platforms are not open source and/or not yet available, limits the choice of platforms potential users have. They need to check very carefully whether a specific platform matches their needs.
We did this analysis for our domain, biodiversity research. As a result, we presented our proposed platform, iKNOW.
We conclude that further domain-specific platforms (or domain-specific extensions of general platforms) are needed to fully leverage the power of KGs across domains. We also recommend that platform developers strive to support KGs along their entire lifecycle, beyond just the creation stage. We believe that both developments will occur as the field matures.
§ ACKNOWLEDGEMENTS
The work described in this paper is conducted in the iKNOW Flexpool project of iDiv, the German Centre for Integrative Biodiversity Research, funded by DFG (Project number 202548816). It is supported by iBID, iDiv's Biodiversity Data and Code Support unit. We thank our colleague Sven Thiel for comments on the manuscript.
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_md/Initial_manuscript.md
# Transformation of Node to Knowledge Graph Embeddings for Faster Link Prediction in Social Networks
Archit Parnami* ${}^{1}$, Mayuri Deshpande ${}^{2}$, Anant Kumar Mishra ${}^{2}$, and Minwoo Lee ${}^{1}$
${}^{1}$ The University of North Carolina at Charlotte, NC, USA
aparnami@uncc.edu, minwoo.lee@uncc.edu
${}^{2}$ Siemens Corporate Technology, Charlotte, NC, USA
Abstract. Recent advances in neural networks have solved common graph problems such as link prediction, node classification, node clustering, and node recommendation by developing embeddings of entities and relations into vector spaces. Graph embeddings encode the structural information present in a graph. The encoded embeddings can then be used to predict the missing links in a graph. However, obtaining the optimal embeddings for a graph can be a computationally challenging task, especially in embedded systems. Two techniques which we focus on in this work are 1) node embeddings from random-walk based methods and 2) knowledge graph embeddings. Random-walk based embeddings are computationally inexpensive to obtain but are sub-optimal, whereas knowledge graph embeddings perform better but are computationally expensive. In this work, we investigate a transformation model which converts node embeddings obtained from random-walk based methods directly into embeddings of the kind obtained from knowledge graph methods, without an increase in computational cost. Extensive experimentation shows that the proposed transformation model can be used for solving link prediction in real time.
Keywords: Knowledge Graphs · Node Embeddings · Link Prediction.
## 1 INTRODUCTION
With the advancement of internet technology, online social networks have become part of people's everyday life. Their analysis can be used for targeted advertising, crime detection, detection of epidemics, behavioural analysis, etc. Consequently, a lot of research has been devoted to the computational analysis of these networks, as they represent interactions within a group of people or a community, and it is of great interest to understand these underlying interactions. Generally, these networks are modeled as graphs, where a node represents a person or entity and an edge represents an interaction, relationship, or communication between two of them. For example, in social networks such as Facebook and Twitter, people are represented by nodes, and the existence of an edge between two nodes represents their friendship. Other examples include a network of products purchased together on an e-commerce website like Amazon, a network of scientists publishing at a conference where an edge represents their collaboration, or a network of employees in a company working on a common project.
---
* Work done while A. Parnami was an intern at Siemens.
---
An inherent property of social networks is that they are dynamic, i.e., new edges are added over time as a network grows. Therefore, understanding the likelihood of future association between two nodes is a fundamental problem, commonly known as link prediction [19]. Concretely, link prediction is to predict whether there will be a connection between two nodes in the future, based on the existing structure of the graph and the existing attribute information of the nodes. For example, in social networks, link prediction can suggest new friends; in e-commerce, it can recommend products to be purchased together [11]; in bioinformatics, it can find interactions between proteins [2]; in co-authorship networks, it can suggest new collaborations; and in the security domain, link prediction can assist in identifying hidden groups of terrorists or criminals [3].
Over the years, a large number of link prediction methods have been proposed [21]. These methods are classified based on different aspects, such as the network evolution rules they model, the type and amount of information they use, or their computational complexity. Similarity-based methods such as Common Neighbors [19], Jaccard's Coefficient, the Adamic-Adar Index [1], Preferential Attachment [4], and the Katz Index [16] use different graph similarity metrics to predict links in a graph. Embedding learning methods [18,2,13,25] take a matrix representation of the network and factorize it to learn a low-dimensional latent representation/embedding for each node. Recently proposed network embeddings such as DeepWalk [25] and node2vec [13] are in this category since they implicitly factorize some matrices [27].
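The similarity-based predictors named above can be sketched with plain adjacency sets; this is a generic illustration, not code from any of the cited works:

```python
# Classic similarity scores for a candidate link (u, v), given a dict
# mapping each node to the set of its neighbors.
import math

def common_neighbors(adj, u, v):
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def adamic_adar(adj, u, v):
    # shared neighbors weighted inversely by the log of their degree
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v] if len(adj[w]) > 1)

def preferential_attachment(adj, u, v):
    return len(adj[u]) * len(adj[v])

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```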
Similar to these node embedding methods, recent years have also witnessed a rapid growth in knowledge graph embedding methods. A knowledge graph (KG) is a graph with entities of different types of nodes and various relations among them as edges. Link prediction in such a graph is known as knowledge graph completion. It is similar to link prediction in social network analysis, but more challenging because of the presence of multiple types of nodes and edges. For knowledge graph completion, we not only determine whether there is a link between two entities or not, but also predict the specific type of the link. For this reason, the traditional approaches of link prediction are not capable of knowledge graph completion. Therefore, to tackle this issue, a new research direction known as knowledge graph embedding has been proposed [24,8,31,20,15,7,28]. The main idea is to embed components of a KG including entities and relations into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG.
Neither of these two approaches, however, can generate optimal embeddings quickly enough for real-time link prediction on new graphs. Random-walk based node embedding methods are computationally efficient but give poor results, whereas KG-based methods produce optimal results but are computationally expensive. Thus, in this work, we mainly focus on embedding learning methods (i.e., walk-based node embedding methods and knowledge graph completion methods) that are capable of finding optimal embeddings quickly enough to meet real-time constraints in practical applications. To bridge the gap between the computational time and the link-prediction performance of embeddings, we make the following contributions in this work:
- We compare the link-prediction performance and computational cost of both random-walk based node embedding and KG-based embedding methods, and empirically determine that random-walk based node embedding methods are faster but give sub-optimal results on link prediction, whereas KG-based embedding methods are computationally expensive but perform better on link prediction.
- We propose a transformation model that takes node embeddings from random-walk based node embedding methods and outputs near-optimal embeddings without an increase in computational cost.
- We demonstrate the results of the transformation through extensive experimentation on various social network datasets of different graph sizes and different combinations of node embedding and KG embedding methods.
## 2 Background
### 2.1 Problem Definition
Let ${G}_{\text{homo }} = \langle V, E, A\rangle$ be an unweighted, undirected homogeneous graph where $V$ is the set of vertices, $E$ is the set of observed links, i.e., $E \subset V \times V$, and $A$ is the adjacency matrix. The graph represents the topological structure of the social network, in which an edge $e = \langle u, v\rangle \in E$ represents an interaction that took place between $u$ and $v$. Let $U$ denote the universal set containing all $\left( {\left| V\right| \times \left( {\left| V\right| - 1}\right) }\right) /2$ possible edges. Then, the set of non-existent links is $U - E$. Our assumption is that there are some missing links (edges that will appear in the future) in the set $U - E$. The link prediction task is then: given the current network ${G}_{\text{homo }}$, find these missing edges.
Similarly, let ${G}_{kg} = \langle V, E, A\rangle$ be a Knowledge Graph (KG). A KG is a directed graph whose nodes are entities and whose edges are subject-property-object triple facts. Each edge of the form (head entity, relation, tail entity) (denoted as $\langle h, r, t\rangle$) indicates a relationship $r$ from entity $h$ to entity $t$. For example, $\langle$ Bob, isFriendOf, Sam $\rangle$ and $\langle$ Bob, livesIn, NewYork $\rangle$. Note that the entities and relations in a KG are usually of different types. Link prediction in KGs aims to predict the missing $h$ or $t$ for a relation fact triple $\langle h, r, t\rangle$, as used in [9,6,8]. In this task, for each position of a missing entity, the system is asked to rank a set of candidate entities from the knowledge graph, instead of only giving one best result [9,8].
We then formulate the problem of link prediction on a graph $G$ such that $G \equiv {G}_{\text{homo }} \equiv {G}_{kg}$, i.e., a KG with only one type of entity and relation. Link prediction is then to predict the missing $h$ or $t$ for a relation fact triple $\langle h, r, t\rangle$ where both $h$ and $t$ are of the same kind, for example $\langle$ Bob, isFriendOf, ? $\rangle$ or $\langle$ Sam, isFriendOf, ? $\rangle$.
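The candidate set $U - E$ from the definition above can be enumerated directly for small graphs; a minimal sketch:

```python
# From the observed edge set E, the candidate missing links are U - E,
# where U holds all |V|*(|V|-1)/2 possible undirected pairs.
from itertools import combinations

def candidate_links(vertices, observed_edges):
    E = {frozenset(e) for e in observed_edges}
    U = {frozenset(p) for p in combinations(vertices, 2)}
    return U - E  # the pairs a link predictor must score

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]
missing = candidate_links(V, E)
```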
### 2.2 Graph Embedding Methods
Graph embedding aims to represent a graph in a low-dimensional space which preserves as much graph property information as possible. The differences between graph embedding algorithms lie in how they define the graph property to be preserved. Different algorithms have different insights into node (/edge/substructure/whole-graph) similarities and how to preserve them in the embedded space. Formally, given a graph $G = \langle V, E, A\rangle$, a node embedding is a mapping ${f}_{1} : {v}_{i} \rightarrow {\mathbf{y}}_{\mathbf{i}} \in {\mathbb{R}}^{d}\;\forall i \in \left\lbrack n\right\rbrack$ where $d$ is the dimension of the embeddings, $n$ is the number of vertices, and the function ${f}_{1}$ preserves some proximity measure defined on the graph $G$. If there are multiple types of links/relations in the graph then, similar to node embeddings, relation embeddings can be obtained as ${f}_{2} : {r}_{j} \rightarrow {\mathbf{y}}_{\mathbf{j}} \in {\mathbb{R}}^{d}\;\forall j \in \left\lbrack k\right\rbrack$ where $k$ is the number of types of relations.
**Node Embeddings using Random Walks.** Random walks have been used to approximate many properties of a graph, including node centrality [23] and similarity [26]. Their key innovation is optimizing the node embeddings so that nodes have similar embeddings if they tend to co-occur on short random walks over the graph. Thus, instead of using a deterministic measure of graph proximity [5], these random-walk methods employ a flexible, stochastic measure of graph proximity, which has led to superior performance in a number of settings [12]. Two well-known examples of random-walk based methods are node2vec [13] and DeepWalk [25].
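A minimal sketch of the walk-generation step these methods share (the subsequent skip-gram training is omitted; parameter names are illustrative):

```python
# DeepWalk-style random walks: from every node, sample short walks whose
# co-occurring nodes would then be fed to a skip-gram model.
import random

def random_walk(adj, start, length, rng):
    walk = [start]
    while len(walk) < length:
        neighbors = sorted(adj[walk[-1]])
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

def generate_walks(adj, walks_per_node, length, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for node in adj:
            walks.append(random_walk(adj, node, length, rng))
    return walks

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
walks = generate_walks(adj, walks_per_node=2, length=5)
```

node2vec differs mainly in biasing the transition probabilities of these walks with its return and in-out parameters; the sampling skeleton stays the same.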
**KG Embeddings.** KG embedding methods usually consist of three steps. The first step specifies the form in which entities and relations are represented in a continuous vector space. Entities are usually represented as vectors, i.e., deterministic points in the vector space [24,8,31]. In the second step, a scoring function ${f}_{r}\left( {h, t}\right)$ is defined on each fact $\langle h, r, t\rangle$ to measure its plausibility. Facts observed in the KG tend to have higher scores than those that have not been observed. Finally, to learn those entity and relation representations (i.e., embeddings), the third step solves an optimization problem that maximizes the total plausibility of observed facts, as detailed in [30]. The KG embedding methods which we use for experiments in this paper are TransE [8], TransH [31], TransD [20], RESCAL [32], and SimplE [17].
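As an illustration of such a scoring function, a sketch of TransE's translation score $-\lVert h + r - t\rVert$ in plain Python (the toy vectors are made up, not trained embeddings):

```python
# TransE scores a fact (h, r, t) as plausible when the head embedding
# translated by the relation lands near the tail, i.e. h + r ≈ t.
import math

def transe_score(h, r, t):
    """Negative L2 distance ||h + r - t||; higher means more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

h = [0.1, 0.2]
r = [0.3, 0.1]
t = [0.4, 0.3]
good = transe_score(h, r, t)          # h + r lands exactly on t
bad = transe_score(h, r, [2.0, 2.0])  # far-away tail scores lower
```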
Fig. 1: Transformation model. Input graph: green edges are missing links and red edges represent present links. First, a random-walk method outputs node embeddings (source) for a graph. These embeddings are then used to initialize a KG embedding method, which outputs finetuned embeddings. A transformation model is then trained between the source and finetuned embeddings.
## 3 Methodology
A transformation model is proposed to expedite the fine-tuning process with KG embedding methods. Let ${G}_{n, m}$ be a graph with $n$ vertices and $m$ edges. Given the node embeddings of the graph $G$, we want to transform them into optimal node embeddings.
### 3.1 Node Embedding Generation
The input graph ${G}_{n, m}$ is fed into one of the random-walk based graph embedding methods (node2vec [13] or DeepWalk [25]), which gives us the node embeddings. Let $f$ be a random-walk based graph embedding method and let ${E}_{\text{source }}^{i}$ denote the output node embeddings:
$$
{E}_{\text{source }}^{i} = f\left( {G}^{i}\right) \tag{1}
$$
where ${G}^{i}$ is the ${i}^{th}$ graph in the dataset of graphs $D = \left\{ {{G}^{1},{G}^{2},\ldots }\right\}$ and ${E}_{\text{source }}^{i} \in {\mathbb{R}}^{n \times d}$ with the embedding dimension $d$.
### 3.2 Knowledge Embedding Generation
In a KG-based embedding algorithm (such as TransE), the input is a graph, and the initial embeddings are randomly initialized. The algorithm uses a scoring function and optimizes the initial embeddings to output the trained embeddings for the given graph. Since we are working with a homogeneous graph with only one type of relation, we do not need to learn the embeddings for the relation; hence they are kept constant and only node embeddings are learned. Let ${E}_{\text{initial }}^{i}$ be the initial node embeddings, ${E}_{\text{target }}^{i}$ the trained embeddings, and $g$ the KG method with parameters $\alpha$ :
$$
{E}_{\text{target }}^{i} = g\left( {{G}^{i},{E}_{\text{initial }}^{i};\alpha }\right) \tag{2}
$$
where ${E}_{\text{target }}^{i} \in {\mathbb{R}}^{n \times d}$ and ${E}_{\text{initial }}^{i} \in {\mathbb{R}}^{n \times d}$ .
Instead of using the randomly initialized embeddings ${E}_{\text{initial }}^{i}$ to obtain the target embeddings ${E}_{\text{target }}^{i}$ , we can initialize with ${E}_{\text{source }}^{i}$ from Eq. (1) as
$$
{E}_{\text{finetuned }}^{i} = g\left( {{G}^{i},{E}_{\text{source }}^{i};\alpha }\right) \tag{3}
$$
where ${E}_{\text{finetuned }}^{i} \in {\mathbb{R}}^{n \times d}$ are the fine-tuned output embeddings. This idea of better initialization has also been explored previously in [22,10], where it has been shown to result in embeddings of higher quality.
### 3.3 Transformation Model with Self-Attention
Using the node embeddings ${E}_{\text{source }}^{i}$ from Eq. (1) and the fine-tuned KG embeddings ${E}_{\text{finetuned }}^{i}$ from Eq. (3), we train a transformation model which learns to transform the node embeddings from a node-based method into KG embeddings. We adopt self-attention [29] on the graph adjacency matrix, as explained in Algorithm 1:
$$
{E}_{\text{transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{source }}^{i};\theta }\right) \tag{4}
$$
where ${E}_{\text{transformed }}^{i} \in {\mathbb{R}}^{n \times d}$ are the transformed embeddings and $\theta$ are the parameters of the self-attention model.
The error between the fine-tuned and transformed embeddings is calculated using the squared Euclidean distance as:
$$
{E}_{\text{error }}^{i} = \frac{1}{n}\sum {\left\| {E}_{\text{transformed }}^{i} - {E}_{\text{finetuned }}^{i}\right\| }^{2}. \tag{5}
$$
The loss on batch $\mathbf{X}$ of graphs is measured as:
$$
\operatorname{Loss}\left( \mathbf{X}\right) = \frac{1}{b}\mathop{\sum }\limits_{{i = 1}}^{b}{E}_{\text{error }}^{i} \tag{6}
$$
where $\mathbf{X} = \left\{ \left( {{E}_{\text{transformed }}^{i},{E}_{\text{finetuned }}^{i}}\right) \right\}$ and $b$ is the batch size. Since KG embeddings are trained from facts/triples obtained from the adjacency matrix of the graph, a self-attention model reinforced with the information of the adjacency matrix, when applied to node embeddings, is able to learn the transformation function, as observed in our experiments (Figure 3). The proposed algorithm is summarized in Algorithm 2.
Algorithm 1: Self-attention on graph adjacency matrix

---

Function SelfAttention $\left( {{G}_{n, m},{E}_{n \times d}}\right)$ :

${A}_{n \times n} =$ adjacency matrix of ${G}_{n, m}$

${K}_{n \times d} = \operatorname{affine}(E, d)$

${Q}_{n \times d} = \operatorname{affine}(E, d)$

${\text{Logits}}_{n \times n} = \operatorname{matmul}(Q, \operatorname{transpose}(K))$

${\text{AttendedLogits}}_{n \times n} = \text{Logits} + A$

${V}_{n \times d} = \operatorname{affine}(E, d)$

${\text{Output}}_{n \times d} = \operatorname{matmul}(\text{AttendedLogits}, V)$

return Output

---
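Algorithm 1 can be sketched in pure Python; here the `affine` maps are stubbed as the identity so the example stays self-contained, whereas a real implementation would use learned weight matrices:

```python
# Pure-Python sketch of self-attention with the graph's adjacency matrix
# added to the attention logits, mirroring Algorithm 1 step by step.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

def self_attention(A, E):
    K, Q, V = E, E, E                 # affine(E, d) stubbed as the identity
    logits = matmul(Q, transpose(K))  # n x n pairwise scores
    attended = [[l + a for l, a in zip(lrow, arow)]
                for lrow, arow in zip(logits, A)]  # add adjacency to logits
    return matmul(attended, V)        # n x d output embeddings

A = [[0, 1], [1, 0]]                  # 2-node graph with one edge
E = [[1.0, 0.0], [0.0, 1.0]]          # 2-dimensional node embeddings
out = self_attention(A, E)
```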
Algorithm 2: Training the transformation model

---

Input: dataset of graphs ${D}_{\text{train }} = \left\{ {{G}^{1},{G}^{2},\ldots ,{G}^{n}}\right\}$

foreach ${G}^{i}$ in ${D}_{\text{train }}$ do

$\quad {E}_{\text{source }}^{i} \leftarrow f\left( {G}^{i}\right)$

end

foreach ${G}^{i}$ in ${D}_{\text{train }}$ do

$\quad {E}_{\text{finetuned }}^{i} \leftarrow g\left( {{G}^{i},{E}_{\text{source }}^{i};\alpha }\right)$

end

while true do

$\quad \mathbf{B} = \left\{ \left( {{E}_{\text{source }}^{i},{E}_{\text{finetuned }}^{i}}\right) \right\}$ ▷ sample a batch

$\quad$ foreach ${E}_{\text{source }}^{i}$ in $\mathbf{B}$ do

$\quad\quad {E}_{\text{transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{source }}^{i};\theta }\right)$

$\quad$ end

$\quad \mathbf{X} = \left\{ \left( {{E}_{\text{transformed }}^{i},{E}_{\text{finetuned }}^{i}}\right) \right\}$

$\quad \theta \leftarrow \theta - \beta {\nabla }_{\theta }\operatorname{Loss}\left( \mathbf{X}\right)$ ▷ gradient update

end

---
## 4 Experiments
### 4.1 Datasets
Yang et al. [33] introduced social network datasets with ground-truth communities. Each dataset $D$ is a network with a total of $N$ nodes, $E$ edges, and a set of communities (Table 1).
| Dataset | Description | Nodes | Edges | Communities |
|---|---|---|---|---|
| YouTube | Friendship | 1,134,890 | 2,987,624 | 8,385 |
| DBLP | Co-authorship | 317,080 | 1,049,866 | 13,477 |
| Amazon | Co-purchasing | 334,863 | 925,872 | 75,149 |
| LiveJournal | Friendship | 3,997,962 | 34,681,189 | 287,512 |
| Orkut | Friendship | 3,072,441 | 117,185,083 | 6,288,363 |
Table 1: Datasets
Fig. 2: Histogram showing community size vs its frequency. DBLP, YouTube and Amazon datasets have smaller size communities and LiveJournal and Orkut have larger size communities.
The communities in each dataset are of different sizes. They range from a small size (1-20) to bigger sizes (380-400). There are more communities with small sizes and their frequency decreases as their size increases. This trend is depicted in Figure 2.
YouTube ${}^{3}$, Orkut ${}^{3}$, and LiveJournal ${}^{3}$ are friendship networks where each community is a user-defined group. Nodes in a community represent users, and edges represent their friendship.
DBLP ${}^{3}$ is a co-authorship network where two authors are connected if they publish at least one paper together. A community is represented by a publication venue, e.g., a journal or conference. Authors who published in a certain journal or conference form a community.
The Amazon ${}^{3}$ co-purchasing network is based on the "Customers Who Bought This Item Also Bought" feature of the Amazon website. If a product $i$ is frequently co-purchased with product $j$, the graph contains an undirected edge between $i$ and $j$. Each connected component in a product category defined by Amazon acts as a community, where nodes represent products in the same category and edges indicate that they were purchased together.
### 4.2 Training
We consider each community in a dataset as an individual graph ${G}_{n, m}$ with vertices representing the entities in the community and edges representing their relationships. For training the transformation model, we select communities of a particular size range, which act as the dataset $D$ of graphs (Table 2). We randomly disable 20% of the links (edges) in each graph to act as missing links for link prediction. In all the experiments, the embedding dimension is set to 32, which worked best in our pilot test. We used OpenNE ${}^{4}$ for generating node2vec and DeepWalk embeddings and OpenKE [14] for generating KG embeddings. The dataset $D$ of graphs is split into train, validation, and test splits of 64%, 16%, and 20%, respectively.
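The setup above (hiding 20% of each graph's edges, a 64/16/20 split over graphs) can be sketched as follows; the exact sampling procedure used in the paper may differ:

```python
# Hide a fraction of edges as "missing links", and split a collection of
# graphs into train/validation/test portions.
import random

def hide_edges(edges, frac=0.2, seed=0):
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    k = int(len(edges) * frac)
    return edges[k:], edges[:k]  # (visible, hidden) links

def split_dataset(graphs, seed=0):
    rng = random.Random(seed)
    graphs = list(graphs)
    rng.shuffle(graphs)
    n = len(graphs)
    n_train, n_val = int(0.64 * n), int(0.16 * n)
    return graphs[:n_train], graphs[n_train:n_train + n_val], graphs[n_train + n_val:]

visible, hidden = hide_edges([(i, i + 1) for i in range(10)])
train, val, test = split_dataset(list(range(100)))
```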
---
${}^{3}$ http://snap.stanford.edu/data/index.html#communities
---
| Dataset | Graph Size | Number of Graphs | Average Degree | Average Density |
|---|---|---|---|---|
| YouTube | 16-21 | 338 | 3.00 | 0.17 |
| DBLP | 16-21 | 654 | 4.93 | 0.29 |
| Amazon | 21-25 | 1425 | 4.00 | 0.18 |
| LiveJournal | 51-55 | 1504 | 6.11 | 0.12 |
| LiveJournal | 61-65 | 1101 | 7.20 | 0.11 |
| LiveJournal | 71-75 | 806 | 7.53 | 0.10 |
| LiveJournal | 81-85 | 672 | 6.58 | 0.08 |
| LiveJournal | 91-95 | 497 | 8.01 | 0.08 |
| LiveJournal | 101-105 | 400 | 6.85 | 0.06 |
| LiveJournal | 111-115 | 351 | 5.89 | 0.05 |
| LiveJournal | 121-125 | 332 | 7.67 | 0.06 |
| Orkut | 151-155 | 1868 | 7.20 | 0.04 |
| Orkut | 251-255 | 654 | 7.21 | 0.028 |
| Orkut | 351-355 | 335 | 7.33 | 0.020 |
Table 2: Selected datasets and graph size for experiments.
### 4.3 Evaluation Metrics
For evaluation, we use MRR and Precision@K. The algorithm predicts a ranked list of candidates for each incoming query. A filtering operation removes pre-existing triples of the knowledge graph from this list. MRR computes the mean of the reciprocal rank of the correct candidate in the list, and Precision@K evaluates the rate of correct candidates appearing in the top $K$ candidates predicted. Due to space constraints, we only present the results for MRR. Results for Precision@K can be found at our GitHub ${}^{5}$.
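The two metrics can be sketched as follows, assuming each query's candidate list is already filtered:

```python
# MRR averages the reciprocal rank of the correct candidate over queries;
# Precision@K is the fraction of queries whose correct candidate appears
# in the top K of its ranked list.
def mrr(ranked_lists, answers):
    return sum(1.0 / (ranked.index(ans) + 1)
               for ranked, ans in zip(ranked_lists, answers)) / len(answers)

def precision_at_k(ranked_lists, answers, k):
    hits = sum(ans in ranked[:k] for ranked, ans in zip(ranked_lists, answers))
    return hits / len(answers)

ranked = [["a", "b", "c"], ["b", "a", "c"]]  # candidates per query, best first
answers = ["a", "c"]                         # correct candidates: ranks 1 and 3
```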
|
| 216 |
+
|
| 217 |
+
## 5 Results & Discussions
|
| 218 |
+
|
| 219 |
+
From the results depicted in Figure 3, we observe that the target KG embeddings (TransE, TransH, etc.) almost always outperform the random-walk based source embeddings (node2vec and DeepWalk), except in the case of SimplE and DistMult, where both methods perform poorly. This can also be observed in Figure 4.
|
| 220 |
+
|
| 221 |
+
Finetuned KG embeddings achieve better or equivalent performance compared to target KG embeddings. This is confirmed by the ANOVA test in Figure 4, where there is no significant difference between the MRRs obtained from finetuned and target KG embeddings in most cases. Specifically, translational methods such as TransE, TransH, and TransD show equivalent performance for finetuned and target embeddings, whereas for SimplE, RESCAL, and DistMult the finetuned embeddings outperform the target embeddings as the graph size grows.
|
| 222 |
+
|
| 223 |
+
---
|
| 224 |
+
|
| 225 |
+
${}^{4}$ https://github.com/thunlp/OpenNE
|
| 226 |
+
|
| 227 |
+
${}^{5}$ https://github.com/ArchitParnami/GraphProject
|
| 228 |
+
|
| 229 |
+
---
|
| 230 |
+
|
| 231 |
+

|
| 232 |
+
|
| 233 |
+
Fig. 3: Performance evaluation of different embeddings on link prediction using MRR (y-axis). Source (green) refers to embeddings from node2vec (left) and DeepWalk (right). Target (brown) refers to KG embeddings from TransE, TransH, TransD, SimplE, RESCAL, or DistMult. For each source and target pair, we evaluate finetuned (orange) embeddings (obtained by initializing target method with source embeddings) and transformed (red) embeddings (obtained by applying transformation model on source embeddings). Results are presented on different datasets of varying graph sizes.
|
| 234 |
+
|
| 235 |
+

|
| 236 |
+
|
| 237 |
+
Fig. 4: ANOVA test of MRR scores from two embedding methods (Method 1 and Method 2). The difference of MRR scores between the two methods is significant when their p-values are $< {0.05}$ (light green) and not significant otherwise (light red). The values in each cell are the difference between the means of MRR scores from two methods (Method 2 - Method 1). The text in bold represents when Method 2 did better than Method 1. Source method refers to node2vec (left) and DeepWalk (right). Target method refers to TransE, TransH, TransD, SimplE, RESCAL, or DistMult in each row.
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
|
| 241 |
+
Fig. 5: CPU Time (left y-axis) vs Graph Size (x-axis) and Mean MRR (right y-axis) vs Graph Size comparison of finetuned (TransE finetuned from node2vec) and transformed embeddings (from node2vec). As the graph size increases, the time to obtain embeddings from KG methods (TransE) also increases significantly. However, there is no significant increase in time for the transformation (from node2vec) once we have the transformation model. The Mean MRR scores of both finetuned and transformed embeddings also drop as the graph size increases; however, they perform equally well for graphs of size <76. Note that finetuning time and transformation time both include the time to obtain node2vec embeddings as well.
|
| 242 |
+
|
| 243 |
+
Transformed embeddings consistently outperform source embeddings and perform similarly to finetuned embeddings, at least for graphs of sizes up to 65. The performance drop starts at graph size 71-75 for the transformation from DeepWalk to TransD, and at 81-85 for the transformation from node2vec to TransE. For RESCAL, the transformation works for larger graphs with node2vec, and up to size 121-125 with DeepWalk.
|
| 244 |
+
|
| 245 |
+
As the graph size increases (top to bottom), the overall MRR scores decrease for all the embeddings, as expected. In Figure 5, we compare the computation time and MRR performance of transformed and finetuned embeddings where the source method is node2vec and the target method is TransE. The transformed embeddings give similar performance to the finetuned embeddings (without any significant increase in computational cost) up to graphs of size 71-75. Thereafter the transformed embeddings perform poorly; we attribute this to the poor finetuned embeddings on which the transformation model was trained.
|
| 246 |
+
|
| 247 |
+
## 6 Conclusion
|
| 248 |
+
|
| 249 |
+
In this work, we have demonstrated that random-walk based node embedding (source) methods are computationally efficient but give sub-optimal results on link prediction in social networks, whereas KG-based embedding (target and finetuned) methods perform better but are computationally expensive. To meet our requirement of generating optimal embeddings quickly for real-time link prediction, we proposed a self-attention based transformation model that converts walk-based embeddings into near-optimal KG embeddings. The proposed model works well for smaller graphs, but as the complexity of the graph increases, the transformation performance decreases. For future work, our goal is to explore better transformation models for bigger graphs.
|
| 250 |
+
|
| 251 |
+
## References
|
| 252 |
+
|
| 253 |
+
1. Adamic, L.A., Adar, E.: Friends and neighbors on the web. Social networks 25(3), 211-230 (2003)
|
| 254 |
+
|
| 255 |
+
2. Airoldi, E.M., Blei, D.M., Fienberg, S.E., Xing, E.P., Jaakkola, T.: Mixed membership stochastic block models for relational data with application to protein-protein interactions. In: Proceedings of the international biometrics society annual meeting. vol. 15 (2006)
|
| 256 |
+
|
| 257 |
+
3. Al Hasan, M., Chaoji, V., Salem, S., Zaki, M.: Link prediction using supervised learning. In: SDM06: workshop on link analysis, counter-terrorism and security (2006)
|
| 258 |
+
|
| 259 |
+
4. Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509-512 (1999)
|
| 260 |
+
|
| 261 |
+
5. Belkin, M., Niyogi, P.: Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Advances in neural information processing systems. pp. 585-591 (2002)
|
| 262 |
+
|
| 263 |
+
6. Bordes, A., Glorot, X., Weston, J., Bengio, Y.: Joint learning of words and meaning representations for open-text semantic parsing. In: Artificial Intelligence and Statistics. pp. 127-135 (2012)
|
| 264 |
+
|
| 265 |
+
7. Bordes, A., Glorot, X., Weston, J., Bengio, Y.: A semantic matching energy function for learning with multi-relational data. Machine Learning 94(2), 233-259 (2014)
|
| 266 |
+
|
| 267 |
+
8. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Advances in neural information processing systems. pp. 2787-2795 (2013)
|
| 268 |
+
|
| 269 |
+
9. Bordes, A., Weston, J., Collobert, R., Bengio, Y.: Learning structured embeddings of knowledge bases. In: Twenty-Fifth AAAI Conference on Artificial Intelligence (2011)
|
| 270 |
+
|
| 271 |
+
10. Chen, H., Perozzi, B., Hu, Y., Skiena, S.: Harp: Hierarchical representation learning for networks. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
|
| 272 |
+
|
| 273 |
+
11. Chen, H., Li, X., Huang, Z.: Link prediction approach to collaborative filtering. In: Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries. pp. 141-142. IEEE (2005)
|
| 274 |
+
|
| 275 |
+
12. Goyal, P., Ferrara, E.: Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems 151, 78-94 (2018)
|
| 276 |
+
|
| 277 |
+
13. Grover, A., Leskovec, J.: node2vec: Scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 855-864. ACM (2016)
|
| 278 |
+
|
| 279 |
+
14. Han, X., Cao, S., Lv, X., Lin, Y., Liu, Z., Sun, M., Li, J.: Openke: An open toolkit for knowledge embedding. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 139-144 (2018)
|
| 280 |
+
|
| 281 |
+
15. Jenatton, R., Roux, N.L., Bordes, A., Obozinski, G.R.: A latent factor model for highly multi-relational data. In: Advances in Neural Information Processing Systems. pp. 3167-3175 (2012)
|
| 282 |
+
|
| 283 |
+
16. Katz, L.: A new status index derived from sociometric analysis. Psychometrika 18(1), 39-43 (1953)
|
| 284 |
+
|
| 285 |
+
17. Kazemi, S.M., Poole, D.: Simple embedding for link prediction in knowledge graphs. In: Advances in Neural Information Processing Systems. pp. 4284-4295 (2018)
|
| 286 |
+
|
| 287 |
+
18. Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer (8), 30-37 (2009)
|
| 288 |
+
|
| 289 |
+
19. Liben-Nowell, D., Kleinberg, J.: The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology 58(7), 1019-1031 (2007)
|
| 290 |
+
|
| 291 |
+
20. Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings for knowledge graph completion. In: Twenty-ninth AAAI conference on artificial intelligence (2015)
|
| 292 |
+
|
| 293 |
+
21. Lü, L., Zhou, T.: Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications 390(6), 1150-1170 (2011)
|
| 294 |
+
|
| 295 |
+
22. Luo, Y., Wang, Q., Wang, B., Guo, L.: Context-dependent knowledge graph embedding. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pp. 1656-1661 (2015)
|
| 296 |
+
|
| 297 |
+
23. Newman, M.E.: A measure of betweenness centrality based on random walks. Social Networks 27(1), 39-54 (2005)
|
| 298 |
+
|
| 299 |
+
24. Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: Proceedings of the 28th International Conference on International Conference on Machine Learning. vol. 11, pp. 809-816 (2011)
|
| 300 |
+
|
| 301 |
+
25. Perozzi, B., Al-Rfou, R., Skiena, S.: Deepwalk: Online learning of social representations. In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 701-710. ACM (2014)
|
| 302 |
+
|
| 303 |
+
26. Pirotte, A., Renders, J.M., Saerens, M., et al.: Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge & Data Engineering (3), 355-369 (2007)
|
| 304 |
+
|
| 305 |
+
27. Qiu, J., Dong, Y., Ma, H., Li, J., Wang, K., Tang, J.: Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. pp. 459-467. ACM (2018)
|
| 306 |
+
|
| 307 |
+
28. Socher, R., Chen, D., Manning, C.D., Ng, A.: Reasoning with neural tensor networks for knowledge base completion. In: Advances in neural information processing systems. pp. 926-934 (2013)
|
| 308 |
+
|
| 309 |
+
29. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: Advances in neural information processing systems. pp. 5998-6008 (2017)
|
| 310 |
+
|
| 311 |
+
30. Wang, Q., Mao, Z., Wang, B., Guo, L.: Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering 29(12), 2724-2743 (2017)
|
| 312 |
+
|
| 313 |
+
31. Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: Twenty-Eighth AAAI conference on artificial intelligence (2014)
|
| 314 |
+
|
| 315 |
+
32. Yang, B., Yih, W.t., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 (2014)
|
| 316 |
+
|
| 317 |
+
33. Yang, J., Leskovec, J.: Defining and evaluating network communities based on ground-truth. Knowledge and Information Systems 42(1), 181-213 (2015)
|
papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,294 @@
| 1 |
+
§ TRANSFORMATION OF NODE TO KNOWLEDGE GRAPH EMBEDDINGS FOR FASTER LINK PREDICTION IN SOCIAL NETWORKS
|
| 2 |
+
|
| 3 |
+
Archit Parnami* ${}^{1}$ , Mayuri Deshpande ${}^{2}$ , Anant Kumar Mishra ${}^{2}$ , and Minwoo Lee ${}^{1}$
|
| 4 |
+
|
| 5 |
+
${}^{1}$ The University of North Carolina at Charlotte, NC, USA
|
| 6 |
+
|
| 7 |
+
aparnami@uncc.edu, minwoo.lee@uncc.edu
|
| 8 |
+
|
| 9 |
+
${}^{2}$ Siemens Corporate Technology, Charlotte, NC, USA
|
| 10 |
+
|
| 11 |
+
Abstract. Recent advances in neural networks have addressed common graph problems such as link prediction, node classification, node clustering, and node recommendation by developing embeddings of entities and relations into vector spaces. Graph embeddings encode the structural information present in a graph. The encoded embeddings can then be used to predict missing links in a graph. However, obtaining the optimal embeddings for a graph can be a computationally challenging task, especially in an embedded system. Two techniques which we focus on in this work are 1) node embeddings from random walk based methods and 2) knowledge graph embeddings. Random walk based embeddings are computationally inexpensive to obtain but are sub-optimal, whereas knowledge graph embeddings perform better but are computationally expensive. In this work, we investigate a transformation model which converts node embeddings obtained from random walk based methods into embeddings obtained from knowledge graph methods directly, without an increase in computational cost. Extensive experimentation shows that the proposed transformation model can be used for solving link prediction in real-time.
|
| 12 |
+
|
| 13 |
+
Keywords: Knowledge Graphs - Node Embeddings - Link Prediction.
|
| 14 |
+
|
| 15 |
+
§ 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
With the advancement of internet technology, online social networks have become part of people's everyday life. Their analysis can be used for targeted advertising, crime detection, detection of epidemics, behavioural analysis, etc. Consequently, a lot of research has been devoted to the computational analysis of these networks, as they represent interactions within a group of people or a community, and it is of great interest to understand these underlying interactions. Generally, these networks are modeled as graphs, where a node represents a person or entity and an edge represents an interaction, relationship, or communication between two of them. For example, in a social network such as Facebook or Twitter, people are represented by nodes and the existence of an edge between two nodes represents their friendship. Other examples include a network of products purchased together on an e-commerce website like Amazon, a network of scientists publishing at a conference where an edge represents a collaboration, or a network of employees in a company working on a common project.
|
| 18 |
+
|
| 19 |
+
* Work done while A. Parnami was an intern at Siemens.
|
| 20 |
+
|
| 21 |
+
The inherent nature of social networks is that they are dynamic, i.e., new edges are added over time as a network grows. Therefore, estimating the likelihood of a future association between two nodes is a fundamental problem, commonly known as link prediction [19]. Concretely, link prediction asks whether there will be a connection between two nodes in the future, based on the existing structure of the graph and the existing attribute information of the nodes. For example, in social networks, link prediction can suggest new friends; in e-commerce, it can recommend products to be purchased together [11]; in bioinformatics, it can find interactions between proteins [2]; in co-authorship networks, it can suggest new collaborations; and in the security domain, it can assist in identifying hidden groups of terrorists or criminals [3].
|
| 22 |
+
|
| 23 |
+
Over the years, a large number of link prediction methods have been proposed [21]. These methods are classified based on different aspects, such as the network evolution rules they model, the type and amount of information they use, or their computational complexity. Similarity-based methods such as Common Neighbors [19], Jaccard's Coefficient, the Adamic-Adar Index [1], Preferential Attachment [4], and the Katz Index [16] use different graph similarity metrics to predict links in a graph. Embedding learning methods [18,2,13,25] take a matrix representation of the network and factorize it to learn a low-dimensional latent representation/embedding for each node. Recently proposed network embeddings such as DeepWalk [25] and node2vec [13] are in this category since they implicitly factorize some matrices [27].
|
| 24 |
+
|
| 25 |
+
Similar to these node embedding methods, recent years have also witnessed rapid growth in knowledge graph embedding methods. A knowledge graph (KG) is a graph whose nodes are entities of different types and whose edges are various relations among them. Link prediction in such a graph is known as knowledge graph completion. It is similar to link prediction in social network analysis, but more challenging because of the presence of multiple types of nodes and edges. For knowledge graph completion, we not only determine whether there is a link between two entities, but also predict the specific type of the link. For this reason, traditional link prediction approaches are not capable of knowledge graph completion. To tackle this issue, a new research direction known as knowledge graph embedding has been proposed [24,8,31,20,15,7,28]. The main idea is to embed the components of a KG, including entities and relations, into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG.
|
| 26 |
+
|
| 27 |
+
Neither of these two approaches, however, can generate "optimal" embeddings "quickly" for real-time link prediction on new graphs. Random walk based node embedding methods are computationally efficient but give poor results, whereas KG-based methods produce optimal results but are computationally expensive. Thus, in this work we mainly focus on embedding learning methods (i.e., walk-based node embedding methods and knowledge graph completion methods) which are capable of finding optimal embeddings quickly enough to meet the real-time constraints of practical applications. To bridge the gap between the computational time and the link prediction performance of embeddings, we make the following contributions in this work:
|
| 28 |
+
|
| 29 |
+
* We compare the link prediction performance and computational cost of both random walk based node embedding and KG-based embedding methods, and empirically determine that random walk based node embedding methods are faster but give sub-optimal results on link prediction, whereas KG-based embedding methods are computationally expensive but perform better.
|
| 30 |
+
|
| 31 |
+
* We propose a transformation model that takes node embeddings from random walk based methods and outputs near-optimal embeddings without an increase in computational cost.
|
| 32 |
+
|
| 33 |
+
* We demonstrate the results of the transformation through extensive experimentation on various social network datasets of different graph sizes and with different combinations of node embedding and KG embedding methods.
|
| 34 |
+
|
| 35 |
+
§ 2 BACKGROUND
|
| 36 |
+
|
| 37 |
+
§ 2.1 PROBLEM DEFINITION
|
| 38 |
+
|
| 39 |
+
Let ${G}_{\text{ homo }} = \langle V,E,A\rangle$ be an unweighted, undirected homogeneous graph, where $V$ is the set of vertices, $E \subset V \times V$ is the set of observed links, and $A$ is the adjacency matrix. The graph ${G}_{\text{ homo }}$ represents the topological structure of the social network, in which an edge $e = \langle u,v\rangle \in E$ represents an interaction that took place between $u$ and $v$ . Let $U$ denote the universal set containing all $\left( {\left| V\right| \times \left( {\left| V\right| - 1}\right) }\right) /2$ possible edges. Then the set of non-existent links is $U - E$ . Our assumption is that there are some missing links (edges that will appear in the future) in the set $U - E$ . The link prediction task is then: given the current network ${G}_{\text{ homo }}$ , find these missing edges.
|
| 40 |
+
|
| 41 |
+
Similarly, let ${G}_{kg} = \langle V,E,A\rangle$ be a knowledge graph (KG). A KG is a directed graph whose nodes are entities and whose edges are subject-property-object triple facts. Each edge of the form (head entity, relation, tail entity) (denoted as $\langle h,r,t\rangle$ ) indicates a relationship $r$ from entity $h$ to entity $t$ . For example, $\langle$ Bob, isFriendOf, Sam $\rangle$ and $\langle$ Bob, livesIn, NewYork $\rangle$ . Note that the entities and relations in a KG are usually of different types. Link prediction in KGs aims to predict the missing $h$ or $t$ for a relation fact triple $\langle h,r,t\rangle$ , as used in [9,6,8]. In this task, for each position of a missing entity, the system is asked to rank a set of candidate entities from the knowledge graph, instead of giving only one best result [9,8].
|
| 42 |
+
|
| 43 |
+
We then formulate the problem of link prediction on a graph $G$ such that $G \equiv {G}_{\text{ homo }} \equiv {G}_{kg}$ , i.e., a KG with only one type of entity and relation. Link prediction is then to predict the missing $h$ or $t$ for a relation fact triple $\langle h,r,t\rangle$ where both $h$ and $t$ are of the same kind, for example $\langle$ Bob, isFriendOf, ? $\rangle$ or $\langle$ Sam, isFriendOf, ? $\rangle$ .
|
| 44 |
+
|
| 45 |
+
§ 2.2 GRAPH EMBEDDING METHODS
|
| 46 |
+
|
| 47 |
+
Graph embedding aims to represent a graph in a low-dimensional space while preserving as much graph property information as possible. The differences between graph embedding algorithms lie in how they define the graph property to be preserved: different algorithms have different notions of node (/edge/substructure/whole-graph) similarity and of how to preserve it in the embedded space. Formally, given a graph $G = \langle V,E,A\rangle$ , a node embedding is a mapping $f : {v}_{i} \rightarrow {\mathbf{y}}_{\mathbf{i}} \in {\mathbb{R}}^{d}\;\forall i \in \left\lbrack n\right\rbrack$ , where $d$ is the dimension of the embeddings, $n$ is the number of vertices, and the function $f$ preserves some proximity measure defined on the graph $G$ . If there are multiple types of links/relations in the graph, then, analogously to node embeddings, relation embeddings can be obtained as $f : {r}_{j} \rightarrow {\mathbf{y}}_{\mathbf{j}} \in {\mathbb{R}}^{d}\;\forall j \in \left\lbrack k\right\rbrack$ , where $k$ is the number of relation types.
|
| 48 |
+
|
| 49 |
+
Node Embeddings using Random Walks Random walks have been used to approximate many properties of a graph, including node centrality [23] and similarity [26]. The key innovation of walk-based embedding methods is optimizing the node embeddings so that nodes have similar embeddings if they tend to co-occur on short random walks over the graph. Thus, instead of using a deterministic measure of graph proximity [5], these random walk methods employ a flexible, stochastic measure of graph proximity, which has led to superior performance in a number of settings [12]. Two well-known examples of random walk based methods are node2vec [13] and DeepWalk [25].
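The walk-generation step can be sketched as follows. This is our own illustration of a uniform, DeepWalk-style walker, not the papers' code; node2vec would additionally bias the next-hop probabilities with its return and in-out parameters.

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=0):
    """Generate uniform random walks over adjacency lists; DeepWalk feeds
    such walks to a skip-gram model to learn node embeddings."""
    rng = random.Random(seed)
    walks = []
    for start in adj:                          # start walks from every node
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:                   # dead end: stop the walk
                    break
                walk.append(rng.choice(nbrs))  # uniform next-hop choice
            walks.append(walk)
    return walks

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: []}  # toy adjacency lists
walks = random_walks(adj)
```

Nodes that co-occur in these walks end up with similar embeddings after skip-gram training.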
|
| 50 |
+
|
| 51 |
+
KG Embeddings KG embedding methods usually consist of three steps. The first step specifies the form in which entities and relations are represented in a continuous vector space. Entities are usually represented as vectors, i.e., deterministic points in the vector space [24,8,31]. In the second step, a scoring function ${f}_{r}\left( {h,t}\right)$ is defined on each fact $\langle h,r,t\rangle$ to measure its plausibility; facts observed in the KG tend to have higher scores than those that have not been observed. Finally, to learn the entity and relation representations (i.e., embeddings), the third step solves an optimization problem that maximizes the total plausibility of the observed facts, as detailed in [30]. The KG embedding methods we use for experiments in this paper are TransE [8], TransH [31], TransD [20], RESCAL [24], DistMult [32] and SimplE [17].
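As a toy illustration of such a scoring function (our own sketch, not a library implementation), TransE scores a fact $\langle h,r,t\rangle$ as plausible when $h + r \approx t$, i.e., the score is the negative Euclidean distance $-\lVert h + r - t\rVert$:

```python
def transe_score(h, r, t):
    """Negative distance ||h + r - t||: higher score = more plausible fact."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

h, r = [0.1, 0.2], [0.3, -0.1]            # toy entity and relation vectors
t_true, t_false = [0.4, 0.1], [1.0, 1.0]  # observed vs corrupted tail
assert transe_score(h, r, t_true) > transe_score(h, r, t_false)
```

Training then adjusts the vectors so that observed triples score higher than corrupted ones.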
|
| 52 |
+
|
| 53 |
+
|
| 54 |
+
|
| 55 |
+
Fig. 1: Transformation Model. Input Graph: Green edges are missing links and red edges represents present links. First, a random walk method outputs node embeddings (source) for a graph. These embeddings are then used to initialize KG embedding method, which outputs finetuned embeddings. A transformation model is then trained between source and finetuned embeddings.
|
| 56 |
+
|
| 57 |
+
§ 3 METHODOLOGY
|
| 58 |
+
|
| 59 |
+
A transformation model is proposed to expedite the fine-tuning process of KG embedding methods. Let ${G}_{n,m}$ be a graph with $n$ vertices and $m$ edges. Given the node embeddings of the graph $G$ , we want to transform them into optimal node embeddings.
|
| 60 |
+
|
| 61 |
+
§ 3.1 NODE EMBEDDING GENERATION
|
| 62 |
+
|
| 63 |
+
The input graph ${G}_{n,m}$ is fed into one of the random walk based graph embedding methods (node2vec [13] or DeepWalk [25]), which gives us the node embeddings. Let $f$ be a random walk based graph embedding method and ${E}_{\text{ source }}^{i}$ denote the output node embeddings:
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
{E}_{\text{ source }}^{i} = f\left( {G}^{i}\right) \tag{1}
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
where ${G}^{i}$ is the ${i}^{th}$ graph in the dataset of graphs $D = \left\{ {{G}^{1},{G}^{2},\ldots }\right\}$ and ${E}_{\text{ source }}^{i} \in$ ${\mathbb{R}}^{n \times d}$ with the embedding dimension $d$ .
|
| 70 |
+
|
| 71 |
+
§ 3.2 KNOWLEDGE EMBEDDING GENERATION
|
| 72 |
+
|
| 73 |
+
In a KG-based embedding algorithm (such as TransE), the input is a graph and the initial embeddings are randomly initialized. The algorithm uses a scoring function and optimizes the initial embeddings to output the trained embeddings for the given graph. Since we are working with a homogeneous graph with only one type of relation, we do not need to learn an embedding for the relation; it is kept constant and only the node embeddings are learnt. Let ${E}_{\text{ initial }}^{i}$ be the initial node embeddings, ${E}_{\text{ target }}^{i}$ the trained embeddings, and $g$ the KG method with parameters $\alpha$ .
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
{E}_{\text{ target }}^{i} = g\left( {{G}^{i},{E}_{\text{ initial }}^{i};\alpha }\right) \tag{2}
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
where ${E}_{\text{ target }}^{i} \in {\mathbb{R}}^{n \times d}$ and ${E}_{\text{ initial }}^{i} \in {\mathbb{R}}^{n \times d}$ .
|
| 80 |
+
|
| 81 |
+
Instead of using randomly initialized embeddings ${E}_{\text{ initial }}^{i}$ to obtain target embeddings ${E}_{\text{ target }}^{i}$ , we can initialize with ${E}_{\text{ source }}^{i}$ in Eq. (1) as
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
{E}_{\text{ finetuned }}^{i} = g\left( {{G}^{i},{E}_{\text{ source }}^{i};\alpha }\right) \tag{3}
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
where ${E}_{\text{ finetuned }}^{i} \in {\mathbb{R}}^{n \times d}$ are the fine-tuned output embeddings. This idea of better initialization has been explored previously in [22,10], where it has been shown to result in embeddings of higher quality.
|
| 88 |
+
|
| 89 |
+
§ 3.3 TRANSFORMATION MODEL WITH SELF-ATTENTION
|
| 90 |
+
|
| 91 |
+
Using the node embeddings ${E}_{\text{ source }}^{i}$ from Eq. (1) and the fine-tuned KG embeddings ${E}_{\text{ finetuned }}^{i}$ from Eq. (3), we train a transformation model which learns to transform the node embeddings from a node-based method into KG embeddings. We adopt self-attention [29] on the graph adjacency matrix, as explained in Algorithm 1:
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
{E}_{\text{ transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{ source }}^{i};\theta }\right) \tag{4}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
where ${E}_{\text{ transformed }}^{i} \in {R}^{n \times d}$ are the transformed embeddings and $\theta$ are the parameters of the self-attention model.
|
| 98 |
+
|
| 99 |
+
The error between the fine-tuned and transformed embeddings is calculated using squared euclidean distance as:
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
{E}_{\text{ error }}^{i} = \frac{1}{n}\sum {\begin{Vmatrix}{E}_{\text{ transformed }}^{i} - {E}_{\text{ finetuned }}^{i}\end{Vmatrix}}^{2}. \tag{5}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
The loss on batch $\mathbf{X}$ of graphs is measured as:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
\operatorname{Loss}\left( \mathbf{X}\right) = \frac{1}{b}\mathop{\sum }\limits_{{i = 1}}^{b}{E}_{\text{ error }}^{i} \tag{6}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
where $\mathbf{X} = \left\{ \left( {{E}_{\text{ transformed }}^{i},{E}_{\text{ finetuned }}^{i}}\right) \right\}$ and $b$ is the batch size. Since KG embeddings are trained from facts/triplets which are obtained from the adjacency matrix of the graph, a self-attention model reinforced with the information of the adjacency matrix, when applied to node embeddings, is able to learn the transformation function, as observed in our experiments (Figure 3). The proposed algorithm is summarized in Algorithm 2.
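The per-graph error of Eq. (5) and the batch loss of Eq. (6) can be sketched directly in plain Python (a minimal illustration; the function names are ours):

```python
def embedding_error(transformed, finetuned):
    """Eq. (5): mean squared Euclidean distance between two n x d matrices."""
    n = len(transformed)
    return sum(sum((a - b) ** 2 for a, b in zip(row_t, row_f))
               for row_t, row_f in zip(transformed, finetuned)) / n

def batch_loss(batch):
    """Eq. (6): average per-graph error over (transformed, finetuned) pairs."""
    return sum(embedding_error(t, f) for t, f in batch) / len(batch)

t = [[1.0, 0.0], [0.0, 1.0]]   # transformed embeddings, n = 2, d = 2
f = [[0.0, 0.0], [0.0, 0.0]]   # finetuned embeddings for the same graph
loss = batch_loss([(t, f)])    # single-graph batch
```

The gradient of this loss with respect to the self-attention parameters $\theta$ drives the update in Algorithm 2.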
|
| 112 |
+
|
| 113 |
+
Algorithm 1: Self-attention on graph adjacency matrix
|
| 114 |
+
|
| 115 |
+
Function SelfAttention $\left( {{G}_{n,m},{E}_{n \times d}}\right)$
|
| 116 |
+
|
| 117 |
+
${A}_{n \times n} =$ Adjacency Matrix of ${G}_{n,m}$
|
| 118 |
+
|
| 119 |
+
${K}_{n \times d} =$ affine(E, d)
|
| 120 |
+
|
| 121 |
+
${Q}_{n \times d} =$ affine(E, d)
|
| 122 |
+
|
| 123 |
+
${\text{ Logits }}_{n \times n} =$ matmul(Q, transpose(K))
|
| 124 |
+
|
| 125 |
+
${\text{ AttendedLogits }}_{n \times n} =$ Logits + A
|
| 126 |
+
|
| 127 |
+
${V}_{n \times d} =$ affine(E, d)
|
| 128 |
+
|
| 129 |
+
${\text{ Output }}_{n \times d} =$ matmul(AttendedLogits, V)
|
| 130 |
+
|
| 131 |
+
return Output
|
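As a concrete sketch, the self-attention step of Algorithm 1 can be written in a few lines of NumPy. The random projection matrices here are hypothetical stand-ins for the learned affine layers, and no softmax is applied, mirroring the algorithm above:

```python
import numpy as np

def self_attention_adj(A, E, d, rng):
    """Algorithm 1: self-attention over node embeddings E (n x d_in),
    with the adjacency matrix A (n x n) added to the attention logits."""
    # affine(E, d): random projections stand in for the learned weights
    Wk, Wq, Wv = (rng.standard_normal((E.shape[1], d)) for _ in range(3))
    K, Q, V = E @ Wk, E @ Wq, E @ Wv
    logits = Q @ K.T            # (n, n) pairwise attention logits
    attended = logits + A       # reinforce logits with the adjacency matrix
    return attended @ V         # (n, d) output embeddings
```

In a trained model, the three projections would be learnable parameters $\theta$ rather than fixed random matrices.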
| 132 |
+
|
| 133 |
+
Algorithm 2: Training the transformation model
|
| 134 |
+
|
| 135 |
+
Input: Dataset of Graphs ${D}_{\text{ train }} = \left\{ {{G}^{1},{G}^{2},\ldots ,{G}^{n}}\right\}$
|
| 136 |
+
|
| 137 |
+
foreach ${G}^{i}$ in ${D}_{\text{ train }}$ do
|
| 138 |
+
|
| 139 |
+
${E}_{\text{ source }}^{i} \leftarrow f\left( {G}^{i}\right)$
|
| 140 |
+
|
| 141 |
+
end
|
| 142 |
+
|
| 143 |
+
foreach ${G}^{i}$ in ${D}_{\text{ train }}$ do
|
| 144 |
+
|
| 145 |
+
${E}_{\text{finetuned}}^{i} \leftarrow g\left( {G}^{i},{E}_{\text{source}}^{i};\alpha \right)$
|
| 146 |
+
|
| 147 |
+
end
|
| 148 |
+
|
| 149 |
+
while true do
|
| 150 |
+
|
| 151 |
+
$\mathbf{B} = \left\{ \left( {E}_{\text{source}}^{i},{E}_{\text{finetuned}}^{i}\right) \right\}$ $\triangleright$ Sample batch
|
| 152 |
+
|
| 153 |
+
foreach ${E}_{\text{ source }}^{i}$ in $\mathbf{B}$ do
|
| 154 |
+
|
| 155 |
+
${E}_{\text{ transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{ source }}^{i};\theta }\right)$
|
| 156 |
+
|
| 157 |
+
end
|
| 158 |
+
|
| 159 |
+
$\mathbf{X} = \left\{ \left( {{E}_{\text{ transformed }}^{i},{E}_{\text{ finetuned }}^{i}}\right) \right\}$
|
| 160 |
+
|
| 161 |
+
$\theta \leftarrow \theta - \beta {\nabla }_{\theta }\operatorname{Loss}\left( \mathbf{X}\right)$ $\triangleright$ Update
|
| 162 |
+
|
| 163 |
+
end
|
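The objective in Eqs. (5) and (6) that drives the update step of Algorithm 2 can be sketched directly; the helper name `batch_loss` is ours, not from the paper:

```python
import numpy as np

def batch_loss(pairs):
    """Eqs. (5)-(6): per-graph mean squared Euclidean error between
    transformed and finetuned embeddings, averaged over the batch."""
    errors = [np.sum((e_tr - e_ft) ** 2) / e_tr.shape[0]   # Eq. (5), n = #nodes
              for e_tr, e_ft in pairs]
    return sum(errors) / len(errors)                       # Eq. (6), b = batch size
```

The gradient step $\theta \leftarrow \theta - \beta \nabla_\theta \operatorname{Loss}(\mathbf{X})$ would then be taken with respect to the self-attention parameters producing the transformed embeddings.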
| 164 |
+
|
| 165 |
+
§ 4 EXPERIMENTS
|
| 166 |
+
|
| 167 |
+
§ 4.1 DATASETS
|
| 168 |
+
|
| 169 |
+
Yang et al. [33] introduced social network datasets with ground-truth communities. Each dataset $D$ is a network with a total of $N$ nodes, $E$ edges, and a set of communities (Table 1).
|
| 170 |
+
|
| 171 |
+
| Dataset | Description | Nodes | Edges | Communities |
| --- | --- | --- | --- | --- |
| YouTube | Friendship | 1,134,890 | 2,987,624 | 8,385 |
| DBLP | Co-authorship | 317,080 | 1,049,866 | 13,477 |
| Amazon | Co-purchasing | 334,863 | 925,872 | 75,149 |
| LiveJournal | Friendship | 3,997,962 | 34,681,189 | 287,512 |
| Orkut | Friendship | 3,072,441 | 117,185,083 | 6,288,363 |

Table 1: Datasets
|
| 193 |
+
|
| 194 |
+
|
| 195 |
+
|
| 196 |
+
Fig. 2: Histogram showing community size vs. its frequency. The DBLP, YouTube, and Amazon datasets have smaller communities, while LiveJournal and Orkut have larger communities.
|
| 197 |
+
|
| 198 |
+
The communities in each dataset vary in size, ranging from small (1-20) to large (380-400). Small communities are the most frequent, and the frequency decreases as community size increases. This trend is depicted in Figure 2.
|
| 199 |
+
|
| 200 |
+
YouTube${}^{3}$, Orkut${}^{3}$, and LiveJournal${}^{3}$ are friendship networks where each community is a user-defined group. Nodes in a community represent users, and edges represent their friendships.
|
| 201 |
+
|
| 202 |
+
${\mathrm{DBLP}}^{3}$ is a co-authorship network where two authors are connected if they have published at least one paper together. A community is represented by a publication venue, e.g., a journal or conference: authors who published in a certain journal or conference form a community.
|
| 203 |
+
|
| 204 |
+
The Amazon${}^{3}$ co-purchasing network is based on the "Customers Who Bought This Item Also Bought" feature of the Amazon website. If a product $i$ is frequently co-purchased with product $j$, the graph contains an undirected edge between $i$ and $j$. Each connected component in a product category defined by Amazon acts as a community, where nodes represent products in the same category and edges indicate that they were purchased together.
|
| 205 |
+
|
| 206 |
+
§ 4.2 TRAINING
|
| 207 |
+
|
| 208 |
+
We consider each community in a dataset as an individual graph ${G}_{n,m}$, with vertices representing the entities in the community and edges representing their relationships. For training the transformation model, we select communities of a particular size range, which act as the dataset $D$ of graphs (Table 2). We randomly disable ${20}\%$ of the links (edges) in each graph to act as missing links for link prediction. In all the experiments, the embedding dimension is set to 32, which works best in our pilot test. We used OpenNE${}^{4}$ for generating node2vec and DeepWalk embeddings and OpenKE [14] for generating KG embeddings. The dataset $D$ of graphs is split into train, validation, and test splits of ${64}\%$, ${16}\%$, and ${20}\%$, respectively.
|
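The link-removal step can be sketched as follows; `hold_out_edges` is a hypothetical helper, not the paper's code:

```python
import random

def hold_out_edges(edges, frac=0.2, seed=0):
    """Randomly disable a fraction of edges to act as missing links
    for link prediction; returns (remaining edges, held-out links)."""
    edges = list(edges)
    random.Random(seed).shuffle(edges)  # fixed seed for reproducible splits
    k = int(len(edges) * frac)
    return edges[k:], edges[:k]
```

The held-out 20% of edges serve as the positive test links the model must recover.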
| 209 |
+
|
| 210 |
+
${}^{3}$ http://snap.stanford.edu/data/index.html#communities
|
| 211 |
+
|
| 212 |
+
| Dataset | Graph Size | Number of Graphs | Average Degree | Average Density |
| --- | --- | --- | --- | --- |
| YouTube | 16-21 | 338 | 3.00 | 0.17 |
| DBLP | 16-21 | 654 | 4.93 | 0.29 |
| Amazon | 21-25 | 1425 | 4.00 | 0.18 |
| LiveJournal | 51-55 | 1504 | 6.11 | 0.12 |
| LiveJournal | 61-65 | 1101 | 7.20 | 0.11 |
| LiveJournal | 71-75 | 806 | 7.53 | 0.10 |
| LiveJournal | 81-85 | 672 | 6.58 | 0.08 |
| LiveJournal | 91-95 | 497 | 8.01 | 0.08 |
| LiveJournal | 101-105 | 400 | 6.85 | 0.06 |
| LiveJournal | 111-115 | 351 | 5.89 | 0.05 |
| LiveJournal | 121-125 | 332 | 7.67 | 0.06 |
| Orkut | 151-155 | 1868 | 7.20 | 0.04 |
| Orkut | 251-255 | 654 | 7.21 | 0.028 |
| Orkut | 351-355 | 335 | 7.33 | 0.020 |

Table 2: Selected datasets and graph size for experiments.
|
| 261 |
+
|
| 262 |
+
§ 4.3 EVALUATION METRICS
|
| 263 |
+
|
| 264 |
+
For evaluation, we use MRR and Precision@K. The algorithm predicts a list of ranked candidates for the incoming query. A filtering operation removes pre-existing triples in the knowledge graph from this list. MRR computes the mean of the reciprocal rank of the correct candidate in the list, and Precision@K evaluates the rate of correct candidates appearing in the top $\mathrm{K}$ predicted candidates. Due to space constraints, we only present the results for MRR. Results for Precision@K can be found at our GitHub${}^{5}$.
|
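A minimal sketch of these two metrics for the case of a single correct candidate per query, assuming the filtering step has already been applied (function names are ours):

```python
def mrr(ranked_lists, targets):
    """Mean reciprocal rank of the correct candidate in each ranked list."""
    rr = [1.0 / (lst.index(t) + 1) if t in lst else 0.0
          for lst, t in zip(ranked_lists, targets)]
    return sum(rr) / len(rr)

def precision_at_k(ranked_lists, targets, k):
    """Fraction of queries whose correct candidate appears in the top k."""
    hits = [t in lst[:k] for lst, t in zip(ranked_lists, targets)]
    return sum(hits) / len(hits)
```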
| 265 |
+
|
| 266 |
+
§ 5 RESULTS & DISCUSSIONS
|
| 267 |
+
|
| 268 |
+
From the results depicted in Figure 3, we observe that the target KG embeddings (TransE, TransH, etc.) almost always outperform the random-walk based source embeddings (node2vec and DeepWalk), except in the case of SimplE and DistMult, where both methods perform poorly. This can also be observed in Figure 4.
|
| 269 |
+
|
| 270 |
+
Finetuned KG embeddings achieve better or equivalent performance compared to target KG embeddings. This is confirmed by the ANOVA test in Figure 4, where there is no significant difference between the MRRs obtained from finetuned and target KG embeddings in most cases. Specifically, translation-based methods such as TransE, TransH, and TransD perform equivalently for finetuned and target embeddings, whereas for SimplE, RESCAL, and DistMult, finetuned embeddings outperform target embeddings as the graph size grows.
|
| 271 |
+
|
| 272 |
+
${}^{4}$ https://github.com/thunlp/OpenNE
|
| 273 |
+
|
| 274 |
+
${}^{5}$ https://github.com/ArchitParnami/GraphProject
|
| 275 |
+
|
| 276 |
+
|
| 277 |
+
|
| 278 |
+
Fig. 3: Performance evaluation of different embeddings on link prediction using MRR (y-axis). Source (green) refers to embeddings from node2vec (left) and DeepWalk (right). Target (brown) refers to KG embeddings from TransE, TransH, TransD, SimplE, RESCAL, or DistMult. For each source and target pair, we evaluate finetuned (orange) embeddings (obtained by initializing target method with source embeddings) and transformed (red) embeddings (obtained by applying transformation model on source embeddings). Results are presented on different datasets of varying graph sizes.
|
| 279 |
+
|
| 280 |
+
|
| 281 |
+
|
| 282 |
+
Fig. 4: ANOVA test of MRR scores from two embedding methods (Method 1 and Method 2). The difference of MRR scores between the two methods is significant when their p-values are $< {0.05}$ (light green) and not significant otherwise (light red). The values in each cell are the difference between the means of MRR scores from two methods (Method 2 - Method 1). The text in bold represents when Method 2 did better than Method 1. Source method refers to node2vec (left) and DeepWalk (right). Target method refers to TransE, TransH, TransD, SimplE, RESCAL, or DistMult in each row.
|
| 283 |
+
|
| 284 |
+
|
| 285 |
+
|
| 286 |
+
Fig. 5: CPU time (left y-axis) vs. graph size (x-axis) and mean MRR (right y-axis) vs. graph size for finetuned embeddings (TransE finetuned from node2vec) and transformed embeddings (from node2vec). As the graph size increases, the time to obtain embeddings from KG methods (TransE) also increases significantly. However, there is no significant increase in time for the transformation (from node2vec) once we have the transformation model. The mean MRR scores of both finetuned and transformed embeddings also drop as the graph size increases; however, they perform equally well for graphs of size <76. Note that finetuning time and transformation time both include the time to obtain node2vec embeddings.
|
| 287 |
+
|
| 288 |
+
Transformed embeddings consistently outperform source embeddings and perform similarly to finetuned embeddings, at least for graphs of size up to 65. The performance drop starts at graph size 71-75 for the transformation to TransD from DeepWalk, and at 81-85 for the transformation to TransE from node2vec. For RESCAL, the transformation works for larger graphs with node2vec and up to size 121-125 with DeepWalk.
|
| 289 |
+
|
| 290 |
+
As the graph size increases (top to bottom), the overall MRR scores decrease for all embeddings, as expected. In Figure 5, we compare the computation time and MRR performance of transformed and finetuned embeddings, where the source method is node2vec and the target method is TransE. The transformed embeddings give similar performance to the finetuned embeddings (without any significant increase in computational cost) up to graphs of size 71-75. Thereafter, the transformed embeddings perform poorly; we attribute this to the poor finetuned embeddings on which the transformation model was trained.
|
| 291 |
+
|
| 292 |
+
§ 6 CONCLUSION
|
| 293 |
+
|
| 294 |
+
In this work, we have demonstrated that random-walk based node embedding (source) methods are computationally efficient but give sub-optimal results on link prediction in social networks, whereas KG based embedding (target and finetuned) methods perform better but are computationally expensive. To generate optimal embeddings quickly for real-time link prediction, we proposed a self-attention based transformation model that converts walk-based embeddings into optimal KG embeddings. The proposed model works well for smaller graphs, but as the complexity of the graph increases, the transformation performance decreases. For future work, our goal is to explore better transformation models for bigger graphs.
|
papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,419 @@
|
| 1 |
+
Anonymous Author(s)
|
| 2 |
+
|
| 3 |
+
Anonymous Affiliation
|
| 4 |
+
|
| 5 |
+
Anonymous Email
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Link prediction (LP) has been recognized as an important task in graph learning with broad practical applications. A typical application of LP is to retrieve the top scoring neighbors for a given source node, as in friend recommendation. Such services require high inference scalability to find the top scoring neighbors among many candidate nodes at low latency. There are two popular decoders that recent LP models mainly use to compute edge scores from node embeddings: the HadamardMLP and Dot Product decoders. After theoretical and empirical analysis, we find that HadamardMLP decoders are generally more effective for LP. However, HadamardMLP lacks scalability for retrieving top scoring neighbors on large graphs, since, to the best of our knowledge, no algorithm exists to retrieve the top scoring neighbors for HadamardMLP decoders in sublinear complexity. To make HadamardMLP scalable, we propose the Flashlight algorithm to accelerate the top scoring neighbor retrieval for HadamardMLP: a sublinear algorithm that progressively applies approximate maximum inner product search (MIPS) techniques with adaptively adjusted query embeddings. Empirical results show that Flashlight improves the inference speed of LP by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Our work paves the way for large-scale LP applications with effective HadamardMLP decoders by greatly accelerating their inference.
|
| 10 |
+
|
| 11 |
+
## 1 Introduction
|
| 12 |
+
|
| 13 |
+
The goal of link prediction (LP) is to predict the missing links in a graph [1]. LP has drawn increasing attention in the past decade due to its broad practical applications [2]. For instance, LP can be used to recommend new friends on social media [3] and to recommend attractive items to customers on e-commerce sites [4], so as to improve the user experience. During inference, these applications demand that LP methods retrieve the top scoring neighbors for a source node at low latency. This is especially challenging on large graphs, because the LP methods need to search many candidate nodes to find the top scoring neighbors.
|
| 14 |
+
|
| 15 |
+
There are two main kinds of architecture followed by recent LP models. The first uses an encoder, e.g., GCN [5], to obtain node-level embeddings and a decoder, e.g., Dot Product, to compute the edge scores between paired nodes [6]. The second crops a subgraph for every edge and computes the edge score from the subgraph directly [7]. The inference speed of the second is much lower than that of the first, so we focus on the first kind of model to achieve fast inference on large graphs. In recent years, extensive research has focused on developing more expressive LP encoders [6, 8]. However, much less work pays attention to the essential impact of the choice of decoder on LP performance. In this work, we theoretically and empirically analyze two popular LP decoders, Dot Product and HadamardMLP (an MLP following the Hadamard Product), and find that the latter is generally more effective.
|
| 16 |
+
|
| 17 |
+
In practical applications, we should consider not only the effectiveness of LP but also its inference efficiency. Many LP applications require fast retrieval of the top scoring neighbors for low-latency services $\left\lbrack {3,9,{10}}\right\rbrack$. For a Dot Product decoder, this retrieval can be approximated efficiently in sublinear time [11]. However, to the best of our knowledge, no such sublinear algorithms exist for the top scoring neighbor retrieval of HadamardMLP decoders. This means
|
| 18 |
+
|
| 19 |
+
# Flashlight: Scalable Link Prediction with Effective Decoders
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
|
| 23 |
+
Figure 1: Two popular LP decoders: The Dot Product (left), equivalent to the element-wise summation following the Hadamard product, and the HadamardMLP decoder (right).
|
| 24 |
+
|
| 25 |
+
that for every source node, we have to iterate over all the nodes in the graph to compute the scores so as to find the top scoring neighbors for HadamardMLP, which is of linear complexity and cannot scale to large graphs.
|
| 26 |
+
|
| 27 |
+
To allow LP applications to enjoy the high effectiveness of HadamardMLP decoders while avoiding their poor inference scalability, we propose a scalable top scoring neighbor search algorithm named Flashlight. Flashlight progressively calls well-developed approximate maximum inner product search (MIPS) techniques for a few iterations. At every iteration, we analyze the retrieved neighbors and adaptively adjust the query embedding so that Flashlight finds the high scoring neighbors missed so far. The Flashlight algorithm has sublinear time complexity for finding top scoring neighbors with HadamardMLP decoders, allowing fast and scalable inference. Empirical results show that Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Overall, our work paves the way for the use of effective LP decoders in practical settings by greatly accelerating their inference.
|
| 28 |
+
|
| 29 |
+
## 2 Revisiting Link Prediction Decoders
|
| 30 |
+
|
| 31 |
+
In this section, we formalize the link prediction (LP) problem and the LP decoders. Typically, an LP model includes an encoder that learns node-level embeddings ${\mathbf{x}}_{i}, i \in \mathcal{V}$, where $\mathcal{V}$ is the set of nodes, and a decoder $\phi : {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \rightarrow \mathbb{R}$ that combines the node-level embeddings of a pair of nodes, ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$, into a single score ${s}_{ij}$. The higher ${s}_{ij}$ is, the more likely the link between nodes $i$ and $j$ exists. State-of-the-art models generally use graph neural networks as encoders [5, 6, 8, 12, 13]. From here on, we mainly focus on the decoder $\phi$.
|
| 32 |
+
|
| 33 |
+
### 2.1 Dot Product Decoder
|
| 34 |
+
|
| 35 |
+
The most common decoder for link prediction is the Dot Product $\left\lbrack {6,8,{10}}\right\rbrack$ :
|
| 36 |
+
|
| 37 |
+
$$
|
| 38 |
+
{s}_{ij} = {\phi }^{\text{dot }}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{:=}} {\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}, \tag{1}
|
| 39 |
+
$$
|
| 40 |
+
|
| 41 |
+
where $\cdot$ denotes the dot product.
|
| 42 |
+
|
| 43 |
+
Training a link prediction model with the Dot Product decoder encourages the embeddings of connected nodes to be close to each other. Intuitively, the score ${s}_{ij}$ can be thought of as a measure of the squared Euclidean distance between the node embeddings ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$, as ${\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}\end{Vmatrix}}^{2} = {\begin{Vmatrix}{\mathbf{x}}_{i}\end{Vmatrix}}^{2} - 2{\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j} + {\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}}^{2}$, if $\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}$ is constant over the neighbors $j \in \mathcal{N}$, e.g., after normalization [14]. Because the node embeddings represent the semantic information of nodes, Dot Product assumes homophily of the graph topology, i.e., that semantically similar nodes are more likely to be connected.
|
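A small numerical check of this relationship between the Dot Product score of Eq. (1) and the squared Euclidean distance under unit-norm embeddings (a sketch, not the paper's code):

```python
import numpy as np

def dot_decoder(xi, xj):
    """Eq. (1): the Dot Product decoder."""
    return float(xi @ xj)

# With unit-norm embeddings, a higher score is exactly a smaller squared
# Euclidean distance: ||xi - xj||^2 = 2 - 2 * (xi . xj).
rng = np.random.default_rng(0)
xi = rng.standard_normal(4); xi /= np.linalg.norm(xi)
xj = rng.standard_normal(4); xj /= np.linalg.norm(xj)
assert np.isclose(np.sum((xi - xj) ** 2), 2.0 - 2.0 * dot_decoder(xi, xj))
```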
| 44 |
+
|
| 45 |
+
### 2.2 HadamardMLP (MLP following Hadamard Product) Decoder
|
| 46 |
+
|
| 47 |
+
Multi-layer perceptrons (MLPs) are known to be universal approximators that can approximate any continuous function on a compact set [15]. An MLP layer can be defined as a function $f : {\mathbb{R}}^{{d}_{\text{in }}} \rightarrow {\mathbb{R}}^{{d}_{\text{out }}}$ :
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
{f}_{\mathbf{W}}\left( \mathbf{x}\right) = \operatorname{ReLU}\left( {\mathbf{W}\mathbf{x}}\right) \tag{2}
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
which is parameterized by the learnable weight $\mathbf{W} \in {\mathbb{R}}^{{d}_{\text{out }} \times {d}_{\text{in }}}$ (the bias, if it exists, can be represented by an additional column in $\mathbf{W}$ and an additional channel with value 1 in the input $\mathbf{x}$). ReLU is the activation function. In an MLP, several layers of $f$ are stacked; e.g., a 3-layer MLP can be formalized as ${f}_{{\mathbf{W}}_{3}}\left( {{f}_{{\mathbf{W}}_{2}}\left( {{f}_{{\mathbf{W}}_{1}}\left( \mathbf{x}\right) }\right) }\right)$.
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
|
| 57 |
+
Figure 2: HadamardMLP achieves higher Mean Reciprocal Rank (MRR, higher is better) than other decoders on the OGBL-CITATION2 [16] dataset with the encoder as GraphSAGE [12] and GCN [5]. More empirical results and the detailed settings are in Sec. 6.3.
|
| 58 |
+
|
| 59 |
+
State-of-the-art models widely use an MLP following the Hadamard Product between the paired nodes as the decoder (short as the HadamardMLP decoder) [6, 8, 10, 16]:
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{:=}} \operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}\left( {{f}_{{\mathbf{W}}_{L - 1}}\left( {\ldots {f}_{{\mathbf{W}}_{1}}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) \ldots }\right) }\right) , \tag{3}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where $\odot$ denotes the Hadamard Product. Fig. 1 illustrates these two decoders: the Dot Product and the HadamardMLP.
|
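Eq. (3) can be sketched as follows, with caller-supplied weights standing in for a trained decoder (the helper name is ours):

```python
import numpy as np

def hadamard_mlp_decoder(xi, xj, hidden_weights, w_out):
    """Eq. (3): an MLP applied to the Hadamard product of two node embeddings.
    `hidden_weights` is a list of layer matrices W_1..W_{L-1}; `w_out` is w_L."""
    h = xi * xj                       # Hadamard product
    for W in hidden_weights:
        h = np.maximum(W @ h, 0.0)    # ReLU layer, Eq. (2)
    return float(w_out @ h)           # scalar edge score s_ij
```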
| 66 |
+
|
| 67 |
+
### 2.3 Other Link Prediction Decoders
|
| 68 |
+
|
| 69 |
+
In principle, any function that takes two vectors as input and outputs a scalar can act as the decoder. For example, there is the bilinear dot product decoder (short as the Bilinear decoder) [6]:
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
{s}_{ij} = {\mathbf{h}}_{i}^{T}\mathbf{W}{\mathbf{h}}_{j}, \tag{4}
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
where $\mathbf{W}$ is the learnable weight, and the MLP following concatenation [6, 10] (short as the ConcatMLP decoder):
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
{s}_{ij} = \operatorname{MLP}\left( {{\mathbf{h}}_{i}\parallel {\mathbf{h}}_{j}}\right) \tag{5}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
These two decoders are used much less often than Dot Product and HadamardMLP in state-of-the-art LP models, possibly due to their lower effectiveness [6, 8, 10, 16].
|
| 82 |
+
|
| 83 |
+
### 2.4 HadamardMLP is Generally More Effective than Other Decoders
|
| 84 |
+
|
| 85 |
+
Dot Product demands homophily of the graph data to effectively infer links between nodes. In contrast, thanks to its universal approximation capability, an MLP can approximate any continuous function and thus does not demand homophily for effective LP. This gap in expressiveness accounts for the performance difference between these two decoders on many datasets (see Sec. 6.3). We additionally show in Appendix A that a HadamardMLP can easily learn the Dot Product, which also partially accounts for the better effectiveness of HadamardMLP decoders over Dot Product. Existing work also finds that the effectiveness of Bilinear and ConcatMLP is generally worse than that of HadamardMLP or Dot Product [6, 8, 10, 16]. We confirm these findings more rigorously in the empirical results in Fig. 2 and more completely in Sec. 6.3.
|
| 86 |
+
|
| 87 |
+
## 3 Scalability of Link Prediction Decoders
|
| 88 |
+
|
| 89 |
+
Most academic studies focus on training runtime when discussing scalability. However, in industrial applications, the inference speed is often more important. The inference of many LP applications needs to retrieve the top scoring neighbors of a given source node, e.g., recommending friends to a user in friend recommendation. Given a source node, if there are $n$ nodes in the graph, the inference time complexity is $\mathcal{O}\left( n\right)$ if the decoder needs to iterate over all $n$ nodes to compute the edge scores. For large-scale applications, $n$ is typically in the range of millions, or even larger. The empirical results show that the inference time for finding the top scoring neighbors of a source node is longer than one second for HadamardMLP on the OGBL-CITATION2 dataset of nearly three million nodes (see Sec. 6.5).
|
| 90 |
+
|
| 91 |
+
For a Dot Product decoder, the problem of finding the top scoring neighbors can be approximated efficiently. This is a well-studied problem, known as approximate maximum inner product search (MIPS) [17, 18] (see Sec. 5.2 for a comprehensive literature review). MIPS techniques allow Dot Product inference to be completed in a few milliseconds, even with millions of neighbors. Some existing work tries to extend MIPS to ConcatMLP [19, 20]. These methods hold strict assumptions on the models' training and are not directly applicable to HadamardMLP. To the best of our knowledge, no such sublinear techniques exist for top scoring neighbor retrieval with HadamardMLP [10], which is a complex nonlinear function.
|
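For contrast with the approximate MIPS indexes cited above, a brute-force exact top-$k$ inner product search looks like this; the sublinear methods replace this $\mathcal{O}(n)$ scan on large graphs (a sketch, not the paper's implementation):

```python
import numpy as np

def mips_topk(q, X, k):
    """Exact top-k maximum inner product search over node embeddings X
    (n x d) for a query embedding q, by a full linear scan."""
    scores = X @ q
    top = np.argpartition(-scores, k - 1)[:k]   # unordered top-k indices
    return top[np.argsort(-scores[top])]        # ordered by descending score
```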
| 92 |
+
|
| 93 |
+
To summarize, the HadamardMLP decoder is not scalable for the real time LP services on large graphs, while the Dot Product decoder allows fast retrieval using the well established MIPS techniques.
|
| 94 |
+
|
| 95 |
+
## 4 Flashlight: Scalable Link Prediction with Effective Decoders
|
| 96 |
+
|
| 97 |
+
Sec. 2 has shown that the HadamardMLP decoder enjoys higher effectiveness than the Dot Product decoder, which supports the superior performance of HadamardMLP on many LP benchmarks. On the other hand, Sec. 3 has shown that the HadamardMLP is not scalable for real time LP applications on large graphs, while Dot Product supports the fast inference using the well-established MIPS techniques. In this section, we aim to devise fast inference algorithms for HadamardMLP to enable scalable LP with effective decoders.
|
| 98 |
+
|
| 99 |
+
We try to exploit the advances in the well-developed MIPS techniques to accelerate the inference of HadamardMLP. Specifically, we divide the top scoring retrievals for HadamardMLP predictors into a sequence of MIPS. Our algorithm works in a progressive manner. The query embedding in every search is adaptively adjusted to find the high scoring neighbors missed in the last search.
|
| 100 |
+
|
| 101 |
+
The challenge of retrieving the highest scoring neighbors for HadamardMLP is rooted in not knowing which neurons are activated: if we knew which neurons are activated, the nonlinear HadamardMLP would degrade to a linear model. On the $l$-th MLP layer, we define the mask matrix ${\mathbf{M}}_{\mathcal{A}, l} \in {\mathbb{R}}^{{d}_{l} \times {d}_{l}}$ to represent the set of activated neurons $\mathcal{A}$ as
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
{M}_{ij} = \left\{ \begin{array}{ll} 1, & \text{ if }i = j\text{ and }i \in \mathcal{A} \\ 0, & \text{ otherwise } \end{array}\right. \tag{6}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
With ${\mathbf{M}}_{\mathcal{A}, l}$ , we reformulate the HadamardMLP decoder as:
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}{\mathbf{W}}_{L - 1}\ldots {\mathbf{M}}_{\mathcal{A},1}{\mathbf{W}}_{1}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
= \left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right) \cdot {\mathbf{x}}_{j} \tag{7}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
Because the vector ${\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}^{T}{\mathbf{w}}_{L}$ is determined by the weights of MLP and the activated neurons $\mathcal{A}$ , we term it as ${\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right)$ :
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
{\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \mathrel{\text{:=}} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}^{T}{\mathbf{w}}_{L} \tag{8}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
Given the source node $i$ , because the score ${s}_{ij}$ is obtained by the dot product between $\left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}^{T}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right)$ and the neighbor embedding ${\mathbf{x}}_{j}$ , we term the former vector as the query embedding $\mathbf{q}$ :
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
\mathbf{q} \mathrel{\text{:=}} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} = {\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \odot {\mathbf{x}}_{i} \tag{9}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
In this way, we can reformulate the output of decoder ${\phi }^{MLP}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{\mathbf{j}}}\right)$ as
|
| 130 |
+
|
| 131 |
+
$$
|
| 132 |
+
{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = \mathbf{q} \cdot {\mathbf{x}}_{j}. \tag{10}
|
| 133 |
+
$$
|
| 134 |
+
|
| 135 |
+
In practice, we can use $\mathbf{q}$ as the query embedding in MIPS to retrieve the neighbors with the highest inner products, which correspond to the highest scores. The issue is how to obtain the activated neurons $\mathcal{A}$, and hence the query embedding $\mathbf{q}$, since different node pairs activate different neurons. Initially, without knowing which neurons are activated, we assume all neurons are activated, i.e., the initial query embedding is:
|
| 136 |
+
|
| 137 |
+
$$
|
| 138 |
+
\mathbf{q}\left\lbrack 1\right\rbrack = \left( {\mathop{\prod }\limits_{{i = 1}}^{{L - 1}}{\mathbf{W}}_{i}^{T}}\right) {\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} \tag{11}
|
| 139 |
+
$$
Algorithm 1 Flashlight: progressively "illuminates" the semantic space to retrieve the high scoring neighbors for the LP HadamardMLP decoders.

---

Input: A trained HadamardMLP decoder ${\phi }^{\text{MLP}}$ that outputs the logit ${s}_{ij}$ for the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$. The set of nodes $\mathcal{V}$. The node embedding set $\mathcal{X} = \left\{ {{\mathbf{x}}_{i} \mid i \in \mathcal{V}}\right\}$. A source node $i$. The number of iterations $T$. The number of neighbors to retrieve at every iteration: $\mathbf{N} = \left\lbrack {{N}_{1},{N}_{2},\ldots ,{N}_{T}}\right\rbrack$.

Output: The recommended neighbors $\mathcal{N}$ for the source node $i$.

1: Initialize the set of retrieved recommended neighbors $\mathcal{N} \leftarrow \varnothing$.

2: Initialize the set of activated neurons $\mathcal{A}\left\lbrack 0\right\rbrack$ as all the neurons in the MLP.

3: for $t \leftarrow 1$ to $T$ do

4: Calculate the query embedding $\mathbf{q}\left\lbrack t\right\rbrack \leftarrow {\mathbf{x}}_{i} \odot {\operatorname{MLP}}_{\mathcal{A}\left\lbrack {t - 1}\right\rbrack }\left( \cdot \right)$.

5: $\mathcal{N}\left\lbrack t\right\rbrack \leftarrow$ the ${N}_{t}$ neighbors in $\mathcal{X}$ that maximize the inner product with $\mathbf{q}\left\lbrack t\right\rbrack$.

6: $\mathcal{X} \leftarrow \mathcal{X} \smallsetminus \left\{ {{\mathbf{x}}_{j} \mid j \in \mathcal{N}\left\lbrack t\right\rbrack }\right\}$.

7: ${j}^{ \star }\left\lbrack t\right\rbrack \leftarrow \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)$.

8: $\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right)$.

9: $\mathcal{N} \leftarrow \mathcal{N} \cup \mathcal{N}\left\lbrack t\right\rbrack$.

10: return $\mathcal{N}$

---
This initial design reflects the general trend of increasing edge scores on LP without restricting which neurons are activated. We use $\mathbf{q}\left\lbrack 1\right\rbrack$ as the query embedding to retrieve the highest inner product neighbors $\mathcal{N}\left\lbrack 1\right\rbrack$ in the first iteration. Then, given the neighbors $\mathcal{N}\left\lbrack t\right\rbrack$ retrieved in the $t$th iteration, we analyze $\mathcal{N}\left\lbrack t\right\rbrack$ and adaptively adjust the query embedding $\mathbf{q}\left\lbrack {t + 1}\right\rbrack$ used in the next iteration to find more high scoring neighbors. Specifically, we feed $\mathcal{N}\left\lbrack t\right\rbrack$ forward through the MLP. We define the function $A\left( {\cdot , \cdot }\right)$ that returns the set of activated neurons of an MLP (the first input) for the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$ (the second input). We can then use it to extract $\mathcal{A}$ as:

$$
\mathcal{A} = A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{12}
$$

Then, we obtain the set of activated neurons of the highest scoring neighbor at the $t$th iteration as:

$$
\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right) \text{, where }{j}^{ \star }\left\lbrack t\right\rbrack = \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{13}
$$

This implies that neighbors activating $\mathcal{A}\left\lbrack t\right\rbrack$ tend to obtain high edge scores. Hence, by taking $\mathcal{A}\left\lbrack t\right\rbrack$ as the set of neurons assumed to be activated in the next query, we can find more high scoring neighbors. We repeat the above iterations until enough neighbors are retrieved. The algorithm is summarized in Alg. 1.
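The iterations above can be sketched in a few lines of Python. This is only an illustration of Alg. 1: exact search stands in for the approximate MIPS index, a random 2-layer decoder stands in for a trained model, and all names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, h = 500, 8, 16              # candidates, embedding dim, hidden dim

W1 = rng.normal(size=(h, d))      # stand-in for trained decoder weights
w2 = rng.normal(size=h)
X = rng.normal(size=(N, d))       # embeddings of all candidate neighbors
x_i = rng.normal(size=d)          # source node embedding

def decoder(z):                   # HadamardMLP on z = x_i * x_j
    return w2 @ np.maximum(W1 @ z, 0.0)

def activated_mask(z):            # the function A(MLP, z): which ReLUs fire
    return (W1 @ z > 0).astype(float)

def query(mask):                  # q = MLP_A(.) * x_i, Eq. (9)
    return (W1.T @ (mask * w2)) * x_i

T, n_per_iter = 3, 50
candidates = set(range(N))
retrieved = []
mask = np.ones(h)                 # A[0]: assume all neurons activated, Eq. (11)
for t in range(T):
    q = query(mask)
    # Exact top-N_t by inner product; a real system would call a MIPS index here
    idx = sorted(candidates, key=lambda j: -(q @ X[j]))[:n_per_iter]
    retrieved.extend(idx)
    candidates -= set(idx)
    # Re-score the batch exactly and refresh the mask from the best neighbor
    j_star = max(idx, key=lambda j: decoder(x_i * X[j]))
    mask = activated_mask(x_i * X[j_star])        # A[t], Eq. (13)

# Final exact re-ranking of the retrieved pool
top = sorted(retrieved, key=lambda j: -decoder(x_i * X[j]))[:10]
```

Note that only the $T \cdot N_t$ retrieved candidates are ever scored by the full decoder; the rest of the graph is touched only through the (approximate) inner product search.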
We name our algorithm Flashlight because it works like a flashlight that progressively "illuminates" the semantic space to find the high scoring neighbors. The query embeddings are like the light beams sent from the flashlight, and our process of adjusting the query embeddings is like progressively re-aiming the "flashlight" based on the "objects" found in the last "illumination".
In the experiments, we find that our Flashlight algorithm effectively finds the top scoring neighbors among massive candidate sets. For example, in Fig. 3, Flashlight finds the top 100 scoring neighbors out of nearly three million candidates by retrieving only 200 neighbors on the large OGBL-CITATION2 graph dataset for the HadamardMLP decoders.
Complexity Analysis. Using MLP decoders to compute the LP probabilities of all the neighbors has complexity $\mathcal{O}\left( N\right)$, where $N$ is the number of nodes in the whole graph. Finding the top scoring neighbors from the exact probabilities of all the neighbors also has linear complexity $\mathcal{O}\left( N\right)$. Overall, using MLP decoders to find the top scoring neighbors takes $\mathcal{O}\left( N\right)$ time. In contrast, our Flashlight calls the MIPS routine a constant number of times, invariant to the graph size, which yields the same sublinear complexity as MIPS. In conclusion, our Flashlight improves the scalability and applicability of HadamardMLP decoders by reducing their inference time complexity from linear to sublinear.
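For reference, the linear-time baseline that Flashlight replaces scores every candidate with the decoder and then selects the top $k$. A minimal sketch of that baseline, with hypothetical random weights and sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, h, k = 1000, 8, 16, 10
W1, w2 = rng.normal(size=(h, d)), rng.normal(size=h)   # stand-in decoder
X, x_i = rng.normal(size=(N, d)), rng.normal(size=d)   # all candidates, source

# O(N) baseline: one decoder forward pass per candidate neighbor
scores = np.maximum((x_i * X) @ W1.T, 0.0) @ w2        # shape (N,)

# Top-k selection in O(N) via argpartition (avoids a full O(N log N) sort)
top_k = np.argpartition(-scores, k)[:k]
top_k = top_k[np.argsort(-scores[top_k])]
```

Both the scoring and the selection scale linearly with $N$, which is exactly the cost that grows prohibitive on graphs with millions of nodes.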
Table 1: Statistics of datasets.

<table><tr><td>Dataset</td><td>OGBL-DDI</td><td>OGBL-COLLAB</td><td>OGBL-PPA</td><td>OGBL-CITATION2</td></tr><tr><td>#Nodes</td><td>4,267</td><td>235,868</td><td>576,289</td><td>2,927,963</td></tr><tr><td>#Edges</td><td>1,334,889</td><td>1,285,465</td><td>30,326,273</td><td>30,561,187</td></tr></table>
## 5 Related Work

### 5.1 Link Prediction Models
Existing LP models can be categorized into three families: heuristic feature based [3, 9, 21-23], latent embedding based [12, 24-28], and neural network based ones. The neural network based link prediction models have mainly been developed in recent years and explore non-linear deep structural features with neural layers. Variational graph auto-encoders [13] predict links by encoding the graph with graph convolutional layers [5]. Two other state-of-the-art neural models, WLNM [29] and SEAL [30], use a graph labeling algorithm to transform the union neighborhood of two nodes (the enclosing subgraph) into a meaningful matrix and employ a convolutional neural layer or the novel graph neural layer DGCNN [31] for encoding. More recently, $\left\lbrack {6,8}\right\rbrack$ summarized the architectures of LP models and formally defined the encoders and decoders.

Different from the previous work, we focus on analyzing the effectiveness of different LP decoders and improving the scalability of the effective ones. In practice, we find that the Hadamard decoders exhibit superior effectiveness but poor inference scalability. Our work significantly accelerates the inference of HadamardMLP decoders to make effective LP scalable.
### 5.2 Maximum Inner Product Search
Finding the top scoring neighbors for the Dot Product decoder in sublinear time is a well-studied research problem, known as approximate maximum inner product search (MIPS). There are several approaches to MIPS: sampling based [11, 32, 33], LSH based [34-37], graph based [38-40], and quantization based [17, 18]. MIPS is a fundamental building block in various application domains [41-46], such as information retrieval [47, 48], pattern recognition [49, 50], data mining [51, 52], machine learning [53, 54], and recommendation systems [55, 56].

With the explosive growth of dataset scale and the inevitable curse of dimensionality, MIPS is essential for offering scalable services. However, the HadamardMLP decoders are nonlinear, and no well-studied sublinear-complexity algorithm exists to find their top scoring neighbors [10]. In this work, we utilize the well-studied approximate MIPS techniques with adaptively adjusted query embeddings to find the top scoring neighbors for the MLP decoders in a progressive manner. Our method supports plug-and-play use during inference and significantly accelerates LP inference with the effective MLP decoders.
## 6 Experiments
In this section, we first compare the effectiveness of different LP decoders. We find that the HadamardMLP decoders generally perform better than other decoders. Then, we implement our Flashlight algorithm with LP models to show that Flashlight effectively retrieves the top scoring neighbors for the HadamardMLP decoders. As a result, the inference efficiency and scalability of HadamardMLP decoders are significantly improved by our work.
### 6.1 Datasets
We evaluate link prediction on the Open Graph Benchmark (OGB) datasets [57]. We use four OGB datasets of different graph types: OGBL-DDI, OGBL-COLLAB, OGBL-CITATION2, and OGBL-PPA. OGBL-DDI is a homogeneous, unweighted, undirected graph representing the drug-drug interaction network; each node represents a drug, and edges represent interactions between drugs. OGBL-COLLAB is an undirected graph representing a subset of the collaboration network between authors indexed by MAG; each node represents an author, edges indicate collaboration between authors, and all nodes come with 128-dimensional features. OGBL-CITATION2 is a directed graph representing the citation network between a subset of papers extracted from MAG; each node is a paper with 128-dimensional word2vec features. OGBL-PPA is an undirected, unweighted graph; nodes represent proteins from 58 different species, and edges indicate biologically meaningful associations between proteins. The statistics of these datasets are presented in Table 1.
Table 2: The test effectiveness comparison of LP decoders on four OGB datasets (DDI, COLLAB, PPA, and CITATION2) [16]. We report the results of the standard metrics averaged over 10 runs following the existing work $\left\lbrack {6,{16}}\right\rbrack$. HadamardMLP is more effective than the other decoders. Flashlight effectively retrieves the top scoring neighbors for HadamardMLP and keeps its exact outputs.

<table><tr><td>Decoder</td><td>Dot Product</td><td>Bilinear</td><td>ConcatMLP</td><td>HadamardMLP</td><td>HadamardMLP w/ Flashlight</td></tr><tr><td colspan="6">OGBL-DDI</td></tr><tr><td>GCN [5]</td><td>${13.8} \pm {1.8}$</td><td>${16.1} \pm {1.2}$</td><td>${12.9} \pm {1.4}$</td><td>${37.1} \pm {5.1}$</td><td>${37.1} \pm {5.1}$</td></tr><tr><td>GraphSAGE [12]</td><td>${36.5} \pm {2.6}$</td><td>${39.4} \pm {1.7}$</td><td>${34.2} \pm {1.9}$</td><td>$\mathbf{{53.9} \pm {4.7}}$</td><td>$\mathbf{{53.9} \pm {4.7}}$</td></tr><tr><td>Node2Vec [27]</td><td>${11.6} \pm {1.9}$</td><td>${13.8} \pm {1.6}$</td><td>${10.8} \pm {1.7}$</td><td>${23.3} \pm {2.1}$</td><td>${23.3} \pm {2.1}$</td></tr><tr><td colspan="6">OGBL-COLLAB</td></tr><tr><td>GCN [5]</td><td>${42.9} \pm {0.7}$</td><td>${43.2} \pm {0.9}$</td><td>${42.3} \pm {1.0}$</td><td>${44.8} \pm {1.1}$</td><td>${44.8} \pm {1.1}$</td></tr><tr><td>GraphSAGE [12]</td><td>${37.3} \pm {0.9}$</td><td>${41.5} \pm {0.8}$</td><td>${37.0} \pm {0.7}$</td><td>${48.1} \pm {0.8}$</td><td>$\mathbf{{48.1} \pm {0.8}}$</td></tr><tr><td>Node2Vec [27]</td><td>${27.7} \pm {1.1}$</td><td>${31.5} \pm {1.0}$</td><td>${27.2} \pm {0.8}$</td><td>$\mathbf{{48.9} \pm {0.5}}$</td><td>${48.9} \pm {0.5}$</td></tr><tr><td colspan="6">OGBL-PPA</td></tr><tr><td>GCN [5]</td><td>${5.1} \pm {0.4}$</td><td>${5.8} \pm {0.5}$</td><td>${6.2} \pm {0.6}$</td><td>${18.7} \pm {1.3}$</td><td>$\mathbf{{18.7} \pm {1.3}}$</td></tr><tr><td>GraphSAGE [12]</td><td>${3.2} \pm {0.3}$</td><td>${6.5} \pm {0.7}$</td><td>${5.8} \pm {0.4}$</td><td>${16.6} \pm {2.4}$</td><td>${16.6} \pm {2.4}$</td></tr><tr><td>Node2Vec [27]</td><td>${4.2} \pm {0.5}$</td><td>${7.8} \pm {0.6}$</td><td>${8.3} \pm {0.4}$</td><td>$\mathbf{{22.3} \pm {0.8}}$</td><td>$\mathbf{{22.3} \pm {0.8}}$</td></tr><tr><td colspan="6">OGBL-CITATION2</td></tr><tr><td>GCN [5]</td><td>${65.3} \pm {0.4}$</td><td>${69.0} \pm {0.8}$</td><td>${62.7} \pm {0.3}$</td><td>$\mathbf{{84.7} \pm {0.2}}$</td><td>$\mathbf{{84.7} \pm {0.2}}$</td></tr><tr><td>GraphSAGE [12]</td><td>${62.2} \pm {0.7}$</td><td>${65.4} \pm {0.9}$</td><td>${60.8} \pm {0.6}$</td><td>$\mathbf{{80.4} \pm {0.1}}$</td><td>${80.4} \pm {0.1}$</td></tr><tr><td>Node2Vec [27]</td><td>${52.7} \pm {0.8}$</td><td>${54.1} \pm {0.6}$</td><td>${51.4} \pm {0.5}$</td><td>${61.4} \pm {0.1}$</td><td>$\mathbf{{61.4} \pm {0.1}}$</td></tr></table>
### 6.2 Hyper-parameter Settings
For all experiments in this section, we report the average and standard deviation over ten runs with different random seeds. The results are reported on the best model selected using validation data. We set the hyper-parameters of the used techniques and the considered baseline methods, e.g., the batch size, the number of hidden units, the optimizer, and the learning rate, as suggested by their authors. We use the recent MIPS method ScaNN [18] in the implementation of our Flashlight. For the hyper-parameters of our Flashlight, we have found in the experiments that the performance of Flashlight is robust to changes of the hyper-parameters over a broad range. Therefore, by default we simply set the number of Flashlight iterations to $T = 3$ and the number of retrieved neighbors to a constant 200 per iteration. We run all experiments on a machine with 80 Intel(R) Xeon(R) E5-2698 v4 @ 2.20GHz CPUs and a single NVIDIA V100 GPU with 16GB RAM.
### 6.3 Effectiveness of Link Prediction Decoders
We follow the standard benchmark settings of the OGB datasets to evaluate the effectiveness of LP with different decoders. The benchmark setting of OGBL-DDI is to predict drug-drug interactions given information on already known drug-drug interactions. The performance is evaluated by Hits@20: each true drug interaction is ranked among a set of approximately 100,000 randomly sampled negative drug interactions, and we count the ratio of positive edges that are ranked at the 20th place or above. The task of OGBL-COLLAB is to predict future author collaborations given the past collaborations. The evaluation metric is Hits@50, where each true collaboration is ranked among a set of 100,000 randomly sampled negative collaborations. The task of OGBL-PPA is to predict new association edges given the training edges. The evaluation metric is Hits@100, where each positive edge is ranked among 3,000,000 randomly sampled negative edges. The task of OGBL-CITATION2 is to predict missing citations given existing citations. The evaluation metric is the Mean Reciprocal Rank (MRR), where the reciprocal rank of the true reference among 1,000 sampled negative candidates is calculated for each source node, and the average is then taken over all source nodes.
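Both metric families reduce to ranking one positive score against its sampled negatives. A minimal sketch (the scores are made-up numbers for illustration):

```python
def hits_at_k(pos_score, neg_scores, k):
    # Rank the positive edge among the sampled negatives (1 = best);
    # it counts as a "hit" if it lands at place k or above.
    rank = 1 + sum(s >= pos_score for s in neg_scores)
    return rank <= k

def mean_reciprocal_rank(pos_scores, neg_score_lists):
    # MRR: average of 1/rank of each true edge among its sampled negatives.
    ranks = [1 + sum(s >= p for s in negs)
             for p, negs in zip(pos_scores, neg_score_lists)]
    return sum(1.0 / r for r in ranks) / len(ranks)

negs = [0.1, 0.5, 0.9, 0.3]
assert hits_at_k(0.8, negs, k=2)            # ranked 2nd among 5 candidates
assert not hits_at_k(0.2, negs, k=2)        # ranked 4th
assert abs(mean_reciprocal_rank([0.8], [negs]) - 0.5) < 1e-9
```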
We implement the decoders introduced in Sec. 2, including the Dot Product, Bilinear, ConcatMLP, and HadamardMLP decoders, on top of the LP encoders GCN [5], GraphSAGE [12], and Node2Vec [27] to compare the effects of different decoders on LP effectiveness. We present the results on the OGBL-DDI, OGBL-COLLAB, OGBL-PPA, and OGBL-CITATION2 datasets in Table 2. We observe that the HadamardMLP decoder outperforms the other decoders for all encoders and datasets. Our Flashlight algorithm effectively retrieves the top scoring neighbors for the HadamardMLP decoder while keeping the exact LP probabilities that HadamardMLP outputs, which leads to identical results for the HadamardMLP decoder with and without Flashlight.
Note that the benchmark settings of these datasets sample only a small portion of negative edges for the test evaluation, which is not challenging enough to evaluate the scalability of LP decoders at retrieving the top scoring neighbors from massive candidate sets in practice.
### 6.4 The Flashlight Algorithm Effectively Finds the Top Scoring Neighbors
To evaluate the effectiveness of our Flashlight at retrieving the top scoring neighbors for the HadamardMLP decoder, we propose a more challenging test setting for the OGB LP datasets. Given a source node, we take its top 100 scoring neighbors under the HadamardMLP decoder as the ground truth for retrieval. The task is to retrieve $k$ neighbors for a source node that match the ground-truth neighbors as closely as possible. We formally define the metric Recall $@k$ as the fraction of the ground-truth neighbors that appear among the top $k$ neighbors retrieved by a method.
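The Recall $@k$ metric is a simple set intersection; the following sketch uses hypothetical neighbor ids purely for illustration:

```python
def recall_at_k(ground_truth, retrieved, k):
    # Fraction of ground-truth neighbors appearing in the top-k retrieved list.
    return len(set(ground_truth) & set(retrieved[:k])) / len(ground_truth)

truth = [3, 7, 11, 42]           # hypothetical top scoring neighbor ids
ranked = [7, 3, 99, 42, 5, 11]   # a method's retrieved neighbors, best first
assert recall_at_k(truth, ranked, k=3) == 0.5   # {3, 7} of 4 found
assert recall_at_k(truth, ranked, k=6) == 1.0
```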
We sample 1000 nodes as source nodes from the OGBL-DDI and OGBL-CITATION2 datasets respectively for evaluation. We evaluate the effectiveness of our Flashlight algorithm by checking whether it can find the top scoring neighbors for every source node. We set the number of Flashlight iterations to 10 and the number of retrieved neighbors per iteration to 50. We present Recall@$k$ for $k$ from 1 to 500 averaged over all the source nodes in Fig. 3. The "oracle" curve represents the performance of an optimal searcher, whose retrieved top $k$ neighbors are exactly the top $k$ scoring neighbors of HadamardMLP.

Figure 3: Recall $@k$ is the fraction of the 100 top scoring neighbors of HadamardMLP ranked in the top $k$ neighbors retrieved by Flashlight. We report Recall $@k$ averaged over all the source nodes on OGBL-CITATION2 and OGBL-DDI.
When $k = {100}$, the 100 neighbors retrieved by our Flashlight cover more than ${80}\%$ of the ground-truth neighbors. When $k \geq {200}$, the recall reaches ${100}\%$. As a comparison, if we randomly sample the candidate neighbors for retrieval, Recall $@k$ grows linearly with $k$ and is less than $1 \times {10}^{-4}$ for $k = {100}$ on the OGBL-CITATION2 dataset. The curves of Flashlight are close to the optimal curve of the "oracle". These results demonstrate the high effectiveness of our Flashlight at finding the top scoring neighbors.
On both the large OGBL-CITATION2 dataset and the smaller OGBL-DDI dataset, our Flashlight exhibits similar Recall $@k$ performance across different numbers $k$ of retrieved neighbors. This implies that our Flashlight can accurately find the top scoring neighbors for both small and large graphs.
### 6.5 Inference Efficiency of Link Prediction with Our Flashlight Algorithm
We use throughput to evaluate the inference speed of neighbor retrieval of different methods. The throughput is defined as the number of source nodes that a method can serve per second when retrieving the top 100 scoring neighbors. Besides the LP models that follow the encoder-decoder architecture, e.g., GraphSAGE [12], GCN [5], and PLNLP [6], there are some subgraph based LP models, e.g., SUREL [7] and SEAL [58]. The common issue of the subgraph based models is their poor efficiency: they have to crop a separate subgraph for every node pair to calculate the LP probability of that pair. Hence, the node embeddings cannot be shared across the LP calculations for different node pairs, which leads to a much lower inference speed for the subgraph based LP models than for the encoder-decoder LP models. We compare the inference efficiency of different methods on the OGBL-CITATION2 dataset in Fig. 4, where we present the inference speed of different methods when achieving ${100}\%$ Recall@100 for the top 100 scoring neighbors.
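A throughput measurement of this kind can be sketched as below; the `serve_one` retrieval server is a hypothetical brute-force stand-in (random 8-dimensional embeddings, 10,000 candidates), not the paper's actual serving stack:

```python
import random
import time

def measure_throughput(serve_one, sources):
    # Throughput = number of source nodes served per second.
    start = time.perf_counter()
    for s in sources:
        serve_one(s)
    elapsed = time.perf_counter() - start
    return len(sources) / elapsed

# Hypothetical retrieval stand-in: score 10,000 candidates for a source node
# with a dot product and keep the top 100 (a brute-force O(N) server).
random.seed(0)
emb = [[random.random() for _ in range(8)] for _ in range(10_000)]

def serve_one(src):
    scores = [sum(a * b for a, b in zip(emb[src], e)) for e in emb]
    return sorted(range(len(scores)), key=scores.__getitem__)[-100:]

throughput = measure_throughput(serve_one, sources=range(5))
```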
We observe that our Flashlight accelerates the inference of the LP models GraphSAGE [12], GCN [5], and PLNLP [6] with HadamardMLP decoders by more than 100 times. This gap will be even larger for datasets of larger scale, because inference with our Flashlight has sublinear time complexity while the plain HadamardMLP decoders have linear complexity. Note that the y-axis is in logarithmic scale. The subgraph based methods SUREL [7] and SEAL [58] have inference throughputs lower than $1 \times {10}^{-2}$ and $1 \times {10}^{-3}$ respectively, which is not applicable to practical services that require latencies on the order of milliseconds.

Figure 4: The inference speed of different LP methods on the OGBL-CITATION2 dataset. The y-axis (throughput) is in logarithmic scale.

Figure 5: The tradeoff between the inference speed (y-axis) and the effectiveness of finding the top scoring neighbors (x-axis) on the OGBL-CITATION2 (left) and OGBL-PPA (right) datasets.
Taking a further step, we comprehensively evaluate the tradeoff between the inference speed and the effectiveness of finding the top scoring neighbors. Taking GraphSAGE as the encoder, we present the tradeoff curves between throughput and Recall@100 on the OGBL-CITATION2 and OGBL-PPA datasets in Fig. 5. We take the HadamardMLP decoder with random sampling as the baseline for comparison with our Flashlight. For example, on the OGBL-CITATION2 dataset, when achieving a Recall@100 of more than 80%, HadamardMLP with our Flashlight can serve more than 200 source nodes per second, while HadamardMLP with random sampling can serve fewer than one node per second. Overall, our Flashlight achieves a much better tradeoff between inference speed and effectiveness than HadamardMLP with random sampling.
## 7 Conclusion
Our theoretical and empirical analysis suggests that the HadamardMLP decoders are a better default choice than the Dot Product in terms of LP effectiveness. However, because there is no well-developed sublinear-complexity algorithm for finding their top scoring neighbors, the HadamardMLP decoders are not scalable and cannot support fast inference on large graphs. To resolve this issue, we propose the Flashlight algorithm to accelerate the inference of LP models with HadamardMLP decoders. Flashlight progressively applies the well-studied MIPS techniques for a few iterations, adaptively adjusting the query embeddings at every iteration to find more high scoring neighbors. Empirical results show that our Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 graph. Overall, our work paves the way for the use of strong LP decoders in practical settings by greatly accelerating their inference.
## References
[1] Linyuan Lü and Tao Zhou. Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications, 390(6):1150-1170, 2011. 1

[2] Víctor Martínez, Fernando Berzal, and Juan-Carlos Cubero. A survey of link prediction in complex networks. ACM Computing Surveys (CSUR), 49(4):1-33, 2016. 1

[3] Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social Networks, 25(3):211-230, 2003. 1, 6

[4] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37, 2009. 1

[5] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 1, 2, 3, 6, 7, 8

[6] Zhitao Wang, Yong Zhou, Litao Hong, Yuanhang Zou, and Hanjing Su. Pairwise learning for neural link prediction. arXiv preprint arXiv:2112.02936, 2021. 1, 2, 3, 6, 7, 8, 13

[7] Haoteng Yin, Muhan Zhang, Yanbang Wang, Jianguo Wang, and Pan Li. Algorithm and system co-design for efficient subgraph-based graph representation learning. arXiv preprint arXiv:2202.13538, 2022. 1, 8, 9

[8] Chuxiong Sun and Guoshi Wu. Adaptive graph diffusion networks with hop-wise attention. arXiv preprint arXiv:2012.15024, 2020. 1, 2, 3, 6, 13

[9] Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. The European Physical Journal B, 71(4):623-630, 2009. 1, 6

[10] Steffen Rendle, Walid Krichene, Li Zhang, and John Anderson. Neural collaborative filtering vs. matrix factorization revisited. In Fourteenth ACM Conference on Recommender Systems, pages 240-248, 2020. 1, 2, 3, 4, 6, 13, 14

[11] Rui Liu, Tianyi Wu, and Barzan Mozafari. A bandit approach to maximum inner product search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4376-4383, 2019. 1, 6

[12] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017. 2, 3, 6, 7, 8

[13] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016. 2, 6

[14] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 974-983, 2018. 2

[15] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989. 2

[16] Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. OGB-LSC: A large-scale challenge for machine learning on graphs. arXiv preprint arXiv:2103.09430, 2021. 3, 7

[17] Xinyan Dai, Xiao Yan, Kelvin KW Ng, Jiu Liu, and James Cheng. Norm-explicit quantization: Improving vector quantization for maximum inner product search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 51-58, 2020. 4, 6

[18] Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. Accelerating large-scale inference with anisotropic vector quantization. In International Conference on Machine Learning, pages 3887-3896. PMLR, 2020. 4, 6, 7

[19] Shulong Tan, Zhixin Zhou, Zhaozhuo Xu, and Ping Li. Fast item ranking under neural network based measures. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 591-599, 2020. 4

[20] Rihan Chen, Bin Liu, Han Zhu, Yaoxuan Wang, Qi Li, Buting Ma, Qingbo Hua, Jun Jiang, Yunlong Xu, Hongbo Deng, et al. Approximate nearest neighbor search under neural similarity metric for large-scale recommendation. arXiv preprint arXiv:2202.10226, 2022. 4

[21] Gobinda G Chowdhury. Introduction to Modern Information Retrieval. Facet Publishing, 2010. 6

[22] David Liben-Nowell and Jon Kleinberg. The link prediction problem for social networks. In Proceedings of the Twelfth International Conference on Information and Knowledge Management, pages 556-559, 2003.

[23] Glen Jeh and Jennifer Widom. SimRank: a measure of structural-context similarity. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 538-543, 2002. 6

[24] Aditya Krishna Menon and Charles Elkan. Link prediction via matrix factorization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 437-452. Springer, 2011. 6

[25] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701-710, 2014.

[26] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067-1077, 2015.

[27] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855-864, 2016. 7

[28] Zhitao Wang, Chengyao Chen, and Wenjie Li. Predictive network representation learning for link prediction. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 969-972, 2017. 6

[29] Muhan Zhang and Yixin Chen. Weisfeiler-Lehman neural machine for link prediction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 575-583, 2017. 6

[30] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. Advances in Neural Information Processing Systems, 31, 2018. 6

[31] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 6

[32] Edith Cohen and David D Lewis. Approximating matrix multiplication for pattern recognition tasks. Journal of Algorithms, 30(2):211-252, 1999. 6

[33] Hsiang-Fu Yu, Cho-Jui Hsieh, Qi Lei, and Inderjit S Dhillon. A greedy approach for budgeted maximum inner product search. Advances in Neural Information Processing Systems, 30, 2017. 6

[34] Qiang Huang, Guihong Ma, Jianlin Feng, Qiong Fang, and Anthony KH Tung. Accurate and fast asymmetric locality-sensitive hashing scheme for maximum inner product search. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1561-1570, 2018. 6

[35] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In International Conference on Machine Learning, pages 1926-1934. PMLR, 2015.

[36] Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). Advances in Neural Information Processing Systems, 27, 2014.

[37] Xiao Yan, Jinfeng Li, Xinyan Dai, Hongzhi Chen, and James Cheng. Norm-ranging LSH for maximum inner product search. Advances in Neural Information Processing Systems, 31, 2018. 6

[38] Jie Liu, Xiao Yan, Xinyan Dai, Zhirong Li, James Cheng, and Ming-Chang Yang. Understanding and improving proximity graph based maximum inner product search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 139-146, 2020. 6

[39] Stanislav Morozov and Artem Babenko. Non-metric similarity graphs for maximum inner product search. Advances in Neural Information Processing Systems, 31, 2018.

[40] Zhixin Zhou, Shulong Tan, Zhaozhuo Xu, and Ping Li. Möbius transformation for fast inner product search on graph. Advances in Neural Information Processing Systems, 32, 2019. 6

[41] Kazuo Aoyama, Kazumi Saito, Hiroshi Sawada, and Naonori Ueda. Fast approximate similarity search based on degree-reduced neighborhood graphs. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1055-1063, 2011. 6

[42] Akhil Arora, Sakshi Sinha, Piyush Kumar, and Arnab Bhattacharya. HD-Index: Pushing the scalability-accuracy boundary for approximate kNN search in high-dimensional spaces. arXiv preprint arXiv:1804.06829, 2018.

[43] Cong Fu, Chao Xiang, Changxu Wang, and Deng Cai. Fast approximate nearest neighbor search with the navigating spreading-out graph. arXiv preprint arXiv:1707.00143, 2017.

[44] Yu A Malkov and Dmitry A Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824-836, 2018.

[45] Philipp M Riegger. Literature survey on nearest neighbor search and search in graphs. 2010.
|
| 362 |
+
|
| 363 |
+
[46] Wenhui Zhou, Chunfeng Yuan, Rong Gu, and Yihua Huang. Large scale nearest neighbors search based on neighborhood graph. In 2013 International Conference on Advanced Cloud and Big Data, pages 181-186. IEEE, 2013. 6
|
| 364 |
+
|
| 365 |
+
[47] Myron Flickner, Harpreet Sawhney, Wayne Niblack, Jonathan Ashley, Qian Huang, Byron Dom, Monika Gorkani, Jim Hafner, Denis Lee, Dragutin Petkovic, et al. Query by image and video content: The qbic system. computer, 28(9):23-32, 1995. 6
|
| 366 |
+
|
| 367 |
+
[48] Chun Jiang Zhu, Tan Zhu, Haining Li, Jinbo Bi, and Minghu Song. Accelerating large-scale molecular similarity search through exploiting high performance computing. In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 330-333. IEEE, 2019. 6
|
| 368 |
+
|
| 369 |
+
[49] Thomas Cover and Peter Hart. Nearest neighbor pattern classification. IEEE transactions on information theory, 13(1):21-27, 1967. 6
|
| 370 |
+
|
| 371 |
+
[50] Atsutake Kosuge and Takashi Oshima. An object-pose estimation acceleration technique for picking robot applications by using graph-reusing k-nn search. In 2019 First International Conference on Graph Computing (GC), pages 68-74. IEEE, 2019. 6
|
| 372 |
+
|
| 373 |
+
[51] Qiang Huang, Jianlin Feng, Qiong Fang, Wilfred Ng, and Wei Wang. Query-aware locality-sensitive hashing scheme for $l\_ p$ norm. The VLDB Journal,26(5):683-708,2017. 6
|
| 374 |
+
|
| 375 |
+
[52] Masajiro Iwasaki. Pruned bi-directed k-nearest neighbor graph for proximity search. In International Conference on Similarity Search and Applications, pages 20-33. Springer, 2016. 6
|
| 376 |
+
|
| 377 |
+
[53] Yuan Cao, Heng Qi, Wenrui Zhou, Jien Kato, Keqiu Li, Xiulong Liu, and Jie Gui. Binary hashing for approximate nearest neighbor search on big data: A survey. IEEE Access, 6: 2039-2054, 2017. 6
|
| 378 |
+
|
| 379 |
+
[54] Scott Cost and Steven Salzberg. A weighted nearest neighbor algorithm for learning with symbolic features. Machine learning, 10(1):57-78, 1993. 6
|
| 380 |
+
|
| 381 |
+
[55] Yitong Meng, Xinyan Dai, Xiao Yan, James Cheng, Weiwen Liu, Jun Guo, Benben Liao, and Guangyong Chen. Pmd: An optimal transportation-based user distance for recommender systems. In European Conference on Information Retrieval, pages 272-280. Springer, 2020. 6
|
| 382 |
+
|
| 383 |
+
[56] Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, pages 285-295, 2001. 6
|
| 384 |
+
|
| 385 |
+
[57] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020. 6
|
| 386 |
+
|
| 387 |
+
[58] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. Advances in Neural Information Processing Systems, 34:9061-9073, 2021. 8, 9
|
| 388 |
+
|
| 389 |
+
[59] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242-252. PMLR, 2019. 13
|
| 390 |
+
|
| 391 |
+
[60] Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks. In International conference on machine learning, pages 1908-1916. PMLR, 2014.13
|
| 392 |
+
|
| 393 |
+
## A Learning a Dot Product Decoder with a HadamardMLP Decoder is Easy

Earlier we discussed the limitations of the Dot Product decoder. An interesting question is whether the HadamardMLP decoder can replace the Dot Product decoder by approximating it. If an MLP decoder can learn a dot product easily, it is safe to use the MLP decoder instead of the dot product one in most cases. Similar problems are actively studied in machine learning. Existing work implies that, in theory, the difficulty scales polynomially with the dimensionality $d$ and $1/\epsilon$ [10, 59, 60]. This motivates us to investigate the question empirically.


|
| 398 |
+
|
| 399 |
+
Figure 6: An MLP decoder can learn a Dot Product decoder well with enough training data. The left and right figures show the MSE differences (y-axis) per epoch (x-axis) between the outputs of the Dot Product and the MLP decoders given different training sizes, with the input embedding dimensionality $d = {64}$ and $d = {128}$ respectively. The naive output denotes a predictor that always outputs zero.


|
| 402 |
+
|
| 403 |
+
Figure 7: Test inverse MSE differences between the outputs of the Dot Product and MLP decoders after convergence (y-axis) versus the training set size (x-axis).

We set up a synthetic learning task: given two embeddings ${\mathbf{x}}_{i},{\mathbf{x}}_{j} \in {\mathbb{R}}^{d}$ and a label ${\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}$, we want to obtain an MLP that approximates ${\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}$ from the inputs ${\mathbf{x}}_{i},{\mathbf{x}}_{j} \in {\mathbb{R}}^{d}$. For this experiment, we create datasets from an embedding matrix $\mathbf{E} \in {\mathbb{R}}^{{10}^{6} \times d}$, drawing every row of $\mathbf{E}$ independently from $\mathcal{N}\left( {0,\mathbf{I}}\right)$. Then, we uniformly sample (without replacement) ${10}^{4}$ and $S$ embedding pair combinations from $\mathbf{E}$ to form the test and training sets (with no overlap) respectively.

We train the MLP on the training set and evaluate it on the test set. For the architecture of the MLP, we keep it simple: following existing work $\left\lbrack {6,8}\right\rbrack$, we set the number of layers to 2 and the number of hidden units equal to the input embedding dimensionality $d$. For the optimizer, we also follow existing work $\left\lbrack {6,8}\right\rbrack$ and choose Adam. As the evaluation metric, we compute the MSE (Mean Squared Error) differences between the predicted scores of the MLP and the Dot Product decoders. We also measure the MSE of a naive model that always predicts 0 (the average score). Every experiment is repeated 5 times and we report the mean.

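As a rough illustration, the setup above can be reproduced in miniature with plain numpy. This is our own minimal sketch, not the paper's implementation: we use a much smaller embedding dimensionality and training set, and full-batch gradient descent in place of Adam; the helper `make_split` is ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 8, 5000, 1000

# Embedding pairs drawn i.i.d. from N(0, I); the label is the dot product.
def make_split(n):
    xi = rng.standard_normal((n, d))
    xj = rng.standard_normal((n, d))
    return xi * xj, (xi * xj).sum(axis=1)   # Hadamard input, dot-product label

H_tr, y_tr = make_split(n_train)
H_te, y_te = make_split(n_test)

# 2-layer ReLU MLP with d hidden units, trained by full-batch gradient descent.
W1 = rng.standard_normal((d, d)) / np.sqrt(d)
w2 = rng.standard_normal(d) / np.sqrt(d)
lr = 0.02
for _ in range(3000):
    z = H_tr @ W1.T                          # hidden pre-activations
    a = np.maximum(z, 0.0)                   # ReLU
    err = a @ w2 - y_tr                      # residuals of the regression
    g_z = np.outer(err, w2) * (z > 0)        # backprop through the ReLU
    W1 -= lr * (g_z.T @ H_tr) / n_train
    w2 -= lr * (a.T @ err) / n_train

mse_mlp = np.mean((np.maximum(H_te @ W1.T, 0.0) @ w2 - y_te) ** 2)
mse_naive = np.mean(y_te ** 2)               # always-predict-zero baseline
print(mse_mlp < mse_naive)
```

After training, the MLP's test MSE sits well below the naive baseline, mirroring the trend in Fig. 6 at this toy scale.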
Fig. 6 shows the approximation errors of the MLP per epoch for different numbers of training pairs and dimensions. The figure suggests that an MLP can easily approximate the dot product given enough training data. Consistent with the theory, the number of samples needed scales polynomially with increasing dimensions and decreasing errors. Anecdotally, we observe that the number of needed training samples is about $\mathcal{O}\left( {{d}^{\alpha }/{\epsilon }^{\beta }}\right)$ for $\alpha \approx 2,\beta \ll 1$ (see Fig. 7). In all cases, the MSE of the MLP decoder is negligible compared with the naive output.

This experiment shows that an MLP can easily approximate the dot product given enough training data. We hope this can explain, at least partially, why the MLP decoder generally performs better than the dot product.

Our conclusion may seem to differ from the existing work [10], which claims that it is hard for ConcatMLP to learn a Dot Product. In fact, our conclusion does not conflict with that of [10]: the ConcatMLP decoder processes the concatenation of the paired embeddings, rather than their Hadamard product as HadamardMLP does. HadamardMLP has an inductive bias similar to the Dot Product, which lets the former learn the latter easily. Indeed, we show that a simple two-layer MLP with only two hidden units is equivalent to the Dot Product under specific weights. We assign the first-layer weights for the two hidden units as $\mathbf{1}$ and $-\mathbf{1}$, and the second-layer weights as $1$ and $-1$. Then, its output is:

$$
{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = \operatorname{ReLU}\left( {\mathbf{1} \cdot \left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) }\right) - \operatorname{ReLU}\left( {-\mathbf{1} \cdot \left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) }\right) = \mathbf{1} \cdot \left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) = {\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}, \tag{14}
$$

which is equivalent to the Dot Product decoder. From this result, we see that an MLP decoder with such a careful weight configuration exactly reproduces the Dot Product decoder, and can thus learn the Dot Product easily.
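The weight construction above can be checked numerically. This small numpy sketch (our own) relies on the identity $a = \operatorname{ReLU}(a) - \operatorname{ReLU}(-a)$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
xi, xj = rng.standard_normal(d), rng.standard_normal(d)
h = xi * xj  # Hadamard product fed to the decoder

# Two hidden units with first-layer weights 1 and -1 (as dot products
# with h), combined with second-layer weights +1 and -1:
# ReLU(1·h) - ReLU(-1·h) = 1·h, i.e. exactly the dot product.
ones = np.ones(d)
score = max(ones @ h, 0.0) - max(-(ones @ h), 0.0)
print(np.isclose(score, xi @ xj))  # True
```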
papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,327 @@
| 1 |
+
Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

§ ABSTRACT

Link prediction (LP) has been recognized as an important task in graph learning given its broad practical applications. A typical application of LP is to retrieve the top scoring neighbors for a given source node, as in friend recommendation. These services require high inference scalability to find the top scoring neighbors from many candidate nodes at low latencies. There are two popular decoders that recent LP models mainly use to compute the edge scores from node embeddings: the HadamardMLP and Dot Product decoders. After theoretical and empirical analysis, we find that the HadamardMLP decoders are generally more effective for LP. However, HadamardMLP lacks the scalability for retrieving top scoring neighbors on large graphs, since, to the best of our knowledge, there does not exist an algorithm to retrieve the top scoring neighbors for HadamardMLP decoders in sublinear complexity. To make HadamardMLP scalable, we propose the Flashlight algorithm to accelerate the top scoring neighbor retrievals for HadamardMLP: a sublinear algorithm that progressively applies approximate maximum inner product search (MIPS) techniques with adaptively adjusted query embeddings. Empirical results show that Flashlight improves the inference speed of LP by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Our work paves the way for large-scale LP applications with the effective HadamardMLP decoders by greatly accelerating their inference.

§ 1 INTRODUCTION

The goal of link prediction (LP) is to predict the missing links in a graph [1]. LP has drawn increasing attention in the past decade due to its broad practical applications [2]. For instance, LP can be used to recommend new friends on social media [3] and to recommend attractive items to customers on e-commerce sites [4], so as to improve the user experience. During inference, these applications demand that LP methods retrieve the top scoring neighbors for a source node at low latencies. This is especially challenging on large graphs because the LP methods need to search many candidate nodes to find the top scoring neighbors.

There are two main kinds of architectures followed by recent LP models. The first uses an encoder, e.g., GCN [5], to obtain node-level embeddings and a decoder, e.g., Dot Product, to compute the edge scores between paired nodes [6]. The second crops a subgraph for every edge and computes the edge score from the subgraph directly [7]. The inference speed of the second is much lower than that of the first, so we focus on the first kind of model to achieve fast inference on large graphs. In recent years, extensive research has focused on developing more expressive LP encoders [6, 8]. However, much less work pays attention to the essential impact of the choice of decoder on LP performance. In this work, we theoretically and empirically analyze two popular LP decoders: Dot Product and HadamardMLP (an MLP following the Hadamard product), and find that the latter is generally more effective than the former.

In practical applications, we should consider not only the effectiveness of LP but also the inference efficiency. Many LP applications require fast retrieval of the top scoring neighbors for low-latency services $\left\lbrack {3,9,{10}}\right\rbrack$. For a Dot Product decoder, this retrieval can be approximated efficiently in sublinear time complexity [11]. However, to the best of our knowledge, no such sublinear algorithms exist for the top scoring neighbor retrievals of the HadamardMLP decoders. This means

§ FLASHLIGHT: SCALABLE LINK PREDICTION WITH EFFECTIVE DECODERS

Figure 1: Two popular LP decoders: the Dot Product (left), equivalent to an element-wise summation following the Hadamard product, and the HadamardMLP decoder (right).

that for every source node, we have to iterate over all the nodes in the graph to compute the scores so as to find the top scoring neighbors for HadamardMLP, which is of linear complexity and cannot scale to large graphs.

To allow LP applications to enjoy the high effectiveness of HadamardMLP decoders while avoiding their poor inference scalability, we propose the scalable top scoring neighbor search algorithm named Flashlight. Flashlight progressively calls the well-developed approximate maximum inner product search (MIPS) techniques for a few iterations. At every iteration, we analyze the retrieved neighbors and adaptively adjust the query embedding for Flashlight to find the missed high scoring neighbors. The Flashlight algorithm achieves sublinear time complexity when finding top scoring neighbors for HadamardMLP decoders, allowing for fast and scalable inference. Empirical results show that Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Overall, our work paves the way for the use of effective LP decoders in practical settings by greatly accelerating their inference.

§ 2 REVISITING LINK PREDICTION DECODERS

In this section, we formalize the link prediction (LP) problem and the LP decoders. Typically, an LP model includes an encoder that learns the node-level embeddings ${\mathbf{x}}_{i},i \in \mathcal{V}$, where $\mathcal{V}$ is the set of nodes, and a decoder $\phi : {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \rightarrow \mathbb{R}$ that combines the node-level embeddings of a pair of nodes ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$ into a single score ${s}_{ij}$. The higher ${s}_{ij}$ is, the more likely the link between nodes $i$ and $j$ is to exist. The state-of-the-art models generally use graph neural networks as the encoders [5, 6, 8, 12, 13]. From here on, we mainly focus on the decoder $\phi$.

§ 2.1 DOT PRODUCT DECODER

The most common decoder for link prediction is the Dot Product $\left\lbrack {6,8,{10}}\right\rbrack$:

$$
{s}_{ij} = {\phi }^{\text{ dot }}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{ := }} {\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}, \tag{1}
$$

where $\cdot$ denotes the dot product.

Training a link prediction model with the Dot Product decoder encourages the embeddings of connected nodes to be close to each other. Intuitively, the score ${s}_{ij}$ can be thought of as a measure of the squared Euclidean distance between the node embeddings ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$, as ${\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}\end{Vmatrix}}^{2} = {\begin{Vmatrix}{\mathbf{x}}_{i}\end{Vmatrix}}^{2} - 2{\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j} + {\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}}^{2}$, if $\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}$ is constant over the neighbors $j \in \mathcal{N}$, e.g., after normalization [14]. Because the node embeddings represent the semantic information of nodes, Dot Product assumes homophily of the graph topology, i.e., that semantically similar nodes are more likely to be connected.

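The identity relating the dot product to the squared Euclidean distance can be verified directly; the snippet below is our own illustrative numpy check, with the candidate embedding normalized to unit norm:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 32
xi = rng.standard_normal(d)
xj = rng.standard_normal(d)
xj /= np.linalg.norm(xj)  # normalize the candidate neighbor, so ||xj|| = 1

# ||xi - xj||^2 = ||xi||^2 - 2 xi.xj + ||xj||^2; with ||xj|| fixed,
# ranking neighbors by dot product equals ranking by negative distance.
lhs = np.linalg.norm(xi - xj) ** 2
rhs = np.linalg.norm(xi) ** 2 - 2 * (xi @ xj) + 1.0
print(np.isclose(lhs, rhs))  # True
```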
§ 2.2 HADAMARDMLP (MLP FOLLOWING HADAMARD PRODUCT) DECODER

Multi-layer perceptrons (MLPs) are known to be universal approximators that can approximate any continuous function on a compact set [15]. An MLP layer can be defined as a function $f : {\mathbb{R}}^{{d}_{\text{ in }}} \rightarrow {\mathbb{R}}^{{d}_{\text{ out }}}$:

$$
{f}_{\mathbf{W}}\left( \mathbf{x}\right) = \operatorname{ReLU}\left( {\mathbf{W}\mathbf{x}}\right) \tag{2}
$$

which is parameterized by the learnable weight $\mathbf{W} \in {\mathbb{R}}^{{d}_{\text{ out }} \times {d}_{\text{ in }}}$ (the bias, if it exists, can be represented by an additional column in $\mathbf{W}$ and an additional channel in the input $\mathbf{x}$ with value 1). ReLU is the activation function. In an MLP, several layers of $f$ are stacked, e.g., a 3-layer MLP can be formalized as ${f}_{{\mathbf{W}}_{3}}\left( {{f}_{{\mathbf{W}}_{2}}\left( {{f}_{{\mathbf{W}}_{1}}\left( \mathbf{x}\right) }\right) }\right)$.

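The bias-folding remark can be illustrated with a quick numpy check (our own sketch; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out = 4, 3
W = rng.standard_normal((d_out, d_in))
b = rng.standard_normal(d_out)
x = rng.standard_normal(d_in)

# Fold the bias into an extra column of W and a constant-1 input channel.
W_aug = np.hstack([W, b[:, None]])          # (d_out, d_in + 1)
x_aug = np.append(x, 1.0)                   # (d_in + 1,)
out_bias = np.maximum(W @ x + b, 0.0)       # f_W(x) = ReLU(Wx + b)
out_aug = np.maximum(W_aug @ x_aug, 0.0)    # bias absorbed into W
print(np.allclose(out_bias, out_aug))  # True
```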
Figure 2: HadamardMLP achieves a higher Mean Reciprocal Rank (MRR, higher is better) than other decoders on the OGBL-CITATION2 [16] dataset with GraphSAGE [12] and GCN [5] as encoders. More empirical results and the detailed settings are in Sec. 6.3.

The state-of-the-art models widely use an MLP following the Hadamard product of the paired node embeddings as the decoder (referred to as the HadamardMLP decoder) [6, 8, 10, 16]:

$$
{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{ := }} \operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}\left( {{f}_{{\mathbf{W}}_{L - 1}}\left( {\ldots {f}_{{\mathbf{W}}_{1}}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) \ldots }\right) }\right) , \tag{3}
$$

where $\odot$ denotes the Hadamard product. Fig. 1 illustrates these two decoders: the Dot Product and the HadamardMLP.

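For concreteness, a HadamardMLP decoder with randomly initialized weights can be sketched in a few lines of numpy (our own illustration, not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 16
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
wL = rng.standard_normal(d)

def hadamard_mlp_score(xi, xj):
    """s_ij = wL^T ReLU(W2 ReLU(W1 (xi ⊙ xj))), cf. Eq. (3)."""
    h = xi * xj                      # Hadamard product of the node pair
    h = np.maximum(W1 @ h, 0.0)
    h = np.maximum(W2 @ h, 0.0)
    return wL @ h

xi, xj = rng.standard_normal(d), rng.standard_normal(d)
# The decoder is symmetric in its two arguments, since ⊙ commutes.
print(np.isclose(hadamard_mlp_score(xi, xj), hadamard_mlp_score(xj, xi)))
```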
§ 2.3 OTHER LINK PREDICTION DECODERS

In principle, any function that takes two vectors as input and outputs a scalar can act as the decoder. For example, there is the bilinear dot product decoder (referred to as the Bilinear decoder) [6]:

$$
{s}_{ij} = {\mathbf{h}}_{i}^{T}\mathbf{W}{\mathbf{h}}_{j}, \tag{4}
$$

where $\mathbf{W}$ is a learnable weight, and the MLP following the concatenation of the paired embeddings (referred to as the ConcatMLP decoder) [6, 10]:

$$
{s}_{ij} = \operatorname{MLP}\left( {{\mathbf{h}}_{i}\parallel {\mathbf{h}}_{j}}\right) \tag{5}
$$

etc. These two decoders are used much less than Dot Product and HadamardMLP in the state-of-the-art LP models, possibly due to their lower effectiveness [6, 8, 10, 16].

§ 2.4 HADAMARDMLP IS GENERALLY MORE EFFECTIVE THAN OTHER DECODERS

Dot Product demands homophily of the graph data to effectively infer the links between nodes. In contrast, thanks to its universal approximation capability, an MLP can approximate any continuous function and thus does not demand homophily of the graph data for effective LP. This gap in expressiveness accounts for the performance difference between these two decoders on many datasets (see Sec. 6.3). We additionally show in Appendix A that it is easy for a HadamardMLP to learn a Dot Product, which also partially accounts for the better effectiveness of the HadamardMLP decoders over the Dot Product. Existing work also finds that the effectiveness of Bilinear and ConcatMLP is generally worse than that of the HadamardMLP or Dot Product decoder [6, 8, 10, 16]. We confirm these findings in the empirical results in Fig. 2 and more completely in Sec. 6.3.

§ 3 SCALABILITY OF LINK PREDICTION DECODERS

Most academic studies focus on training runtime when discussing scalability. However, in industrial applications, the inference speed is often more important. The inference of many LP applications needs to retrieve the top scoring neighbors of a given source node, e.g., recommending friends to a user in friend recommendation. Given a source node, if there are $n$ nodes in the graph, the inference time complexity is $\mathcal{O}\left( n\right)$ if the decoder needs to iterate over all $n$ nodes to compute the edge scores. For large scale applications, $n$ is typically in the millions, or even larger. The empirical results show that the inference time for finding the top scoring neighbors of a source node exceeds one second for HadamardMLP on the OGBL-CITATION2 dataset of nearly three million nodes (see Sec. 6.5).

For a Dot Product decoder, the problem of finding the top scoring neighbors can be approximated efficiently. This is a well-studied problem, known as approximate maximum inner product search (MIPS) [17, 18] (see Sec. 5.2 for a comprehensive literature review). MIPS techniques allow Dot Product inference to be completed in a few milliseconds, even with millions of neighbors. Some work tries to extend MIPS to the ConcatMLP [19, 20]; these methods impose strict assumptions on model training and are not directly applicable to the HadamardMLP. To the best of our knowledge, no such sublinear techniques exist for the top scoring neighbor retrieval with the HadamardMLP [10], which is a complex nonlinear function.

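To make the contrast concrete, the following numpy sketch (our own; all names are assumptions) shows the exact, linear-time version of the top-$k$ inner product query that approximate MIPS indexes such as LSH or proximity graphs answer in sublinear time:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 100_000, 64, 10
E = rng.standard_normal((n, d)).astype(np.float32)  # candidate embeddings
q = rng.standard_normal(d).astype(np.float32)       # source-node query

# Exact (linear-time) MIPS: score every candidate, then select the top k.
scores = E @ q
top_k = np.argpartition(-scores, k)[:k]             # unordered top-k set
top_k = top_k[np.argsort(-scores[top_k])]           # sort the k winners
print(len(top_k))  # 10
```

Approximate indexes trade a small loss in recall for avoiding the full `E @ q` pass over all $n$ candidates.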
To summarize, the HadamardMLP decoder is not scalable for real-time LP services on large graphs, while the Dot Product decoder allows fast retrieval using the well-established MIPS techniques.

§ 4 FLASHLIGHT: SCALABLE LINK PREDICTION WITH EFFECTIVE DECODERS

Sec. 2 has shown that the HadamardMLP decoder enjoys higher effectiveness than the Dot Product decoder, which supports the superior performance of HadamardMLP on many LP benchmarks. On the other hand, Sec. 3 has shown that the HadamardMLP is not scalable for real-time LP applications on large graphs, while Dot Product supports fast inference using the well-established MIPS techniques. In this section, we devise a fast inference algorithm for HadamardMLP to enable scalable LP with effective decoders.

We exploit the advances in the well-developed MIPS techniques to accelerate the inference of HadamardMLP. Specifically, we decompose the top scoring neighbor retrieval for HadamardMLP decoders into a sequence of MIPS queries. Our algorithm works in a progressive manner: the query embedding in every search is adaptively adjusted to find the high scoring neighbors missed in the previous search.

The challenge of retrieving the highest scoring neighbors for HadamardMLP is rooted in not knowing which neurons are activated: if we knew which neurons are activated, the nonlinear HadamardMLP would reduce to a linear model. On the $l$th MLP layer, we define the mask matrix ${\mathbf{M}}_{\mathcal{A},l} \in {\mathbb{R}}^{{d}_{l} \times {d}_{l}}$ to represent the set of activated neurons $\mathcal{A}$ as

$$
{M}_{ij} = \left\{ \begin{array}{ll} 1, & \text{ if }i = j\text{ and }i \in \mathcal{A} \\ 0, & \text{ otherwise } \end{array}\right. \tag{6}
$$

With ${\mathbf{M}}_{\mathcal{A},l}$, we reformulate the HadamardMLP decoder as:

$$
{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{W}}_{L - 1}\ldots {\mathbf{M}}_{\mathcal{A},1}{\mathbf{W}}_{1}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)
$$

$$
= \left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right) \cdot {\mathbf{x}}_{j} \tag{7}
$$

Because the vector ${\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{w}}_{L}$ is determined by the weights of the MLP and the activated neurons $\mathcal{A}$, we denote it as ${\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right)$:

$$
{\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \mathrel{\text{ := }} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{w}}_{L} \tag{8}
$$

Given the source node $i$, because the score ${s}_{ij}$ is the dot product between $\left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right)$ and the neighbor embedding ${\mathbf{x}}_{j}$, we term the former vector the query embedding $\mathbf{q}$:

$$
\mathbf{q} \mathrel{\text{ := }} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} = {\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \odot {\mathbf{x}}_{i} \tag{9}
$$

In this way, we can reformulate the output of the decoder ${\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right)$ as

$$
{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = \mathbf{q} \cdot {\mathbf{x}}_{j}. \tag{10}
$$

In practice, we can use $\mathbf{q}$ as the query embedding in MIPS to retrieve the neighbors with the highest inner products, which correspond to the highest scores. The remaining issue is how to obtain the activated neurons $\mathcal{A}$ so as to form the query embedding $\mathbf{q}$: different node pairs activate different neurons. Initially, without knowing which neurons are activated, we assume all neurons are activated, i.e., the initial query embedding is:

$$
\mathbf{q}\left\lbrack 1\right\rbrack = \left( {\mathop{\prod }\limits_{{i = 1}}^{{L - 1}}{\mathbf{W}}_{i}^{T}}\right) {\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} \tag{11}
$$

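The linearization in Eqs. (7)-(9) can be verified numerically: once the activation masks of a node pair are known, the ReLU MLP collapses to a dot product with a collapsed query vector. A small numpy check (our own sketch, with a 3-layer decoder):

```python
import numpy as np

rng = np.random.default_rng(6)
d = 8
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
wL = rng.standard_normal(d)
xi = rng.standard_normal(d)
xj = rng.standard_normal(d)

# Forward pass, recording which ReLU neurons fire for this node pair.
h = xi * xj
z1 = W1 @ h
m1 = (z1 > 0).astype(float)        # diagonal of M_{A,1}
z2 = W2 @ (z1 * m1)
m2 = (z2 > 0).astype(float)        # diagonal of M_{A,2}
s_forward = wL @ (z2 * m2)

# With the masks known, the MLP collapses to a linear map:
# s_ij = (W1^T M_1 W2^T M_2 wL ⊙ x_i) · x_j, cf. Eqs. (7)-(9).
v = W1.T @ (m1 * (W2.T @ (m2 * wL)))
q = v * xi
print(np.isclose(s_forward, q @ xj))  # True
```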
Algorithm 1 Flashlight: progressively "illuminates" the semantic space to retrieve the high scoring neighbors for the LP HadamardMLP decoders.

Input: A trained HadamardMLP decoder ${\phi }^{\mathrm{MLP}}$ that outputs the logit ${s}_{ij}$ for the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$. The set of nodes $\mathcal{V}$. The node embedding set $\mathcal{X} = \left\{ {{\mathbf{x}}_{i} \mid i \in \mathcal{V}}\right\}$. A source node $i$. The number of iterations $T$. The number of neighbors to retrieve at every iteration: $\mathbf{N} = \left\lbrack {{N}_{1},{N}_{2},\ldots ,{N}_{T}}\right\rbrack$.

Output: The recommended neighbors $\mathcal{N}$ for the source node $i$.

Initialize the set of retrieved recommended neighbors $\mathcal{N} \leftarrow \varnothing$.
Initialize the set of activated neurons $\mathcal{A}\left\lbrack 0\right\rbrack$ as all the neurons in the MLP.
for $t \leftarrow 1$ to $T$ do
  Calculate the query embedding $\mathbf{q}\left\lbrack t\right\rbrack \leftarrow {\mathbf{x}}_{i} \odot {\operatorname{MLP}}_{\mathcal{A}\left\lbrack {t - 1}\right\rbrack }\left( \cdot \right)$.
  $\mathcal{N}\left\lbrack t\right\rbrack \leftarrow$ the ${N}_{t}$ neighbors in $\mathcal{X}$ that maximize the inner product with $\mathbf{q}\left\lbrack t\right\rbrack$.
  $\mathcal{X} \leftarrow \mathcal{X} \smallsetminus \left\{ {{\mathbf{x}}_{j} \mid j \in \mathcal{N}\left\lbrack t\right\rbrack }\right\}$.
  ${j}^{ \star }\left\lbrack t\right\rbrack \leftarrow \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)$.
  $\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right)$.
  $\mathcal{N} \leftarrow \mathcal{N} \cup \mathcal{N}\left\lbrack t\right\rbrack$.
return $\mathcal{N}$
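Alg. 1 can be sketched for a 2-layer HadamardMLP decoder $s_{ij} = \mathbf{w}_2^{T}\,\mathrm{ReLU}(\mathbf{W}_1(\mathbf{x}_i \odot \mathbf{x}_j))$ as follows (a minimal sketch; a brute-force inner-product search stands in for an approximate MIPS library such as ScaNN, and all names are our own):

```python
import numpy as np

def flashlight(x_src, X, W1, w2, T=3, N_per_iter=200):
    """Sketch of Alg. 1 for a 2-layer decoder s_ij = w2 . relu(W1 (x_i * x_j)).
    A brute-force top-N inner-product search stands in for approximate MIPS."""
    candidates = np.ones(len(X), dtype=bool)     # embeddings still in X
    retrieved = []
    active = np.ones(W1.shape[0], dtype=bool)    # A[0]: all neurons active
    for t in range(T):
        # Query q[t]: fold the assumed-active neurons into one linear map,
        # then take the Hadamard product with the source embedding (Eq. 11).
        q = (W1[active].T @ w2[active]) * x_src
        # MIPS step: retrieve the N_t best remaining inner products.
        scores = X @ q
        scores[~candidates] = -np.inf
        top = np.argsort(-scores)[:N_per_iter]
        candidates[top] = False
        retrieved.extend(top.tolist())
        # Re-estimate the active set from the exactly-scored best neighbor
        # of this iteration (Eq. 13).
        pre = W1 @ (x_src[:, None] * X[top].T)   # pre-activations, (hidden, N_t)
        exact = w2 @ np.maximum(pre, 0.0)
        j_star = top[int(np.argmax(exact))]
        active = (W1 @ (x_src * X[j_star])) > 0
    return retrieved
```

In practice the brute-force `argsort` line would be replaced by a query to an approximate MIPS index built over $\mathcal{X}$.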
This initial design reflects the general trend of increasing edge scores on LP, without restricting which neurons are activated. We use $\mathbf{q}\left\lbrack 1\right\rbrack$ as the query embedding to retrieve the highest inner product neighbors as $\mathcal{N}\left\lbrack 1\right\rbrack$ in the first iteration. Then, given the neighbors $\mathcal{N}\left\lbrack t\right\rbrack$ retrieved in the $t$th iteration, we analyze $\mathcal{N}\left\lbrack t\right\rbrack$ and adaptively adjust the query embedding $\mathbf{q}\left\lbrack {t + 1}\right\rbrack$ used in the next iteration to find more high scoring neighbors. Specifically, we feed $\mathcal{N}\left\lbrack t\right\rbrack$ forward through the MLP. We define the function $A\left( {\cdot , \cdot }\right)$ that returns the set of activated neurons for an MLP (the first input) given the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$ (the second input). Then we can use it to extract $\mathcal{A}$ as:
$$
\mathcal{A} = A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{12}
$$
Then, we obtain the set of activated neurons of the highest-scoring neighbor at the $t$th iteration as:
$$
\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right) \text{ , where }{j}^{ \star }\left\lbrack t\right\rbrack = \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{13}
$$
This implies that neighbors activating $\mathcal{A}\left\lbrack t\right\rbrack$ can obtain high edge scores. Hence, if we take $\mathcal{A}\left\lbrack 1\right\rbrack$ as the set of neurons assumed to be activated in the next query, we can find more high scoring neighbors. In this way, we set the neurons assumed to be activated in the next iteration to $\mathcal{A}\left\lbrack t\right\rbrack$. We repeat the above iterations until enough neighbors are retrieved. The algorithm is summarized in Alg. 1.
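The function $A(\cdot,\cdot)$ of Eq. (12) can be sketched for a 2-layer MLP with ReLU activations (an assumption on the architecture; a deeper network would record one mask per layer):

```python
import numpy as np

def activated_neurons(W1, x):
    """A(MLP(.), x): indices of first-layer ReLU units that fire on input x.
    Sketch for a 2-layer MLP; deeper nets would record one mask per layer."""
    return np.flatnonzero(W1 @ x > 0)
```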
We name our algorithm Flashlight because it works like a flashlight that progressively "illuminates" the semantic space to find the high scoring neighbors. The query embeddings are like the light beams sent from the flashlight, and our process of adjusting the query embeddings is like progressively redirecting the "light" of the "flashlight" by checking the "objects" found in the last "illumination".
In the experiments, we find that our Flashlight algorithm is effective at finding the top scoring neighbors among massive numbers of candidate neighbors. For example, in Fig. 3, our Flashlight finds the top 100 scoring neighbors out of nearly three million candidates by retrieving only 200 neighbors on the large OGBL-CITATION2 graph dataset for the HadamardMLP decoders.
Complexity Analysis. Using MLP decoders to compute the LP probabilities of all neighbors has complexity $\mathcal{O}\left( N\right)$, where $N$ is the number of nodes in the whole graph. Finding the top scoring neighbors from the exact probabilities of all neighbors also has linear complexity $\mathcal{O}\left( N\right)$. Overall, using MLP decoders to find the top scoring neighbors has time complexity $\mathcal{O}\left( N\right)$. In contrast, our Flashlight progressively calls the MIPS techniques a constant number of times, invariant to the graph size, which leads to the same sublinear complexity as MIPS. In conclusion, our Flashlight improves the scalability and applicability of HadamardMLP decoders by reducing their inference time complexity from linear to sublinear.
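As a rough cost model (the per-candidate decoder cost $c_{\mathrm{MLP}}$ and the per-query MIPS cost $c_{\mathrm{MIPS}}(N)$ are illustrative constants, not measured quantities), the two inference strategies compare as:

$$
C_{\text{exhaustive}} = N \cdot c_{\mathrm{MLP}}, \qquad
C_{\text{Flashlight}} = \sum_{t=1}^{T}\Big( c_{\mathrm{MIPS}}(N) + N_t \, c_{\mathrm{MLP}} \Big).
$$

Since $T$ and the $N_t$ are constants and $c_{\mathrm{MIPS}}(N)$ is sublinear in $N$, the Flashlight cost grows sublinearly in $N$ while exhaustive decoding grows linearly.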
Table 1: Statistics of datasets.

| Dataset | OGBL-DDI | OGBL-COLLAB | OGBL-PPA | OGBL-CITATION2 |
| --- | --- | --- | --- | --- |
| #Nodes | 4,267 | 235,868 | 576,289 | 2,927,963 |
| #Edges | 1,334,889 | 1,285,465 | 30,326,273 | 30,561,187 |
§ 5 RELATED WORK
§ 5.1 LINK PREDICTION MODELS
Existing LP models can be categorized into three families: heuristic feature based [3, 9, 21-23], latent embedding based [12, 24-28], and neural network based ones. The neural network based link prediction models have mainly been developed in recent years; they explore non-linear deep structural features with neural layers. Variational graph auto-encoders [13] predict links by encoding the graph with graph convolutional layers [5]. Two further state-of-the-art neural models, WLNM [29] and SEAL [30], use graph labeling algorithms to transform the joint neighborhood of two nodes (the enclosing subgraph) into a meaningful matrix and employ a convolutional neural layer or the novel graph neural layer DGCNN [31] for encoding. More recently, [6, 8] summarized the architectures of LP models and formally defined the encoders and decoders.
Different from the previous work, we focus on analyzing the effectiveness of different LP decoders and improving the scalability of the effective LP decoders. In practice, we find that the Hadamard decoders exhibit superior effectiveness but poor scalability for inference. Our work significantly accelerates the inference of HadamardMLP decoders to make the effective LP scalable.
§ 5.2 MAXIMUM INNER PRODUCT SEARCH
Finding the top scoring neighbors for the Dot Product decoder in sublinear time is a well-studied research problem, known as approximate maximum inner product search (MIPS). There are several approaches to MIPS: sampling-based [11, 32, 33], LSH-based [34-37], graph-based [38-40], and quantization-based approaches [17, 18]. MIPS is a fundamental building block in various application domains [41-46], such as information retrieval [47, 48], pattern recognition [49, 50], data mining [51, 52], machine learning [53, 54], and recommendation systems [55, 56].
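For reference, the exact $\mathcal{O}(N)$ search that these approximate methods accelerate can be written in a few lines (a brute-force sketch; function and variable names are our own):

```python
import numpy as np

def mips_topk(q, X, k):
    """Exact top-k MIPS by brute force: the O(N) baseline that the
    approximate methods cited above replace with sublinear search."""
    scores = X @ q
    top = np.argpartition(-scores, k)[:k]    # unordered top-k, O(N)
    return top[np.argsort(-scores[top])]     # sort only the k winners
```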
With the explosive growth of dataset scale and the inevitable curse of dimensionality, MIPS is essential to offer scalable services. However, the HadamardMLP decoders are nonlinear, and no well-studied sublinear-complexity algorithm exists to find the top scoring neighbors for HadamardMLP [10]. In this work, we utilize the well-studied approximate MIPS techniques with adaptively adjusted query embeddings to find the top scoring neighbors for the MLP decoders in a progressive manner. Our method supports plug-and-play use during inference and significantly accelerates LP inference with the effective MLP decoders.
§ 6 EXPERIMENTS
In this section, we first compare the effectiveness of different LP decoders and find that the HadamardMLP decoders generally perform better than the others. We then implement our Flashlight algorithm with LP models to show that Flashlight effectively retrieves the top scoring neighbors for the HadamardMLP decoders. As a result, our work significantly improves the inference efficiency and scalability of HadamardMLP decoders.
§ 6.1 DATASETS
We evaluate link prediction on the Open Graph Benchmark (OGB) datasets [57]. We use four OGB datasets with different graph types: OGBL-DDI, OGBL-COLLAB, OGBL-CITATION2, and OGBL-PPA. OGBL-DDI is a homogeneous, unweighted, undirected graph representing the drug-drug interaction network; each node represents a drug, and edges represent interactions between drugs. OGBL-COLLAB is an undirected graph representing a subset of the collaboration network between authors indexed by MAG; each node represents an author, edges indicate collaborations between authors, and all nodes come with 128-dimensional features. OGBL-CITATION2 is a directed graph representing the citation network between a subset of papers extracted from MAG; each node is a paper with 128-dimensional word2vec features. OGBL-PPA is an undirected, unweighted graph; nodes represent proteins from 58 different species, and edges indicate biologically meaningful associations between proteins. The statistics of these datasets are presented in Table 1.
Table 2: The test effectiveness comparison of LP decoders on four OGB datasets (DDI, COLLAB, PPA, and CITATION2) [16]. We report the results of the standard metrics averaged over 10 runs, following the existing work [6, 16]. HadamardMLP is more effective than the other decoders. Flashlight effectively retrieves the top scoring neighbors for HadamardMLP and keeps its exact outputs.
| Decoder | Dot Product | Bilinear | ConcatMLP | HadamardMLP | HadamardMLP w/ Flashlight |
| --- | --- | --- | --- | --- | --- |
| **OGBL-DDI** | | | | | |
| GCN [5] | 13.8 ± 1.8 | 16.1 ± 1.2 | 12.9 ± 1.4 | 37.1 ± 5.1 | 37.1 ± 5.1 |
| GraphSAGE [12] | 36.5 ± 2.6 | 39.4 ± 1.7 | 34.2 ± 1.9 | **53.9 ± 4.7** | **53.9 ± 4.7** |
| Node2Vec [27] | 11.6 ± 1.9 | 13.8 ± 1.6 | 10.8 ± 1.7 | 23.3 ± 2.1 | 23.3 ± 2.1 |
| **OGBL-COLLAB** | | | | | |
| GCN [5] | 42.9 ± 0.7 | 43.2 ± 0.9 | 42.3 ± 1.0 | 44.8 ± 1.1 | 44.8 ± 1.1 |
| GraphSAGE [12] | 37.3 ± 0.9 | 41.5 ± 0.8 | 37.0 ± 0.7 | **48.1 ± 0.8** | **48.1 ± 0.8** |
| Node2Vec [27] | 27.7 ± 1.1 | 31.5 ± 1.0 | 27.2 ± 0.8 | **48.9 ± 0.5** | **48.9 ± 0.5** |
| **OGBL-PPA** | | | | | |
| GCN [5] | 5.1 ± 0.4 | 5.8 ± 0.5 | 6.2 ± 0.6 | **18.7 ± 1.3** | **18.7 ± 1.3** |
| GraphSAGE [12] | 3.2 ± 0.3 | 6.5 ± 0.7 | 5.8 ± 0.4 | 16.6 ± 2.4 | 16.6 ± 2.4 |
| Node2Vec [27] | 4.2 ± 0.5 | 7.8 ± 0.6 | 8.3 ± 0.4 | **22.3 ± 0.8** | **22.3 ± 0.8** |
| **OGBL-CITATION2** | | | | | |
| GCN [5] | 65.3 ± 0.4 | 69.0 ± 0.8 | 62.7 ± 0.3 | **84.7 ± 0.2** | **84.7 ± 0.2** |
| GraphSAGE [12] | 62.2 ± 0.7 | 65.4 ± 0.9 | 60.8 ± 0.6 | **80.4 ± 0.1** | **80.4 ± 0.1** |
| Node2Vec [27] | 52.7 ± 0.8 | 54.1 ± 0.6 | 51.4 ± 0.5 | **61.4 ± 0.1** | **61.4 ± 0.1** |
§ 6.2 HYPER-PARAMETER SETTINGS
For all experiments in this section, we report the average and standard deviation over ten runs with different random seeds. The results are reported on the best model selected using validation data. We set the hyper-parameters of the used techniques and the considered baseline methods, e.g., the batch size, the number of hidden units, the optimizer, and the learning rate, as suggested by their authors. We use the recent MIPS method ScaNN [18] in the implementation of our Flashlight. For the hyper-parameters of Flashlight, we have found in the experiments that its performance is robust to hyper-parameter changes over a broad range. Therefore, by default we simply set the number of iterations of Flashlight to $T = 3$ and the number of retrieved neighbors to a constant 200 per iteration. We run all experiments on a machine with 80 Intel(R) Xeon(R) E5-2698 v4 @ 2.20GHz CPUs and a single NVIDIA V100 GPU with 16GB RAM.
§ 6.3 EFFECTIVENESS OF LINK PREDICTION DECODERS
We follow the standard benchmark settings of the OGB datasets to evaluate the effectiveness of LP with different decoders. The benchmark setting of OGBL-DDI is to predict drug-drug interactions given information on already known drug-drug interactions. The performance is evaluated by Hits@20: each true drug interaction is ranked among a set of approximately 100,000 randomly sampled negative drug interactions, and we count the ratio of positive edges that are ranked at the 20th place or above. The task of OGBL-COLLAB is to predict future author collaborations given past collaborations. The evaluation metric is Hits@50, where each true collaboration is ranked among a set of 100,000 randomly sampled negative collaborations. The task of OGBL-PPA is to predict new association edges given the training edges. The evaluation metric is Hits@100, where each positive edge is ranked among 3,000,000 randomly sampled negative edges. The task of OGBL-CITATION2 is to predict missing citations given existing citations. The evaluation metric is the Mean Reciprocal Rank (MRR), where the reciprocal rank of the true reference among 1,000 sampled negative candidates is calculated for each source node, and then the average is taken over all source nodes.
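The two metric families can be sketched as follows (a simplified reading of the OGB evaluators; the exact tie-breaking in the official implementation may differ):

```python
import numpy as np

def hits_at_k(pos_scores, neg_scores, k):
    """Simplified OGB-style Hits@K: fraction of positive edges scoring
    above the k-th highest negative in the shared negative pool."""
    thresh = np.sort(neg_scores)[-k]   # k-th highest negative score
    return float(np.mean(pos_scores > thresh))

def mrr_single(pos_score, neg_scores):
    """Reciprocal rank of one true edge among its sampled negatives
    (one common tie-breaking choice); OGB averages this over sources."""
    rank = 1 + int(np.sum(np.asarray(neg_scores) >= pos_score))
    return 1.0 / rank
```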
We implement the different decoders introduced in Sec. 2, including the Dot Product, Bilinear, ConcatMLP, and HadamardMLP decoders, on top of the LP encoders GCN [5], GraphSAGE [12], and Node2Vec [27], to compare the effects of different decoders on LP effectiveness. We present the results on the OGBL-DDI, OGBL-COLLAB, OGBL-PPA, and OGBL-CITATION2 datasets in Table 2. We observe that the HadamardMLP decoder outperforms the other decoders on all encoders and datasets. Our Flashlight algorithm effectively retrieves the top scoring neighbors for the HadamardMLP decoder and keeps the exact LP probabilities of the HadamardMLP's output, which leads to identical results for the HadamardMLP decoder with and without Flashlight.
Note that the benchmark settings of these datasets sample a small portion of negative edges for the test evaluation, which is not challenging enough to evaluate the scalability of LP decoders on retrieving the top scoring neighbors from massive candidates in practice.
§ 6.4 THE FLASHLIGHT ALGORITHM EFFECTIVELY FINDS THE TOP SCORING NEIGHBORS
To evaluate the effectiveness of our Flashlight at retrieving the top scoring neighbors for the HadamardMLP decoder, we propose a more challenging test setting for the OGB LP datasets. Given a source node, we take its top 100 scoring neighbors under the HadamardMLP decoder as the ground truth for retrieval. The task is to retrieve $k$ neighbors for a source node that match as many of the ground-truth neighbors as possible. We formally define the metric Recall$@k$ as the portion of ground-truth neighbors appearing among the top $k$ neighbors retrieved by a method.
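This metric is straightforward to compute (a minimal sketch):

```python
def recall_at_k(retrieved, ground_truth, k):
    """Recall@k: fraction of ground-truth neighbors among the top-k retrieved."""
    return len(set(retrieved[:k]) & set(ground_truth)) / len(ground_truth)
```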
We sample 1000 nodes as source nodes from the OGBL-DDI and OGBL-CITATION2 datasets respectively for evaluation. We evaluate the effectiveness of our Flashlight algorithm by checking whether it can find the top scoring neighbors for every source node. We set the number of Flashlight iterations to 10 and the number of retrieved neighbors per iteration to 50. We present the Recall@$k$ for $k$ from 1 to 500, averaged over all the source nodes, in Fig. 3. The "oracle" curve represents the performance of an optimal searcher, whose retrieved top $k$ neighbors are exactly the top $k$ scoring neighbors of HadamardMLP.
Figure 3: Recall $@k$ is the fraction of the 100 top scoring neighbors of HadamardMLP ranked in the top $k$ neighbors retrieved by Flashlight. We report Recall $@k$ averaged over all the source nodes on OGBL-CITATION2 and OGBL-DDI.
When $k = {100}$, the 100 neighbors retrieved by our Flashlight cover more than ${80}\%$ of the ground-truth neighbors. When $k \geq {200}$, the recall reaches ${100}\%$. As a comparison, if we randomly sample the candidate neighbors for retrieval, the Recall$@k$ grows linearly with $k$ and is less than $1 \times {10}^{-4}$ for $k = {100}$ on the OGBL-CITATION2 dataset. The curves of Flashlight are close to the optimal curve of the "oracle". These results demonstrate the high effectiveness of our Flashlight at finding the top scoring neighbors.
On both the large OGBL-CITATION2 dataset and the smaller OGBL-DDI dataset, our Flashlight exhibits similar Recall$@k$ performance across different numbers $k$ of retrieved neighbors. This implies that our Flashlight can accurately find the top scoring neighbors for both small and large graphs.
§ 6.5 INFERENCE EFFICIENCY OF LINK PREDICTION WITH OUR FLASHLIGHT ALGORITHM
We use throughput to evaluate the inference speed of neighbor retrieval of different methods. The throughput is defined as the number of source nodes that a method can serve per second when retrieving the top 100 scoring neighbors. Besides the LP models that follow the encoder-decoder architecture, e.g., GraphSAGE [12], GCN [5], and PLNLP [6], there are subgraph-based LP models, e.g., SUREL [7] and SEAL [58]. The common issue of the subgraph-based models is poor efficiency: they have to crop a separate subgraph for every node pair to calculate the LP probability of that pair. Hence, the node embeddings cannot be shared across the LP calculations for different node pairs, which leads to a much lower inference speed for subgraph-based LP models than for encoder-decoder LP models. We compare the inference efficiency of different methods on the OGBL-CITATION2 dataset in Fig. 4, where we present the inference speed of each method when achieving ${100}\%$ Recall@100 for the top 100 scoring neighbors.
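Throughput as defined here can be measured with a simple timer (a sketch; `retrieve_fn` is a placeholder for any retrieval method that returns the top 100 neighbors of one source node):

```python
import time

def throughput(retrieve_fn, source_nodes):
    """Source nodes served per second by `retrieve_fn` (a placeholder
    for any method returning the top 100 neighbors of one node)."""
    start = time.perf_counter()
    for s in source_nodes:
        retrieve_fn(s)
    elapsed = time.perf_counter() - start
    return len(source_nodes) / elapsed
```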
We observe that our Flashlight accelerates the inference of the LP models GraphSAGE [12], GCN [5], and PLNLP [6] with HadamardMLP decoders by more than 100 times. This gap will be even larger for datasets of larger scale, because inference with our Flashlight has sublinear time complexity while the plain HadamardMLP decoders have linear complexity. Note that the y-axis is in logarithmic scale. The subgraph-based methods SUREL [7] and SEAL [58] achieve throughputs lower than $1 \times {10}^{-2}$ and $1 \times {10}^{-3}$ respectively, which is not applicable to practical services that require latencies of milliseconds.
Figure 4: The inference speed of different LP methods on the OGBL-CITATION2 dataset. The y-axis (throughput) is in logarithmic scale.
Figure 5: The tradeoff between the inference speed (y-axis) and the effectiveness of finding the top scoring neighbors (x-axis) on the OGBL-CITATION2 (left) and OGBL-PPA (right) datasets.
Taking a further step, we comprehensively evaluate the tradeoff between the inference speed and the effectiveness of finding the top scoring neighbors. Taking GraphSAGE as the encoder, we present the tradeoff curves between the throughput and the Recall@100 on the OGBL-CITATION2 and OGBL-PPA datasets in Fig. 5. We take the HadamardMLP decoder with random sampling as the baseline for comparison. For example, on the OGBL-CITATION2 dataset, when achieving a Recall@100 of more than 80%, the HadamardMLP with our Flashlight can serve more than 200 source nodes per second, while the HadamardMLP with random sampling can serve less than 1 node per second. Overall, our Flashlight achieves a much better tradeoff between inference speed and effectiveness than the HadamardMLP with random sampling.
§ 7 CONCLUSION
Our theoretical and empirical analysis suggests that the HadamardMLP decoders are a better default choice than the Dot Product in terms of LP effectiveness. However, because no well-developed sublinear-complexity top scoring neighbor search algorithm exists for HadamardMLP, the HadamardMLP decoders are not scalable and cannot support fast inference on large graphs. To resolve this issue, we propose the Flashlight algorithm to accelerate the inference of LP models with HadamardMLP decoders. Flashlight progressively applies the well-studied MIPS techniques for a few iterations, adaptively adjusting the query embeddings at every iteration to find more high scoring neighbors. Empirical results show that our Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 graph. Overall, our work paves the way for the use of strong LP decoders in practical settings by greatly accelerating their inference.
|
papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
A key problem in network theory is how to reconfigure a graph in order to optimize a quantifiable objective. Given the ubiquity of networked systems, such work has broad practical applications in a variety of situations, ranging from drug and material design to telecommunications. The large decision space of possible reconfigurations, however, makes this problem computationally intensive. In this paper, we cast the problem of network rewiring for optimizing a specified structural property as a Markov Decision Process (MDP), in which a decision-maker is given a budget of modifications that are performed sequentially. We then propose a general approach based on the Deep Q-Network (DQN) algorithm and graph neural networks (GNNs) that can efficiently learn strategies for rewiring networks. We then discuss a cybersecurity case study, i.e., an application to the computer network reconfiguration problem for intrusion protection. In a typical scenario, an attacker might have a (partial) map of the system they plan to penetrate; if the network is effectively "scrambled", they would not be able to navigate it since their prior knowledge would become obsolete. This can be viewed as an entropy maximization problem, in which the goal is to increase the surprise of the network. Indeed, entropy acts as a proxy measurement of the difficulty of navigating the network topology. We demonstrate the general ability of the proposed method to obtain better entropy gains than random rewiring on synthetic and real-world graphs while being computationally inexpensive, as well as being able to generalize to larger graphs than those seen during training. Simulations of attack scenarios confirm the effectiveness of the learned rewiring strategies.
## 1 Introduction
A key problem in network theory is how to rewire a graph in order to optimize a given quantifiable objective. Addressing this problem has applications in several domains, given that several systems of practical interest can be represented as graphs $\left\lbrack {{23},{24},{29},{49},{50}}\right\rbrack$ . A large body of literature studies how to construct and design networks in order to optimize some quantifiable goal, such as robustness in supply chain and wireless sensor networks [40, 53] or ADME properties of molecules $\left\lbrack {{19},{39}}\right\rbrack$ . Given the intractable number of distinct configurations of even relatively small networks, optimizing these structural and topological properties is generally a non-trivial task that has been approached from various angles in graph theory $\left\lbrack {{15},{18}}\right\rbrack$ and also studied from heuristic perspectives $\left\lbrack {{21},{35}}\right\rbrack$ . Exact solutions are too computationally expensive to obtain, while heuristic methods are generally sub-optimal and do not generalize well to unseen instances.
The adoption of graph neural networks (GNNs) [41] and deep reinforcement learning (RL) [36] techniques has led to promising approaches to the problem of optimizing graph processes or structure $\left\lbrack {{14},{16},{30}}\right\rbrack$ . A fundamental structural modification is rewiring, in which edges (e.g., links in a computer network) are reconfigured such that the topology changes while their total number remains constant. The problem of rewiring to optimize a structural property has not been studied in the literature.
In this paper, we present a solution to the network rewiring problem for optimizing a specified structural property. We formulate this task as a Markov Decision Process (MDP), in which a decision-maker is given a budget of rewiring operations that are performed sequentially. We then propose an approach based on the Deep Q-Network (DQN) algorithm and GNNs that can efficiently learn strategies for rewiring networks. We evaluate the method by means of a realistic cybersecurity case study. In particular, we assume a scenario in which an attacker has entered a computer network and aims to reach a particular node of interest. We also assume that the attacker has partial knowledge of the underlying graph topology, which is used to reach a given target inside the network. The goal is to learn a rewiring process for modifying the structure of the graph so as to disrupt the capability of the attacker to reach its target, all the while keeping the network operational. This can be seen as an example of moving target defense (MTD) [8]. We frame the solution as an entropy maximization problem, in which the goal is to increase the surprise of the network in order to disrupt the attacker's navigation inside it. Indeed, entropy acts as a proxy measurement of the difficulty of this task, with an increase in entropy corresponding to an increase in its difficulty. In particular, we consider two measures of network entropy, namely Shannon entropy and the Maximal Entropy Random Walk (MERW), and we compare their effectiveness.
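As an illustration of the two entropy measures, the following sketch uses common definitions, namely the Shannon entropy of the degree distribution and the MERW entropy rate $\ln \lambda_{\max}$ of the adjacency matrix; the paper's exact formulations may differ:

```python
import numpy as np

def shannon_degree_entropy(A):
    """Shannon entropy of the degree distribution (one common choice
    of graph Shannon entropy)."""
    _, counts = np.unique(A.sum(axis=1), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def merw_entropy(A):
    """Entropy rate of the Maximal Entropy Random Walk on an undirected
    graph: the log of the largest adjacency eigenvalue."""
    return float(np.log(np.linalg.eigvalsh(A.astype(float))[-1]))
```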
More specifically, the contributions of this paper can be summarized as follows:
- We formulate the problem of graph rewiring so as to maximize a global structural property as an MDP, in which a central decision-maker is given a certain budget of rewiring operations that are performed sequentially. We formulate an approach that combines GNN architectures and the DQN algorithm to learn an optimal set of rewiring actions by trial-and-error;
- We present an extensive case study of the proposed approach in the context of defense against network intrusion by an attacker. We show that our method is able to obtain better gains in entropy than random rewiring, while scaling to larger networks than a local greedy search, and generalizing to larger out-of-distribution graphs in some cases. Furthermore, we demonstrate the effectiveness of this approach by simulating the movement of an attacker in the network, finding that indeed the applied modifications increase the difficulty for the attacker to reach its targets in both synthetic and real-world graph topologies.
## 2 Related work
RL for graph reconfiguration. Recently, an increasing amount of research has been conducted on the use of reinforcement learning for graph reconfiguration. In particular, [14] presents a solution based on reinforcement learning for modifying graphs with the aim of attacking both node and graph classification. In addition, the authors briefly introduce a defense method using adversarial training and edge removal, which decreases the attack rate on their proposed classifier slightly, by $1\%$ . This defense strategy is, however, only effective against the attack strategy it is trained on and does not generalize. The authors of [34] instead use a reinforcement learning approach to learn an attack strategy, based on edge rewiring, against neural network classifiers of graph topologies, and show that they are able to achieve misclassification with changes that are less noticeable than edge and vertex removal or addition. Our paper focuses on a different problem that does not involve classification tasks, but rather the maximization of a given network objective function. In [16], reinforcement learning techniques are applied to the problem of optimizing the robustness of a graph by means of graph construction; the authors show that their proposed method outperforms existing techniques and generalizes to different graphs. In the present work, we optimize a global structural property through rewiring instead of constructing a graph through edge addition.
Graph robustness and attacks. A related research area is the optimization of graph robustness [37], which denotes the capacity of a graph to withstand targeted attacks and random failures. [42] demonstrates how small changes in complex networks such as an electricity system or the Internet can improve their robustness against malicious attacks. [6] investigates several heuristic reconfiguration techniques that aim to improve graph robustness without substantially modifying the network structure, and find that preferential rewiring is superior to random rewiring. The authors of [11] extend this study to a framework that can accommodate multiple rewiring strategies and objectives. Several works have used information-based complexity metrics in the context of network defense or attack strategies: [27] proposes a network security metric to assess network vulnerability by measuring the Kolmogorov complexity of effective attack paths. The underlying reasoning is that the more complex attack paths have to be in order to harm a network, the less vulnerable a network is to external attacks. Furthermore, [25] investigates the vulnerability of complex networks, finding that attacks based on edge and vertex removal are substantially more effective when the network properties are recomputed after each attack.
Figure 1: Illustrative example of the MDP timesteps comprising a single rewiring operation. The agent observes an initial state ${S}_{0} = \left( {{G}_{0},\varnothing ,\varnothing }\right)$ (first panel), from which it then selects a base node ${v}_{1} = \{ 1\}$ that will be rewired (second panel). Given the new state that contains the initial graph and the selected base node, the agent selects a target node ${v}_{2} = \{ 5\}$ to which an edge will be added (third panel). Finally, a third node ${v}_{3} = \{ 0\}$ is selected from the neighborhood of ${v}_{1} = \{ 1\}$ and the corresponding edge is removed (last panel). After a sequence of $b$ rewiring operations, the agent will receive a reward proportional to the improvement in the objective function $\mathcal{F}$ .
Cybersecurity and network defense. In the last decade, and in recent years in particular, a drastic surge in cyberattacks on governmental and industrial organizations has exposed the vulnerability of global society to cyberthreats [43]. The targeted digital systems are generally structured as a network in which entities communicate and share resources with each other. Typically, attackers seek to gain unauthorized access to the underlying network through an entry point and search for highly valuable nodes in order to infect these digital systems with malicious software such as viruses, ransomware and spyware [3], enabling them to extract sensitive information or control the functioning of the network [26]. Moving target defense (MTD) is a cybersecurity defense technique by which a network and the underlying software are dynamically changed to counteract attack strategies [4, 8, 9, 44, 51]. Most existing MTD techniques involve NP-hard problems, and approximate or heuristic solutions are often impractical [8]. We note that while most studies are applied to specific software architectures, which prevents them from being applied effectively to large-scale deployments, in this work we focus on modeling this problem from an abstract, infrastructure-agnostic perspective.
## 3 Graph rewiring as an MDP
### 3.1 Problem statement
We define a graph (network) as $G = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\}$ is the set of $n = \left| \mathcal{V}\right|$ vertices (nodes) and $\mathcal{E} = \left\{ {{e}_{1},\ldots ,{e}_{m}}\right\}$ is the set of $m = \left| \mathcal{E}\right|$ edges (links). A rewiring operation $\gamma \left( {G,{v}_{i},{v}_{j},{v}_{k}}\right)$ transforms the graph $G$ by adding the non-edge $\left( {{v}_{i},{v}_{j}}\right)$ and removing the existing edge $\left( {{v}_{i},{v}_{k}}\right)$ ; we denote the set of all such operations by $\Gamma$ . Given a budget $b \propto m$ of rewiring operations, and a global objective function $\mathcal{F}\left( G\right)$ to be maximized, the goal is to find the set of unique rewiring operations out of ${\Gamma }^{b}$ such that the resulting graph ${G}^{\prime }$ maximizes $\mathcal{F}\left( {G}^{\prime }\right)$ .
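To make the operation concrete, a single rewiring $\gamma \left( {G,{v}_{i},{v}_{j},{v}_{k}}\right)$ can be sketched as follows; the adjacency-dict representation and the `rewire` helper are illustrative assumptions, not the authors' implementation:

```python
# Sketch of a rewiring operation gamma(G, v_i, v_j, v_k): add the non-edge
# (v_i, v_j) and remove the existing edge (v_i, v_k). The undirected graph
# is stored as a dict mapping each node to the set of its neighbours.

def rewire(adj, v_i, v_j, v_k):
    """Return a new adjacency dict; the input graph is left untouched."""
    assert v_j not in adj[v_i], "(v_i, v_j) must be a non-edge"
    assert v_k in adj[v_i], "(v_i, v_k) must be an existing edge"
    new_adj = {v: set(nbrs) for v, nbrs in adj.items()}
    new_adj[v_i].add(v_j)
    new_adj[v_j].add(v_i)        # edge addition
    new_adj[v_i].discard(v_k)
    new_adj[v_k].discard(v_i)    # edge removal
    return new_adj

adj = {0: {1}, 1: {0, 2}, 2: {1}}        # path graph 0-1-2
adj2 = rewire(adj, v_i=0, v_j=2, v_k=1)  # add (0, 2), remove (0, 1)
```

Note that a rewiring leaves the degree of the base node ${v}_{i}$ unchanged: it gains the edge to ${v}_{j}$ and loses the edge to ${v}_{k}$.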
Since the size of the set of possible rewirings grows rapidly with the graph size, we cast this problem as a sequential decision-making process, which is detailed below.
### 3.2 MDP framework
We let every rewiring operation consist of three sub-steps: 1) base node selection; 2) node selection for edge addition; and 3) node selection for edge removal. We perform edge addition before edge removal to suppress potential disconnections of the graph. The rewiring procedure is illustrated in Figure 1. To reduce the size of the decision space, we model each sub-step of the rewiring operation as a separate timestep in the MDP. Its elements are defined as:
State. The state ${S}_{t}$ is the tuple ${S}_{t} = \left( {{G}_{t},{a}_{1},{a}_{2}}\right)$ , containing the graph ${G}_{t} = \left( {\mathcal{V},{\mathcal{E}}_{t}}\right)$ , the chosen base node ${a}_{1}$ , and the chosen addition node ${a}_{2}$ . The base node and addition node may be null $\left( \varnothing \right)$ depending on the rewiring operation sub-step.
Actions. We specify three distinct action spaces ${\mathcal{A}}_{\widehat{t}}\left( {S}_{t}\right)$, where $\widehat{t} \mathrel{\text{:=}} t \bmod 3$ denotes the sub-step within a rewiring operation. Letting ${k}_{v}$ denote the degree of node $v$, the action spaces are defined as:
$$
{\mathcal{A}}_{0}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,\varnothing ,\varnothing }\right) }\right) = \left\{ {v \in \mathcal{V} \mid 0 < {k}_{v} < \left| \mathcal{V}\right| - 1}\right\} , \tag{1}
$$
$$
{\mathcal{A}}_{1}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,{a}_{1},\varnothing }\right) }\right) = \left\{ {v \in \mathcal{V} \mid \left( {{a}_{1}, v}\right) \notin {\mathcal{E}}_{t}}\right\} , \tag{2}
$$
$$
{\mathcal{A}}_{2}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,{a}_{1},{a}_{2}}\right) }\right) = \left\{ {v \in \mathcal{V} \mid \left( {{a}_{1}, v}\right) \in {\mathcal{E}}_{t} \smallsetminus \left( {{a}_{1},{a}_{2}}\right) }\right\} . \tag{3}
$$
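A direct, hedged translation of the three action spaces into code might look as follows (the adjacency-dict representation and the exclusion of self-loops in ${\mathcal{A}}_{1}$ are assumptions):

```python
# Sketch of the sub-step action spaces from Equations (1)-(3) for an
# undirected graph stored as a dict of neighbour sets.

def actions_base(adj):
    # A_0: nodes with 0 < k_v < |V| - 1, i.e. that can both gain and lose an edge
    n = len(adj)
    return {v for v, nbrs in adj.items() if 0 < len(nbrs) < n - 1}

def actions_add(adj, a1):
    # A_1: nodes not yet adjacent to the base node a1 (self-loops excluded)
    return {v for v in adj if v != a1 and v not in adj[a1]}

def actions_remove(adj, a1, a2):
    # A_2: neighbours of a1, except the addition node a2 chosen in the
    # previous sub-step
    return adj[a1] - {a2}

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```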
Transitions. Transitions are deterministic; the model $P\left( {{S}_{t} = {s}^{\prime } \mid {S}_{t - 1} = s,{A}_{t - 1} = {a}_{t - 1}}\right)$ transitions to state ${S}^{\prime }$ with probability 1, where:
$$
{S}^{\prime } = \left\{ \begin{array}{lll} \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1}}\right) ,{a}_{1},\varnothing }\right) , & \text{ if }3 \mid t + 2 & \text{ mark base node } \\ \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1} \cup \left( {{a}_{1},{a}_{2}}\right) }\right) ,{a}_{1},{a}_{2}}\right) , & \text{ if }3 \mid t & \text{ mark addition node }\& \text{ add edge } \\ \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1} \smallsetminus \left( {{a}_{1},{a}_{3}}\right) }\right) ,\varnothing ,\varnothing }\right) , & \text{ if }3 \mid t + 1 & \text{ remove edge }\& \text{ reset marked nodes } \end{array}\right. \tag{4}
$$
Rewards. The reward signal ${R}_{t}$ is proportional to the difference in the value of the objective function $\mathcal{F}$ before and after the graph reconfiguration. Furthermore, a key operational constraint in the domain we consider is that the network remains connected after the rewiring operations. Instead of running connectivity algorithms at every timestep to determine whether removing a candidate edge disconnects the graph, we encourage maintaining connectivity by giving a penalty $\bar{r} < 0$ at the end of the episode if the graph becomes disconnected. All rewards and penalties are provided at the final timestep $T$, and no intermediate rewards are given. This enables the flexibility to discover long-term strategies that maximize the total cumulative reward of a sequence of reconfigurations rather than a single-step rewiring operation, even if the graph is disconnected during intermediate steps. Concretely, given an initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$, we define the reward function at timestep $t$ as:
$$
{R}_{t} = \left\{ \begin{array}{ll} {c}_{\mathcal{F}} \cdot \left( {\mathcal{F}\left( {G}_{t}\right) - \mathcal{F}\left( {G}_{0}\right) }\right) & \text{ if }t = T \land c\left( G\right) = 1, \\ \bar{r} & \text{ if }t = T \land c\left( G\right) \geq 2, \\ 0 & \text{ otherwise,} \end{array}\right. \tag{5}
$$
where $c\left( G\right)$ denotes the number of connected components of $G$ , and $\bar{r} < 0$ is the disconnection penalty. As the different objective functions may act on different scales, we use a reward scaling ${c}_{\mathcal{F}}$ , which we empirically establish for every objective function $\mathcal{F}$ .
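The terminal reward of Equation (5) can be sketched as follows; the DFS component count is standard, while the particular values of ${c}_{\mathcal{F}}$ and $\bar{r}$ are illustrative defaults:

```python
# Sketch of the terminal reward in Equation (5). num_components implements
# c(G) with an iterative DFS; c_F and the penalty r_bar are illustrative.

def num_components(adj):
    seen, c = set(), 0
    for s in adj:
        if s in seen:
            continue
        c += 1
        stack = [s]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v])
    return c

def reward(t, T, F_t, F_0, adj, c_F=1.0, r_bar=-1.0):
    if t < T:
        return 0.0                    # no intermediate rewards
    if num_components(adj) == 1:
        return c_F * (F_t - F_0)      # scaled improvement in the objective
    return r_bar                      # disconnection penalty
```

Deferring the penalty to the final timestep is what allows the agent to pass through disconnected intermediate graphs without being punished for them.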
## 4 Reinforcement learning representation and parametrization
In this section, we extend the graph representation and value function approximation parametrizations proposed in past work [14, 16] to the problem of graph rewiring.
### 4.1 Graph representation
As the state and action spaces in network reconfiguration quickly become intractable for a sequence of rewiring operations, we require a graph representation that generalizes over similar states and actions. To this end, we use a GNN architecture that is based on a mean field inference method [46]. More specifically, we use a variant of the structure2vec [13] embedding method to represent every node ${v}_{i} \in \mathcal{V}$ in a graph $G = \left( {\mathcal{V},\mathcal{E}}\right)$ by an embedding vector ${\mu }_{i}$ . This embedding vector is constructed in an iterative process by linearly transforming feature vectors ${x}_{i}$ with a set of weights $\left\{ {{\theta }^{\left( 1\right) },{\theta }^{\left( 2\right) }}\right\}$ , aggregating the ${x}_{i}$ with the feature vectors of neighboring nodes ${v}_{j} \in {\mathcal{N}}_{i}$ , then applying the nonlinear Rectified Linear Unit (ReLU) activation function. Hence, at every step $l \in \left( {1,2,\ldots , L}\right)$ , embedding vectors are updated according to:
$$
{\mu }_{i}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {{\theta }^{\left( 1\right) }{x}_{i} + {\theta }^{\left( 2\right) }\mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}}}{\mu }_{j}^{\left( l\right) }}\right) , \tag{6}
$$
where all embedding vectors are initialized as ${\mu }_{i}^{\left( 0\right) } = \mathbf{0}$ . After $L$ iterations of feature aggregation, we obtain the node embedding vectors ${\mu }_{i} \equiv {\mu }_{i}^{\left( L\right) }$ . By summing the embedding vectors of nodes in a graph $G$ , we obtain its permutation-invariant embedding: $\mu \left( G\right) = \mathop{\sum }\limits_{{i \in \mathcal{V}}}{\mu }_{i}$ . These invariant graph embeddings represent part of the state that the RL agent observes. Aside from permutation invariance, such embeddings allow learned models to be applied to graphs of different sizes, potentially larger than those seen during training.
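A minimal numpy sketch of the update in Equation (6) and the sum-pooled graph embedding; the weight shapes and feature values are illustrative, and nodes are assumed to be labelled $0, \ldots, n-1$:

```python
import numpy as np

# Numpy sketch of the structure2vec updates in Equation (6). The weight
# matrices theta1, theta2 and the node features X are illustrative; nodes
# are assumed to be labelled 0..n-1.

def embed(adj, X, theta1, theta2, L=3):
    """adj: {node: set of neighbours}, X: (n, d_in) features -> (n, d) embeddings."""
    n, d = X.shape[0], theta1.shape[0]
    mu = np.zeros((n, d))                                    # mu_i^(0) = 0
    for _ in range(L):
        # aggregate the embeddings of each node's neighbourhood N_i
        agg = np.stack([mu[list(adj[i])].sum(axis=0) for i in range(n)])
        mu = np.maximum(0.0, X @ theta1.T + agg @ theta2.T)  # ReLU
    return mu

def graph_embedding(mu):
    return mu.sum(axis=0)  # permutation-invariant sum pooling

adj = {0: {1}, 1: {0, 2}, 2: {1}}
X = np.ones((3, 2))
theta1 = 0.1 * np.ones((2, 2))  # illustrative weights
theta2 = 0.1 * np.ones((2, 2))
mu = embed(adj, X, theta1, theta2, L=2)
```

Summing over rows makes $\mu \left( G\right)$ independent of the node ordering, which is the permutation invariance discussed above.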
### 4.2 Value function approximation
Due to the intractable size of the state-action space in graph reconfiguration tasks, we make use of neural networks to learn approximations of the state-action values $Q\left( {s, a}\right)$ [47]. More specifically, as the action spaces defined in Equations (1)-(3) are discrete, we use the DQN algorithm [36] to update the state-action values as follows:
$$
Q\left( {s, a}\right) \leftarrow Q\left( {s, a}\right) + \alpha \left\lbrack {r + \gamma \mathop{\max }\limits_{{{a}^{\prime } \in \mathcal{A}}}Q\left( {{s}^{\prime },{a}^{\prime }}\right) - Q\left( {s, a}\right) }\right\rbrack . \tag{7}
$$
The DQN algorithm uses an experience replay buffer [33] from which it samples previously observed transitions $\left( {s, a, r,{s}^{\prime }}\right)$, and periodically synchronizes a target network with the parameters of the Q-network. The target network is used in the computation of the learning target for estimating the Q-value of the best action in the next timestep, making the learning more stable as its parameters are kept fixed between updates. We use three separate MLP parametrizations of the Q-function, each corresponding to one of the three sub-steps of the rewiring procedure:
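A tabular sketch of the update in Equation (7) together with a replay buffer; the dict-based Q-tables stand in for the neural parametrization purely for illustration, and the state names and values are hypothetical:

```python
import random
from collections import deque

# Tabular sketch of the Q-update in Equation (7) with experience replay and
# a target network. Dict Q-tables replace the neural parametrization purely
# for illustration; state names and values are hypothetical.

def td_update(Q, Q_target, s, a, r, s_next, next_actions, alpha=0.1, gamma=0.99):
    # the (periodically synchronized) target network provides the bootstrap value
    best_next = max((Q_target.get((s_next, a2), 0.0) for a2 in next_actions),
                    default=0.0)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * best_next - q)

buffer = deque(maxlen=10_000)            # experience replay buffer
buffer.append(("s0", 1, 0.5, "s1"))      # one stored transition (s, a, r, s')
s, a, r, s_next = random.choice(buffer)  # sample a "minibatch" of size 1
Q, Q_target = {}, {("s1", 0): 2.0}
td_update(Q, Q_target, s, a, r, s_next, next_actions=[0, 1])
```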
$$
{Q}_{1}\left( {{S}_{t} = \left( {{G}_{t},\varnothing ,\varnothing }\right) ,{A}_{t}}\right) = {\theta }^{\left( 3\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 4\right) }\left\lbrack {{\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8a}
$$
$$
{Q}_{2}\left( {{S}_{t} = \left( {{G}_{t},{a}_{1},\varnothing }\right) ,{A}_{t}}\right) = {\theta }^{\left( 5\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 6\right) }\left\lbrack {{\mu }_{{a}_{1}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8b}
$$
$$
{Q}_{3}\left( {{S}_{t} = \left( {{G}_{t},{a}_{1},{a}_{2}}\right) ,{A}_{t}}\right) = {\theta }^{\left( 7\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 8\right) }\left\lbrack {{\mu }_{{a}_{1}} \oplus {\mu }_{{a}_{2}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8c}
$$
where $\oplus$ denotes concatenation. We highlight that, since the underlying structure2vec parameters shown in Equation (6) are shared, the combined set of the learnable parameters in our model is $\Theta = {\left\{ {\theta }^{\left( i\right) }\right\} }_{i = 1}^{8}$ . During validation and test time, we derive a greedy policy from the above learned Q-functions as $\arg \mathop{\max }\limits_{{a \in {\mathcal{A}}_{t}}}Q\left( {s, a}\right)$ . During training, however, we use a linearly decaying $\epsilon$ -greedy behavioral policy. We refer the reader to Appendix B for a detailed description of our implementation.
## 5 Case study: network reconfiguration for intrusion defense
In this section, we detail the specifics of our intrusion defense application scenario. We first present the definition of the objective functions we leverage, which act as proxy metrics for the difficulty of navigating the graph. Secondly, we detail the procedure we use for simulating attacker behavior during an intrusion, which will allow us to compare the pre- and post-rewiring costs of traversal.
### 5.1 Objective functions for network obfuscation
Our goal is to reconfigure the network so as to deter an attacker with partial knowledge of the network topology. Equivalently, we seek to modify the network so as to increase its surprise and render this prior knowledge obsolete, while keeping the network operational. A natural formalization of surprise is the concept of entropy, which measures the quantity of information encoded in a graph or, equivalently, its complexity.
Figure 2: Illustrative example of the evaluation process for a network reconfiguration. (i) The graph is rewired by our approach, removing and adding the highlighted edges respectively. (ii) The leftmost nodes in the graph become unreachable by the attacker from the entry point marked E, and hence a path to them must be rediscovered by exploring the graph. (iii) To reach the nodes, the attacker pays a cost of 1 and 2 respectively for "unlocking" the previously unseen links along the highlighted paths. The total cost induced by the rewiring strategy is ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }} = 3$.

As measures of entropy, we investigate two graph quantities that are invariant to permutations in representation: the Shannon entropy of the degree distribution [2] and the Maximum Entropy Random Walk (MERW) [7] calculated from the spectrum of the adjacency matrix. The former captures the idea that graphs with heterogeneous degrees are less predictable than regular graphs, while the latter is related to random walks on the network. Whereas generic random walks generally do not maximize entropy [17], MERW uses a specific choice of transition probabilities that ensures every trajectory of fixed length is equiprobable, resulting in a maximal global entropy in the limit of infinite trajectory length. Although the local transition probabilities depend on the global structure of the graph, the generating process is local [7]. More formally, the two objective functions are formulated as follows: the Shannon entropy is defined as ${\mathcal{F}}_{\text{Shannon }}\left( G\right) = - \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}q\left( k\right) {\log }_{2}q\left( k\right)$, where $q\left( k\right)$ is the degree distribution; MERW is defined as ${\mathcal{F}}_{\text{MERW }}\left( G\right) = \ln \lambda$, where $\lambda$ is the largest eigenvalue of the adjacency matrix. In terms of time complexity, computing the Shannon entropy scales as $\mathcal{O}\left( n\right)$. The calculation of MERW has instead an $\mathcal{O}\left( {n}^{3}\right)$ complexity due to the eigendecomposition required to compute the spectrum of the adjacency matrix.
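Both objective functions can be sketched compactly; the adjacency-dict representation is an assumption, and `numpy.linalg.eigvalsh` performs the symmetric eigendecomposition:

```python
import math
from collections import Counter
import numpy as np

# Sketch of the two objectives on an adjacency-dict graph. Shannon entropy
# uses the empirical degree distribution q(k); MERW uses the largest
# eigenvalue of the (symmetric) adjacency matrix.

def f_shannon(adj):
    n = len(adj)
    counts = Counter(len(nbrs) for nbrs in adj.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def f_merw(adj):
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for v, nbrs in adj.items():
        for u in nbrs:
            A[idx[v], idx[u]] = 1.0
    lam = np.linalg.eigvalsh(A).max()  # O(n^3) eigendecomposition
    return math.log(lam)

tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # triangle: 2-regular
```

For a $k$-regular graph the degree distribution is a point mass, so ${\mathcal{F}}_{\text{Shannon}} = 0$ while ${\mathcal{F}}_{\text{MERW}} = \ln k$; the triangle above gives $0$ and $\ln 2$.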
It is worth noting that, in preliminary experiments, we have additionally investigated objective functions related to the Kolmogorov complexity. Also known as algorithmic complexity, this measure does not suffer from distributional dependencies [32]. As the Kolmogorov complexity is theoretically incomputable [10], we used graph compression algorithms such as bzip2 [12] and Block Decomposition Methods [52] to approximate the Kolmogorov complexity. However, as these approximations depend on the representation of the graph, such as its adjacency matrix, one has to consider many permutations of the graph representation. Compressing the representation for a sufficient number of permutations becomes infeasible even for small graphs. While the MERW objective function is also derived from the adjacency matrix through its largest eigenvalue, it does not suffer from this issue, as the spectrum of the adjacency matrix is invariant to permutations.
### 5.2 Simulating and evaluating attacker behavior
Given an initial connected and undirected graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$, we model the attacker as having entered the network through an arbitrary node $u \in \mathcal{V}$, and having built a local map ${\mathcal{M}}_{0}^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{0}^{u}}\right)$ around this entry point, where ${\mathcal{V}}^{u} \subset \mathcal{V}$ is the set of nodes and ${\mathcal{E}}_{0}^{u} \subset {\mathcal{E}}_{0}$ is the set of edges in the map. The rewiring procedure transforms the initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ to the graph ${G}_{ * } = \left( {\mathcal{V},{\mathcal{E}}_{ * }}\right)$, yielding the new local map ${\mathcal{M}}_{ * }^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{ * }^{u}}\right)$ that is unknown to the attacker. Our goal is to evaluate the effectiveness of the reconfiguration by measuring how "stale" the prior information of the attacker has become in comparison to the new map: if the attacker struggles to find its targets in the updated topology, the rewiring has succeeded.
Let $\overline{{\mathcal{V}}^{u}}$ denote the set of nodes in the new local map ${\mathcal{M}}_{ * }^{u}$ that can no longer be reached through any trajectory composed solely of original edges ${\mathcal{E}}_{0}^{u}$ in the old map. For each newly unreachable node ${v}_{i}$, we measure the cost ${\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ of finding it with a forward random walk, in which the random walker only returns to the previous node if the current node has no other outgoing links. Every time the random walker encounters a link that is (i) not included in ${\mathcal{E}}_{0}^{u}$ and (ii) not yet encountered during the random walk, the cost increases by one. This simulates the cost of having to explore the new graph topology due to the reconfigurations that were introduced. Finally, we let ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }} = \mathop{\sum }\limits_{{{v}_{i} \in \overline{{\mathcal{V}}^{u}}}}{\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ denote the sum of the costs over all newly unreachable nodes, which is our metric for the effectiveness of a rewiring strategy. An illustrative example of a forward random walk and cost evaluation is shown in Figure 2, and a formal description is presented in Algorithm 1 in Appendix B to aid reproducibility.
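The forward-random-walk cost admits a short sketch; this is a simplified reading of the procedure (Algorithm 1 in Appendix B is authoritative), and it assumes the entry point has at least one neighbour:

```python
import random

# Sketch of the forward-random-walk cost C_RW(v_i). The walker starts at the
# entry point, never steps straight back unless the current node is a dead
# end, and pays 1 the first time it crosses an edge absent from the old map.

def rw_cost(adj, entry, target, old_edges, rng):
    cost, seen_new = 0, set()
    prev, cur = None, entry          # assumes entry has at least one neighbour
    while cur != target:
        nbrs = [v for v in adj[cur] if v != prev] or [prev]  # forward bias
        nxt = rng.choice(nbrs)
        e = frozenset((cur, nxt))
        if e not in old_edges and e not in seen_new:
            seen_new.add(e)
            cost += 1                # "unlocking" a previously unseen link
        prev, cur = cur, nxt
    return cost

adj = {0: {1}, 1: {0, 2}, 2: {1}}   # rewired graph (path 0-1-2)
old = {frozenset((0, 1))}           # edges the attacker already knew
cost = rw_cost(adj, 0, 2, old, random.Random(0))
```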
## 6 Experiments
### 6.1 Experimental setup
Training and evaluation procedure. Our agent is trained on synthetic graphs of size $n = {30}$ that are generated using the graph models listed below. The given budget is ${15}\%$ of the total edges $m$ that are present in the initial graph. When performing the attacker simulations, the initial local map contains the subgraph induced by all nodes that are 2 hops away from the entry point, which is sampled without replacement from the node set. Training occurs separately for each graph model and objective $\mathcal{F}$ on a set of graphs ${\mathcal{G}}_{\text{train }}$ of size $\left| {\mathcal{G}}_{\text{train }}\right| = 6 \cdot {10}^{2}$ . Every 10 training steps, we measure the performance on a disjoint validation set ${\mathcal{G}}_{\text{validation }}$ of size $\left| {\mathcal{G}}_{\text{validation }}\right| = 2 \cdot {10}^{2}$ . We perform reconfiguration operations on a test set ${\mathcal{G}}_{\text{test }}$ of size $\left| {\mathcal{G}}_{\text{test }}\right| = {10}^{2}$ . To account for stochasticity, we train our models with 10 different seeds and present mean and confidence intervals accordingly. Further details about the experimental procedure (e.g., hyperparameter optimization) can be found in Appendix B.
Synthetic graphs. We evaluate the approaches on graphs generated by the following models:
Barabási-Albert (BA): A preferential attachment model where nodes joining the network are linked to $M$ nodes [5]. We consider values of ${M}_{ba} = 2$ and ${M}_{ba} = 1$ (abbreviated BA-2 and BA-1).
Watts-Strogatz (WS): A model that starts with a ring lattice of nodes with degree $k$ . Each edge is rewired to a random node with probability $p$ , yielding characteristically small shortest path lengths [48]. We use $k = 4$ and $p = {0.1}$ .
Erdős-Rényi (ER): A random graph model in which the existence of each edge is governed by a uniform probability $p$ [20]. We use $p = {0.15}$ .
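All three families are available through standard generators (e.g. networkx's `barabasi_albert_graph`, `watts_strogatz_graph` and `erdos_renyi_graph`); for illustration, a self-contained Erdős-Rényi sketch with the paper's $p = {0.15}$:

```python
import random

# Self-contained Erdos-Renyi G(n, p) sketch matching the paper's p = 0.15.
# In practice, the networkx generators (barabasi_albert_graph,
# watts_strogatz_graph, erdos_renyi_graph) cover all three families.

def erdos_renyi(n, p, rng):
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:     # each edge exists independently
                adj[u].add(v)
                adj[v].add(u)
    return adj

g = erdos_renyi(30, 0.15, random.Random(42))
```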
Real-world graphs. We also consider the real-world Unified Host and Network (UHN) dataset [45], which is a subset of network and host events from an enterprise network. We transform this dataset into a graph by identifying the bidirectional links between hosts appearing in these records, obtaining a graph with $n = {461}$ nodes and $m = {790}$ edges. Further information about this processing can be found in Appendix B.
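The construction can be sketched as below; the flow-record format of (source, destination) pairs is an assumption made for illustration, not the exact UHN schema:

```python
# Hedged sketch of the host-graph construction: the (source, destination)
# record format is an assumption for illustration, not the exact UHN schema.
# Only links observed in both directions are kept ("bidirectional").

def bidirectional_graph(flows):
    directed = set(flows)
    adj = {}
    for u, v in directed:
        if u != v and (v, u) in directed:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj

g = bidirectional_graph([("a", "b"), ("b", "a"), ("a", "c")])
```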
Baselines. We compare the approach against two baselines: Random, which acts in the same MDP as the agent but chooses actions uniformly, and Greedy, which is a shallow one-step search over all rewirings from a given configuration. The latter picks the rewiring that gives the largest improvement in $\mathcal{F}$ . As this search scales very poorly with graph size and budget, we only evaluate it on graphs of size 30 that are used to train the DQN as a comparison point for validating the learned strategies.
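The Greedy baseline admits a direct sketch; `greedy_step` enumerates every valid rewiring once, so its cost per operation grows roughly cubically with $n$, which is why it is only run on the $n = 30$ training graphs:

```python
# Sketch of the one-step Greedy baseline: enumerate every valid rewiring
# (v_i, v_j, v_k), evaluate F on each candidate graph, and keep the best.

def greedy_step(adj, F):
    base = F(adj)
    best, best_gain = None, float("-inf")
    for v_i in adj:
        for v_j in adj:
            if v_j == v_i or v_j in adj[v_i]:
                continue             # (v_i, v_j) must be a non-edge
            for v_k in adj[v_i]:
                cand = {v: set(nb) for v, nb in adj.items()}
                cand[v_i].add(v_j)
                cand[v_j].add(v_i)
                cand[v_i].discard(v_k)
                cand[v_k].discard(v_i)
                gain = F(cand) - base
                if gain > best_gain:
                    best, best_gain = cand, gain
    return best, best_gain

# toy objective for illustration: the maximum degree of the graph
best, gain = greedy_step({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}},
                         lambda a: max(len(s) for s in a.values()))
```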
### 6.2 Entropy maximization results
We first consider the results for the maximization of the entropy-based objectives. The gains in entropy obtained by the methods on the held-out test set are shown in Table 1, while training curves are presented in Appendix A. The results demonstrate that the approach discovers better reconfiguration strategies than random rewiring in all cases, and even outperforms the greedy search in one setting. Furthermore, we evaluate the out-of-distribution generalization properties of the learned models along two dimensions: varying the graph size $n \in \left\lbrack {{10},{300}}\right\rbrack$ and the budget $b$ as a percentage of existing edges in $\{ 5,{10},{15},{20},{25}\}$. The results for this experiment (from which Greedy is excluded due to poor scalability) are shown in Figure 3. We find that, with the exception of the (BA, ${\mathcal{F}}_{\text{Shannon }}$) combination, the learned models generalize well to substantially larger graphs as well as varying rewiring budgets.
Table 1: Entropy gains on test graphs with $n = {30}$ .
<table><tr><td>$\mathcal{F}$</td><td>${\mathcal{G}}_{\text{test}}$</td><td>DQN</td><td>Greedy</td><td>Random</td></tr><tr><td rowspan="4">$\Delta {\mathcal{F}}_{\text{MERW}}$</td><td>BA-2</td><td>${0.197}_{\pm 0.002}$</td><td>${0.225}_{\pm 0.003}$</td><td>$-{0.019}_{\pm 0.003}$</td></tr><tr><td>BA-1</td><td>${0.167}_{\pm 0.003}$</td><td>${0.135}_{\pm 0.003}$</td><td>$-{0.045}_{\pm 0.004}$</td></tr><tr><td>ER</td><td>${0.182}_{\pm 0.004}$</td><td>${0.209}_{\pm 0.012}$</td><td>$-{0.005}_{\pm 0.003}$</td></tr><tr><td>WS</td><td>${0.233}_{\pm 0.003}$</td><td>${0.298}_{\pm 0.002}$</td><td>${0.035}_{\pm 0.002}$</td></tr><tr><td rowspan="4">$\Delta {\mathcal{F}}_{\text{Shannon}}$</td><td>BA-2</td><td>${0.541}_{\pm 0.009}$</td><td>${0.724}_{\pm 0.015}$</td><td>${0.252}_{\pm 0.024}$</td></tr><tr><td>BA-1</td><td>${0.167}_{\pm 0.008}$</td><td>${0.242}_{\pm 0.012}$</td><td>${0.084}_{\pm 0.015}$</td></tr><tr><td>ER</td><td>${0.101}_{\pm 0.012}$</td><td>${0.400}_{\pm 0.023}$</td><td>$-{0.022}_{\pm 0.018}$</td></tr><tr><td>WS</td><td>${0.926}_{\pm 0.016}$</td><td>${1.116}_{\pm 0.022}$</td><td>${0.567}_{\pm 0.036}$</td></tr></table>
### 6.3 Evaluating the reconfiguration impact
We next evaluate the performance of the learned models for entropy maximization on the downstream task of disrupting the navigation of the graph by the attacker.
Figure 3: Evaluation of the out-of-distribution generalization performance (higher is better) of the learned entropy maximization models as a function of graph size (top) and budget size (bottom). All models are trained on graphs with $n = {30}$. In the bottom figure, the solid and dotted lines represent graphs with $n = {30}$ and $n = {100}$ respectively. Note the different x-axes used for ER graphs due to their high edge density.
Synthetic graphs. The results for synthetic graphs are shown in Figure 4 in an out-of-distribution setting as a function of graph size, a regime in which the Greedy baseline is too expensive to scale. We find that the best proxy metric varies with the class of synthetic graphs: Shannon entropy performs better for BA graphs, MERW performs better for ER, and performance is similar for WS. Strong out-of-distribution generalization performance is observed for 3 out of 4 synthetic graph models. The results also show that, in the case of WS graphs, even though the performance in terms of the metric itself is high (as shown in Figure 3), the objective is not a suitable proxy for the downstream task in an out-of-distribution setting, since the random walk cost decays rapidly. This might be explained by the fact that the graph topology is derived through a rewiring process of cliques of nodes of a given size.
Real-world graphs. We also evaluate the models trained on synthetic graphs on the real-world graph constructed from the UHN dataset. Results are shown in Table 2. All but one of the trained models maintain a statistically significant random walk cost difference over the Random baseline. The best-performing models were trained on the (WS, ${\mathcal{F}}_{MERW}$ ) and (BA-1, ${\mathcal{F}}_{\text{Shannon }}$ ) combinations, obtaining total gains in random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }}$ of ${136}\%$ and ${125}\%$ respectively. The Greedy baseline is not applicable for a graph of this size.
Figure 4: Evaluation of the learned rewiring strategies for entropy maximization on the downstream task of disrupting attacker navigation. All models are trained on graphs with $n = {30}$. The random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }}$ (higher is better) is normalized by $n$ for meaningful comparisons. Note the different x-axis used for ER graphs due to their high edge density.
## 7 Conclusion
Summary. In this work, we have addressed the problem of graph reconfiguration for the optimization of a given property of a networked system, a computationally challenging problem given the generally large decision space. We have then formulated it as a Markov Decision Process that treats rewirings as sequential, and proposed an approach based on deep reinforcement learning and graph neural networks for efficient learning of network reconfigurations. As a case study, we have applied the proposed method to a cybersecurity scenario in which the task is to disrupt the navigation of potential intruders in a computer network. We have assumed that the goal of the intruder is to navigate the network given some knowledge about its topology. In order to disrupt the attack, we have designed a mechanism for increasing the level of surprise of the network through entropy maximization by means of network rewiring. More specifically, in terms of the objective of the optimization process, we have considered two entropy metrics that quantify the predictability of the network topology, and demonstrated that our method generalizes well on unseen graphs with varying rewiring budgets and different numbers of nodes. We have also validated the effectiveness of the learned models for increasing path lengths towards targeted nodes. The proposed approach outperforms the considered baselines on both synthetic and real-world graphs.
Table 2: Total random walk cost of models applied to the real-world UHN graph $\left( {n = {461}, m = {790}}\right)$ .
<table><tr><td>Method</td><td>$\mathcal{F}$</td><td>${\mathcal{G}}_{\text{train}}$</td><td>${\mathcal{C}}_{\mathrm{RW}}^{\text{tot}}/n$ $(\uparrow)$</td></tr><tr><td rowspan="8">DQN</td><td rowspan="4">${\mathcal{F}}_{\text{MERW}}$</td><td>BA-2</td><td>${3.087}_{\pm 0.225}$</td></tr><tr><td>BA-1</td><td>${1.294}_{\pm 0.185}$</td></tr><tr><td>ER</td><td>${2.887}_{\pm 0.335}$</td></tr><tr><td>WS</td><td>${\mathbf{4.888}}_{\pm 0.568}$</td></tr><tr><td rowspan="4">${\mathcal{F}}_{\text{Shannon}}$</td><td>BA-2</td><td>${3.774}_{\pm 0.445}$</td></tr><tr><td>BA-1</td><td>${\mathbf{4.660}}_{\pm 0.461}$</td></tr><tr><td>ER</td><td>${3.891}_{\pm 0.559}$</td></tr><tr><td>WS</td><td>${3.555}_{\pm 0.318}$</td></tr><tr><td>Random</td><td>-</td><td>-</td><td>${2.071}_{\pm 0.289}$</td></tr><tr><td>Greedy</td><td>-</td><td>-</td><td>∞</td></tr></table>
|
| 196 |
+
|
| 197 |
+
Limitations and future work. An advantage of the proposed approach is that it does not require any knowledge of the exact position of the attacker as the traversal of the graph takes place. One may also consider a real-time scenario in which the network reconfiguration aims to "close off" the attacker given knowledge of their location, which may lead to a more efficient defense if such information is available. We have also adopted a simple model of attacker navigation (forward random walks). Different, more complex navigation strategies (e.g., targeting vulnerable machines) could also be considered. Such knowledge might be integrated into the training process, for example by increasing the probability of rewiring edges around these nodes through a corresponding reward structure (i.e., a higher reward for protecting more sensitive nodes). More generally, we have identified an important application to cybersecurity, which might have a positive impact on safeguarding networks from malicious intrusions. With respect to potential dual use, we note that the proposed defense mechanism cannot be exploited by attackers directly, since it requires knowledge of at least part of the underlying network topology.
## References

[1] Réka Albert and Albert-László Barabási. Statistical Mechanics of Complex Networks. Reviews of Modern Physics, 74:47-97, 2002.

[2] Kartik Anand and Ginestra Bianconi. Entropy measures for networks: Toward an information theory of complex topologies. Physical Review E, 80(4), 2009.

[3] Ross Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. John Wiley & Sons, 2020.

[4] Abdullah Aydeger, Nico Saputro, Kemal Akkaya, and Mohammed Rahman. Mitigating crossfire attacks using SDN-based moving target defense. In LCN, pages 627-630. IEEE, 2016.

[5] Albert-László Barabási and Réka Albert. Emergence of Scaling in Random Networks. Science, 286(5439):509-512, 1999.

[6] Alina Beygelzimer, Geoffrey Grinstein, Ralph Linsker, and Irina Rish. Improving Network Robustness by Edge Modification. Physica A: Statistical Mechanics and its Applications, 357(3-4):593-612, 2005.

[7] Zdzisław Burda, Jarek Duda, Jean-Marc Luck, and Bartłomiej Wacław. Localization of the Maximal Entropy Random Walk. Physical Review Letters, 102(16), 2009.

[8] Gui-lin Cai, Bao-sheng Wang, Wei Hu, and Tian-zuo Wang. Moving target defense: state of the art and characteristics. Frontiers of Information Technology & Electronic Engineering, 17(11):1122-1153, 2016.

[9] Thomas E. Carroll, Michael Crouse, Errin W. Fulp, and Kenneth S. Berenhaut. Analysis of network address shuffling as a moving target defense. In ICC, pages 701-706. IEEE, 2014.

[10] Gregory J. Chaitin. On the Length of Programs for Computing Finite Binary Sequences. Journal of the ACM, 13(4):547-569, 1966.

[11] Hau Chan and Leman Akoglu. Optimizing network robustness by edge rewiring: a general framework. Data Mining and Knowledge Discovery, 30(5):1395-1425, 2016.

[12] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, 2nd edition, 1991.

[13] Hanjun Dai, Bo Dai, and Le Song. Discriminative Embeddings of Latent Variable Models for Structured Data. In ICML, pages 3970-3986, 2016.

[14] Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial Attack on Graph Structured Data. In ICML, pages 1799-1808, 2018.

[15] George B. Dantzig, D. Ray Fulkerson, and Selmer Johnson. Solution of a large scale traveling salesman problem. Operations Research, pages 393-410, 1954.

[16] Victor-Alexandru Darvariu, Stephen Hailes, and Mirco Musolesi. Goal-directed graph construction using reinforcement learning. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 477(2254), 2021.

[17] Jarek Duda. From Maximal Entropy Random Walk to Quantum Thermodynamics. In Journal of Physics: Conference Series, volume 361, 2012.

[18] Jack Edmonds and Richard M. Karp. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. Journal of the ACM, 19(2):248-264, 1972.

[19] Sean Ekins, J. Dana Honeycutt, and James T. Metz. Evolving molecules using multi-objective optimization: Applying to ADME/Tox. Drug Discovery Today, 15(11-12):451-460, 2010.

[20] Paul Erdős and Alfréd Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci., 5(1):17-60, 1960.

[21] Arpita Ghosh and Stephen Boyd. Growing Well-connected Graphs. In Proceedings of the 45th IEEE Conference on Decision & Control, 2006.

[22] Xavier Glorot and Yoshua Bengio. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Journal of Machine Learning Research, volume 9, pages 249-256, 2010.

[23] Nils Goldbeck, Panagiotis Angeloudis, and Washington Y. Ochieng. Resilience assessment for interdependent urban infrastructure systems using dynamic network flow models. Reliability Engineering and System Safety, 188:62-79, 2019.

[24] Roger Guimerà, Stefano Mossa, Adrian Turtschi, and L. A. Nunes Amaral. The worldwide air transportation network: Anomalous centrality, community structure, and cities' global roles. Proceedings of the National Academy of Sciences, 102(22), 2005.

[25] Petter Holme, Beom Jun Kim, Chang No Yoon, and Seung Kee Han. Attack Vulnerability of Complex Networks. Physical Review E, 65(5), 2002.

[26] Keman Huang, Michael Siegel, and Stuart Madnick. Systematically Understanding the Cyber Attack Business: A Survey. ACM Computing Surveys, 51(4):1-36, 2018.

[27] Nwokedi Idika and Bharat Bhargava. A Kolmogorov Complexity Approach for Measuring Attack Path Complexity. In IFIP Advances in Information and Communication Technology, volume 354, pages 281-292, 2011.

[28] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, pages 448-456, 2015.

[29] Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30:595-608, 2016.

[30] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In NeurIPS, 2017.

[31] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015.

[32] Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Texts in Computer Science. Springer International Publishing, 2019.

[33] Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3):293-321, 1992.

[34] Yao Ma, Suhang Wang, Lingfei Wu, and Jiliang Tang. Attacking Graph Convolutional Networks via Rewiring. In ICLR, 2020.

[35] Madhav V. Marathe, Heinz Breu, Harry B. Hunt III, Shankar S. Ravi, and Daniel J. Rosenkrantz. Simple heuristics for unit disk graphs. Networks, 25(2):59-68, 1995.

[36] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

[37] Mark E. J. Newman. Networks. Oxford University Press, 2018.

[38] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In NeurIPS, volume 32, 2019.

[39] Douglas E. V. Pires, Tom L. Blundell, and David B. Ascher. pkCSM: Predicting small-molecule pharmacokinetic and toxicity properties using graph-based signatures. Journal of Medicinal Chemistry, 58(9):4066-4072, 2015.

[40] Tie Qiu, Jie Liu, Weisheng Si, and Dapeng Oliver Wu. Robustness optimization scheme with multi-population co-evolution for scale-free wireless sensor networks. IEEE/ACM Transactions on Networking, 27(3):1028-1042, 2019.

[41] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.

[42] Christian M. Schneider, André A. Moreira, José S. Andrade, Shlomo Havlin, and Hans J. Herrmann. Mitigation of Malicious Attacks on Networks. Proceedings of the National Academy of Sciences, 108(10):3838-3841, 2011.

[43] Bruce Schneier. Secrets and Lies: Digital Security in a Networked World. John Wiley & Sons, 2015.

[44] Sailik Sengupta, Ankur Chowdhary, Dijiang Huang, and Subbarao Kambhampati. Moving target defense for the placement of intrusion detection systems in the cloud. In International Conference on Decision and Game Theory for Security, pages 326-345. Springer, 2018.

[45] Melissa J. M. Turcotte, Alexander D. Kent, and Curtis Hash. Unified Host and Network Data Set, chapter 1, pages 1-22. World Scientific, 2018.

[46] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.

[47] Christopher J. C. H. Watkins and Peter Dayan. Q-Learning. Machine Learning, 8:279-292, 1992.

[48] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440, 1998.

[49] Tian Xie and Jeffrey C. Grossman. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Physical Review Letters, 120(14), 2018.

[50] Jiaxuan You, Bowen Liu, Rex Ying, Vijay Pande, and Jure Leskovec. Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation. In NeurIPS, volume 31, 2018.

[51] Kimberly Zeitz, Michael Cantrell, Randy Marchany, and Joseph Tront. Designing a micro-moving target IPv6 defense for the Internet of Things. In IoTDI, pages 179-184. IEEE, 2017.

[52] Hector Zenil, Santiago Hernández-Orozco, Narsis A. Kiani, Fernando Soler-Toscano, Antonio Rueda-Toicen, and Jesper Tegnér. A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity. Entropy, 20(8):605, 2018.

[53] Kang Zhao, Kevin Scheibe, Jennifer Blackhurst, and Akhil Kumar. Supply Chain Network Robustness Against Disruptions: Topological Analysis, Measurement, and Optimization. IEEE Transactions on Engineering Management, 66(1):127-139, 2018.
## A Additional results
Computational cost of Greedy baseline. To demonstrate the poor scalability of the Greedy baseline discussed in Section 6.1, we perform an additional experiment that measures the wall clock time taken by the different approaches to complete a sequence of rewirings. Results are shown in Figure 5 for Barabási-Albert graphs ($M_{ba} = 2$) as a function of graph size. Beyond graphs of size $n = 150$, we extrapolate by fitting polynomials of degree 5 and 4 for ${\mathcal{F}}_{\text{MERW}}$ and ${\mathcal{F}}_{\text{Shannon}}$, respectively.

Figure 5: Wall clock time needed to complete a sequence of rewirings by the Greedy and DQN methods on Barabási-Albert graphs $\left( {{M}_{ba} = 2}\right)$ with a rewiring budget of 15%.
The time needed for evaluating the Greedy baseline increases rapidly as the size of the graph grows, while the post-training DQN is very efficient from a computational point of view. Hence, it is not feasible to use the Greedy baseline beyond very small graphs, but it serves as a useful comparison point.
Learning curves. Learning curves are shown in Figure 6, which captures the performance on the held-out validation set ${\mathcal{G}}_{\text{validation }}$ . We note that in many cases (e.g., BA $/{\mathcal{F}}_{MERW}$ ) the performance averaged across all seeds is misleadingly low compared to the baselines, an artifact of the variability of the validation set performance. We also show the performance of the worst-performing seed (dotted) and best-performing seed (dashed) to clarify this.

Figure 6: MERW (upper half) and Shannon entropy (lower half) increase on the held-out validation set ${\mathcal{G}}_{\text{validation }}$ during training of the DQN algorithm. The dotted and dashed lines for the DQN algorithm represent the worst-performing and best-performing seeds respectively. Random and Greedy rewiring performance are shown for comparison. Graphs are of size $n = {30}$ and the rewiring budget is ${15}\%$ of the number of existing edges.
## B Implementation and training details
Codebase. The code for reproducing the results of this work will be made available in a future version. The DQN implementation we use is bootstrapped from the RNet-DQN codebase ${}^{1}$ in [16], which itself is based on the RL-S2V ${}^{2}$ implementation from [14] and S2V GNN ${}^{3}$ from [13]. Our neural network architecture is implemented with the deep learning library PyTorch [38].
Infrastructure and runtimes. Experiments were carried out on a cluster of 8 machines, each equipped with 2 Intel Xeon E5-2630 v3 processors and 128GB RAM. On this infrastructure, all experiments reported in this paper took approximately 8 days to complete.
MDP parameters. To improve numerical stability we scale the reward signals in Equation 5 by ${c}_{\mathcal{F}} = {10}^{1}$ for MERW-DQN and ${c}_{\mathcal{F}} = {10}^{2}$ for Shannon-DQN. We set the disconnection penalty ${\bar{r}}_{n} = - {10.0}$ . As we consider a finite horizon MDP, we set the discount factor $\gamma = 1$ .
Model architectures and hyperparameters. In all experiments, the same neural network architectures and hyperparameters are used in the three stages of the rewiring procedure as described in Section 3. The final MLPs described in Equation 8 contain a hidden layer of 128 units and a single-unit output layer representing the estimated state-action value. Batch normalization [28] is applied to the input of the final layer.
Table 3: Optimal initial learning rate ${\alpha }_{0}$ , message passing rounds $L$ and graph embedding dimension $\dim \left( {\mu }_{i}\right)$ found by a hyperparameter search.
| DQN | $\mathcal{G}$ | ${\alpha}_0$ $[10^{-4}]$ | $L$ | $\dim(\mu_i)$ |
|---|---|---|---|---|
| $\mathcal{F}_{\text{MERW}}$ | BA-2 | 5 | 3 | 128 |
| $\mathcal{F}_{\text{MERW}}$ | BA-1 | 5 | 6 | 128 |
| $\mathcal{F}_{\text{MERW}}$ | ER | 5 | 4 | 128 |
| $\mathcal{F}_{\text{MERW}}$ | WS | 10 | 6 | 128 |
| $\mathcal{F}_{\text{Shannon}}$ | BA-2 | 10 | 3 | 64 |
| $\mathcal{F}_{\text{Shannon}}$ | BA-1 | 5 | 6 | 64 |
| $\mathcal{F}_{\text{Shannon}}$ | ER | 1 | 4 | 64 |
| $\mathcal{F}_{\text{Shannon}}$ | WS | 10 | 6 | 64 |
We performed an initial hyperparameter grid search on BA-2 graphs over the following search space: the initial learning rate ${\alpha }_{0} \in \{ 5,{10},{50}\} \cdot {10}^{-4}$ for MERW-DQN and ${\alpha }_{0} \in \{ 1,5,{10}\} \cdot {10}^{-4}$ for Shannon-DQN; the number of message-passing rounds $L \in \{ 3,4\}$; the latent dimension of the graph embedding $\dim \left( {\mu }_{i}\right) \in \{ {32},{64},{128}\}$. Due to computational budget constraints, for BA-1, ER and WS graphs, we only performed a hyperparameter search for the initial learning rate ${\alpha }_{0}$ over the same values as for BA-2 graphs, while setting the number of message-passing rounds equal to the graph diameter ($L = D$) and bootstrapping the latent dimension from the hyperparameter search on BA-2 graphs. Table 3 presents an overview of the optimal hyperparameter values used for the results presented in the paper.
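For concreteness, the MERW-DQN grid described above spans $3 \times 2 \times 3 = 18$ configurations. A sketch using `itertools.product` (the training and validation routine itself is omitted; variable names are our own):

```python
import itertools

# Hyperparameter grid for MERW-DQN on BA-2 graphs, as described in the text.
alpha_0 = [5e-4, 10e-4, 50e-4]   # initial learning rates
message_rounds = [3, 4]          # message-passing rounds L
embed_dims = [32, 64, 128]       # graph embedding dimension dim(mu_i)

grid = list(itertools.product(alpha_0, message_rounds, embed_dims))
print(len(grid))  # → 18
```

Each configuration would then be trained and scored on the held-out validation set, with the best-scoring triple reported in Table 3.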
Training details. We train the models for 120,000 steps, and let the exploration parameter $\varepsilon$ decay linearly from $\varepsilon = {1.0}$ to $\varepsilon = {0.1}$ over the first 40,000 training steps, after which it is kept constant. The network parameters are initialized using Glorot initialization [22] and updated using the Adam optimizer [31]. We use a batch size of 50 graphs. The replay memory contains 12,000 instances and replaces the oldest entry when adding a new transition. The target network parameters are updated every 50 training steps.
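The exploration schedule described above amounts to a simple piecewise-linear function. A sketch, with the constants taken from the text (the function name is our own):

```python
def epsilon(step, eps_start=1.0, eps_end=0.1, decay_steps=40_000):
    """Linear epsilon schedule: decays from eps_start to eps_end over the
    first `decay_steps` training steps, then stays constant."""
    if step >= decay_steps:
        return eps_end
    return eps_start + (eps_end - eps_start) * step / decay_steps

print(epsilon(0))        # → 1.0
print(epsilon(20_000))   # → 0.55
print(epsilon(120_000))  # → 0.1
```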
Graphs. The real-world UHN dataset [45] contains the network events from day 2 of the approximately 90 days of events collected from the Los Alamos National Laboratory enterprise network, and is pre-processed as follows. Firstly, we build a directed graph in which nodes represent unique hosts in the data set and directed links are constructed from the events between the hosts. Secondly, we filter the graph by removing all unidirectional links, transform the graph to be undirected, and keep only the largest connected component. Thirdly, we exclude nodes that have many single-degree neighbors, such as email servers, and furthermore only retain nodes with degree $\leq {80}$. The graph obtained by this procedure is illustrated in Figure 7. We additionally note that, in all downstream experiments, graphs that become disconnected after rewiring are not considered in any of the evaluations.
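The filtering pipeline can be sketched with plain adjacency sets. This is an approximation of the described steps, not the authors' actual code: the hub-removal step (nodes with many single-degree neighbors) is omitted for brevity, and only the mutual-link filter, largest-component extraction, and degree cutoff are shown.

```python
from collections import deque

def preprocess(directed_edges, max_degree=80):
    """Sketch of the UHN preprocessing: keep only mutual (bidirectional)
    links, take the largest connected component of the resulting
    undirected graph, then drop nodes with degree > max_degree."""
    directed = set(directed_edges)
    # 1. keep an undirected edge only if both directions were observed
    undirected = {(u, v) for (u, v) in directed
                  if (v, u) in directed and u < v}
    adj = {}
    for u, v in undirected:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # 2. largest connected component via BFS
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in comp:
                    comp.add(nb)
                    queue.append(nb)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    # 3. degree filter within the component
    keep = {v for v in best if len(adj[v] & best) <= max_degree}
    return {(u, v) for (u, v) in undirected if u in keep and v in keep}

edges = [(0, 1), (1, 0), (1, 2), (2, 1), (3, 4)]  # (3, 4) is unidirectional
print(sorted(preprocess(edges)))  # → [(0, 1), (1, 2)]
```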
Reconfiguration impact evaluation. The algorithm we use for measuring the random walk cost ${\mathcal{C}}_{RW}$ induced by a sequence of rewirings is shown in Algorithm 1. We sample without replacement ${N}_{\text{synthetic }} = \min \{ n,{30}\}$ and ${N}_{\mathrm{{UHN}}} = n$ entry nodes for synthetic graphs and the UHN graph,
---

${}^{1}$ https://github.com/VictorDarvariu/graph-construction-rl

${}^{2}$ https://github.com/Hanjun-Dai/graph_adversarial_attack

${}^{3}$ https://github.com/Hanjun-Dai/pytorch_structure2vec

---

Figure 7: The graph derived from the Unified Host and Network (UHN) data set. It contains $n = {461}$ nodes, $m = {790}$ edges, and has a diameter $D = {18}$ .
respectively. After rewiring, we identify the target nodes that can no longer be reached via trajectories composed solely of edges from the old map. We then perform a single random walk per such missing target node, as described in Section 5.2 and Algorithm 1.
Algorithm 1: Random walk cost evaluation

---

**Data:** $G_*(\mathcal{V}, \mathcal{E}_*)$; $u, v_i \in \mathcal{V}$; $E_0^u \subset \mathcal{E}_0$ &nbsp;// $u$ and $v_i$ are the entry and target node, respectively

$\mathcal{C}_{\mathrm{RW}} \leftarrow 0$;
$\mathcal{E}_{\text{visited}} \leftarrow \{(v_j, v_k) \in E_0^u \;\forall\, j, k\}$;
$v_{t-1}, v_t \leftarrow u$; &nbsp;// $v_{t-1}$ and $v_t$ are the previous and current position, respectively
$v_{t+1} \leftarrow \mathcal{U}(\mathcal{N}_u)$; &nbsp;// $v_{t+1}$ is the next position
**while** $v_{t+1} \neq v_i$ **do**
&nbsp;&nbsp;$e_t \leftarrow (v_t, v_{t+1})$;
&nbsp;&nbsp;**if** $e_t \notin \mathcal{E}_{\text{visited}}$ **then**
&nbsp;&nbsp;&nbsp;&nbsp;$\mathcal{C}_{\mathrm{RW}} \leftarrow \mathcal{C}_{\mathrm{RW}} + 1$;
&nbsp;&nbsp;&nbsp;&nbsp;add $e_t$ to $\mathcal{E}_{\text{visited}}$;
&nbsp;&nbsp;**end**
&nbsp;&nbsp;**if** $k_{v_{t+1}} = 1$ **then**
&nbsp;&nbsp;&nbsp;&nbsp;$v_{t-1} \leftarrow v_{t+1}$; &nbsp;// reverse the random walk at a dead end
&nbsp;&nbsp;**else**
&nbsp;&nbsp;&nbsp;&nbsp;$v_{t-1} \leftarrow v_t$;
&nbsp;&nbsp;&nbsp;&nbsp;$v_t \leftarrow v_{t+1}$;
&nbsp;&nbsp;**end**
&nbsp;&nbsp;$v_{t+1} \leftarrow \mathcal{U}(\mathcal{N}_{v_t} \smallsetminus \{v_{t-1}\})$; &nbsp;// choose the next node uniformly at random
**end**
$e_t \leftarrow (v_t, v_{t+1})$;
**if** $e_t \notin \mathcal{E}_{\text{visited}}$ **then**
&nbsp;&nbsp;$\mathcal{C}_{\mathrm{RW}} \leftarrow \mathcal{C}_{\mathrm{RW}} + 1$;
**end**

---
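Algorithm 1 can be transcribed almost line-for-line into Python. The sketch below assumes the graph is given as an adjacency dictionary and that the attacker's prior map is the edge set `known_edges`; the function name and the `max_steps` safety cap are our own additions, not part of the paper.

```python
import random

def random_walk_cost(adj, entry, target, known_edges,
                     rng=None, max_steps=100_000):
    """Count how many distinct edges the attacker must discover (i.e.,
    edges outside its pre-rewiring map `known_edges`) while random-walking
    from `entry` to `target`. The walk never immediately backtracks,
    except when it reaches a degree-one dead end. `adj` maps each node to
    a set of neighbors; edges are stored as unordered frozensets."""
    rng = rng or random.Random()
    visited = {frozenset(e) for e in known_edges}
    cost = 0
    prev, cur = entry, entry
    nxt = rng.choice(sorted(adj[entry]))
    for _ in range(max_steps):
        if nxt == target:
            break
        edge = frozenset((cur, nxt))
        if edge not in visited:        # newly discovered edge: pay 1
            cost += 1
            visited.add(edge)
        if len(adj[nxt]) == 1:         # dead end: reverse the walk
            prev = nxt
        else:
            prev, cur = cur, nxt
        nxt = rng.choice(sorted(adj[cur] - {prev}))
    edge = frozenset((cur, nxt))       # final edge into the target
    if edge not in visited:
        cost += 1
    return cost

# Path graph 0-1-2: walking from 0 to 2 discovers both edges.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
print(random_walk_cost(adj, entry=0, target=2, known_edges=[]))  # → 2
```

With both edges already in the attacker's map, the same walk has cost 0, which is why rewiring (invalidating the old map) drives the cost up.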
papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ DYNAMIC NETWORK RECONFIGURATION FOR ENTROPY MAXIMIZATION USING DEEP REINFORCEMENT LEARNING
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
A key problem in network theory is how to reconfigure a graph in order to optimize a quantifiable objective. Given the ubiquity of networked systems, such work has broad practical applications in a variety of situations, ranging from drug and material design to telecommunications. The large decision space of possible reconfigurations, however, makes this problem computationally intensive. In this paper, we cast the problem of network rewiring for optimizing a specified structural property as a Markov Decision Process (MDP), in which a decision-maker is given a budget of modifications that are performed sequentially. We then propose a general approach based on the Deep Q-Network (DQN) algorithm and graph neural networks (GNNs) that can efficiently learn strategies for rewiring networks. We then discuss a cybersecurity case study, i.e., an application to the computer network reconfiguration problem for intrusion protection. In a typical scenario, an attacker might have a (partial) map of the system they plan to penetrate; if the network is effectively "scrambled", they would not be able to navigate it since their prior knowledge would become obsolete. This can be viewed as an entropy maximization problem, in which the goal is to increase the surprise of the network. Indeed, entropy acts as a proxy measurement of the difficulty of navigating the network topology. We demonstrate the general ability of the proposed method to obtain better entropy gains than random rewiring on synthetic and real-world graphs while being computationally inexpensive, as well as being able to generalize to larger graphs than those seen during training. Simulations of attack scenarios confirm the effectiveness of the learned rewiring strategies.
§ 1 INTRODUCTION
A key problem in network theory is how to rewire a graph in order to optimize a given quantifiable objective. Addressing this problem might have applications in several domains, given that several systems of practical interest can be represented as graphs [23, 24, 29, 49, 50]. A large body of literature studies how to construct and design networks in order to optimize some quantifiable goal, such as robustness in supply chain and wireless sensor networks [40, 53] or ADME properties of molecules [19, 39]. Given the intractable number of distinct configurations of even relatively small networks, optimizing these structural and topological properties is generally a non-trivial task that has been approached from various angles in graph theory [15, 18] and also studied from heuristic perspectives [21, 35]. Exact solutions are too computationally expensive to obtain, and heuristic methods are generally sub-optimal and do not generalize well to unseen instances.
The adoption of graph neural networks (GNNs) [41] and deep reinforcement learning (RL) [36] techniques has led to promising approaches to the problem of optimizing graph processes or structure [14, 16, 30]. A fundamental structural modification is rewiring, in which edges (e.g., links in a computer network) are reconfigured such that the topology is changed while their total number remains constant. To the best of our knowledge, the problem of rewiring to optimize a structural property has not been studied in the literature.
In this paper, we present a solution to the network rewiring problem for optimizing a specified structural property. We formulate this task as a Markov Decision Process (MDP), in which a decision-maker is given a budget of rewiring operations that are performed sequentially. We then propose an approach based on the Deep Q-Network (DQN) algorithm and GNNs that can efficiently learn strategies for rewiring networks. We evaluate the method by means of a realistic cybersecurity case study. In particular, we assume a scenario in which an attacker has entered a computer network and aims to reach a particular node of interest. We also assume that the attacker has partial knowledge of the underlying graph topology, which is used to reach a given target inside the network. The goal is to learn a rewiring process for modifying the structure of the graph so as to disrupt the capability of the attacker to reach its target, all the while keeping the network operational. This can be seen as an example of moving target defense (MTD) [8]. We frame the solution as an entropy maximization problem, in which the goal is to increase the surprise of the network in order to disrupt the navigation of the attacker inside it. Indeed, entropy acts as a proxy measurement of the difficulty of this task, with an increase in entropy corresponding to an increase in its difficulty. In particular, we consider two measures of network entropy, namely Shannon entropy and Maximal Entropy Random Walk (MERW) entropy, and we compare their effectiveness.
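For a connected, non-bipartite undirected graph, the entropy rate of the Maximal Entropy Random Walk equals $\ln \lambda_{\max}$, the logarithm of the largest eigenvalue of the adjacency matrix [7]. A minimal pure-Python estimate via power iteration (an illustrative sketch, not the paper's implementation):

```python
import math

def merw_entropy(adj_list, iters=1000):
    """Entropy rate of the MERW on a connected, non-bipartite undirected
    graph: ln(lambda_max), where lambda_max is the largest eigenvalue of
    the adjacency matrix, estimated here by power iteration.
    `adj_list` maps node -> iterable of neighbors."""
    nodes = sorted(adj_list)
    x = {v: 1.0 for v in nodes}
    lam = 1.0
    for _ in range(iters):
        # multiply by the adjacency matrix, then renormalize by max-norm
        y = {v: sum(x[u] for u in adj_list[v]) for v in nodes}
        lam = max(abs(val) for val in y.values())
        x = {v: val / lam for v, val in y.items()}
    return math.log(lam)

# Complete graph K4: lambda_max = 3, so the MERW entropy rate is ln 3.
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
print(round(merw_entropy(k4), 4))  # → 1.0986
```

Rewirings that raise $\lambda_{\max}$ therefore raise the MERW entropy, making the walk's trajectory distribution less predictable.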
More specifically, the contributions of this paper can be summarized as follows:
* We formulate the problem of graph rewiring so as to maximize a global structural property as an MDP, in which a central decision-maker is given a certain budget of rewiring operations that are performed sequentially. We formulate an approach that combines GNN architectures and the DQN algorithm to learn an optimal set of rewiring actions by trial-and-error;
* We present an extensive case study of the proposed approach in the context of defense against network intrusion by an attacker. We show that our method is able to obtain better gains in entropy than random rewiring, while scaling to larger networks than a local greedy search, and generalizing to larger out-of-distribution graphs in some cases. Furthermore, we demonstrate the effectiveness of this approach by simulating the movement of an attacker in the network, finding that indeed the applied modifications increase the difficulty for the attacker to reach its targets in both synthetic and real-world graph topologies.
§ 2 RELATED WORK
RL for graph reconfiguration. Recently, an increasing amount of research has been conducted on the use of reinforcement learning for graph reconfiguration. In particular, [14] presents a solution based on reinforcement learning for modifying graphs with the aim of attacking both node and graph classification. In addition, the authors briefly introduce a defense method using adversarial training and edge removal, which slightly decreases the attack rate on their proposed classifier by $1\%$. This defense strategy is, however, only effective against the attack strategy it is trained on and does not generalize. The authors of [34], in contrast, use a reinforcement learning approach to learn an attack strategy for neural network classifiers of graph topologies based on edge rewiring, and show that they are able to achieve misclassification with changes that are less noticeable compared to edge and vertex removal and addition. Our paper focuses on a different problem that does not involve classification tasks, but rather the maximization of a given network objective function. In [16], reinforcement learning techniques are applied to the problem of optimizing the robustness of a graph by means of graph construction; the authors show that their proposed method is able to outperform existing techniques and generalize to different graphs. In the present work, we optimize a global structural property through rewiring instead of constructing a graph through edge addition.
Graph robustness and attacks. A related research area is the optimization of graph robustness [37], i.e., the capacity of a graph to withstand targeted attacks and random failures. [42] demonstrates how small changes in complex networks such as an electricity system or the Internet can improve their robustness against malicious attacks. [6] investigates several heuristic reconfiguration techniques that aim to improve graph robustness without substantially modifying the network structure, and finds that preferential rewiring is superior to random rewiring. The authors of [11] extend this study to a framework that can accommodate multiple rewiring strategies and objectives. Several works have used information-based complexity metrics in the context of network defense or attack strategies: [27] proposes a network security metric that assesses network vulnerability by measuring the Kolmogorov complexity of effective attack paths; the underlying reasoning is that the more complex attack paths must be in order to harm a network, the less vulnerable the network is to external attacks. Furthermore, [25] investigates the vulnerability of complex networks, finding that attacks based on edge and vertex removal are substantially more effective when the network properties are recomputed after each attack.
Figure 1: Illustrative example of the MDP timesteps comprising a single rewiring operation. The agent observes an initial state ${S}_{0} = \left( {{G}_{0},\varnothing ,\varnothing }\right)$ (first panel), from which it then selects a base node ${v}_{1} = 1$ that will be rewired (second panel). Given the new state that contains the initial graph and the selected base node, the agent selects a target node ${v}_{2} = 5$ to which an edge will be added (third panel). Finally, a third node ${v}_{3} = 0$ is selected from the neighborhood of ${v}_{1}$ and the corresponding edge is removed (last panel). After a sequence of $b$ rewiring operations, the agent will receive a reward proportional to the improvement in the objective function $\mathcal{F}$ .
Cybersecurity and network defense. In the last decade, and in recent years in particular, a drastic surge in cyberattacks on governmental and industrial organizations has exposed the imminent vulnerability of global society to cyberthreats [43]. The targeted digital systems are generally structured as a network in which entities communicate and share resources among each other. Typically, attackers seek to gain unauthorized access to the underlying network through an entry point and search for highly valuable nodes in order to infect these digital systems with malicious software such as viruses, ransomware and spyware [3], enabling them to extract sensitive information or control the functioning of the network [26]. Moving target defense (MTD) is a cybersecurity defense technique by which a network and the underlying software are dynamically changed to counteract attack strategies [4, 8, 9, 44, 51]. Most existing MTD techniques involve NP-hard problems, and approximate or heuristic solutions are often impractical [8]. We note that while most studies are tied to specific software architectures, which prevents them from being applied effectively to large-scale deployments, in this work we focus on modeling this problem from an abstract, infrastructure-agnostic perspective.
§ 3 GRAPH REWIRING AS AN MDP
§ 3.1 PROBLEM STATEMENT
We define a graph (network) as $G = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\}$ is the set of $n = \left| \mathcal{V}\right|$ vertices (nodes) and $\mathcal{E} = \left\{ {{e}_{1},\ldots ,{e}_{m}}\right\}$ is the set of $m = \left| \mathcal{E}\right|$ edges (links). A rewiring operation $\gamma \left( {G,{v}_{i},{v}_{j},{v}_{k}}\right)$ transforms the graph $G$ by adding the non-edge $\left( {{v}_{i},{v}_{j}}\right)$ and removing the existing edge $\left( {{v}_{i},{v}_{k}}\right)$ ; we denote the set of all such operations by $\Gamma$ . Given a budget $b \propto m$ of rewiring operations, and a global objective function $\mathcal{F}\left( G\right)$ to be maximized, the goal is to find the set of unique rewiring operations out of ${\Gamma }^{b}$ such that the resulting graph ${G}^{\prime }$ maximizes $\mathcal{F}\left( {G}^{\prime }\right)$ .
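As an illustration, a single rewiring operation $\gamma$ can be sketched in a few lines. We use networkx here purely for convenience; the function name and toy graph are ours, not taken from the paper's implementation:

```python
import networkx as nx

def rewire(G, v_i, v_j, v_k):
    """Apply one rewiring operation gamma(G, v_i, v_j, v_k):
    add the non-edge (v_i, v_j) and remove the existing edge (v_i, v_k)."""
    assert not G.has_edge(v_i, v_j), "(v_i, v_j) must be a non-edge"
    assert G.has_edge(v_i, v_k), "(v_i, v_k) must be an existing edge"
    H = G.copy()                 # leave the input graph untouched
    H.add_edge(v_i, v_j)
    H.remove_edge(v_i, v_k)
    return H

# Toy example: path graph 0-1-2-3; rewire node 1's edge away from 0, towards 3.
G = nx.path_graph(4)
H = rewire(G, 1, 3, 0)
print(sorted(H.edges()))  # [(1, 2), (1, 3), (2, 3)]
```

Note that a rewiring operation preserves the edge count $m$ , which is why the budget $b$ is expressed in rewirings rather than edge insertions.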
Since the size of the set of possible rewirings grows rapidly with the graph size, we cast this problem as a sequential decision-making process, which is detailed below.
§ 3.2 MDP FRAMEWORK
We let every rewiring operation consist of three sub-steps: 1) base node selection; 2) node selection for edge addition; and 3) node selection for edge removal. We perform edge addition before edge removal to suppress potential disconnections of the graph. The rewiring procedure is illustrated in Figure 1. To reduce the size of the decision space, we model each sub-step of the rewiring operation as a separate timestep in the MDP itself. Its elements are defined as:
State. The state ${S}_{t}$ is the tuple ${S}_{t} = \left( {{G}_{t},{a}_{1},{a}_{2}}\right)$ , containing the graph ${G}_{t} = \left( {\mathcal{V},{\mathcal{E}}_{t}}\right)$ , the chosen base node ${a}_{1}$ , and the chosen addition node ${a}_{2}$ . The base node and addition node may be null $\left( \varnothing \right)$ depending on the rewiring operation sub-step.
Actions. We specify three distinct action spaces ${\mathcal{A}}_{\widehat{t}}\left( {S}_{t}\right)$ , where $\widehat{t} \mathrel{\text{ := }} t \bmod 3$ denotes the sub-step within a rewiring operation. Letting ${k}_{v}$ denote the degree of node $v$ , they are defined as:
$$
{\mathcal{A}}_{0}\left( {S}_{t} = \left( \left( \mathcal{V},{\mathcal{E}}_{t}\right) ,\varnothing ,\varnothing \right) \right) = \left\{ v \in \mathcal{V} \mid 0 < {k}_{v} < \left| \mathcal{V}\right| - 1\right\} , \tag{1}
$$
$$
{\mathcal{A}}_{1}\left( {S}_{t} = \left( \left( \mathcal{V},{\mathcal{E}}_{t}\right) ,{a}_{1},\varnothing \right) \right) = \left\{ v \in \mathcal{V} \mid \left( {a}_{1},v\right) \notin {\mathcal{E}}_{t}\right\} , \tag{2}
$$
$$
{\mathcal{A}}_{2}\left( {S}_{t} = \left( \left( \mathcal{V},{\mathcal{E}}_{t}\right) ,{a}_{1},{a}_{2}\right) \right) = \left\{ v \in \mathcal{V} \mid \left( {a}_{1},v\right) \in {\mathcal{E}}_{t} \smallsetminus \left\{ \left( {a}_{1},{a}_{2}\right) \right\} \right\} . \tag{3}
$$
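A minimal sketch of the three action spaces above, assuming a networkx graph and inferring the sub-step from which nodes have already been chosen (the function name and this dispatch convention are our own; self-loops are excluded in the second case, as in practice):

```python
import networkx as nx

def action_space(G, a1=None, a2=None):
    """Valid actions for the current rewiring sub-step, following Eqs. (1)-(3).
    The sub-step is inferred from which of (a1, a2) have already been chosen."""
    n = G.number_of_nodes()
    if a1 is None:       # Eq. (1): base nodes that can both gain and lose an edge
        return {v for v in G if 0 < G.degree(v) < n - 1}
    if a2 is None:       # Eq. (2): any node not already adjacent to a1
        return {v for v in G if v != a1 and not G.has_edge(a1, v)}
    # Eq. (3): neighbours of a1, excluding the freshly added edge (a1, a2)
    return set(G.neighbors(a1)) - {a2}

G = nx.path_graph(4)                 # edges: 0-1, 1-2, 2-3
print(action_space(G))               # {0, 1, 2, 3}: all degrees lie in (0, 3)
print(action_space(G, a1=1))         # {3}: the only non-neighbour of node 1
print(action_space(G, a1=1, a2=3))   # {0, 2}: removable edges at node 1
```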
Transitions. Transitions are deterministic; the model $P\left( {{S}_{t} = {s}^{\prime } \mid {S}_{t - 1} = s,{A}_{t - 1} = {a}_{t - 1}}\right)$ transitions to state ${S}^{\prime }$ with probability 1, where:
$$
{S}^{\prime } = \left\{ \begin{array}{lll} \left( \left( \mathcal{V},{\mathcal{E}}_{t - 1}\right) ,{a}_{1},\varnothing \right) , & \text{if } 3 \mid t + 2 & \text{(mark base node)} \\ \left( \left( \mathcal{V},{\mathcal{E}}_{t - 1} \cup \left\{ \left( {a}_{1},{a}_{2}\right) \right\} \right) ,{a}_{1},{a}_{2}\right) , & \text{if } 3 \mid t & \text{(mark addition node \& add edge)} \\ \left( \left( \mathcal{V},{\mathcal{E}}_{t - 1} \smallsetminus \left\{ \left( {a}_{1},{a}_{3}\right) \right\} \right) ,\varnothing ,\varnothing \right) , & \text{if } 3 \mid t + 1 & \text{(remove edge \& reset marked nodes)} \end{array}\right. \tag{4}
$$
Rewards. The reward signal ${R}_{t}$ is proportional to the difference in the value of the objective function $\mathcal{F}$ before and after the graph reconfiguration. Furthermore, a key operational constraint in the domain we consider is that the network remains connected after the rewiring operations. Instead of running connectivity algorithms at every time-step to determine if a potential removed edge disconnects the graph, we encourage maintaining connectivity by giving a penalty $\bar{r} < 0$ at the end of the episode if the graph becomes disconnected. All rewards and penalties are provided at the final timestep $T$ , and no intermediate rewards are given. This enables the flexibility to discover long-term strategies that maximize the total cumulative reward of a sequence of reconfigurations rather than a single-step rewiring operation, even if the graph is disconnected during intermediate steps. Concretely, given an initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ , we define the reward function at timestep $t$ as:
$$
{R}_{t} = \left\{ \begin{array}{ll} {c}_{\mathcal{F}} \cdot \left( \mathcal{F}\left( {G}_{t}\right) - \mathcal{F}\left( {G}_{0}\right) \right) & \text{if } t = T \land c\left( G\right) = 1, \\ \bar{r} & \text{if } t = T \land c\left( G\right) \geq 2, \\ 0 & \text{otherwise,} \end{array}\right. \tag{5}
$$
where $c\left( G\right)$ denotes the number of connected components of $G$ , and $\bar{r} < 0$ is the disconnection penalty. As the different objective functions may act on different scales, we use a reward scaling ${c}_{\mathcal{F}}$ , which we empirically establish for every objective function $\mathcal{F}$ .
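The reward of Equation (5) can be sketched as follows, with a toy edge-count objective standing in for the entropy objectives of Section 5 (names and default constants are illustrative, not taken from the paper):

```python
import networkx as nx

def reward(G_t, G_0, F, t, T, c_F=1.0, r_bar=-1.0):
    """Terminal reward of Eq. (5): the scaled objective gain if the final
    graph is connected, or a fixed penalty r_bar < 0 if it is not."""
    if t < T:
        return 0.0                                 # no intermediate rewards
    if nx.number_connected_components(G_t) == 1:   # c(G) = 1
        return c_F * (F(G_t) - F(G_0))
    return r_bar                                   # c(G) >= 2: disconnected

# Toy objective: number of edges (a stand-in for the entropy objectives).
F = lambda G: G.number_of_edges()
G0, Gt = nx.path_graph(4), nx.cycle_graph(4)
print(reward(Gt, G0, F, t=3, T=3))  # 1.0: one edge gained, graph still connected
```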
§ 4 REINFORCEMENT LEARNING REPRESENTATION AND PARAMETRIZATION
In this section, we extend the graph representation and value function approximation parametrizations proposed in past work [14, 16] to the problem of graph rewiring.
§ 4.1 GRAPH REPRESENTATION
As the state and action spaces in network reconfiguration quickly become intractable for a sequence of rewiring operations, we require a graph representation that generalizes over similar states and actions. To this end, we use a GNN architecture that is based on a mean field inference method [46]. More specifically, we use a variant of the structure2vec [13] embedding method to represent every node ${v}_{i} \in \mathcal{V}$ in a graph $G = \left( {\mathcal{V},\mathcal{E}}\right)$ by an embedding vector ${\mu }_{i}$ . This embedding vector is constructed in an iterative process by linearly transforming feature vectors ${x}_{i}$ with a set of weights $\left\{ {{\theta }^{\left( 1\right) },{\theta }^{\left( 2\right) }}\right\}$ , aggregating the ${x}_{i}$ with the feature vectors of neighboring nodes ${v}_{j} \in {\mathcal{N}}_{i}$ , then applying the nonlinear Rectified Linear Unit (ReLU) activation function. Hence, at every step $l \in \left( {1,2,\ldots ,L}\right)$ , embedding vectors are updated according to:
$$
{\mu }_{i}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {\theta }^{\left( 1\right) }{x}_{i} + {\theta }^{\left( 2\right) }\mathop{\sum }\limits_{j \in {\mathcal{N}}_{i}}{\mu }_{j}^{\left( l\right) }\right) , \tag{6}
$$
where all embedding vectors are initialized as ${\mu }_{i}^{\left( 0\right) } = \mathbf{0}$ . After $L$ iterations of feature aggregation, we obtain the node embedding vectors ${\mu }_{i} \equiv {\mu }_{i}^{\left( L\right) }$ . By summing the embedding vectors of nodes in a graph $G$ , we obtain its permutation-invariant embedding: $\mu \left( G\right) = \mathop{\sum }\limits_{{i \in \mathcal{V}}}{\mu }_{i}$ . These invariant graph embeddings represent part of the state that the RL agent observes. Aside from permutation invariance, such embeddings allow learned models to be applied to graphs of different sizes, potentially larger than those seen during training.
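A minimal NumPy sketch of the update in Equation (6), assuming a dense adjacency matrix and randomly initialized weights (dimensions, names, and the number of rounds are illustrative):

```python
import numpy as np

def embed(adj, X, theta1, theta2, L=3):
    """L rounds of the structure2vec update in Eq. (6):
    mu_i <- ReLU(theta1 @ x_i + theta2 @ sum_{j in N(i)} mu_j)."""
    n, d = adj.shape[0], theta1.shape[0]
    mu = np.zeros((n, d))                       # mu^(0) = 0
    for _ in range(L):
        agg = adj @ mu                          # row i: sum over neighbour embeddings
        mu = np.maximum(0.0, X @ theta1.T + agg @ theta2.T)
    return mu

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path 0-1-2
X = rng.normal(size=(3, 2))                     # per-node features x_i
theta1, theta2 = rng.normal(size=(4, 2)), rng.normal(size=(4, 4))
mu = embed(adj, X, theta1, theta2)
graph_mu = mu.sum(axis=0)                       # permutation-invariant graph embedding
print(mu.shape, graph_mu.shape)                 # (3, 4) (4,)
```

Relabeling the nodes permutes the rows of `mu` but leaves `graph_mu` unchanged, which is the permutation invariance the text relies on.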
§ 4.2 VALUE FUNCTION APPROXIMATION
Due to the intractable size of the state-action space in graph reconfiguration tasks, we make use of neural networks to learn approximations of the state-action values $Q\left( {s,a}\right)$ [47]. More specifically, as the action spaces defined in Equation (1) are discrete, we use the DQN algorithm [36] to update the state-action values as follows:
$$
Q\left( s,a\right) \leftarrow Q\left( s,a\right) + \alpha \left\lbrack r + \gamma \mathop{\max }\limits_{{a}^{\prime } \in \mathcal{A}}Q\left( {s}^{\prime },{a}^{\prime }\right) - Q\left( s,a\right) \right\rbrack . \tag{7}
$$
The DQN algorithm uses an experience replay buffer [33] from which it samples previously observed transitions $\left( {s,a,r,{s}^{\prime }}\right)$ , and periodically synchronizes a target network with the parameters of the Q-network. The target network is used to compute the learning target (the estimated Q-value of the best action in the next timestep), making learning more stable as its parameters are kept fixed between updates. We use three separate MLP parametrizations of the Q-function, each corresponding to one of the three sub-steps of the rewiring procedure:
$$
{Q}_{1}\left( {S}_{t} = \left( {G}_{t},\varnothing ,\varnothing \right) ,{A}_{t}\right) = {\theta }^{\left( 3\right) }\operatorname{ReLU}\left( {\theta }^{\left( 4\right) }\left\lbrack {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) \right\rbrack \right) , \tag{8a}
$$

$$
{Q}_{2}\left( {S}_{t} = \left( {G}_{t},{a}_{1},\varnothing \right) ,{A}_{t}\right) = {\theta }^{\left( 5\right) }\operatorname{ReLU}\left( {\theta }^{\left( 6\right) }\left\lbrack {\mu }_{{a}_{1}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) \right\rbrack \right) , \tag{8b}
$$

$$
{Q}_{3}\left( {S}_{t} = \left( {G}_{t},{a}_{1},{a}_{2}\right) ,{A}_{t}\right) = {\theta }^{\left( 7\right) }\operatorname{ReLU}\left( {\theta }^{\left( 8\right) }\left\lbrack {\mu }_{{a}_{1}} \oplus {\mu }_{{a}_{2}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) \right\rbrack \right) , \tag{8c}
$$
where $\oplus$ denotes concatenation. We highlight that, since the underlying structure2vec parameters shown in Equation (6) are shared, the combined set of the learnable parameters in our model is $\Theta = {\left\{ {\theta }^{\left( i\right) }\right\} }_{i = 1}^{8}$ . During validation and test time, we derive a greedy policy from the above learned Q-functions as $\arg \mathop{\max }\limits_{{a \in {\mathcal{A}}_{t}}}Q\left( {s,a}\right)$ . During training, however, we use a linearly decaying $\epsilon$ -greedy behavioral policy. We refer the reader to Appendix B for a detailed description of our implementation.
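For concreteness, the target and update of Equation (7) can be sketched in tabular form; in the actual DQN, the squared TD error is minimized by gradient descent over the network parameters $\Theta$ rather than updated in place, and `q_next` would come from the target network evaluated over the valid action space:

```python
import numpy as np

def dqn_target(r, q_next, gamma=0.99):
    """Learning target r + gamma * max_a' Q_target(s', a') of Eq. (7),
    computed with the (periodically synchronised) target network."""
    return r + gamma * np.max(q_next)

def td_update(q_sa, target, alpha=0.1):
    """Tabular form of the update; DQN instead minimises the
    squared difference (q_sa - target)^2 by gradient descent."""
    return q_sa + alpha * (target - q_sa)

q_next = np.array([0.2, 0.5, -0.1])   # illustrative target-network values over A(s')
target = dqn_target(r=1.0, q_next=q_next)
print(target, td_update(q_sa=0.0, target=target))
```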
§ 5 CASE STUDY: NETWORK RECONFIGURATION FOR INTRUSION DEFENSE
In this section, we detail the specifics of our intrusion defense application scenario. We first define the objective functions we leverage, which act as proxy metrics for the difficulty of navigating the graph. We then detail the procedure we use for simulating attacker behavior during an intrusion, which allows us to compare the pre- and post-rewiring costs of traversal.
§ 5.1 OBJECTIVE FUNCTIONS FOR NETWORK OBFUSCATION
Our goal is to reconfigure the network so as to deter an attacker with partial knowledge of the network topology. Equivalently, we seek to modify the network so as to increase its surprise and render this prior knowledge obsolete, while keeping the network operational. A natural formalization of surprise is the concept of entropy, which measures the quantity of information encoded in a graph or, equivalently, its complexity.
As measures of entropy, we investigate two graph quantities that are invariant to permutations in representation: the Shannon entropy of the degree distribution [2] and the Maximum Entropy Random Walk (MERW) [7], calculated from the spectrum of the adjacency matrix. The former captures the idea that graphs with heterogeneous degrees are less predictable than regular graphs, while the latter is related to random walks on the network. Whereas generic random walks generally do not maximize entropy [17], MERW uses a specific choice of transition probabilities that ensures every trajectory of fixed length is equiprobable, resulting in a maximal global entropy in the limit of infinite trajectory length. Although the local transition probabilities depend on the global structure of the graph, the generating process is local [7]. More formally, the two objective functions are formulated as follows: the Shannon entropy is defined as ${\mathcal{F}}_{\text{Shannon}}\left( G\right) = - \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}q\left( k\right) {\log }_{2}q\left( k\right)$ , where $q\left( k\right)$ is the degree distribution; MERW is defined as ${\mathcal{F}}_{\text{MERW}}\left( G\right) = \ln \lambda$ , where $\lambda$ is the largest eigenvalue of the adjacency matrix. In terms of time complexity, computing the Shannon entropy scales as $\mathcal{O}\left( n\right)$ , while computing MERW has $\mathcal{O}\left( {n}^{3}\right)$ complexity due to the eigendecomposition required to obtain the spectrum of the adjacency matrix.

Figure 2: Illustrative example of the evaluation process for a network reconfiguration. (i) The graph is rewired by our approach, removing and adding the highlighted edges respectively. (ii) The leftmost nodes in the graph become unreachable by the attacker from the entry point marked E, and hence a path to them must be rediscovered by exploring the graph. (iii) To reach the nodes, the attacker pays a cost of 1 and 2 respectively for "unlocking" the previously unseen links along the highlighted paths. The total cost induced by the rewiring strategy is ${\mathcal{C}}_{\mathrm{RW}}^{\text{tot}} = 3$ .
It is worth noting that, in preliminary experiments, we additionally investigated objective functions related to the Kolmogorov complexity. Also known as algorithmic complexity, this measure does not suffer from distributional dependencies [32]. As the Kolmogorov complexity is theoretically incomputable [10], we used graph compression algorithms such as bzip2 [12] and Block Decomposition Methods [52] to approximate it. However, as these approximations depend on the representation of the graph (e.g., its adjacency matrix), one has to consider many permutations of the graph representation, and compressing the representation for a sufficient number of permutations becomes infeasible even for small graphs. While the MERW objective function is also derived from the adjacency matrix through its largest eigenvalue, it does not suffer from this artifact, since the spectrum of the adjacency matrix is invariant to permutations.
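The two entropy objectives above can be sketched as follows (using networkx and NumPy for convenience; for large graphs, a sparse eigensolver would replace the dense eigendecomposition):

```python
import networkx as nx
import numpy as np

def shannon_entropy(G):
    """F_Shannon: Shannon entropy of the degree distribution q(k)."""
    degrees = np.array([d for _, d in G.degree()])
    _, counts = np.unique(degrees, return_counts=True)
    q = counts / counts.sum()
    return float(-(q * np.log2(q)).sum())

def merw_entropy(G):
    """F_MERW: log of the largest adjacency-matrix eigenvalue, i.e. the
    entropy rate of the maximum-entropy random walk."""
    A = nx.to_numpy_array(G)
    lam = np.max(np.linalg.eigvalsh(A))   # adjacency is symmetric, so eigvalsh applies
    return float(np.log(lam))

G = nx.cycle_graph(6)          # 2-regular: degree distribution is fully predictable
print(shannon_entropy(G))      # 0.0
print(merw_entropy(G))         # ln(2): the largest eigenvalue of a cycle is 2
```

The regular cycle illustrates both definitions: its degrees carry no information (zero Shannon entropy), while its MERW entropy rate equals $\ln 2$ since every node offers two equiprobable continuations.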
§ 5.2 SIMULATING AND EVALUATING ATTACKER BEHAVIOR
Given an initial connected and undirected graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ , we model the attacker as having entered the network through an arbitrary node $u \in \mathcal{V}$ , and having built a local map ${\mathcal{M}}_{0}^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{0}^{u}}\right)$ around this entry point, where ${\mathcal{V}}^{u} \subset \mathcal{V}$ is the set of nodes and ${\mathcal{E}}_{0}^{u} \subset {\mathcal{E}}_{0}$ is the set of edges in the map. The rewiring procedure transforms the initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ into the graph ${G}_{ * } = \left( {\mathcal{V},{\mathcal{E}}_{ * }}\right)$ , yielding a new local map ${\mathcal{M}}_{ * }^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{ * }^{u}}\right)$ that is unknown to the attacker. Our goal is to evaluate the effectiveness of the reconfiguration by measuring how "stale" the attacker's prior information has become in comparison to the new map: if the attacker struggles to find its targets in the updated topology, the rewiring has succeeded.
Let $\overline{{\mathcal{V}}^{u}}$ denote the set of nodes in the new local map ${\mathcal{M}}_{ * }^{u}$ that are no longer reachable through at least one trajectory composed of original edges ${\mathcal{E}}_{0}^{u}$ in the old map. For each newly unreachable node ${v}_{i}$ , we measure the cost ${\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ of finding it with a forward random walk, in which the random walker only returns to the previous node if the current node has no other outgoing links. Every time the random walker encounters a link that is (i) not included in ${\mathcal{E}}_{0}^{u}$ and (ii) not yet encountered during the random walk, the cost increases by one. This simulates the cost of having to explore the new graph topology due to the introduced reconfigurations. Finally, we let ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot}} = \mathop{\sum }\limits_{{{v}_{i} \in \overline{{\mathcal{V}}^{u}}}}{\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ denote the sum of the costs over all newly unreachable nodes, which is our metric for the effectiveness of a rewiring strategy. An illustrative example of a forward random walk and cost evaluation is shown in Figure 2, and a formal description is presented in Algorithm 1 in Appendix B to aid reproducibility.
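A simplified single-target sketch of the forward random walk cost, assuming an adjacency-list representation of the rewired graph (the full procedure over all newly unreachable nodes is given in Algorithm 1 of Appendix B; the names here are ours):

```python
import random

def forward_walk_cost(adj_new, known_edges, start, target, rng, max_steps=10_000):
    """Cost of re-finding `target` from `start` with a forward random walk on
    the rewired graph: +1 for each traversed link that is neither in the
    attacker's old map `known_edges` nor already unlocked during this walk."""
    unlocked, cost, prev, cur = set(), 0, None, start
    for _ in range(max_steps):
        if cur == target:
            return cost
        choices = [v for v in adj_new[cur] if v != prev]  # forward: no backtracking
        nxt = rng.choice(choices) if choices else prev    # dead end: step back
        e = frozenset((cur, nxt))
        if e not in known_edges and e not in unlocked:
            unlocked.add(e)
            cost += 1                                     # pay to "unlock" a new link
        prev, cur = cur, nxt
    return cost

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}              # rewired path 0-1-2-3
known = {frozenset((0, 1))}                               # attacker's stale map
rng = random.Random(0)
print(forward_walk_cost(adj, known, start=0, target=3, rng=rng))  # 2: (1,2) and (2,3) are new
```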
§ 6 EXPERIMENTS
§ 6.1 EXPERIMENTAL SETUP
Training and evaluation procedure. Our agent is trained on synthetic graphs of size $n = {30}$ that are generated using the graph models listed below. The given budget is ${15}\%$ of the total edges $m$ that are present in the initial graph. When performing the attacker simulations, the initial local map contains the subgraph induced by all nodes that are 2 hops away from the entry point, which is sampled without replacement from the node set. Training occurs separately for each graph model and objective $\mathcal{F}$ on a set of graphs ${\mathcal{G}}_{\text{ train }}$ of size $\left| {\mathcal{G}}_{\text{ train }}\right| = 6 \cdot {10}^{2}$ . Every 10 training steps, we measure the performance on a disjoint validation set ${\mathcal{G}}_{\text{ validation }}$ of size $\left| {\mathcal{G}}_{\text{ validation }}\right| = 2 \cdot {10}^{2}$ . We perform reconfiguration operations on a test set ${\mathcal{G}}_{\text{ test }}$ of size $\left| {\mathcal{G}}_{\text{ test }}\right| = {10}^{2}$ . To account for stochasticity, we train our models with 10 different seeds and present mean and confidence intervals accordingly. Further details about the experimental procedure (e.g., hyperparameter optimization) can be found in Appendix B.
Synthetic graphs. We evaluate the approaches on graphs generated by the following models:
Barabási-Albert (BA): A preferential attachment model where nodes joining the network are linked to $M$ nodes [5]. We consider values of ${M}_{ba} = 2$ and ${M}_{ba} = 1$ (abbreviated BA-2 and BA-1).
Watts-Strogatz (WS): A model that starts with a ring lattice of nodes with degree $k$ . Each edge is rewired to a random node with probability $p$ , yielding characteristically small shortest path lengths [48]. We use $k = 4$ and $p = {0.1}$ .
Erdős-Rényi (ER): A random graph model in which the existence of each edge is governed by a uniform probability $p$ [20]. We use $p = {0.15}$ .
Real-world graphs. We also consider the real-world Unified Host and Network (UHN) dataset [45], which is a subset of network and host events from an enterprise network. We transform this dataset into a graph by identifying the bidirectional links between hosts appearing in these records, obtaining a graph with $n = {461}$ nodes and $m = {790}$ edges. Further information about this processing can be found in Appendix B.
Baselines. We compare the approach against two baselines: Random, which acts in the same MDP as the agent but chooses actions uniformly, and Greedy, which is a shallow one-step search over all rewirings from a given configuration. The latter picks the rewiring that gives the largest improvement in $\mathcal{F}$ . As this search scales very poorly with graph size and budget, we only evaluate it on graphs of size 30 that are used to train the DQN as a comparison point for validating the learned strategies.
§ 6.2 ENTROPY MAXIMIZATION RESULTS
We first consider the results for the maximization of the entropy-based objectives. The gains in entropy obtained on the held-out test set are shown in Table 1, while training curves are presented in Appendix A. The results demonstrate that the approach discovers better reconfiguration strategies than random rewiring in all cases, and even surpasses the greedy search in one setting. Furthermore, we evaluate the out-of-distribution generalization properties of the learned models along two dimensions: the graph size $n \in \left\lbrack {{10},{300}}\right\rbrack$ and the budget $b \in \{ 5,{10},{15},{20},{25}\} \%$ of existing edges. The results for this experiment (from which Greedy is excluded due to poor scalability) are shown in Figure 3. We find that, with the exception of the (BA, ${\mathcal{F}}_{\text{Shannon}}$ ) combination, the learned models generalize well both to substantially larger graphs and to varying rewiring budgets.
Table 1: Entropy gains on test graphs with $n = {30}$ .
| $\mathcal{F}$ | ${\mathcal{G}}_{\text{test}}$ | DQN | Greedy | Random |
|---|---|---|---|---|
| $\Delta {\mathcal{F}}_{\text{MERW}}$ | BA-2 | ${0.197}_{\pm {0.002}}$ | ${0.225}_{\pm {0.003}}$ | $-{0.019}_{\pm {0.003}}$ |
| | BA-1 | ${0.167}_{\pm {0.003}}$ | ${0.135}_{\pm {0.003}}$ | $-{0.045}_{\pm {0.004}}$ |
| | ER | ${0.182}_{\pm {0.004}}$ | ${0.209}_{\pm {0.012}}$ | $-{0.005}_{\pm {0.003}}$ |
| | WS | ${0.233}_{\pm {0.003}}$ | ${0.298}_{\pm {0.002}}$ | ${0.035}_{\pm {0.002}}$ |
| $\Delta {\mathcal{F}}_{\text{Shannon}}$ | BA-2 | ${0.541}_{\pm {0.009}}$ | ${0.724}_{\pm {0.015}}$ | ${0.252}_{\pm {0.024}}$ |
| | BA-1 | ${0.167}_{\pm {0.008}}$ | ${0.242}_{\pm {0.012}}$ | ${0.084}_{\pm {0.015}}$ |
| | ER | ${0.101}_{\pm {0.012}}$ | ${0.400}_{\pm {0.023}}$ | $-{0.022}_{\pm {0.018}}$ |
| | WS | ${0.926}_{\pm {0.016}}$ | ${1.116}_{\pm {0.022}}$ | ${0.567}_{\pm {0.036}}$ |
§ 6.3 EVALUATING THE RECONFIGURATION IMPACT
We next evaluate the performance of the learned models for entropy maximization on the downstream task of disrupting the navigation of the graph by the attacker.
Figure 3: Evaluation of the out-of-distribution generalization performance (higher is better) of the learned entropy maximization models as a function of graph size (top) and budget size (bottom). All models are trained on graphs with $n = {30}$ . In the bottom figure, the solid and dotted lines represent graphs with $n = {30}$ and $n = {100}$ respectively. Note the different $\mathrm{x}$ -axes used for ER graphs due to their high edge density.
Synthetic graphs. The results for synthetic graphs are shown in Figure 4 in an out-of-distribution setting as a function of graph size, a regime in which the Greedy baseline is too expensive to scale. We find that the best proxy metric varies with the class of synthetic graphs: Shannon entropy performs better for BA graphs, MERW performs better for ER graphs, and the two perform similarly for WS graphs. Strong out-of-distribution generalization is observed for 3 out of 4 synthetic graph models. The results also show that, in the case of WS graphs, even though performance in terms of the metric itself is high (as shown in Figure 3), the objective is not a suitable proxy for the downstream task in an out-of-distribution setting, since the random walk cost decays rapidly. This might be explained by the fact that the WS topology is itself derived through a rewiring process of a regular ring lattice.
Real-world graphs. We also evaluate the models trained on synthetic graphs on the real-world graph constructed from the UHN dataset. Results are shown in Table 2. All but one of the trained models maintain a statistically significant random walk cost difference over the Random baseline. The best-performing models were trained on the (WS, ${\mathcal{F}}_{MERW}$ ) and (BA-1, ${\mathcal{F}}_{\text{ Shannon }}$ ) combinations, obtaining total gains in random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{ tot }}$ of ${136}\%$ and ${125}\%$ respectively. The Greedy baseline is not applicable for a graph of this size.
Figure 4: Evaluation of the learned rewiring strategies for entropy maximization on the downstream task of disrupting attacker navigation. All models are trained on graphs with $n = {30}$ . The random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{ tot }}$ (higher is better) is normalized by $n$ for meaningful comparisons. Note the different $\mathrm{x}$ -axis used for ER graphs due to their high edge density.
§ 7 CONCLUSION
Summary. In this work, we have addressed the problem of graph reconfiguration for the optimization of a given property of a networked system, a computationally challenging problem given the generally large decision space. We have formulated it as a Markov Decision Process that treats rewirings as sequential decisions, and proposed an approach based on deep reinforcement learning and graph neural networks for efficiently learning network reconfigurations. As a case study, we have applied the proposed method to a cybersecurity scenario in which the task is to disrupt the navigation of potential intruders in a computer network. We have assumed that the goal of the intruder is to navigate the network given some knowledge about its topology. To disrupt the attack, we have designed a mechanism for increasing the level of surprise of the network through entropy maximization by means of network rewiring. More specifically, as objectives of the optimization process, we have considered two entropy metrics that quantify the predictability of the network topology, and demonstrated that our method generalizes well to unseen graphs with varying rewiring budgets and numbers of nodes. We have also validated the effectiveness of the learned models for increasing path lengths towards targeted nodes. The proposed approach outperforms the considered baselines on both synthetic and real-world graphs.
Table 2: Total random walk cost of models applied to the real-world UHN graph $\left( {n = {461},m = {790}}\right)$ .
| Method | $\mathcal{F}$ | Training graphs | ${\mathcal{C}}_{\mathrm{RW}}^{\text{tot}}/n$ ( $\uparrow$ ) |
|---|---|---|---|
| DQN | ${\mathcal{F}}_{\text{MERW}}$ | BA-2 | ${3.087}_{\pm {0.225}}$ |
| | | BA-1 | ${1.294}_{\pm {0.185}}$ |
| | | ER | ${2.887}_{\pm {0.335}}$ |
| | | WS | ${\mathbf{4.888}}_{\pm {0.568}}$ |
| | ${\mathcal{F}}_{\text{Shannon}}$ | BA-2 | ${3.774}_{\pm {0.445}}$ |
| | | BA-1 | ${\mathbf{4.660}}_{\pm {0.461}}$ |
| | | ER | ${3.891}_{\pm {0.559}}$ |
| | | WS | ${3.555}_{\pm {0.318}}$ |
| Random | – | – | ${2.071}_{\pm {0.289}}$ |
| Greedy | – | – | ∞ |
|
| 258 |
+
|
| 259 |
+
Limitations and future work. An advantage of the proposed approach is that it does not require any knowledge of the exact position of the attacker as the traversal of the graph takes place. One may also consider a real-time scenario in which the network reconfiguration aims to "close off" the attacker given knowledge of their location, which may lead to a more efficient defense if such information is available. We have also adopted a simple model of attacker navigation (forward random walks). Different, more complex navigation strategies (e.g., targeting vulnerable machines) can also be considered. This knowledge might be integrated as part of the training process, for example by increasing the probability of rewiring of edges around these nodes through a corresponding reward structure (i.e., higher reward for protecting more sensitive nodes). More generally, we have identified an important application to cybersecurity, which might have a positive impact in safeguarding networks from malicious intrusions. With respect to potential dual-use, we note that the proposed defense mechanism cannot be exploited by attackers directly, since it requires knowledge of at least part of the underlying network topology.
papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Learning Graph Search Heuristics
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
Searching for a path between two nodes in a graph is one of the most well-studied and fundamental problems in computer science. In numerous domains such as robotics, AI, or biology, practitioners develop search heuristics to accelerate their pathfinding algorithms. However, it is a laborious and complex process to hand-design heuristics based on the problem and the structure of a given use case. Here we present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data by leveraging recent advances in imitation learning and graph representation learning. At training time, we aggregate datasets of search trajectories and ground-truth shortest path distances, which we use to train a specialized graph neural network-based heuristic function using backpropagation through steps of the pathfinding process. Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph sizes, and can be easily incorporated in an algorithm such as ${\mathrm{A}}^{ * }$ at test time. Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by ${58.5}\%$ on average, can be directly applied in diverse graphs ranging from biological networks to road networks, and allows for fast planning in time-critical robotics domains.
## 1 Introduction
Search heuristics are essential in several domains, including robotics, AI, biology, and chemistry [1-6]. For example, in robotics, complex robot geometries often yield slow collision checks, and search algorithms are constrained by the robot's onboard computation resources, requiring well-performing search heuristics that visit as few nodes as possible [1, 4]. In AI, domain-specific search heuristics are useful for improving the performance of inference engines operating on knowledge bases [3, 5]. Search heuristics have also previously been developed to reduce search efforts in protein-protein interaction networks [6] and in planning chemical reactions that can synthesize target chemical products [2]. This broad set of applications underlines the importance of good search heuristics that are applicable to a wide range of problems.
Figure 1: The goal is to navigate (find a path) from the start to the goal node. While BFS visits many nodes to find a start-to-goal path (left), one can use a heuristic based on the features of the nodes (e.g., Euclidean distance) on the graph to reduce the search effort (middle). We propose PHIL to learn a tailored search heuristic for a given graph, capable of reducing the number of visited nodes even further by exploiting the inductive biases of the graph (right).
The search task can be formulated as a pathfinding problem on a graph, where given a graph, the task is to navigate and find a short feasible path from a start node to a goal node, while in the process visiting as few nodes as possible (Figure 1). The most straightforward approach would be to launch a search algorithm such as breadth-first search (BFS) and iteratively expand the graph from the start node until it reaches the goal node. Since BFS does not harness any prior knowledge about the graph, it usually visits many nodes before reaching the goal, which is expensive in cases such as robotics where visiting nodes is costly. To visit fewer nodes during the search, one may use domain-specific information about the graph via a heuristic function [7], which allows one to define a distance metric on graph nodes to prune directions that seem less promising to explore. Unfortunately, coming up with good search heuristics requires significant domain expertise and manual effort.
While there has been significant progress in designing search heuristics, it remains a challenging problem. Classical approaches [8, 9] tend to hand-design search heuristics, which requires domain knowledge and a lot of trial and error. To alleviate this problem, there has been significant development in general-purpose search heuristics based on trading off greedy expansions and novelty-based exploration [10-13] or search problem simplifications [14-16]. These approaches alleviate some of the common pitfalls of goal-directed heuristics, but we demonstrate that, where possible, it is useful to learn domain-specific heuristics that can better exploit problem structure.
On the other hand, learning-based methods face a set of different challenges. Firstly, the data distribution is not i.i.d., as newly encountered graph nodes depend on past heuristic values, which means that supervised learning-based methods are not directly applicable. Secondly, heuristics should run fast, with ideally constant time complexity; otherwise, the overall asymptotic time complexity of the search procedure could be increased. Finally, as the environment (search graph) sizes increase, reinforcement learning-based heuristic learning approaches tend to perform poorly [1]. State-of-the-art imitation learning-based methods can learn useful search heuristics [1]; however, these methods still rely on feature engineering for a specific domain and do not generally guarantee a constant time complexity with respect to graph sizes.
Figure 2: Main components of PHIL: On the left, using a greedy mixture policy induced by the current version of our parameterized heuristic ${h}_{\theta }$ and an oracle heuristic ${h}^{ * }$ (i.e., a heuristic that correctly determines distances between nodes), we roll-out a search trajectory from the start node to the goal node. Each trajectory step contains a set of newly added fringe nodes with bounded random subsets of their 1-hop neighborhoods and their oracle $\left( {h}^{ * }\right)$ distances to the goal node. Trajectories are aggregated throughout the training procedure. On the right, we use truncated backpropagation through time on each collected trajectory to train ${h}_{\theta }$ , where $\widehat{h}$ is the predicted distance between ${x}_{2}$ and ${x}_{g}$ , and ${z}_{2}$ is the updated state of the memory. Here, the memory captures the embedding of the graph visited so far.
In this paper, we propose Path Heuristic with Imitation Learning (PHIL), a framework that extends the recent imitation learning-based heuristic search paradigm with a learnable explored graph memory. This means that PHIL learns a representation that allows it to capture the structure of the graph explored so far, so that it can then better select which node to explore next (Figure 2). We train our approach to predict the node-to-goal distances (${h}^{ * }$ in Figure 2) of graph nodes during search. To train our memory module, which captures the explored graph, we use truncated backpropagation through time (TBTT) [17], where we utilize ground-truth node-to-goal distances as a supervision signal at each search step. Our TBTT procedure is embedded within an adaptation of the AggreVaTe imitation learning algorithm [18]. PHIL also includes a specialized graph neural network architecture, which allows us to apply PHIL to diverse graphs from different domains.
We evaluate PHIL on standard benchmark heuristic learning datasets (Section 5.1), diverse graph-based datasets from different domains (Section 5.2), and practical UAV flight use cases (Section 5.3). Experiments demonstrate that PHIL outperforms state-of-the-art heuristic learning methods by up to $4 \times$. Further, PHIL performs within 4.9% of an oracle in indoor drone planning scenarios, which is up to a 21.5% reduction compared with commonly used approaches. In practice, our contributions enable practitioners to quickly extract useful search heuristics from their graph datasets without any hand-engineering.
## 2 Preliminaries
Graph search. Suppose that we are given an unweighted connected graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$, where $\mathcal{V}$ is a set of nodes, and $\mathcal{E}$ a corresponding set of edges. Further suppose that each node $i \in \mathcal{V}$ has corresponding features ${x}_{i} \in {\mathbb{R}}^{{D}_{v}}$, and each edge $\left( {i, j}\right) \in \mathcal{E}$ has features ${e}_{ij} \in {\mathbb{R}}^{{D}_{e}}$. Assume that we are also given a start node ${v}_{s} \in \mathcal{V}$ and a goal node ${v}_{g} \in \mathcal{V}$. At any stage of our search algorithm, we can partition the nodes of our graph into three sets as $\mathcal{V} = \mathrm{CLOSE} \cup \mathrm{OPEN} \cup \mathrm{REST}$, where CLOSE are the nodes already explored, OPEN are candidate nodes for exploration (i.e., all nodes connected to any node in CLOSE, but not yet in CLOSE), and REST is the rest of the graph. Each expansion moves a node from OPEN to CLOSE, and adds the neighbors of the given node from REST to OPEN. We call ${\mathcal{V}}_{\text{new}}$ the set of fringe nodes newly added at each search step. At the start of the search procedure, CLOSE $= \left\{ {v}_{s}\right\}$ and we expand nodes until ${v}_{g}$ is encountered (i.e., until ${v}_{g} \in$ CLOSE).
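The CLOSE/OPEN/REST bookkeeping of a single expansion can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation; `expand` and the adjacency-dict representation are our own names:

```python
def expand(adj, close, open_, v):
    """Move v from OPEN to CLOSE and pull its unseen neighbors into OPEN.

    Returns the set of newly added fringe nodes V_new; REST is implicit
    (all nodes not in CLOSE or OPEN).
    """
    open_.remove(v)
    close.add(v)
    v_new = {u for u in adj[v] if u not in close and u not in open_}
    open_ |= v_new
    return v_new

# Toy graph: edges 0-1, 0-2, 1-3, 2-3
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
close, open_ = {0}, {1, 2}   # state after expanding the start node v_s = 0
v_new = expand(adj, close, open_, 1)
```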
Greedy best-first search. We can perform greedy best-first search using a greedy fringe expansion policy, such that we always expand the node $v \in$ OPEN that minimizes $h\left( {v,{v}_{g}}\right)$ . Here, $h : \mathcal{V} \times \mathcal{V} \rightarrow$ $\mathbb{R}$ is a tailored heuristic function for a given use case. In our work, we are interested in learning a function $h$ that predicts shortest path lengths, this way minimizing $\left| \text{CLOSE}\right|$ in a greedy best-first search regime.
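A minimal greedy best-first search along these lines, with the heuristic supplied as a callable, can be sketched as follows (illustrative code; the function and variable names are ours):

```python
import heapq

def greedy_best_first(adj, v_s, v_g, h):
    """Greedy best-first search: always expand the OPEN node with minimal
    h(v, v_g). Returns |CLOSE|, the number of expanded nodes (search effort)."""
    close = set()
    open_ = [(h(v_s, v_g), v_s)]
    while open_:
        _, v = heapq.heappop(open_)
        if v in close:          # stale duplicate entry in the heap
            continue
        close.add(v)
        if v == v_g:
            return len(close)
        for u in adj[v]:
            if u not in close:
                heapq.heappush(open_, (h(u, v_g), u))
    return None  # goal unreachable

# A 1D path graph 0-1-2-3-4; |u - v| is a perfect heuristic here.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
effort = greedy_best_first(adj, 0, 4, lambda u, v: abs(u - v))
```

With a perfect heuristic the search expands only the five nodes on the shortest path, i.e., the minimal possible |CLOSE| on this graph.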
Imitation of perfect heuristics. Partially observable Markov decision processes (POMDPs) are a suitable framework to describe the problem of learning search heuristics [1]. We can have $s =$ (CLOSE, OPEN, REST) as our state, an action $a \in \mathcal{A}$ corresponds to moving a node from OPEN to CLOSE, and the observations $o \in \mathcal{O}$ are the features of newly included nodes in OPEN. Note that one could consider an MDP framework to learn heuristics, but the time complexity of operating on the whole state is in most cases prohibitive. We also define a history $\psi \in \Psi$ as a sequence of observations $\psi = {o}_{1},{o}_{2},{o}_{3},\ldots$ Our work leverages the observation that using a heuristic function during greedy best-first search that correctly determines the length of the shortest path between fringe nodes and the goal node will also yield minimal |CLOSE|. For training, we adopt a perfect heuristic ${h}^{ * }$, similar to [1], which has full information about $s$ during search. Such an oracle can provide ground-truth distances ${h}^{ * }\left( {s, v,{v}_{g}}\right)$, where $v \in$ OPEN. To conclude, we define a greedy best-first search policy ${\pi }_{\theta }$ that uses a parameterized heuristic ${h}_{\theta }$ to expand nodes from OPEN with minimal heuristic values. One could also directly use a POMDP solver for the above-described problem, but this approach is usually infeasible due to the dimensionality of the search state [19].
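On an unweighted graph, the oracle ${h}^{ * }$ for a fixed goal can be obtained by a single breadth-first search from the goal node. A minimal sketch (illustrative only; `oracle_distances` is our own name):

```python
from collections import deque

def oracle_distances(adj, v_g):
    """A perfect heuristic h*: BFS from the goal gives every node's true
    shortest-path distance to v_g on an unweighted graph."""
    dist = {v_g: 0}
    queue = deque([v_g])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

# Toy graph: a 4-cycle 0-1-3-2-0, goal node 3
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
h_star = oracle_distances(adj, 3)
```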
## 3 Related Work
General purpose heuristic design. There has been significant research in designing general-purpose heuristics for speeding up satisficing planning. The first set of approaches are based on simplifying the search problem for example using landmark heuristics [14, 16]. The next set of approaches aim to include novelty-based exploration in greedy best-first search [10-13]. The latter set of approaches showed state-of-the-art performance (best-first width search [12, 13], BFWS) in numerous settings. We show that in domains where data is available, it can be more effective to incorporate a learned heuristic into a greedy best-first search procedure.
Learning heuristic search. There have been numerous previous works that attempt to learn search heuristics: Arfaee et al. [20] propose to improve heuristics iteratively, Virseda et al. [21] learn to combine heuristics to estimate graph node distances, Wilt et al. [22] and Garrett et al. [23] propose to learn node rankings, Thayer et al. [24] suggest to infer heuristics during a search, and Kim et al. [25] train a neural network to predict graph node distances. These methods generally do not consider the non-i.i.d. nature of heuristic search. Further, Bhardwaj et al. [1] propose SAIL, where heuristic learning is framed as an imitation learning problem with cost-to-go oracles. The SAIL heuristic uses hand-designed features tailored for obstacle avoidance, with a linear time complexity in the number of explored grid nodes found to be colliding with an obstacle. Feature engineering becomes more difficult as we attempt to learn heuristics on diverse graphs such as ones seen in Section 5.2, where we may need expert knowledge. Further, heuristics that do not have a constant time complexity in the size of the graph [1, 26-29] generally scale poorly with graph size and hence have constrained use cases. Recent approaches to learning heuristics include Retro* [2] by Chen et al., where a heuristic is learned in the context of AND-OR search trees for chemical retrosynthetic planning. Our work focuses on a more general graph setting.
There has been significant progress on learning heuristics for NP-hard combinatorial optimization problems [30-32]. Still, these heuristic learning methods, due to their time complexities, are impractical for the application in polynomial-time search problems, on which this work focuses.
Learning general purpose search. Learning general search policies is a very well-studied research area with a rich set of developments and applications. These include Monte Carlo Tree Search methods [33, 34], implicit planning methods [35-37], and imagination-based planning approaches [38, 39]. Learning search heuristics can be seen as a special case of general purpose search, where the search problem is treated as a partially observable Markov decision process with restricted action evaluation (see Section 4), and with models running in $\mathcal{O}\left( 1\right)$ to remain competitive time-complexity-wise on problems where best-first search performs well. General purpose search methods do not take into account the above-mentioned constraints, which motivates the development of tailored approaches for learning heuristics [1, 2].
Imitation learning. Our approach builds on prior work in imitation learning (IL) with cost-to-go oracles. Cost-to-go oracles have been incorporated in the context of IL in methods such as SEARN [40], AggreVaTe [18], LOLS [41], AggrevaTeD [42], DART [43], and THOR [44]. SAIL [1] presents an AggreVaTe-based algorithm for learning heuristic search. We extend SAIL by incorporating a recurrent $Q$ -like function, in which sense our algorithm more closely resembles AggreVaTeD by Sun et al. [42]. While a recurrent policy can be easily incorporated in AggreVaTeD, we cannot use a policy to evaluate actions. This is due to the fact that we would either have to evaluate all actions in a state, which is computationally infeasible, or we would have to give up on taking actions that are not in the most recent version of the search fringe, which would degrade the performance (see Section 4).
## 4 Path Heuristic with Imitation Learning
Training objective. With the aim of minimizing |CLOSE| after search, our goal is to train a parameterized heuristic function ${h}_{\theta } : \Psi \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ to predict ground-truth node distances ${h}^{ * }$ and use ${h}_{\theta }$ within a greedy best-first policy ${\pi }_{\theta }$ at test time. More specifically, we assume access to a distribution over graphs ${P}_{\mathcal{G}}$, a start-goal node distribution ${P}_{{v}_{sg}}\left( {\cdot \mid \mathcal{G}}\right)$, and a time horizon $T$. Moreover, we assume a joint state-history distribution $s,\psi \sim {P}_{s}\left( {\cdot \mid \mathcal{G}, t,{\pi }_{\theta },{v}_{s},{v}_{g}}\right)$, where ${P}_{s}$ represents the probability of our search being in state $s$ at time $0 \leq t \leq T$ on graph $\mathcal{G}$ with pathfinding problem $\left( {{v}_{s},{v}_{g}}\right)$, under a greedy best-first search policy ${\pi }_{\theta }$ using heuristic ${h}_{\theta }$. Hence, our goal can be summarized as minimizing the following objective:
$$
\mathcal{L}\left( \theta \right) = \underset{\substack{\mathcal{G} \sim {P}_{\mathcal{G}},\\ \left( {{v}_{s},{v}_{g}}\right) \sim {P}_{{v}_{sg}},\\ t \sim \mathcal{U}\left( {0,\ldots , T}\right) ,\\ s,\psi \sim {P}_{s}}}{\mathbb{E}}\left\lbrack {\frac{1}{\left| \mathrm{OPEN}\right| }\mathop{\sum }\limits_{{v \in \mathrm{OPEN}}}{\left( {h}^{ * }\left( s, v,{v}_{g}\right) - {h}_{\theta }\left( \psi , v,{v}_{g}\right) \right) }^{2}}\right\rbrack \tag{1}
$$
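For one sampled state, the inner term of Eq. (1) is simply a mean squared error over the current OPEN set. A minimal sketch, with `h_star` and `h_theta` standing in for the oracle and predicted node-to-goal distances (illustrative names, not from the paper):

```python
def heuristic_loss(open_nodes, h_star, h_theta):
    """Mean squared error between oracle and predicted node-to-goal
    distances, averaged over the current OPEN set (inner term of Eq. 1)."""
    errs = [(h_star[v] - h_theta[v]) ** 2 for v in open_nodes]
    return sum(errs) / len(open_nodes)

# Two OPEN nodes: one mispredicted by 1 hop, one predicted exactly.
loss = heuristic_loss([1, 2], h_star={1: 3.0, 2: 1.0}, h_theta={1: 2.0, 2: 1.0})
```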
Before we describe the algorithm that can be used to minimize $\mathcal{L}$, we rewrite ${h}_{\theta }$ to include a memory digest component $\left( {z}_{t}\right)$, which represents an embedding of $\psi$ at time step $t$. Hence, ${h}_{\theta }$ becomes ${h}_{\theta } : {\mathbb{R}}^{d} \times \mathcal{O} \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$, where $d$ is the dimensionality of our memory's embedding space. As opposed to previous methods [1], ${z}_{t}$ allows us to automatically extract relevant features for heuristic computations and concurrently reduce the computational complexity of the heuristic function. Further, as shown in [1], if we were to use ${h}_{\theta }$ to evaluate all actions in a state (i.e., recalculate the heuristic values of all nodes in OPEN), we would need a squared reduction in the number of expanded nodes compared with BFS for PHIL to bring performance benefits over BFS, which may not be possible on all datasets. Hence, we constrain the heuristic to only evaluate new OPEN nodes obtained after moving a node to CLOSE, calling the set of new fringe nodes ${\mathcal{V}}_{\text{new}}$ after each expansion. In practice, the policy ${\pi }_{\theta }$ yields an algorithm equivalent to greedy best-first search, with the heuristic function replaced by ${h}_{\theta }$.

Algorithm 1: PHIL — Sequential Heuristic Training

---

Obtain hyperparameters $T,{\beta }_{0}, N, m,{t}_{\tau }$;
Initialize $\mathcal{D} \leftarrow \varnothing ,{h}_{{\theta }_{1}}$;
for $i = 1,\ldots , N$ do
&nbsp;&nbsp;Sample $\mathcal{G} \sim {P}_{\mathcal{G}}$;
&nbsp;&nbsp;Sample ${v}_{s},{v}_{g} \sim {P}_{{v}_{sg}}$;
&nbsp;&nbsp;Set $\beta \leftarrow {\beta }_{0}^{i}$;
&nbsp;&nbsp;Set mixture policy ${\pi }_{\text{mix}} \leftarrow \left( {1 - \beta }\right) \cdot {\pi }_{{\theta }_{i}} + \beta \cdot {\pi }^{ * }$;
&nbsp;&nbsp;Collect $m$ trajectories ${\tau }_{ij}$ as follows;
&nbsp;&nbsp;for $j = 1,\ldots , m$ do
&nbsp;&nbsp;&nbsp;&nbsp;Sample $t \sim \mathcal{U}\left( {0,\ldots , T - {t}_{\tau }}\right)$;
&nbsp;&nbsp;&nbsp;&nbsp;Roll-in $t$ time steps of ${\pi }_{{\theta }_{i}}$ to obtain ${z}_{t}$ and new state ${s}_{t} = \left( {{\mathrm{CLOSE}}^{0},{\mathrm{OPEN}}^{0},{\mathrm{REST}}^{0}}\right)$;
&nbsp;&nbsp;&nbsp;&nbsp;Roll-out trajectory ${\tau }_{ij}$ as follows;
&nbsp;&nbsp;&nbsp;&nbsp;for $k = 1,\ldots ,{t}_{\tau }$ do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update ${s}_{t + k - 1}$ using ${\pi }_{\text{mix}}$ to get new state ${s}_{t + k}$ and new fringe state ${\mathrm{OPEN}}^{k}$;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Obtain new fringe nodes ${\mathcal{V}}_{\text{new}} = {\mathrm{OPEN}}^{k} \smallsetminus {\mathrm{OPEN}}^{k - 1}$;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update trajectory ${\tau }_{ij} \leftarrow {\tau }_{ij} \cup \left\{ \left( {{\mathcal{V}}_{\text{new}},{h}^{ * }\left( {{s}_{t + k},{\mathcal{V}}_{\text{new}},{v}_{g}}\right) }\right) \right\}$;
&nbsp;&nbsp;&nbsp;&nbsp;Update dataset $\mathcal{D} \leftarrow \mathcal{D} \cup \left\{ \left( {{\tau }_{ij},{z}_{t}}\right) \right\}$ or $\mathcal{D} \cup \left\{ \left( {{\tau }_{ij},0}\right) \right\}$;
&nbsp;&nbsp;Train ${h}_{{\theta }_{i}}$ using TBTT on each $\tau \in \mathcal{D}$ to get ${h}_{{\theta }_{i + 1}}$;
return best performing ${h}_{{\theta }_{i}}$ on validation;

---
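The per-step action choice under the mixture policy can be sketched as follows: with probability $\beta$ the oracle policy is followed, otherwise the learned policy, with $\beta = {\beta }_{0}^{i}$ decaying over training iterations. This is an illustrative sketch; `pi_theta` and `pi_star` are hypothetical callables that pick a node from OPEN:

```python
import random

def mixture_action(open_, pi_theta, pi_star, beta, rng=random):
    """One roll-out step of the mixture policy pi_mix:
    follow the oracle pi_star with probability beta, else pi_theta."""
    policy = pi_star if rng.random() < beta else pi_theta
    return policy(open_)

# Degenerate toy policies over OPEN = [1, 2]: beta = 1.0 always follows
# the "oracle" (max), beta = 0.0 always follows the "learned" policy (min).
oracle_choice = mixture_action([1, 2], pi_theta=min, pi_star=max, beta=1.0)
learned_choice = mixture_action([1, 2], pi_theta=min, pi_star=max, beta=0.0)
```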
### 4.1 Learning algorithm & architecture
Imitation learning algorithm. In Algorithm 1, we present the pseudo-code of the IL algorithm used to train our heuristic models (Figure 3). The high-level idea of our algorithm is that we aggregate trajectories of search traces (i.e., sequences of new fringe nodes) and use truncated backpropagation through time to optimize ${h}_{\theta }$ after each data-collection step. In particular, after sampling a graph $\mathcal{G}$ and a search problem ${v}_{s},{v}_{g}$ , we use our greedy learned policy ${\pi }_{\theta }$ induced by ${h}_{\theta }$ to roll-in for $t \sim \mathcal{U}\left( {0,\ldots , T - {t}_{\tau }}\right)$ expansions, where $T$ is the episode time horizon, and ${t}_{\tau }$ is the roll-out length. From our roll-in, we obtain a new state $s = \left( {{\mathrm{{CLOSE}}}^{0},{\mathrm{{OPEN}}}^{0},{\mathrm{{REST}}}^{0}}\right)$ , and an initial memory state ${z}_{t}$ . After our roll-in, we roll-out for ${t}_{\tau }$ steps using our mixture policy ${\pi }_{mix}$ , which is obtained by probabilistically blending ${\pi }_{\theta }$ and the greedy best-first policy induced by the oracle heuristic ${\pi }^{ * }$ . In a roll-out, we collect sequences of new fringe nodes, together with their ground-truth distances to the goal ${v}_{g}$ , given by ${h}^{ * }$ . Once the roll-out is complete, we append the obtained trajectory and the initial state for the following optimization using backpropagation through time. Further analysis on the trade-offs between using rolled-in states ${z}_{t}$ or zeroed-out states for training can be found in the supplementary material.
Note that we could also use supervised learning-based approaches: sample a fixed dataset of $\left( {{v}_{s},{v}_{g},{h}^{ * }\left( {s,{v}_{s},{v}_{g}}\right) }\right)$ triples and train a model to predict node distances conditioned on their features. However, our experiments in Section 5 demonstrate that ignoring the non-i.i.d. nature of heuristic search negatively impacts model performance, with supervised learning-based methods performing up to ${40} \times$ worse.
Recurrent GNN architecture. In each forward pass, ${h}_{\theta }$ obtains a set of new fringe nodes ${\mathcal{V}}_{\text{new}}$, the goal node ${v}_{g}$, and the memory ${z}_{t}$ at time step $t$. We represent each node in ${\mathcal{V}}_{\text{new}}$ using its features ${x}_{i} \in {\mathbb{R}}^{{D}_{v}}$, and likewise the goal node ${v}_{g}$ using its features ${x}_{g} \in {\mathbb{R}}^{{D}_{v}}$. Further, for each $i \in {\mathcal{V}}_{\text{new}}$, we uniformly sample an $n \in {\mathbb{N}}_{ \geq 0}$-bounded set of nodes present in the 1-hop neighborhood of $i$, calling this set ${\mathcal{N}}_{i}$, with $\left| {\mathcal{N}}_{i}\right| \leq n$. This sampling step produces a set of neighboring node features, where each $j \in {\mathcal{N}}_{i}$ has features ${x}_{j} \in {\mathbb{R}}^{{D}_{v}}$ and corresponding edge features ${e}_{ij} \in {\mathbb{R}}^{{D}_{e}}$.

Figure 3: This figure demonstrates the core idea behind our IL algorithm. We present the roll-in phase on the left-hand side, where our policy is rolled in for $t$ steps to obtain state ${s}_{t}$ and embedding ${z}_{t}$. On the right-hand side, we show the trajectory collection and training steps, where we aggregate the trajectory for downstream training (green) and use truncated backpropagation through time on the collected dataset (red).
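The bounded 1-hop sampling step can be sketched as follows (illustrative only; `sample_1hop` and the feature dictionaries are our own names, and the real implementation may batch this differently):

```python
import random

def sample_1hop(adj, node_feat, edge_feat, i, n, rng):
    """Uniformly sample a bounded neighborhood N_i (|N_i| <= n) of node i,
    returning paired node features x_j and edge features e_ij."""
    nbrs = list(adj[i])
    if len(nbrs) > n:
        nbrs = rng.sample(nbrs, n)
    return [(node_feat[j], edge_feat[(i, j)]) for j in nbrs]

# Node 0 has four neighbors, but we cap the sample at n = 2.
adj = {0: [1, 2, 3, 4]}
node_feat = {j: [float(j)] for j in range(5)}
edge_feat = {(0, j): [1.0] for j in range(1, 5)}
pairs = sample_1hop(adj, node_feat, edge_feat, 0, n=2, rng=random.Random(0))
```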
${h}_{\theta }$ forward pass. Algorithm 2 presents a single forward pass of ${h}_{\theta }$. The forward pass outputs predicted distances of the new fringe nodes to the goal ${\widehat{h}}_{i}$, together with an updated memory digest ${z}_{t + 1}$. In Algorithm 2, $f,\phi ,\gamma$, GRU [45], and MLP are each parameterized differentiable functions, with $\phi ,\gamma$ representing the update and message functions [46] of a graph neural network, respectively.

Algorithm 2: Heuristic function $\left( {h}_{\theta }\right)$ forward pass

---

Obtain ${x}_{i},{x}_{j},{e}_{ij},{x}_{g},{z}_{t}$;
${x}_{i} \leftarrow f\left( {{x}_{i},{x}_{g},{D}_{EUC}\left( {{x}_{i},{x}_{g}}\right) ,{D}_{COS}\left( {{x}_{i},{x}_{g}}\right) }\right)$;
${x}_{j} \leftarrow f\left( {{x}_{j},{x}_{g},{D}_{EUC}\left( {{x}_{j},{x}_{g}}\right) ,{D}_{COS}\left( {{x}_{j},{x}_{g}}\right) }\right)$;
${g}_{i} \leftarrow \phi \left( {{x}_{i},{ \oplus }_{j \in {\mathcal{N}}_{i}}\gamma \left( {{x}_{i},{x}_{j},{e}_{ij}}\right) }\right)$;
${g}_{i}^{\prime },{z}_{i, t + 1} \leftarrow \operatorname{GRU}\left( {{g}_{i},{z}_{t}}\right)$;
${z}_{t + 1} \leftarrow \overline{{z}_{i, t + 1}}$;
${\widehat{h}}_{i} \leftarrow \operatorname{MLP}\left( {{g}_{i}^{\prime },{x}_{g}}\right)$;
return ${\widehat{h}}_{i},{z}_{t + 1}$;

---
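The data flow of Algorithm 2 can be sketched in NumPy with toy random-weight stand-ins for the trained modules $f,\gamma ,\phi$, GRU, and MLP. This is a shape-level sketch under our own simplifying assumptions (no edge features, a one-gate GRU), not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # memory / embedding dimension

# Random stand-ins for the trained parameters of each module.
W_f = rng.normal(size=(2 * d + 2, d))    # f: project (x, x_g, D_EUC, D_COS)
W_msg = rng.normal(size=(2 * d, d))      # gamma: message from (x_i, x_j)
W_upd = rng.normal(size=(2 * d, d))      # phi: update from (x_i, aggregated msg)
W_z = rng.normal(size=(2 * d, d))        # GRU update gate
W_h = rng.normal(size=(2 * d, d))        # GRU candidate state
W_out = rng.normal(size=(2 * d, 1))      # MLP distance head

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def f(x, x_g):
    d_euc = np.linalg.norm(x - x_g, keepdims=True)
    d_cos = np.array([x @ x_g / (np.linalg.norm(x) * np.linalg.norm(x_g) + 1e-8)])
    return np.tanh(np.concatenate([x, x_g, d_euc, d_cos]) @ W_f)

def gru(g, z):
    u = sigmoid(np.concatenate([g, z]) @ W_z)   # update gate
    h = np.tanh(np.concatenate([g, z]) @ W_h)   # candidate state
    z_new = (1 - u) * z + u * h
    return z_new, z_new                          # (g_i', z_{i,t+1})

def h_theta(x_new, neighborhoods, x_g, z_t):
    """One forward pass over V_new: per-node distance predictions h_hat,
    plus the mean-pooled updated memory z_{t+1}."""
    preds, states = [], []
    for x_i, nbrs in zip(x_new, neighborhoods):
        xi = f(x_i, x_g)
        msgs = [np.tanh(np.concatenate([xi, f(x_j, x_g)]) @ W_msg) for x_j in nbrs]
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(d)
        g_i = np.tanh(np.concatenate([xi, agg]) @ W_upd)
        g_i, z_i = gru(g_i, z_t)
        preds.append(float(np.concatenate([g_i, f(x_g, x_g)]) @ W_out))
        states.append(z_i)
    return preds, np.mean(states, axis=0)

x_new = [rng.normal(size=d) for _ in range(3)]           # 3 new fringe nodes
nbrs = [[rng.normal(size=d)] for _ in range(3)]          # one sampled neighbor each
preds, z_next = h_theta(x_new, nbrs, rng.normal(size=d), np.zeros(d))
```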
In our forward pass, using the function $f$ , we first project ${x}_{i},{x}_{j}$ into a node embedding space, together with the goal features ${x}_{g}$ , and their Euclidean $\left( {D}_{EUC}\right)$ and cosine distances $\left( {D}_{COS}\right)$ . After that, using a 1-layer GNN, we perform a single convolution over each ${x}_{i}$ and the corresponding neighborhood ${\mathcal{N}}_{i}$ , to obtain ${g}_{i}$ . The specific GNN choice is a design decision left to the practitioner, and further analysis of GNN choices can be found in Appendix D. Our graph convolution processing step allows us to easily incorporate edge features and work with variable sizes of ${\mathcal{N}}_{i}$ . After the graph convolution, we apply the GRU module over each embedding ${g}_{i}$ to obtain hidden states ${z}_{i, t + 1}$ , and new embeddings ${g}_{i}^{\prime }$ . We compute the sample mean of ${z}_{i, t + 1}$ for each node $i \in {\mathcal{V}}_{\text{new }}$ to obtain a new hidden state ${z}_{t + 1}$ , and process ${g}_{i}^{\prime }$ with ${x}_{g}$ using an MLP to compute the distances between the graph nodes.
Permutation invariant ${\mathcal{V}}_{\text{new }}$ embedding. There is a trade-off between processing new fringe nodes in batch, as in Algorithm 2, and processing them sequentially. Namely, when we process the nodes in batch, we do not use the in-batch observations to predict batch node values, which means that ${z}_{t}$ is slightly outdated. On the other hand, in PHIL, batch processing allows us to compute the heuristic values of all $v \in {\mathcal{V}}_{\text{new }}$ in parallel on a GPU and preserves the memory's permutation invariance with respect to nodes in ${\mathcal{V}}_{\text{new }}$. That is, because our observations are nodes and edges of a graph, the respective observation ordering usually does not contain inductive biases useful for predictions, which means that we can apply a permutation invariant operator such as the mean of all new states ${z}_{i, t + 1}$ to obtain an aggregated updated state. This approach provides additional scalability as we can process values in parallel and PHIL does not have to infer permutation invariance in ${\mathcal{V}}_{new}$ from data.
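The permutation invariance of the mean-pooled memory is easy to check directly; a minimal sketch:

```python
import numpy as np

def pooled_memory(states):
    """Mean-pool per-node GRU states z_{i,t+1} into one memory digest.

    The mean is permutation invariant, so the digest does not depend on
    the order in which fringe nodes are processed within a batch.
    """
    return np.mean(states, axis=0)

states = np.arange(12, dtype=float).reshape(4, 3)   # 4 fringe nodes, d = 3
shuffled = states[[2, 0, 3, 1]]                      # same nodes, different order
same = np.allclose(pooled_memory(states), pooled_memory(shuffled))
```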

Runtime complexity. Since $\forall i \in {\mathcal{V}}_{\text{new }} : \left| {\mathcal{N}}_{i}\right| \leq n$, Algorithm 2 together with neighborhood sampling runs in at most $n{c}_{1} + \left( {n + 1}\right) {c}_{2}$ operations per node $i \in {\mathcal{V}}_{\text{new }}$, which is $\mathcal{O}\left( 1\right)$ with respect to the size of the graph. Here, ${c}_{1}$ is the maximal number of operations associated with evaluating a node, such as performing robot collision checks in dynamically constructed graphs, and ${c}_{2}$ is the maximal number of model operations (e.g., $f$ and $\gamma$ operations) on the node set $\{ i\} \cup {\mathcal{N}}_{i}$. In general, we expect to learn a better search heuristic as $n$ increases (see Appendix D for ablations), but in some use cases ${c}_{1}$ may dominate overall complexity, so the hyperparameter $n$ lets practitioners tune the trade-off between constant factors and search effort minimization.
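The per-node bound can be made concrete by treating $c_1$ (node-evaluation cost, e.g. one collision check) and $c_2$ (model cost on $\{i\} \cup \mathcal{N}_i$) as abstract operation counts; the concrete numbers below are illustrative, not measured values. The bound is independent of the graph size.

```python
def per_node_ops(n, c1, c2):
    # worst-case operations for one fringe node: n evaluations of neighbors
    # plus model work on the node itself and its n sampled neighbors
    return n * c1 + (n + 1) * c2

# With n = 8 sampled neighbors: evaluation work grows with n, model work with n + 1.
assert per_node_ops(8, 10, 3) == 107
# Halving n trades heuristic quality for cheaper expansions.
assert per_node_ops(4, 10, 3) == 55
```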

## 5 Experiments

In our experiments, we evaluate PHIL both on benchmark heuristic learning datasets [1] (Section 5.1) and on a diverse set of graph datasets (Section 5.2). Finally, we show that PHIL can be applied to efficient planning in the context of drone flight (Section 5.3). Our main goal is to assess how PHIL compares to baseline methods in terms of the number of expansions necessary before the goal node is reached. Please refer to the supplementary material for information about baselines, an ablation study, and additional experiment details.

### 5.1 Heuristic search in grids

In this section, we evaluate PHIL on the 8 grid-based datasets of Bhardwaj et al. [1], each consisting of ${200} \times {200}$ 8-connected grid graphs. These datasets present challenging obstacle configurations for naive greedy planning heuristics, especially when ${v}_{s}$ is in the bottom-left of the grid and ${v}_{g}$ in the top-right. Each dataset contains 200 training graphs, 70 validation graphs, and 100 test graphs. Example graphs from each dataset can be found in Table 1.
Figure 4: Example of PHIL escaping local search minima.

We train PHIL with a hyperparameter configuration of $T = {128}$, ${t}_{\tau } = {32}$, ${\beta }_{0} = {0.7}$, $n = 8$, using rolled-in ${z}_{t}$ states as initial states for training. We use a 3-layer MLP of width 128 with LeakyReLU activations, followed by a DeeperGCN [47] graph convolution with softmax aggregation. Our memory's embedding dimensionality is 64. See Appendix C for an overview of our baselines and datasets.
<table><tr><td>Dataset</td><td/><td>Graph Examples</td><td/><td>SAIL</td><td>SL</td><td>CEM</td><td>QL</td><td>${h}_{euc}$</td><td>${h}_{man}$</td><td>A*</td><td>MHA*</td><td>BFWS</td><td>Neural ${\mathrm{A}}^{ * }$</td><td>PHIL</td></tr><tr><td>Alternating gaps</td><td/><td/><td/><td>0.039</td><td>0.432</td><td>0.042</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.34</td><td>0.546</td><td>0.024</td></tr><tr><td>Single Bugtrap</td><td/><td/><td/><td>0.158</td><td>0.214</td><td>0.057</td><td>1.000</td><td>0.184</td><td>0.192</td><td>1.000</td><td>0.286</td><td>0.099</td><td>0.394</td><td>0.077</td></tr><tr><td>Shifting gaps</td><td/><td/><td/><td>0.104</td><td>0.464</td><td>1.000</td><td>1.000</td><td>0.506</td><td>0.589</td><td>1.000</td><td>0.804</td><td>0.206</td><td>0.563</td><td>0.027</td></tr><tr><td>Forest</td><td/><td/><td/><td>0.036</td><td>0.043</td><td>0.048</td><td>0.121</td><td>0.041</td><td>0.043</td><td>1.000</td><td>0.075</td><td>0.039</td><td>0.399</td><td>0.027</td></tr><tr><td>Bugtrap+Forest</td><td/><td/><td/><td>0.147</td><td>0.384</td><td>0.182</td><td>1.000</td><td>0.410</td><td>0.337</td><td>1.000</td><td>3.177</td><td>0.149</td><td>0.651</td><td>0.135</td></tr><tr><td>Gaps+Forest</td><td/><td/><td/><td>0.221</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>1.000</td><td>0.401</td><td>0.580</td><td>0.039</td></tr><tr><td>Mazes</td><td/><td/><td/><td>0.103</td><td>0.238</td><td>0.479</td><td>0.399</td><td>0.185</td><td>0.171</td><td>1.000</td><td>0.279</td><td>0.095</td><td>1.000</td><td>0.069</td></tr><tr><td>Multiple Bugtraps</td><td/><td/><td/><td>0.479</td><td>0.480</td><td>1.000</td><td>0.835</td><td>0.648</td><td>0.617</td><td>1.000</td><td>0.876</td><td>0.169</td><td>0.331</td><td>0.136</td></tr></table>

Table 1: The number of expanded graph nodes of PHIL compared with the baselines. Out of all baselines, SAIL performs best. PHIL outperforms SAIL by 58.5% on average over all datasets, with a maximal search effort reduction of ${82.3}\%$ on the Gaps+Forest dataset.

Discussion. As we can see in Table 1, PHIL outperforms the best baseline (SAIL) on all datasets, reducing the number of explored nodes before ${v}_{g}$ is found by ${58.5}\%$ on average. Qualitatively, observing Figure 5, we can attribute these results to PHIL's ability to reduce redundancy in the nodes explored during a search. Further, PHIL is also capable of escaping local minima, as illustrated in Figure 4. However, note that we occasionally observe failure cases in practice, where PHIL gets stuck in a bugtrap-like structure. We discuss possible remedies and opportunities for future work in the supplementary material.

Runtime & convergence speed. PHIL converges in at most $N = {36}$ iterations with $m = 1$ and ${t}_{\tau } = {32}$ (i.e., after observing fewer than $N \cdot {t}_{\tau } \cdot \max \left( \left| {\mathcal{V}}_{\text{new }}\right| \right) = 9{,}216$ shortest path distances, where we take $\max \left( \left| {\mathcal{V}}_{\text{new }}\right| \right) = 8$ as the maximal size of ${\mathcal{V}}_{\text{new }}$). According to figures reported in [1], this is approximately $5 \times$ less data than it takes for SAIL to converge.
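The data-efficiency figure above is a direct product of the three quantities stated in the text:

```python
# Upper bound on shortest-path distances observed during training:
# N training iterations, rollout length t_tau, at most |V_new| = 8 fringe
# nodes queried per search step.
N, t_tau, max_v_new = 36, 32, 8
observed = N * t_tau * max_v_new
assert observed == 9_216
```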
Figure 5: In each image pair of this figure, we provide a qualitative comparison with the SAIL method. In particular, we show comparisons on the Shifting gaps, Gaps+Forest, Mazes, and Forest datasets. We can observe that PHIL (right) learns the appropriate heuristics for the given dataset and makes fewer redundant expansions than SAIL (left).

### 5.2 Search in real-life graphs of different structures

In this experiment, our goal is to demonstrate the general applicability of PHIL to various graphs. We train PHIL on 4 different groups of graph datasets: citation networks, biological networks, abstract syntax trees (ASTs), and road networks. For citation networks and road networks, we use the same graph for training and evaluation, with 100 random ${v}_{s},{v}_{g}$ pairs for testing. For biological networks and ASTs, we use train/validation/test splits of 80/10/10, and for the OGB [48] datasets, we use the provided splits.

<table><tr><td/><td>Dataset</td><td>$\left| \mathcal{D}\right|$</td><td>$\left| \bar{\mathcal{V}}\right|$</td><td>$\left| \overline{\mathcal{E}}\right|$</td><td>SL</td><td>A*</td><td>${h}_{euc}$</td><td>BFS</td><td>SAIL</td><td>BFWS</td><td>PHIL</td></tr><tr><td rowspan="5">Citation Networks</td><td>Cora (Sen et al. [49])</td><td>1</td><td>2,708</td><td>5,429</td><td>2.201</td><td>2.067</td><td>1.000</td><td>4.001</td><td>0.669</td><td>1.378</td><td>0.475</td></tr><tr><td>PubMed (Sen et al. [49])</td><td>1</td><td>19,717</td><td>44,338</td><td>2.157</td><td>2.983</td><td>1.000</td><td>3.853</td><td>1.196</td><td>1.000</td><td>0.745</td></tr><tr><td>CiteSeer (Sen et al. [49])</td><td>1</td><td>3,327</td><td>4,732</td><td>1.636</td><td>1.487</td><td>1.000</td><td>2.190</td><td>1.062</td><td>0.951</td><td>0.599</td></tr><tr><td>Coauthor (CS) (Shchur et al. [50])</td><td>1</td><td>18,333</td><td>81,894</td><td>1.571</td><td>1.069</td><td>1.000</td><td>2.820</td><td>1.941</td><td>1.026</td><td>0.835</td></tr><tr><td>Coauthor (Physics) (Shchur et al. [50])</td><td>1</td><td>34,493</td><td>247,962</td><td>4.076</td><td>1.081</td><td>1.000</td><td>4.523</td><td>-</td><td>1.012</td><td>0.964</td></tr><tr><td rowspan="4">Biological Networks</td><td>OGBG-Molhiv (Hu et al. [48])</td><td>41,127</td><td>25.5</td><td>27.5</td><td>1.086</td><td>1.065</td><td>1.000</td><td>1.267</td><td>1.104</td><td>1.146</td><td>1.016</td></tr><tr><td>PPI (Zitnik et al. [51])</td><td>24</td><td>2,372.67</td><td>34,113.16</td><td>0.772</td><td>0.831</td><td>1.000</td><td>5.618</td><td>1.746</td><td>3.941</td><td>0.658</td></tr><tr><td>Proteins (Full) (Morris et al. [52])</td><td>1,113</td><td>39.06</td><td>72.82</td><td>0.995</td><td>0.997</td><td>1.000</td><td>2.645</td><td>0.891</td><td>0.966</td><td>0.831</td></tr><tr><td>Enzymes (Morris et al. [52])</td><td>600</td><td>32.63</td><td>62.14</td><td>1.073</td><td>1.007</td><td>1.000</td><td>1.358</td><td>1.036</td><td>0.992</td><td>0.757</td></tr><tr><td>ASTs</td><td>OGBG-Code2 (Hu et al. [48])</td><td>452,741</td><td>125.2</td><td>124.2</td><td>1.196</td><td>1.013</td><td>1.000</td><td>1.267</td><td>1.029</td><td>0.817</td><td>1.219</td></tr><tr><td rowspan="2">Road Networks</td><td>OSMnx - Modena (Boeing [53])</td><td>1</td><td>29,324</td><td>38,309</td><td>2.904</td><td>3.085</td><td>1.000</td><td>3.493</td><td>1.182</td><td>0.997</td><td>0.489</td></tr><tr><td>OSMnx - New York (Boeing [53])</td><td>1</td><td>54,128</td><td>89,618</td><td>39.424</td><td>36.529</td><td>1.000</td><td>63.352</td><td>1.583</td><td>1.013</td><td>0.962</td></tr></table>

Table 2: Comparison of PHIL with baseline approaches on 4 groups of datasets: citation networks, biological networks, abstract syntax trees, and road networks. "-" denotes exceeding a 4-day training time limit. On average across all datasets, PHIL outperforms the best baseline per dataset by 13.4%. Discounting the OGBG datasets, this number becomes 19.5%.

Similarly to Section 5.1, our MLP has four layers of width 128 with LeakyReLU activations, and we use a DeeperGCN [47] graph convolution with softmax aggregation. We use the node and edge features provided in each dataset, except for a few minor modifications discussed in Appendix A & Appendix C. For our learning-based baseline method, we train an MLP of depth 5 and width 256 using supervised learning (SL).

Discussion. The results presented in Table 2 suggest that PHIL can learn superior search heuristics compared with baseline methods, outperforming the top baseline per dataset in terms of visited nodes during a search by ${13.4}\%$ on average. The two datasets where PHIL fell short of other baselines are OGBG-Molhiv and OGBG-Code2. The OGBG-Code2 dataset adopts a project split [54] and OGBG-Molhiv adopts a scaffold split [55], both of which ensure that graphs of different structure are present in the training & test sets. Although PHIL improved upon uninformed search (BFS) on the OGB datasets, structural graph consistency is explicitly discouraged in the above-mentioned OGBG splits. Without the OGBG datasets, PHIL improves on the top baselines per dataset by ${19.5}\%$ on average, and upon the Euclidean node feature heuristic $\left( {h}_{\text{euc }}\right)$ by ${20.4}\%$. Note that we trained PHIL for up to $N = {60}$ iterations, which means that it only encountered a small subset of the pathfinding problems in the single graph setting, and thus had to generalize to learn useful heuristics. Even in Cora, the $\left| \mathcal{D}\right| = 1$ dataset with the fewest nodes, PHIL observed roughly 6,000 node distances during training, which is less than ${0.2}\%$ of the total distances in the Cora graph.

### 5.3 Planning for drone flight

In our final experiment, we use PHIL to plan collision-free paths in a practical drone flight use case within an indoor environment. We built our environment using the CoppeliaSim simulator [56] and the Ivy framework [57]. Figure 6 presents the environment, which we refer to as room adversarial in Table 3. For more detail about each environment, please refer to the supplementary material. We discretize the environments into 3D grid graphs of size ${50} \times {50} \times {25}$, and randomly remove 5 sub-graphs of size $5 \times 5 \times 5$ both during training and testing, thereby simulating real-life planning scenarios with random obstacles. The hyperparameter configuration and the specific architecture we utilize are equivalent to Section 5.1, but with $n = 4$. The node features are 3D grid coordinates, and the baselines include supervised learning (SL), ${h}_{euc}$, ${\mathrm{A}}^{ * }$, and BFS, as in Sections 5.1 and 5.2. In Table 3, we report the ratio of expanded nodes with respect to ${h}_{euc}$.
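The discretization above can be sketched as follows. The grid dimensions, the number of removed blocks, and the block size come from the text; the representation as a set of coordinate tuples and the uniform random block placement are our own illustrative assumptions.

```python
import random

def grid_with_obstacles(nx=50, ny=50, nz=25, blocks=5, side=5, seed=0):
    # 50 x 50 x 25 grid nodes, identified by their 3D coordinates
    rng = random.Random(seed)
    nodes = {(x, y, z) for x in range(nx) for y in range(ny) for z in range(nz)}
    # carve out `blocks` random side x side x side sub-graphs as obstacles
    for _ in range(blocks):
        ox, oy, oz = (rng.randrange(nx - side),
                      rng.randrange(ny - side),
                      rng.randrange(nz - side))
        nodes -= {(ox + dx, oy + dy, oz + dz)
                  for dx in range(side) for dy in range(side) for dz in range(side)}
    return nodes

nodes = grid_with_obstacles()
# blocks may overlap, so at most 5 * 5**3 = 625 nodes are removed
assert 50 * 50 * 25 - 625 <= len(nodes) < 50 * 50 * 25
```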

Figure 6: This figure illustrates the room adversarial environment with an example planning problem (red) and the graph expanded by PHIL (blue).

Video demo. We provide a video demonstration of PHIL running in room adversarial: https://cutt.ly/eniu5ax.
<table><tr><td>Dataset</td><td>SL</td><td>A*</td><td>${h}_{euc}$</td><td>BFS</td><td>SAIL</td><td>BFWS</td><td>PHIL</td><td>Shortest path</td></tr><tr><td>Room simple</td><td>1.124</td><td>76.052</td><td>1.000</td><td>291.888</td><td>0.973</td><td>1.286</td><td>0.785</td><td>0.782</td></tr><tr><td>Room adversarial</td><td>2.022</td><td>67.215</td><td>1.000</td><td>238.768</td><td>0.944</td><td>1.583</td><td>0.895</td><td>0.853</td></tr></table>

Table 3: Results of PHIL in the context of planning for indoor UAV flight. PHIL outperforms all baselines in both the room simple and room adversarial environments, while remaining close to the optimal number of expansions.

Discussion. As we can observe in Table 3, PHIL outperforms all baselines in both environments. Interestingly, PHIL expands only approximately 0.3% more nodes than the minimum possible in the simple room, and ${4.9}\%$ more in the adversarial room. The same figures for the greedy method $\left( {h}_{euc}\right)$ are ${27.8}\%$ and ${17.2}\%$, respectively. These results indicate that PHIL is capable of learning planning strategies that are close to optimal in both simple and adversarial graphs, while the performance of naive heuristics degrades.

### 5.4 Runtime Analysis

We summarize test runtimes of different approaches in Appendix G. PHIL runs 57.9% faster than BFWS and 32.2% faster than SAIL, and runs only moderately slower than traditional A* (34.7%) and ${h}_{man}$ (18.3%). Although Neural A* is ${71.0}\%$ faster than PHIL because it casts the whole search process into matrix operations on images, it cannot be employed in a generic search setting.

## 6 Conclusion

In our work, we consider the problem of learning to search for feasible paths in graphs efficiently. We propose a model and a training procedure to learn search heuristics that can be easily deployed across diverse graphs, with tunable trade-off parameters between constant factors and performance. Our results demonstrate that PHIL outperforms current state-of-the-art approaches and can be applied to various graphs with practical use cases.

## References

[1] Mohak Bhardwaj, Sanjiban Choudhury, and Sebastian Scherer. Learning heuristic search via imitation. In Conference on Robot Learning, 2017.

[2] Binghong Chen, Chengtao Li, Hanjun Dai, and Le Song. Retro*: Learning retrosynthetic planning with neural guided A* search. In ICML, 2020.

[3] Martin Gebser, Benjamin Kaufmann, Javier Romero, Ramón Otero, Torsten Schaub, and Philipp Wanko. Domain-specific heuristics in answer set programming. In AAAI, 2013.

[4] Thi Thoa Mac, Cosmin Copot, Duc Trung Tran, and Robin De Keyser. Heuristic approaches in robot path planning: A survey. In Robotics and Autonomous Systems, 2016.

[5] Abhishek Sharma and Keith M. Goolsbey. Identifying useful inference paths in large commonsense knowledge bases by retrograde analysis. In AAAI, 2017.

[6] Cheng-Yu Yeh, Hsiang-Yuan Yeh, Carlos Roberto Arias, and Von-Wun Soo. Pathway detection from protein interaction networks and gene expression data using color-coding methods and A* search algorithms. In The Scientific World Journal, 2012.

[7] Judea Pearl. Heuristics: Intelligent search strategies for computer problem solving. 1984.

[8] Danish Khalidi, Dhaval Gujarathi, and Indranil Saha. T*: A heuristic search based path planning algorithm for temporal logic specifications. In ICRA, 2020.

[9] Bhargav Adabala and Zlatan Ajanovic. A multi-heuristic search-based motion planning for autonomous parking. In 30th International Conference on Automated Planning and Scheduling: Planning and Robotics Workshop, 2020.

[10] Fan Xie, Hootan Nakhost, and Martin Müller. Planning via random walk-driven local search. In Twenty-Second International Conference on Automated Planning and Scheduling, 2012.

[11] Fan Xie, Martin Müller, and Robert Holte. Adding local exploration to greedy best-first search in satisficing planning. In AAAI, 2014.

[12] Nir Lipovetzky and Hector Geffner. Best-first width search: Exploration and exploitation in classical planning. In AAAI, 2017.

[13] Florent Teichteil-Königsbuch, Miquel Ramirez, and Nir Lipovetzky. Boundary extension features for width-based planning with simulators on continuous-state domains. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2021.

[14] Blai Bonet and Héctor Geffner. Planning as heuristic search. In Artificial Intelligence, pages 5-33, 2001.

[15] Lin Zhu and Robert Givan. Landmark extraction via planning graph propagation. 2003.

[16] Silvia Richter and Matthias Westphal. The LAMA planner: Guiding cost-based anytime planning with landmarks. 2010.

[17] Ilya Sutskever. Training recurrent neural networks. University of Toronto, Toronto, Canada, 2013.

[18] Stephane Ross and J. Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. In arXiv preprint arXiv:1406.5979, 2014.

[19] Sanjiban Choudhury, Ashish Kapoor, Gireeja Ranade, Sebastian Scherer, and Debadeepta Dey. Adaptive information gathering via imitation learning. 2017.

[20] Shahab Jabbari Arfaee, Sandra Zilles, and Robert C. Holte. Learning heuristic functions for large state spaces. In Artificial Intelligence, 2011.

[21] Jesús Virseda, Daniel Borrajo, and Vidal Alcázar. Learning heuristic functions for cost-based planning. In Planning and Learning, 2013.

[22] Christopher Makoto Wilt and Wheeler Ruml. Building a heuristic for greedy search. In SOCS, 2015.

[23] Caelan Reed Garrett, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. Learning to rank for synthesizing planning heuristics. In IJCAI, 2016.

[24] Jordan Thayer, Austin Dionne, and Wheeler Ruml. Learning inadmissible heuristics during search. In Proceedings of the International Conference on Automated Planning and Scheduling, 2011.

[25] Soonkyum Kim and Byungchul An. Learning heuristic A*: Efficient graph search using neural network. In ICRA, 2020.

[26] Yuka Ariki and Takuya Narihira. Fully convolutional search heuristic learning for rapid path planners. In arXiv preprint arXiv:1908.03343, 2019.

[27] Ryo Terasawa, Yuka Ariki, Takuya Narihira, Toshimitsu Tsuboi, and Kenichiro Nagasaka. 3D-CNN based heuristic guided task-space planner for faster motion planning. In ICRA, 2020.

[28] Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, and Asako Kanezaki. Path planning using neural A* search. In ICML, 2021.

[29] Alberto Archetti, Marco Cannici, and Matteo Matteucci. Neural weighted A*: Learning graph costs and heuristics with differentiable anytime A*. 2021.

[30] Elias B. Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In NeurIPS, 2017.

[31] Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial optimization with graph convolutional networks and guided tree search. In NeurIPS, 2018.

[32] Nikolaos Karalias and Andreas Loukas. Erdos goes neural: An unsupervised learning framework for combinatorial optimization on graphs. In NeurIPS, 2020.

[33] David Silver and Joel Veness. Monte-Carlo planning in large POMDPs. In NeurIPS, 2010.

[34] Arthur Guez, Theophane Weber, Ioannis Antonoglou, Karen Simonyan, Oriol Vinyals, Daan Wierstra, Rémi Munos, and David Silver. Learning to search with MCTSnets. In ICML, 2018.

[35] Andreea Deac, Petar Veličković, Ognjen Milinković, Pierre-Luc Bacon, Jian Tang, and Mladen Nikolić. XLVIN: Executed latent value iteration nets. In arXiv preprint arXiv:2010.13146, 2020.

[36] Péter Karkus, David Hsu, and Wee Sun Lee. QMDP-Net: Deep learning for planning under partial observability. In NeurIPS, 2017.

[37] Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration networks. In NeurIPS, 2016.

[38] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racanière, David Reichert, Théophane Weber, Daan Wierstra, and Peter Battaglia. Learning model-based planning from scratch. In arXiv preprint arXiv:1707.06170, 2017.

[39] Sébastien Racanière, Theophane Weber, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter W. Battaglia, Demis Hassabis, David Silver, and Daan Wierstra. Imagination-augmented agents for deep reinforcement learning. In NeurIPS, 2017.

[40] Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. In Machine Learning, 2009.

[41] Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé III, and John Langford. Learning to search better than your teacher. In ICML, 2015.

[42] Wen Sun, Arun Venkatraman, Geoffrey J. Gordon, Byron Boots, and J. Andrew Bagnell. Deeply AggreVaTeD: Differentiable imitation learning for sequential prediction. In ICML, 2017.

[43] Michael Laskey, Jonathan Lee, Roy Fox, Anca Dragan, and Ken Goldberg. DART: Noise injection for robust imitation learning. In Conference on Robot Learning, 2017.

[44] Wen Sun, J. Andrew Bagnell, and Byron Boots. Truncated horizon policy search: Combining reinforcement learning & imitation learning. In ICLR, 2018.

[45] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.

[46] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In ICML, 2017.

[47] Guohao Li, Chenxin Xiong, Ali Thabet, and Bernard Ghanem. DeeperGCN: All you need to train deeper GCNs. In arXiv preprint arXiv:2006.07739, 2020.

[48] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. In NeurIPS, 2020.

[49] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. In AI Magazine, 2008.

[50] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. In arXiv preprint arXiv:1811.05868, 2018.

[51] Marinka Zitnik and Jure Leskovec. Predicting multicellular function through multi-layer tissue networks. In Bioinformatics, 2017.

[52] Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. TUDataset: A collection of benchmark datasets for learning with graphs. In arXiv preprint arXiv:2007.08663, 2020.

[53] Geoff Boeing. OSMnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks. In Computers, Environment and Urban Systems, 2017.

[54] Miltiadis Allamanis. The adverse effects of code duplication in machine learning models of code. In ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, 2019.

[55] Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande. MoleculeNet: A benchmark for molecular machine learning. In Chemical Science, 2018.

[56] E. Rohmer, S. P. N. Singh, and M. Freese. CoppeliaSim (formerly V-REP): A versatile and scalable robot simulation framework. In IROS, 2013.

[57] Daniel Lenton, Fabio Pardo, Fabian Falck, Stephen James, and Ronald Clark. Ivy: Templated deep learning for inter-framework portability. In arXiv preprint arXiv:2102.02886, 2021.

[58] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In ICML, 2019.

[59] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In ACM SIGKDD, 2016.

[60] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. 2002.

[61] Sandip Aine, Siddharth Swaminathan, Venkatraman Narayanan, Victor Hwang, and Maxim Likhachev. Multi-heuristic A*. In The International Journal of Robotics Research, 2016.

[62] Edo Cohen-Karlik, Avichai Ben David, and Amir Globerson. Regularizing towards permutation invariance in recurrent models. In NeurIPS, 2020.

[63] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In NeurIPS Deep Learning Workshop, 2013.

[64] Pieter-Tjerk De Boer, Dirk P. Kroese, Shie Mannor, and Reuven Y. Rubinstein. A tutorial on the cross-entropy method. In Annals of Operations Research, 2005.

[65] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.

[66] Petar Velickovic, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In ICLR, 2020.

[67] Petar Velickovic. TikZ. https://github.com/PetarV-/TikZ, last accessed on 01/6/21.

papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_tex/Initial_manuscript.tex

§ LEARNING GRAPH SEARCH HEURISTICS

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email
§ ABSTRACT
Searching for a path between two nodes in a graph is one of the most well-studied and fundamental problems in computer science. In numerous domains such as robotics, AI, or biology, practitioners develop search heuristics to accelerate their pathfinding algorithms. However, it is a laborious and complex process to hand-design heuristics based on the problem and the structure of a given use case. Here we present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data by leveraging recent advances in imitation learning and graph representation learning. At training time, we aggregate datasets of search trajectories and ground-truth shortest path distances, which we use to train a specialized graph neural network-based heuristic function using backpropagation through steps of the pathfinding process. Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph sizes, and can be easily incorporated in an algorithm such as ${\mathrm{A}}^{ * }$ at test time. Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by ${58.5}\%$ on average, can be directly applied in diverse graphs ranging from biological networks to road networks, and allows for fast planning in time-critical robotics domains.

§ 1 INTRODUCTION

Search heuristics are essential in several domains, including robotics, AI, biology, and chemistry [1-6]. For example, in robotics, complex robot geometries often yield slow collision checks, and search algorithms are constrained by the robot's onboard computation resources, requiring well-performing search heuristics that visit as few nodes as possible [1, 4]. In AI, domain-specific search heuristics are useful for improving the performance of inference engines operating on knowledge bases [3, 5]. Search heuristics have also been developed to reduce search efforts in protein-protein interaction networks [6] and to plan chemical reactions that synthesize target chemical products [2]. This broad set of applications underlines the importance of good search heuristics that are applicable to a wide range of problems.
Figure 1: The goal is to navigate (find a path) from the start to the goal node. While BFS visits many nodes to find a start-to-goal path (left), one can use a heuristic based on the features of the nodes (e.g., Euclidean distance) on the graph to reduce the search effort (middle). We propose PHIL to learn a tailored search heuristic for a given graph, capable of reducing the number of visited nodes even further by exploiting the inductive biases of the graph (right).
The search task can be formulated as a pathfinding problem on a graph: given a graph, the task is to navigate and find a short feasible path from a start node to a goal node, while visiting as few nodes as possible in the process (Figure 1). The most straightforward approach is to launch a search algorithm such as breadth-first search (BFS) and iteratively expand the graph from the start node until it reaches the goal node. Since BFS does not harness any prior knowledge about the graph, it usually visits many nodes before reaching the goal, which is problematic in domains such as robotics where visiting nodes is costly. To visit fewer nodes during the search, one may use domain-specific information about the graph via a heuristic function [7], which allows one to define a distance metric on graph nodes to prune directions that seem less promising to explore. Unfortunately, coming up with good search heuristics requires significant domain expertise and manual effort.
While there has been significant progress in designing search heuristics, it remains a challenging problem. Classical approaches [8, 9] tend to hand-design search heuristics, which requires domain knowledge and a lot of trial and error. To alleviate this problem, there has been significant development in general-purpose search heuristics based on trading off greedy expansions and novelty-based exploration [10-13] or on search problem simplifications [14-16]. These approaches alleviate some of the common pitfalls of goal-directed heuristics, but we demonstrate that, where possible, it is useful to learn domain-specific heuristics that can better exploit problem structure.
On the other hand, learning-based methods face a different set of challenges. First, the data distribution is not i.i.d., as newly encountered graph nodes depend on past heuristic values, which means that supervised learning-based methods are not directly applicable. Second, heuristics should run fast, ideally in constant time; otherwise, the overall asymptotic time complexity of the search procedure could increase. Finally, as the environment (search graph) sizes increase, reinforcement learning-based heuristic learning approaches tend to perform poorly [1]. State-of-the-art imitation learning-based methods can learn useful search heuristics [1]; however, these methods still rely on feature engineering for a specific domain and do not generally guarantee constant time complexity with respect to graph size.
Figure 2: Main components of PHIL: On the left, using a greedy mixture policy induced by the current version of our parameterized heuristic ${h}_{\theta }$ and an oracle heuristic ${h}^{ * }$ (i.e., a heuristic that correctly determines distances between nodes), we roll out a search trajectory from the start node to the goal node. Each trajectory step contains a set of newly added fringe nodes with bounded random subsets of their 1-hop neighborhoods and their oracle $\left( {h}^{ * }\right)$ distances to the goal node. Trajectories are aggregated throughout the training procedure. On the right, we use truncated backpropagation through time on each collected trajectory to train ${h}_{\theta }$, where $\widehat{h}$ is the predicted distance between ${x}_{2}$ and ${x}_{g}$, and ${z}_{2}$ is the updated state of the memory. Here, the memory captures the embedding of the graph visited so far.
In this paper, we propose Path Heuristic with Imitation Learning (PHIL), a framework that extends the recent imitation learning-based heuristic search paradigm with a learnable explored graph memory. This means that PHIL learns a representation that allows it to capture the structure of the so far explored graph, so that it can better select which node to explore next (Figure 2). We train our approach to predict the node-to-goal distances (${h}^{ * }$ in Figure 2) of graph nodes during search. To train our memory module, which captures the explored graph, we use truncated backpropagation through time (TBTT) [17], where we utilize ground-truth node-to-goal distances as a supervision signal at each search step. Our TBTT procedure is embedded within an adaptation of the AggreVaTe imitation learning algorithm [18]. PHIL also includes a specialized graph neural network architecture, which allows us to apply PHIL to diverse graphs from different domains.
We evaluate PHIL on standard benchmark heuristic learning datasets (Section 5.1), diverse graph-based datasets from different domains (Section 5.2), and practical UAV flight use cases (Section 5.3). Experiments demonstrate that PHIL outperforms state-of-the-art heuristic learning methods by up to $4\times$. Further, PHIL performs within 4.9% of an oracle in indoor drone planning scenarios, which is up to a 21.5% reduction compared with commonly used approaches. In practice, our contributions enable practitioners to quickly extract useful search heuristics from their graph datasets without any hand-engineering.
§ 2 PRELIMINARIES
Graph search. Suppose that we are given an unweighted connected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is a set of nodes and $\mathcal{E}$ a corresponding set of edges. Further suppose that each node $i \in \mathcal{V}$ has corresponding features $x_i \in \mathbb{R}^{D_v}$, and each edge $(i, j) \in \mathcal{E}$ has features $e_{ij} \in \mathbb{R}^{D_e}$. Assume that we are also given a start node $v_s \in \mathcal{V}$ and a goal node $v_g \in \mathcal{V}$. At any stage of our search algorithm, we can partition the nodes of the graph into three sets as $\mathcal{V} = \mathrm{CLOSE} \cup \mathrm{OPEN} \cup \mathrm{REST}$, where CLOSE contains the nodes already explored, OPEN contains the candidate nodes for exploration (i.e., all nodes connected to some node in CLOSE, but not yet in CLOSE), and REST is the rest of the graph. Each expansion moves a node from OPEN to CLOSE and adds that node's neighbors from REST to OPEN; we call the set of fringe nodes newly added at each search step $\mathcal{V}_{\text{new}}$. At the start of the search procedure, $\mathrm{CLOSE} = \{v_s\}$, and we expand nodes until $v_g$ is encountered (i.e., until $v_g \in \mathrm{CLOSE}$).
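To make the CLOSE/OPEN/REST bookkeeping concrete, here is a minimal Python sketch of a single expansion step (this is our own illustrative code, not the paper's implementation; the function and variable names are ours):

```python
def expand(adj, close, open_, node):
    """Move `node` from OPEN to CLOSE and add its unseen neighbors
    (currently in REST) to OPEN; return the new fringe nodes V_new."""
    open_.remove(node)
    close.add(node)
    v_new = {v for v in adj[node] if v not in close and v not in open_}
    open_ |= v_new
    return v_new

# Toy path graph 0-1-2-3, with the search started at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
close, open_ = {0}, set(adj[0])  # CLOSE = {v_s}, OPEN = neighbors of v_s
v_new = expand(adj, close, open_, 1)  # expand node 1; V_new = {2}
```

REST never needs to be stored explicitly: it is implicitly everything not yet in CLOSE or OPEN, which is what makes constant-time expansions possible.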
Greedy best-first search. We can perform greedy best-first search using a greedy fringe expansion policy that always expands the node $v \in \mathrm{OPEN}$ minimizing $h(v, v_g)$. Here, $h : \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ is a tailored heuristic function for a given use case. In our work, we are interested in learning a function $h$ that predicts shortest path lengths, thereby minimizing $|\mathrm{CLOSE}|$ in a greedy best-first search regime.
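The greedy expansion policy can be sketched with a priority queue over OPEN; this is a standard textbook formulation (our own sketch, not the paper's code), shown here on a small grid with a Manhattan-distance heuristic:

```python
import heapq

def greedy_best_first(adj, v_s, v_g, h):
    """Greedy best-first search: repeatedly expand the OPEN node with the
    smallest heuristic value h(v, v_g); return CLOSE once v_g is reached."""
    close = set()
    fringe = [(h(v_s, v_g), v_s)]  # priority queue over OPEN
    in_open = {v_s}
    while fringe:
        _, v = heapq.heappop(fringe)
        close.add(v)
        if v == v_g:
            return close
        for u in adj[v]:
            if u not in close and u not in in_open:
                in_open.add(u)
                heapq.heappush(fringe, (h(u, v_g), u))
    return None  # v_g unreachable

# 2x3 grid labeled by (row, col); heuristic = Manhattan distance to goal.
adj = {(r, c): [(r + dr, c + dc) for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
                if 0 <= r + dr < 2 and 0 <= c + dc < 3]
       for r in range(2) for c in range(3)}
manhattan = lambda v, g: abs(v[0] - g[0]) + abs(v[1] - g[1])
close = greedy_best_first(adj, (0, 0), (1, 2), manhattan)
```

On this obstacle-free grid the heuristic guides the search straight to the goal, so CLOSE contains only four of the six nodes, whereas BFS would expand more before reaching $(1, 2)$.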
Imitation of perfect heuristics. Partially observable Markov decision processes (POMDPs) are a suitable framework to describe the problem of learning search heuristics [1]. We take $s = (\mathrm{CLOSE}, \mathrm{OPEN}, \mathrm{REST})$ as our state, an action $a \in \mathcal{A}$ corresponds to moving a node from OPEN to CLOSE, and the observations $o \in \mathcal{O}$ are the features of nodes newly included in OPEN. Note that one could consider an MDP framework to learn heuristics, but the time complexity of operating on the whole state is in most cases prohibitive. We also define a history $\psi \in \Psi$ as a sequence of observations $\psi = {o}_{1},{o}_{2},{o}_{3},\ldots$ Our work leverages the observation that a heuristic function that correctly determines the length of the shortest path between fringe nodes and the goal node will, when used during greedy best-first search, also yield minimal $|\mathrm{CLOSE}|$. For training, we adopt a perfect heuristic ${h}^{ * }$, similar to [1], which has full information about $s$ during search. Such an oracle can provide ground-truth distances ${h}^{ * }\left( {s,v,{v}_{g}}\right)$, where $v \in \mathrm{OPEN}$. Finally, we define a greedy best-first search policy ${\pi }_{\theta }$ that uses a parameterized heuristic ${h}_{\theta }$ to expand nodes from OPEN with minimal heuristic values. One could also directly use a POMDP solver for the above-described problem, but this approach is usually infeasible due to the dimensionality of the search state [19].
§ 3 RELATED WORK
General purpose heuristic design. There has been significant research into designing general-purpose heuristics for speeding up satisficing planning. The first set of approaches is based on simplifying the search problem, for example using landmark heuristics [14, 16]. The next set of approaches aims to include novelty-based exploration in greedy best-first search [10-13]. The latter approaches, in particular best-first width search (BFWS) [12, 13], have shown state-of-the-art performance in numerous settings. We show that in domains where data is available, it can be more effective to incorporate a learned heuristic into a greedy best-first search procedure.
Learning heuristic search. Numerous previous works attempt to learn search heuristics: Arfaee et al. [20] propose to improve heuristics iteratively, Virseda et al. [21] learn to combine heuristics to estimate graph node distances, Wilt et al. [22] and Garrett et al. [23] propose to learn node rankings, Thayer et al. [24] suggest inferring heuristics during a search, and Kim et al. [25] train a neural network to predict graph node distances. These methods generally do not consider the non-i.i.d. nature of heuristic search. Further, Bhardwaj et al. [1] propose SAIL, where heuristic learning is framed as an imitation learning problem with cost-to-go oracles. The SAIL heuristic uses hand-designed features tailored for obstacle avoidance, with a time complexity linear in the number of explored grid nodes found to be colliding with an obstacle. Feature engineering becomes more difficult as we attempt to learn heuristics on diverse graphs such as those seen in Section 5.2, where we may need expert knowledge. Further, heuristics whose time complexity is not constant in the size of the graph [1, 26-29] generally scale poorly with graph size and hence have constrained use cases. Recent approaches to learning heuristics include Retro* [2] by Chen et al., where a heuristic is learned in the context of AND-OR search trees for chemical retrosynthetic planning. Our work focuses on a more general graph setting.
There has been significant progress on learning heuristics for NP-hard combinatorial optimization problems [30-32]. Still, these heuristic learning methods, due to their time complexities, are impractical for the polynomial-time search problems on which this work focuses.
Learning general purpose search. Learning general search policies is a very well-studied research area with a rich set of developments and applications. These include Monte Carlo Tree Search methods [33, 34], implicit planning methods [35-37], and imagination-based planning approaches [38, 39]. Learning search heuristics can be seen as a special case of general purpose search, where the search problem is treated as a partially observable Markov decision process with restricted action evaluation (see Section 4), and with models running in $\mathcal{O}(1)$ to remain competitive time-complexity-wise on problems where best-first search performs well. General purpose search methods do not take the above-mentioned constraints into account, which motivates the development of tailored approaches for learning heuristics [1, 2].
Imitation learning. Our approach builds on prior work in imitation learning (IL) with cost-to-go oracles. Cost-to-go oracles have been incorporated in the context of IL in methods such as SEARN [40], AggreVaTe [18], LOLS [41], AggreVaTeD [42], DART [43], and THOR [44]. SAIL [1] presents an AggreVaTe-based algorithm for learning heuristic search. We extend SAIL by incorporating a recurrent $Q$-like function, in which sense our algorithm more closely resembles AggreVaTeD by Sun et al. [42]. While a recurrent policy can be easily incorporated into AggreVaTeD, we cannot use a policy to evaluate actions. This is because we would either have to evaluate all actions in a state, which is computationally infeasible, or we would have to give up on taking actions that are not in the most recent version of the search fringe, which would degrade performance (see Section 4).
§ 4 PATH HEURISTIC WITH IMITATION LEARNING
Training objective. With the aim of minimizing $|\mathrm{CLOSE}|$ after search, our goal is to train a parameterized heuristic function ${h}_{\theta } : \Psi \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ to predict ground-truth node distances ${h}^{ * }$ and to use ${h}_{\theta }$ within a greedy best-first policy ${\pi }_{\theta }$ at test time. More specifically, we assume access to a distribution over graphs ${P}_{\mathcal{G}}$, a start-goal node distribution ${P}_{{v}_{sg}}\left( {\cdot \mid \mathcal{G}}\right)$, and a time horizon $T$. Moreover, we assume a joint state-history distribution $s,\psi \sim {P}_{s}\left( {\cdot \mid \mathcal{G},t,{\pi }_{\theta },{v}_{s},{v}_{g}}\right)$, where ${P}_{s}$ represents the probability of our search being in state $s$ at time $0 \leq t \leq T$ on graph $\mathcal{G}$ with pathfinding problem $\left( {{v}_{s},{v}_{g}}\right)$, under a greedy best-first search policy ${\pi }_{\theta }$ using heuristic ${h}_{\theta }$. Hence, our goal can be summarized as minimizing the following objective:
$$
\mathcal{L}\left( \theta \right) = \underset{\begin{matrix} {\mathcal{G} \sim {P}_{\mathcal{G}},} \\ {\left( {{v}_{s},{v}_{g}}\right) \sim {P}_{{v}_{sg}},} \\ {t \sim \mathcal{U}\left( {0,\ldots ,T}\right) ,} \\ {s,\psi \sim {P}_{s}} \end{matrix}}{\mathbb{E}}\left\lbrack {\frac{1}{\left| \mathrm{OPEN}\right| }\mathop{\sum }\limits_{{v \in \mathrm{OPEN}}}{\left( {h}^{ * }\left( s,v,{v}_{g}\right) - {h}_{\theta }\left( \psi ,v,{v}_{g}\right) \right) }^{2}}\right\rbrack \tag{1}
$$
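For a single sampled state, the inner term of Eq. (1) is simply a mean squared error between oracle and predicted distances over the current OPEN set. A minimal pure-Python sketch (the distance values below are made up for illustration):

```python
def heuristic_loss(h_star, h_pred):
    """Inner term of the objective: mean squared error between oracle
    distances h*(s, v, v_g) and predictions h_theta(psi, v, v_g)
    over all v in OPEN."""
    assert len(h_star) == len(h_pred) and h_star
    return sum((a - b) ** 2 for a, b in zip(h_star, h_pred)) / len(h_star)

# Hypothetical oracle vs. predicted distances for |OPEN| = 4 fringe nodes.
loss = heuristic_loss([3.0, 2.0, 5.0, 4.0], [2.5, 2.0, 6.0, 3.0])
```

The outer expectation over graphs, start-goal pairs, and time steps is what the sampling loops of the training algorithm approximate.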
Before we describe the algorithm that can be used to minimize $\mathcal{L}$ , we rewrite ${h}_{\theta }$ to include a memory digest component $\left( {z}_{t}\right)$ , which represents an embedding of $\psi$ at time step $t$ . Hence, ${h}_{\theta }$ becomes ${h}_{\theta } : {\mathbb{R}}^{d} \times \mathcal{O} \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ , where $d$ is the dimensionality of our memory’s embedding space. As opposed to previous methods [1], ${z}_{t}$ allows us to automatically extract relevant features for heuristic
Algorithm 1: PHIL - Sequential Heuristic Training

    Obtain hyperparameters $T, {\beta}_{0}, N, m, {t}_{\tau}$;
    Initialize $\mathcal{D} \leftarrow \varnothing$, ${h}_{{\theta}_{1}}$;
    for $i = 1, \ldots, N$ do
        Sample $\mathcal{G} \sim {P}_{\mathcal{G}}$;
        Sample ${v}_{s}, {v}_{g} \sim {P}_{{v}_{sg}}$;
        Set $\beta \leftarrow {\beta}_{0}^{i}$;
        Set mixture policy ${\pi}_{\text{mix}} \leftarrow (1 - \beta) \cdot {\pi}_{{\theta}_{i}} + \beta \cdot {\pi}^{*}$;
        Collect $m$ trajectories ${\tau}_{ij}$ as follows;
        for $j = 1, \ldots, m$ do
            Sample $t \sim \mathcal{U}(0, \ldots, T - {t}_{\tau})$;
            Roll-in $t$ time steps of ${\pi}_{{\theta}_{i}}$ to obtain ${z}_{t}$ and new state ${s}_{t} = (\mathrm{CLOSE}^{0}, \mathrm{OPEN}^{0}, \mathrm{REST}^{0})$;
            Roll-out trajectory ${\tau}_{ij}$ as follows;
            for $k = 1, \ldots, {t}_{\tau}$ do
                Update ${s}_{t+k-1}$ using ${\pi}_{\text{mix}}$ to get new state ${s}_{t+k}$ and new fringe state $\mathrm{OPEN}^{k}$;
                Obtain new fringe nodes ${\mathcal{V}}_{\text{new}} = \mathrm{OPEN}^{k} \smallsetminus \mathrm{OPEN}^{k-1}$;
                Update trajectory ${\tau}_{ij} \leftarrow {\tau}_{ij} \cup \{({\mathcal{V}}_{\text{new}}, {h}^{*}({s}_{t+k}, {\mathcal{V}}_{\text{new}}, {v}_{g}))\}$;
            Update dataset $\mathcal{D} \leftarrow \mathcal{D} \cup \{({\tau}_{ij}, {z}_{t})\}$ or $\mathcal{D} \cup \{({\tau}_{ij}, 0)\}$;
        Train ${h}_{{\theta}_{i}}$ using TBTT on each $\tau \in \mathcal{D}$ to get ${h}_{{\theta}_{i+1}}$;
    return best performing ${h}_{{\theta}_{i}}$ on validation;
computations and concurrently reduce the computational complexity of the heuristic function. Further, as shown in [1], if we used ${h}_{\theta }$ to evaluate all actions in a state (i.e., recalculated the heuristic values of all nodes in OPEN), we would need a squared reduction in the number of expanded nodes compared with BFS for PHIL to bring performance benefits over BFS, which may not be attainable on all datasets. Hence, we constrain the heuristic to evaluate only the new OPEN nodes obtained after moving a node to CLOSE; we call this set of new fringe nodes ${\mathcal{V}}_{\text{new}}$. In practice, the policy ${\pi }_{\theta }$ yields an algorithm equivalent to greedy best-first search, with the heuristic function replaced by ${h}_{\theta }$.
§ 4.1 LEARNING ALGORITHM & ARCHITECTURE
Imitation learning algorithm. In Algorithm 1, we present the pseudo-code of the IL algorithm used to train our heuristic models (Figure 3). The high-level idea is that we aggregate trajectories of search traces (i.e., sequences of new fringe nodes) and use truncated backpropagation through time to optimize ${h}_{\theta }$ after each data-collection step. In particular, after sampling a graph $\mathcal{G}$ and a search problem ${v}_{s},{v}_{g}$, we use our greedy learned policy ${\pi }_{\theta }$ induced by ${h}_{\theta }$ to roll in for $t \sim \mathcal{U}\left( {0,\ldots ,T - {t}_{\tau }}\right)$ expansions, where $T$ is the episode time horizon and ${t}_{\tau }$ is the roll-out length. From this roll-in, we obtain a new state $s = \left( {\mathrm{CLOSE}^{0},\mathrm{OPEN}^{0},\mathrm{REST}^{0}}\right)$ and an initial memory state ${z}_{t}$. After the roll-in, we roll out for ${t}_{\tau }$ steps using our mixture policy ${\pi }_{\text{mix}}$, which is obtained by probabilistically blending ${\pi }_{\theta }$ and the greedy best-first policy ${\pi }^{ * }$ induced by the oracle heuristic. In a roll-out, we collect sequences of new fringe nodes, together with their ground-truth distances to the goal ${v}_{g}$, given by ${h}^{ * }$. Once the roll-out is complete, we append the obtained trajectory and the initial state to the dataset for subsequent optimization using backpropagation through time. Further analysis of the trade-offs between using rolled-in states ${z}_{t}$ or zeroed-out states for training can be found in the supplementary material.
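The roll-out phase blends the learned and oracle greedy policies: at each expansion, the oracle-greedy node is chosen with probability $\beta$ and the learned-greedy node otherwise. A minimal sketch of this blending step (our own illustrative names, not the paper's code):

```python
import random

def mixture_action(open_nodes, h_theta, h_star, beta, rng=random):
    """With probability beta expand the node that is greedy under the
    oracle h*, otherwise the node greedy under the learned h_theta."""
    h = h_star if rng.random() < beta else h_theta
    return min(open_nodes, key=h)

# Toy example where the two heuristics disagree on the best node.
h_theta = {"a": 1.0, "b": 2.0}.get  # learned heuristic prefers "a"
h_star = {"a": 2.0, "b": 1.0}.get   # oracle prefers "b"
rng = random.Random(0)
picks = [mixture_action(["a", "b"], h_theta, h_star, beta=1.0, rng=rng)
         for _ in range(5)]  # beta = 1.0: always follow the oracle
```

Since $\beta = {\beta}_{0}^{i}$ decays geometrically over training iterations, early roll-outs mostly follow the oracle while later roll-outs increasingly follow the learned policy, matching the AggreVaTe-style schedule in Algorithm 1.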
Note that we could also use supervised learning-based approaches: sample a fixed dataset of $(v_s, v_g, h^*(s, v_s, v_g))$ triples and train a model to predict node distances conditioned on their features. However, our experiments in Section 5 demonstrate that ignoring the non-i.i.d. nature of heuristic search negatively impacts model performance, with supervised learning-based methods performing up to $40\times$ worse.
Recurrent GNN architecture. In each forward pass, ${h}_{\theta }$ obtains a set of new fringe nodes ${\mathcal{V}}_{\text{new}}$, the goal node ${v}_{g}$, and the memory ${z}_{t}$ at time step $t$. We represent each node in ${\mathcal{V}}_{\text{new}}$ using its features ${x}_{i} \in {\mathbb{R}}^{{D}_{v}}$, and likewise the goal node ${v}_{g}$ using its features ${x}_{g} \in {\mathbb{R}}^{{D}_{v}}$. Further, for each $i \in {\mathcal{V}}_{\text{new}}$, we uniformly sample a set of at most $n \in {\mathbb{N}}_{ \geq 0}$ nodes from the 1-hop neighborhood of $i$, calling
Figure 3: This figure demonstrates the core idea behind our IL algorithm. We present the roll-in phase on the left-hand side, where our policy is rolled in for $t$ steps to obtain state ${s}_{t}$ and embedding ${z}_{t}$ . On the right-hand side, we show the trajectory collection and training steps, where we aggregate the trajectory for downstream training (green) and use truncated backpropagation through time on the collected dataset (red).
this set ${\mathcal{N}}_{i}$ , with $\left| {\mathcal{N}}_{i}\right| \leq n$ . This sampling step produces a set of neighboring node features, where each $j \in {\mathcal{N}}_{i}$ has features ${x}_{j} \in {\mathbb{R}}^{{D}_{v}}$ , and corresponding edge features ${e}_{ij} \in {\mathbb{R}}^{{D}_{e}}$ .
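The bounded neighborhood sampling step amounts to sampling without replacement, capped at $n$; a one-line sketch (illustrative code, not the paper's implementation):

```python
import random

def sample_neighborhood(adj, i, n, rng=random):
    """Uniformly sample a set N_i of at most n nodes from the
    1-hop neighborhood of node i (sampling without replacement)."""
    nbrs = adj[i]
    return set(rng.sample(nbrs, min(n, len(nbrs))))

# Node 0 has five neighbors; with n = 3 we keep a random subset of size 3.
adj = {0: [1, 2, 3, 4, 5]}
n_i = sample_neighborhood(adj, 0, n=3)
```

The `min(n, len(nbrs))` cap is what enforces $|\mathcal{N}_i| \leq n$ and hence the constant per-node cost of the forward pass.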
${h}_{\theta }$ forward pass. Algorithm 2 presents a single forward pass of ${h}_{\theta }$ . The forward
Algorithm 2: Heuristic function $({h}_{\theta})$ forward pass

    Obtain ${x}_{i}, {x}_{j}, {e}_{ij}, {x}_{g}, {z}_{t}$;
    ${x}_{i} \leftarrow f({x}_{i}, {x}_{g}, {D}_{EUC}({x}_{i}, {x}_{g}), {D}_{COS}({x}_{i}, {x}_{g}))$;
    ${x}_{j} \leftarrow f({x}_{j}, {x}_{g}, {D}_{EUC}({x}_{j}, {x}_{g}), {D}_{COS}({x}_{j}, {x}_{g}))$;
    ${g}_{i} \leftarrow \phi({x}_{i}, {\oplus}_{j \in {\mathcal{N}}_{i}} \gamma({x}_{i}, {x}_{j}, {e}_{ij}))$;
    ${g}_{i}^{\prime}, {z}_{i,t+1} \leftarrow \operatorname{GRU}({g}_{i}, {z}_{t})$;
    ${z}_{t+1} \leftarrow \overline{{z}_{i,t+1}}$ (mean over $i \in {\mathcal{V}}_{\text{new}}$);
    ${\widehat{h}}_{i} \leftarrow \operatorname{MLP}({g}_{i}^{\prime}, {x}_{g})$;
    return ${\widehat{h}}_{i}, {z}_{t+1}$;
pass outputs the predicted distances ${\widehat{h}}_{i}$ of the new fringe nodes to the goal, together with an updated memory digest ${z}_{t+1}$. In Algorithm 2, $f$, $\phi$, $\gamma$, GRU [45], and MLP are each parameterized differentiable functions, with $\phi$ and $\gamma$ representing the update and message functions [46] of a graph neural network, respectively.
In our forward pass, using the function $f$, we first project ${x}_{i}$ and ${x}_{j}$ into a node embedding space, together with the goal features ${x}_{g}$ and their Euclidean $({D}_{EUC})$ and cosine $({D}_{COS})$ distances. After that, using a 1-layer GNN, we perform a single convolution over each ${x}_{i}$ and the corresponding neighborhood ${\mathcal{N}}_{i}$ to obtain ${g}_{i}$. The specific GNN choice is a design decision left to the practitioner, and further analysis of GNN choices can be found in Appendix D. Our graph convolution processing step allows us to easily incorporate edge features and work with variable sizes of ${\mathcal{N}}_{i}$. After the graph convolution, we apply the GRU module over each embedding ${g}_{i}$ to obtain hidden states ${z}_{i,t+1}$ and new embeddings ${g}_{i}^{\prime }$. We compute the sample mean of ${z}_{i,t+1}$ over the nodes $i \in {\mathcal{V}}_{\text{new}}$ to obtain a new hidden state ${z}_{t+1}$, and process ${g}_{i}^{\prime }$ with ${x}_{g}$ using an MLP to predict the node-to-goal distances.
Permutation invariant ${\mathcal{V}}_{\text{new}}$ embedding. There is a trade-off between processing new fringe nodes in batch, as in Algorithm 2, and processing them sequentially. When we process the nodes in batch, we do not use the in-batch observations to predict batch node values, which means that ${z}_{t}$ is slightly outdated. On the other hand, batch processing allows PHIL to compute the heuristic values of all $v \in {\mathcal{V}}_{\text{new}}$ in parallel on a GPU and preserves the memory's permutation invariance with respect to nodes in ${\mathcal{V}}_{\text{new}}$. That is, because our observations are nodes and edges of a graph, the observation ordering usually does not contain inductive biases useful for predictions, which means we can apply a permutation invariant operator such as the mean of all new states ${z}_{i,t+1}$ to obtain an aggregated updated state. This approach provides additional scalability, as we can process values in parallel, and PHIL does not have to infer permutation invariance in ${\mathcal{V}}_{\text{new}}$ from data.
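Because the memory update averages the per-node hidden states, the aggregated state is identical for any ordering of ${\mathcal{V}}_{\text{new}}$; a quick pure-Python check of this property (the per-node state vectors below are made up):

```python
def aggregate_states(states):
    """Permutation-invariant memory update: elementwise mean of the
    per-node hidden states z_{i,t+1}."""
    d = len(states[0])
    return tuple(sum(z[k] for z in states) / len(states) for k in range(d))

# Hypothetical per-node hidden states for three new fringe nodes.
z_states = [(1.0, 2.0), (3.0, 0.0), (2.0, 4.0)]
z_fwd = aggregate_states(z_states)        # original ordering
z_rev = aggregate_states(z_states[::-1])  # reversed ordering, same result
```

A sequential (RNN-style) consumption of the same states would generally produce order-dependent results, which is exactly what the mean aggregation avoids.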
Runtime complexity. Since $\forall i \in {\mathcal{V}}_{\text{new}} : \left| {\mathcal{N}}_{i}\right| \leq n$, Algorithm 2 together with neighborhood sampling runs in up to $n{c}_{1} + \left( {n + 1}\right) {c}_{2}$ operations per node $i \in {\mathcal{V}}_{\text{new}}$, which is $\mathcal{O}(1)$ with respect to the size of the graph. Here, ${c}_{1}$ is the maximal number of operations associated with evaluating a node, such as performing robot collision checks in dynamically constructed graphs, and ${c}_{2}$ is the maximal count of total model operations (e.g., $f$ and $\gamma$ operations) on the node set $\{ i\} \cup {\mathcal{N}}_{i}$. In general, we expect to learn a better search heuristic with increasing $n$ (see Appendix D for ablations), but in some use cases ${c}_{1}$ may dominate the overall complexity, so the hyperparameter $n$ lets practitioners tune the trade-off between constant factors and search effort minimization.
§ 5 EXPERIMENTS
In our experiments, we evaluate PHIL both on benchmark heuristic learning datasets [1] (Section 5.1) as well as on a diverse set of graph datasets (Section 5.2). Finally, we show that PHIL can be applied to efficient planning in the context of drone flight (Section 5.3). Our main goal is to assess how PHIL compares to baseline methods in terms of the number of expansions necessary before the goal node is reached. Please refer to the supplementary material for information about baselines, an ablation study, and additional experiment details.
§ 5.1 HEURISTIC SEARCH IN GRIDS
In Section 5.1, we evaluate PHIL on the 8 datasets of ${200} \times {200}$ 8-connected grid graphs by Bhardwaj et al. [1]. These datasets present challenging obstacle configurations for naive greedy planning heuristics, especially when ${v}_{s}$ is in the bottom-left of the grid and ${v}_{g}$ in the top-right. Each dataset contains 200 training graphs, 70 validation graphs, and 100 test graphs. Example graphs from each dataset can be found in Table 1.
Figure 4: Example of PHIL escaping local search minima.
We train PHIL with a hyperparameter configuration of $T = {128}$, ${t}_{\tau } = {32}$, ${\beta }_{0} = {0.7}$, $n = 8$, using rolled-in ${z}_{t}$ states as initial states for training. We use a 3-layer MLP of width 128 with LeakyReLU activations, followed by a DeeperGCN [47] graph convolution with softmax aggregation. Our memory's embedding dimensionality is 64. See Appendix C for an overview of our baselines and datasets.
*(Graph example images for each dataset are omitted here.)*

| Dataset | SAIL | SL | CEM | QL | $h_{euc}$ | $h_{man}$ | A\* | MHA\* | BFWS | Neural A\* | PHIL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Alternating gaps | 0.039 | 0.432 | 0.042 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.34 | 0.546 | 0.024 |
| Single Bugtrap | 0.158 | 0.214 | 0.057 | 1.000 | 0.184 | 0.192 | 1.000 | 0.286 | 0.099 | 0.394 | 0.077 |
| Shifting gaps | 0.104 | 0.464 | 1.000 | 1.000 | 0.506 | 0.589 | 1.000 | 0.804 | 0.206 | 0.563 | 0.027 |
| Forest | 0.036 | 0.043 | 0.048 | 0.121 | 0.041 | 0.043 | 1.000 | 0.075 | 0.039 | 0.399 | 0.027 |
| Bugtrap+Forest | 0.147 | 0.384 | 0.182 | 1.000 | 0.410 | 0.337 | 1.000 | 3.177 | 0.149 | 0.651 | 0.135 |
| Gaps+Forest | 0.221 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.401 | 0.580 | 0.039 |
| Mazes | 0.103 | 0.238 | 0.479 | 0.399 | 0.185 | 0.171 | 1.000 | 0.279 | 0.095 | 1.000 | 0.069 |
| Multiple Bugtraps | 0.479 | 0.480 | 1.000 | 0.835 | 0.648 | 0.617 | 1.000 | 0.876 | 0.169 | 0.331 | 0.136 |

Table 1: The number of expanded graph nodes of PHIL with respect to SAIL. We can observe that, out of all baselines, SAIL performs best. PHIL outperforms SAIL by 58.5% on average over all datasets, with a maximal search effort reduction of 82.3% in the Gaps+Forest dataset.
Discussion. As we can see in Table 1, PHIL outperforms the best baseline (SAIL) on all datasets, with an average 58.5% reduction in the number of explored nodes before ${v}_{g}$ is found. Qualitatively, observing Figure 5, we can attribute these results to PHIL's ability to reduce the redundancy in explored nodes during a search. Further, PHIL is also capable of escaping local minima, which is illustrated in Figure 4. However, note that we occasionally observe failure cases in practice, where PHIL gets stuck in a bugtrap-like structure. We discuss possible remedies and opportunities for future work in the supplementary material.
Runtime and convergence speed. PHIL converges in at most $N = {36}$ iterations with $m = 1$, ${t}_{\tau } = {32}$ (i.e., after observing fewer than $N \cdot {t}_{\tau } \cdot \max \left( \left| {\mathcal{V}}_{\text{new}}\right| \right) \approx 9{,}216$ shortest path distances, where we take $\max \left( \left| {\mathcal{V}}_{\text{new}}\right| \right) = 8$ as the maximal size of ${\mathcal{V}}_{\text{new}}$). According to figures reported in [1], this is approximately $5\times$ less data than SAIL requires to converge.
Figure 5: In each image pair of this figure, we provide a qualitative comparison with the SAIL method. In particular, we show comparisons on the Shifting gaps, Gaps+Forest, Mazes, and Forest datasets. We can observe that PHIL (right) learns the appropriate heuristics for the given dataset and makes fewer redundant expansions than SAIL (left).
§ 5.2 SEARCH IN REAL-LIFE GRAPHS OF DIFFERENT STRUCTURES
In this experiment, our goal is to demonstrate the general applicability of PHIL to various graphs. We train PHIL on 4 groups of graph datasets: citation networks, biological networks, abstract syntax trees (ASTs), and road networks. For citation networks and road networks, we use the same graph for training and evaluation, with 100 random $v_s, v_g$ pairs for testing. For biological networks and ASTs, we use 80/10/10 train/validation/test splits, except for the OGB [48] datasets, where we use the provided splits.
| Group | Dataset | $\lvert\mathcal{D}\rvert$ | $\lvert\bar{\mathcal{V}}\rvert$ | $\lvert\bar{\mathcal{E}}\rvert$ | SL | A* | $h_{euc}$ | BFS | SAIL | BFWS | PHIL |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Citation Networks | Cora (Sen et al. [49]) | 1 | 2,708 | 5,429 | 2.201 | 2.067 | 1.000 | 4.001 | 0.669 | 1.378 | 0.475 |
| | PubMed (Sen et al. [49]) | 1 | 19,717 | 44,338 | 2.157 | 2.983 | 1.000 | 3.853 | 1.196 | 1.000 | 0.745 |
| | CiteSeer (Sen et al. [49]) | 1 | 3,327 | 4,732 | 1.636 | 1.487 | 1.000 | 2.190 | 1.062 | 0.951 | 0.599 |
| | Coauthor (cs) (Schur et al. [50]) | 1 | 18,333 | 81,894 | 1.571 | 1.069 | 1.000 | 2.820 | 1.941 | 1.026 | 0.835 |
| | Coauthor (physics) (Schur et al. [50]) | 1 | 34,493 | 247,962 | 4.076 | 1.081 | 1.000 | 4.523 | - | 1.012 | 0.964 |
| Biological Networks | OGBG-Molhiv (Hu et al. [48]) | 41,127 | 25.5 | 27.5 | 1.086 | 1.065 | 1.000 | 1.267 | 1.104 | 1.146 | 1.016 |
| | PPI (Zitnik et al. [51]) | 24 | 2,372.67 | 34,113.16 | 0.772 | 0.831 | 1.000 | 5.618 | 1.746 | 3.941 | 0.658 |
| | Proteins (Full) (Morris et al. [52]) | 1,113 | 39.06 | 72.82 | 0.995 | 0.997 | 1.000 | 2.645 | 0.891 | 0.966 | 0.831 |
| | Enzymes (Morris et al. [52]) | 600 | 32.63 | 62.14 | 1.073 | 1.007 | 1.000 | 1.358 | 1.036 | 0.992 | 0.757 |
| ASTs | OGBG-Code2 (Hu et al. [48]) | 452,741 | 125.2 | 124.2 | 1.196 | 1.013 | 1.000 | 1.267 | 1.029 | 0.817 | 1.219 |
| Road Networks | OSMnx - Modena (Boeing [53]) | 1 | 29,324 | 38,309 | 2.904 | 3.085 | 1.000 | 3.493 | 1.182 | 0.997 | 0.489 |
| | OSMnx - New York (Boeing [53]) | 1 | 54,128 | 89,618 | 39.424 | 36.529 | 1.000 | 63.352 | 1.583 | 1.013 | 0.962 |
Table 2: Comparison of PHIL with baseline approaches on 4 groups of datasets: citation networks, biological networks, abstract syntax trees, and road networks. "-" denotes exceeding a 4-day training time limit. On average across all datasets, PHIL outperforms the best baseline per dataset by 13.4%. Discounting the OGBG datasets, this number becomes 19.5%.
As in Section 5.1, our MLP has four layers of width 128 with LeakyReLU activations, and we use a DeeperGCN [47] graph convolution with softmax aggregation. We use the node and edge features provided with each dataset, with a few minor modifications discussed in Appendix A & Appendix C. For our learning-based baseline method, we train an MLP of depth 5 and width 256 using supervised learning (SL).
Discussion. The results presented in Table 2 suggest that PHIL can learn superior search heuristics compared with baseline methods, outperforming the top baseline per dataset in terms of nodes visited during a search by 13.4% on average. The two datasets where PHIL fell short of other baselines are OGBG-Molhiv and OGBG-Code2. The OGBG-Code2 dataset adopts a project split [54] and OGBG-Molhiv adopts a scaffold split [55], both of which ensure that graphs of different structure are present in the training & test sets. Although PHIL improved upon uninformed search (BFS) on the OGB datasets, structural graph consistency is explicitly discouraged in the above-mentioned OGBG splits. Without the OGBG datasets, PHIL improves on the top baseline per dataset by 19.5% on average, and upon the Euclidean node feature heuristic ($h_{\text{euc}}$) by 20.4%. Note that we trained PHIL for up to $N = 60$ iterations, so it encountered only a small subset of the pathfinding problems in the single-graph setting and had to generalize to learn useful heuristics. Even on Cora, the $|\mathcal{D}| = 1$ dataset with the fewest nodes, PHIL observed roughly 6,000 node distances during training, which is less than 0.2% of the total distances in the Cora graph.
§ 5.3 PLANNING FOR DRONE FLIGHT
In our final experiment, we use PHIL to plan collision-free paths in a practical drone flight use case within an indoor environment. We built our environment using the CoppeliaSim simulator [56] and the Ivy framework [57]. Figure 6 presents the environment, which we refer to as room adversarial in Table 3. For more detail about each environment, please refer to the supplementary material. We discretize the environments into 3D grid graphs of size $50 \times 50 \times 25$ and randomly remove 5 sub-graphs of size $5 \times 5 \times 5$ during both training and testing, thereby simulating real-life planning scenarios with random obstacles. The hyperparameter configuration and the specific architecture we utilize are equivalent to Section 5.1, but with $n = 4$. Likewise, the node features are 3D grid coordinates, and the baselines include supervised learning (SL), $h_{euc}$, A*, and BFS, as in Sections 5.1 and 5.2. In Table 3 we report the ratio of expanded nodes with respect to $h_{euc}$.
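As a concrete illustration, the discretization step described above can be sketched as follows. This is a toy sketch only; the function name, parameters, and the adjacency-dict representation are our assumptions, not the authors' implementation.

```python
import itertools
import random

def build_obstacle_grid(dims=(50, 50, 25), n_blocks=5, block=5, seed=0):
    """Build a 3D grid graph and remove n_blocks random block^3 sub-graphs,
    simulating random obstacles. Returns an adjacency dict keyed by the
    3D grid coordinates of each surviving node (6-connected grid)."""
    rng = random.Random(seed)
    nodes = set(itertools.product(*(range(d) for d in dims)))
    for _ in range(n_blocks):
        # Random corner chosen so the removed block fits inside the grid.
        corner = [rng.randrange(d - block + 1) for d in dims]
        for off in itertools.product(range(block), repeat=len(dims)):
            nodes.discard(tuple(c + o for c, o in zip(corner, off)))
    adj = {v: [] for v in nodes}
    for v in nodes:
        for axis in range(len(dims)):
            u = tuple(c + (1 if i == axis else 0) for i, c in enumerate(v))
            if u in nodes:  # add each axis-aligned edge once, both directions
                adj[v].append(u)
                adj[u].append(v)
    return adj
```

The node coordinates double as the 3D node features mentioned above; since removed blocks may overlap, the number of deleted nodes is at most `n_blocks * block**3`.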
Figure 6: This figure illustrates the room adversarial environment, with an example planning problem (red) and the subgraph expanded by PHIL (blue).
Video demo. We provide a video demonstration of PHIL running in room adversarial: https://cutt.ly/eniu5ax.
| Dataset | SL | A* | $h_{euc}$ | BFS | SAIL | BFWS | PHIL | Shortest path |
|---|---|---|---|---|---|---|---|---|
| Room simple | 1.124 | 76.052 | 1.000 | 291.888 | 0.973 | 1.286 | 0.785 | 0.782 |
| Room adversarial | 2.022 | 67.215 | 1.000 | 238.768 | 0.944 | 1.583 | 0.895 | 0.853 |
Table 3: Results of PHIL in the context of planning for indoor UAV flight. PHIL outperforms all baselines in both the room simple and room adversarial environments, while remaining close to the optimal number of expansions.
Discussion. As we can observe in Table 3, PHIL outperforms all baselines in both environments. Interestingly, PHIL expands only approximately 0.3% more nodes than the minimum possible in the simple room and 4.9% more in the adversarial room. The same figures for the greedy method ($h_{euc}$) are 27.8% and 17.2%, respectively. These results indicate that PHIL is capable of learning planning strategies that are close to optimal in both simple and adversarial graphs, while the performance of naive heuristics degrades.
§ 5.4 RUNTIME ANALYSIS
We summarize test runtimes of different approaches in Appendix G. PHIL runs 57.9% faster than BFWS and 32.2% faster than SAIL, and is only moderately slower than traditional A* (34.7%) and $h_{man}$ (18.3%). Although Neural A* is 71.0% faster than PHIL, as it casts the whole search process into matrix operations on images, it cannot be employed in a generic search setting.
§ 6 CONCLUSION
In our work, we consider the problem of learning to search for feasible paths in graphs efficiently. We propose a model and a training procedure to learn search heuristics that can be easily deployed across diverse graphs, with tunable trade-off parameters between constant factors and performance. Our results demonstrate that PHIL outperforms current state-of-the-art approaches and can be applied to various graphs with practical use cases.
papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_md/Initial_manuscript.md
ADDED
The diff for this file is too large to render.
See raw diff
papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,418 @@
§ GRAPH NEURAL NETWORK WITH LOCAL FRAME FOR MOLECULAR POTENTIAL ENERGY SURFACE
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
Modeling molecular potential energy surfaces is of pivotal importance in science. Graph Neural Networks have shown great success in this field. However, their message passing schemes need special designs to capture geometric information and fulfill symmetry requirements like rotation equivariance, leading to complicated architectures. To avoid these designs, we introduce a novel local frame method for molecule representation learning and analyze its expressivity. Projected onto a frame, equivariant features like 3D coordinates are converted to invariant features, so that we can capture geometric information with these projections and decouple the symmetry requirement from GNN design. Theoretically, we prove that given non-degenerate frames, even ordinary GNNs can encode molecules injectively and reach maximum expressivity with coordinate projection and frame-frame projection. In experiments, our model uses a simple ordinary GNN architecture yet achieves state-of-the-art accuracy. The simpler architecture also leads to higher scalability: our model takes only about 30% of the inference time and 10% of the GPU memory of the most efficient baselines.
§ 1 INTRODUCTION
Prediction of molecular properties is widely used in fields such as material searching, drug designing, and understanding chemical reactions [1]. Among properties, potential energy surface (PES) [2], the relationship between the energy of a molecule and its geometry, is of pivotal importance as it can determine the dynamics of molecular systems and many other properties. Many computational chemistry methods have been developed for the prediction, but few can achieve both high precision and scalability.
In recent years, machine learning (ML) methods have emerged, which are both accurate and efficient. Graph Neural Networks (GNNs) are promising among these ML methods. They have improved continuously [3-10] and achieved state-of-the-art performance on many benchmark datasets. Compared with popular GNNs used in other graph tasks [11], these models need special designs, as molecules are more than a graph composed of merely nodes and edges. Atoms are in the continuous 3D space, and the prediction targets like energy are sensitive to the coordinates of atoms. Therefore, GNNs for molecules must include geometric information. Moreover, these models should keep the symmetry of the target properties for generalization. For example, the energy prediction should be invariant to the coordinate transformations in $\mathrm{O}\left( 3\right)$ group, like rotation and reflection.
All existing methods can keep the invariance. Some models [4, 5, 8] use hand-crafted invariant features like distance, angle, and dihedral angle as the input of the GNN. Others use equivariant representations, which change with the coordinate transformations. Among them, some [6, 9, 12] use irreducible representations of the $\mathrm{SO}(3)$ group. The other models [7, 10] manually design functions for equivariant and invariant representations. All these methods can keep invariance, but they vary in performance. Therefore, expressivity analysis is necessary. However, the symmetry requirement hinders the application of the existing theoretical framework for ordinary GNNs [13].
By using the local frame, we decouple the symmetry requirement. As shown in Figure 1, our model, namely GNN-LF, first produces a frame (a set of bases of the $\mathbb{R}^{3}$ space) equivariant to $\mathrm{O}(3)$ transformations. It then projects the relative positions and frames of neighbor atoms onto the frame as edge features. Therefore, an ordinary GNN with no special design for symmetry can work on the graph with only invariant features. The expressivity of the GNN for molecules can also be proved using a framework for ordinary GNNs [13]. As the GNN needs no special design for symmetry, GNN-LF also has a simpler architecture and, thus, better scalability. Our model achieves state-of-the-art performance on the MD17 and QM9 datasets. It also uses only 30% of the time and 10% of the GPU memory of the fastest baseline on the PES task.

Figure 1: An illustration of our model. One local frame is generated for each atom. Frames are used to transform geometric information into invariant representations. Then an ordinary GNN is applied.
§ 2 PRELIMINARIES
Ordinary GNN. Message passing neural network (MPNN) [14] is a common framework for GNNs. For each node, a message passing layer aggregates information from neighbors to update the node representations. The $k^{\text{th}}$ layer can be formulated as follows.
$$
\mathbf{h}_{v}^{(k)} = \mathrm{U}^{(k)}\left( \mathbf{h}_{v}^{(k-1)}, \sum_{u \in N(v)} M^{(k)}\left( \mathbf{h}_{u}^{(k-1)}, e_{vu} \right) \right) \tag{1}
$$
where $\mathbf{h}_{v}^{(k)}$ is the representation of node $v$ at the $k^{\text{th}}$ layer, $N(v)$ is the set of neighbors of $v$, $\mathbf{h}_{v}^{(0)}$ is node $v$'s feature vector, $e_{vu}$ is the feature of edge $vu$, and $U^{(k)}, M^{(k)}$ are some functions.
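A minimal numpy sketch of one such layer with sum aggregation follows; the weight shapes and ReLU nonlinearities are our illustrative choices, not something prescribed by the MPNN framework.

```python
import numpy as np

def message_passing_layer(h, edges, edge_feat, W_msg, W_upd):
    """One message passing layer in the style of Eq. (1), sum aggregation.

    h:         (N, F) node representations h^{(k-1)}
    edges:     list of directed edges (u, v): u sends a message to v
    edge_feat: dict mapping (u, v) to an (Fe,) edge feature e_vu
    W_msg:     (F + Fe, F) weights of the message function M^{(k)}
    W_upd:     (2F, F)     weights of the update function U^{(k)}
    """
    agg = np.zeros_like(h)
    for u, v in edges:
        m = np.concatenate([h[u], edge_feat[(u, v)]])
        agg[v] += np.maximum(m @ W_msg, 0.0)  # sum over u in N(v)
    z = np.concatenate([h, agg], axis=1)
    return np.maximum(z @ W_upd, 0.0)         # h^{(k)}
```

Stacking several such layers lets information propagate across multi-hop neighborhoods, as the next paragraph notes.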
Xu et al. [13] provide a theoretical framework for the expressivity of ordinary GNNs. One message passing layer can encode neighbor nodes injectively and then reaches maximum expressivity. With several message passing layers, MPNN can learn the information of multi-hop neighbors.
Modeling PES. PES is the relationship between molecular energy and geometry. Given a molecule with $N$ atoms, our model takes the atom types $z \in \mathbb{Z}^{N}$ and the 3D coordinates of the atoms $\overrightarrow{r} \in \mathbb{R}^{N \times 3}$ as input to predict the energy $\widehat{E} \in \mathbb{R}$ of the molecule. It can also predict the forces $\widehat{\overrightarrow{F}} = -\nabla_{\overrightarrow{r}}\widehat{E} \in \mathbb{R}^{N \times 3}$.
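The force relation $\widehat{\overrightarrow{F}} = -\nabla_{\overrightarrow{r}}\widehat{E}$ can be illustrated with a toy invariant energy and finite differences (in practice one would differentiate the learned model with autograd; the Morse-like energy here is purely a stand-in of ours).

```python
import numpy as np

def energy(r):
    # Toy rotation-invariant energy: sum of Morse-like pairwise terms,
    # a stand-in for a learned \hat{E}(z, r).
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    iu = np.triu_indices(len(r), k=1)
    return np.sum((1.0 - np.exp(-(d[iu] - 1.0))) ** 2)

def forces(r, eps=1e-5):
    """\hat{F} = -grad_r \hat{E}, approximated by central differences."""
    f = np.zeros_like(r)
    for i in range(r.shape[0]):
        for k in range(3):
            rp, rm = r.copy(), r.copy()
            rp[i, k] += eps
            rm[i, k] -= eps
            f[i, k] = -(energy(rp) - energy(rm)) / (2 * eps)
    return f
```

Because the energy depends only on interatomic distances, the resulting forces sum to (numerically) zero, reflecting translation invariance.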
Equivariance. To formalize the symmetry requirement, we define equivariant and invariant functions as in [15].
Definition 2.1. Let $h : \mathbb{X} \rightarrow \mathbb{Y}$ be a function and $G$ a group acting on $\mathbb{X}$ and $\mathbb{Y}$ via $\star$. We say that $h$ is
$$
G\text{-invariant: if } h(g \star x) = h(x), \forall x \in \mathbb{X}, g \in G \tag{2}
$$

$$
G\text{-equivariant: if } h(g \star x) = g \star h(x), \forall x \in \mathbb{X}, g \in G \tag{3}
$$
The energy is invariant to the permutation of atoms, coordinates' translations, and coordinates' orthogonal transformations (rotations and reflections). GNN naturally keeps the permutation invariance. As the relative position ${\overrightarrow{r}}_{ij} = {\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j} \in {\mathbb{R}}^{1 \times 3}$ , which is invariant to translation, is used as the input to GNNs, the translation invariance can also be ensured. So we focus on orthogonal transformations. Orthogonal transformations of coordinates form the group $\mathrm{O}\left( 3\right) = \left\{ {Q \in {\mathbb{R}}^{3 \times 3} \mid Q{Q}^{T} = I}\right\}$ , where $I$ is the identity matrix. Representations are considered as functions of $z$ and $\overrightarrow{r}$ , so we can define equivariant and invariant representations.
Definition 2.2. Representation $s$ is called an invariant representation if $s(z, \overrightarrow{r}) = s(z, \overrightarrow{r}o^{T})$ for all $o \in \mathrm{O}(3)$, $z \in \mathbb{Z}^{N}$, $\overrightarrow{r} \in \mathbb{R}^{N \times 3}$. Representation $\overrightarrow{v}$ is called an equivariant representation if $\overrightarrow{v}(z, \overrightarrow{r})o^{T} = \overrightarrow{v}(z, \overrightarrow{r}o^{T})$ for all $o \in \mathrm{O}(3)$, $z \in \mathbb{Z}^{N}$, $\overrightarrow{r} \in \mathbb{R}^{N \times 3}$.
Invariant and equivariant representations are also called scalar and vector representations respectively in some previous work [7].
Frame is a special kind of equivariant representation. Throughout our theoretical analysis, a frame $\overrightarrow{E}$ is an orthogonal matrix in $\mathbb{R}^{3 \times 3}$, i.e., $\overrightarrow{E}\overrightarrow{E}^{T} = I$. GNN-LF generates a frame $\overrightarrow{E}_{i} \in \mathbb{R}^{3 \times 3}$ for each node $i$. We will discuss how to generate the frames in Section 5.
In Lemma 2.1, we introduce some basic operations of representations.
§ LEMMA 2.1.
* Any function of invariant representation $s$ will produce an invariant representation.
* Let $s \in \mathbb{R}^{F}$ denote an invariant representation and $\overrightarrow{v} \in \mathbb{R}^{F \times 3}$ an equivariant representation. We define $s \circ \overrightarrow{v} \in \mathbb{R}^{F \times 3}$ as the matrix whose $(i, j)$-th element is $s_{i}\overrightarrow{v}_{ij}$. When $\overrightarrow{v} \in \mathbb{R}^{1 \times 3}$, we first broadcast it along the first dimension. The output is also an equivariant representation.
* Let $\overrightarrow{v} \in \mathbb{R}^{F \times 3}$ denote an equivariant representation and $\overrightarrow{E} \in \mathbb{R}^{3 \times 3}$ an equivariant frame. The projection of $\overrightarrow{v}$ onto $\overrightarrow{E}$, denoted as $P_{\overrightarrow{E}}(\overrightarrow{v}) := \overrightarrow{v}\overrightarrow{E}^{T}$, is an invariant representation in $\mathbb{R}^{F \times 3}$. $P_{\overrightarrow{E}}$ is a bijective function of $\overrightarrow{v}$. Its inverse $P_{\overrightarrow{E}}^{-1}$ converts an invariant representation $s \in \mathbb{R}^{F \times 3}$ to an equivariant representation in $\mathbb{R}^{F \times 3}$: $P_{\overrightarrow{E}}^{-1}(s) = s\overrightarrow{E}$.
* The projection of $\overrightarrow{v}$ onto a general equivariant representation $\overrightarrow{v}' \in \mathbb{R}^{F' \times 3}$ can also be defined. It produces an invariant representation in $\mathbb{R}^{F \times F'}$: $P_{\overrightarrow{v}'}(\overrightarrow{v}) = \overrightarrow{v}\overrightarrow{v}'^{T}$.
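The projection properties in Lemma 2.1 can be checked numerically; a minimal numpy sketch (all function names are ours, and a random orthogonal matrix stands in for an equivariant frame):

```python
import numpy as np

def random_orthogonal(rng):
    # QR decomposition of a random matrix yields an O(3) element;
    # the sign fix makes the factorization well-defined.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))

def project(v, E):
    return v @ E.T        # P_E(v)

def unproject(s, E):
    return s @ E          # P_E^{-1}(s)

rng = np.random.default_rng(0)
v = rng.normal(size=(5, 3))   # equivariant representation
E = random_orthogonal(rng)    # stand-in equivariant frame
o = random_orthogonal(rng)

# If v and E transform together (equivariance), the projection is unchanged:
assert np.allclose(project(v @ o.T, E @ o.T), project(v, E))
# The inverse projection recovers v exactly (E is orthogonal):
assert np.allclose(unproject(project(v, E), E), v)
```

The first assertion is precisely why projections are invariant representations, and the second is the bijectivity used in Section 4.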
Local Environment. Most PES models set a cutoff radius ${r}_{c}$ and encode the local environment of each atom as defined in Definition 2.3.
Definition 2.3. Let ${r}_{ij}$ denote $\begin{Vmatrix}{\overrightarrow{r}}_{ij}\end{Vmatrix}$ . The local environment of atom $i$ is $L{E}_{i} = \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ , the set of invariant atom features ${s}_{j}$ (like atomic numbers) and relative positions ${\overrightarrow{r}}_{ij}$ of atoms $j$ within the sphere centered at $i$ with cutoff distance ${r}_{c}$ , where ${r}_{c}$ is usually a hyperparameter.
In this work, orthogonal transformation of a set/sequence means transforming each element in the set/sequence. For example, an orthogonal transformation $o$ will map $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ to $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}{o}^{T}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ .
§ 3 RELATED WORK
We classify existing ML models for PES into two classes: manual descriptors and GNNs. GNN-LF outperforms representatives of both kinds in our experiments.
Manual Descriptor. These models first use manually designed functions with few learnable parameters to convert one molecule to a descriptor vector and then feed the vector into some ordinary ML models like kernel regression [16-18] and neural network [19-21] to produce the prediction. These methods are more scalable and data-efficient than GNNs. However, due to the hard-coded descriptors, they are less accurate and cannot process variable-size molecules or different kinds of atoms.
GNN. These GNNs mainly differ in how they incorporate geometric information.
Invariant models use rotation-invariant geometric features only. Schutt et al. [3] and Schütt et al. [4] only consider the distance between atoms. Klicpera et al. [5] introduce angular features, and Gasteiger et al. [8] further use dihedral angles. Similar to GNN-LF, the input of the GNN is invariant. However, the features are largely hand-crafted and not expressive enough, whereas our projections onto frames are learnable and provably expressive. Moreover, as some features involve multiple atoms (for example, an angle is a feature of a three-atom tuple), their message passing schemes pass messages between node tuples rather than nodes, while GNN-LF uses an ordinary GNN with lower time complexity.
Recent works have also utilized equivariant features, which change as the input coordinates rotate. Some [6, 9, 12] are based on irreducible representations of the $\mathrm{SO}(3)$ group. Though having certain theoretical expressivity guarantees [22], these methods and analyses are based on polynomial approximation: high-order tensors are needed to approximate complex functions like high-order polynomials. However, in implementation, only low-order tensors are used, and these models' empirical performance is not high. Other works [7, 10] model equivariant interactions in Cartesian space using both invariant and equivariant representations. They achieve good empirical performance but have no theoretical guarantees. Different sets of functions must be designed separately for different input and output types (invariant or equivariant representations), so their architectures are also complex. Our work adopts a completely different approach: we introduce $\mathrm{O}(3)$-equivariant frames and project all equivariant features onto the frames. The expressivity can be proved using the existing framework [13] and needs no high-order tensors.
"Frame" models. Some existing methods [23, 24] designed for other tasks also use the term "frame". However, these methods differ significantly from ours in task, theory, and method, as follows.
* Most target properties of molecules are $\mathrm{O}\left( 3\right)$ -equivariant or invariant (including energy and force). Our model can fully describe symmetry, while existing models cannot. For example, a molecule and its mirroring must have the same energy, and GNN-LF will produce the same prediction while existing models cannot keep the invariance.
* Our theoretical analysis removes group representation used in [22, 24].
* Existing models use non-learnable schemes to initialize frames and then update them. GNN-LF uses a learnable message passing scheme to produce frames and does not update them, leading to a simpler architecture and lower overhead.
* Only coordinate projection is used previously, while we add frame-frame projection.
The comparison is detailed in Appendix F.
§ 4 HOW DO FRAMES BOOST EXPRESSIVITY?
Though symmetry imposes constraints on our design, our primary focus is expressivity. Therefore, we only discuss how the frame boosts expressivity in this section. Our methods, implementations, and how our model keeps invariance will be detailed in Section 6 and Appendix J. Throughout this section, we assume the existence of frames, which will be discussed in Section 5.
§ 4.1 DECOUPLING SYMMETRY REQUIREMENT
Though equivariant representations have been used for a long time, it is still unclear how to transform them ideally. Existing methods [7, 10, 15, 25] either have no theoretical guarantee or tend to use too many parameters. This section asks a fundamental question: can we use invariant representations instead of equivariant ones and keep expressivity?
Given any frame $\overrightarrow{E}$, the projection $P_{\overrightarrow{E}}(\overrightarrow{x})$ contains all the information of the input equivariant feature $\overrightarrow{x}$, because the inverse projection function can recover $\overrightarrow{x}$ from the projection: $P_{\overrightarrow{E}}^{-1}(P_{\overrightarrow{E}}(\overrightarrow{x})) = \overrightarrow{x}$. Therefore, we can use $P_{\overrightarrow{E}}$ and $P_{\overrightarrow{E}}^{-1}$ to change the type (invariant or equivariant representation) of the input and output of any function without information loss.
Proposition 4.1. Given a frame $\overrightarrow{E}$ and any equivariant function $g$, there exists a function $\widetilde{g} = P_{\overrightarrow{E}} \cdot g \cdot P_{\overrightarrow{E}}^{-1}$ which takes invariant representations as input and outputs invariant representations, where $\cdot$ denotes function composition. $g$ can be expressed with $\widetilde{g}$: $g = P_{\overrightarrow{E}}^{-1} \cdot \widetilde{g} \cdot P_{\overrightarrow{E}}$.
We can use a multilayer perceptron (MLP) to approximate the function $\widetilde{g}$, thus achieving universal approximation of all $\mathrm{O}(3)$-equivariant functions. Proposition 4.1 motivates us to transform equivariant representations to projections at the beginning and then operate entirely in the invariant representation space. Invariant representations can also be transformed back to equivariant predictions with the inverse projection operation if necessary.
§ 4.2 PROJECTION BOOSTS MESSAGE PASSING LAYER
The previous section discusses how projection decouples the symmetry requirement. This section shows that projections contain rich geometry information. Even ordinary GNNs can reach maximum expressivity with projections on frames, while existing models with hand-crafted invariant features are not expressive enough. The discussion is composed of two parts. Coordinate projection boosts the expressivity of one single message passing layer, and frame-frame projection boosts the whole GNN composed of multiple message passing layers.
Note that in this section, we consider inputs $x_{1}, x_{2}$ (local environments or whole molecules) as equal if they can interconvert via some orthogonal transformation ($\exists o \in \mathrm{O}(3), o(x_{1}) = x_{2}$), because the invariant representations and the energy prediction are invariant under $\mathrm{O}(3)$ transformations. Injective mapping and maximum expressivity therefore mean that a function can differentiate inputs that are unequal in this sense.
Figure 2: The green balls in the figure are the center atoms. We use balls with different colors to represent different kinds of atoms. (a) SchNet cannot distinguish two local environments due to the inability to capture angle. (b) DimeNet cannot distinguish two local environments with the same set of angles. Blue lines form a regular icosahedron and help visualization. The center atom is at the symmetrical center of the icosahedron. (c) Invariant models fail to pass the orientation information, while the projection of frame vectors can solve this problem. For simplicity, we only show one vector (orange) to represent the frame.
Encoding local environment. Just as MPNN can encode neighbor nodes injectively on a graph, GNN-LF can encode neighbor nodes injectively in 3D space. Other models can also be analyzed from the perspective of encoding local environments. GNNs for PES only collect messages from atoms within the sphere of radius $r_{c}$, so one of their message passing layers is equivalent to encoding the local environments in Definition 2.3. When it maps local environments injectively, a single message passing layer reaches maximum expressivity.
Some popular models are under-expressive. For example, as shown in Figure 2a, SchNet [4] only considers distances between atoms and neglects angular information, so it cannot differentiate some simple local environments. Moreover, Figure 2b illustrates that although DimeNet [5] adds angular information to message passing, its expressivity is still limited, which may be attributed to the loss of higher-order geometric information such as dihedral angles.
In contrast, no information is lost when we use the coordinates projected onto the frame.
Theorem 4.1. There exists a function $\phi$ that, given a frame ${\overrightarrow{E}}_{i}$ of atom $i$, encodes the local environment of atom $i$ injectively into atom $i$'s embedding:
$$
\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\operatorname{Concatenate}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) }\right) }\right) . \tag{4}
$$
Theorem 4.1 shows that an ordinary message passing layer can encode local environments injectively with coordinate projection as an edge feature.
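
The encoder of Theorem 4.1 can be sketched numerically. Below is a minimal NumPy illustration (not the paper's implementation): `phi` and `rho` are toy stand-ins for the learnable maps, and a random orthogonal transformation checks that, because the frame is equivariant, the projected coordinates and hence the output are invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.standard_normal((7, 8))        # fixed weights for the toy phi

def phi(x):
    return np.tanh(x @ W_phi)

def rho(x):
    return np.tanh(x)

def encode_local_env(E, r_nbr, s_nbr):
    """Invariant encoding of a local environment (sketch of Eq. (4)).

    E: (3, 3) frame (rows are frame vectors), r_nbr: (n, 3) relative
    positions r_ij, s_nbr: (n, 4) invariant neighbor features.
    """
    proj = r_nbr @ E.T                      # P_E(r_ij): coordinates in the frame
    msgs = phi(np.concatenate([proj, s_nbr], axis=1))
    return rho(msgs.sum(axis=0))            # permutation-invariant pooling

E = rng.standard_normal((3, 3))             # a non-degenerate frame
r = rng.standard_normal((5, 3))             # 5 neighbors
s = rng.standard_normal((5, 4))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random o in O(3)

out = encode_local_env(E, r, s)
out_rot = encode_local_env(E @ Q.T, r @ Q.T, s)   # rotate coordinates and frame together
assert np.allclose(out, out_rot)            # projections, hence output, unchanged
```

The key identity is $(rQ^{T})(EQ^{T})^{T} = rE^{T}$: projecting rotated coordinates on the rotated frame gives the same numbers, so an ordinary invariant message passing layer can consume them.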
Passing messages across local environments. In physics, the interaction between distant atoms is usually not negligible. A single message passing layer, which encodes only the atoms within the cutoff radius, loses such interactions. With multiple message passing layers, a GNN can pass messages between two distant atoms along a path of atoms and thus model the interaction.
However, passing messages in multiple steps may lose information. For example, in Figure 2c, the two molecules are different because one part of the molecule is rotated. However, the local environments do not change, so the node representations, the messages passed between nodes, and ultimately the energy prediction stay the same even though the two molecules have different energies. This problem also occurs in previous PES models [4, 5]. Loss of information in multi-step message passing is a fundamental and challenging problem even for ordinary GNNs [13].
Nevertheless, the solution is simple in this special case. We can eliminate the information loss with frame-frame projection, i.e., projecting ${\overrightarrow{E}}_{j}$ (the frame of atom $j$) onto ${\overrightarrow{E}}_{i}$ (the frame of atom $i$). For example, in Figure 2c, as part of the molecule rotates, its frame vectors also rotate, changing the frame-frame projection, so our model can differentiate the two molecules. We also prove the effectiveness of frame-frame projection in theory.
Theorem 4.2. Let $\mathcal{G}$ denote the graph in which node $i$ represents atom $i$ and edge ${ij}$ exists iff ${r}_{ij} < {r}_{c}$, where ${r}_{c}$ is the cutoff radius. Assuming frames exist, if $\mathcal{G}$ is a connected graph of diameter $L$, a GNN with $L$ message passing layers of the following form can encode the whole molecule
$$
\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\text{ Concatenate }\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}}\right) }\right) }\right) . \tag{5}
$$
$\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid j \in \{ 1,2,\ldots ,n\} }\right\}$ injectively into the embedding of node $i$.

Figure 3: (a) The left part shows the symmetry of the water molecule, which has a rotation axis; its equivariant vectors must be parallel to that axis. Nevertheless, its geometry can be described with a frame composed of only one vector. The right part shows that, given the projection of ${\overrightarrow{r}}_{ij}$ on the frame and the distance between the two atoms, the angle $\theta$ and the position of atom $j$ are determined. (b) The left part is a molecule with central symmetry, whose global frame is zero. However, when an atom (green) is selected as the center, its environment has no central symmetry.
Theorem 4.2 shows that an ordinary GNN can encode the whole molecule injectively with coordinate projection and frame-frame projection as edge features.
In conclusion, when frames exist, even an ordinary GNN can encode the molecule injectively, and thus reach maximum expressivity, with coordinate projection and frame-frame projection.
§ 5 HOW TO BUILD A FRAME?
We present the frame generation method after discussing how to use frames because its connection to expressivity is less direct. Whatever generation method is used, GNN-LF keeps its expressivity as long as the frame does not degenerate. A frame degenerates iff it has fewer than three linearly independent vectors. This section provides one feasible frame generation method.
A straightforward idea is to use the invariant features of each atom, like the atomic number, to produce the frame. However, a function of invariant features can only produce invariant representations, not equivariant frames. Therefore, we produce the frame from the local environment of each atom, which contains equivariant 3D coordinates. In Theorem 5.1, we prove that there exists a function mapping the local environment to an $\mathrm{O}\left( 3\right)$-equivariant frame.
Theorem 5.1. There exists an $O\left( 3\right)$ -equivariant function $g$ mapping the local environment $L{E}_{i}$ to an equivariant representation in ${\mathbb{R}}^{3 \times 3}$ . The output forms a frame if $\forall o \in O\left( 3\right) ,o \neq I,o\left( {L{E}_{i}}\right) \neq L{E}_{i}$ .
The frame produced by the function in Theorem 5.1 will not degenerate if the local environment has no symmetry elements, such as centers of inversion, axes of rotation, or mirror planes.
Building a frame for a symmetric local environment remains a problem in our current implementation, but it does not seriously hamper our model. Firstly, our model produces reasonable output even for symmetric input and is provably more expressive than the widely used SchNet [4] (see Appendix G). Secondly, symmetric molecules are rare and form a zero-measure set: in our two representative real-world datasets, less than ${0.01}\%$ of molecules (about ten out of several hundred thousand) are symmetric. Thirdly, symmetric geometry may still be captured with a degenerate frame. As shown in Figure 3a, water is a symmetric molecule, yet a frame with one vector can describe its geometry. Based on node identity features and relational pooling [26], we also propose a scheme in Appendix H that completely eliminates the expressivity loss caused by degeneration. However, for scalability, we do not use it in GNN-LF.
A message passing layer for frame generation. The existence of the frame generation function is proved in Theorem 5.1. Here we demonstrate how to implement it. There exists a universal framework for approximating $\mathrm{O}\left( 3\right)$-equivariant functions [15], which can be used to implement the function in Theorem 5.1. For scalability, we use a simplified form of that framework with empirically good performance:
$$
{\overrightarrow{E}}_{i} = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{g}^{\prime }\left( {{r}_{ij},{s}_{j}}\right) \circ \frac{{\overrightarrow{r}}_{ij}}{{r}_{ij}}, \tag{6}
$$
where ${g}^{\prime }$ maps invariant features and distance to invariant weights and the entire framework reduces to a message passing process. The derivation is detailed in Appendix B.
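
As a sanity check on Equation (6), the sketch below builds a frame from invariant weights times unit direction vectors and verifies its O(3)-equivariance: rotating the input coordinates rotates the frame accordingly. The weight function `g_weights` is a toy stand-in for the learned ${g}^{\prime}$, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(1)
W_g = rng.standard_normal((5, 3))          # fixed weights for the toy g'

def g_weights(d, s):
    """Invariant weights g'(r_ij, s_j): one weight per frame vector."""
    return np.tanh(np.concatenate([d, s], axis=1) @ W_g)

def build_frame(r_rel, s):
    """Local frame via Eq. (6): E_i = sum_j g'(r_ij, s_j) * r_ij / r_ij."""
    d = np.linalg.norm(r_rel, axis=1, keepdims=True)
    w = g_weights(d, s)                    # (n, 3) invariant channel weights
    return w.T @ (r_rel / d)               # rows are the three frame vectors

r = rng.standard_normal((6, 3))            # neighbor offsets r_ij
s = rng.standard_normal((6, 4))            # invariant neighbor features
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal map

E = build_frame(r, s)
E_rot = build_frame(r @ Q.T, s)            # rotate the input coordinates
assert np.allclose(E_rot, E @ Q.T)         # the frame co-rotates: O(3)-equivariant
```

Equivariance follows because distances and species are unchanged by the rotation, while each unit vector rotates; the weighted sum therefore rotates as a whole.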
Local frames vs. the global frame. With the message passing scheme in Equation (6), an individual frame, called a local frame, is produced for each atom. These local frames can also be summed to produce a global frame:
$$
\overrightarrow{E} = \mathop{\sum }\limits_{{i = 1}}^{n}{\overrightarrow{E}}_{i} \tag{7}
$$
The global frame can replace local frames and keep the invariance of energy prediction. All previous analysis will still be valid if the frame degeneration does not happen. However, the global frame is more likely to degenerate than local frames. As shown in Figure 3b, the benzene molecule has central symmetry and produces a zero global frame. However, when choosing each atom as the center, the central symmetry is broken, and a non-zero local frame can be produced. We further formalize this intuition and prove that the global frame is more likely to degenerate in Appendix I.
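
The benzene intuition can be reproduced in a few lines. For an inversion-symmetric point set (every atom has a same-species partner at the mirrored position), a frame vector of the form of Equation (6) built at each atom flips sign at the partner atom, so the sum in Equation (7) cancels exactly while each local frame stays non-zero. The weight function here is an arbitrary toy choice, not the learned one.

```python
import numpy as np

rng = np.random.default_rng(2)

# inversion-symmetric "molecule": every atom at r has a same-species
# partner at -r, as in benzene (Figure 3b)
half = rng.standard_normal((3, 3))
coords = np.vstack([half, -half])
species = np.array([0, 1, 2, 0, 1, 2])

def local_frame(center, coords, species):
    """One frame vector in the style of Eq. (6), built around `center`."""
    vec = np.zeros(3)
    for x, z in zip(coords, species):
        r = x - center
        d = np.linalg.norm(r)
        if d > 1e-9:                      # skip the center atom itself
            vec += np.exp(-d) * (z + 1) * r / d
    return vec

E_locals = np.array([local_frame(c, coords, species) for c in coords])
E_global = E_locals.sum(axis=0)           # Eq. (7)
assert np.allclose(E_global, 0.0)         # the global frame degenerates to zero
assert not np.allclose(E_locals[0], 0.0)  # local frames remain non-zero
```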
In conclusion, we can generate local frames with a message passing layer.
§ 6 GNN WITH LOCAL FRAME
We formally introduce our GNN with local frame (GNN-LF) model. The whole architecture is detailed in Appendix C. The time and space complexity are $O\left( {Nn}\right)$ , where $N$ is the number of atoms in the molecule, and $n$ is the maximum number of neighbor atoms of one atom.
Notations. Let $F$ denote the hidden dimension. We first convert the input features, coordinates $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ and atomic numbers $z \in {\mathbb{N}}^{N}$ , to a graph. The initial node feature ${s}_{i}^{\left( 0\right) } \in {\mathbb{R}}^{F}$ is an embedding of the atomic number ${z}_{i}$ . Edge ${ij}$ has two features: the edge weight ${w}_{ij}^{\left( e\right) } = \operatorname{cutoff}\left( {r}_{ij}\right)$ (where cutoff means the cutoff function), and the radial basis expansion of the distance ${s}_{ij}^{\left( e\right) } = \operatorname{rbf}\left( {r}_{ij}\right)$ . Edge weight ${w}_{ij}^{\left( e\right) }$ is not necessary for expressivity. However, to ensure that the energy prediction is a smooth function of coordinates, messages passed among atoms must be scaled with ${w}_{ij}^{\left( e\right) }$ [19]. These special functions are detailed in Appendix C.
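
The exact cutoff and rbf functions are given in the paper's Appendix C; the sketch below shows one common choice (a cosine cutoff and Gaussian radial bases) purely to illustrate the shapes of ${w}_{ij}^{(e)}$ and ${s}_{ij}^{(e)}$. The cutoff radius and basis count are assumed values.

```python
import numpy as np

R_C = 5.0  # cutoff radius (assumed value)

def cosine_cutoff(r, r_c=R_C):
    """Smooth cutoff weight: 1 at r = 0, decays to 0 at r = r_c."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def rbf_expand(r, n_rbf=16, r_c=R_C):
    """Expand a distance on a set of Gaussian radial bases."""
    centers = np.linspace(0.0, r_c, n_rbf)
    gamma = n_rbf / r_c
    return np.exp(-gamma * (r[..., None] - centers) ** 2)

r = np.array([0.0, 2.5, 5.0, 6.0])
w = cosine_cutoff(r)
assert w[0] == 1.0 and np.allclose(w[2:], 0.0)   # vanishes at and beyond r_c
assert rbf_expand(r).shape == (4, 16)            # one feature vector per distance
```

Scaling messages by a smooth cutoff like this is what makes the energy prediction a smooth function of the coordinates, as noted above.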
Producing frame. The message passing scheme for producing local frames implements Equation (6).
$$
{\overrightarrow{E}}_{i} = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{w}_{ij}^{\left( e\right) }\left( {{f}_{1}\left( {s}_{ij}^{\left( e\right) }\right) \odot {s}_{j}}\right) \circ \frac{{\overrightarrow{r}}_{ij}}{{r}_{ij}}, \tag{8}
$$
where ${f}_{1}$ is an MLP. Note that in implementation the frame ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{F \times 3}$ is not restricted to three vectors; the number of vectors equals the hidden dimension. This design needs no extra linear layer to change the hidden dimension. Moreover, our theoretical analysis remains valid because a frame in ${\mathbb{R}}^{F \times 3}$ can be considered an ensemble of $\frac{F}{3}$ frames in ${\mathbb{R}}^{3 \times 3}$.
Coordinate projection is as follows,
$$
{d}_{ij}^{1} = \frac{1}{{r}_{ij}}{\overrightarrow{r}}_{ij}{\overrightarrow{E}}_{i}^{T}. \tag{9}
$$
The projection in implementation is scaled by $\frac{1}{{r}_{ij}}$ to decouple the distance information in ${s}_{ij}^{\left( e\right) }$ .
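
A minimal check of this decoupling, under the simplifying assumption of a single 3-vector frame: with the $\frac{1}{r_{ij}}$ scaling, ${d}_{ij}^{1}$ depends only on the direction of ${\overrightarrow{r}}_{ij}$, and it is unchanged when molecule and frame rotate together.

```python
import numpy as np

rng = np.random.default_rng(3)
E_i = rng.standard_normal((3, 3))         # frame of atom i (rows are vectors)
r_ij = rng.standard_normal(3)

def coord_proj(r, E):
    """d1 = (r / |r|) @ E^T, Eq. (9): the direction expressed in the frame."""
    return (r / np.linalg.norm(r)) @ E.T

d1 = coord_proj(r_ij, E_i)
# scaling the bond leaves d1 unchanged: distance is carried by s_ij^(e) instead
assert np.allclose(d1, coord_proj(2.5 * r_ij, E_i))

# rotating coordinates and frame together also leaves the projection unchanged
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(d1, coord_proj(r_ij @ Q.T, E_i @ Q.T))
```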
Frame-frame projection. ${\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}$ is a large matrix, so we only use the diagonal elements of the projection. To keep expressivity, we first transform the frames with two ordinary linear layers.
$$
{d}_{ij}^{2} = \operatorname{diag}\left( {{W}_{1}{\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}{W}_{2}^{T}}\right) . \tag{10}
$$
Adding the projections to edge features, we get a graph with invariant features only.
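
Equation (10) can be computed without forming the full $F \times F$ matrix; the einsum below produces only the $F$ diagonal entries. A random orthogonal rotation applied to both frames confirms that ${d}_{ij}^{2}$ is invariant. Hidden dimension and weights here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
F = 8                                      # hidden dimension (assumed)
W1, W2 = rng.standard_normal((F, F)), rng.standard_normal((F, F))
E_i, E_j = rng.standard_normal((F, 3)), rng.standard_normal((F, 3))

def frame_frame(E_i, E_j, W1, W2):
    """d2 = diag(W1 E_j E_i^T W2^T), Eq. (10): F numbers, not an F x F matrix."""
    return np.einsum('ab,bc,dc,ad->a', W1, E_j, E_i, W2)

d2 = frame_frame(E_i, E_j, W1, W2)
assert d2.shape == (F,)
assert np.allclose(d2, np.diag(W1 @ E_j @ E_i.T @ W2.T))  # matches the dense form

# rotating the whole molecule rotates both frames and leaves d2 unchanged
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(d2, frame_frame(E_i @ Q.T, E_j @ Q.T, W1, W2))
```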
GNN working on the invariant graph. The message passing scheme follows the form in Theorem 4.1. Let ${s}_{i}^{\left( l\right) }$ denote the node representations produced by the ${l}^{\text{th}}$ message passing layer, with ${s}_{i}^{\left( 0\right) } = {s}_{i}$.
$$
{s}_{i}^{\left( l\right) } = \rho \left( {\mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{w}_{ij}^{\left( e\right) }\left( {{f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right) \odot {s}_{j}^{\left( l - 1\right) }}\right) }\right) , \tag{11}
$$
$$
{f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right) = {g}_{1}\left( {s}_{ij}^{\left( e\right) }\right) \odot {g}_{2}\left( {{d}_{ij}^{1},{d}_{ij}^{2}}\right) . \tag{12}
$$
Table 1: Results on the MD17 dataset. Units: energy (E) in kcal/mol and forces (F) in kcal/mol/Å.

| Molecule | | FCHL | SchNet | DimeNet | GemNet | PaiNN | NequIP | TorchMD | GNN-LF |
|---|---|---|---|---|---|---|---|---|---|
| Aspirin | E | 0.182 | 0.37 | 0.204 | - | 0.167 | - | 0.124 | 0.1342 |
| | F | 0.478 | 1.35 | 0.499 | 0.2168 | 0.338 | 0.348 | 0.255 | 0.2018 |
| Benzene | E | - | 0.08 | 0.078 | - | - | - | 0.056 | 0.0686 |
| | F | - | 0.31 | 0.187 | 0.1453 | - | 0.187 | 0.201 | 0.1506 |
| Ethanol | E | 0.054 | 0.08 | 0.064 | - | 0.064 | - | 0.054 | 0.0520 |
| | F | 0.136 | 0.39 | 0.230 | 0.0853 | 0.224 | 0.208 | 0.116 | 0.0814 |
| Malonaldehyde | E | 0.081 | 0.13 | 0.104 | - | 0.091 | - | 0.079 | 0.0764 |
| | F | 0.245 | 0.66 | 0.383 | 0.1545 | 0.319 | 0.337 | 0.176 | 0.1259 |
| Naphthalene | E | 0.117 | 0.16 | 0.122 | - | 0.166 | - | 0.085 | 0.1136 |
| | F | 0.151 | 0.58 | 0.215 | 0.0553 | 0.077 | 0.097 | 0.060 | 0.0550 |
| Salicylic acid | E | 0.114 | 0.20 | 0.134 | - | 0.166 | - | 0.094 | 0.1081 |
| | F | 0.221 | 0.85 | 0.374 | 0.1048 | 0.195 | 0.238 | 0.135 | 0.1005 |
| Toluene | E | 0.098 | 0.12 | 0.102 | - | 0.095 | - | 0.074 | 0.0930 |
| | F | 0.203 | 0.57 | 0.216 | 0.0600 | 0.094 | 0.101 | 0.066 | 0.0543 |
| Uracil | E | 0.104 | 0.14 | 0.115 | - | 0.106 | - | 0.096 | 0.1037 |
| | F | 0.105 | 0.56 | 0.301 | 0.0969 | 0.139 | 0.173 | 0.094 | 0.0751 |
| average rank | | 3.93 | 6.63 | 5.38 | 2.00 | 4.36 | 5.25 | 2.25 | 1.75 |
where $\rho$ is an MLP. We further use a filter decomposition design as follows.
The distance information ${s}_{ij}^{\left( e\right) }$ is easier to learn as it has been expanded with a set of bases, so a linear layer ${g}_{1}$ is enough. In contrast, projections need a more expressive MLP ${g}_{2}$ .
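
The filter-decomposed layer of Equations (11)-(12) can be sketched as follows. This is an illustrative stand-in, not the released implementation: `g2` is reduced to one nonlinear layer, and `d1`, `d2` are given small placeholder dimensions (in GNN-LF, ${d}_{ij}^{2}$ has $F$ entries).

```python
import numpy as np

rng = np.random.default_rng(5)
F, K = 8, 16                               # hidden dim and number of rbf bases
W_g1 = rng.standard_normal((K, F))         # g1: a single linear layer
W_g2 = rng.standard_normal((6, F))         # g2: a one-layer MLP stand-in

def message_layer(s_prev, w_e, s_e, d1, d2):
    """One layer of Eqs. (11)-(12) for a single center atom (sketch).

    s_prev: (n, F) neighbor embeddings, w_e: (n,) cutoff weights,
    s_e: (n, K) rbf features, d1, d2: (n, 3) projection features.
    """
    # Eq. (12): distance filter times projection filter
    filt = (s_e @ W_g1) * np.tanh(np.concatenate([d1, d2], axis=1) @ W_g2)
    # Eq. (11): cutoff-weighted aggregation, rho = tanh here
    return np.tanh((w_e[:, None] * filt * s_prev).sum(axis=0))

n = 5
out = message_layer(rng.standard_normal((n, F)), rng.random(n),
                    rng.standard_normal((n, K)), rng.standard_normal((n, 3)),
                    rng.standard_normal((n, 3)))
assert out.shape == (F,)
```

Because the filter depends only on edge features, it can be computed once and shared across layers, which is exactly the filter-sharing design discussed next.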
Sharing filters. Generating different filters ${f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right)$ for each message passing layer is time-consuming. Therefore, we share filters between different layers. Experimental results show that sharing filters leads to minor performance loss and significant scalability gain.
§ 7 EXPERIMENT
In this section, we compare GNN-LF with existing models and conduct an ablation study. We report the mean absolute error (MAE) on the test set (lower is better). All our results are averaged over three random splits. Baselines' results are taken from their papers. In tables, the best results are shown in bold and the second best are underlined. Experimental settings are detailed in Appendix D.
§ 7.1 MODELING PES
We first evaluate GNN-LF for modeling PES on the MD17 dataset [27], which consists of MD trajectories of small organic molecules. GNN-LF is compared with a hand-crafted descriptor model, FCHL [18]; invariant models SchNet [4], DimeNet [5], and GemNet [8]; a model using irreducible representations, NequIP [9]; and models using equivariant representations, PaiNN [7] and TorchMD [10]. The results are shown in Table 1. GNN-LF outperforms all baselines on 9/16 targets and achieves the second-best performance on the other 7 targets. Our model achieves 10% lower loss on average than GemNet, the best baseline. This outstanding performance verifies the effectiveness of the local frame method for modeling PES. Moreover, our model uses fewer parameters and only about 30% of the time and 10% of the GPU memory of the baselines, as shown in Appendix E.
§ 7.2 ABLATION STUDY
We perform an ablation study to verify our model designs. The results are shown in Table 2.
On average, ablating frame-frame projection (NoDir2) leads to 20% higher MAE, which verifies the necessity of frame-frame projection. The Global column replaces the local frames with the global frame, resulting in 100% higher loss, which verifies the advantage of local frames over the global frame. Ablating filter decomposition (NoDecomp) leads to 9% higher loss, indicating the advantage of processing distances and projections separately. Although using different filters for each message passing layer (NoShare) takes much more computation time (1.67×) and parameters (3.55×), it only leads to 0.01% lower loss on average, illustrating that sharing filters does little harm to expressivity.
Table 2: Ablation results on the MD17 dataset. Units: energy (E) in kcal/mol and forces (F) in kcal/mol/Å. GNN-LF does not use ${d}^{2}$ for some molecules, so the NoDir2 column is empty for them.

| Molecule | | GNN-LF | NoDir2 | Global | NoDecomp | GNN-LF | NoShare |
|---|---|---|---|---|---|---|---|
| Aspirin | E | 0.1342 | 0.1435 | 0.2280 | 0.1411 | 0.1342 | 0.1364 |
| | F | 0.2018 | 0.2799 | 0.6894 | 0.2622 | 0.2018 | 0.1979 |
| Benzene | E | 0.0686 | 0.0716 | 0.0972 | 0.0688 | 0.0686 | 0.0713 |
| | F | 0.1506 | 0.1583 | 0.3520 | 0.1499 | 0.1506 | 0.1507 |
| Ethanol | E | 0.0520 | 0.0532 | 0.0556 | 0.0518 | 0.0520 | 0.0514 |
| | F | 0.0814 | 0.0930 | 0.1465 | 0.0847 | 0.0814 | 0.0751 |
| Malonaldehyde | E | 0.0764 | 0.0776 | 0.0923 | 0.0765 | 0.0764 | 0.0790 |
| | F | 0.1259 | 0.1466 | 0.3194 | 0.1321 | 0.1259 | 0.1210 |
| Naphthalene | E | 0.1136 | 0.1152 | 0.1276 | 0.1254 | 0.1136 | 0.1168 |
| | F | 0.0550 | 0.0834 | 0.2069 | 0.0553 | 0.0550 | 0.0547 |
| Salicylic acid | E | 0.1081 | 0.1087 | 0.1224 | 0.1123 | 0.1081 | 0.1091 |
| | F | 0.1048 | 0.1328 | 0.2890 | 0.1399 | 0.1048 | 0.1012 |
| Toluene | E | 0.0930 | 0.0942 | 0.1000 | 0.0932 | 0.0930 | 0.0942 |
| | F | 0.0543 | 0.0770 | 0.1659 | 0.0695 | 0.0543 | 0.0519 |
| Uracil | E | 0.1037 | 0.1069 | 0.1075 | 0.1053 | 0.1037 | 0.1042 |
| | F | 0.0751 | 0.0964 | 0.1901 | 0.0825 | 0.0751 | 0.0754 |
Table 3: Results on the QM9 dataset.

| Target | Unit | SchNet | DimeNet++ | Cormorant | PaiNN | TorchMD | GNN-LF |
|---|---|---|---|---|---|---|---|
| $\mu$ | D | 0.033 | 0.0297 | 0.038 | 0.012 | 0.002 | 0.013 |
| $\alpha$ | ${a}_{0}^{3}$ | 0.235 | 0.0435 | 0.085 | 0.045 | 0.01 | 0.0353 |
| $E_{\text{HOMO}}$ | meV | 41 | 24.6 | 34 | 27.6 | 21.2 | 23.5 |
| $E_{\text{LUMO}}$ | meV | 34 | 19.5 | 38 | 20.4 | 17.8 | 17.0 |
| ${\Delta \epsilon }$ | meV | 63 | 32.6 | 61 | 45.7 | 38 | 37.1 |
| $\langle {R}^{2}\rangle$ | ${a}_{0}^{2}$ | 0.073 | 0.331 | 0.961 | 0.066 | 0.015 | 0.037 |
| ZPVE | meV | 1.7 | 1.21 | 2.027 | 1.28 | 2.12 | 1.19 |
| ${U}_{0}$ | meV | 14 | 6.32 | 22 | 5.85 | 6.24 | 5.30 |
| $U$ | meV | 19 | 6.28 | 21 | 5.83 | 6.30 | 5.24 |
| $H$ | meV | 14 | 6.53 | 21 | 5.98 | 6.48 | 5.48 |
| $G$ | meV | 14 | 7.56 | 20 | 7.35 | 7.64 | 6.84 |
| ${C}_{v}$ | cal/mol/K | 0.033 | 0.023 | 0.026 | 0.024 | 0.026 | 0.022 |
§ 7.3 OTHER CHEMICAL PROPERTIES
Though designed for PES, our model can also predict other properties directly. The QM9 dataset [28] consists of 134k stable small organic molecules; the task is to predict molecular properties from atomic numbers and coordinates. We compare our model with invariant models SchNet [4] and DimeNet++ [29]; a model using irreducible representations, Cormorant [6]; and models using equivariant representations, PaiNN [7] and TorchMD [10]. Results are shown in Table 3. Our model outperforms all other models on 7/12 tasks and achieves the second-best performance on 4 of the remaining 5 tasks, which illustrates that the local frame method has the potential to be applied to other fields.
§ 8 CONCLUSION
This paper proposes GNN-LF, a simple and effective molecular potential energy surface model. It introduces a novel local frame method to decouple the symmetry requirement and capture rich geometric information. In theory, we prove that even ordinary GNNs can reach maximum expressivity with the local frame method. Furthermore, we propose ways to construct local frames. In experiments, our model outperforms all baselines in both scalability (using only 30% time and 10% GPU memory) and accuracy (10% lower loss). Ablation study also verifies the effectiveness of our designs.
papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_md/Initial_manuscript.md
# Graph Reinforcement Learning for Network Control via Bi-Level Optimization
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
Dynamic network flow models have been extensively studied and widely used in the past decades to formulate many problems with great real-world impact, such as transportation, supply chain management, power grid control, and more. Within this context, time-expansion techniques currently represent a generic approach for solving control problems over dynamic networks. However, the complexity of these methods does not allow traditional approaches to scale to large networks, especially when these need to be solved recursively over a receding horizon (e.g., to yield a sequence of actions in model predictive control). Moreover, tractable optimization-based approaches are limited to simple linear deterministic settings, and are not able to handle environments with stochastic, non-linear, or unknown dynamics. In this work, we present dynamic network flow problems through the lens of reinforcement learning and propose a graph network-based framework that can handle a wide variety of problems and learn efficient algorithms without significantly compromising optimality. Instead of a naive and poorly-scalable formulation, in which agent actions (and thus network outputs) consist of actions on edges, we present a two-phase decomposition. The first phase consists of an RL agent specifying desired outcomes to the actions. The second phase exploits the problem structure to solve a convex optimization problem and achieve (as best as possible) these desired outcomes. This formulation leads to dramatically improved scalability and performance. We further highlight a collection of features that are potentially desirable to system designers, investigate design decisions, and present experiments showing the utility, scalability, and flexibility of our framework.
## 1 Introduction
Many economically critical real-world systems are well-modelled through the lens of control on graphs. Power generation [1-3]; road, rail, and air transportation systems [4, 5]; complex manufacturing systems, supply chain, and distribution networks [6, 7]; telecommunication networks [8-10]; and many other systems are fundamentally the problem of controlling flows of products, vehicles, or other quantities on graph-structured networks. Traditionally, these problems are approached through the definition of a dynamic network flow model (DNF) [11, 12]. Within this class of problems, Ford and Fulkerson [13, 14] proposed a generic approach, showing how one can use time-expansion techniques to (i) convert dynamic networks with discrete time horizon into static networks, and (ii) solve the problem using algorithms developed for static networks. However, this approach leads to networks that grow exponentially in the input size of the problem, thus not allowing traditional methods to scale to large networks. Moreover, the design of good heuristics or approximation algorithms for network flow problems often requires significant specialized knowledge and trial-and-error.
In this paper, we argue that data-driven strategies have the potential to automate this challenging, tedious process and learn efficient algorithms without compromising optimality. To do so, we propose a graph network-based reinforcement learning framework that can handle a wide variety of network control problems. Specifically, we introduce a bi-level formulation that leads to dramatically improved scalability and performance by combining the strengths of mathematical optimization and learning-based approaches.
## 2 Problem Setting: Dynamic Network Control
To outline our problem formulation, we first define the linear problem, which is a classic convex problem formulation. We will then define a nonlinear, dynamic, non-convex problem setting that better corresponds to real-world instances. Much of the classical flow control literature and practice substitutes the former linear problem for the latter nonlinear problem to yield tractable optimization problems [15-17]; we leverage the linear problem as an important algorithmic primitive. We consider the control of ${N}_{c}$ commodities on graphs, for example, vehicles in a transportation problem. A graph $\mathcal{G} = \{ \mathcal{V},\mathcal{E}\}$ is defined as a set $\mathcal{V}$ of ${N}_{v}$ nodes and a set $\mathcal{E}$ of ${N}_{e}$ ordered pairs of nodes $(i, j)$ called edges, each described by a traversal time ${t}_{ij}$ . We use ${\mathcal{N}}^{ + }\left( i\right) ,{\mathcal{N}}^{ - }\left( i\right) \subseteq \mathcal{V}$ for the sets of nodes having edges pointing away from or toward node $i$ , respectively. We use ${s}_{i}^{t}\left( k\right) \in \mathbb{R}$ to denote the quantity of commodity $k$ at node $i$ and time $t$ .${}^{1}$
The Linear Network Control Problem. Within the linear model, our commodity quantities evolve in time as

$$
{s}_{i}^{t + 1} = {s}_{i}^{t} + {f}_{i}^{t} + {e}_{i}^{t},\;\forall i \in \mathcal{V} \tag{1}
$$
where ${f}_{i}^{t}$ denotes the change due to flow of commodities along edges and ${e}_{i}^{t}$ denotes the change due to exchange between commodities at the same graph node. We refer to this expression as the conservation of flow. We also accrue money as
$$
{m}^{t + 1} = {m}^{t} + {m}_{f}^{t} + {m}_{e}^{t}, \tag{2}
$$
where ${m}_{f}^{t},{m}_{e}^{t} \in \mathbb{R}$ denote the money gained due to flows and exchanges respectively. Money can also be replaced with any other form of scalar reward, although it may be subject to e.g. non-negativity constraints and thus is different from the notion of reward in the RL problem. Our overall problem formulation will typically be to control flows and exchanges so as to maximize money over one or more steps subject to additional constraints such as, e.g., flow limitations through a particular edge. Please refer to Appendix A for a formal treatment of flow and exchange quantities, together with practical constraints within network control problems.
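
A toy numeric instance of the conservation-of-flow update in Equation (1) makes the bookkeeping concrete. All values here are invented for illustration; with a single commodity there are no inter-commodity exchanges, so $e^{t} = 0$.

```python
import numpy as np

# toy instance of Eq. (1): 3 nodes, a single commodity, a closed network
s = np.array([5.0, 2.0, 0.0])     # s^t: quantity at each node
f = np.array([-2.0, 1.0, 1.0])    # f^t: net change from flows along edges
e = np.zeros(3)                   # e^t: exchanges convert between commodities,
                                  # so with one commodity there are none

s_next = s + f + e                # conservation-of-flow update, Eq. (1)

# flows only move quantity between nodes, so the total is conserved
assert np.isclose(f.sum(), 0.0)
assert np.isclose(s_next.sum(), s.sum())
assert np.allclose(s_next, [3.0, 3.0, 1.0])
```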
The Nonlinear Dynamic Network Control Problem. The previous subsection presented a linear problem formulation that yields a convex optimization problem in the decision variables (the chosen flow and exchange values). However, the formulation is limited by the assumption of linearity, and thus fails to characterize a number of elements typical of real-world systems (please refer to Appendix A for a more complete treatment). Crucially, these nonlinear, time-varying, stochastic, or unknown elements make it difficult to apply the convex formulation derived in the previous subsection. A common approach is to solve a linearized version of the nonlinear problem at each timestep, a form of model predictive control (MPC), although this essentially discards some elements of the problem to achieve computational tractability. In this paper, we focus on solving the nonlinear problem (reflecting real, highly general problem statements) via a bi-level optimization approach, wherein the linear problem (which has been shown to be extremely useful in practice) is used as an inner control primitive.
## 3 Methodology: The Bi-Level Formulation
In this section we describe the bi-level formulation that is the primary contribution of this paper. We give a more formal Markov decision process (MDP) treatment of our problem setting, together with a discussion of practical elements for real-world problem formulations, in Appendix B.
The Bi-Level Formulation. We consider a discounted infinite-horizon MDP $\mathcal{M} = \left( {\mathcal{S},\mathcal{A}, P, R,\gamma }\right)$. Here, ${s}^{t} \in \mathcal{S}$ is the state and ${a}^{t} \in \mathcal{A}$ is the action, both at time $t$. The state in this setting comprises commodity values at nodes, as well as other available information; actions correspond to the aforementioned decision variables. The dynamics $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \left\lbrack {0,1}\right\rbrack$ are probabilistic, with $P\left( {{s}^{t + 1} \mid {s}^{t},{a}^{t}}\right)$ denoting a conditional distribution over ${s}^{t + 1}$. The reward function $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is real-valued, and not limited to strictly positive or negative rewards. Finally, we write the discount factor as $\gamma$, as is typical in the infinite-horizon RL formulation, although it is straightforward to instead consider a finite-horizon setting. Please refer to Appendix B.1 for further treatment of the MDP.
The overall goal of the reinforcement learning problem setting is to find a policy ${\widetilde{\pi }}^{ * } \in \widetilde{\Pi }$ (where $\widetilde{\Pi }$ is the space of realizable Markovian policies) such that ${\widetilde{\pi }}^{ * } \in \arg \mathop{\max }\limits_{{\widetilde{\pi } \in \widetilde{\Pi }}}{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack$ ,
---
${}^{1}$ We consider several reduced views over these quantities, and maintain several notational rules. We write ${s}_{i}^{t} \in {\mathbb{R}}^{{N}_{c}}$ to denote the vector of all commodities at node $i$; we write ${s}^{t}\left( k\right) \in {\mathbb{R}}^{{N}_{v}}$ to denote the vector of commodity $k$ at all nodes; we write ${s}_{i}\left( k\right) \in {\mathbb{R}}^{T}$ to denote commodity $k$ at node $i$ for all times $t$. We can also apply any combination of these notation rules, yielding for example $s \in {\mathbb{R}}^{T \times {N}_{c} \times {N}_{v}}$.
---
where $\tau = \left( {{s}^{0},{a}^{0},{s}^{1},{a}^{1},\ldots }\right)$ denotes the trajectory of states and actions. This policy formulation requires specifying a distribution over all flow/exchange actions, which may be an extremely large space. We instead consider a bi-level formulation

$$
{\pi }^{ * } \in \underset{\pi \in \Pi }{\arg \max }{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack \;\text{ s.t. }{a}^{t} = \operatorname{LCP}\left( {{\widehat{s}}^{t + 1},{s}^{t}}\right) \tag{3}
$$

where we consider a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$, which maps from the current state to a goal next state (or a subset of the state, such as commodity values only). This goal next state is used in the linear control problem $\left( {\operatorname{LCP}\left( {\cdot , \cdot }\right) }\right)$, which leverages a (slightly modified) one-step version of the linear problem formulation of Section 2 to map from the desired next state to an action. Thus, the resulting formulation is a bi-level optimization problem, whereby the policy $\widetilde{\pi }$ is the composition of the policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ and the solution to the linear control problem. Specifically, given a sample of ${\widehat{s}}^{t + 1}$ from the stochastic policy, we select concrete flow and exchange actions by solving the linear control problem, defined as
$$
\begin{aligned}
\underset{{a}^{t}}{\arg \min }\; & d\left( {{\widehat{s}}^{t + 1},{s}^{t + 1}}\right) - R\left( {{s}^{t},{a}^{t}}\right) && \text{(4a)}\\
\text{s.t. } & \text{Conservation of flow (1); Net flow (5); Flow reward (6);} && \text{(4b)}\\
& \text{Exchange conditions (7); Other constraints, e.g., (8) or (9)} && \text{(4c)}
\end{aligned}
$$
where $d\left( {\cdot , \cdot }\right)$ is a chosen convex metric which penalizes deviation from the desired next state. The resultant problem, consisting of a convex objective subject to linear constraints, is convex and thus may be solved easily and inexpensively to choose actions ${a}^{t}$, even for very large problems.
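As a sketch of how such an inner problem can be posed, the snippet below solves a tiny instance of the linear control problem with an L1 distance $d$, encoded as a linear program via slack variables. The graph, costs, and capacity are illustrative assumptions, and scipy's `linprog` stands in for whatever LP solver one would use in practice.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LCP: choose edge flows minimizing ||s_goal - s_next||_1 + cost^T f,
# where s_next = s_t + A f and A is the (node x edge) incidence matrix.
A = np.array([[-1.0, -1.0],   # edge 0: node 0 -> 1, edge 1: node 0 -> 2
              [ 1.0,  0.0],
              [ 0.0,  1.0]])
s_t    = np.array([4.0, 0.0, 0.0])   # current commodity values
s_goal = np.array([0.0, 3.0, 1.0])   # goal next state sampled from the policy
edge_cost = np.array([0.01, 0.01])   # linear flow cost standing in for -R
cap = 10.0                           # flow capacity per edge

n_v, n_e = A.shape
# Decision variables x = [flows f (n_e), slacks u (n_v)]; minimize c^T f + 1^T u.
c = np.concatenate([edge_cost, np.ones(n_v)])
d = s_goal - s_t
# Encode u >= d - A f and u >= -(d - A f) as A_ub x <= b_ub.
A_ub = np.block([[-A, -np.eye(n_v)],
                 [ A, -np.eye(n_v)]])
b_ub = np.concatenate([-d, d])
bounds = [(0, cap)] * n_e + [(0, None)] * n_v

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
flows = res.x[:n_e]
print(flows)   # flows reaching the goal state exactly: [3. 1.]
```

In the paper's setting the decision vector would also include exchange weights and the full constraint set (5)-(9); the structure of the encoding is unchanged.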
As is standard in reinforcement learning, we aim to solve this problem by learning the policy from data. This may be in the form of online learning [18] or learning from offline data [19]. There are large bodies of work on both problems, and our presentation will generally aim to be as agnostic as possible to the underlying reinforcement learning algorithm used. Of critical importance is the fact that the majority of reinforcement learning algorithms use likelihood ratio gradient estimation (typically referred to as the REINFORCE gradient estimator in RL [20]), which does not require path-wise backpropagation through the inner problem.
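The following minimal sketch shows why this matters: with the score-function (REINFORCE) estimator, the inner solver only ever needs to be evaluated, never differentiated. The quadratic "reward" and Gaussian policy below are stand-ins chosen purely for illustration.

```python
import numpy as np

# Likelihood-ratio (REINFORCE) gradient: grad E[R] is estimated as
# E[ grad log pi(goal) * R ], treating solver + environment as a black box.
rng = np.random.default_rng(0)

def inner_solver_and_reward(goal):
    # Stand-in for LCP(goal, s) followed by an environment step.
    return -np.sum((goal - 1.0) ** 2)

mu, sigma = np.zeros(2), 0.5             # Gaussian policy over 2-d goal states
for _ in range(2000):
    grad = np.zeros_like(mu)
    for _ in range(32):                  # Monte Carlo batch
        goal = mu + sigma * rng.standard_normal(2)
        r = inner_solver_and_reward(goal)
        grad += (goal - mu) / sigma**2 * r   # score function times return
    mu += 1e-3 * grad / 32               # stochastic gradient ascent

print(mu)  # approaches the reward-maximizing goal, [1, 1]
```

In practice one would add a baseline to reduce variance, but the key property is unchanged: no gradient of the LP solver is required.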
We also note that our formulation assumes access to a model (the linear problem) that is a reasonable approximation of the true dynamics over short horizons. This short-term correspondence is central to our formulation: we exploit exact optimization where it is useful, and defer the handling of nonlinear effects, which compound over time, to the learned policy. We assume this model is known in our experiments, but it could also be identified independently. Please see Appendix C.1, C.2, and C.4 for a broader discussion.
Network Architecture. To exploit the network structure of the problem we introduce a policy graph neural network architecture based on message passing neural networks [21] (Appendix B.2). As introduced in this section, the goal of RL is to learn a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ mapping to goal next states. Concretely, to obtain a valid probability density over next states, we define the output of our policy network to represent the concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{v}}$ of a Dirichlet distribution, such that ${\widehat{s}}^{t + 1} \sim \operatorname{Dir}\left( {{\widehat{s}}^{t + 1} \mid \alpha }\right)$ , although alternate output formulations are possible.
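A minimal sketch of such a Dirichlet output head follows, with a single linear layer standing in for the MPNN; the layer sizes and softplus mapping to positive concentrations are illustrative assumptions.

```python
import numpy as np

# Policy head sketch: map network features to Dirichlet concentration
# parameters alpha > 0, then sample a goal distribution over nodes.
rng = np.random.default_rng(0)

def softplus(x):
    return np.log1p(np.exp(x))  # smooth map to strictly positive values

n_nodes, n_features = 4, 8
W = rng.standard_normal((n_features, n_nodes)) * 0.1   # stand-in for an MPNN
node_features = rng.standard_normal(n_features)

alpha = softplus(node_features @ W) + 1e-3   # concentrations in R_+^{N_v}
s_goal = rng.dirichlet(alpha)                # goal commodity share per node

print(alpha, s_goal)  # s_goal is non-negative and sums to 1
```

The simplex-valued sample is convenient here because it can be interpreted as the desired share of the commodity at each node; other positive-support distributions could be substituted.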
## 4 Experiments
In this section, we compare against a number of benchmarks on an instance of network control with great real-world impact: the minimum cost flow problem. In this context, the goal is to control commodities so as to move them from one or more source nodes to one or more sink nodes in the minimum time possible. Appendix E provides further details on both benchmarks and environments.
Minimum cost flow through message passing. In this first experiment, we consider 3 different environments (Fig. 1), such that different topologies enforce a different number of required hops of message passing between source and sink nodes to select the best path. Results in Table 1 (2-hop, 3-hop, 4-hop) show that MPNN-RL achieves at least 87% of oracle performance. Table 1 further shows that agents based on graph convolutions (i.e., GCN [22], GAT [23]) fail to learn an effective flow optimization strategy. As in Xu et al. [24], we attribute this to the algorithmic alignment between the computational structure of MPNNs and the kind of computations needed to solve traditional network optimization problems (see Appendix C.3 for further discussion).
Dynamic traversal times. In this experiment, we define time-dependent traversal times. In Fig. 2 and Table 1 (Dyn tt) we measure results on a dynamic network characterized by two change-points, i.e., time steps where the optimal path changes because of a change in traversal times. Results show that the proposed MPNN-RL achieves over 99% of oracle performance.
Table 1: Average performance across multiple environments over 100 test episodes
<table><tr><td colspan="2"/><td>Random</td><td>MLP-RL</td><td>GCN-RL</td><td>GAT-RL</td><td>MPNN-RL (ours)</td><td>Oracle</td></tr><tr><td rowspan="2">2-hops</td><td>Avg. Reward</td><td>63</td><td>387</td><td>201</td><td>146</td><td>576</td><td>642</td></tr><tr><td>% Oracle</td><td>9.9%</td><td>60.2%</td><td>31.3%</td><td>22.9%</td><td>89.7%</td><td>-</td></tr><tr><td rowspan="2">3-hops</td><td>Avg. Reward</td><td>1013</td><td>1084</td><td>1385</td><td>1257</td><td>1803</td><td>2014</td></tr><tr><td>% Oracle</td><td>50.3%</td><td>53.8%</td><td>68.7%</td><td>62.4%</td><td>89.5%</td><td>-</td></tr><tr><td rowspan="2">4-hops</td><td>Avg. Reward</td><td>2033</td><td>2185</td><td>2303</td><td>2198</td><td>2807</td><td>3223</td></tr><tr><td>% Oracle</td><td>63.1%</td><td>67.8%</td><td>71.4%</td><td>68.2%</td><td>87.1%</td><td>-</td></tr><tr><td rowspan="2">Dyn tt</td><td>Avg. Reward</td><td>-546</td><td>-18</td><td>437</td><td>400</td><td>2306</td><td>2327</td></tr><tr><td>% Oracle</td><td>-23.4%</td><td>-0.7%</td><td>18.7%</td><td>17.1%</td><td>99.1%</td><td>-</td></tr><tr><td rowspan="2">Dyn top</td><td>Avg. Reward</td><td>810</td><td>N/A</td><td>1016</td><td>827</td><td>1599</td><td>1904</td></tr><tr><td>% Oracle</td><td>42.5%</td><td>N/A</td><td>53.4%</td><td>43.4%</td><td>83.9%</td><td>-</td></tr><tr><td rowspan="3">Capacity</td><td>Avg. Reward</td><td>1495</td><td>1498</td><td>1557</td><td>1503</td><td>2145</td><td>2389</td></tr><tr><td>% Oracle</td><td>62.6%</td><td>62.7%</td><td>65.2%</td><td>62.9%</td><td>89.8%</td><td>-</td></tr><tr><td>Success Rate</td><td>82%</td><td>82%</td><td>87%</td><td>80%</td><td>87%</td><td>88%</td></tr><tr><td rowspan="2">Multi-com</td><td>Avg. Reward</td><td>2191</td><td>4045</td><td>3278</td><td>3206</td><td>6986</td><td>9701</td></tr><tr><td>% Oracle</td><td>22.5%</td><td>41.7%</td><td>33.8%</td><td>33.0%</td><td>72.0%</td><td>-</td></tr></table>
Dynamic topology. In this experiment we assume a time-dependent topology, i.e., nodes and edges can be dropped or added during an episode. This case is substantially different from what most traditional approaches are able to handle: the locality of MPNN agents, together with the one-step implicit planning of RL, enables our framework to deal with multiple graph configurations during the same episode. Fig. 3 and Table 1 (Dyn top) show that MPNN-RL achieves 83.9% of oracle performance, clearly outperforming the other benchmarks. Crucially, these results highlight how agents based on MLPs yield highly inflexible network controllers, limited to a fixed topology.
Capacity constraints. In this experiment, we relax the assumption that capacities ${\bar{f}}_{ij}$ are always able to accommodate any flow on the graph. Compared to previous sections, the lower capacities introduce the possibility of infeasible states. To measure this, the Success Rate reports the percentage of episodes that terminate successfully. Results in Table 1 (Capacity) highlight that MPNN-RL achieves 89.8% of oracle performance while successfully terminating 87% of episodes. Qualitatively, Fig. 4 shows a visualization of the policy for a specific test episode. The plots show how MPNN-RL learns the effect of capacity on the optimal strategy, allocating flow to a different node when the corresponding edge approaches its capacity limit.
Multi-commodity. In this scenario, we extend the architecture to deal with multiple commodities and source-sink combinations. Results in Table 1 (Multi-com) and Fig. 5 show that MPNN-RL effectively recovers distinct policies for each policy head, and is thus able to successfully operate multi-commodity flows within the same network.
Computational analysis. We study the computational cost of MPNN-RL compared to MPC-based solutions. In Fig. 6, we compare the time necessary to compute a single network flow decision across varying dimensions of the underlying graph, ranging from 10 up to 400 nodes. As verified by this experiment, the learning-based approach exhibits computational cost that scales linearly with the number of nodes and graph connectivity, without significant decay in performance.
## 5 Outlook and Limitations
Research in network flow models, in both theory and practice, is largely scattered across the control, management science, and optimization literature, potentially hindering scientific progress. In this work, we propose a general framework that could enable learning-based approaches to help address the open challenges in this space: handling nonlinear dynamics and scalability, among others. In the hope of fostering a unification of tools among the reinforcement learning and network control communities, we aimed to (i) keep the exposition as agnostic as possible to the underlying RL algorithm, and (ii) showcase the versatility of our framework through numerous controlled experiments. However, what we present here should, in our opinion, be considered exciting preliminary results, aiming to gather more traction in the ML community towards the solution of hugely impactful real-world problems in the field of network control. Crucially, before learning-based frameworks can be considered a concrete alternative to current standards, several promising research directions, which this work opens, must be pursued to extend these concepts to large-scale applications.
## References

[1] Daniel Bienstock, Michael Chertkov, and Sean Harnett. Chance-constrained optimal power flow: Risk-aware network control under uncertainty. SIAM Review, 56(3):461-495, 2014.

[2] Hermann W. Dommel and William F. Tinney. Optimal power flow solutions. IEEE Transactions on Power Apparatus and Systems, (10):1866-1876, 1968.

[3] M. Huneault and Francisco D. Galiana. A survey of the optimal power flow literature. IEEE Transactions on Power Systems, 6(2):762-770, 1991.

[4] Y. Wang, W. Y. Szeto, K. Han, and T. Friesz. Dynamic traffic assignment: A review of the methodological advances for environmentally sustainable road transportation applications. Transportation Research Part B: Methodological, 111:370-394, 2018.

[5] D. Gammelli, K. Yang, J. Harrison, F. Rodrigues, F. C. Pereira, and M. Pavone. Graph neural network reinforcement learning for autonomous mobility-on-demand systems. In Proc. IEEE Conf. on Decision and Control, 2021.

[6] Haralambos Sarimveis, Panagiotis Patrinos, Chris D. Tarantilis, and Chris T. Kiranoudis. Dynamic modeling and control of supply chain systems: A review. Computers & Operations Research, 35(11):3530-3561, 2008.

[7] Marcus A. Bellamy and Rahul C. Basole. Network analysis of supply chain systems: A systematic review and future research. Systems Engineering, 16(2):235-249, 2013.

[8] Gabriel Jakobson and Mark Weissman. Real-time telecommunication network management: Extending event correlation with temporal constraints. In International Symposium on Integrated Network Management, pages 290-301, 1995.

[9] John Edward Flood. Telecommunication Networks. IET, 1997.

[10] Vladimir Popovskij, Alexander Barkalov, and Larysa Titarenko. Control and Adaptation in Telecommunication Systems: Mathematical Foundations, volume 94. Springer Science & Business Media, 2011.

[11] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms and Applications. Prentice Hall, 1993.

[12] B. Kotnyek. An annotated overview of dynamic network flows. INRIA, 2003.

[13] L. R. Ford and D. R. Fulkerson. Constructing maximal dynamic flows from static flows. Operations Research, 6(3):419-433, 1958.

[14] L. R. Ford and D. R. Fulkerson. Flows in Networks. Princeton Univ. Press, 1962.

[15] Fangxing Li and Rui Bo. DCOPF-based LMP simulation: Algorithm, comparison with ACOPF, and sensitivity. IEEE Transactions on Power Systems, 22(4):1475-1485, 2007.

[16] Rick Zhang, Federico Rossi, and Marco Pavone. Model predictive control of autonomous mobility-on-demand systems. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1382-1389, 2016.

[17] Peter B. Key and Graham A. Cope. Distributed dynamic routing schemes. IEEE Communications Magazine, 28(10):54-58, 1990.

[18] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2nd edition, 2018.

[19] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv:2005.01643, 2020.

[20] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.

[21] J. Gilmer, S. Schoenholz, P. Riley, O. Vinyals, and G. Dahl. Neural message passing for quantum chemistry. In Int. Conf. on Machine Learning, 2017.

[22] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Int. Conf. on Learning Representations, 2017.

[23] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In Int. Conf. on Learning Representations, 2018.

[24] K. Xu, J. Li, M. Zhang, S. Du, K. Kawarabayashi, and S. Jegelka. What can neural networks reason about? In Int. Conf. on Learning Representations, 2020.

[25] H. Markowitz. Portfolio selection. Journal of Finance, 7(1):77-91, 1952.

[26] H. U. Gerber and G. Pafumi. Utility functions: From risk theory to finance. North American Actuarial Journal, 2(3):74-91, 1998.

[27] V. Konda and J. Tsitsiklis. Actor-critic algorithms. In Conf. on Neural Information Processing Systems, 1999.

[28] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

[29] V. Mnih, A. Puigdomenech, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Int. Conf. on Machine Learning, 2016.

[30] D. Gammelli, K. Yang, J. Harrison, F. Rodrigues, F. C. Pereira, and M. Pavone. Graph meta-reinforcement learning for transferable autonomous mobility-on-demand. In ACM Int. Conf. on Knowledge Discovery and Data Mining, 2022.

[31] K. Murota. Matrices and Matroids for Systems Analysis. Springer Science & Business Media, 1st edition, 2009.

[32] Mario V. F. Pereira and Leontina M. V. G. Pinto. Multi-stage stochastic optimization applied to energy planning. Mathematical Programming, 52(1):359-375, 1991.

[33] Justin Dumouchelle, Rahul Patel, Elias B. Khalil, and Merve Bodur. Neur2SP: Neural two-stage stochastic programming. arXiv:2205.12006, 2022.

[34] Scott Fujimoto, David Meger, Doina Precup, Ofir Nachum, and Shixiang Shane Gu. Why should I trust you, Bellman? The Bellman error is a poor replacement for value error. arXiv:2201.12417, 2022.

[35] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2nd edition, 2014.

[36] J. Rawlings and D. Mayne. Model Predictive Control: Theory and Design. Nob Hill Publishing, 2013.

[37] Tom Van de Wiele, David Warde-Farley, Andriy Mnih, and Volodymyr Mnih. Q-learning in enormous action spaces via amortized approximate maximization. arXiv:2001.08116, 2020.

[38] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Int. Conf. on Machine Learning, 2017.

[39] James Harrison, Apoorva Sharma, and Marco Pavone. Meta-learning priors for efficient online Bayesian regression. In Workshop on the Algorithmic Foundations of Robotics, pages 318-337, 2018.

[40] A. Agrawal, S. Barratt, S. Boyd, E. Busseti, and W. M. Moursi. Differentiating through a conic program. Online, 2019.

[41] A. Agrawal, S. Barratt, S. Boyd, and B. Stellato. Learning convex optimization control policies. In Learning for Dynamics & Control, 2019.

[42] B. Amos and J. Z. Kolter. OptNet: Differentiable optimization as a layer in neural networks. In Int. Conf. on Machine Learning, 2017.

[43] Benoit Landry, Joseph Lorenzetti, Zachary Manchester, and Marco Pavone. Bilevel optimization for planning through contact: A semidirect method. In The International Symposium of Robotics Research, pages 789-804, 2019.

[44] Luke Metz, Niru Maheswaranathan, Jeremy Nixon, Daniel Freeman, and Jascha Sohl-Dickstein. Understanding and correcting pathologies in the training of learned optimizers. In Int. Conf. on Machine Learning, pages 4556-4565, 2019.

[45] Aviv Tamar, Garrett Thomas, Tianhao Zhang, Sergey Levine, and Pieter Abbeel. Learning from the hindsight plan: Episodic MPC improvement. In Proc. IEEE Conf. on Robotics and Automation, pages 336-343, 2017.

[46] Brandon Amos, Ivan Jimenez, Jacob Sacks, Byron Boots, and J. Zico Kolter. Differentiable MPC for end-to-end planning and control. Conf. on Neural Information Processing Systems, 31, 2018.

[47] Brian Ichter, James Harrison, and Marco Pavone. Learning sampling distributions for robot motion planning. In Proc. IEEE Conf. on Robotics and Automation, pages 7087-7094, 2018.

[48] Thomas Power and Dmitry Berenson. Variational inference MPC using normalizing flows and out-of-distribution projection. arXiv:2205.04667, 2022.

[49] Brandon Amos and Denis Yarats. The differentiable cross-entropy method. In Int. Conf. on Machine Learning, pages 291-302, 2020.

[50] Jacob Sacks and Byron Boots. Learning to optimize in model predictive control. In Proc. IEEE Conf. on Robotics and Automation, pages 10549-10556, 2022.

[51] Xuesu Xiao, Tingnan Zhang, Krzysztof Marcin Choromanski, Tsang-Wei Edward Lee, Anthony Francis, Jake Varley, Stephen Tu, Sumeet Singh, Peng Xu, Fei Xia, Leila Takayama, Roy Frostig, Jie Tan, Carolina Parada, and Vikas Sindhwani. Learning model predictive controllers with real-time attention for real-world navigation. In Conf. on Robot Learning, 2022.

[52] Priya Donti, Brandon Amos, and J. Zico Kolter. Task-based end-to-end model learning in stochastic optimization. Conf. on Neural Information Processing Systems, 30, 2017.

[53] Peter W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990.

[54] A. Paszke, S. Gross, F. Massa, A. Lerer, et al. PyTorch: An imperative style, high-performance deep learning library. arXiv:1912.01703, 2019.

[55] IBM. ILOG CPLEX User's Guide. IBM ILOG, 1987.

[56] R. Zhang, F. Rossi, and M. Pavone. Model predictive control of autonomous mobility-on-demand systems. In Proc. IEEE Conf. on Robotics and Automation, 2016.

## A Dynamic Network Control
In this section we make concrete both our linear and nonlinear problem formulations.
Flows. We will denote the flow of commodity $k$ along edge $(i, j)$ by ${f}_{ij}^{t}\left( k\right)$. From these flows, we have

$$
{f}_{i}^{t} = \mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ - }\left( i\right) }}{f}_{ji}^{t} - \mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ + }\left( i\right) }}{f}_{ij}^{t},\;\forall i \in \mathcal{V} \tag{5}
$$

which is the net flow (inflows minus outflows). As discussed, associated with each flow is a cost ${m}_{ij}^{t}\left( k\right)$ . Note that given this formulation, the total cost for all commodities can be written as ${m}_{ij}^{t} \cdot {f}_{ij}^{t} = {\left( {m}_{ij}^{t}\right) }^{\top }{f}_{ij}^{t}$ . Thus, we can write the total flow cost at time $t$ as

$$
{m}_{f}^{t} = \mathop{\sum }\limits_{{i \in \mathcal{V}}}\left( {\mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ - }\left( i\right) }}{m}_{ji}^{t} \cdot {f}_{ji}^{t} - \mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ + }\left( i\right) }}{m}_{ij}^{t} \cdot {f}_{ij}^{t}}\right) . \tag{6}
$$

Exchanges. To define our exchange relations and their effect on commodity quantities and costs, we denote the effect which exchanges have on money at node $i$ by ${m}_{i}^{t}$. Thus, we have ${m}_{e}^{t} = \mathop{\sum }\limits_{{i \in \mathcal{V}}}{m}_{i}^{t}$. The exchange relation takes the form

$$
\left\lbrack \begin{matrix} {e}_{i}^{t} \\ {m}_{i}^{t} \end{matrix}\right\rbrack = {E}_{i}^{t}{w}_{i}^{t} \tag{7}
$$

where ${E}_{i}^{t} \in {\mathbb{R}}^{\left( {{N}_{c} + 1}\right) \times {N}_{e}\left( i\right) }$ is an exchange matrix and ${w}_{i}^{t} \in {\mathbb{R}}^{{N}_{e}\left( i\right) }$ are the weights for each exchange. Each column of this exchange matrix denotes an (exogenous) exchange rate between commodities; for example, a column ${\left\lbrack -1,1,{0.1}\right\rbrack }^{\top }$ states that one unit of commodity one is exchanged for one unit of commodity two plus 0.1 units of money. Thus, the choice of exchange weights ${w}_{i}^{t}$ uniquely determines the exchanges ${e}_{i}^{t}$ and the money change due to exchanges, ${m}_{e}^{t}$.
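A small numerical sketch of Eq. (7), using the example column above with $N_c = 2$ commodities; the weight value is an assumption for illustration.

```python
import numpy as np

# Exchange relation at one node: [e_i; m_i] = E_i w_i (Eq. 7).
# Column [-1, 1, 0.1]: trade 1 unit of commodity one for 1 unit of
# commodity two plus 0.1 units of money.
E = np.array([[-1.0],
              [ 1.0],
              [ 0.1]])          # shape (N_c + 1, N_e(i)) with N_c = 2, N_e(i) = 1
w = np.array([2.0])             # execute the exchange twice

out = E @ w
e_i, m_i = out[:-1], out[-1]    # commodity changes and money change at node i
print(e_i, m_i)                 # [-2.  2.] and 0.2
```

With multiple exchange columns, the same matrix-vector product simultaneously mixes all available exchange rates weighted by $w_i^t$.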
Linear Constraints. We may impose additional (linear) constraints on the problem beyond the conservation of flow we have discussed so far. There are a few common examples that we may use in several applications. A common constraint is non-negativity of commodity values, which we may express as

$$
{s}_{i}^{t} \geq 0,\;\forall i, t. \tag{8}
$$

Note that this inequality is defined element-wise. A similar constraint can be defined for money. We may also impose constraints on flows and exchanges; thus, we may for example limit the flow of all commodities through a particular edge via

$$
\mathop{\sum }\limits_{{k = 1}}^{{N}_{c}}{f}_{ij}^{t}\left( k\right) \leq {\bar{f}}_{ij}^{t} \tag{9}
$$

where this sum could also be weighted per-commodity. These linear constraints are only a limited selection of some common examples; the space of possible constraints is extremely general and the particular choice of constraints is problem-specific.
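As a quick sketch, the two example constraints above can be checked on candidate quantities as follows; all numbers are illustrative.

```python
import numpy as np

# Check Eq. (8) (element-wise non-negativity of commodity values) and
# Eq. (9) (per-edge flow capacity summed over commodities) on a candidate.
s_next = np.array([[1.0, 0.0],      # commodity values per (node, commodity)
                   [3.0, 2.0]])
f_edge = np.array([1.5, 0.5])       # flow of each commodity on one edge (i, j)
f_cap = 2.5                         # capacity \bar{f}_{ij}

ok_nonneg = np.all(s_next >= 0)          # Eq. (8), element-wise
ok_capacity = f_edge.sum() <= f_cap      # Eq. (9): 2.0 <= 2.5
print(ok_nonneg, ok_capacity)            # True True
```

Inside the linear control problem these checks become rows of the LP constraint matrix rather than post-hoc tests, but the quantities involved are identical.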
Elements breaking the linearity assumptions. Real-world systems are characterized by many factors that cannot be reliably modeled through the linear problem described in Section 2. In what follows, we discuss a (non-exhaustive) list of factors potentially breaking such linearity assumptions:
- Stochasticity. Various stochastic elements can impact the problem. Commodity transitions in the previous section were defined as deterministic; in practice, many problems include elements of stochasticity in these transitions. For example, random demand may reduce supply by an unpredictable amount; vehicles may be randomly added in a transportation problem; and packages may be lost in a supply chain setting. In addition to these state transitions, constraints may be stochastic as well: flow times or edge capacities may be stochastic, as when a road is shared with other users, or costs for flows and exchanges may be stochastic.
- Nonlinearity. Various elements of the state evolution, constraints, or cost function may be nonlinear. The objective may be chosen to be a risk-sensitive or robust metric applied to the distribution of outcomes, as is common in financial problems. The state evolution may have natural saturating behavior (e.g. automatic load shedding). Indeed, many real constraints will have natural nonlinear behavior.
- Time-varying costs and constraints. Similar to the stochastic case, various quantities may be time-varying. However, it is possible that they are time-varying in a structured way, as opposed to randomly. For example, demand for transportation may vary over the time of day, or purchasing costs may vary over the year.
- Unknown dynamics elements. While not a major focus of discussion in the paper up to this point, elements of the underlying dynamics may be partially or wholly unknown. Our reinforcement learning formulation is capable of addressing this by learning policies directly from data, in contrast to standard control techniques.
## B Methodology
In this section we present the full MDP formulation (including definitions of the state and action spaces) and discuss algorithmic details.
### B.1 The Dynamic Network MDP
The problem setting for the full, dynamic network problem is best formulated, in the general case, as a partially-observed MDP. We will present it as a (fully-observed) Markov decision process, where input features beyond commodity values are chosen by the user; strategies for better handling partial observability are discussed later in this section.
We consider a discounted infinite-horizon MDP $\mathcal{M} = \left( {\mathcal{S},\mathcal{A}, P, R,\gamma }\right)$. Here, ${s}^{t} \in \mathcal{S}$ is the state and ${a}^{t} \in \mathcal{A}$ is the action, both at time $t$. The dynamics $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \left\lbrack {0,1}\right\rbrack$ are probabilistic, with $P\left( {{s}^{t + 1} \mid {s}^{t},{a}^{t}}\right)$ denoting a conditional distribution over ${s}^{t + 1}$. The reward function $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is real-valued, and not limited to strictly positive or negative rewards. Finally, we write the discount factor as $\gamma$, as is typical in the infinite-horizon RL formulation, although it is straightforward to instead consider a finite-horizon setting.
State and state space. We will, generally, define the state to contain enough information to yield "good" Markovian policies. More formally, real-world network control problems are typically highly partially-observed; many features of the world impact the state evolution. However, a small number of features are typically of primary importance, and the impact of the other partially-observed elements can be modeled as stochastic disturbances.
For our bi-level formulation, some state elements are required. At each timestep, the formulation needs the commodity values ${s}^{t}$ and the constraint values, such as costs, exchange rates, and flow capacities. If the graph topology is time-varying, the connectivity at time $t$ is also needed. In other words, the only strictly required state elements are those that fully define the one-step linear control problem. We refer to the constraint values as edge state elements: more precisely, the state elements discussed so far are properties either of the graph nodes (commodity values) or of the edges (such as flow constraints). This distinction is of critical importance in our graph neural network architecture.
In addition to these state elements, additional information may be incorporated. Generally, the choice of state elements will depend on the information available to a system designer (what can be measured) and on the particular problem setting. Possible examples of further state elements include forecasts of prices, exchange rates, or flow constraints at future times; exchange rates, for example, incorporate notions of demand or supply. We note that such forecasts are almost always available, as they are necessary for solving the multi-step planning problem.
Action and action space. As discussed in Section 2, the action is defined as all flows and exchange weights at all nodes/edges, ${a}^{t} = \left( {{f}^{t},{w}^{t}}\right)$ .
Dynamics. The dynamics of the MDP, $P$ , describe the evolution of state elements. We split our discussion into two parts: the dynamics associated with the commodity time evolution and the dynamics of the non-commodity elements.
The commodity dynamics are assumed to be reasonably well-modeled by the conservation of flow, (1), subject to the constraints; this forms the basis of the bi-level approach we describe in the next subsection. The primary element not included in the conservation of flow expression is possible stochasticity. For example, in transportation problems, vehicles may randomly drop out of service.
The non-commodity dynamics are assumed to be substantially more complex. For example, prices to buy or sell (reflected in exchange rates) may have a complex dependency on past sales, current demand, and current supply (commodity values), as well as random exogenous factors. Thus, we place few assumptions on the evolution of non-commodity dynamics, and assume that current values are measurable.
Reward. Throughout the paper, we will assume our full reward is the total discounted money earned over the (infinite) problem duration. This results in a stage-wise reward function that corresponds simply to the money earned in that time period, or
$$
R\left( {{s}^{t},{a}^{t}}\right) = {m}_{e}^{t} + {m}_{f}^{t}. \tag{10}
$$
Note that the sum of rewards to time $t$ is exactly ${m}^{t} - {m}^{0}$ , which corresponds to the money earned. It is typical in economics and finance to consider concave utility functions or risk metrics as opposed to the exact return $\left\lbrack {{25},{26}}\right\rbrack$ . However, such a reward structure does not admit a simple stage-wise reward decomposition as in the linear case. Thus, while handling this concavity is important, we leave it to future work.
### B.2 Network Architecture and RL Details
In this section we introduce the basic building blocks of our graph neural network architecture. Let us define with ${\mathbf{x}}_{i} \in {\mathbb{R}}^{{D}_{\mathbf{x}}}$ and ${\mathbf{e}}_{ji} \in {\mathbb{R}}^{{D}_{\mathbf{e}}}$ the ${D}_{\mathbf{x}}$ -dimensional vector of node features of node $i$ and the ${D}_{\mathrm{e}}$ -dimensional vector of edge features from node $j$ to node $i$ , respectively.
We define the update function of node features through the following message passing neural network (MPNN):
$$
{\mathbf{x}}_{i}^{\left( k\right) } = \mathop{\max }\limits_{{j \in {\mathcal{N}}^{ - }\left( i\right) }}{f}_{\theta }\left( {{\mathbf{x}}_{i}^{\left( k - 1\right) },{\mathbf{x}}_{j}^{\left( k - 1\right) },{\mathbf{e}}_{ji}}\right) , \tag{11}
$$
where $k$ indicates the $k$ -th layer of message passing in the GNN, with $k = 0$ indicating raw environment features, i.e., ${\mathbf{x}}_{i}^{\left( 0\right) } = {\mathbf{x}}_{i}$ , and where we use the element-wise max operator as the aggregation function in our proposed graph network.
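As an illustration, the max-aggregation update in (11) can be sketched in NumPy; `mpnn_layer` and the single linear map standing in for $f_\theta$ are hypothetical simplifications of the actual learned network:

```python
import numpy as np

def mpnn_layer(x, edges, edge_feats, W, b):
    """One message-passing layer in the spirit of Eq. (11): for each node i,
    x_i^(k) = max_{j in N^-(i)} f_theta(x_i^(k-1), x_j^(k-1), e_ji),
    where f_theta is a single linear map here (a stand-in for a small MLP)."""
    num_nodes = x.shape[0]
    out = np.full((num_nodes, W.shape[1]), -np.inf)
    for (j, i), e in zip(edges, edge_feats):   # edge (j, i) points from j to i
        msg = np.concatenate([x[i], x[j], e]) @ W + b  # f_theta(x_i, x_j, e_ji)
        out[i] = np.maximum(out[i], msg)               # element-wise max aggregation
    out[np.isinf(out).any(axis=1)] = 0.0               # nodes with no in-neighbors
    return out
```

In the real architecture $f_\theta$ is a learned MLP and the layer is applied $K$ times; the max aggregation is kept as written.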
We note that this network architecture can be used to define both the policy and the value function estimator, depending on the reinforcement learning algorithm of interest (e.g., actor-critic [27], value-based [28], etc.). As an example, in our implementation, we define two separate decoder architectures for the actor and critic networks of an Advantage Actor Critic (A2C) [29] algorithm. For the actor, we define the output of our policy network to represent the concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{v}}$ of a Dirichlet distribution, such that ${\mathbf{a}}_{t} \sim \operatorname{Dir}\left( {{\mathbf{a}}_{t} \mid \alpha }\right)$ , where the positivity of $\alpha$ is ensured by a Softplus nonlinearity. The critic, on the other hand, is characterized by a global sum-pooling performed after $K$ layers of MPNN; in this way, the critic computes a single value function estimate for the entire network by aggregating information across all nodes in the graph.
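A minimal sketch of the two decoder heads described above; `actor_head`, `critic_head`, and the plain linear maps are hypothetical stand-ins for the actual learned decoders:

```python
import numpy as np

def softplus(z):
    """Softplus nonlinearity, guaranteeing strictly positive outputs."""
    return np.log1p(np.exp(z))

def actor_head(node_embeddings, w):
    """Actor decoder sketch: map per-node embeddings to Dirichlet
    concentration parameters alpha in R_+^{N_v} (one per node)."""
    return softplus(node_embeddings @ w)

def critic_head(node_embeddings, w):
    """Critic decoder sketch: global sum-pooling over all nodes,
    followed by a linear map to a single scalar value estimate."""
    pooled = node_embeddings.sum(axis=0)
    return float(pooled @ w)

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))        # N_v = 4 nodes, 8-dim MPNN embeddings
alpha = actor_head(emb, rng.normal(size=8))
action = rng.dirichlet(alpha)        # a_t ~ Dir(a_t | alpha): a point on the simplex
value = critic_head(emb, rng.normal(size=8))
```

The sampled action is a distribution over nodes (it lies on the probability simplex), which matches the desired-state interpretation of the policy output.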
Exploration. In practice, we choose large penalty terms $d\left( {\cdot , \cdot }\right)$ to minimize greediness. Early in training, however, randomly initialized penalty terms can harm exploration. We found it sufficient to down-weight the penalty term early in training; this biases the inner action selection toward short-term rewards, resulting in (initially) greedier action selection. There are many further possibilities for exploiting random penalty functions to induce exploration, which we discuss in the next section.
Integer-valued flows. For several problem settings, it is desirable that the chosen flows be integer-valued. For example, in a transportation problem, we may wish to allocate some number of vehicles, which cannot be infinitely subdivided [5, 30]. There are several ways to introduce integer-valued constraints into our framework. First, because the RL agent is trained through policy gradient, and thus does not require a differentiable inner problem, we can simply introduce integer constraints into the lower-level problem ${}^{2}$ . However, solving integer-constrained problems is typically expensive in practice. An alternative is to apply a heuristic rounding operation to the output of the inner problem; again, because of the choice of gradient estimator, this operation need not be differentiable, and the RL policy learns to adapt to it. Thus, we generally recommend this strategy over directly imposing integer constraints in the inner problem.
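One concrete rounding heuristic of this kind is largest-remainder rounding, which preserves the total flow; the function name and scheme below are illustrative sketches, not the paper's exact choice:

```python
import numpy as np

def round_preserving_sum(flows):
    """Heuristic rounding of fractional flows to integers.
    Floors every flow, then distributes the leftover units to the entries
    with the largest fractional parts, so the total flow is preserved.
    (A hypothetical post-processing step; the RL policy can adapt to it.)"""
    flows = np.asarray(flows, dtype=float)
    floors = np.floor(flows)
    leftover = int(round(flows.sum() - floors.sum()))
    order = np.argsort(-(flows - floors))  # largest fractional parts first
    floors[order[:leftover]] += 1
    return floors.astype(int)
```

Because the gradient estimator is zeroth-order, this non-differentiable step can sit between the inner problem and the environment without any special treatment.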
## C Discussion and Algorithmic Components
In this section we discuss various elements of the proposed framework, highlight correspondences and design decisions, and discuss component-level extensions.
---
${}^{2}$ Note that several problems exhibit a total unimodularity property [31], for which the relaxed integer-valued problem is tight.
---
### C.1 Distance metric as value function
The role of the distance metric (and the generated goal next state) is to capture the value of future reward in the greedy one-step inner optimization problem. This is closely related to the value function in dynamic programming and reinforcement learning, which in expectation captures the sum of future rewards for a particular policy. Indeed, under moderate technical assumptions, our linear problem formulation with stochasticity yields convex expected cost-to-go (the negative of the value) [32,33].
There are several critical differences between our penalty term and a learned value function. First, a value function in a Markovian setting for a given policy is a function solely of state; for example, in the LCP, a value function would depend only on ${s}^{t + 1}$ . In contrast, our penalty term depends on ${\widehat{s}}^{t + 1}$ , which is the output of a policy that takes ${s}^{t}$ as input. Thus, the penalty term is a function of both the current and predicted next state. Given this, the penalty term is better understood as a local approximation of the value function, for which convex optimization is tractable, or as a form of state-action value function with a reduced action space (also referred to as a Q function).
The second major distinction between the penalty term and a value function is particular to reinforcement learning. Value functions in modern RL are typically learned via minimizing the Bellman residual [18], although there is disagreement on whether this is a desirable objective [34]. In contrast, our policy is trained directly via gradient descent on the total reward (potentially incorporating value function control variates). Thus, the objective for this penalty method is better aligned with maximizing total reward.
### C.2 Beyond a single-step inner problem
Our formulation so far has considered a bi-level formulation in which the RL policy outputs a desired state at the next timestep, ${\widehat{s}}^{t + 1}$ ; this is then used in the lower-level problem to select actions. There are two relaxations to this procedure that can be incorporated here.
First, the RL policy can output any future state, and direct optimization can be performed over any horizon: we may parameterize the RL policy to return ${\widehat{s}}^{t + k}$ for $k \geq 1$ , and then solve a multi-step optimization problem using the linear model. This is a strict generalization of our proposed method. The potential risks are the linear (in horizon) growth in the number of inner-problem variables, and poor agreement between the linear model and the true nonlinear dynamics. The primary reason we have not adopted the multi-step formulation as the main algorithm of this paper is that it requires modeling the dynamics of the non-commodity state variables: for example, it requires forecasting all constraint values, whereas our one-step formulation requires only knowledge at the current timestep. Forecasting of constraint values is closely linked to questions of (persistent) feasibility, which we do not consider in detail in this paper.
Second, stochasticity may be directly integrated into the lower-level problem. The standard formulation for stochastic model predictive control (or stochastic multi-stage optimization) is the scenario formulation [35], in which a tree of outcomes is constructed via sampling noise realizations ${}^{3}$ . Within the one-step bi-level formulation, sampling ${N}_{n}$ noise realizations results in ${N}_{n}$ values of the next state, ${s}_{i}^{t + 1}, i = 1,\ldots ,{N}_{n}$ within the inner problem. The empirical mean loss
$$
{\mathbb{E}}_{{s}^{t + 1}}\left\lbrack {d\left( {{\widehat{s}}^{t + 1},{s}^{t + 1}}\right) }\right\rbrack - R\left( {{s}^{t},{a}^{t}}\right) \approx \frac{1}{{N}_{n}}\mathop{\sum }\limits_{{i = 1}}^{{N}_{n}}d\left( {{\widehat{s}}^{t + 1},{s}_{i}^{t + 1}}\right) - R\left( {{s}^{t},{a}^{t}}\right) \tag{12}
$$
can then be minimized. We emphasize that the actions are the same for each noise realization; this is the so-called non-anticipativity constraint. For one step, this formulation does not meaningfully increase the number of decision variables, although it does increase computational cost. More importantly, multi-step optimization within the scenario-tree approach yields exponential growth in the number of decision variables, which rapidly results in intractability. We refer the reader to [35] for more details on scenario-based stochastic optimization.
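The empirical loss in (12) is straightforward to compute; the sketch below (with a placeholder L1 penalty standing in for $d$, and a hypothetical function name) is illustrative:

```python
import numpy as np

def scenario_loss(s_hat_next, sampled_next_states, reward, d=None):
    """Empirical version of Eq. (12): average the penalty d(s_hat, s_i^{t+1})
    over N_n sampled noise realizations (the action is shared across all
    scenarios, i.e., the non-anticipativity constraint), minus the reward."""
    if d is None:
        d = lambda a, b: float(np.abs(a - b).sum())  # illustrative L1 penalty
    penalties = [d(s_hat_next, s) for s in sampled_next_states]
    return float(np.mean(penalties)) - reward
```

Note the number of decision variables is unchanged; only the objective gains $N_n$ penalty terms.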
### C.3 Algorithmic alignment
The concept of algorithmic alignment refers to the fact that, although many neural network architectures have the capacity to represent a wide range of algorithms, not all networks are able to actually learn these algorithms. Intuitively, a network may learn and generalize better if it can represent a function (algorithm) "more easily." A notable example of this in the context of supervised learning is the relationship between MLPs and CNNs in computer vision, where MLPs are theoretically universal approximators yet struggle to achieve satisfying performance on most vision tasks. The difference in results of MLP-RL in Table 1 (2-hops) compared to Table 1 (3-hops, 4-hops) further confirms these concepts: the smaller dimensionality of the 2-hops environment leads to a smaller solution space for the MLPs, which are able to converge to relatively good policies. On the other hand, the 3-hops and 4-hops environments are characterized by a significant increase in the number of edges and nodes, leading to a more challenging search in policy space.
---
${}^{3}$ We note that non-sampling strategies such as moment-matching formulations are also possible, although we will not discuss these methods herein.
---
### C.4 Computational efficiency
Consider solving the full nonlinear control problem via direct optimization over a finite horizon ( $T$ timesteps), which corresponds to a model predictive control [36] formulation. How many total actions must be selected? The number of possible flows for a fully dense graph (worst case) is ${N}_{v}\left( {{N}_{v} - 1}\right)$ . In addition to this, there are $\mathop{\sum }\limits_{{i \in \mathcal{V}}}{N}_{e}\left( i\right)$ possible exchange actions; if we assume ${N}_{e}$ is the same for all nodes, this yields ${N}_{v}{N}_{e}$ possible actions. Finally, we have ${N}_{c}$ commodities. Thus, the worst-case number of actions to select is $T{N}_{c}{N}_{v}\left( {{N}_{v} + {N}_{e} - 1}\right)$ ; it is evident that for even moderate choices of each variable, the complexity of action selection in our problem formulation quickly grows beyond tractability.
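The worst-case count above can be checked with a one-line helper (`naive_action_count` is a hypothetical name):

```python
def naive_action_count(T, Nc, Nv, Ne):
    """Worst-case number of decision variables for direct T-step optimization:
    per commodity and timestep, Nv(Nv - 1) flows on a fully dense graph
    plus Nv * Ne exchange actions, i.e., Nv(Nv + Ne - 1) in total."""
    return T * Nc * Nv * (Nv + Ne - 1)

# Even modest sizes explode: 50 steps, 4 commodities, 100 nodes, 5 exchanges
# per node already yield millions of decision variables.
print(naive_action_count(T=50, Nc=4, Nv=100, Ne=5))
```

This growth motivates the bi-level decomposition, which keeps the learned policy's action dimension independent of the horizon.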
While moderately-sized problems may be tractable within the direct optimization setting, we aim to incorporate the impacts of stochasticity, nonlinearity, and uncertainty, which typically results in non-convexity. The reinforcement learning approach, in addition to being able to improve directly from data, reduces the number of actions required to those for a single step. If we were to directly parameterize the naive policy that outputs flows and exchanges, this would correspond to ${N}_{c}{N}_{v}\left( {{N}_{v} + {N}_{e} - 1}\right)$ actions. For even moderate values of ${N}_{c},{N}_{v},{N}_{e}$ , this can result in millions of actions. It is well-known that reinforcement learning algorithms struggle with high dimensional action spaces [37], and thus this approach is unlikely to be successful. In contrast, our bi-level formulation requires only ${N}_{c}$ actions for the learned policy, while additionally leveraging the beneficial inductive biases over short time horizons.
## D Related Work
Bi-level optimization-in which one optimization problem depends on the solution to another optimization problem, and are thus nested-has recently become an important topic in machine learning, reinforcement learning, and control [38-44]. Of particular relevance to our framework are methods that combine principled control strategies with learned components in a hierarchical way. Examples include using LQR control in the inner problem with learnable cost and dynamics $\left\lbrack {{41},{45},{46}}\right\rbrack$ ; learning sampling distributions in planning and control $\left\lbrack {{47} - {49}}\right\rbrack$ ; or learning optimization strategies or goals for optimization-based control [50, 51].
Numerous strategies for learning control with bi-level formulations have been proposed. A simple approach is to insert intermediate goals to train lower-level components, such as imitation [47]. This approach is inherently limited by the choice of the intermediate objective; if this objective does not strongly correlate with the downstream task, learning could emphasize unnecessary elements or miss critical ones. An alternate strategy, which we take in this work, is directly optimizing through an inner controller. A large body of work has focused on exploiting exact solutions to the gradient of (convex) optimization problems at fixed points $\left\lbrack {{41},{46},{52}}\right\rbrack$ . This allows direct backpropagation through optimization problems, allowing them to be used as a generic component in a differentiable computation graph (or neural network). Our approach leverages likelihood ratio gradients (equivalently, policy gradient), an alternate zeroth-order gradient estimator [53]. This enables easy differentiation through lower-level optimization problems without the technical machinery required by fixed-point differentiation.
## E Experiments
### E.1 Benchmarks
All RL modules were implemented using PyTorch [54], and the optimization problems were solved with the IBM CPLEX solver [55]. In our experiments, we compare the proposed framework with the following methods:
Heuristics. In this class of methods, we focus on measuring performance of simple, domain-knowledge-driven rebalancing heuristics.
1. Random policy: at each timestep, we sample the desired distribution from a Dirichlet prior with concentration parameter $\alpha = \left\lbrack {1,1,\ldots ,1}\right\rbrack$ . This benchmark provides a lower bound of performance by choosing desired goal states randomly.
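A sketch of this benchmark (`random_policy` is a hypothetical name):

```python
import numpy as np

def random_policy(num_nodes, rng):
    """Random benchmark: sample the desired commodity distribution from
    Dir(alpha = [1, ..., 1]), i.e., uniformly over the probability simplex."""
    return rng.dirichlet(np.ones(num_nodes))
```

With all concentration parameters equal to one, every desired goal state is equally likely, which is what makes this a meaningful lower bound.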
Learning-based. Within this class of methods, we focus on measuring how different architectures affect the quality of the solutions for the dynamic network control problem. For all methods, the A2C algorithm is kept fixed; thus, the differences lie solely in the neural network architecture.
2. MLP-RL: both the policy and the value function estimator are parametrized by feed-forward neural networks. In all our experiments, we use two layers of 32 hidden units and an output layer mapping to the output's support (e.g., a scalar value for the critic network). Through this comparison, we highlight the performance and flexibility of graph representations for network-structured data.
3. GCN-RL: In all our experiments, we use three layers of graph convolution with 32 hidden units and a linear output layer mapping to the output's support. See below for a broader discussion of graph convolution operators.
4. GAT-RL: In all our experiments, we use three layers of graph attention with 32 hidden units and a single attention head. The output is computed by a linear output layer mapping to the output's support. Together with GCN-RL, this model represents an approach based on graph convolutions rather than explicit message passing along the edges (as in MPNNs). Through this comparison, we argue in favor of explicit, pair-wise messages along the edges, as opposed to sole aggregation of node features within a neighborhood. Specifically, we argue in favor of the alignment between MPNNs and the kind of computations required to solve flow optimization tasks, e.g., propagation of travel times and selection of the best path among a set of candidates (max aggregation).
5. MPNN-RL (ours): we use three layers of MPNN with 32 hidden units, as defined in Section B.2, and a linear output layer mapping to the output's support.
MPC-based. Within this class of methods, we focus on measuring performance of MPC approaches that serve as state-of-the-art benchmarks for the dynamic network flow problem.
6. MPC-Oracle: we directly optimize the flow using a standard formulation of MPC [56]. Notice that although the embedded optimization is a linear program, it may not meet the computational requirements of real-time applications (e.g., obtaining a solution within several seconds) for large-scale networks.
### E.2 Environments
- Minimum cost flow through message passing. Given a single-source, single-sink network, we assume travel times to be constant over the episode and requirements (i.e., demand) to be sampled at each time step as $\rho = {10} + {\psi }_{i},{\psi }_{i} \sim \operatorname{Uniform}\left\lbrack {-2,2}\right\rbrack$ . Capacities ${u}_{ij}$ are fixed to a very high positive number, thus not representing a constraint in practice. Cost ${m}_{ij}$ is considered equal to the traversal time ${t}_{ij}$ . An episode is assumed to have a duration of 30 time steps and terminates when there is no more flow traversing the network. To present a variety of scenarios to the agent at training time, we sample random travel times for each new episode as ${t}_{ij} \sim$ Uniform $\left\lbrack {0,{10}}\right\rbrack$ and use the topologies shown in Fig. 1. In our experiments, we apply as many layers of message passing as hops from source to sink node in the graph, e.g., $K = 2$ and $K = 3$ in the 2-hops and 3-hops environment, respectively.
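The per-episode randomization described above can be sketched as follows (`sample_episode_params` is a hypothetical helper):

```python
import numpy as np

def sample_episode_params(edges, rng):
    """Per-episode randomization for the min-cost-flow environment:
    travel times t_ij ~ Uniform[0, 10], fixed within the episode;
    per-step demand rho = 10 + psi, psi ~ Uniform[-2, 2]."""
    travel_times = {e: rng.uniform(0.0, 10.0) for e in edges}
    demand = lambda: 10.0 + rng.uniform(-2.0, 2.0)  # resampled each time step
    return travel_times, demand
```

Costs equal traversal times and capacities are effectively unbounded in this environment, so only the quantities above need to be sampled.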
- Dynamic traversal times. To train our MPNN-RL, we select the 3-hops environment and generate travel times as follows for every episode: (i) sample random traversal times as ${t}_{ij} \sim$ Uniform $\left\lbrack {0,{10}}\right\rbrack$ ,(ii) for every time step, gradually change the traversal time as ${t}_{ij} = {t}_{ij} + \psi ,\psi \sim$ Uniform $\left\lbrack {-1,1}\right\rbrack$ .
- Capacity constraints. In this experiment, we focus on the 3-hops environment and assume a constant value ${\bar{f}}_{ij} = {20},\forall i, j \in \mathcal{V} : j \neq 7$ while we keep a high value for all the edges going into node 7 (i.e., the sink node) which would more easily generate infeasible scenarios. From an RL perspective, we add the following edge-level features:
- Edge-capacity ${\left\{ {\bar{f}}_{ij}^{t}\right\} }_{i, j \in \mathcal{V}}$ at the current time step $t$ .
- Accumulated flow ${\left\{ {f}_{ij}^{t}\right\} }_{i, j \in \mathcal{V}}$ on edge ${ij}$ .
- Multi-commodity. Let ${N}_{c}$ define the number of commodities to consider, indexed by $k$ . From an RL perspective, we extend our proposed policy graph neural network to represent an ${N}_{c}$ -dimensional Dirichlet distribution. Concretely, we define the output of the policy network to

Figure 1: Graph topologies used for the message passing experiments: 2-hops (left), 3-hops (center), 4-hops (right). The source and sink nodes are represented by the left-most and right-most nodes, respectively. Values in proximity of the edges represent traversal times.

Figure 2: Visualization of a trained instance of MPNN-RL on an environment with dynamic traversal times. We simulate a scenario where the optimal path changes three times (left, middle, and right) over the course of an episode. Shaded edges represent actions induced by the MPNN-RL.
represent the ${N}_{c} \times {N}_{v}$ concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{c} \times {N}_{v}}$ of a Dirichlet distribution over nodes for each commodity, such that ${\mathbf{a}}_{t} \sim \operatorname{Dir}\left( {{\mathbf{a}}_{t} \mid \alpha }\right)$ . In other words, to extend our approach to the multi-commodity setting, we define a multi-head policy network with one head per commodity. In our experiments, we train our multi-head agent on the topology shown in Fig. 5, where we assume two parallel commodities: commodity A going from node 0 to node 10, and commodity B going from node 0 to node 11. We choose this topology so that the only way to solve the scenario is to discover distinct behaviours between the two network heads (i.e., the policy head controlling flow for commodity A needs to go up or it won't get any reward, and vice versa for commodity B).
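A minimal sketch of this multi-head decoding, assuming (as an illustrative simplification) that the per-commodity Dirichlet heads are sampled independently:

```python
import numpy as np

def multi_head_actions(alpha, rng):
    """Multi-commodity decoding sketch: alpha has shape (N_c, N_v); each row
    is the concentration vector of one Dirichlet head, so each commodity
    receives its own desired distribution over the N_v nodes."""
    return np.stack([rng.dirichlet(a) for a in alpha])
```

Each row of the result is a point on the simplex, i.e., a desired distribution for that commodity.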
- Computational analysis. In this experiment, we generate different versions of the 3-hops environment, whereby different environments are characterized by intermediate layers with increasing number of nodes and edges. The results are computed by applying the pre-trained MPNN-RL agent on the original 3-hops environment (i.e., characterized by 8 nodes in the graph). In light of this, Figure 6 showcases a promising degree of transfer and generalization among graphs of different dimensions.

Figure 3: Visualization of a trained instance of MPNN-RL on an environment with dynamic topology. We simulate a scenario where the optimal path changes over the course of an episode because of the addition of a new path. Shaded edges represent actions induced by the MPNN-RL.

Figure 4: Visualization of the MPNN-RL policy on the capacity constrained environment. (Top) The resulting flow ${f}_{ij}$ on the edges $0 \rightarrow 1,0 \rightarrow 2,0 \rightarrow 3$ . (Center) The accumulated flow on the same edges compared to the fixed capacity ${\bar{f}}_{ij} = {20}$ , represented as a dashed horizontal line. (Bottom) The desired distribution described by the MPNN-RL policy.

Figure 5: Visualization of the multi-commodity environment. (Left) The topology considered during our experiments. (Center) A visualization of the policy for the first commodity A. (Right) A visualization of the policy for the second commodity B.

Figure 6: Comparison of computation times between learning-based (blue) and control-based (orange) approaches. Green triangles represent the percentage performance of our RL framework compared to the oracle model.
papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_tex/Initial_manuscript.tex
§ GRAPH REINFORCEMENT LEARNING FOR NETWORK CONTROL VIA BI-LEVEL OPTIMIZATION
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
Dynamic network flow models have been extensively studied and widely used in the past decades to formulate many problems with great real-world impact, such as transportation, supply chain management, power grid control, and more. Within this context, time-expansion techniques currently represent a generic approach for solving control problems over dynamic networks. However, the complexity of these methods does not allow traditional approaches to scale to large networks, especially when these need to be solved recursively over a receding horizon (e.g., to yield a sequence of actions in model predictive control). Moreover, tractable optimization-based approaches are limited to simple linear deterministic settings, and are not able to handle environments with stochastic, non-linear, or unknown dynamics. In this work, we present dynamic network flow problems through the lens of reinforcement learning and propose a graph network-based framework that can handle a wide variety of problems and learn efficient algorithms without significantly compromising optimality. Instead of a naive and poorly-scalable formulation, in which agent actions (and thus network outputs) consist of actions on edges, we present a two-phase decomposition: in the first phase, an RL agent specifies desired outcomes of the actions; in the second, the problem structure is exploited to solve a convex optimization problem that achieves (as closely as possible) these desired outcomes. This formulation leads to dramatically improved scalability and performance. We further highlight a collection of features that are potentially desirable to system designers, investigate design decisions, and present experiments showing the utility, scalability, and flexibility of our framework.
§ 1 INTRODUCTION
Many economically critical real-world systems are well-modelled through the lens of control on graphs. Power generation [1-3]; road, rail, and air transportation systems [4, 5]; complex manufacturing systems, supply chain, and distribution networks [6, 7]; telecommunication networks [8-10]; and many other systems are fundamentally the problem of controlling flows of products, vehicles, or other quantities on graph-structured networks. Traditionally, these problems are approached through the definition of a dynamic network flow model (DNF) [11, 12]. Within this class of problems, Ford and Fulkerson [13, 14] proposed a generic approach, showing how one can use time-expansion techniques to (i) convert dynamic networks with discrete time horizon into static networks, and (ii) solve the problem using algorithms developed for static networks. However, this approach leads to networks that grow exponentially in the input size of the problem, thus not allowing traditional methods to scale to large networks. Moreover, the design of good heuristics or approximation algorithms for network flow problems often requires significant specialized knowledge and trial-and-error.
In this paper, we argue that data-driven strategies have the potential to automate this challenging, tedious process, and learn efficient algorithms without compromising optimality. To do so, we propose a graph network-based reinforcement learning framework that can handle a wide variety of network control problems. Specifically, we introduce a bi-level formulation that leads to dramatically improved scalability and performance by combining the strengths of mathematical optimization and learning-based approaches.
§ 2 PROBLEM SETTING: DYNAMIC NETWORK CONTROL
To outline our problem formulation, we first define the linear problem, which is a classic convex problem formulation. We will then define a nonlinear, dynamic, non-convex problem setting that better corresponds to real-world instances. Much of the classical flow control literature and practice substitute the former linear problem for the latter nonlinear problem to yield tractable optimization problems [15-17]; we leverage the linear problem as an important algorithmic primitive. We consider the control of ${N}_{c}$ commodities on graphs, for example, vehicles in a transportation problem. A graph $\mathcal{G} = \{ \mathcal{V},\mathcal{E}\}$ is defined as a set $\mathcal{V}$ of ${N}_{v}$ nodes, and a set $\mathcal{E}$ of ${N}_{e}$ ordered pairs of nodes $(i, j)$ called edges, each described by a traversal time ${t}_{ij}$ . We use ${\mathcal{N}}^{ + }\left( i\right) ,{\mathcal{N}}^{ - }\left( i\right) \subseteq \mathcal{V}$ for the set of nodes having edges pointing away from or toward node $i$ , respectively. We use ${s}_{i}^{t}\left( k\right) \in \mathbb{R}$ to denote the quantity of commodity $k$ at node $i$ and time $t$ .${}^{1}$
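To make these definitions concrete, here is a minimal Python sketch (our own illustration; the class and attribute names are hypothetical, not from the paper) of a graph with traversal times and the neighbor sets $\mathcal{N}^{+}(i)$ and $\mathcal{N}^{-}(i)$:

```python
from collections import defaultdict

# Minimal sketch of the graph objects defined above: nodes V, directed
# edges E with traversal times t_ij, and the neighbor sets N+(i) / N-(i)
# of nodes that node i points to / that point to node i.
class FlowGraph:
    def __init__(self, num_nodes, edges):
        # edges: dict mapping (i, j) -> traversal time t_ij
        self.nodes = list(range(num_nodes))
        self.t = dict(edges)
        self.out_nbrs = defaultdict(set)   # N+(i)
        self.in_nbrs = defaultdict(set)    # N-(i)
        for (i, j) in edges:
            self.out_nbrs[i].add(j)
            self.in_nbrs[j].add(i)

g = FlowGraph(3, {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 5.0})
```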
The Linear Network Control Problem. Within the linear model, our commodity quantities evolve in time as
$$
{s}_{i}^{t + 1} = {s}_{i}^{t} + {f}_{i}^{t} + {e}_{i}^{t},\;\forall i \in \mathcal{V} \tag{1}
$$
where ${f}_{i}^{t}$ denotes the change due to flow of commodities along edges and ${e}_{i}^{t}$ denotes the change due to exchange between commodities at the same graph node. We refer to this expression as the conservation of flow. We also accrue money as
$$
{m}^{t + 1} = {m}^{t} + {m}_{f}^{t} + {m}_{e}^{t}, \tag{2}
$$
where ${m}_{f}^{t},{m}_{e}^{t} \in \mathbb{R}$ denote the money gained due to flows and exchanges, respectively. Money can also be replaced with any other form of scalar reward, although it may be subject to constraints (e.g., non-negativity) and thus differs from the notion of reward in the RL problem. Our overall problem formulation will typically be to control flows and exchanges so as to maximize money over one or more steps, subject to additional constraints such as flow limitations through a particular edge. Please refer to Appendix A for a formal treatment of flow and exchange quantities, together with practical constraints within network control problems.
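A minimal sketch of one step of these linear dynamics, Eqs. (1) and (2), assuming flows and exchanges are given (our illustration, not the authors' code):

```python
import numpy as np

# One step of the linear dynamics: s^{t+1} = s^t + f^t + e^t at every node
# (conservation of flow, Eq. (1)) and money accrual m^{t+1} = m^t + m_f^t
# + m_e^t (Eq. (2)).
def step(s, f, e, m, m_f, m_e):
    # s, f, e: arrays of shape (N_v, N_c) -- quantity/flow-change/exchange
    # per node and commodity; m, m_f, m_e: scalars.
    s_next = s + f + e          # Eq. (1)
    m_next = m + m_f + m_e      # Eq. (2)
    return s_next, m_next

s = np.array([[2.0], [0.0]])    # two nodes, one commodity
f = np.array([[-1.0], [1.0]])   # ship one unit from node 0 to node 1
e = np.zeros_like(s)            # no exchanges
s_next, m_next = step(s, f, e, m=0.0, m_f=3.0, m_e=0.0)
```

Note that a pure flow term $f$ sums to zero across nodes, so the total commodity mass is conserved.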
The Nonlinear Dynamic Network Control Problem. The previous subsection presented a linear problem formulation that yields a convex optimization problem in the decision variables (the chosen flow and exchange values). However, the formulation is limited by the assumption of linearity, and thus fails to characterize a number of elements typical of real-world systems (please refer to Appendix A for a more complete treatment). Crucially, these nonlinear, time-varying, stochastic, or unknown elements lead to severe difficulties in applying the convex formulation derived in the previous subsection. A common approach is to solve a linearized version of the nonlinear problem at each timestep, which is a form of model predictive control (MPC), although this essentially discards some elements of the problem to achieve computational tractability. In this paper, we focus on solving the nonlinear problem (reflecting real, highly general problem statements) via a bilevel optimization approach, wherein the linear problem (which has been shown to be extremely useful in practice) is used as an inner control primitive.
§ 3 METHODOLOGY: THE BI-LEVEL FORMULATION
In this section we describe the bi-level formulation that is the primary contribution of this paper. We further introduce a more formal Markov decision process (MDP) for our problem setting, together with a discussion on practical elements for real-world problem formulations in Appendix B.
The Bi-Level Formulation. We consider a discounted infinite-horizon MDP $\mathcal{M} = \left( {\mathcal{S},\mathcal{A},P,R,\gamma }\right)$ . Here, ${s}^{t} \in \mathcal{S}$ is the state and ${a}^{t} \in \mathcal{A}$ is the action, both at time $t$ . The state in this setting comprises commodity values at nodes, as well as other available information; actions correspond to the aforementioned decision variables. The dynamics $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \left\lbrack {0,1}\right\rbrack$ are probabilistic, with $P\left( {{s}^{t + 1} \mid {s}^{t},{a}^{t}}\right)$ denoting a conditional distribution over ${s}^{t + 1}$ . The reward function $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is real-valued, and not limited to strictly positive or negative rewards. Finally, we write the discount factor as $\gamma$ , as is typical in the infinite-horizon RL formulation, although it is straightforward to instead consider a finite-horizon setting. Please refer to Appendix B.1 for further treatment of the MDP.
The overall goal of the reinforcement learning problem setting is to find a policy ${\widetilde{\pi }}^{ * } \in \widetilde{\Pi }$ (where $\widetilde{\Pi }$ is the space of realizable Markovian policies) such that ${\widetilde{\pi }}^{ * } \in \arg \mathop{\max }\limits_{{\widetilde{\pi } \in \widetilde{\Pi }}}{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack$ ,
${}^{1}$ We consider several reduced views over these quantities, and maintain several notational rules. We write ${s}_{i}^{t} \in {\mathbb{R}}^{{N}_{c}}$ to denote the vector of all commodities; we write ${s}^{t}\left( k\right) \in {\mathbb{R}}^{{N}_{v}}$ to denote the vector of commodity $k$ at all nodes; we write ${s}_{i}\left( k\right) \in {\mathbb{R}}^{T}$ to denote commodity $k$ at node $i$ for all times $t$ . We can also apply any combination of these notation rules, yielding for example $s \in {\mathbb{R}}^{T \times {N}_{c} \times {N}_{v}}$ .
where $\tau = \left( {{s}^{0},{a}^{0},{s}^{1},{a}^{1},\ldots }\right)$ denotes the trajectory of states and actions. This policy formulation requires specifying a distribution over all flow/exchange actions, which may be an extremely large space. We instead consider a bi-level formulation
$$
{\pi }^{ * } \in \underset{\pi \in \Pi }{\arg \max }{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack \;\text{ s.t. }{a}^{t} = \operatorname{LCP}\left( {{\widehat{s}}^{t + 1},{s}^{t}}\right) \tag{3}
$$
where we consider a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ , which maps from the current state to a goal next state (or subset of the state, such as commodity values only). This goal next state is used in the linear control problem $\left( {\operatorname{LCP}\left( {\cdot , \cdot }\right) }\right)$ , which leverages a (slightly modified) one-step version of the linear problem formulation of Section 2 to map from desired next state to action. Thus, the resulting formulation is a bi-level optimization problem, whereby the policy $\widetilde{\pi }$ is the composition of the policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ and the solution to the linear control problem. Specifically, given a sample of ${\widehat{s}}^{t + 1}$ from the stochastic policy, we select concrete flow and exchange actions by solving the linear control problem, defined as
$$
\begin{aligned}
\underset{{a}^{t}}{\arg \min }\;\; & d\left( {\widehat{s}}^{t + 1},{s}^{t + 1}\right) - R\left( {s}^{t},{a}^{t}\right) & \text{(4a)} \\
\text{s.t.}\;\; & \text{conservation of flow (1); net flow (5); flow reward (6);} & \text{(4b)} \\
& \text{exchange conditions (7); other constraints, e.g., (8) or (9)} & \text{(4c)}
\end{aligned}
$$
where $d\left( {\cdot , \cdot }\right)$ is a chosen convex metric that penalizes deviation from the desired next state. The resulting problem, consisting of a convex objective subject to linear constraints, is convex and thus may be solved easily and inexpensively to choose actions ${a}^{t}$ , even for very large problems.
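As a sketch of how such an inner problem can be solved cheaply, the following poses a stripped-down LCP as a linear program, assuming $d$ is the $\ell_1$ norm, dropping the reward term, and keeping only flow bounds (our simplification; the paper's full version also enforces constraints (5)-(9)):

```python
import numpy as np
from scipy.optimize import linprog

# Stripped-down inner linear control problem: given current state s, goal
# state s_hat, and node-edge incidence matrix B, choose edge flows f with
# next state s' = s + B f as close as possible to s_hat in l1 norm.
def lcp(s, s_hat, B, f_max):
    n_v, n_e = B.shape
    # Variables x = [f, u], with auxiliary u_i >= |s_hat_i - (s + B f)_i|.
    c = np.concatenate([np.zeros(n_e), np.ones(n_v)])
    I = np.eye(n_v)
    A_ub = np.block([[-B, -I],    # (s_hat - s) - B f <= u
                     [ B, -I]])   # -(s_hat - s) + B f <= u
    b_ub = np.concatenate([s - s_hat, s_hat - s])
    bounds = [(0.0, f_max)] * n_e + [(0.0, None)] * n_v
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n_e]

B = np.array([[-1.0], [1.0]])    # one edge 0 -> 1
f = lcp(s=np.array([1.0, 0.0]), s_hat=np.array([0.0, 1.0]), B=B, f_max=2.0)
```

Here the unique optimum ships the single unit along the edge (f = 1), exactly reaching the goal state.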
As is standard in reinforcement learning, we aim to solve this problem by learning the policy from data. This may be in the form of online learning [18] or learning from offline data [19]. There are large bodies of work on both problems, and our presentation will generally aim to be as agnostic as possible to the underlying reinforcement learning algorithm used. Of critical importance is the fact that the majority of reinforcement learning algorithms use likelihood ratio gradient estimation (typically referred to as the REINFORCE gradient estimator in RL [20]), which does not require path-wise backpropagation through the inner problem.
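The following toy sketch (our own illustration, unrelated to the paper's experiments) shows the likelihood-ratio estimator on a one-dimensional Gaussian policy with a black-box reward; note that the reward is never differentiated, mirroring why no backpropagation through the inner problem is needed:

```python
import numpy as np

# Likelihood-ratio (REINFORCE) gradient estimator:
#   grad_theta E[R] ~= mean over samples of (grad_theta log pi(a)) * R(a).
# The reward R is treated as a black box.
rng = np.random.default_rng(0)
theta = 0.0                                 # policy mean, pi(a) = N(a; theta, 1)
a = rng.normal(theta, 1.0, size=200_000)    # sampled actions
R = -(a - 2.0) ** 2                         # black-box reward, never differentiated
grad_log_pi = a - theta                     # d/dtheta log N(a; theta, 1)
grad_est = np.mean(grad_log_pi * R)         # Monte Carlo gradient estimate
```

For this toy reward the analytic gradient at theta = 0 is 4, and the Monte Carlo estimate converges to it as the sample count grows.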
We also note that our formulation assumes access to a model (the linear problem) that is a reasonable approximation of the true dynamics over short horizons. This short-term correspondence is central to our formulation: we exploit exact optimization when it is useful, and otherwise defer the long-term impacts of the nonlinearity to the learned policy. We assume this model is known in our experiments, but it could also be identified independently. Please see Appendices C.1, C.2, and C.4 for a broader discussion.
Network Architecture. To exploit the network structure of the problem we introduce a policy graph neural network architecture based on message passing neural networks [21] (Appendix B.2). As introduced in this section, the goal of RL is to learn a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ mapping to goal next states. Concretely, to obtain a valid probability density over next states, we define the output of our policy network to represent the concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{v}}$ of a Dirichlet distribution, such that ${\widehat{s}}^{t + 1} \sim \operatorname{Dir}\left( {{\widehat{s}}^{t + 1} \mid \alpha }\right)$ , although alternate output formulations are possible.
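A minimal sketch of this Dirichlet output head (our illustration; the softplus mapping from raw network outputs to positive concentration parameters is an assumption):

```python
import numpy as np

# Dirichlet policy head: the network outputs one raw score per node, a
# softplus maps them to positive concentrations alpha, and the goal state
# s_hat^{t+1} is a sample from Dir(alpha) -- a point on the probability
# simplex, i.e., a normalized allocation of commodity mass over nodes.
rng = np.random.default_rng(0)
raw_scores = np.array([1.5, -0.3, 0.8, 2.1])   # one score per node
alpha = np.log1p(np.exp(raw_scores))           # softplus -> alpha > 0
s_hat = rng.dirichlet(alpha)                   # sampled goal next state
```

Sampling from a Dirichlet guarantees the goal state is non-negative and sums to one, which is convenient when the policy specifies relative commodity allocations.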
§ 4 EXPERIMENTS
In this section, we compare against a number of benchmarks on an instance of network control with great real-world impact: the minimum cost flow problem. Within this context, the goal is to control commodities so as to move them from one or more source nodes to one or more sink nodes in the minimum time possible. Appendix E provides further details on both benchmarks and environments.
Minimum cost flow through message passing. In this first experiment, we consider 3 different environments (Fig. 1), such that different topologies enforce a different number of required hops of message passing between source and sink nodes to select the best path. Results in Table 1 (2-hop, 3-hop, 4-hop) show how MPNN-RL is able to achieve at least 87% of oracle performance. Table 1 further shows how agents based on graph convolutions (i.e., GCN [22], GAT [23]) fail to learn an effective flow optimization strategy. Following Xu et al. [24], we attribute this to the algorithmic alignment between the computational structure of MPNNs and the kind of computations needed to solve traditional network optimization problems (see Appendix C.3 for further discussion).
Dynamic traversal times. In this experiment, we define time-dependent traversal times. In Fig. 2 and Table 1 (Dyn tt) we measure results on a dynamic network characterized by two change-points, i.e., time steps where the optimal path changes because of a change in traversal times. Results show how the proposed MPNN-RL is able to achieve above ${99}\%$ of oracle performance.
Table 1: Average performance across multiple environments over 100 test episodes
| Environment | Metric | Random | MLP-RL | GCN-RL | GAT-RL | MPNN-RL (ours) | Oracle |
|---|---|---|---|---|---|---|---|
| 2-hops | Avg. Reward | 63 | 387 | 201 | 146 | 576 | 642 |
| | % Oracle | 9.9% | 60.2% | 31.3% | 22.9% | 89.7% | - |
| 3-hops | Avg. Reward | 1013 | 1084 | 1385 | 1257 | 1803 | 2014 |
| | % Oracle | 50.3% | 53.8% | 68.7% | 62.4% | 89.5% | - |
| 4-hops | Avg. Reward | 2033 | 2185 | 2303 | 2198 | 2807 | 3223 |
| | % Oracle | 63.1% | 67.8% | 71.4% | 68.2% | 87.1% | - |
| Dyn tt | Avg. Reward | -546 | -18 | 437 | 400 | 2306 | 2327 |
| | % Oracle | -23.4% | -0.7% | 18.7% | 17.1% | 99.1% | - |
| Dyn top | Avg. Reward | 810 | N/A | 1016 | 827 | 1599 | 1904 |
| | % Oracle | 42.5% | N/A | 53.4% | 43.4% | **83.9%** | - |
| Capacity | Avg. Reward | 1495 | 1498 | 1557 | 1503 | 2145 | 2389 |
| | % Oracle | 62.6% | 62.7% | 65.2% | 62.9% | 89.8% | - |
| | Success Rate | 82% | 82% | 87% | 80% | 87% | 88% |
| Multi-com | Avg. Reward | 2191 | 4045 | 3278 | 3206 | 6986 | 9701 |
| | % Oracle | 22.5% | 41.7% | 33.8% | 33.0% | 72.0% | - |
Dynamic topology. In this experiment we assume a time-dependent topology, i.e., nodes and edges can be dropped or added during an episode. This case is substantially different from what most traditional approaches are able to handle: the locality of MPNN agents, together with the one-step implicit planning of RL, enables our framework to deal with multiple graph configurations during the same episode. Fig. 3 and Table 1 (Dyn top) show how MPNN-RL achieves 83.9% of oracle performance, clearly outperforming the other benchmarks. Crucially, these results highlight how agents based on MLPs result in highly inflexible network controllers that are limited to a fixed topology.
Capacity constraints. In this experiment, we relax the assumption that capacities ${\bar{f}}_{ij}$ are always able to accommodate any flow on the graph. Compared to previous sections, the lower capacities introduce the possibility of infeasible states. To measure this, the Success Rate computes the percentage of episodes that terminated successfully. Results in Table 1 (Capacity) highlight how MPNN-RL is able to achieve ${89.8}\%$ of oracle performance while successfully terminating ${87}\%$ of episodes. Qualitatively, Fig. 4 shows a visualization of the policy for a specific test episode. The plots show how MPNN-RL is able to learn the effects of capacity on the optimal strategy by allocating flow to a different node when the corresponding edge is approaching its capacity limit.
Multi-commodity. In this scenario, we extend the current architecture to deal with multiple commodities and source-sink combinations. Results in Table 1 (Multi-com) and Fig. 5 show how MPNN-RL effectively recovers distinct policies for each policy head, and is thus able to successfully operate multi-commodity flows within the same network.
Computational analysis. We study the computational cost of MPNN-RL compared to MPC-based solutions. As shown in Fig. 6, we compare the time necessary to compute a single network flow decision. We do so across varying dimensions of the underlying graph, ranging from 10 up to 400 nodes. As verified by this experiment, learning-based approaches exhibit computational complexity linear in the number of nodes and graph connectivity, without significant decay in performance.
§ 5 OUTLOOK AND LIMITATIONS
Research in network flow models, in both theory and practice, is largely scattered across the control, management science, and optimization literature, potentially hindering scientific progress. In this work, we propose a general framework that could enable learning-based approaches to help address the open challenges in this space: handling nonlinear dynamics and scalability, among others. In the hope of fostering a unification of tools among the reinforcement learning and network control communities, we aimed to (i) keep the presentation as agnostic as possible, and (ii) showcase the versatility of our framework through numerous controlled experiments. However, what we present here should, in our opinion, be considered exciting preliminary results aiming to gather more traction among the ML community towards the solution of hugely impactful real-world problems in the field of network control. Crucially, before learning-based frameworks can be considered a concrete alternative to current standards, this research must be extended along several promising directions toward large-scale applications.
papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_md/Initial_manuscript.md
# Graph-Time Convolutional Autoencoders
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
We introduce the graph-time convolutional autoencoder (GTConvAE), a novel spatiotemporal architecture tailored to unsupervised learning for multivariate time series on networks. The GTConvAE leverages product graphs to represent the time series and a principled joint spatiotemporal convolution over this product graph. Instead of fixing the product graph at the outset, we make it parametric to attend to the spatiotemporal coupling for the task at hand. On top of this, we propose temporal downsampling in the encoder to improve the receptive field in a spatiotemporal manner without affecting the network structure; in the decoder, we consider the opposite upsampling operator. We prove that GTConvAEs with graph integral Lipschitz filters are stable to relative network perturbations, ultimately showing the role of the different components in the encoder and decoder. Numerical experiments for denoising and anomaly detection in solar and water networks corroborate our findings and showcase the effectiveness of the GTConvAE compared with state-of-the-art alternatives.
## 1 Introduction
Learning unsupervised representations from spatiotemporal network data is commonly encountered in applications concerning multivariate data denoising [1], anomaly detection [2], missing data imputation [3], and forecasting [4], to name just a few. The challenge is to develop models that jointly capture the spatiotemporal dependencies in a computation- and data-efficient manner while remaining tractable enough to understand the role played by the network structure and the dynamics over it. The autoencoder family of functions is of interest in this setting, but vanilla spatiotemporal forms [5-7] that ignore the network structure suffer from the well-known curse of dimensionality and lack inductive learning capabilities [8].
Upon leveraging the network as an inductive bias [9], graph-time autoencoders have been recently developed. These approaches are typically composed of two interleaving modules: one capturing the spatial dependencies via graph neural networks (GNNs) [10] and one capturing the temporal dependencies via temporal CNN or LSTM networks. For example, the work in [1] uses an edge-varying GNN [11] followed by a temporal convolution for motion denoising. The work in [12] considers LSTMs and graph convolutions for variational spatiotemporal autoencoders, which have been further investigated in [3, 13], respectively, for spatiotemporal data imputation as a graph-based matrix completion problem and for dynamic topologies. Graph-time autoencoders over dynamic topologies have also been investigated in [14, 15]. Lastly, [4] embeds the temporal information into the edges of a graph and develops an autoencoder over this graph for forecasting purposes.
By working disjointly first on the graph and then on the temporal dimension of the graph embeddings, these approaches fail to capture the joint spatiotemporal dependencies present in the raw data. It is also challenging to analyze their theoretical properties and to attribute to what extent the benefit comes from one module over the other. This aspect has been investigated for supervised spatiotemporal learning via GNNs [16-21] but not for autoencoders. The two works elaborating on this are [2] and [22]. The work in [2] replicates the graph over time via the Cartesian product principle [23] and uses an order-one graph convolution [24] to learn spatiotemporal embeddings that are fed into an LSTM module to improve the temporal memory, ultimately giving more importance to the temporal dimension of the latent representation. Differently, [25] proposed a variational graph-time autoencoder whose encoder is based on [17] and whose decoder is a multi-layer perceptron; hence, it is suitable only for topological tasks such as dynamic link prediction, but not for tasks concerning time series over networks such as denoising or anomaly detection.
In this paper, we propose a GTConvAE that, differently from [2], jointly captures the spatiotemporal coupling both in the raw data and in the intermediate higher-level representations. The GTConvAE operates over a parametric product graph [26] to attend to the spatiotemporal coupling for the task at hand rather than fixing it at the outset. Differently from [17], the GTConvAE has a symmetric structure with graph-time convolutions in both the encoder and decoder, making it suitable for tasks concerning network time series. We also study the capability of the GTConvAE to transfer learning across different networks, which is of importance as practical topologies differ from the models used during training (e.g., because of model uncertainty, perturbations, or dynamics). The latter has been studied for traditional [27-29] and graph-time GNN models [20, 26, 30] but not for graph-time autoencoders.
Our contribution in this paper is twofold. First, we propose a symmetric graph-time convolutional autoencoder that jointly captures the spatiotemporal coupling in the data, suited for tasks concerning multivariate time series over networks. The GTConvAE represents the time series as a graph signal over product graphs and uses the latter as an inductive bias to learn unsupervised representations. The product graph is parametric to attend to the coupling for the specific task, and it generalizes the popular choices of product graphs [31]. We also propose a temporal downsampling/upsampling in the encoder/decoder to increase the spatiotemporal receptive field without affecting the network structure, hence preserving the inductive bias. Second, we prove that the GTConvAE is stable to relative perturbations of the spatial graph, highlighting the role played by the encoder, decoder, parametric product graph, convolutional filters, and downsampling/upsampling rate. Numerical experiments on denoising and anomaly detection over solar and water networks corroborate our findings and show a competitive performance compared with more involved state-of-the-art alternatives.
The rest of this paper is organized as follows. Section 2 formulates the GTConvAE model and Section 3 analyzes its theoretical properties. Numerical experiments are presented in Section 4 and conclusions in Section 5. The proofs are collected in the appendix.
## 2 Graph-Time Convolutional Autoencoders
The GTConvAE learns representations from $N$ -dimensional multivariate time series ${\mathbf{x}}_{t} \in {\mathbb{R}}^{N}$ , $t = 1,\ldots , T$ , collected in a matrix $\mathbf{X} \in {\mathbb{R}}^{N \times T}$ . These time series have a spatial network structure represented by a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ composed of $N$ nodes $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ and $M$ edges. The $n$ -th row of $\mathbf{X}$ contains the time series ${\mathbf{x}}^{n} = {\left\lbrack {x}_{1}\left( n\right) ,\ldots ,{x}_{T}\left( n\right) \right\rbrack }^{\top }$ on node ${v}_{n}$ and the $t$ -th column a graph signal ${\mathbf{x}}_{t} = {\left\lbrack {x}_{t}\left( 1\right) ,\ldots ,{x}_{t}\left( N\right) \right\rbrack }^{\top }$ at timestamp $t$ [32, 33]. For example, the time series could be nodal pressures measured over junction nodes in a water distribution network, while the pipe connections rule the spatial structure. The representations learned from the tuple $\{ \mathcal{G},\mathbf{X}\}$ can then be used, among others, for anomaly detection [5], denoising dynamic data over graphs [1], and missing data completion [3].
The GTConvAE follows the standard encoder-decoder structure [34], but each module jointly captures the spatiotemporal structure in the data. We denote the GTConvAE as
$$
\widehat{\mathbf{X}} = \operatorname{GTConvAE}\left( {\mathbf{X},\mathcal{G};\mathcal{H}}\right) \mathrel{\text{:=}} \operatorname{DEC}\left( {\operatorname{ENC}\left( {\mathbf{X},\mathcal{G};{\mathcal{H}}_{e}}\right) ,\mathcal{G};{\mathcal{H}}_{d}}\right)
$$
where the encoder $\operatorname{ENC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{e}}\right)$ and decoder $\operatorname{DEC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{d}}\right)$ are non-linear parametric functions and where the set $\mathcal{H} = {\mathcal{H}}_{e} \cup {\mathcal{H}}_{d}$ collects all parameters. The encoder takes as input the graph $\mathcal{G}$ and the time series $\mathbf{X}$ and produces higher-level representations $\mathbf{Z} \in {\mathbb{R}}^{N \times {T}_{e}}$ . These representations are built in a layered manner, where each layer comprises: i) a joint graph-time convolutional filter to capture the spatiotemporal dependencies in a principled manner; ii) a temporal downsampling module to increase the receptive field without affecting the network structure; and iii) a pointwise nonlinearity to obtain more complex representations. The decoder has a mirrored structure w.r.t. the encoder, taking as input $\mathbf{Z}$ and outputting an estimate $\widehat{\mathbf{X}}$ of the input. The model parameters are estimated end-to-end by minimizing a spatiotemporal regularized reconstruction loss $\mathcal{L}\left( {\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}}\right)$ .
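A minimal numerical sketch of one encoder layer under our own assumptions (an identity product-graph GSO as a placeholder, a polynomial graph-time filter, downsampling by a factor of 2, and a ReLU nonlinearity):

```python
import numpy as np

# One encoder layer: a graph-time convolutional filter
# y = sum_k h_k * S_prod^k x over the product-graph GSO, temporal
# downsampling (keep every other time step; the spatial graph is
# untouched), and a pointwise ReLU.
def encoder_layer(x, S_prod, h, N, T):
    y = sum(hk * np.linalg.matrix_power(S_prod, k) @ x
            for k, hk in enumerate(h))
    Y = y.reshape(T, N).T      # undo column-vectorization: N x T matrix
    Y = Y[:, ::2]              # temporal downsampling only
    return np.maximum(Y, 0.0)  # pointwise nonlinearity

N, T = 3, 4
S_prod = np.eye(N * T)         # placeholder product-graph GSO (assumption)
Z = encoder_layer(np.ones(N * T), S_prod, h=[0.5, 0.25], N=N, T=T)
```

With the identity GSO the filter reduces to the scalar 0.5 + 0.25 = 0.75, so the layer output is a 3 x 2 matrix of 0.75s; a real GSO mixes values across spacetime neighbors instead.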
### 2.1 Product Graph Representation of Network Time Series
The GTConvAE uses product graphs to represent the spatiotemporal dependencies in $\mathbf{X}$ [23]. Product graphs have been proven successful for processing multivariate time series, such as imputing missing values [35, 36], denoising [37], providing a spatiotemporal Fourier analysis [33], as well as building vector autoregressive models [38], spatiotemporal scattering transforms [39], and graph-time neural networks [26]. Specifically, denote by $\mathbf{S} \in {\mathbb{R}}^{N \times N}$ the graph shift operator (GSO) of the spatial graph $\mathcal{G}$ , e.g., the adjacency or Laplacian matrix. Consider also a temporal graph ${\mathcal{G}}_{T} = \left( {{\mathcal{V}}_{T},{\mathcal{E}}_{T},{\mathbf{S}}_{T}}\right)$ , where the node set ${\mathcal{V}}_{T} = \{ 1,\ldots , T\}$ comprises the discrete-time instants, the edge set ${\mathcal{E}}_{T} \subseteq {\mathcal{V}}_{T} \times {\mathcal{V}}_{T}$ captures the temporal dependencies, e.g., a directed line or a cyclic graph, and ${\mathbf{S}}_{T} \in {\mathbb{R}}^{T \times T}$ is the respective GSO [40, 41]. The time series ${\mathbf{x}}^{n}$ can now be defined as a graph signal over the temporal graph ${\mathbf{S}}_{T}$ , where ${x}_{t}\left( n\right)$ is a scalar value assigned to the $t$ -th node of ${\mathcal{G}}_{T}$ .
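For instance, a directed line graph over $T$ time steps yields the following temporal GSO, which acts as a one-step delay operator (our illustration):

```python
import numpy as np

# Temporal GSO S_T for a directed line graph over T time steps: node t
# receives from node t-1, so (S_T x)[t] = x[t-1], with zero at t = 0.
T = 4
S_T = np.eye(T, k=-1)                 # ones on the subdiagonal
x = np.array([1.0, 2.0, 3.0, 4.0])    # a time series on the temporal graph
shifted = S_T @ x                     # delays the series by one step
```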
The product graph representing the spatiotemporal patterns in $\mathbf{X}$ is denoted by ${\mathcal{G}}_{\diamond } = {\mathcal{G}}_{T}\diamond \mathcal{G} =$ $\left( {{\mathcal{V}}_{\diamond },{\mathcal{E}}_{\diamond },{\mathbf{S}}_{\diamond }}\right)$ . The node set ${\mathcal{V}}_{\diamond }$ is the Cartesian product between ${\mathcal{V}}_{T}$ and $\mathcal{V}$ which leads to ${NT}$ distinct spatiotemporal nodes ${i}_{\diamond } = \left( {n, t}\right)$ . The edge set ${\mathcal{E}}_{\diamond }$ connects these nodes and the GSO ${\mathbf{S}}_{\diamond } \in {\mathbb{R}}^{{NT} \times {NT}}$ is dictated by the product graph. Fixing the product graph implies fixing the spatiotemporal dependencies in the data, which may lead to wrong inductive biases. To avoid this and improve flexibility, we consider a parametric product graph whose GSO is of the form
$$
\mathbf{S}_{\diamond} = \sum_{i=0}^{1}\sum_{j=0}^{1} s_{ij}\left(\mathbf{S}_{T}^{i} \otimes \mathbf{S}^{j}\right) = \underbrace{s_{00}\mathbf{I}_{T} \otimes \mathbf{I}_{N}}_{\text{self-loops}} + \underbrace{s_{01}\mathbf{I}_{T} \otimes \mathbf{S} + s_{10}\mathbf{S}_{T} \otimes \mathbf{I}_{N}}_{\text{Cartesian}} + \underbrace{s_{11}\mathbf{S}_{T} \otimes \mathbf{S}}_{\text{Kronecker}}, \tag{1}
$$
where the scalar parameters $\{s_{ij}\}$ weigh the spatiotemporal connections and encompass the typical product graph choices, such as the Kronecker, the Cartesian, and the strong product. By column-vectorizing $\mathbf{X}$ into $\mathbf{x}_\diamond = \operatorname{vec}(\mathbf{X}) \in \mathbb{R}^{NT}$, we obtain a product graph signal assigning a real value to each spacetime node $i_\diamond$. That is, the dynamic data $\mathbf{x}_t$ over $\mathcal{G}$ becomes a static signal $\mathbf{x}_\diamond$ over the product graph $\mathcal{G}_\diamond$.
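For concreteness, the parametric GSO in (1) can be assembled with Kronecker products. The following is a minimal NumPy sketch of this construction (function and variable names are ours, not from an official implementation; dense matrices are used for clarity):

```python
import numpy as np

def product_gso(S, S_T, s):
    """Parametric product-graph shift operator [cf. (1)].

    S : (N, N) spatial GSO; S_T : (T, T) temporal GSO;
    s : dict mapping (i, j) to the scalar s_ij weighting S_T^i kron S^j.
    """
    N, T = S.shape[0], S_T.shape[0]
    I_N, I_T = np.eye(N), np.eye(T)
    return (s[(0, 0)] * np.kron(I_T, I_N)    # self-loops
            + s[(0, 1)] * np.kron(I_T, S)    # spatial edges at each instant
            + s[(1, 0)] * np.kron(S_T, I_N)  # temporal edges at each node
            + s[(1, 1)] * np.kron(S_T, S))   # diagonal (Kronecker) edges
```

Setting $s_{01} = s_{10} = 1$ and the remaining weights to zero recovers the Cartesian product; adding $s_{11} = 1$ yields the strong product.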
### 2.2 Encoder
The encoder is an ${L}_{e}$ -layered architecture in which each layer comprises a bank of product graph convolutional filters, temporal downsampling, and pointwise nonlinearities.
The GTConv filter captures the spatiotemporal patterns in the data matrix $\mathbf{X}$. Given the parametric product graph representation $\mathcal{G}_\diamond = (\mathcal{V}_\diamond, \mathcal{E}_\diamond, \mathbf{S}_\diamond)$ [cf. (1)] and the product graph signal $\mathbf{x}_\diamond = \operatorname{vec}(\mathbf{X})$ as input, the output of a graph-time convolutional filter of order $K$ is
$$
\mathbf{y}_{\diamond} = \mathbf{H}\left(\mathbf{S}_{\diamond}\right)\mathbf{x}_{\diamond} = \sum_{k=0}^{K} h_{k}\mathbf{S}_{\diamond}^{k}\mathbf{x}_{\diamond}, \tag{2}
$$
where $\mathbf{h} = [h_0,\ldots,h_K]^\top$ are the filter parameters and $\mathbf{H}(\mathbf{S}_\diamond) := \sum_{k=0}^{K} h_k \mathbf{S}_\diamond^k$ is the filtering matrix. The filter in (2) is called convolutional because the output $\mathbf{y}_\diamond$ is a weighted linear combination of graph signals shifted up to $K$ times over the product graph [42]. Hence, the filter is spatiotemporally local in a neighborhood of radius $K$. The filter locality depends not only on the order $K$ but also on the type of product graph. For example, for a fixed $K$, the Cartesian product is more localized than the strong product, which can be considered to have a longer spatiotemporal memory [26]. Consequently, learning the parameters $\{s_{ij}\}$ in (1) implies learning the multi-hop resolution radius.
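In practice, (2) is evaluated with $K$ repeated shifts rather than by forming the matrix powers $\mathbf{S}_\diamond^k$ explicitly, which keeps the cost linear in the number of product-graph edges when $\mathbf{S}_\diamond$ is sparse. A minimal sketch (our naming; dense NumPy for brevity):

```python
import numpy as np

def gtconv(S_prod, x, h):
    """Graph-time convolution y = sum_k h_k S_prod^k x [cf. (2)],
    computed with repeated shifts z <- S_prod z."""
    y = h[0] * x
    z = np.asarray(x, dtype=float).copy()
    for k in range(1, len(h)):
        z = S_prod @ z          # one more spatiotemporal shift
        y = y + h[k] * z
    return y
```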
In the $\ell$-th layer, the encoder takes $F_{\ell-1}$ product graph signal features $\mathbf{x}_{\diamond,\ell-1}^{1},\ldots,\mathbf{x}_{\diamond,\ell-1}^{g},\ldots,\mathbf{x}_{\diamond,\ell-1}^{F_{\ell-1}}$, processes them with a bank of $F_\ell F_{\ell-1}$ filters, and outputs $F_\ell$ product graph signal features as
$$
\mathbf{y}_{\diamond,\ell}^{f} = \sum_{g=1}^{F_{\ell-1}} \mathbf{H}^{fg}\left(\mathbf{S}_{\diamond}\right)\mathbf{x}_{\diamond,\ell-1}^{g}, \quad f = 1,\ldots,F_{\ell}, \tag{3}
$$
which constitute the higher-level linear representations of the layer.
Temporal downsampling reduces the temporal dimension of each output $\{\mathbf{y}_{\diamond,\ell}^{f}\}_f$ in (3) by downsampling along the temporal dimension with rate $r$. More specifically, we first transform the $f$-th output $\mathbf{y}_{\diamond,\ell}^{f} \in \mathbb{R}^{N T_{\ell-1}^{e}}$ into a matrix $\mathbf{Y}_{\ell}^{f} = \operatorname{vec}^{-1}(\mathbf{y}_{\diamond,\ell}^{f}) \in \mathbb{R}^{N \times T_{\ell-1}^{e}}$ and then summarize every $r$ consecutive columns without overlap to obtain the downsampled matrix $\mathbf{X}_{d,\ell}^{f} \in \mathbb{R}^{N \times T_{\ell}^{e}}$ with $T_{\ell}^{e} < T_{\ell-1}^{e}$. The $(n,t)$-th entry of $\mathbf{X}_{d,\ell}^{f}$ is computed as
$$
\mathbf{X}_{d,\ell}^{f}\left(n,t\right) = \operatorname{SUM}\left(\mathbf{Y}_{\ell}^{f}\left(n,\,r(t-1)+1:rt\right)\right), \quad f = 1,\ldots,F_{\ell}, \tag{4}
$$
where $\operatorname{SUM}(\cdot)$ is a summary function over the temporal indices $r(t-1)+1$ to $rt$. This summary function can be a pure downsampling (i.e., outputting the first column of the block $\mathbf{Y}_{\ell}^{f}(n, r(t-1)+1 : rt)$) or an aggregation (i.e., the mean/max/min per spatial node).
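A sketch of the downsampling step (4), assuming the number of columns is divisible by $r$ (naming is ours):

```python
import numpy as np

def temporal_downsample(Y, r, summary="max"):
    """Summarize every r consecutive columns of the (N x T) matrix Y
    without overlap [cf. (4)]; summary is "first" (pure downsampling),
    "max", or "mean"."""
    N, T = Y.shape
    blocks = Y.reshape(N, T // r, r)   # non-overlapping column blocks
    if summary == "first":
        return blocks[:, :, 0]
    if summary == "max":
        return blocks.max(axis=2)
    return blocks.mean(axis=2)
```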
This temporal downsampling increases the encoder's spatiotemporal memory without affecting the spatial structure. That is, temporal nodes that are $r$ instants apart become neighbors, which brings a longer memory into the next layer and increases the encoder receptive field. While spatial graph pooling could also be added [43], we do not advocate it for two reasons. First, the spatial graph acts as an inductive bias for the GTConvAE [9]; hence, changing the graph in the layers via graph reduction, coarsening, or alternatives affects the spatial structure and ultimately changes the inductive bias. Second, the spatial graph often represents the communication channels for a distributed implementation of the GTConv [20, 42, 44], and changing it may be physically impossible, as sensor nodes have a limited transmission radius. An option in the latter setting may be zero-pad spatial pooling [45, 46], but it requires memorizing the indices where the zero-padding is applied, which may be challenging for large graphs.
Activation functions introduce nonlinearity into the downsampled features to increase the representational capacity. We consider an entry-wise nonlinear function $\sigma(\cdot)$, such as the ReLU, and produce the $\ell$-th layer output as
$$
\mathbf{X}_{\ell}^{f} = \sigma\left(\mathbf{X}_{d,\ell}^{f}\right), \quad f = 1,\ldots,F_{\ell}. \tag{5}
$$
The encoder performs operations (3)-(4)-(5) for all the ${L}_{e}$ layers to yield the encoded output
$$
\mathbf{Z}_{\diamond} := \mathbf{X}_{\diamond,L_{e}} = \operatorname{ENC}\left(\mathbf{x}_{\diamond,0},\mathbf{S},\mathbf{S}_{T};\mathcal{H}_{e},\mathbf{s}\right), \tag{6}
$$
where $\mathbf{x}_{\diamond,0} := \mathbf{x}_\diamond \in \mathbb{R}^{NT}$, $\mathbf{Z}_\diamond = [\mathbf{z}_\diamond^{1},\ldots,\mathbf{z}_\diamond^{F_{L_e}}] \in \mathbb{R}^{N T_{L_e} \times F_{L_e}}$, and we made explicit the dependence on the product graph parameters $\mathbf{s} = [s_{00}, s_{01}, s_{10}, s_{11}]^\top$ [cf. (1)].
### 2.3 Decoder
Mirroring the encoder, the decoder reconstructs the input from the latent representations in (6). Each layer $\ell$ performs graph-time convolutional filtering, followed by temporal upsampling and a pointwise nonlinearity.
GTConv filtering decodes the spatiotemporal latent representations from the encoder. Considering again $F_{\ell-1}$ input features $\mathbf{z}_{\diamond,\ell-1}^{1},\ldots,\mathbf{z}_{\diamond,\ell-1}^{g},\ldots,\mathbf{z}_{\diamond,\ell-1}^{F_{\ell-1}}$ and a filter bank of $F_\ell F_{\ell-1}$ GTConv filters as per (2), the outputs are
$$
\mathbf{y}_{\diamond,\ell}^{f} = \sum_{g=1}^{F_{\ell-1}} \mathbf{H}^{fg}\left(\mathbf{S}_{\diamond}\right)\mathbf{z}_{\diamond,\ell-1}^{g}, \quad f = 1,\ldots,F_{\ell}. \tag{7}
$$
Upsampling zero-pads the temporal values removed during downsampling [cf. (4)] so that the final GTConvAE output matches the dimension of $\mathbf{X}$. Specifically, given the $f$-th feature $\mathbf{y}_{\diamond,\ell}^{f} \in \mathbb{R}^{N T_{\ell-1}^{d}}$ from (7), we again transform it into a matrix $\mathbf{Y}_{\ell}^{f} = \operatorname{vec}^{-1}(\mathbf{y}_{\diamond,\ell}^{f}) \in \mathbb{R}^{N \times T_{\ell-1}^{d}}$ and obtain the upsampled matrix $\mathbf{Z}_{u,\ell}^{f} \in \mathbb{R}^{N \times T_{\ell}^{d}}$, whose $(n,t)$-th entry is computed as
$$
\mathbf{Z}_{u,\ell}^{f}\left(n,t\right) = \begin{cases} \mathbf{Y}_{\ell}^{f}\left(n,\lceil t/r\rceil\right) & \text{if } \exists\, k \in \mathbb{Z} : t = kr, \\ 0 & \text{otherwise,} \end{cases} \tag{8}
$$
where $\lceil \cdot \rceil$ is the ceiling function. ${}^{1}$ The GTConv filter bank in the next layer interpolates these zero-padded values from the downsampled ones. This implies that the downsampling rate in the
---
${}^{1}$ We considered the same down/up-sampling rate in each layer of the decoder and encoder; hence, because of the mirrored structure ${T}_{\ell }^{e}$ in (5) equals ${T}_{\ell - 1}^{d}$ in (8).
---
encoder cannot be too aggressive, to avoid losing information, and the filter orders in the decoder cannot be too small, to retain sufficient interpolatory capacity.
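The zero-pad upsampling rule (8) translates, in 0-indexed code, to placing the kept samples at columns $r-1, 2r-1, \ldots$ and zeros elsewhere. A minimal sketch (naming is ours):

```python
import numpy as np

def temporal_upsample(Y, r):
    """Zero-pad upsampling [cf. (8)]: the t-th column of the output equals
    Y(:, ceil(t/r)) when t is a multiple of r (1-indexed), else zero."""
    N, T = Y.shape
    Z = np.zeros((N, T * r))
    Z[:, r - 1::r] = Y          # kept samples land at t = r, 2r, ...
    return Z
```

A round trip with pure downsampling recovers the kept columns and zeros the rest, which the next filter bank then interpolates.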
Activation functions again introduce nonlinearity into the upsampled features in (8) and yield
$$
\mathbf{Z}_{\ell}^{f} = \sigma\left(\mathbf{Z}_{u,\ell}^{f}\right), \quad f = 1,\ldots,F_{\ell}. \tag{9}
$$
The decoder performs operations (7)-(8)-(9) for all ${L}_{d}$ layers to yield the decoded output ${\widehat{\mathbf{x}}}_{\diamond } =$ ${\mathbf{z}}_{\diamond ,{L}_{d}} \in {\mathbb{R}}^{NT}$ , which also corresponds to the GTConvAE output
$$
\widehat{\mathbf{x}}_{\diamond} = \mathbf{z}_{\diamond,L_{d}} = \operatorname{DEC}\left(\mathbf{Z}_{\diamond},\mathbf{S},\mathbf{S}_{T};\mathcal{H}_{d},\mathbf{s}\right), \tag{10}
$$
where we match the dimensions by setting ${F}_{{L}_{d}} = 1$ .
### 2.4 Loss Function
Given (6) and (10), the GTConvAE in (1) can be detailed as
$$
\widehat{\mathbf{x}}_{\diamond} = \operatorname{GTConvAE}\left(\mathbf{x}_{\diamond},\mathbf{S},\mathbf{S}_{T};\mathcal{H},\mathbf{s}\right) = \operatorname{DEC}\left(\operatorname{ENC}\left(\mathbf{x}_{\diamond},\mathbf{S},\mathbf{S}_{T};\mathcal{H}_{e},\mathbf{s}\right),\mathbf{S},\mathbf{S}_{T};\mathcal{H}_{d},\mathbf{s}\right). \tag{11}
$$
The GTConv filter parameters in $\mathcal{H}$ and the product graph parameters in $\mathbf{s}$ are estimated by minimizing the loss function
$$
\mathcal{L}\left(\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}\right) = \mathbb{E}_{\mathcal{D}}\left[\left\|\mathbf{x}_{\diamond} - \widehat{\mathbf{x}}_{\diamond}\right\|_{2}\right] + \rho\left\|\mathbf{s}\right\|_{1}, \tag{12}
$$
where the first term measures the reconstruction error over the probability distribution $\mathcal{D}$ of the training set, whereas the second term imposes sparsity on the spatiotemporal dependencies of the product graph. The scalar $\rho > 0$ controls the trade-off between fitting and regularization; a higher value implies stronger spatiotemporal sparsity (via the $\ell_1$ norm $\|\cdot\|_1$), i.e., sparser spatiotemporal attention.
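For a single training sample, the loss (12) reduces to the following sketch (in training, the expectation over $\mathcal{D}$ is replaced by an average over mini-batches; naming is ours):

```python
import numpy as np

def gtconvae_loss(x, x_hat, s, rho=0.2):
    """Regularized reconstruction loss [cf. (12)] for one sample:
    l2 reconstruction error plus rho times the l1 norm of the
    product-graph weights s = [s00, s01, s10, s11]."""
    return np.linalg.norm(x - x_hat, 2) + rho * np.abs(s).sum()
```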
Complexity analysis: Denoting the maximum number of features across layers by $F_{\max} = \max_\ell \{F_\ell\}$, the GTConvAE has $|\mathcal{H}| = (L_e + L_d)(K+1)F_{\max}^2$ parameters. This is because each GTConv filter (2) has $K+1$ parameters and each layer uses a filter bank of at most $F_{\max}^2$ filters. Although the product graph is of large dimension, it is highly sparse, and the computational complexity of the GTConvAE is of order $\mathcal{O}(M_\diamond |\mathcal{H}|)$, where $M_\diamond = NT + N M_T + MT + 2M M_T$ is the number of edges of the product graph ($M$ edges in the spatial graph and $M_T$ edges in the temporal graph). This is because each graph-time filter has a computational complexity of order $\mathcal{O}((K+1)M_\diamond)$ [26] and the GTConvAE comprises $(L_e + L_d)F_{\max}^2$ graph-time filters. Note that we consider a sampling rate $r = 1$ to provide a worst-case analysis; the computational complexity reduces further for $r > 1$.
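The two counts in this analysis can be sketched directly (worst case of $F_{\max}$ features in every layer; naming is ours):

```python
def gtconvae_sizes(L_e, L_d, K, F_max, N, T, M, M_T):
    """Parameter count |H| and product-graph edge count M_prod from the
    complexity analysis above."""
    n_params = (L_e + L_d) * (K + 1) * F_max ** 2
    M_prod = N * T + N * M_T + M * T + 2 * M * M_T
    return n_params, M_prod
```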
## 3 Stability Analysis
In this section, we conduct a stability analysis of the GTConvAE w.r.t. relative perturbations in the spatial graph. This stability analysis is motivated by the fact that we do not always have access to the ground truth spatial graph due to modeling issues or when the physical network undergoes slight changes over time. Hence, the spatial graph used for training differs from that used for testing; thus, having a stable GTConvAE is desirable to perform the tasks reliably.
We consider the relative perturbation model proposed in [27]
$$
\widehat{\mathbf{S}} = \mathbf{S} + \left(\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}\right), \tag{13}
$$
where $\widehat{\mathbf{S}}$ is the perturbed GSO and $\mathbf{E}$ is the perturbation matrix with bounded operator norm $\|\mathbf{E}\| \leq \epsilon$. This model accounts for perturbations that depend on the graph structure, i.e., a node with more strongly weighted edges is relatively more exposed to perturbation.
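A random instance of the perturbation (13) can be sketched as follows, scaling a symmetric $\mathbf{E}$ so that $\|\mathbf{E}\| = \epsilon$ (naming is ours):

```python
import numpy as np

def relative_perturbation(S, eps, rng):
    """Relative perturbation S_hat = S + (SE + ES) [cf. (13)] with a
    random symmetric E of operator norm exactly eps."""
    N = S.shape[0]
    E = rng.standard_normal((N, N))
    E = (E + E.T) / 2
    E *= eps / np.linalg.norm(E, 2)   # spectral norm of E becomes eps
    return S + S @ E + E @ S
```

By submultiplicativity, the induced change satisfies $\|\mathbf{SE} + \mathbf{ES}\| \leq 2\epsilon\|\mathbf{S}\|$.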
### 3.1 Spatiotemporal Integral Lipschitz Filters
To investigate the stability of the GTConvAE, we first characterize the graph-time convolutional filters in the spectral domain. Consider the eigendecompositions of the spatial GSO $\mathbf{S} = \mathbf{V}\mathbf{\Lambda}\mathbf{V}^{\mathrm{H}}$ and of the temporal GSO $\mathbf{S}_T = \mathbf{V}_T \mathbf{\Lambda}_T \mathbf{V}_T^{\mathrm{H}}$. Matrices $\mathbf{V} = [\mathbf{v}_1,\ldots,\mathbf{v}_N]$ and $\mathbf{V}_T = [\mathbf{v}_{T,1},\ldots,\mathbf{v}_{T,T}]$ collect the spatial and temporal eigenvectors, respectively, and $\mathbf{\Lambda} = \operatorname{diag}(\lambda_1,\ldots,\lambda_N)$ and $\mathbf{\Lambda}_T = \operatorname{diag}(\lambda_{T,1},\ldots,\lambda_{T,T})$ the corresponding eigenvalues. From (1), the eigendecomposition of the product graph GSO is $\mathbf{S}_\diamond = \mathbf{V}_\diamond \mathbf{\Lambda}_\diamond \mathbf{V}_\diamond^{\mathrm{H}}$, with eigenvectors $\mathbf{V}_\diamond = \mathbf{V}_T \otimes \mathbf{V}$ given by the Kronecker product $\otimes$ of the respective eigenvector matrices, and eigenvalues $\mathbf{\Lambda}_\diamond = \mathbf{\Lambda}_T \diamond \mathbf{\Lambda}$ defined by the product graph rule. As in graph signal processing [32], we can characterize a joint graph-time Fourier transform of product graph signals. Specifically, the graph-time Fourier transform of signal $\mathbf{x}_\diamond$ is defined as $\widetilde{\mathbf{x}}_\diamond = (\mathbf{V}_T \otimes \mathbf{V})^{\mathrm{H}}\mathbf{x}_\diamond$, and the eigenvalues in $\mathbf{\Lambda}_\diamond$ collect the graph-time frequencies of the product graph [33]. Applying this Fourier transform to the input and output of the GTConv filter in (2), we can write the filter input-output relation as $\widetilde{\mathbf{y}}_\diamond = \mathbf{H}(\mathbf{\Lambda}_\diamond)\widetilde{\mathbf{x}}_\diamond$, where $\widetilde{\mathbf{y}}_\diamond$ is the Fourier transform of the output and $\mathbf{H}(\mathbf{\Lambda}_\diamond)$ is an $NT \times NT$ diagonal matrix containing the filter frequency response on its main diagonal. This frequency response is of the form
$$
h\left(\lambda_{\diamond,(n,t)}\right) = \sum_{k=0}^{K} h_{k}\lambda_{\diamond,(n,t)}^{k}, \tag{14}
$$
where ${\lambda }_{\diamond ,\left( {n, t}\right) } = {\lambda }_{T, t}\diamond {\lambda }_{n}$ indicates the eigenvalue of ${\mathbf{S}}_{\diamond }$ corresponding to the spatial index $n \in \left\lbrack N\right\rbrack$ and temporal index $t \in \left\lbrack T\right\rbrack$ of the product graph.
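This spectral characterization can be verified numerically: filtering in the vertex domain per (2) must coincide with pointwise multiplication by the frequency response (14) in the Fourier domain. A self-contained check on a symmetric toy shift operator (sizes and taps are arbitrary choices of ours):

```python
import numpy as np

def spectral_filtering_check(K=2, n=6, seed=1):
    """Return True iff vertex-domain filtering equals spectral-domain
    filtering for a random symmetric shift operator."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((n, n)); S = (S + S.T) / 2
    lam, V = np.linalg.eigh(S)            # S = V diag(lam) V^T
    h = rng.standard_normal(K + 1)        # taps h_0, ..., h_K
    x = rng.standard_normal(n)
    y_vertex = sum(hk * np.linalg.matrix_power(S, k) @ x
                   for k, hk in enumerate(h))
    freq_resp = sum(hk * lam ** k for k, hk in enumerate(h))  # h(lambda)
    y_spectral = V @ (freq_resp * (V.T @ x))
    return np.allclose(y_vertex, y_spectral)
```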
The eigenvalues ${\lambda }_{\diamond ,\left( {n, t}\right) }$ can be considered as the frequencies of the product graph and can be ordered in ascending order of magnitude. We can then characterize the variation of the filter frequency response for two different spatial eigenvalues.
Definition 1. A GTConv filter with frequency response $h(\lambda_{\diamond,(n,t)})$ is graph integral Lipschitz if there exists a constant $C > 0$ such that, for all frequencies $\lambda_{\diamond,(n,t)}, \lambda_{\diamond,(n',t')} \in \mathbf{\Lambda}_\diamond$, it holds that
$$
\left|h\left(\lambda_{\diamond,(n,t)}\right) - h\left(\lambda_{\diamond,(n',t')}\right)\right| \leq C\,\frac{\left|\lambda_{n} - \lambda_{n'}\right|}{\left|\lambda_{n} + \lambda_{n'}\right|/2} \quad \text{for all } \lambda_{n},\lambda_{n'} \in \mathbf{\Lambda}. \tag{15}
$$
Expression (15) states that the frequency response of the graph-time convolutional filter should vary sub-linearly, at a rate modulated by the eigenvalue midpoint $|\lambda_n + \lambda_{n'}|/2$. This implies
$$
\left|\lambda_{n}\frac{\partial h\left(\lambda_{\diamond,(n,t)}\right)}{\partial \lambda_{n}}\right| \leq C \quad \text{for all } \lambda_{n} \in \mathbf{\Lambda} \text{ and } \lambda_{\diamond,(n,t)} \in \mathbf{\Lambda}_{\diamond}, \tag{16}
$$
which means that an integral Lipschitz filter cannot vary drastically at high frequencies. Hence, such a filter can discriminate low-frequency content but not high-frequency content.
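For a polynomial response (14), $\lambda\,\partial h/\partial\lambda = \sum_k k\, h_k \lambda^k$, so condition (16) can be checked empirically over a set of frequencies (a sketch; naming is ours):

```python
import numpy as np

def integral_lipschitz_constant(h, lams):
    """Smallest C satisfying (16) over the given frequencies:
    max over lambda of |lambda * h'(lambda)| for polynomial taps h."""
    lams = np.asarray(lams, dtype=float)
    val = sum(k * hk * lams ** k for k, hk in enumerate(h))
    return np.abs(val).max()
```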
Definition 2. A graph-time convolutional filter has normalized frequency response if $\left| {h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right) }\right| \leq 1$ for all ${\lambda }_{\diamond ,\left( {n, t}\right) } \in {\mathbf{\Lambda }}_{\diamond }$ .
This definition is a direct consequence of normalizing the filters' frequency responses by their maximum value. We show next that a GTConvAE with filters satisfying Definitions 1 and 2 is stable to perturbations of the form (13).
### 3.2 Stability Result
The following theorem with proof in Appendix A provides the main result.
Theorem 1. Consider a GTConvAE with an $L_e$-layer encoder and an $L_d$-layer decoder having $F_\ell \leq F_{\max}$ and $F_{d,\ell} \leq F_{\max}$ features per layer in the encoder and decoder, respectively, and a summary function $\operatorname{SUM}(\cdot)$ performing pure downsampling with rate $r$. Consider also that the filters are integral Lipschitz [cf. Def. 1] with normalized frequency responses [cf. Def. 2] and that the nonlinearities are 1-Lipschitz (e.g., ReLU, absolute value). Let this GTConvAE be trained over the product graph (1) and deployed over its perturbed version, whose spatial GSO is given in (13) with a perturbation of at most $\|\mathbf{E}\| \leq \epsilon$. The distance between the two models is upper bounded by
$$
\left\|\operatorname{GTConvAE}\left(\mathbf{x}_{\diamond},\mathbf{S},\mathbf{S}_{T}\right) - \operatorname{GTConvAE}\left(\mathbf{x}_{\diamond},\widehat{\mathbf{S}},\mathbf{S}_{T}\right)\right\|_{2} \leq \left(L_{d} + L_{e}\right) r^{-L_{e}/2}\,\epsilon\,\Delta\, F_{\max}^{L_{e}+L_{d}-1}\left\|\mathbf{x}_{\diamond}\right\|_{2}, \tag{17}
$$
where $\Delta = {2C}\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\text{ max }}}\right) \left( {1 + \delta \sqrt{NT}}\right)$ , and $\delta = {\left( \parallel \mathbf{U} - \mathbf{V}{\parallel }^{2} + 1\right) }^{2} - 1$ with eigenvectors $\mathbf{U}$ from $\mathbf{E} = {\mathbf{{UMU}}}^{\mathrm{H}}$ and $\mathbf{V}$ from $\mathbf{S} = {\mathbf{{V\Lambda V}}}^{\mathrm{H}}$ .
The result (17) states that the GTConvAE is stable to relative perturbations. It also suggests that the GTConvAE is less stable for larger product graphs (factor $\sqrt{NT}$), since more nodes pass information over the perturbed edges. Moreover, making the model more complex by increasing the number of features or layers compromises stability, as more graph-time convolutional filters operate on a perturbed graph (factor $F_{\max}^{L_e + L_d - 1}$). Stability improves with the sampling rate $r > 1$, because fewer nodes operate over the perturbed graph after downsampling. Furthermore, a deeper encoder implies more downsampling, hence improved stability; yet there is a trade-off among the terms $r^{-L_e/2}$, $F_{\max}^{L_e + L_d - 1}$, and $L_e + L_d$. Finally, parameters $s_{01}$ and $s_{11}$ appear in the stability bound because they are the only ones composing the spatial edges; thus, minimizing $\|\mathbf{s}\|_1$ in (12) also improves stability.
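The bound (17) is straightforward to evaluate numerically, which helps when budgeting depth, width, and sampling rate; a sketch with our naming, where `delta_evec` stands for the eigenvector misalignment $\|\mathbf{U}-\mathbf{V}\|$:

```python
import numpy as np

def stability_bound(L_e, L_d, r, eps, C, s01, s11, lam_T_max,
                    delta_evec, F_max, N, T, x_norm):
    """Right-hand side of (17), with Delta and delta as defined above."""
    delta = (delta_evec ** 2 + 1) ** 2 - 1
    Delta = 2 * C * (s01 + s11 * lam_T_max) * (1 + delta * np.sqrt(N * T))
    return ((L_d + L_e) * r ** (-L_e / 2) * eps * Delta
            * F_max ** (L_e + L_d - 1) * x_norm)
```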
## 4 Numerical Results
This section compares the GTConvAE with baseline solutions and competitive alternatives for time series denoising as well as anomaly detection with real data from solar irradiance and water networks. In all experiments, the ADAM optimizer with the standard hyperparameters is used and an unweighted directed line graph is considered for the temporal graph in (1).
### 4.1 Denoising of Solar Irradiance Time Series
We consider the task of denoising solar irradiance time series over $N = 75$ solar cities in the northern region of the U.S., measured as global horizontal irradiance (GHI, $W/m^2$) [4]. Each solar city is a vertex, and undirected edges are set from the physical distances between cities via a thresholded Gaussian kernel with $\sigma = 0.25$ and threshold $0.1$, after normalizing the maximum weight to 1 [32]. The noise is generated from a zero-mean Gaussian distribution with covariance matrix equal to the pseudo-inverse of the normalized graph Laplacian.
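The graph construction described above can be sketched as follows (naming is ours; the exact normalization order in [32] may differ):

```python
import numpy as np

def gaussian_kernel_graph(D, sigma=0.25, th=0.1):
    """Thresholded Gaussian-kernel adjacency from a pairwise distance
    matrix D, with the maximum weight normalized to 1."""
    W = np.exp(-D ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)   # no self-loops
    W = W / W.max()            # normalize maximum weight to 1
    W[W < th] = 0.0            # drop weak edges
    return W
```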
Figure 1: Denoising performance of the proposed GTConvAE and alternatives. The standard deviation for all the models is of order ${10}^{-2}$ .
Experimental setup. We considered the first 2000 samples for training and validation (2000-2014) and the subsequent 200 (2014-2016) for testing. The input is a single feature corresponding to the GHI measurement, and the product graph has $N = 75$ spatial nodes and $T = 8$ temporal nodes. The GTConvAE has three layers with $\{8, 4, 2\}$ features in the encoder and the reverse in the decoder; all filters are of order four and the normalized Laplacian is used as GSO; the downsampling rate is $r = 2$; the max function is used in (4); and the activations are ReLUs. The regularizer weight in (12) is $\rho = 0.2$ and the learning rate is $25 \times 10^{-4}$. We compared the GTConvAE with the following alternatives:
- C3D [5]: A non-graph spatiotemporal autoencoder using three-dimensional CNNs.
- ConvLSTMAE [7]: A non-graph spatiotemporal autoencoder using two-dimensional CNNs followed by LSTMs.
- STGAE [1]: A modular spatiotemporal graph autoencoder that uses an edge varying filter for the graph dimension followed by temporal convolution.
- Baseline GCNN [42]: An autoencoder built with a conventional graph convolutional neural network using the time series as features over the nodes. The shift operator is the normalized Laplacian matrix.
The first two methods show the role of using a distance graph as an inductive bias. The third method compares the joint GTConvAE against disjoint alternatives, whereas the last model shows the role of sparse product graphs over treating the time series as node features. The parameters of all models are chosen via grid search from the ranges reported in Appendix B.
Results. Fig. 1 shows the reconstruction normalized mean squared error (NMSE) for different signal-to-noise ratios (SNRs). The proposed GTConvAE performs on par with STGAE at low SNRs and better at high SNRs. We attribute this improvement to the ability of the GTConvAE to capture
Table 1: Comparison of different models in the BATADAL dataset. All metrics are the higher the better.
| Model | $N_A$ | $\mathcal{S}$ | $\mathcal{S}_{\mathrm{TTD}}$ | $\mathcal{S}_{\mathrm{CM}}$ | TPR | TNR |
|---|---|---|---|---|---|---|
| STGCAE-LSTM [2] | 7 | 0.924 | 0.920 | 0.928 | 0.892 | 0.964 |
| TGCN [47] | 7 | 0.931 | 0.934 | 0.928 | 0.885 | 0.971 |
| GTConvAE (ours) | 7 | 0.940 | 0.928 | 0.952 | 0.922 | 0.981 |
jointly the spatiotemporal patterns in the data while STGAE operates disjointly. We also see that in comparison with the baseline GCNN, the GTConvAE performs consistently better, highlighting the importance of the sparser product graphs and temporal downsampling. Finally, we also observe a superior performance compared with the non-graph alternatives C3D and ConvLSTMAE.
### 4.2 Anomaly Detection in Water Networks
We now consider the task of detecting cyber-physical attacks on a water network. We considered the C-town network from the Battle of ATtack Detection ALgorithms (BATADAL) dataset comprising $N = {388}$ nodes (demand junctions, storage tanks, and reservoirs) and 8762 hourly measurements of 43 different node feature signals for a period of 12 months. We used the same setup as in [47] and considered a correlation graph from the data. The dataset provides a normal operating condition comprising recordings for the first 12 months and an anomalous event operating condition comprising 7 attacks over the successive 3 months. Refer to [48, 49] for more detail about the BATADAL dataset.
Experimental setup. The normal operating condition data are used to train the model for one-step forecasting to be used for detecting anomalies. The anomalous event operating condition data is used for testing and an anomaly is flagged if the prediction error exceeds a fixed threshold. We set the threshold intuitively to three times the error variance during training. The inputs are the 43 time series over the $N = {388}$ nodes and we considered $T = 6$ for the temporal graph dimension. The GTConvAE has two layers with $\{ 8,2\}$ features in the encoder and reversely in the decoder; all filters are of order $K = 4$ ; a downsampling rate $r = 2$ ; a max function in (4); and ReLU activation functions. The regularizer weight in (12) is $\rho = {0.14}$ and learning rate is $5 \times {10}^{-4}$ . We compared the performance against two graph-based alternatives:
- STGCAE-LSTM [2]: A related solution to our method that uses a Cartesian spatiotemporal graph with graph convolutions followed by an LSTM in the latent domain.
- TGCN [47]: A modular graph-based autoencoder using cascades of temporal convolutions and message passing.
The parameters of all models are obtained via grid search from the ranges reported in Appendix C. We measure the performance via the $\mathcal{S}$-score defined in the BATADAL dataset, which combines $\mathcal{S}_{\mathrm{TTD}}$ for the timeliness of anomaly detection and $\mathcal{S}_{\mathrm{CM}}$ for the classification accuracy. The $\mathcal{S}$-score is defined as
$$
\mathcal{S} = 0.5\left(\mathcal{S}_{\mathrm{TTD}} + \mathcal{S}_{\mathrm{CM}}\right) = 0.5\left(\left(1 - \frac{1}{N_{A}}\sum_{i=1}^{N_{A}}\frac{\mathrm{TTD}_{i}}{\Delta T_{i}}\right) + \frac{\mathrm{TPR} + \mathrm{TNR}}{2}\right), \tag{18}
$$
where $N_A$ is the number of attacks, $\mathrm{TTD}_i$ is the time to detect the $i$-th attack, $\Delta T_i$ is the duration of the $i$-th attack, TPR is the true positive rate, and TNR is the true negative rate.
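The metric (18) can be computed directly from the per-attack detection times and the confusion-matrix rates (a sketch; naming is ours):

```python
def s_score(ttd, duration, tpr, tnr):
    """BATADAL S-score [cf. (18)] from per-attack detection times ttd,
    attack durations, and true positive / true negative rates."""
    N_A = len(ttd)
    s_ttd = 1 - sum(t / d for t, d in zip(ttd, duration)) / N_A
    s_cm = (tpr + tnr) / 2
    return 0.5 * (s_ttd + s_cm), s_ttd, s_cm
```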
Results: Table 1 shows that all models detected all of the attacks; however, the TGCN performs better in timing $\mathcal{S}_{\mathrm{TTD}}$. This is because its threshold was calibrated on a validation dataset, whereas we used a fixed intuitive threshold based only on training. In the anomaly detection accuracy $\mathcal{S}_{\mathrm{CM}}$, the GTConvAE outperforms the other two models, as the product graphs together with downsampling enable it to learn the spatiotemporal patterns in the data effectively. Overall, the GTConvAE performs better than the other models by a small margin.

Figure 2: Stability results for different scenarios of the GTConvAE and fixed product graphs. (a) Different SNRs in the topology. (b) Different graph sizes under $4\,\mathrm{dB}$ perturbation. (c) Different sampling rates $r$.
### 4.3 Stability analysis
To investigate the stability of the GTConvAE, we trained the model on a synthesized dataset so we could control all the settings, such as the spatial graph size $N$. The graph is an undirected stochastic block model with 5 communities and $N \in \{50, 100, \ldots, 500\}$ nodes. The edges are drawn independently with probability 0.8 for nodes in the same community and 0.2 otherwise. Each data sample is a diffused signal over the graph, $\mathbf{X} = \left[ \mathbf{S}\mathbf{x}, \ldots, \mathbf{S}^T\mathbf{x} \right]$, with $T = 6$ and $\mathbf{x}$ having a random non-zero entry. The autoencoder is used to reconstruct this data.
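The synthetic data generation above can be sketched as follows; the equal-sized communities and the seeding are assumptions for illustration, and the adjacency matrix stands in for the GSO $\mathbf{S}$:

```python
import numpy as np

def sbm_adjacency(n, n_blocks=5, p_in=0.8, p_out=0.2, seed=0):
    """Undirected stochastic block model with (roughly) equal communities."""
    rng = np.random.default_rng(seed)
    labels = np.arange(n) % n_blocks
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample each edge once
    return (upper | upper.T).astype(float)

def diffused_sample(S, T=6, seed=0):
    """One sample X = [Sx, ..., S^T x] with x having one random non-zero entry."""
    rng = np.random.default_rng(seed)
    x = np.zeros(S.shape[0])
    x[rng.integers(S.shape[0])] = 1.0
    cols, v = [], x
    for _ in range(T):
        v = S @ v        # successive diffusions S^1 x, ..., S^T x
        cols.append(v)
    return np.stack(cols, axis=1)  # shape (N, T)

A = sbm_adjacency(50)
X = diffused_sample(A, T=6)
print(X.shape)  # (50, 6)
```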
Experimental setup. The model has two encoder and two decoder layers with sampling rate $r = 2$. The encoder layers have $\{ 8, 4\}$ features and the decoder the reverse. All filters are of order four and the normalized graph Laplacian is used as GSO. The activation functions are ReLU and pure downsampling is considered. The regularizer weight is 0.25 and the learning rate is $2.5 \times 10^{-2}$. The model is trained on graphs of different sizes and tested on a perturbed graph following the relative perturbation model in (13) under different SNR scenarios in the topology. We compare the stability of the GTConvAE with learned product graphs against the same autoencoder with fixed Cartesian and strong product graphs.
Results. Fig. 2a shows the behavior of the GTConvAE in different noisy scenarios. The GTConvAE is the most stable in medium and high SNRs, as it leverages sparsity in the spatiotemporal coupling. However, its performance drops more rapidly in low SNR scenarios, since its parameters are trained for the specific data and task. Fig. 2b shows the reconstruction error over graphs of different sizes. The GTConvAE is more stable than the other models, even on the larger graphs, for the same reason as before. All the models lose performance similarly as the size of the graph grows, which is consistent with the theoretical result in (17).
## 5 Conclusion
We introduced the GTConvAE as an unsupervised model for learning representations from multivariate time series over networks. The GTConvAE uses parametric product graphs to aggregate information from a spatiotemporal neighborhood while also learning the spatiotemporal couplings in the product graph. We proposed a spectral analysis for the GTConvAE, enabled by its convolutional nature, which in turn led to a stability analysis. The latter states that the GTConvAE is stable to relative perturbations in the spatial graph as long as the graph-time filters vary smoothly over high spatiotemporal frequencies. Finally, numerical results showed that the GTConvAE compares well with state-of-the-art models on benchmark datasets and corroborated the stability results.
## References

[1] Kanglei Zhou, Zhiyuan Cheng, Hubert P. H. Shum, Frederick W. B. Li, and Xiaohui Liang. STGAE: Spatial-temporal graph auto-encoder for hand motion denoising. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 41-49, 2021. doi: 10.1109/ISMAR52148.2021.00018.

[2] Nanjun Li, Faliang Chang, and Chunsheng Liu. Human-related anomalous event detection via spatial-temporal graph convolutional autoencoder with embedded long short-term memory network. Neurocomputing, 490:482-494, 2022. ISSN 0925-2312. doi: 10.1016/j.neucom.2021.12.023.

[3] Tien Huu Do, Duc Minh Nguyen, Evaggelia Tsiligianni, Angel Lopez Aguirre, Valerio Panzica La Manna, Frank Pasveer, Wilfried Philips, and Nikos Deligiannis. Matrix completion with variational graph autoencoders: Application in hyperlocal air quality inference. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7535-7539, 2019.

[4] Manajit Sengupta, Yu Xie, Anthony Lopez, Aron Habte, Galen Maclaurin, and James Shelby. The National Solar Radiation Data Base (NSRDB). Renewable and Sustainable Energy Reviews, 89:51-60, 2018. ISSN 1364-0321.

[5] Shifu Zhou, Wei Shen, Dan Zeng, Mei Fang, Yuanwang Wei, and Zhijiang Zhang. Spatial-temporal convolutional neural networks for anomaly detection and localization in crowded scenes. Signal Processing: Image Communication, 47:358-368, 2016. ISSN 0923-5965.

[6] Yong Shean Chong and Yong Haur Tay. Abnormal event detection in videos using spatiotemporal autoencoder. In International Symposium on Neural Networks, pages 189-196. Springer, 2017.

[7] Weixin Luo, Wen Liu, and Shenghua Gao. Remembering history with convolutional LSTM for anomaly detection. In 2017 IEEE International Conference on Multimedia and Expo (ICME), pages 439-444, 2017. doi: 10.1109/ICME.2017.8019325.

[8] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.

[9] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.

[10] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4-24, 2021. doi: 10.1109/TNNLS.2020.2978386.

[11] Elvin Isufi, Fernando Gama, and Alejandro Ribeiro. EdgeNets: Edge varying graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-1, 2021. doi: 10.1109/TPAMI.2021.3111054.

[12] Wenchao Chen, Long Tian, Bo Chen, Liang Dai, Zhibin Duan, and Mingyuan Zhou. Deep variational graph convolutional recurrent network for multivariate time series anomaly detection. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 3621-3633. PMLR, 17-23 Jul 2022.

[13] Sedigheh Mahdavi, Shima Khoshraftar, and Aijun An. Dynamic joint variational graph autoencoders. In Peggy Cellier and Kurt Driessens, editors, Machine Learning and Knowledge Discovery in Databases, pages 385-401, Cham, 2020. Springer International Publishing. ISBN 978-3-030-43823-4.

[14] Yue Hu, Ao Qu, and Dan Work. Detecting extreme traffic events via a context augmented graph autoencoder. ACM Transactions on Intelligent Systems and Technology (TIST), 2022.

[15] Mounir Haddad, Cécile Bothorel, Philippe Lenca, and Dominique Bedart. Temporalizing static graph autoencoders to handle temporal networks. In Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 201-208, 2021.

[16] C. Si, W. Chen, W. Wang, L. Wang, and T. Tan. An attention enhanced graph convolutional LSTM network for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1227-1236, 2019.

[17] Y. Seo, M. Defferrard, P. Vandergheynst, and X. Bresson. Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pages 362-373. Springer, 2018.

[18] L. Ruiz, F. Gama, and A. Ribeiro. Gated graph recurrent neural networks. IEEE Transactions on Signal Processing, 68:6303-6318, 2020.

[19] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

[20] Samar Hadou, Charilaos I. Kanatsoulis, and Alejandro Ribeiro. Space-time graph neural networks. arXiv preprint arXiv:2110.02880, 2021.

[21] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao Schardl, and Charles Leiserson. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 34:5363-5370, Apr. 2020. doi: 10.1609/aaai.v34i04.5984.

[22] Yanbang Wang, Pan Li, Chongyang Bai, and Jure Leskovec. TEDIC: Neural modeling of behavioral patterns in dynamic social interaction networks. In Proceedings of the Web Conference 2021, WWW '21, pages 693-705, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383127.

[23] Richard H. Hammack, Wilfried Imrich, and Sandi Klavžar. Handbook of Product Graphs, volume 2. CRC Press, Boca Raton, 2011.

[24] Thomas N. Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016.

[25] Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. Variational graph recurrent neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

[26] Mohammad Sabbaqi and Elvin Isufi. Graph-time convolutional neural networks: Architecture and theoretical analysis. arXiv preprint arXiv:2206.15174, 2022.

[27] Fernando Gama, Joan Bruna, and Alejandro Ribeiro. Stability properties of graph neural networks. IEEE Transactions on Signal Processing, 68:5680-5695, 2020. doi: 10.1109/TSP.2020.3026980.

[28] Zhan Gao, Elvin Isufi, and Alejandro Ribeiro. Stability of graph convolutional neural networks to stochastic perturbations. Signal Processing, 188:108216, 2021. ISSN 0165-1684.

[29] Henry Kenlay, Dorina Thanou, and Xiaowen Dong. On the stability of graph convolutional neural networks under edge rewiring. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8513-8517, 2021. doi: 10.1109/ICASSP39728.2021.9413474.

[30] Luana Ruiz, Fernando Gama, and Alejandro Ribeiro. Gated graph recurrent neural networks. IEEE Transactions on Signal Processing, 68:6303-6318, 2020. doi: 10.1109/TSP.2020.3033962.

[31] Aliaksei Sandryhaila and José M. F. Moura. Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure. IEEE Signal Processing Magazine, 31(5):80-90, 2014.

[32] David I Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83-98, 2013. doi: 10.1109/MSP.2012.2235192.

[33] Francesco Grassi, Andreas Loukas, Nathanaël Perraudin, and Benjamin Ricaud. A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs. IEEE Transactions on Signal Processing, 66(3):817-829, 2017.

[34] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.

[35] Kai Qiu, Xianghui Mao, Xinyue Shen, Xiaohan Wang, Tiejian Li, and Yuantao Gu. Time-varying graph signal reconstruction. IEEE Journal of Selected Topics in Signal Processing, 11(6):870-883, 2017. doi: 10.1109/JSTSP.2017.2726969.

[36] Vassilis N. Ioannidis, Daniel Romero, and Georgios B. Giannakis. Inference of spatio-temporal functions over graphs via multikernel kriged Kalman filtering. IEEE Transactions on Signal Processing, 66(12):3228-3239, 2018. doi: 10.1109/TSP.2018.2827328.

[37] Jhony H. Giraldo, Arif Mahmood, Belmar Garcia-Garcia, Dorina Thanou, and Thierry Bouwmans. Reconstruction of time-varying graph signals via Sobolev smoothness. IEEE Transactions on Signal and Information Processing over Networks, 8:201-214, 2022. doi: 10.1109/TSIPN.2022.3156886.

[38] Alberto Natali, Elvin Isufi, Mario Coutino, and Geert Leus. Learning time-varying graphs from online data. IEEE Open Journal of Signal Processing, 3:212-228, 2022. doi: 10.1109/OJSP.2022.3178901.

[39] Chao Pan, Siheng Chen, and Antonio Ortega. Spatio-temporal graph scattering transform. arXiv preprint arXiv:2012.03363, 2020.

[40] Aliaksei Sandryhaila and José M. F. Moura. Discrete signal processing on graphs. IEEE Transactions on Signal Processing, 61(7):1644-1656, 2013. doi: 10.1109/TSP.2013.2238935.

[41] Antonio Ortega, Pascal Frossard, Jelena Kovačević, José M. F. Moura, and Pierre Vandergheynst. Graph signal processing: Overview, challenges, and applications. Proceedings of the IEEE, 106(5):808-828, 2018.

[42] Fernando Gama, Elvin Isufi, Geert Leus, and Alejandro Ribeiro. Graphs, convolutions, and neural networks: From graph filters to graph neural networks. IEEE Signal Processing Magazine, 37(6):128-138, 2020. doi: 10.1109/MSP.2020.3016143.

[43] Chuang Liu, Yibing Zhan, Chang Li, Bo Du, Jia Wu, Wenbin Hu, Tongliang Liu, and Dacheng Tao. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321, 2022.

[44] Arbaaz Khan, Ekaterina Tolstaya, Alejandro Ribeiro, and Vijay Kumar. Graph policy gradients for large scale robot control. In Leslie Pack Kaelbling, Danica Kragic, and Komei Sugiura, editors, Proceedings of the Conference on Robot Learning, volume 100 of Proceedings of Machine Learning Research, pages 823-834. PMLR, 30 Oct-01 Nov 2020.

[45] Fernando Gama, Antonio G. Marques, Geert Leus, and Alejandro Ribeiro. Convolutional neural network architectures for signals supported on graphs. IEEE Transactions on Signal Processing, 67(4):1034-1049, 2019. doi: 10.1109/TSP.2018.2887403.

[46] E. Isufi and G. Mazzola. Graph-time convolutional neural networks. IEEE Data Science and Learning Workshop, 2021.

[47] Lydia Tsiami and Christos Makropoulos. Cyber-physical attack detection in water distribution systems with temporal graph convolutional neural networks. Water, 13(9):1247, 2021.

[48] Riccardo Taormina, Stefano Galelli, Nils Ole Tippenhauer, Elad Salomons, and Avi Ostfeld. Characterizing cyber-physical attacks on water distribution systems. Journal of Water Resources Planning and Management, 143(5):04017009, 2017.

[49] Riccardo Taormina, Stefano Galelli, Nils Ole Tippenhauer, Elad Salomons, Avi Ostfeld, Demetrios G. Eliades, Mohsen Aghashahi, Raanju Sundararajan, Mohsen Pourahmadi, M. Katherine Banks, et al. Battle of the attack detection algorithms: Disclosing cyber attacks on water distribution networks. Journal of Water Resources Planning and Management, 144(8), 2018.

## A Stability proof
The proof is structured in three parts. First, we prove that the graph-time convolutional filter is stable to perturbations. Then, we prove stability for the encoder and, finally, for the decoder. Throughout the proof we use the following lemmas.
Lemma 1. [27] Let $\mathbf{S} = \mathbf{V}\mathbf{\Lambda}\mathbf{V}^{\mathrm{H}}$ and $\mathbf{E} = \mathbf{U}\mathbf{M}\mathbf{U}^{\mathrm{H}}$ be such that $\| \mathbf{E} \| \leq \epsilon$. Let $\mathbf{E}_V = \mathbf{V}\mathbf{M}\mathbf{V}^{\mathrm{H}}$ be the projection of the perturbation $\mathbf{E}$ onto the eigenspace of $\mathbf{S}$, and write $\mathbf{E} = \mathbf{E}_V + \mathbf{E}_U$. For any eigenvector $\mathbf{v}_n$ of $\mathbf{S}$ it holds that

$$
\mathbf{E}\mathbf{v}_{n} = m_{n}\mathbf{v}_{n} + \mathbf{E}_{U}\mathbf{v}_{n} \tag{19}
$$

with $\| \mathbf{E}_U \| \leq \epsilon\delta$, where $\delta = \left( \| \mathbf{U} - \mathbf{V} \|^2 + 1 \right)^2 - 1$ and $m_n$ is the $n$-th eigenvalue of $\mathbf{M}$. Recall that $\| \cdot \|$ denotes the operator norm of a matrix.
Lemma 2. Given the frequency response of a graph-time convolutional filter $h\left( \lambda_{\diamond} \right) = \sum_{k=1}^{K} h_{k} \lambda_{\diamond}^{k}$, the partial derivative w.r.t. the graph frequency $\lambda$ is

$$
\frac{\partial h\left( \lambda_{\diamond} \right)}{\partial \lambda} = \left( s_{01} + s_{11}\lambda_{T} \right) \sum_{k=1}^{K} k h_{k} \lambda_{\diamond}^{k-1}. \tag{20}
$$

Proof. Using the product graph definition (1) we have

$$
\frac{\partial \lambda_{\diamond}}{\partial \lambda} = \frac{\partial \left( s_{00} + s_{01}\lambda + s_{10}\lambda_{T} + s_{11}\lambda_{T}\lambda \right)}{\partial \lambda} = s_{01} + s_{11}\lambda_{T}. \tag{21}
$$

Then,

$$
\frac{\partial h\left( \lambda_{\diamond} \right)}{\partial \lambda} = \frac{\partial h\left( \lambda_{\diamond} \right)}{\partial \lambda_{\diamond}} \times \frac{\partial \lambda_{\diamond}}{\partial \lambda} = \left( \sum_{k=1}^{K} k h_{k} \lambda_{\diamond}^{k-1} \right) \left( s_{01} + s_{11}\lambda_{T} \right) \tag{22}
$$

completes the proof.
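Lemma 2 can also be checked numerically by comparing the closed form in (20) against a central finite difference; the filter coefficients and coupling weights below are arbitrary illustrative values:

```python
def lam_prod(lam, lam_T, s):
    """Product-graph eigenvalue: s00 + s01*lam + s10*lam_T + s11*lam_T*lam."""
    return s[0] + s[1] * lam + s[2] * lam_T + s[3] * lam_T * lam

def h(lam_d, coeffs):
    """Frequency response h(lam_d) = sum_{k=1}^K h_k lam_d^k."""
    return sum(c * lam_d ** (k + 1) for k, c in enumerate(coeffs))

def dh_dlam(lam, lam_T, s, coeffs):
    """Closed form (20): (s01 + s11*lam_T) * sum_k k h_k lam_d^{k-1}."""
    lam_d = lam_prod(lam, lam_T, s)
    return (s[1] + s[3] * lam_T) * sum(
        (k + 1) * c * lam_d ** k for k, c in enumerate(coeffs))

s, coeffs = (0.3, 0.7, 0.2, 0.5), [1.0, -0.5, 0.25]  # arbitrary weights, K = 3
lam, lam_T, eps = 0.4, -0.6, 1e-6
fd = (h(lam_prod(lam + eps, lam_T, s), coeffs)
      - h(lam_prod(lam - eps, lam_T, s), coeffs)) / (2 * eps)
print(abs(fd - dh_dlam(lam, lam_T, s, coeffs)) < 1e-6)  # True
```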
To ease notation, let us also rearrange the parametric product graph GSO as

$$
\mathbf{S}_{\diamond} = \left( s_{00}\mathbf{I}_{T} + s_{10}\mathbf{S}_{T} \right) \otimes \mathbf{I}_{N} + \left( s_{01}\mathbf{I}_{T} + s_{11}\mathbf{S}_{T} \right) \otimes \mathbf{S} = \mathbf{S}_{T0} \otimes \mathbf{I}_{N} + \mathbf{S}_{T1} \otimes \mathbf{S} \tag{23}
$$

where ${\mathbf{S}}_{T0} = {s}_{00}{\mathbf{I}}_{T} + {s}_{10}{\mathbf{S}}_{T}$ collects the fully temporal edges and ${\mathbf{S}}_{T1} = {s}_{01}{\mathbf{I}}_{T} + {s}_{11}{\mathbf{S}}_{T}$ the edges ruled by the spatial graph.
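The rearranged GSO in (23) is straightforward to build with Kronecker products; the following sketch assumes dense matrices and illustrative coupling weights $(s_{00}, s_{01}, s_{10}, s_{11})$:

```python
import numpy as np

def product_gso(S, S_T, s):
    """Parametric product GSO of (23): S_T0 ⊗ I_N + S_T1 ⊗ S."""
    N, T = S.shape[0], S_T.shape[0]
    S_T0 = s[0] * np.eye(T) + s[2] * S_T  # purely temporal edges
    S_T1 = s[1] * np.eye(T) + s[3] * S_T  # edges ruled by the spatial graph
    return np.kron(S_T0, np.eye(N)) + np.kron(S_T1, S)

S = np.array([[0., 1.], [1., 0.]])                           # 2-node spatial graph
S_T = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # 3-step temporal path
# (s00, s01, s10, s11) = (0, 1, 1, 0) recovers the Cartesian product
Sd = product_gso(S, S_T, (0.0, 1.0, 1.0, 0.0))
print(Sd.shape)  # (6, 6)
```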
## GTConv filter stability.
The difference of the filter operating on the perturbed and nominal graph is

$$
\mathbf{H}\left( \widehat{\mathbf{S}}_{\diamond} \right) - \mathbf{H}\left( \mathbf{S}_{\diamond} \right) = \sum_{k=0}^{K} h_{k} \left( \widehat{\mathbf{S}}_{\diamond}^{k} - \mathbf{S}_{\diamond}^{k} \right). \tag{24}
$$

Leveraging the product GSO expansion (23) and the perturbation model $\widehat{\mathbf{S}} = \mathbf{S} + \left( \mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S} \right)$ [cf. (13)], we can write the $k$-th power of the perturbed product graph GSO as

$$
\begin{aligned}
\widehat{\mathbf{S}}_{\diamond}^{k} &= \left( \mathbf{S}_{T0} \otimes \mathbf{I}_{N} + \mathbf{S}_{T1} \otimes \left( \mathbf{S} + (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \right) \right)^{k} \\
&= \left( \mathbf{S}_{\diamond} + \mathbf{S}_{T1} \otimes (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \right)^{k} \\
&= \mathbf{S}_{\diamond}^{k} + \sum_{r=0}^{k-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{S}_{T1} \otimes (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \right) \mathbf{S}_{\diamond}^{k-r-1} + \mathbf{D},
\end{aligned} \tag{25}
$$

where we applied a first-order Taylor expansion in the third line. The matrix $\mathbf{D}$ collects all terms of order $\mathcal{O}\left( \epsilon^{2} \right)$ and can be ignored.
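A quick numerical sanity check of the first-order expansion in (25), using a generic small symmetric matrix in place of the product GSO and a perturbation of norm roughly $10^{-4}$:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((6, 6)); S = 0.1 * (S + S.T)   # stand-in for S_diamond
P = rng.standard_normal((6, 6)); P = 1e-4 * (P + P.T)  # perturbation, O(eps)
k = 4

exact = np.linalg.matrix_power(S + P, k)
first_order = np.linalg.matrix_power(S, k) + sum(
    np.linalg.matrix_power(S, r) @ P @ np.linalg.matrix_power(S, k - r - 1)
    for r in range(k))
# the residual D collects only O(eps^2) terms, hence it is tiny
print(np.linalg.norm(exact - first_order))
```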
Substituting (25) into (24), we get

$$
\mathbf{H}\left( \widehat{\mathbf{S}}_{\diamond} \right) - \mathbf{H}\left( \mathbf{S}_{\diamond} \right) = \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{S}_{T1} \otimes (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \right) \mathbf{S}_{\diamond}^{k-r-1}. \tag{26}
$$

Upon applying the filters to an input $\mathbf{x}_{\diamond}$ we get the output difference $\widehat{\mathbf{y}}_{\diamond} - \mathbf{y}_{\diamond} = \left( \mathbf{H}\left( \widehat{\mathbf{S}}_{\diamond} \right) - \mathbf{H}\left( \mathbf{S}_{\diamond} \right) \right) \mathbf{x}_{\diamond}$. Substituting into this the graph-time Fourier expansion of the input

$$
\mathbf{x}_{\diamond} = \sum_{t=1}^{T}\sum_{n=1}^{N} \widetilde{x}_{(n,t)} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right) \tag{27}
$$

with $\widetilde{x}_{(n,t)}$ the $(n,t)$-th Fourier coefficient and $\left( \mathbf{v}_{T,t}, \mathbf{v}_{n} \right)$ the eigenvector pair of the temporal and spatial GSOs [cf. Sec. 3.1], we can write the output difference as
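The expansion in (27) is the usual Kronecker-structured Fourier synthesis; a small numerical check with random symmetric GSOs (sizes chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 4, 3
S = rng.standard_normal((N, N)); S = S + S.T          # symmetric spatial GSO
S_T = rng.standard_normal((T, T)); S_T = S_T + S_T.T  # symmetric temporal GSO
_, V = np.linalg.eigh(S)
_, V_T = np.linalg.eigh(S_T)

x = rng.standard_normal(N * T)
x_tilde = np.kron(V_T, V).T @ x  # graph-time Fourier coefficients
# synthesis (27): x = sum_{t,n} x~(n,t) (v_{T,t} ⊗ v_n)
x_back = sum(x_tilde[t * N + n] * np.kron(V_T[:, t], V[:, n])
             for t in range(T) for n in range(N))
print(np.allclose(x, x_back))  # True
```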

$$
\widehat{\mathbf{y}}_{\diamond} - \mathbf{y}_{\diamond} = \sum_{t=1}^{T}\sum_{n=1}^{N} \widetilde{x}_{(n,t)} \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{S}_{T1} \otimes (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \right) \mathbf{S}_{\diamond}^{k-r-1} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right). \tag{28}
$$

Since $\left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right)$ is an eigenvector of $\mathbf{S}_{\diamond}$, we have

$$
\mathbf{S}_{\diamond}^{k-r-1} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right) = \lambda_{\diamond,(n,t)}^{k-r-1} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right) \tag{29}
$$

which, by substituting into (28), yields

$$
\widehat{\mathbf{y}}_{\diamond} - \mathbf{y}_{\diamond} = \sum_{t=1}^{T}\sum_{n=1}^{N} \widetilde{x}_{(n,t)} \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \lambda_{\diamond,(n,t)}^{k-r-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{S}_{T1} \otimes (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \right) \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right) \tag{30}
$$

where $\lambda_{\diamond,(n,t)}$ is the eigenvalue of the product graph GSO $\mathbf{S}_{\diamond}$ for indices $(n,t)$. Leveraging the mixed product property of the Kronecker product${}^{2}$ allows us to rewrite (30) as
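The mixed product property invoked here is easy to verify numerically on random matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
A, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
B, D = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
# mixed product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))  # True
```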

$$
\widehat{\mathbf{y}}_{\diamond} - \mathbf{y}_{\diamond} = \sum_{t=1}^{T}\sum_{n=1}^{N} \widetilde{x}_{(n,t)} \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \lambda_{\diamond,(n,t)}^{k-r-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{S}_{T1}\mathbf{v}_{T,t} \otimes (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \mathbf{v}_{n} \right). \tag{31}
$$

Replacing ${\mathbf{S}}_{T1} = {s}_{01}{\mathbf{I}}_{T} + {s}_{11}{\mathbf{S}}_{T}$ leads to

$$
\widehat{\mathbf{y}}_{\diamond} - \mathbf{y}_{\diamond} = \sum_{t=1}^{T}\sum_{n=1}^{N} \left( s_{01} + s_{11}\lambda_{T,t} \right) \widetilde{x}_{(n,t)} \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \lambda_{\diamond,(n,t)}^{k-r-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{v}_{T,t} \otimes (\mathbf{S}\mathbf{E} + \mathbf{E}\mathbf{S}) \mathbf{v}_{n} \right). \tag{32}
$$

Applying Lemma 1 results in

$$
\widehat{\mathbf{y}}_{\diamond} - \mathbf{y}_{\diamond} = \sum_{t=1}^{T}\sum_{n=1}^{N} \left( s_{01} + s_{11}\lambda_{T,t} \right) \widetilde{x}_{(n,t)} \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \lambda_{\diamond,(n,t)}^{k-r-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{v}_{T,t} \otimes \left( \mathbf{S} + \lambda_{n}\mathbf{I}_{N} \right) \big( \underbrace{m_{n}\mathbf{v}_{n}}_{\text{term 1}} + \underbrace{\mathbf{E}_{U}\mathbf{v}_{n}}_{\text{term 2}} \big) \right), \tag{33}
$$

which leaves us with two terms that shall be discussed separately.
For the first term, we have

$$
\mathbf{t}_{1} = \sum_{t=1}^{T}\sum_{n=1}^{N} 2\lambda_{n} m_{n} \left( s_{01} + s_{11}\lambda_{T,t} \right) \widetilde{x}_{(n,t)} \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \lambda_{\diamond,(n,t)}^{k-r-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right). \tag{34}
$$

By exploiting the eigenvector property $\mathbf{S}_{\diamond}^{r} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right) = \lambda_{\diamond,(n,t)}^{r} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right)$, we can rewrite (34) as

$$
\mathbf{t}_{1} = \sum_{t=1}^{T}\sum_{n=1}^{N} 2\lambda_{n} m_{n} \left( s_{01} + s_{11}\lambda_{T,t} \right) \widetilde{x}_{(n,t)} \sum_{k=0}^{K} k h_{k} \lambda_{\diamond,(n,t)}^{k-1} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right). \tag{35}
$$

Applying Lemma 2 leads to

$$
\mathbf{t}_{1} = \sum_{t=1}^{T}\sum_{n=1}^{N} 2 m_{n} \widetilde{x}_{(n,t)} \lambda_{n} \frac{\partial h\left( \lambda_{\diamond,(n,t)} \right)}{\partial \lambda_{n}} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right). \tag{36}
$$

For the second term, we have

$$
\mathbf{t}_{2} = \sum_{t=1}^{T}\sum_{n=1}^{N} \left( s_{01} + s_{11}\lambda_{T,t} \right) \widetilde{x}_{(n,t)} \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \lambda_{\diamond,(n,t)}^{k-r-1} \mathbf{S}_{\diamond}^{r} \left( \mathbf{v}_{T,t} \otimes \left( \mathbf{S} + \lambda_{n}\mathbf{I}_{N} \right) \mathbf{E}_{U}\mathbf{v}_{n} \right). \tag{37}
$$


---

${}^{2}$ Mixed product property of the Kronecker product: $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{A}\mathbf{C} \otimes \mathbf{B}\mathbf{D}$.

---

By substituting the eigendecomposition ${\mathbf{S}}_{\diamond }^{r} = \left( {{\mathbf{V}}_{T} \otimes \mathbf{V}}\right) {\mathbf{\Lambda }}_{\diamond }^{r}{\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}$ we get

$$
\mathbf{t}_{2} = \sum_{t=1}^{T}\sum_{n=1}^{N} \widetilde{x}_{(n,t)} \left( \mathbf{V}_{T} \otimes \mathbf{V} \right) \operatorname{diag}\left( \mathbf{g}_{(n,t)} \right) \left( \mathbf{V}_{T} \otimes \mathbf{V} \right)^{\mathrm{H}} \left( \mathbf{v}_{T,t} \otimes \mathbf{E}_{U}\mathbf{v}_{n} \right). \tag{38}
$$

where the entries of the vectors $\mathbf{g}_{(n,t)} \in \mathbb{R}^{NT}$ for $n \in [N]$ and $t \in [T]$ are defined as

$$
\begin{aligned}
g_{(n,t)}\left( n', t' \right) &= \left( s_{01} + s_{11}\lambda_{T,t} \right) \left( \lambda_{n} + \lambda_{n'} \right) \sum_{k=0}^{K} h_{k} \sum_{r=0}^{k-1} \lambda_{\diamond,(n,t)}^{k-r-1} \lambda_{\diamond,(n',t')}^{r} \\
&= \begin{cases} 2\lambda_{n} \dfrac{\partial h\left( \lambda_{\diamond,(n,t)} \right)}{\partial \lambda_{n}}, & (n,t) = (n',t') \\[2ex] \left( s_{01} + s_{11}\lambda_{T,t} \right) \left( h\left( \lambda_{\diamond,(n,t)} \right) - h\left( \lambda_{\diamond,(n',t')} \right) \right) \dfrac{\lambda_{n} + \lambda_{n'}}{\lambda_{n} - \lambda_{n'}}, & (n,t) \neq (n',t') \end{cases}
\end{aligned} \tag{39}
$$

With this in place, we now upper bound the two-norm of the difference $\widehat{\mathbf{y}}_{\diamond} - \mathbf{y}_{\diamond} = \mathbf{t}_{1} + \mathbf{t}_{2}$ by bounding each of the terms $\mathbf{t}_{1}$ and $\mathbf{t}_{2}$ separately. From $\| \mathbf{E} \| \leq \epsilon$, we have $\left| m_{n} \right| \leq \epsilon$; moreover, the integral Lipschitz property of the filter [cf. Def. 1] bounds the derivative term $\lambda_{n} \, \partial h\left( \lambda_{\diamond,(n,t)} \right) / \partial \lambda_{n}$ by $C$. Using these two facts in (36), we can upper bound the norm of the term $\mathbf{t}_{1}$ as

$$
\left\| \mathbf{t}_{1} \right\|_{2} \leq 2\epsilon C \Big\| \sum_{t=1}^{T}\sum_{n=1}^{N} \widetilde{x}_{(n,t)} \left( \mathbf{v}_{T,t} \otimes \mathbf{v}_{n} \right) \Big\|_{2} = 2\epsilon C \left\| \mathbf{x}_{\diamond} \right\|_{2}, \tag{40}
$$

where the last step holds due to the Fourier expansion of the input in (27).
|
| 556 |
+
|
| 557 |
+
Moving on to ${\mathbf{t}}_{2}$, we use the mixed-product property ${\mathbf{v}}_{T, t} \otimes {\mathbf{E}}_{U}{\mathbf{v}}_{n} = \left( {{\mathbf{I}}_{T} \otimes {\mathbf{E}}_{U}}\right) \left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right)$ and the operator norms in (38) to obtain the upper bound
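As a quick sanity check, the mixed-product identity above can be verified numerically; the sketch below uses random placeholders for the eigenvectors and the perturbation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 4, 5
v_t = rng.standard_normal(T)        # placeholder temporal eigenvector
v_n = rng.standard_normal(N)        # placeholder spatial eigenvector
E_U = rng.standard_normal((N, N))   # placeholder perturbation factor

# Mixed-product property: v_t ⊗ (E_U v_n) = (I_T ⊗ E_U)(v_t ⊗ v_n).
lhs = np.kron(v_t, E_U @ v_n)
rhs = np.kron(np.eye(T), E_U) @ np.kron(v_t, v_n)
assert np.allclose(lhs, rhs)
```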
$$
{\begin{Vmatrix}{\mathbf{t}}_{2}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}\left| {\widetilde{x}}_{\left( n, t\right) }\right| \begin{Vmatrix}\left( {{\mathbf{V}}_{T} \otimes \mathbf{V}}\right) \end{Vmatrix}\begin{Vmatrix}{\operatorname{diag}\left( {\mathbf{g}}_{\left( n, t\right) }\right) }\end{Vmatrix}\begin{Vmatrix}{\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{I}}_{T} \otimes {\mathbf{E}}_{U}}\end{Vmatrix}{\begin{Vmatrix}{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}\end{Vmatrix}}_{2}. \tag{41}
$$
From the integral Lipschitz property, we can bound $\begin{Vmatrix}{\operatorname{diag}\left( {\mathbf{g}}_{\left( n, t\right) }\right) }\end{Vmatrix} \leq {2C}\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\max }}\right)$ in (39), where ${\lambda }_{T,\max }$ is the temporal eigenvalue with the largest absolute value. As ${\mathbf{V}}_{T} \otimes \mathbf{V}$ forms an orthonormal basis, its operator norm is $\begin{Vmatrix}{{\mathbf{V}}_{T} \otimes \mathbf{V}}\end{Vmatrix} = 1$, and the ${l}_{2}$-norm of the eigenvectors is ${\begin{Vmatrix}{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}\end{Vmatrix}}_{2} = 1$. Lemma 1 states that $\parallel \mathbf{E}\parallel \leq {\epsilon \delta }$, which leads to $\begin{Vmatrix}{{\mathbf{I}}_{T} \otimes {\mathbf{E}}_{U}}\end{Vmatrix} \leq {\epsilon \delta }$. Finally, the ${l}_{1}$-norm can be bounded by $\mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}\left| {\widetilde{x}}_{\left( n, t\right) }\right| = \parallel \widetilde{\mathbf{x}}{\parallel }_{1} \leq \sqrt{NT}\parallel \widetilde{\mathbf{x}}{\parallel }_{2} = \sqrt{NT}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2}$. Replacing all the abovementioned bounds in (41) yields
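These operator-norm facts are standard Kronecker-product properties and can be checked numerically; the random orthonormal factors below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 4, 5
V_T = np.linalg.qr(rng.standard_normal((T, T)))[0]  # orthonormal basis
V = np.linalg.qr(rng.standard_normal((N, N)))[0]    # orthonormal basis
E = rng.standard_normal((N, N))                     # arbitrary perturbation

# ||V_T ⊗ V|| = 1: the Kronecker product of orthonormal bases is orthonormal.
assert np.isclose(np.linalg.norm(np.kron(V_T, V), 2), 1.0)
# Kronecker products of unit-norm eigenvectors keep unit l2-norm.
assert np.isclose(np.linalg.norm(np.kron(V_T[:, 0], V[:, 0])), 1.0)
# ||I_T ⊗ E|| = ||E||, so the perturbation bound carries over.
assert np.isclose(np.linalg.norm(np.kron(np.eye(T), E), 2),
                  np.linalg.norm(E, 2))
```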
$$
{\begin{Vmatrix}{\mathbf{t}}_{2}\end{Vmatrix}}_{2} \leq 2\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\max }}\right) {\epsilon C\delta }\sqrt{NT}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2}. \tag{42}
$$
Finally, based on the triangle inequality, the GTConv filter difference is
$$
\begin{Vmatrix}{\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) - \mathbf{H}\left( {\widehat{\mathbf{S}}}_{\diamond }\right) }\end{Vmatrix} \leq 2\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\max }}\right) {\epsilon C}\left( {1 + \delta \sqrt{NT}}\right) = {\epsilon \Delta }. \tag{43}
$$
## Encoder stability.
Consider an encoder with ${L}_{e}$ layers, where layer $\ell$ has ${F}_{\ell }$ features and sampling rate $r$. We are interested in the output difference of the encoder
$$
{\begin{Vmatrix}\operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{2}^{2} = \mathop{\sum }\limits_{{f = 1}}^{{F}_{{L}_{e}}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2}^{2}. \tag{44}
$$
To ease exposition, we denote $\mathbf{H} \mathrel{\text{:=}} \mathbf{H}\left( \mathbf{S}\right)$ and $\widehat{\mathbf{H}} \mathrel{\text{:=}} \mathbf{H}\left( \widehat{\mathbf{S}}\right)$ . For the $f$ -th output encoder feature we have
$$
{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2} = {\begin{Vmatrix}\sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{S}_{r}\left( {\mathbf{H}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}\right) \right) - \sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{S}_{r}\left( {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\right) \right) \end{Vmatrix}}_{2} \tag{45}
$$
where ${S}_{r}\left( \cdot \right)$ is the sampling operator with rate $r$, i.e., a simple selection of samples without any aggregation. The downsampling reduces the norm of each time series by a factor $1/\sqrt{r}$, so ${\begin{Vmatrix}{\mathbf{y}}_{\diamond ,{L}_{e}}\end{Vmatrix}}_{2}$ is reduced by $1/\sqrt{r}$. As the nonlinearity is 1-Lipschitz, i.e., $\left| {\sigma \left( a\right) - \sigma \left( b\right) }\right| \leq \left| {a - b}\right|$, the triangle inequality applied to (45) gives
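For intuition, the $1/\sqrt{r}$ norm reduction is exact for a flat signal (and holds in expectation for generic ones); a minimal sketch, assuming ${S}_{r}$ keeps every $r$-th sample:

```python
import numpy as np

def S_r(x, r):
    """Temporal sampling without aggregation: keep every r-th sample."""
    return x[::r]

r, T = 2, 8
x = np.ones(T)   # flat toy time series on a single node
# For this signal the 1/sqrt(r) reduction of the l2-norm is exact.
assert np.isclose(np.linalg.norm(S_r(x, r)),
                  np.linalg.norm(x) / np.sqrt(r))
```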
$$
{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{\begin{Vmatrix}{\mathbf{H}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2}. \tag{46}
$$
We add and subtract ${\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}$ inside the ${l}_{2}$-norm and use the triangle inequality once again for each of the input features $g$ to get
$$
{\begin{Vmatrix}{\mathbf{H}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2} \leq {\begin{Vmatrix}\left( {{\mathbf{H}}_{{L}_{e}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}}\right) {\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}\left( {{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}}\right) \end{Vmatrix}}_{2} \leq \begin{Vmatrix}{{\mathbf{H}}_{{L}_{e}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}}\end{Vmatrix}{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2} + \begin{Vmatrix}{\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}\end{Vmatrix}{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2}. \tag{47}
$$
The stability of the GTConv filter in (43) provides an upper bound for the first term as $\begin{Vmatrix}{{\mathbf{H}}_{{L}_{e}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}}\end{Vmatrix} \leq {\epsilon \Delta }$, which applies to all the layers. Note that $\Delta$ depends on the temporal graph size, so it differs per layer due to the downsampling. However, we assume the largest temporal size $T$ so that the inequality holds for all the layers ${}^{3}$. The second term is bounded by the spectral normalization assumption $\begin{Vmatrix}{\mathbf{H}}_{{L}_{e}}^{fg}\end{Vmatrix} \leq 1$ [cf. Def. 2]. Leveraging these bounds in (46) we get
$$
{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2}. \tag{48}
$$
This equation defines a recursion among the encoder layers with initial condition ${\mathbf{x}}_{\diamond ,0}^{g} = {\widehat{\mathbf{x}}}_{\diamond ,0}^{g} \mathrel{\text{:=}} {\mathbf{x}}_{\diamond }^{g}$ for all the input features. So for the $\ell$ -th layer, we can write
$$
{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f} - {\widehat{\mathbf{x}}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2}. \tag{49}
$$
To solve this recursive inequality, we first upper bound ${\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2}$ as
$$
{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}{\mathbf{x}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2}, \tag{50}
$$
where the last inequality is due to the assumption $\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}\end{Vmatrix} \leq 1$ [Def. 2]. Solving this recursion leads to
$$
{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq {r}^{-\ell /2}\mathop{\prod }\limits_{{i = 1}}^{{\ell - 1}}{F}_{i}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{51}
$$
Replacing (51) in (49) and solving the recursion with the initial conditions, we get
$$
{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f} - {\widehat{\mathbf{x}}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq {r}^{-\ell /2}{\epsilon \Delta }\ell \mathop{\prod }\limits_{{i = 1}}^{{\ell - 1}}{F}_{i}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{52}
$$
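The closed form (52) can be sanity-checked numerically against the recursions (49)-(51); the toy sketch below assumes a constant width $F$ per hidden layer and folds the $F_0$ input features into the constant $a_0 = \sum_g \|\mathbf{x}_\diamond^g\|_2$.

```python
import numpy as np

F, r, eps_delta, a0 = 3, 2, 0.1, 1.0   # toy values
b, u = a0, 0.0   # b tracks the norm bound (51), u the difference bound (49)
for ell in range(1, 6):
    F_prev = 1 if ell == 1 else F      # F_0 is folded into a0
    u = (F_prev / np.sqrt(r)) * (eps_delta * b + u)
    b = (F_prev / np.sqrt(r)) * b
    closed = r ** (-ell / 2) * eps_delta * ell * F ** (ell - 1) * a0
    assert np.isclose(u, closed)       # matches (52) at every layer
```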
Setting $\ell = {L}_{e}$ in (52) and replacing it in (44) yields
$$
{\begin{Vmatrix}\operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{F} \leq {L}_{e}{r}^{-{L}_{e}/2}{\epsilon \Delta }\sqrt{{F}_{{L}_{e}}}\mathop{\prod }\limits_{{n = 1}}^{{{L}_{e} - 1}}{F}_{n}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{53}
$$
## GTConv-AE stability.
Let ${\mathbf{Z}}_{\diamond } = \operatorname{ENC}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}}\right)$ be the input of the decoder and ${\mathbf{z}}_{\diamond ,{L}_{d}} = \operatorname{DEC}\left( {{\mathbf{Z}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}}\right)$ its output. To prove GTConvAE stability, we need to bound
$$
{\begin{Vmatrix}\mathrm{{DEC}}\left( {\mathbf{Z}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \mathrm{{DEC}}\left( {\mathbf{Z}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{2}^{2} = \mathop{\sum }\limits_{{f = 1}}^{{F}_{d,{L}_{d}}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2}^{2}. \tag{54}
$$
For each feature in the output we have
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} = {\begin{Vmatrix}\sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{U}_{r}\left( {\mathbf{H}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}\right) \right) - \sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{U}_{r}\left( {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\right) \right) \end{Vmatrix}}_{2} \tag{55}
$$
---
${}^{3}$ It is possible to solve the recursive equation with ${\Delta }_{T}$ as a variable, but it leads to cluttered multipliers in the inequalities without adding information about the bound.
---
where ${U}_{r}\left( \cdot \right)$ is an upsampling operator with rate $r$ that inserts zeros between the samples. The upsampling module leaves the ${l}_{2}$-norm of each time series unaffected and can be ignored. Given the 1-Lipschitz continuity of the activation function $\sigma \left( \cdot \right)$, the following inequality follows from (55) by the triangle inequality
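A minimal zero-insertion upsampler (an assumed implementation of ${U}_{r}$) confirms the norm-preservation claim:

```python
import numpy as np

def U_r(z, r):
    """Upsampling at rate r: each sample is followed by r - 1 zeros."""
    out = np.zeros(len(z) * r)
    out[::r] = z
    return out

z = np.array([1.0, -2.0, 3.0])
up = U_r(z, 3)
# Zero insertion leaves the l2-norm of the time series unchanged.
assert np.isclose(np.linalg.norm(up), np.linalg.norm(z))
assert len(up) == 9
```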
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{\begin{Vmatrix}{\mathbf{H}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2}. \tag{56}
$$
Adding and subtracting ${\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}$ in the norm and leveraging the triangle inequality once again yields
$$
{\begin{Vmatrix}{\mathbf{H}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2} \leq {\begin{Vmatrix}\left( {{\mathbf{H}}_{{L}_{d}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}}\right) {\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}\left( {{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}}\right) \end{Vmatrix}}_{2} \leq \begin{Vmatrix}{{\mathbf{H}}_{{L}_{d}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}}\end{Vmatrix}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2} + \begin{Vmatrix}{\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}\end{Vmatrix}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2}, \tag{57}
$$
for $g = 1,\ldots ,{F}_{d,{L}_{d} - 1}$. The first term is bounded by the GTConv filter stability in (43), and the second term is upper-bounded because the filters are normalized, $\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}\end{Vmatrix} \leq 1$ [cf. Def. 2]. Given these two bounds, (56) can be upper-bounded as
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2}. \tag{58}
$$
This allows defining a recursion for the generic layer $\ell$ as
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell }^{f} - {\widehat{\mathbf{z}}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,\ell - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2}. \tag{59}
$$
For the first term on the right hand-side of (59), we have
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,\ell - 1}}{\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}{\mathbf{z}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,\ell - 1}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} \leq \mathop{\prod }\limits_{{j = 1}}^{{\ell - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2} \tag{60}
$$
because $\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}\end{Vmatrix} \leq 1$ [cf. Def. 2]. Replacing (60) into (59) and evaluating it at $\ell = {L}_{d}$ brings the recursion to its initial conditions
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq {\epsilon \Delta }{L}_{d}\mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2} + \mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2}. \tag{61}
$$
For the initial conditions we have ${\mathbf{Z}}_{\diamond ,0} = {\mathbf{Z}}_{\diamond }$; however, the error caused by the spatial graph perturbation in the encoder appears here as an initial condition, where ${\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,0}^{f}\end{Vmatrix}}_{2}$ is bounded by the result in (53) for $f \in \left\lbrack {F}_{d,0}\right\rbrack$.
As the initial condition of the decoder states ${\mathbf{Z}}_{\diamond ,0} = {\mathbf{Z}}_{\diamond } = {\mathbf{X}}_{\diamond ,{L}_{e}}$, we can set $\ell = {L}_{e}$ in (51) to obtain
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond }^{f}\end{Vmatrix}}_{2} \leq {r}^{-{L}_{e}/2}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{62}
$$
Substituting the encoder stability bound (53), which enforces the initial condition on $\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2}$, and (62) into (61) results in
$$
{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq {L}_{d}{r}^{-{L}_{e}/2}{\epsilon \Delta }{F}_{d,0}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2} + {L}_{e}{r}^{-{L}_{e}/2}{\epsilon \Delta }{F}_{d,0}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{63}
$$
Summing over all the output features completes the upper bound as
$$
{\begin{Vmatrix}\operatorname{GTConvAE}\left( {\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \operatorname{GTConvAE}\left( {\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{2} \leq \left( {{L}_{d} + {L}_{e}}\right) {r}^{-{L}_{e}/2}{\epsilon \Delta }\sqrt{{F}_{d,{L}_{d}}}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\prod }\limits_{{j = 0}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{64}
$$
Assuming ${F}_{0} = {F}_{d,{L}_{d}} = 1$ and $\max \left\{ {{F}_{d}, F}\right\} \leq {F}_{\max }$ completes the proof.
## B Denoising solar irradiance time series
In this appendix we provide extra information on the numerical experiments for denoising solar irradiance time series.
SNR: An error vector ${\mathbf{e}}_{t} \sim \mathcal{N}\left( {0,{\mathbf{L}}^{ \dagger }}\right)$ is generated independently for each timestamp $t \in \left\lbrack T\right\rbrack$, where $\mathbf{L}$ is the normalized Laplacian and $\dagger$ denotes the pseudo-inverse. This noise varies smoothly over the spatial graph, which makes it more difficult to detect. Given the noise matrix $\sigma \mathbf{E} = \sigma \left\lbrack {{\mathbf{e}}_{1},\ldots ,{\mathbf{e}}_{T}}\right\rbrack \in {\mathbb{R}}^{N \times T}$, we define the SNR as follows:
$$
{SNR} = {20}\log \frac{\parallel \mathbf{X}{\parallel }_{F}}{\sigma \parallel \mathbf{E}{\parallel }_{F}}, \tag{65}
$$
where $\sigma$ is used to control the SNR in the experiments.
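The SNR computation in (65) can be sketched as follows; random placeholders stand in for the data and for the Laplacian-smooth noise, and a base-10 logarithm (dB scale) is assumed:

```python
import numpy as np

def snr_db(X, E, sigma):
    """SNR of (65); a base-10 log is assumed for a dB reading."""
    return 20 * np.log10(np.linalg.norm(X, 'fro')
                         / (sigma * np.linalg.norm(E, 'fro')))

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 10))   # placeholder clean time series
E = rng.standard_normal((6, 10))   # placeholder noise realizations

# Scaling sigma by 10 lowers the SNR by exactly 20 dB.
assert np.isclose(snr_db(X, E, 1.0) - snr_db(X, E, 10.0), 20.0)
```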
Model parameters: The time window is searched over $T \in \{ 2,\ldots ,8\}$. The number of layers for both the encoder and the decoder is selected from ${L}_{e} = {L}_{d} \in \{ 2,3\}$. The number of features per layer is chosen from $F \in \{ {32},{16},8,4,2\}$. The filter order is evaluated on $K \in \{ 2,3,4,5\}$. The sampling rate is searched over $r \in \{ 1,2,3,4\}$. All the aggregation functions have been tested. Finally, the regularizer weight is initially selected from the logarithmic interval $\rho \in \left\lbrack {{10}^{-2},{10}^{2}}\right\rbrack$ and fine-tuned around the optimal value.
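The search described above amounts to a plain grid over the listed ranges; a hypothetical sketch of enumerating it (the names and the coarse $\rho$ discretization are illustrative):

```python
from itertools import product

grid = {
    "T": range(2, 9),                     # time window
    "L": (2, 3),                          # L_e = L_d
    "F": (32, 16, 8, 4, 2),               # features per layer
    "K": (2, 3, 4, 5),                    # filter order
    "r": (1, 2, 3, 4),                    # sampling rate
    "rho": (1e-2, 1e-1, 1e0, 1e1, 1e2),   # coarse logarithmic grid
}
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
assert len(configs) == 7 * 2 * 5 * 4 * 4 * 5   # 5600 candidate models
```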
## C Anomaly detection in water networks
In this appendix we provide extra information on numerical experiments for anomaly detection in water networks.
Model parameters: The model parameters are evaluated and fine-tuned by sliding-window back-testing. The time window is searched over $T \in \{ 2,\ldots ,8\}$. The number of layers for both the encoder and the decoder is selected from ${L}_{e} = {L}_{d} \in \{ 2,3\}$. The number of features per layer is chosen from $F \in \{ {128},{64},{32},{16},8,4,2\}$. The filter order is evaluated on $K \in \{ 2,3,4,5\}$. The sampling rate is searched over $r \in \{ 1,2,3\}$. All the aggregation functions have been tested. Finally, the regularizer weight is initially selected from the logarithmic interval $\rho \in \left\lbrack {{10}^{-2},{10}^{2}}\right\rbrack$ and fine-tuned around the optimal value.
papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_tex/Initial_manuscript.tex
§ GRAPH-TIME CONVOLUTIONAL AUTOENCODERS
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
We introduce the graph-time convolutional autoencoder (GTConvAE), a novel spatiotemporal architecture tailored to unsupervised learning for multivariate time series on networks. The GTConvAE leverages product graphs to represent the time series and a principled joint spatiotemporal convolution over this product graph. Instead of fixing the product graph at the outset, we make it parametric to attend to the spatiotemporal coupling for the task at hand. On top of this, we propose temporal downsampling in the encoder to improve the receptive field in a spatiotemporal manner without affecting the network structure. In the decoder, we consider the opposite upsampling operator. We prove that GTConvAEs with graph integral Lipschitz filters are stable to relative network perturbations, ultimately showing the role of the different components in the encoder and decoder. Numerical experiments for denoising and anomaly detection in solar and water networks corroborate our findings and showcase the effectiveness of the GTConvAE compared with state-of-the-art alternatives.
§ 1 INTRODUCTION
Learning unsupervised representations from spatiotemporal network data is commonly encountered in applications concerning multivariate data denoising [1], anomaly detection [2], missing data imputation [3], and forecasting [4], to name just a few. The challenge is to develop models that jointly capture the spatiotemporal dependencies in a computation- and data-efficient manner while remaining tractable enough to understand the role played by the network structure and the dynamics over it. The autoencoder family of functions is of interest in this setting, but vanilla spatiotemporal forms [5-7] that ignore the network structure suffer the well-known curse of dimensionality and lack inductive learning capabilities [8].
Upon leveraging the network as an inductive bias [9], graph-time autoencoders have been recently developed. These approaches are typically composed of two interleaving modules: one capturing the spatial dependencies via graph neural networks (GNNs) [10] and one capturing the temporal dependencies via temporal CNNs or LSTM networks. For example, the work in [1] uses an edge-varying GNN [11] followed by a temporal convolution for motion denoising. The work in [12] considers LSTMs and graph convolutions for variational spatiotemporal autoencoders, which have been further investigated in [3, 13], respectively, for spatiotemporal data imputation as a graph-based matrix completion problem and for dynamic topologies. Graph-time autoencoders over dynamic topologies have also been investigated in [14, 15]. Lastly, [4] embeds the temporal information into the edges of a graph and develops an autoencoder over this graph for forecasting purposes.
By working disjointly first on the graph and then on the temporal dimension of the graph embeddings, these approaches fail to capture the joint spatiotemporal dependencies present in the raw data. It is also challenging to analyze their theoretical properties and to attribute to what extent the benefit comes from one module over the other. This aspect has been investigated for supervised spatiotemporal learning via GNNs [16-21] but not for autoencoders. The two works elaborating on this are [2] and [22]. The work in [2] replicates the graph over time via the Cartesian product principle [23] and uses an order-one graph convolution [24] to learn spatiotemporal embeddings that are fed into an LSTM module to improve the temporal memory, ultimately giving more importance to the temporal dimension of the latent representation. Differently, [25] proposed a variational graph-time autoencoder whose encoder is based on [17] and whose decoder is a multi-layer perceptron; hence, it is suitable only for topological tasks such as dynamic link prediction but not for tasks concerning time series over networks such as denoising or anomaly detection.
In this paper, we propose a GTConvAE that, differently from [2], jointly captures the spatiotemporal coupling both in the raw data and in the intermediate higher-level representations. The GTConvAE operates over a parametric product graph [26] to attend to the spatiotemporal coupling for the task at hand rather than fixing it at the outset. Differently from [17], the GTConvAE has a symmetric structure with graph-time convolutions in both the encoder and the decoder, making it suitable for tasks concerning network time series. We also study the capability of the GTConvAE to transfer learning across different networks, which is important as practical topologies differ from the models used during training (e.g., because of model uncertainty, perturbations, or dynamics). The latter has been studied for traditional [27-29] and graph-time GNN models [20, 26, 30] but not for graph-time autoencoders.
Our contribution in this paper is twofold. First, we propose a symmetric graph-time convolutional autoencoder that jointly captures the spatiotemporal coupling in the data, suited for tasks concerning multivariate time series over networks. The GTConvAE represents the time series as a graph signal over product graphs and uses the latter as an inductive bias to learn unsupervised representations. The product graph is parametric to attend to the coupling of the specific task, and it generalizes the popular choices of product graphs [31]. We also propose a temporal downsampling/upsampling in the encoder/decoder to increase the spatiotemporal receptive field without affecting the network structure; hence, preserving the inductive bias. Second, we prove that the GTConvAE is stable to relative perturbations on the spatial graph, highlighting the role played by the encoder, decoder, parametric product graph, convolutional filters, and downsampling/upsampling rate. Numerical experiments on denoising and anomaly detection over solar and water networks corroborate our findings and show a competitive performance compared with more involved state-of-the-art alternatives.
The rest of this paper is organized as follows. Section 2 formulates the GTConvAE model and Section 3 analyzes its theoretical properties. Numerical experiments are presented in Section 4 and conclusions in Section 5. The proofs are collected in the appendix.
§ 2 GRAPH-TIME CONVOLUTIONAL AUTOENCODERS
The GTConvAE learns representations from $N$-dimensional multivariate time series ${\mathbf{x}}_{t} \in {\mathbb{R}}^{N}$, $t = 1,\ldots ,T$, collected in the matrix $\mathbf{X} \in {\mathbb{R}}^{N \times T}$. These time series have a spatial network structure represented by a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ composed of $N$ nodes $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ and $M$ edges. The $n$-th row of $\mathbf{X}$ contains the time series ${\mathbf{x}}^{n} = {\left\lbrack {x}_{1}\left( n\right) ,\ldots ,{x}_{T}\left( n\right) \right\rbrack }^{\top }$ on node ${v}_{n}$, and the $t$-th column a graph signal ${\mathbf{x}}_{t} = {\left\lbrack {x}_{t}\left( 1\right) ,\ldots ,{x}_{t}\left( N\right) \right\rbrack }^{\top }$ at timestamp $t$ [32, 33]. For example, the time series could be nodal pressures measured over junction nodes in a water distribution network, while the pipe connections rule the spatial structure. The representations learned from the tuple $\{ \mathcal{G},\mathbf{X}\}$ can then be used, among others, for anomaly detection [5], denoising dynamic data over graphs [1], and missing data completion [3].
|
| 30 |
+
|
| 31 |
+
The GTConvAE follows the standard encoder-decoder structure [34] but, in each module, jointly captures the spatiotemporal structure in the data. We denote the GTConvAE as
$$
\widehat{\mathbf{X}} = \operatorname{GTConvAE}\left( {\mathbf{X},\mathcal{G};\mathcal{H}}\right) \mathrel{\text{ := }} \operatorname{DEC}\left( {\operatorname{ENC}\left( {\mathbf{X},\mathcal{G};{\mathcal{H}}_{e}}\right) ,\mathcal{G};{\mathcal{H}}_{d}}\right)
$$
where the encoder $\operatorname{ENC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{e}}\right)$ and decoder $\operatorname{DEC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{d}}\right)$ are non-linear parametric functions and the set $\mathcal{H} = {\mathcal{H}}_{e} \cup {\mathcal{H}}_{d}$ collects all parameters. The encoder takes as input the graph $\mathcal{G}$ and the time series $\mathbf{X}$ and produces higher-level representations $\mathbf{Z} \in {\mathbb{R}}^{N \times {T}_{e}}$ . These representations are built in a layered manner where each layer comprises: i) a joint graph-time convolutional filter to capture the spatiotemporal dependencies in a principled manner; ii) a temporal downsampling module to increase the receptive field without affecting the network structure; and iii) a pointwise nonlinearity to build more complex representations. The decoder has a structure mirrored w.r.t. the encoder, taking $\mathbf{Z}$ as input and outputting an estimate $\widehat{\mathbf{X}}$ of the input. The model parameters are estimated end-to-end by minimizing a spatiotemporally regularized reconstruction loss $\mathcal{L}\left( {\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}}\right)$ .
§ 2.1 PRODUCT GRAPH REPRESENTATION OF NETWORK TIME SERIES
The GTConvAE uses product graphs to represent the spatiotemporal dependencies in $\mathbf{X}$ [23]. Product graphs have proven successful for processing multivariate time series, such as imputing missing values [35, 36], denoising [37], and providing a spatiotemporal Fourier analysis [33], as well as for building vector autoregressive models [38], spatiotemporal scattering transforms [39], and graph-time neural networks [26]. Specifically, denote by $\mathbf{S} \in {\mathbb{R}}^{N \times N}$ the graph shift operator (GSO) of the spatial graph $\mathcal{G}$ , e.g., the adjacency or Laplacian matrix. Consider also a temporal graph ${\mathcal{G}}_{T} = \left( {{\mathcal{V}}_{T},{\mathcal{E}}_{T},{\mathbf{S}}_{T}}\right)$ , where the node set ${\mathcal{V}}_{T} = \{ 1,\ldots ,T\}$ comprises the discrete time instants, the edge set ${\mathcal{E}}_{T} \subseteq {\mathcal{V}}_{T} \times {\mathcal{V}}_{T}$ captures the temporal dependencies, e.g., a directed line or a cyclic graph, and ${\mathbf{S}}_{T} \in {\mathbb{R}}^{T \times T}$ is the respective GSO [40, 41]. The time series ${\mathbf{x}}^{n}$ can now be seen as a graph signal over the temporal graph ${\mathcal{G}}_{T}$ , where ${x}_{t}\left( n\right)$ is a scalar value assigned to the $t$ -th node of ${\mathcal{G}}_{T}$ .
The product graph representing the spatiotemporal patterns in $\mathbf{X}$ is denoted by ${\mathcal{G}}_{\diamond } = {\mathcal{G}}_{T}\diamond \mathcal{G} =$ $\left( {{\mathcal{V}}_{\diamond },{\mathcal{E}}_{\diamond },{\mathbf{S}}_{\diamond }}\right)$ . The node set ${\mathcal{V}}_{\diamond }$ is the Cartesian product between ${\mathcal{V}}_{T}$ and $\mathcal{V}$ which leads to ${NT}$ distinct spatiotemporal nodes ${i}_{\diamond } = \left( {n,t}\right)$ . The edge set ${\mathcal{E}}_{\diamond }$ connects these nodes and the GSO ${\mathbf{S}}_{\diamond } \in {\mathbb{R}}^{{NT} \times {NT}}$ is dictated by the product graph. Fixing the product graph implies fixing the spatiotemporal dependencies in the data, which may lead to wrong inductive biases. To avoid this and improve flexibility, we consider a parametric product graph whose GSO is of the form
$$
{\mathbf{S}}_{\diamond } = \mathop{\sum }\limits_{{i = 0}}^{1}\mathop{\sum }\limits_{{j = 0}}^{1}{s}_{ij}\left( {{\mathbf{S}}_{T}^{i} \otimes {\mathbf{S}}^{j}}\right) = \underset{\text{ self-loops }}{\underbrace{{s}_{00}{\mathbf{I}}_{T} \otimes {\mathbf{I}}_{N}}} + \underset{\text{ Cartesian }}{\underbrace{{s}_{01}{\mathbf{I}}_{T} \otimes \mathbf{S} + {s}_{10}{\mathbf{S}}_{T} \otimes {\mathbf{I}}_{N}}} + \underset{\text{ Kronecker }}{\underbrace{{s}_{11}{\mathbf{S}}_{T} \otimes \mathbf{S}}}, \tag{1}
$$
where the scalar parameters $\left\{ {s}_{ij}\right\}$ weight the spatiotemporal connections and encompass the typical product graph choices such as the Kronecker, the Cartesian, and the strong product. By column-vectorizing $\mathbf{X}$ into ${\mathbf{x}}_{\diamond } = \operatorname{vec}\left( \mathbf{X}\right) \in {\mathbb{R}}^{NT}$ , we obtain a product graph signal assigning a real value to each space-time node ${i}_{\diamond }$ . That is, the dynamic data ${\mathbf{x}}_{t}$ over $\mathcal{G}$ becomes a static signal ${\mathbf{x}}_{\diamond }$ over the product graph ${\mathcal{G}}_{\diamond }$ .
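As an illustration, the parametric GSO in (1) can be assembled with a few Kronecker products. The following NumPy sketch (the function name and the toy graphs are our own, not from the paper) builds it for a 3-node spatial path graph and a 2-step directed temporal line:

```python
import numpy as np

def product_gso(S_T, S, s):
    """Parametric product-graph shift operator of Eq. (1).

    s = [s00, s01, s10, s11] weights the self-loop, the two Cartesian
    terms, and the Kronecker term; S_T is T x T and S is N x N.
    """
    T, N = S_T.shape[0], S.shape[0]
    I_T, I_N = np.eye(T), np.eye(N)
    return (s[0] * np.kron(I_T, I_N)    # self-loops
            + s[1] * np.kron(I_T, S)    # spatial edges at each time step
            + s[2] * np.kron(S_T, I_N)  # temporal edges at each node
            + s[3] * np.kron(S_T, S))   # diagonal space-time edges

# toy example: 3-node path graph in space, directed 2-step line in time
S = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
S_T = np.array([[0., 0.],
                [1., 0.]])
S_prod = product_gso(S_T, S, [0.0, 1.0, 1.0, 0.0])  # Cartesian product
```

Setting the weights to $[0,1,1,0]$ recovers the Cartesian product, $[0,0,0,1]$ the Kronecker product, and $[0,1,1,1]$ the strong product.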
§ 2.2 ENCODER
The encoder is an ${L}_{e}$ -layered architecture in which each layer comprises a bank of product graph convolutional filters, temporal downsampling, and pointwise nonlinearities.
GTConv filter captures the spatiotemporal patterns in the data matrix X. Given the parametric product graph representation ${\mathcal{G}}_{\diamond } = \left( {{\mathcal{V}}_{\diamond },{\mathcal{E}}_{\diamond },{\mathbf{S}}_{\diamond }}\right) \left\lbrack \text{ cf. (1) }\right\rbrack$ and the product graph signal ${\mathbf{x}}_{\diamond } = \operatorname{vec}\left( \mathbf{X}\right)$ as input, the output of a graph-time convolutional filter of order $K$ is
$$
{\mathbf{y}}_{\diamond } = \mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{x}}_{\diamond } = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\mathbf{S}}_{\diamond }^{k}{\mathbf{x}}_{\diamond } \tag{2}
$$
where $\mathbf{h} = {\left\lbrack {h}_{0},\ldots ,{h}_{K}\right\rbrack }^{\top }$ collects the filter parameters and $\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) \mathrel{\text{ := }} \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\mathbf{S}}_{\diamond }^{k}$ is the filtering matrix. The filter in (2) is called convolutional because the output ${\mathbf{y}}_{\diamond }$ is a weighted linear combination of graph signals shifted up to $K$ times over the product graph [42]. Hence, the filter is spatiotemporally local in a neighborhood of radius $K$ . The filter locality depends not only on the order $K$ but also on the type of product graph. For example, for a fixed $K$ , the strong product is less localized than the Cartesian product and can thus be considered to have a longer spatiotemporal memory [26]. Consequently, learning the parameters $\left\{ {s}_{ij}\right\}$ in (1) implies learning the multi-hop resolution radius.
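The shifted structure of (2) also suggests the usual implementation: accumulate $K$ repeated shifts rather than forming matrix powers. A minimal sketch (our own naming; dense matrices for brevity, whereas a sparse product graph gives the complexity discussed later):

```python
import numpy as np

def gtconv(S_prod, x, h):
    """Graph-time convolution of Eq. (2): y = sum_k h_k S_prod^k x,
    computed with repeated shifts instead of explicit matrix powers,
    so the cost is O((K+1) nnz(S_prod)) for a sparse product graph."""
    y = h[0] * x
    x_shift = x
    for h_k in h[1:]:
        x_shift = S_prod @ x_shift  # one more hop over the product graph
        y = y + h_k * x_shift
    return y

# toy 2-node product graph and an impulse on the first node
S_prod = np.array([[0., 1.],
                   [1., 0.]])
x = np.array([1., 0.])
y = gtconv(S_prod, x, h=[0.5, 0.25, 0.125])
```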
In the $\ell$ -th layer, the encoder has ${F}_{\ell - 1}$ product graph signal features ${\mathbf{x}}_{\diamond ,\ell - 1}^{1},\ldots ,{\mathbf{x}}_{\diamond ,\ell - 1}^{g},\ldots {\mathbf{x}}_{\diamond ,\ell - 1}^{{F}_{\ell - 1}}$ , processes these with a bank of ${F}_{\ell }{F}_{\ell - 1}$ filters and outputs ${F}_{\ell }$ product graph signal features as
$$
{\mathbf{y}}_{\diamond ,\ell }^{f} = \mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\mathbf{H}}^{fg}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{x}}_{\diamond ,\ell - 1}^{g},\;f = 1,\ldots {F}_{\ell }, \tag{3}
$$
which are the higher-level linear representations of the layer.
Temporal downsampling reduces the temporal dimension in each output ${\left\{ {\mathbf{y}}_{\diamond ,\ell }^{f}\right\} }_{f}$ in (3) by down-sampling the latter along the temporal dimension with a rate $r$ . More specifically, we first transform
the $f$ -th output ${\mathbf{y}}_{\diamond ,\ell }^{f} \in {\mathbb{R}}^{N{T}_{\ell - 1}^{e}}$ into a matrix ${\mathbf{Y}}_{\ell }^{f} = {\operatorname{vec}}^{-1}\left( {\mathbf{y}}_{\diamond ,\ell }^{f}\right) \in {\mathbb{R}}^{N \times {T}_{\ell - 1}^{e}}$ and then summarize every $r$ consecutive columns without overlap to obtain the downsampled matrix ${\mathbf{X}}_{d,\ell }^{f} \in {\mathbb{R}}^{N \times {T}_{\ell }^{e}}$ with ${T}_{\ell }^{e} < {T}_{\ell - 1}^{e}$ . The $(n,t)$ -th entry of ${\mathbf{X}}_{d,\ell }^{f}$ is computed as
$$
{\mathbf{X}}_{d,\ell }^{f}\left( {n,t}\right) = \operatorname{SUM}\left( {{\mathbf{Y}}_{\ell }^{f}\left( {n,r\left( {t - 1}\right) + 1 : {rt}}\right) }\right) ,\;f = 1,\ldots {F}_{\ell }, \tag{4}
$$
where $\operatorname{SUM}\left( \cdot \right)$ is a summary function over the temporal indices $r\left( {t - 1}\right) + 1$ to ${rt}$ . This summary function could be a simple downsampling (i.e., output the first column in the block ${\mathbf{Y}}_{\ell }^{f}(n,r\left( {t - 1}\right) + 1$ : ${rt}))$ or an aggregation function (i.e., mean/max/min per spatial node).
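The downsampling rule in (4) amounts to grouping columns into blocks of $r$ and applying the summary per block. A minimal NumPy sketch (our own naming), under the assumption that the temporal dimension is divisible by $r$:

```python
import numpy as np

def temporal_downsample(Y, r, summary=np.max):
    """Temporal downsampling of Eq. (4): summarize every r consecutive
    columns of the N x T matrix Y without overlap (T divisible by r)."""
    N, T = Y.shape
    blocks = Y.reshape(N, T // r, r)  # group columns into blocks of r
    return summary(blocks, axis=2)    # one summary value per block

Y = np.array([[1., 5., 2., 4.],
              [3., 0., 7., 6.]])
X_d = temporal_downsample(Y, r=2)     # max per node over each block
```

Swapping `summary` for `np.mean` or `np.min` gives the other aggregation choices mentioned above; pure downsampling corresponds to keeping the first column of each block.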
This temporal downsampling increases the encoder's spatiotemporal memory without affecting the spatial structure. That is, nodes whose temporal indices were $r$ steps apart become neighbors, which brings a longer memory into the next layer and increases the encoder's receptive field. While spatial graph pooling could also be added [43], we do not advocate it for two reasons. First, the spatial graph acts as an inductive bias for the GTConvAE [9]; hence, changing the graph in the layers via graph reduction, coarsening, or alternatives will affect the spatial structure, ultimately changing the inductive bias. Second, the spatial graph often represents the communication channels for a distributed implementation of the GTConv [20, 42, 44], and changing it may be physically impossible as sensor nodes have a limited transmission radius. An option in the latter setting may be zero-pad spatial pooling [45, 46], but it requires memorizing the indices where the zero-padding is applied, which may be challenging for large graphs.
Activation functions nonlinearize the downsampled features to increase the representational capacity. We consider an entry-wise nonlinear function $\sigma \left( \cdot \right)$ , such as the ReLU, and produce the $\ell$ -th layer output as
$$
{\mathbf{X}}_{\ell }^{f} = \sigma \left( {\mathbf{X}}_{d,\ell }^{f}\right) ,\;f = 1,\ldots ,{F}_{\ell }. \tag{5}
$$
The encoder performs operations (3)-(4)-(5) for all the ${L}_{e}$ layers to yield the encoded output
$$
{\mathbf{Z}}_{\diamond } \mathrel{\text{ := }} {\mathbf{X}}_{\diamond ,{L}_{e}} = \operatorname{ENC}\left( {{\mathbf{x}}_{\diamond ,0},\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{e},\mathbf{s}}\right) , \tag{6}
$$
where ${\mathbf{x}}_{\diamond ,0} \mathrel{\text{ := }} {\mathbf{x}}_{\diamond } \in {\mathbb{R}}^{NT}$ , ${\mathbf{Z}}_{\diamond } = \left\lbrack {{\mathbf{z}}_{\diamond }^{1},\ldots ,{\mathbf{z}}_{\diamond }^{{F}_{{L}_{e}}}}\right\rbrack \in {\mathbb{R}}^{N{T}_{{L}_{e}}^{e} \times {F}_{{L}_{e}}}$ , and we made explicit the dependence on the product graph parameters $\mathbf{s} = {\left\lbrack {s}_{00},{s}_{01},{s}_{10},{s}_{11}\right\rbrack }^{\top }$ [cf. (1)].
§ 2.3 DECODER
Mirroring the encoder, the decoder reconstructs the input from the latent representations in (6). The generic layer $\ell$ performs graph-time convolutional filtering, followed by temporal upsampling and a pointwise nonlinearity.
GTConv filtering decodes the spatiotemporal latent representations from the encoder. Considering again ${F}_{\ell - 1}$ input features ${\mathbf{z}}_{\diamond ,\ell - 1}^{1},\ldots ,{\mathbf{z}}_{\diamond ,\ell - 1}^{g},\ldots ,{\mathbf{z}}_{\diamond ,\ell - 1}^{{F}_{\ell - 1}}$ and a filter bank of ${F}_{\ell }{F}_{\ell - 1}$ GTConv filters as per (2), the outputs are
$$
{\mathbf{y}}_{\diamond ,\ell }^{f} = \mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\mathbf{H}}^{fg}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{z}}_{\diamond ,\ell - 1}^{g},\;f = 1,\ldots {F}_{\ell }. \tag{7}
$$
Upsampling zero-pads the temporal values removed during downsampling [cf. (4)] so that the final GTConvAE output matches the dimension of $\mathbf{X}$ . Specifically, given the $f$ -th feature ${\mathbf{y}}_{\diamond ,\ell }^{f} \in {\mathbb{R}}^{N{T}_{\ell - 1}^{d}}$ from (7), we again transform it into a matrix ${\mathbf{Y}}_{\ell }^{f} = {\operatorname{vec}}^{-1}\left( {\mathbf{y}}_{\diamond ,\ell }^{f}\right) \in {\mathbb{R}}^{N \times {T}_{\ell - 1}^{d}}$ and obtain the upsampled matrix ${\mathbf{Z}}_{u,\ell }^{f} \in {\mathbb{R}}^{N \times {T}_{\ell }^{d}}$ whose $(n,t)$ -th entry is computed as
$$
{\mathbf{Z}}_{u,\ell }^{f}\left( {n,t}\right) = \left\{ \begin{array}{ll} {\mathbf{Y}}_{\ell }^{f}\left( {n,\lceil t/r\rceil }\right) ; & \text{ if }\exists k \in \mathbb{Z} : t = {kr} \\ 0; & \text{ o/w } \end{array}\right. \tag{8}
$$
where $\lceil \cdot \rceil$ is the ceiling function. ${}^{1}$ The GTConv filter bank in the next layer interpolates these zero-padded values from the downsampled ones. This implies that the downsampling rate in the encoder cannot be too harsh, to avoid losing information, and that the filter orders in the decoder cannot be too small, to retain a high interpolatory capacity.

${}^{1}$ We consider the same down/up-sampling rate in each layer of the decoder and encoder; hence, because of the mirrored structure, ${T}_{\ell }^{e}$ in (5) equals ${T}_{\ell - 1}^{d}$ in (8).
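The zero-padding rule in (8) can be sketched as follows (our own naming; values are placed at the 1-based indices $t = kr$ as in the equation):

```python
import numpy as np

def temporal_upsample(Y, r):
    """Temporal upsampling of Eq. (8): place the retained columns at the
    (1-based) indices t = r, 2r, ... and zero-pad the remaining entries."""
    N, T_in = Y.shape
    Z = np.zeros((N, T_in * r))
    Z[:, r - 1::r] = Y  # 1-based column t = kr is 0-based column kr - 1
    return Z

Y = np.array([[1., 2.],
              [3., 4.]])
Z = temporal_upsample(Y, r=2)
```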
Activation functions again nonlinearize the upsampled features in (8) and yield
$$
{\mathbf{Z}}_{\ell }^{f} = \sigma \left( {\mathbf{Z}}_{u,\ell }^{f}\right) ,\;f = 1,\ldots {F}_{\ell }. \tag{9}
$$
The decoder performs operations (7)-(8)-(9) for all ${L}_{d}$ layers to yield the decoded output ${\widehat{\mathbf{x}}}_{\diamond } =$ ${\mathbf{z}}_{\diamond ,{L}_{d}} \in {\mathbb{R}}^{NT}$ , which also corresponds to the GTConvAE output
$$
{\widehat{\mathbf{x}}}_{\diamond } = {\mathbf{z}}_{\diamond ,{L}_{d}} = \operatorname{DEC}\left( {{\mathbf{Z}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{d},\mathbf{s}}\right) , \tag{10}
$$
where we match the dimensions by setting ${F}_{{L}_{d}} = 1$ .
§ 2.4 LOSS FUNCTION
Given (6) and (10), the GTConvAE can be detailed as
$$
{\widehat{\mathbf{x}}}_{\diamond } = \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};\mathcal{H},\mathbf{s}}\right) = \operatorname{DEC}\left( {\operatorname{ENC}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{e},\mathbf{s}}\right) ,\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{d},\mathbf{s}}\right) . \tag{11}
$$
The GTConv filter parameters in $\mathcal{H}$ and the product graph parameters in $\mathbf{s}$ are estimated by minimizing the loss function
$$
\mathcal{L}\left( {\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}}\right) = {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\begin{Vmatrix}{\mathbf{x}}_{\diamond } - {\widehat{\mathbf{x}}}_{\diamond }\end{Vmatrix}}_{2}\right\rbrack + \rho \parallel \mathbf{s}{\parallel }_{1}. \tag{12}
$$
where the first term measures the reconstruction error over the probability distribution $\mathcal{D}$ of the training set, whereas the second term imposes sparsity in the spatiotemporal dependencies of the product graph. The scalar $\rho > 0$ controls the trade-off between fitting and regularization, and a higher value implies a stronger spatiotemporal sparsity (from the $\ell_1$ norm $\parallel \cdot {\parallel }_{1}$ ), i.e., a sparser spatiotemporal attention.
Complexity analysis: Denoting the maximum number of features over all layers by ${F}_{\max } = \max_\ell \left\{ {F}_{\ell }\right\}$ , the GTConvAE has $\left| \mathcal{H}\right| = \left( {{L}_{e} + {L}_{d}}\right) \left( {K + 1}\right) {F}_{\max }^{2}$ parameters. This is because each GTConv filter (2) has $K + 1$ parameters and each layer uses a filter bank of at most ${F}_{\max }^{2}$ filters. Although the product graph is of large dimension, it is highly sparse, and the computational complexity of the GTConvAE is of order $\mathcal{O}\left( {{M}_{\diamond }\left| \mathcal{H}\right| }\right)$ , where ${M}_{\diamond } = {NT} + N{M}_{T} + {MT} + {2M}{M}_{T}$ is the number of edges of the product graph ( $M$ edges in the spatial graph and ${M}_{T}$ edges in the temporal graph). This is because each graph-time filter has a computational complexity of order $\mathcal{O}\left( {\left( {K + 1}\right) {M}_{\diamond }}\right)$ [26] and the GTConvAE consists of $\left( {{L}_{e} + {L}_{d}}\right) {F}_{\max }^{2}$ graph-time filters. Note that we consider a sampling rate $r = 1$ to provide a worst-case analysis; the computational complexity reduces further for $r > 1$ .
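As a sanity check on these counts, the two formulas can be evaluated directly (the helper names and the toy setting are ours):

```python
def num_params(L_e, L_d, K, F_max):
    """|H| = (L_e + L_d)(K + 1) F_max^2 parameters in the GTConvAE."""
    return (L_e + L_d) * (K + 1) * F_max ** 2

def num_product_edges(N, M, T, M_T):
    """M_prod = NT + N*M_T + M*T + 2*M*M_T edges in the product graph."""
    return N * T + N * M_T + M * T + 2 * M * M_T

# toy setting: 3-layer encoder and decoder, order-4 filters, 8 features
p = num_params(L_e=3, L_d=3, K=4, F_max=8)      # 6 * 5 * 64 = 1920
m = num_product_edges(N=75, M=100, T=8, M_T=7)
```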
§ 3 STABILITY ANALYSIS
In this section, we analyze the stability of the GTConvAE w.r.t. relative perturbations of the spatial graph. This analysis is motivated by the fact that we do not always have access to the ground-truth spatial graph due to modeling issues or because the physical network undergoes slight changes over time. Hence, the spatial graph used for training may differ from that encountered during testing, and a stable GTConvAE is desirable to perform the tasks reliably.
We consider the relative perturbation model proposed in [27]
$$
\widehat{\mathbf{S}} = \mathbf{S} + \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) \tag{13}
$$
where $\widehat{\mathbf{S}}$ is the perturbed GSO and $\mathbf{E}$ is the perturbation matrix with bounded operator norm $\parallel \mathbf{E}\parallel \leq \epsilon$ . This model accounts for perturbations that depend on the graph structure, i.e., a node with more or heavier-weighted edges is relatively more prone to perturbation.
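The perturbation model (13) is straightforward to simulate. In the sketch below (our own naming and toy graph), a scaled identity perturbation with $\|\mathbf{E}\| = 0.05$ rescales every edge by the same relative amount:

```python
import numpy as np

def perturb_gso(S, E):
    """Relative perturbation model of Eq. (13): S_hat = S + (SE + ES)."""
    return S + S @ E + E @ S

S = np.array([[0., 1.],
              [1., 0.]])
E = 0.05 * np.eye(2)       # operator norm ||E|| = 0.05 <= epsilon
S_hat = perturb_gso(S, E)  # every edge weight grows by 10% relatively
```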
§ 3.1 SPATIOTEMPORAL INTEGRAL LIPSCHITZ FILTERS
To investigate the stability of the GTConvAE, we first characterize the graph-time convolutional filters in the spectral domain. Consider the eigendecompositions of the spatial GSO $\mathbf{S} = \mathbf{V}\mathbf{\Lambda }{\mathbf{V}}^{\mathrm{H}}$ and of the temporal GSO ${\mathbf{S}}_{T} = {\mathbf{V}}_{T}{\mathbf{\Lambda }}_{T}{\mathbf{V}}_{T}^{\mathrm{H}}$ . Matrices $\mathbf{V} = \left\lbrack {{\mathbf{v}}_{1},\ldots ,{\mathbf{v}}_{N}}\right\rbrack$ and ${\mathbf{V}}_{T} = \left\lbrack {{\mathbf{v}}_{T,1},\ldots ,{\mathbf{v}}_{T,T}}\right\rbrack$ collect the spatial and the temporal eigenvectors, respectively, and $\mathbf{\Lambda } = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{N}}\right)$ and ${\mathbf{\Lambda }}_{T} = \operatorname{diag}\left( {{\lambda }_{T,1},\ldots ,{\lambda }_{T,T}}\right)$ the corresponding eigenvalues. From (1), the eigendecomposition of the product graph GSO is ${\mathbf{S}}_{\diamond } = {\mathbf{V}}_{\diamond }{\mathbf{\Lambda }}_{\diamond }{\mathbf{V}}_{\diamond }^{\mathrm{H}}$ with eigenvectors ${\mathbf{V}}_{\diamond } = {\mathbf{V}}_{T} \otimes \mathbf{V}$ given by the Kronecker product $\otimes$ of the eigenvectors of the respective GSOs, while the eigenvalues ${\mathbf{\Lambda }}_{\diamond } = {\mathbf{\Lambda }}_{T}\diamond \mathbf{\Lambda }$ are defined by the product graph rule. As in graph signal processing [32], it is possible to characterize the joint graph-time Fourier transform of product graph signals. Specifically, the graph-time Fourier transform of signal ${\mathbf{x}}_{\diamond }$ is defined as $\widetilde{\mathbf{x}} = {\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}{\mathbf{x}}_{\diamond }$ and the eigenvalues in ${\mathbf{\Lambda }}_{\diamond }$ collect the graph-time frequencies of the product graph [33]. Applying this Fourier transform to the input and output of the GTConv filter in (2), we can write the filter input-output relation as ${\widetilde{\mathbf{y}}}_{\diamond } = \mathbf{H}\left( {\mathbf{\Lambda }}_{\diamond }\right) \widetilde{\mathbf{x}}$ , where ${\widetilde{\mathbf{y}}}_{\diamond }$ is the Fourier transform of the output and $\mathbf{H}\left( {\mathbf{\Lambda }}_{\diamond }\right)$ is an ${NT} \times {NT}$ diagonal matrix containing the filter frequency response on the main diagonal. This frequency response is of the form
$$
h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\lambda }_{\diamond ,\left( {n,t}\right) }^{k} \tag{14}
$$
where ${\lambda }_{\diamond ,\left( {n,t}\right) } = {\lambda }_{T,t}\diamond {\lambda }_{n}$ indicates the eigenvalue of ${\mathbf{S}}_{\diamond }$ corresponding to the spatial index $n \in \left\lbrack N\right\rbrack$ and temporal index $t \in \left\lbrack T\right\rbrack$ of the product graph.
The eigenvalues ${\lambda }_{\diamond ,\left( {n,t}\right) }$ can be considered as the frequencies of the product graph and can be ordered in ascending order of magnitude. We can then characterize the variation of the filter frequency response for two different spatial eigenvalues.
Definition 1. A GTConv filter with a frequency response $h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right)$ is graph integral Lipschitz if there exists constant $C > 0$ such that for all frequencies ${\lambda }_{\diamond ,\left( {n,t}\right) },{\lambda }_{\diamond ,\left( {{n}^{\prime },{t}^{\prime }}\right) } \in {\mathbf{\Lambda }}_{\diamond }$ , it holds that
$$
\left| {h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) - h\left( {\lambda }_{\diamond ,\left( {{n}^{\prime },{t}^{\prime }}\right) }\right) }\right| \leq C\frac{\left| {\lambda }_{n} - {\lambda }_{{n}^{\prime }}\right| }{\left| {{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}\right| /2}\text{ for all }\left\{ {{\lambda }_{n},{\lambda }_{{n}^{\prime }}}\right\} \in \mathbf{\Lambda }. \tag{15}
$$
Expression (15) states that the frequency response of a graph-time convolutional filter should vary sub-linearly with the spatial frequencies, at a rate governed by the midpoint $\left| {{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}\right| /2$ . This implies
$$
\left| {{\lambda }_{n}\frac{\partial h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) }{\partial {\lambda }_{n}}}\right| \leq C\text{ for all }{\lambda }_{n} \in \mathbf{\Lambda }\;\text{ and }\;{\lambda }_{\diamond ,\left( {n,t}\right) } \in {\mathbf{\Lambda }}_{\diamond } \tag{16}
$$
which means an integral Lipschitz filter cannot vary drastically at high frequencies. Hence, such a filter can discriminate low-frequency content but not high-frequency content.
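Condition (16) can be checked numerically for a toy response. In the sketch below (our own example, not a filter from the paper), $h(\lambda) = 1/(1+\lambda^2)$ flattens out at high frequencies and its empirical constant $\max_\lambda |\lambda\, h'(\lambda)|$ stays near $0.5$:

```python
import numpy as np

# Empirical check of condition (16) for the toy response
# h(lam) = 1 / (1 + lam^2): the product |lam * h'(lam)| stays bounded,
# i.e., the response flattens out at high graph frequencies.
lam = np.linspace(-10.0, 10.0, 2001)
h = 1.0 / (1.0 + lam ** 2)
dh = np.gradient(h, lam)          # finite-difference derivative
C_hat = np.max(np.abs(lam * dh))  # empirical integral Lipschitz constant
```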
Definition 2. A graph-time convolutional filter has normalized frequency response if $\left| {h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) }\right| \leq 1$ for all ${\lambda }_{\diamond ,\left( {n,t}\right) } \in {\mathbf{\Lambda }}_{\diamond }$ .
This definition is a direct consequence of normalizing the filters' frequency response by its maximum value. We shall show next that a GTConvAE with filters satisfying Defs. 1 and 2 is stable to perturbations of the form (13).
§ 3.2 STABILITY RESULT
The following theorem with proof in Appendix A provides the main result.
Theorem 1. Consider a GTConvAE with an ${L}_{e}$ -layer encoder and an ${L}_{d}$ -layer decoder having ${F}_{\ell } \leq {F}_{\max }$ and ${F}_{d,\ell } \leq {F}_{\max }$ features per layer in the encoder and decoder, respectively, and a summary function $\operatorname{SUM}\left( \cdot \right)$ performing pure downsampling with rate $r$ . Assume the filters are integral Lipschitz [cf. Def. 1] with a normalized frequency response [cf. Def. 2] and the nonlinearities are 1-Lipschitz (e.g., ReLU, absolute value). Let this GTConvAE be trained over the product graph (1) and deployed over its perturbed version whose spatial GSO is given in (13) with a perturbation of at most $\parallel \mathbf{E}\parallel \leq \epsilon$ . The distance between the two models is upper bounded by
$$
\parallel \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}}\right) - \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}}\right) {\parallel }_{2} \leq \left( {{L}_{d} + {L}_{e}}\right) {r}^{-{L}_{e}/2}{\epsilon \Delta }{F}_{\max }^{{L}_{e} + {L}_{d} - 1}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2} \tag{17}
$$
where $\Delta = {2C}\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\max }}\right) \left( {1 + \delta \sqrt{NT}}\right)$ and $\delta = {\left( \parallel \mathbf{U} - \mathbf{V}{\parallel }^{2} + 1\right) }^{2} - 1$ , with eigenvectors $\mathbf{U}$ from $\mathbf{E} = \mathbf{U}\mathbf{M}{\mathbf{U}}^{\mathrm{H}}$ and $\mathbf{V}$ from $\mathbf{S} = \mathbf{V}\mathbf{\Lambda }{\mathbf{V}}^{\mathrm{H}}$ .
The result (17) states that the GTConvAE is stable against relative perturbations. It also suggests that the GTConvAE is less stable on larger product graphs (factor $\sqrt{NT}$ ) since more nodes pass information over the perturbed edges. Moreover, making the model more complex by increasing the number of features or layers compromises stability, as more graph-time convolutional filters operate on a perturbed graph (factor ${F}_{\max }^{{L}_{e} + {L}_{d} - 1}$ ). We also see that stability improves with the sampling rate $r > 1$ because fewer nodes operate over the perturbed graph after downsampling. Furthermore, a deeper encoder implies more downsampling and hence improved stability; yet there is a trade-off between the terms ${r}^{-{L}_{e}/2}$ , ${F}_{\max }^{{L}_{e} + {L}_{d} - 1}$ , and ${L}_{e} + {L}_{d}$ in the bound. Finally, the parameters ${s}_{01}$ and ${s}_{11}$ appear in the stability bound because they are the only ones composing the spatial edges; thus, minimizing $\parallel \mathbf{s}{\parallel }_{1}$ in (12) also improves stability.
§ 4 NUMERICAL RESULTS
This section compares the GTConvAE with baseline solutions and competitive alternatives for time series denoising as well as anomaly detection with real data from solar irradiance and water networks. In all experiments, the ADAM optimizer with the standard hyperparameters is used and an unweighted directed line graph is considered for the temporal graph in (1).
§ 4.1 DENOISING OF SOLAR IRRADIANCE TIME SERIES
We consider the task of denoising solar irradiance time series over $N = {75}$ solar cities around the northern region of the U.S., measured in GHI $\left( W/{m}^{2}\right)$ [4]. Each solar city is a vertex, and undirected edges are set from the physical distances between the cities via a thresholded Gaussian kernel with $\sigma = {0.25}$ and threshold ${0.1}$ after normalizing the maximum weight to 1 [32]. The noise is generated from a zero-mean Gaussian distribution with a covariance matrix corresponding to the pseudo-inverse of the normalized graph Laplacian.
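This graph construction can be sketched as follows (our own helper; the city coordinates below are placeholders, and the exact distance normalization of [32] may differ):

```python
import numpy as np

def gaussian_kernel_graph(coords, sigma=0.25, th=0.1):
    """Thresholded Gaussian-kernel adjacency from pairwise distances,
    with the maximum edge weight normalized to 1."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    W = np.exp(-d ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    W /= W.max()              # normalize the maximum weight to 1
    W[W < th] = 0.0           # drop edges below the threshold
    return W

# three placeholder city locations: two close together, one far away
coords = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
W = gaussian_kernel_graph(coords)
```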
Figure 1: Denoising performance of the proposed GTConvAE and alternatives. The standard deviation for all the models is of order ${10}^{-2}$ .
Experimental setup. We considered the first 2000 samples for training and validation (2000-2014) and the subsequent 200 (2014-2016) for testing. The input data is a single feature corresponding to the GHI measurement, and the product graph has $N = {75}$ spatial nodes and $T = 8$ temporal nodes. The GTConvAE has three layers with $\{ 8,4,2\}$ features in the encoder and the reverse in the decoder; all filters are of order $K = 4$ and the normalized Laplacian is used as the GSO; the downsampling rate is $r = 2$ with a max summary function in (4); and ReLU activation functions are used. The regularizer weight in (12) is $\rho = {0.2}$ and the learning rate is ${25} \times {10}^{-4}$ . We compared the GTConvAE with the following alternatives:
* C3D [5]: A non-graph spatiotemporal autoencoder using three-dimensional CNNs.
* ConvLSTMAE [7]: A non-graph spatiotemporal autoencoder using two-dimensional CNNs followed by LSTMs.
* STGAE [1]: A modular spatiotemporal graph autoencoder that uses an edge varying filter for the graph dimension followed by temporal convolution.
* Baseline GCNN [42]: An autoencoder built with a conventional graph convolutional neural network using the time series as features over the nodes. The shift operator is the normalized Laplacian matrix.
The first two methods are considered to show the role of using a distance graph as an inductive bias. The third method is considered to compare the joint GTConvAE with disjoint alternatives, whereas the last model is considered to show the role of the sparse product graphs rather than treating time series as node features. The parameters for all models are chosen via grid search from the ranges reported in Appendix B.
Results. Fig. 1 shows the reconstruction normalized mean squared error (NMSE) for different signal-to-noise ratios (SNRs). The proposed GTConvAE performs comparably to STGAE at low SNRs and better at high SNRs. We attribute this improvement to the ability of the GTConvAE to capture the spatiotemporal patterns in the data jointly, while STGAE operates disjointly. We also see that, in comparison with the baseline GCNN, the GTConvAE performs consistently better, highlighting the importance of the sparser product graphs and temporal downsampling. Finally, we also observe a superior performance compared with the non-graph alternatives C3D and ConvLSTMAE.

Table 1: Comparison of different models on the BATADAL dataset. All metrics are the higher the better.

| Model | ${N}_{A}$ | $\mathcal{S}$ | ${\mathcal{S}}_{\text{TTD}}$ | ${\mathcal{S}}_{\text{CM}}$ | TPR | TNR |
|---|---|---|---|---|---|---|
| STGCAE-LSTM [2] | 7 | 0.924 | 0.920 | 0.928 | 0.892 | 0.964 |
| TGCN [47] | 7 | 0.931 | 0.934 | 0.928 | 0.885 | 0.971 |
| GTConvAE (ours) | 7 | 0.940 | 0.928 | 0.952 | 0.922 | 0.981 |
§ 4.2 ANOMALY DETECTION IN WATER NETWORKS
We now consider the task of detecting cyber-physical attacks on a water network. We considered the C-town network from the Battle of ATtack Detection ALgorithms (BATADAL) dataset comprising $N = {388}$ nodes (demand junctions, storage tanks, and reservoirs) and 8762 hourly measurements of 43 different node feature signals for a period of 12 months. We used the same setup as in [47] and considered a correlation graph from the data. The dataset provides a normal operating condition comprising recordings for the first 12 months and an anomalous event operating condition comprising 7 attacks over the successive 3 months. Refer to [48, 49] for more detail about the BATADAL dataset.
Experimental setup. The normal operating condition data are used to train the model for one-step forecasting, which is then used to detect anomalies. The anomalous operating condition data are used for testing, and an anomaly is flagged if the prediction error exceeds a fixed threshold, which we heuristically set to three times the error variance observed during training. The inputs are the 43 time series over the $N = {388}$ nodes, and we considered $T = 6$ for the temporal graph dimension. The GTConvAE has two layers with $\{ 8,2\}$ features in the encoder, mirrored in the decoder; all filters are of order $K = 4$; the downsampling rate is $r = 2$; the max function is used in (4); and the activation functions are ReLU. The regularizer weight in (12) is $\rho = {0.14}$ and the learning rate is $5 \times {10}^{-4}$. We compared the performance against two graph-based alternatives:
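As a minimal sketch of this flagging rule (not the authors' code), the threshold can be computed from the one-step prediction residuals observed during training and then applied to the squared test-time errors; all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# One-step prediction residuals under normal operation (hypothetical values).
train_res = rng.normal(0.0, 0.1, size=1000)
# Fixed rule from the text: threshold at three times the training error variance.
threshold = 3.0 * np.var(train_res)

# Hypothetical test residuals; the large one mimics an attack.
test_res = np.array([0.05, -0.08, 0.90, 0.12])
flags = test_res ** 2 > threshold
print(flags.tolist())
```

In practice a validation set can be used to calibrate the multiplier, which is the difference the results below attribute to TGCN's better timing score.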
* STGCAE-LSTM [2]: A related solution to our method that uses a Cartesian spatiotemporal graph with graph convolutions followed by an LSTM in the latent domain.
* TGCN [47]: A modular graph-based autoencoder using cascades of temporal convolutions and message passing.
The parameters of all models are obtained via grid search over the ranges reported in Appendix C. We measure the performance via the $\mathcal{S}$-score provided with the BATADAL dataset, which combines ${\mathcal{S}}_{\text{TTD}}$, rewarding the timeliness of detecting anomalies, and ${\mathcal{S}}_{\mathrm{{CM}}}$, rewarding the classification accuracy. The $\mathcal{S}$-score is defined as
$$
\mathcal{S} = {0.5}\left( {{\mathcal{S}}_{\mathrm{{TTD}}} + {\mathcal{S}}_{\mathrm{{CM}}}}\right) = {0.5}\left( {\left( {1 - \frac{1}{{N}_{A}}\mathop{\sum }\limits_{{i = 1}}^{{N}_{A}}\frac{{\mathrm{{TTD}}}_{i}}{\Delta {\mathrm{T}}_{i}}}\right) + \frac{\mathrm{{TPR}} + \mathrm{{TNR}}}{2}}\right) , \tag{18}
$$
where ${N}_{A}$ is the number of attacks, ${\mathrm{TTD}}_{i}$ is the detection time of the $i$-th attack, $\Delta {T}_{i}$ is its duration, TPR is the true positive rate, and TNR is the true negative rate.
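A small helper (with hypothetical detection times and durations, not taken from the dataset) makes the computation in (18) concrete:

```python
import numpy as np

def s_score(ttd, dt, tpr, tnr):
    """S-score of Eq. (18): the average of the timing score S_TTD and the
    classification score S_CM = (TPR + TNR) / 2."""
    ttd, dt = np.asarray(ttd, float), np.asarray(dt, float)
    s_ttd = 1.0 - np.mean(ttd / dt)   # penalize late detections relative to attack duration
    s_cm = 0.5 * (tpr + tnr)          # balanced classification accuracy
    return 0.5 * (s_ttd + s_cm), s_ttd, s_cm

# Hypothetical detection times and attack durations (hours) for N_A = 2 attacks.
s, s_ttd, s_cm = s_score(ttd=[2.0, 4.0], dt=[20.0, 40.0], tpr=0.92, tnr=0.98)
print(round(s, 3), round(s_ttd, 3), round(s_cm, 3))
```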
Results. Table 1 shows that all models detect all of the attacks; however, the TGCN achieves a better timing score ${\mathcal{S}}_{\text{TTD}}$. This is because its threshold is calibrated on a validation dataset, whereas we used a fixed heuristic threshold based only on the training data. In the anomaly classification accuracy ${\mathcal{S}}_{\mathrm{{CM}}}$, the GTConvAE outperforms the other two models, as the product graphs together with downsampling enable it to learn the spatiotemporal patterns in the data effectively. Overall, the GTConvAE performs better than the other models by a small margin.
Figure 2: Stability results for different scenarios of the GTConvAE and fixed product graphs. (a) Different SNRs in the topology. (b) Different graph sizes under a $4\,\mathrm{dB}$ perturbation. (c) Different sampling rates $r$.
§ 4.3 STABILITY ANALYSIS
To investigate the stability of the GTConvAE, we trained the model on a synthesized dataset so that we could control all settings, such as the spatial graph size $N$. The graph is an undirected stochastic block model with 5 communities and $N \in \{ {50},{100},\ldots ,{500}\}$ nodes. The edges are drawn independently with probability 0.8 for nodes in the same community and 0.2 otherwise. Each data sample is a diffused signal over the graph, $\mathbf{X} = \left\lbrack {\mathbf{{Sx}},\ldots ,{\mathbf{S}}^{T}\mathbf{x}}\right\rbrack$ with $T = 6$ and $\mathbf{x}$ having a single random non-zero entry. The autoencoder is used to reconstruct these data.
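The data generation can be sketched as follows. We use the normalized graph Laplacian as the graph shift operator $\mathbf{S}$, matching the setup described next; the exact sampling details (seed, isolated-node handling) are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, T = 50, 5, 6                                  # nodes, communities, diffusion steps
labels = rng.integers(0, C, size=N)                 # community of each node
P = np.where(labels[:, None] == labels[None, :], 0.8, 0.2)
A = np.triu(rng.random((N, N)) < P, k=1).astype(float)
A = A + A.T                                         # undirected adjacency, no self-loops
d = A.sum(axis=1)
d[d == 0] = 1.0                                     # guard against isolated nodes
S = np.eye(N) - A / np.sqrt(np.outer(d, d))         # normalized Laplacian as the GSO
x = np.zeros(N)
x[rng.integers(N)] = 1.0                            # single random non-zero entry
X = np.column_stack([np.linalg.matrix_power(S, t) @ x for t in range(1, T + 1)])
print(X.shape)
```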
Experimental setup. The model has two encoder and two decoder layers with sampling rate $r = 2$. The encoder layers have $\{ 8,4\}$ features, mirrored in the decoder. All filters are of order four, and the normalized graph Laplacian is used as the GSO. The activation functions are ReLU and pure downsampling is used. The regularizer weight is 0.25 and the learning rate is ${25} \times {10}^{-3}$. The model is trained on graphs of different sizes and tested on a perturbed graph following the relative perturbation model in (13) for different SNR scenarios in the topology. We compare the stability of the GTConvAE with learned product graphs against the same autoencoder with fixed Cartesian and strong product graphs.
Results. Fig. 2a shows the performance of the GTConvAE under different noise scenarios. The GTConvAE is the most stable at medium and high SNRs, as it leverages sparsity in the spatiotemporal coupling. However, its performance drops more rapidly at low SNRs because its parameters are trained for the specific data and task. Fig. 2b shows the reconstruction error over graphs of different sizes. The GTConvAE is more stable than the other models, even for larger graphs, for the same reason as before. All models lose performance similarly as the graph size grows, consistent with the theoretical result in (17).
§ 5 CONCLUSION
We introduced the GTConvAE as an unsupervised model for learning representations from multivariate time series over networks. The GTConvAE uses parametric product graphs to aggregate information from a spatiotemporal neighborhood while also learning the spatiotemporal couplings in the product graph. We proposed a spectral analysis of the GTConvAE, leveraging its convolutional nature, which led to a stability analysis. The stability analysis states that the GTConvAE is stable to relative perturbations of the spatial graph as long as the graph-time filters vary smoothly over high spatiotemporal frequencies. Finally, numerical results showed that the GTConvAE compares well with state-of-the-art models on benchmark datasets and corroborated the stability results.
papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_md/Initial_manuscript.md
# Piecewise-Velocity Model for Learning Continuous-time Dynamic Node Representations
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
Networks have become indispensable and ubiquitous structures in many fields to model the interactions among different entities, such as friendship in social networks or protein interactions in biological graphs. A major challenge is to understand the structure and dynamics of these systems. Although networks evolve through time, most existing graph representation learning methods target only static networks. Whereas approaches have been developed for modeling dynamic networks, there is a lack of efficient continuous-time dynamic graph representation learning methods that can provide accurate network characterization and visualization in low dimensions while explicitly accounting for prominent network characteristics such as homophily and transitivity. In this paper, we propose the PIecewise-VElocity Model (PIVEM) for the representation of continuous-time dynamic networks. It learns dynamic embeddings in which the temporal evolution of nodes is approximated by piecewise linear interpolations based on a latent distance model with piecewise constant node-specific velocities. The model allows for analytically tractable expressions of the associated Poisson process likelihood, with scalable inference invariant to the number of events. We further impose a scalable Kronecker-structured Gaussian Process prior on the dynamics, accounting for community structure, temporal smoothness, and disentangled (uncorrelated) latent embedding dimensions optimally learned to characterize the network dynamics. We show that PIVEM can successfully represent network structure and dynamics in ultra-low two- and three-dimensional embedding spaces. We further extensively evaluate the performance of the approach on various networks of different types and sizes and find that it outperforms relevant state-of-the-art methods in downstream tasks such as link prediction. In summary, PIVEM enables easily interpretable dynamic network visualizations and characterizations that can further improve our understanding of the intrinsic dynamics of time-evolving networks.
## 1 Introduction
With technological advancements in data storage and production systems, we have witnessed massive growth of graph (or network) data in recent years, with many prominent examples, including social, technological, and biological networks from diverse disciplines [1]. Graphs offer a natural way to store and represent the interactions among data points, and machine learning techniques on graphs have thus gained considerable attention for extracting meaningful information from these complex systems and performing various predictive tasks. In this regard, Graph Representation Learning (GRL) techniques have become a cornerstone of the field through their exceptional performance in many downstream tasks, such as node classification and edge prediction. Unlike classical techniques relying on the extraction and design of handcrafted feature vectors peculiar to a given network, GRL approaches aim to design algorithms that automatically learn features optimally preserving various characteristics of networks in the induced latent space.
Although many networks evolve through time and are subject to structural modifications, with nodes arriving or connections emerging, GRL methods have primarily addressed static networks, in other words, a snapshot of the network at a specific time. However, recent years have seen increasing efforts toward modeling dynamic complex networks; see [2] for a review. Whereas most approaches have concentrated on discrete-time temporal networks, built upon a collection of time-stamped snapshots (cf. [3-11]), the modeling of networks in continuous time has also been studied (cf. [12-15]). These approaches are based on latent class [3, 4, 12-14] and latent feature [5-11, 15] modeling, including advanced dynamic graph neural network representations [16, 17].
Although these procedures characterize evolving networks in a way useful for downstream tasks such as link prediction and node classification, existing dynamic latent feature models are either in discrete time or do not explicitly account for network homophily and transitivity in terms of their latent representations. Whereas latent class models typically provide interpretable representations at the level of groups, latent feature models generally rely on high-dimensional latent representations that are not easily amenable to visualization and interpretation. A further complication of most existing dynamic modeling approaches is their scalability, with complexity typically growing with the number of observed events and the number of network dyads.
This work addresses the problem of embedding nodes in a continuous-time latent space and seeks to accurately model network interaction patterns using low-dimensional, scalable representations explicitly accounting for network homophily and transitivity. The main contributions of the paper can be summarized as follows:
- We propose a novel scalable GRL method, the PIecewise-VElocity Model (PIVEM), to flexibly learn continuous-time dynamic node representations.
- We present a framework balancing the trade-off between the smoothness of node trajectories in the latent space and model capacity accounting for the temporal evolution.
- We show that the PIVEM can embed nodes accurately in very low dimensional spaces, i.e., $D = 2$ , such that it serves as a dynamic network visualization tool facilitating human insights into networks' complex, evolving structures.
- The performance of the introduced approach is extensively evaluated on various downstream tasks, such as network reconstruction and link prediction. We show that it outperforms well-known baseline methods on a wide range of datasets. Besides, we propose an efficient model optimization strategy enabling the PIVEM to scale to large networks.
Source code and other materials. The datasets, the Python implementation of the method, and all generated animations can be found at: https://tinyurl.com/pivem.
## 2 Related Work
Work on the dynamic modeling of complex networks has attracted substantial attention in recent years and covers approaches for modeling dynamic structures at the level of groups (i.e., latent class models) as well as dynamic representation learning approaches based on latent feature models, including graph neural networks (GNNs). Whereas most attention has been given to discrete-time dynamic networks, a substantial body of work has also covered continuous-time modeling, as outlined below.
### 2.1 Dynamic Latent Class Models
Initial efforts for modeling continuously evolving networks have combined latent class models defined by stochastic block models [18, 19] with Hawkes processes [20, 21]. In the work of [12], co-dependent (through time) Hawkes processes were combined with the Infinite Relational Model [22] (Hawkes IRM), yielding a non-parametric Bayesian approach capable of expressing reciprocity between inferred groups of actors. Drawbacks of this model are the computational cost of the imposed Markov chain Monte Carlo inference, as well as its limitation to modeling only reciprocation effects. Scalability issues were addressed in [13] via the Block Hawkes Model (BHM), which utilizes variational inference and simplifies the Hawkes IRM by associating each inferred block pair with a univariate point process. Recently, the BHM was extended to decouple interactions between different pairs of nodes belonging to the same block pair through the use of independent univariate Hawkes processes, defining the Community Hawkes Independent Pairs model [14]. Whereas the above works are based on continuous-time modeling of dynamic networks, the dynamic IRM (dIRM) of [3] focused on discrete-time networks, inducing an infinite Hidden Markov Model (IHMM) to account for transitions of nodes between communities over time. In [4], a dynamic hierarchical block model was proposed based on the modeling of change points, admitting dynamic node relocation within a Gibbs fragmentation tree. Despite the various advantages of such models, networks are constrained to be regarded and analyzed at the block level, which in many cases is restrictive.
### 2.2 Dynamic Latent Feature Models
Prominent works on node-level representations of continuous-time networks originally considered feature propagation within the discrete-time network topology [23] or extended the random-walk frameworks of [6] and [7] to the temporal case, yielding the Continuous-Time Dynamic Network Embeddings model (CTDNE), which outperforms the original approaches in multiple temporal settings. CTDNE provides a single temporal-aware node embedding, meaning that network and node evolution cannot be visualized and explored. A more flexible approach was designed in [24] (DyRep), where temporal node embeddings are learned under a so-called latent mediation process, combining an association process describing the dynamics of the network with a communication process describing the dynamics on the network. The DyRep model uses deep recurrent architectures to parameterize the intensity function of the point process, and thus its embedding space lacks explainability. Graph neural networks (GNNs) can be extended to the analysis of continuous-time networks via the Temporal Graph Network (TGN) [17], where the classical encoder-decoder architecture is coupled with a memory cell.
In the context of latent feature dynamic network models, Gaussian Processes (GPs) have been used to characterize the smoothness of the temporal dynamics. This includes the discrete-time dynamic network model considered in [8], in which latent factors were endowed with a GP prior based on radial basis function kernels, imposing temporal smoothness on the latent representation. The approach was extended in [9] to impose stochastic differential equations for the evolution of the latent factors. In [15], GPs were used for the modeling of continuous-time dynamic networks based on Poisson and Hawkes processes, respectively, including exogenous as well as endogenous features specified by a radial basis function prior.
Latent Distance Models (LDMs), as proposed in [25], have recently been shown to outperform prominent GRL methods using very low dimensions in the static case [26, 27]. LDMs for temporal networks have mostly been studied in the discrete case [10], mainly considering diffusion dynamics in order to make predictions, as first studied in [28] and extended with popularity and activity effects [11]. While all these models express homophily (a tendency of similar nodes to be more likely to connect to each other than dissimilar ones) and transitivity ("a friend of a friend is a friend") in the dynamic case, they fail to account for continuous dynamics.
Our work is inspired by these previous approaches for the modeling of dynamic complex networks. Specifically, we make use of the latent distance model formulation to account for homophily and transitivity, the Poisson process for the characterization of continuous-time dynamics, and a Gaussian Process prior based on the radial basis function kernel to account for temporal smoothness of the latent representation. Inspired by latent class models, we further impose a structured low-rank representation of nodes based on soft-assigning nodes to communities exhibiting similar temporal dynamics. Notably, we exploit how LDMs, as opposed to GNN approaches in general, can provide easily interpretable yet accurate network representations in ultra-low $D = 2$ dimensional spaces, facilitating accurate dynamic network visualization and interpretation.
## 3 Proposed Approach
Our main objective is to represent every node of a given network, $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$, in a low-dimensional metric space, $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$, in which pairwise node proximities are characterized by distances in a continuous-time latent space (Objective 3.1). Since we address continuous-time dynamic networks, the interactions among nodes can vary through time, with new links appearing or disappearing at any moment. More precisely, we presently consider undirected continuous-time networks:
Definition 3.1. A continuous-time dynamic undirected graph on a time interval ${\mathcal{I}}_{T} \mathrel{\text{:=}} \left\lbrack {0, T}\right\rbrack$ is an ordered pair $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ where $\mathcal{V} = \{ 1,\ldots , N\}$ is a set of nodes and $\mathcal{E} \subseteq \{ \left( {i, j, t}\right) \in {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \mid 1 \leq i < j \leq N\}$ is a set of events or edges.
We will use the symbol $N$ to denote the number of nodes in the vertex set and ${\mathcal{E}}_{ij}\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq \mathcal{E}$ to indicate the set of edges between nodes $i$ and $j$ occurring on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$.
We note that the approach readily extends to directed and bipartite dynamic networks.
### 3.1 Nonhomogeneous Poisson Point Processes
Poisson point processes (PPPs) are a natural and widely used choice to model the number of random events occurring in time or the locations of points in space. PPPs are parameterized by a quantity known as the rate or intensity, indicating the average density of points in the underlying space of the Poisson process. If the intensity depends on time or location, the point process is called a nonhomogeneous PPP (Defn. 3.2), and it is typically adopted for applications in which the event points are not uniformly distributed [29].
Definition 3.2. [Nonhomogeneous PPP] A counting process $\{ M\left( t\right) , t \geq 0\}$ is called a nonhomogeneous Poisson process with intensity function $\lambda \left( t\right) , t \geq 0$ if (i) $M\left( 0\right) = 0$ ,(ii) $M\left( t\right)$ has independent increments: i.e., $\left( {M\left( {t}_{1}\right) - M\left( {t}_{0}\right) }\right) ,\ldots ,\left( {M\left( {t}_{B}\right) - M\left( {t}_{B - 1}\right) }\right)$ are independent random variables for each $0 \leq {t}_{0} < \cdots < {t}_{B}$ , and (iii) $M\left( {t}_{u}\right) - M\left( {t}_{l}\right)$ is Poisson distributed with mean ${\int }_{{t}_{l}}^{{t}_{u}}\lambda \left( t\right) {dt}$ .
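For intuition, a nonhomogeneous PPP can be simulated by thinning (the Lewis-Shedler method): sample candidates from a homogeneous process at a dominating rate $\lambda_{\max}$ and keep each candidate $t$ with probability $\lambda(t)/\lambda_{\max}$. The intensity below is purely illustrative, not one from the paper:

```python
import math
import random

def sample_nhpp(lam, T, lam_max, seed=0):
    """Simulate a nonhomogeneous Poisson process on [0, T] by thinning:
    draw candidates from a homogeneous process with rate lam_max and keep
    each candidate t with probability lam(t) / lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)   # next candidate arrival
        if t > T:
            return events
        if rng.random() < lam(t) / lam_max:
            events.append(t)            # accepted event of the NHPP

# Illustrative intensity 2 + sin(t), dominated by lam_max = 3 on [0, 10].
events = sample_nhpp(lambda t: 2.0 + math.sin(t), T=10.0, lam_max=3.0)
print(len(events))
```

The expected number of events is $\int_0^T \lambda(t)\,dt$, matching property (iii) of the definition.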
In this paper, we consider continuous-time dynamic networks such that the events (or links/edges) among nodes can occur at any point in time. As we will examine in the following sections, these interactions do not necessarily exhibit any recurring characteristics; instead, they vary over time in many real networks. In this regard, we assume that the number of links, $M\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack$, between a pair of nodes $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ follows a nonhomogeneous Poisson point process (NHPP) with intensity function ${\lambda }_{ij}\left( t\right)$ on the time interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$, and for a given network $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$, the log-likelihood function can be written as
$$
\mathcal{L}\left( \Omega \right) \mathrel{\text{:=}} \log p\left( {\mathcal{G} \mid \Omega }\right) = \frac{1}{2}\mathop{\sum }\limits_{{\left( {i, j}\right) \in {\mathcal{V}}^{2}}}\left( {\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) - {\int }_{0}^{T}{\lambda }_{ij}\left( t\right) {dt}}\right) \tag{1}
$$
where ${\mathcal{E}}_{i, j} \subseteq \mathcal{E}\left\lbrack {0, T}\right\rbrack$ is the set of links of node pair $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ on the timeline ${\mathcal{I}}_{T} \mathrel{\text{:=}} \left\lbrack {0, T}\right\rbrack$ , and $\Omega = {\left\{ {\lambda }_{ij}\right\} }_{1 \leq i < j \leq N}$ indicates the set of intensity functions.
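The per-dyad term of the log-likelihood in Equation (1) can be evaluated numerically when the integral of the intensity has no closed form. The constant-intensity check below (where the integral is exact) is our own sanity example, not a computation from the paper:

```python
import numpy as np

def pair_log_likelihood(lam, events, T, grid=10_000):
    """One dyad's contribution to Eq. (1): the sum of log-intensities at
    the observed events minus the integrated intensity over [0, T],
    approximated with the trapezoidal rule."""
    ts = np.linspace(0.0, T, grid)
    vals = lam(ts)
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))
    return float(np.sum(np.log(lam(np.asarray(events)))) - integral)

# Constant intensity lam = 2 on [0, 1] with a single event: ll = log(2) - 2,
# and the trapezoidal integral is exact for a constant.
ll = pair_log_likelihood(lambda t: 2.0 * np.ones_like(t), [0.5], T=1.0)
print(round(ll, 4))
```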
### 3.2 Problem Formulation
Without loss of generality, it can be assumed that the timeline starts from 0 and is bounded by $T \in {\mathbb{R}}^{ + }$. Since the interactions among nodes can occur at any time point on ${\mathcal{I}}_{T} = \left\lbrack {0, T}\right\rbrack$, we would like to identify an accurate continuous-time node representation ${\{ \mathbf{r}\left( {i, t}\right) \} }_{\left( {i, t}\right) \in \mathcal{V} \times {\mathcal{I}}_{T}}$ defined in a low-dimensional latent space ${\mathbb{R}}^{D}$ $\left( {D \ll N}\right)$, where $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow {\mathbb{R}}^{D}$ is a map giving the embedding or representation of node $i \in \mathcal{V}$ at time point $t \in {\mathcal{I}}_{T}$. We define our objective more formally as follows:
Objective 3.1. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ be a continuous-time dynamic network and ${\lambda }^{ * } : {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \rightarrow \mathbb{R}$ be an unknown intensity function of a nonhomogeneous Poisson point process. For a given metric space $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$, our purpose is to learn a function or representation $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow \mathrm{X}$ satisfying
$$
\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{d}_{\mathrm{X}}\left( {\mathbf{r}\left( {i, t}\right) ,\mathbf{r}\left( {j, t}\right) }\right) {dt} \approx \frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{\lambda }^{ * }\left( {i, j, t}\right) {dt} \tag{2}
$$
for all $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ pairs, and for every interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$ .
In this work, we consider the Euclidean metric on a $D$ -dimensional real vector space, $\mathrm{X} \mathrel{\text{:=}} {\mathbb{R}}^{D}$ and the embedding of node $i \in \mathcal{V}$ at time $t \in {\mathcal{I}}_{T}$ will be denoted by ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ .
### 3.3 PIVEM: Piecewise-Velocity Model For Learning Continuous-time Embeddings
We learn continuous-time node representations by employing the canonical exponential link-function defining the intensity function as
$$
{\lambda }_{ij}\left( t\right) \mathrel{\text{:=}} \exp \left( {{\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}}\right) \tag{3}
$$
where ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ and ${\beta }_{i} \in \mathbb{R}$ denote the embedding vector at time $t$ and the bias term of node $i \in \mathcal{V}$, respectively. For given bias terms, it can be seen from Lemma 3.1 that this definition of the intensity function provides a guarantee for our goal given in Equation (2), so that a pair of nodes having a high number of interactions is positioned close in the latent space. Although the squared Euclidean distance in Equation (3) is not a metric, we employ it as a distance [27, 30].
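A direct transcription of Equation (3), with purely illustrative bias and embedding values:

```python
import numpy as np

def intensity(beta_i, beta_j, r_i, r_j):
    """Eq. (3): the log-intensity is the sum of the node biases minus the
    squared Euclidean distance between the two embeddings."""
    return float(np.exp(beta_i + beta_j - np.sum((r_i - r_j) ** 2)))

# Biases sum to 1 and the squared distance is 1, so the intensity is exp(0) = 1.
lam = intensity(0.5, 0.5, np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print(lam)  # 1.0
```

Moving the embeddings apart shrinks the intensity exponentially, which is how the model encodes homophily.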
Lemma 3.1. For given fixed bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ , the node embeddings, ${\left\{ {\mathbf{r}}_{i}\left( t\right) \right\} }_{i \in \mathcal{V}}$ , learned by optimizing the objective function given in Equation (1) satisfy
$$
\left| {\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) }\end{Vmatrix}{dt}}\right| \leq \sqrt{\left( {{\beta }_{i} + {\beta }_{j}}\right) - \log \left( {{p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }}\right) }\;\text{ for all }\left( {i, j}\right) \in {\mathcal{V}}^{2}
$$
where ${p}_{ij}$ is the probability of having more than ${m}_{ij}$ links between $i$ and $j$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ .
*Proof.* See the appendix for the proof.
Notably, constraining the approximation of the unknown intensity function to a metric space imposes the homophily property (i.e., similar nodes in the graph are placed close to each other in the embedding space). For a pair of nodes exhibiting a high number of interactions, the average intensity must be high, so the term ${p}_{ij}\,{m}_{ij}/\left( {{t}_{u} - {t}_{l}}\right)$ in Lemma 3.1 converges to 1, and the average distance between the nodes is bounded by the sum of their bias terms. It can also be seen that the transitivity property holds to some extent (i.e., if node $i$ is similar to $j$ and $j$ is similar to $k$, then $i$ should also be similar to $k$) since the squared Euclidean distance can be bounded [27, 31].
Importantly, for a dynamic embedding, we would like the embeddings of a pair of nodes to be close to each other when the nodes interact heavily during a particular time interval and far from each other when they have few or no links. Note that the bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ account for node-specific effects such as degree heterogeneity [31, 32]; they provide additional flexibility by acting as scaling factors for the corresponding nodes so that, for instance, a hub node can have a high number of simultaneous interactions without moving close to the others in the latent space.
Since our primary purpose is to learn continuous node representations in a latent space, we first define the representation of node $i \in \mathcal{V}$ at time $t$ by a linear model, $\mathbf{r}_i(t) := \mathbf{x}_i^{(0)} + \mathbf{v}_i t$. Here, $\mathbf{x}_i^{(0)}$ can be considered the initial position and $\mathbf{v}_i$ the velocity of the corresponding node. However, the linear model provides only minimal capacity for tracking the nodes and modeling their representations. Therefore, we reinterpret the given timeline $\mathcal{I}_T := [0, T]$ by dividing it into $B$ equally sized bins $[t_{b-1}, t_b)$, $1 \leq b \leq B$, such that $[0, T] = [0, t_1) \cup \cdots \cup [t_{B-1}, t_B]$ where $t_0 := 0$ and $t_B := T$. By applying the linear model on each subinterval, we obtain a piecewise-linear approximation of general intensity functions, strengthening the model's capacity. As a result, we can write the position of node $i$ at time $t \in \mathcal{I}_T$ as follows:
$$
\mathbf{r}_i(t) := \mathbf{x}_i^{(0)} + \Delta_B \mathbf{v}_i^{(1)} + \Delta_B \mathbf{v}_i^{(2)} + \cdots + \Delta_B \mathbf{v}_i^{(\lfloor t/\Delta_B \rfloor)} + \left( t \bmod \Delta_B \right) \mathbf{v}_i^{(\lfloor t/\Delta_B \rfloor + 1)} \tag{4}
$$
where $\Delta_B$ denotes the bin width, $T/B$, and $\bmod$ is the modulo operation used to compute the time elapsed within the current bin. Note that the piecewise interpretation of the timeline allows us to better track the paths of the nodes in the embedding space, and Theorem 3.2 shows that we can obtain arbitrarily accurate trails by increasing the number of bins.
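To make Equation (4) concrete, here is a minimal NumPy sketch of the piecewise-linear position; the function and variable names are ours for illustration, not the authors' code:

```python
import numpy as np

def position(x0, v, t, T=1.0):
    """Piecewise-linear position of Eq. (4): x0 is the (D,) initial
    position, v the (B, D) per-bin velocities, t a time in [0, T]."""
    B = v.shape[0]
    dB = T / B                          # bin width Delta_B
    b = min(int(t // dB), B - 1)        # current bin index (t = T falls in the last bin)
    r = x0 + dB * v[:b].sum(axis=0)     # displacement over fully traversed bins
    return r + (t - b * dB) * v[b]      # remainder (t mod Delta_B) in the current bin

# e.g. with B = 2 bins on [0, 1]: velocity 1 in the first bin, 2 in the second
r = position(np.array([0.0]), np.array([[1.0], [2.0]]), t=0.75)
print(r)  # [1.] : 0.5 * 1 over bin 1 plus 0.25 * 2 inside bin 2
```

The clamping of the bin index handles the endpoint convention $t = T$ from the theorem below the equation in the paper.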
Theorem 3.2. Let $\mathbf{f}(t) : [0, T] \rightarrow \mathbb{R}^D$ be a continuous embedding of a node. For any given $\epsilon > 0$, there exists a continuous, piecewise-linear node embedding $\mathbf{r}(t)$ satisfying $\| \mathbf{f}(t) - \mathbf{r}(t) \|_2 < \epsilon$ for all $t \in [0, T]$, where $\mathbf{r}(t) := \mathbf{r}^{(b)}(t)$ for all $(b-1)\Delta_B \leq t < b\Delta_B$, $\mathbf{r}(t) := \mathbf{r}^{(B)}(t)$ for $t = T$, and $\Delta_B = T/B$ for some $B \in \mathbb{N}^+$.
**Proof.** Please see the appendix for the proof.
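The theorem can be illustrated numerically by interpolating a smooth trajectory on $B$ equal bins and checking that the worst-case error shrinks as $B$ grows; this is our own sketch, not part of the paper:

```python
import numpy as np

def pw_linear_error(f, B, T=1.0, grid=1001):
    """Max deviation between f and its continuous piecewise-linear
    interpolant on B equal-width bins of [0, T]."""
    knots = np.linspace(0.0, T, B + 1)       # bin boundaries t_0, ..., t_B
    ts = np.linspace(0.0, T, grid)
    approx = np.interp(ts, knots, f(knots))  # piecewise-linear r(t)
    return float(np.max(np.abs(f(ts) - approx)))

f = lambda t: np.sin(2 * np.pi * t)          # a one-dimensional trajectory
print(pw_linear_error(f, 4) > pw_linear_error(f, 64))  # True: error drops with B
```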
Prior probability. In order to control the smoothness of the motion in the latent space, we employ a Gaussian Process (GP) [33] prior over the initial positions $\mathbf{x}^{(0)} \in \mathbb{R}^{N \times D}$ and velocity vectors $\mathbf{v} \in \mathbb{R}^{B \times N \times D}$. Hence, we suppose that $\operatorname{vect}(\mathbf{x}^{(0)}) \oplus \operatorname{vect}(\mathbf{v}) \sim \mathcal{N}(\mathbf{0}, \mathbf{\Sigma})$ where $\mathbf{\Sigma} := \lambda^2 (\sigma_{\mathbf{\Sigma}}^2 \mathbf{I} + \mathbf{K})$ is the covariance matrix with a scaling factor $\lambda \in \mathbb{R}$, $\sigma_{\mathbf{\Sigma}} \in \mathbb{R}$ denotes the noise of the covariance, and $\operatorname{vect}(\cdot)$ is the vectorization operator stacking the columns of its argument into a single vector. To reduce the number of parameters of the prior and enable scalable inference, we define $\mathbf{K}$ as a Kronecker product of three matrices, $\mathbf{K} := \mathbf{B} \otimes \mathbf{C} \otimes \mathbf{D}$, respectively accounting for temporal-, node-, and dimension-specific covariance structures. Specifically, $\mathbf{B} := [c_{\mathbf{x}^{0}}] \oplus \left[ \exp\left( -(c_b - c_{\widetilde{b}})^2 / 2\sigma_{\mathbf{B}}^2 \right) \right]_{1 \leq b, \widetilde{b} \leq B}$ is a $(B + 1) \times (B + 1)$ matrix intended to capture the smoothness of velocities across time bins, where $c_b = \frac{t_{b-1} + t_b}{2}$ is the center of the corresponding
bin, and the matrix is constructed by combining the radial basis function (RBF) kernel with a scalar term $c_{\mathbf{x}^{0}}$, decoupling the initial positions from the structure of the velocities. The node-specific matrix $\mathbf{C} \in \mathbb{R}^{N \times N}$ is constructed as a product of a low-rank matrix, $\mathbf{C} := \mathbf{Q}\mathbf{Q}^{\top}$, where the row sums of $\mathbf{Q} \in \mathbb{R}^{N \times k}$ equal $1$ ($k \ll N$), and it aims to extract covariation patterns in the motion of the nodes. Finally, we simply set the dimensionality matrix to the identity, $\mathbf{D} := \mathbf{I} \in \mathbb{R}^{D \times D}$, in order to have uncorrelated dimensions.
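The structure of this covariance can be sketched as follows; all numeric values are illustrative, and the variable names (`sigma_S` for $\sigma_{\mathbf{\Sigma}}$, `c_x0` for $c_{\mathbf{x}^0}$, etc.) are ours:

```python
import numpy as np

def build_covariance(B=4, N=5, D=2, k=2, sigma_B=1.0, c_x0=1.0,
                     sigma_S=0.1, lam=1.0, T=1.0, seed=0):
    """Sigma = lam^2 (sigma_S^2 I + B ⊗ C ⊗ D) with B = [c_x0] ⊕ RBF over
    bin centers, C = Q Q^T (rows of Q summing to 1), and D = I."""
    rng = np.random.default_rng(seed)
    centers = (np.arange(B) + 0.5) * (T / B)               # bin centers c_b
    rbf = np.exp(-(centers[:, None] - centers[None, :])**2 / (2 * sigma_B**2))
    Bmat = np.zeros((B + 1, B + 1))
    Bmat[0, 0] = c_x0                                      # decoupled x^(0) block
    Bmat[1:, 1:] = rbf
    Q = rng.random((N, k))
    Q /= Q.sum(axis=1, keepdims=True)                      # row sums equal 1
    K = np.kron(np.kron(Bmat, Q @ Q.T), np.eye(D))
    return lam**2 * (sigma_S**2 * np.eye(len(K)) + K)

Sigma = build_covariance()
print(Sigma.shape)  # (50, 50): (B + 1) * N * D = 5 * 5 * 2
```

Because each Kronecker factor is positive semi-definite and the noise term $\sigma_{\mathbf{\Sigma}}^2 \mathbf{I}$ is added, the resulting matrix is a valid (positive-definite) covariance.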
To sum up, we can express our objective relying on the piecewise velocities with the prior as follows:
$$
\widehat{\Omega} = \underset{\Omega}{\arg\max}\, \frac{1}{2}\sum_{(i,j)\in\mathcal{V}^2}\left( \sum_{e_{ij}\in\mathcal{E}_{ij}} \log \lambda_{ij}(e_{ij}) - \int_0^T \lambda_{ij}(t)\, dt \right) + \log \mathcal{N}\left( \begin{bmatrix} \mathbf{x}^{(0)} \\ \mathbf{v} \end{bmatrix}; \mathbf{0}, \mathbf{\Sigma} \right) \tag{5}
$$
where $\Omega = \{\mathbf{\beta}, \mathbf{x}^{(0)}, \mathbf{v}, \sigma_{\mathbf{\Sigma}}, \sigma_{\mathbf{B}}, c_{\mathbf{x}^0}, \mathbf{Q}\}$ is the set of hyper-parameters, and $\lambda_{ij}(t)$ is the intensity function as defined in Equation (3) based on the node embeddings $\mathbf{r}_i(t) \in \mathbb{R}^D$.
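The data term of Equation (5) is the standard inhomogeneous-Poisson log-likelihood per dyad. A sketch of that term for one node pair follows; the paper evaluates the integral in closed form, while we use simple trapezoidal quadrature here for illustration:

```python
import numpy as np

def dyad_loglik(event_times, intensity, T=1.0, grid=1000):
    """Sum of log-intensities at the observed events minus the integrated
    intensity over [0, T] (trapezoidal quadrature)."""
    ts = np.linspace(0.0, T, grid)
    vals = np.array([intensity(t) for t in ts])
    integral = float(np.sum((vals[1:] + vals[:-1]) / 2) * (T / (grid - 1)))
    return sum(np.log(intensity(e)) for e in event_times) - integral

# constant intensity lambda = 2 with two events: 2 * log(2) - 2
print(round(dyad_loglik([0.1, 0.5], lambda t: 2.0), 4))  # -0.6137
```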
### 3.4 Optimization
Our objective given in Equation (5) is not a convex function, so the learning strategy we follow is of great significance both for escaping local minima and for the quality of the representations. We start by randomly initializing the model's hyper-parameters from $[-1, 1]$, except for the velocity tensor, which is initialized to 0. We adopt a sequential strategy in learning these parameters. In other words, we first optimize the initial positions and bias terms together, $\{\mathbf{x}^{(0)}, \mathbf{\beta}\}$, for a given number of epochs; then, we include the velocity tensor, $\{\mathbf{v}\}$, in the optimization process and repeat the training for the same number of epochs. Finally, we add the prior parameters and learn all model hyper-parameters together. We employ the Adam optimizer [34] with a learning rate of 0.1.
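This staged schedule can be sketched generically; we use plain gradient steps and illustrative parameter names for brevity (the paper uses Adam):

```python
def staged_fit(grad, params, stages, lr=0.1, epochs=200):
    """Sequential strategy: each stage takes gradient steps on a growing
    subset of the parameters while the rest stay frozen."""
    for active in stages:
        for _ in range(epochs):
            g = grad(params)
            for name in active:
                params[name] -= lr * g[name]
    return params

# toy quadratic loss per parameter group, minimized at the targets below
target = {"x0": 1.0, "beta": 2.0, "v": 3.0}
grad = lambda p: {n: 2 * (p[n] - target[n]) for n in p}
fit = staged_fit(grad, {"x0": 0.0, "beta": 0.0, "v": 0.0},
                 stages=[{"x0", "beta"}, {"x0", "beta", "v"}])
print({k: round(val, 3) for k, val in fit.items()})  # {'x0': 1.0, 'beta': 2.0, 'v': 3.0}
```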
Computational issues and complexity. Note that we need to evaluate the log-intensity term in Equation (5) for each pair $(i, j) \in \mathcal{V}^2$ and event time $e_{ij} \in \mathcal{E}_{ij}$, so the computational cost for the whole network is bounded by $\mathcal{O}(|\mathcal{V}|^2 |\mathcal{E}|)$. However, we can alleviate this cost by pre-computing certain coefficients at the beginning of the optimization process, reducing the complexity to $\mathcal{O}(|\mathcal{V}|^2 B)$. We also have an explicit formula for the integral term since we utilize the squared Euclidean distance, so it can be computed in at most $\mathcal{O}(|\mathcal{V}|^2)$ operations. Instead of optimizing over the whole network at once, we apply a batching strategy over the set of nodes to reduce the memory requirements, sampling $\mathcal{S}$ nodes at each epoch. Hence, the overall complexity of the log-likelihood is $\mathcal{O}(\mathcal{S}^2 B \mathcal{I})$ where $\mathcal{I}$ is the number of epochs and $\mathcal{S} \ll |\mathcal{V}|$. Similarly, the prior can be computed in at most $\mathcal{O}(B^3 D^3 K^2 \mathcal{S})$ operations using algebraic identities such as the Woodbury matrix identity and the matrix determinant lemma [35]. To sum up, the complexity of the proposed approach is $\mathcal{O}(B \mathcal{S}^2 \mathcal{I} + B^3 D^3 K^2 \mathcal{S} \mathcal{I})$ (please see the appendix for the derivations and other details).
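The Woodbury identity mentioned above is what lets a low-rank-plus-diagonal covariance be inverted through a small $k \times k$ system instead of the full $n \times n$ one; a generic sketch:

```python
import numpy as np

def woodbury_inverse(sigma2, U):
    """(sigma2 I + U U^T)^{-1} = (I - U (sigma2 I_k + U^T U)^{-1} U^T) / sigma2,
    so the expensive solve is k x k rather than n x n."""
    n, k = U.shape
    inner = sigma2 * np.eye(k) + U.T @ U    # small k x k system
    return (np.eye(n) - U @ np.linalg.solve(inner, U.T)) / sigma2

rng = np.random.default_rng(0)
U = rng.standard_normal((50, 3))
A = 0.5 * np.eye(50) + U @ U.T
print(np.allclose(woodbury_inverse(0.5, U), np.linalg.inv(A)))  # True
```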
## 4 Experiments
In this section, we extensively evaluate the performance of the proposed PIecewise-VElocity Model (PIVEM) against well-known baselines on challenging tasks over datasets of various sizes and types. We consider all networks as undirected, and the event times of links are scaled to the interval $[0, 1]$ for consistency across experiments. We use the finest granularity level of the given input timestamps, such as seconds or milliseconds. We provide a brief summary of the networks below; more details and various statistics are reported in Table 4 in the appendix. For all methods, we learn node embeddings in two-dimensional space ($D = 2$) since one of the objectives of this work is to produce dynamic node embeddings facilitating human insight into complex networks.
Experimental Setup. We first split the networks into two sets, such that the events occurring in the last 10% of the timeline are held out for prediction. Then, we randomly choose 10% of the node pairs among all possible dyads in the network for the graph completion task, ensuring that each node in the residual network retains at least one event so that the number of nodes stays fixed. If a pair of nodes contains events only in the prediction set and these nodes have no other links during the training period, they are removed from the networks.
For conducting the experiments, we generate the labeled dataset of links as follows: for the positive samples, we construct small intervals of length $2 \times 10^{-3}$ around each event time (i.e., $[e - 10^{-3}, e + 10^{-3}]$ where $e$ is an event time). We randomly sample an equal number of time points and corresponding node pairs to form negative instances. If a sampled time point does not fall inside the interval of a positive sample for that pair, we follow the same strategy to build an interval around it, and it is considered a negative instance; otherwise, we sample another time point and dyad. Note that some networks contain a very high number of links, which leads to computational problems. Therefore, we subsample $10^4$ positive and $10^4$ negative instances whenever a network contains more than this.
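A sketch of this sampling procedure; the function name and data layout are ours, not the paper's:

```python
import random

def make_instances(events, pairs, eps=1e-3, seed=0):
    """events maps a node pair to its event times on [0, 1]. Each event
    yields a positive interval [e - eps, e + eps]; negatives are random
    (pair, time) draws, rejected if the time falls inside a positive
    interval of the same pair."""
    rng = random.Random(seed)
    pos = [(p, t - eps, t + eps) for p, ts in events.items() for t in ts]
    neg = []
    while len(neg) < len(pos):
        p, t = rng.choice(pairs), rng.random()
        if all(not (lo <= t <= hi) for q, lo, hi in pos if q == p):
            neg.append((p, t - eps, t + eps))
    return pos, neg

pos, neg = make_instances({(0, 1): [0.2, 0.7], (0, 2): [0.5]},
                          [(0, 1), (0, 2), (1, 2)])
print(len(pos), len(neg))  # 3 3
```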
Table 1: The performance evaluation for the network reconstruction experiment over various datasets.
<table><tr><td rowspan="2"/><td colspan="2">Synthetic(π)</td><td colspan="2">Synthetic $\left( \mu \right)$</td><td colspan="2">College</td><td colspan="2">Contacts</td><td colspan="2">Email</td><td colspan="2">Forum</td><td colspan="2">Hypertext</td></tr><tr><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td></tr><tr><td>LDM</td><td>.563</td><td>.539</td><td>.669</td><td>.642</td><td>.951</td><td>.944</td><td>.860</td><td>.835</td><td>.954</td><td>.948</td><td>.909</td><td>.897</td><td>.818</td><td>.797</td></tr><tr><td>NODE2VEC</td><td>.519</td><td>.507</td><td>.503</td><td>.509</td><td>.711</td><td>.655</td><td>.812</td><td>.756</td><td>.853</td><td>.828</td><td>.677</td><td>.619</td><td>.696</td><td>.648</td></tr><tr><td>CTDNE</td><td>.518</td><td>.522</td><td>.499</td><td>.505</td><td>.689</td><td>.656</td><td>.599</td><td>.584</td><td>.630</td><td>.645</td><td>.643</td><td>.608</td><td>.540</td><td>.545</td></tr><tr><td>PIVEM</td><td>.762</td><td>.713</td><td>.905</td><td>.869</td><td>.948</td><td>.948</td><td>.938</td><td>.938</td><td>.978</td><td>.977</td><td>.907</td><td>.902</td><td>.830</td><td>.823</td></tr></table>
Synthetic datasets. We generate two artificial networks in order to evaluate the behavior of the models in controlled experimental settings. (i) Synthetic($\pi$) is sampled from the prior distribution stated in Subsection 3.2. The hyper-parameters $\mathbf{\beta}$, $K$, and $B$ are set to 0, 20, and 100, respectively. (ii) Synthetic($\mu$) is constructed based on temporal block structures. The timeline is divided into 10 sub-intervals, and the nodes are randomly split into 20 groups within each interval. The links within each group are sampled from the Poisson distribution with a constant intensity of 5.
Real Networks. The (iii) Hypertext network [36] was built from radio badge records capturing the interactions of conference attendees over 2.5 days, where each event indicates 20 seconds of active contact. Similarly, (iv) the Contacts network [37] was generated from the interactions of individuals in an office environment. (v) Forum [38] comprises the activity data of university students on an online social forum system. (vi) CollegeMsg [39] records the private messages among students on an online social platform. Finally, (vii) Eu-Email [40] was constructed from the e-mails exchanged among the members of European research institutions.
Baselines. We compare the performance of our method with three baselines. We include LDM with Poisson rate and node-specific biases [31, 41] since it is the static method with the formulation closest to ours. NODE2VEC (or N2V) [7], a very well-known GRL method, relies on the explicit generation of random walks and learns node embeddings from node proximities within these walks. In our experiments, we tune its parameters $(p, q)$ over $\{0.25, 0.5, 1, 2, 4\}$. Since it can also run on weighted networks, we additionally constructed a weighted graph based on the number of links through time and report the best score over both versions of the networks. CTDNE [42] is a dynamic node embedding approach performing temporal random walks over the network, but it cannot produce instantaneous node representations and yields embeddings only for a single given time. Therefore, we used the last time point of the training set to obtain its representations. We provide further details about the parameter settings of the baseline methods in the appendix.
For our method, we set the parameter $K = 25$ and the bin count $B = 100$ to have enough capacity to track node interactions. For the regularization term ($\lambda$) of the prior, we first mask 20% of the dyads in the optimization of Equation (5). Furthermore, we train the model starting with $\lambda = 10^{6}$ and reduce it to one-tenth every 100 epochs. This procedure is repeated until $\lambda = 10^{-6}$, and we choose the $\lambda$ value minimizing the log-likelihood of the masked pairs. The final embeddings are then obtained by performing this annealing strategy, without any mask, down to the chosen $\lambda$ value. We repeat this procedure 5 times and keep the best-performing model. The relative standard deviation across runs is always less than 0.5, and Figure 1a shows an illustrative example of tuning $\lambda$ over the Synthetic($\pi$) dataset with 5 random runs.
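The annealing procedure above amounts to sweeping a geometric grid of prior weights; a small sketch of that grid (our own code, not the authors'):

```python
def lambda_grid(start=1e6, stop=1e-6, factor=0.1):
    """Values of the prior weight lambda visited during annealing; the
    model is trained 100 epochs at each value, and the value scoring best
    on the masked pairs is kept."""
    grid, lam = [], start
    while lam > stop * 0.5:          # tolerance against float drift
        grid.append(lam)
        lam *= factor
    return grid

print(len(lambda_grid()))  # 13 values: 1e6, 1e5, ..., 1e-6
```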
For the performance comparison of the methods, we report the Area Under Curve (AUC) scores for the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves [43]. For LDM and PIVEM, we compute the intensity of a given instance as the similarity measure of the node pair. Since NODE2VEC and CTDNE rely on the SkipGram architecture [44], we use cosine similarity for them.
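For reference, the AUC-ROC equals the probability that a positive instance outranks a negative one; a minimal implementation of that rank statistic:

```python
def roc_auc(pos_scores, neg_scores):
    """Probability that a random positive scores above a random negative,
    counting ties as 1/2; equal to the area under the ROC curve."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(roc_auc([0.9, 0.8], [0.1, 0.2]))  # 1.0 (perfect separation)
print(roc_auc([0.5, 0.5], [0.5, 0.5]))  # 0.5 (chance level)
```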
Network Reconstruction. Our goal is to see how accurately a model can capture the interaction patterns among nodes and generate embeddings exhibiting their temporal relationships in a latent space. In this regard, we train the models on the residual network and generate sample sets as described previously. The performance of the models is reported in Table 1. Comparing PIVEM against the baselines, we observe favorable results across all networks, highlighting PIVEM's ability to account for and detect structure in a continuous-time manner.
Table 2: The performance evaluation for the network completion experiment over various datasets.
<table><tr><td rowspan="2"/><td colspan="2">Synthetic(π)</td><td colspan="2">Synthetic $\left( \mu \right)$</td><td colspan="2">College</td><td colspan="2">Contacts</td><td colspan="2">Email</td><td colspan="2">Forum</td><td colspan="2">Hypertext</td></tr><tr><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td></tr><tr><td>LDM</td><td>.535</td><td>.529</td><td>.646</td><td>.631</td><td>.931</td><td>.926</td><td>.836</td><td>.799</td><td>.948</td><td>.942</td><td>.863</td><td>.858</td><td>.761</td><td>.738</td></tr><tr><td>NODE2VEC</td><td>.519</td><td>.511</td><td>.747</td><td>.677</td><td>.685</td><td>.637</td><td>.787</td><td>.744</td><td>.818</td><td>.777</td><td>.635</td><td>.592</td><td>.596</td><td>.588</td></tr><tr><td>CTDNE</td><td>.522</td><td>.527</td><td>.499</td><td>.503</td><td>.647</td><td>.599</td><td>.658</td><td>.656</td><td>.571</td><td>.593</td><td>.617</td><td>.592</td><td>.464</td><td>.485</td></tr><tr><td>PIVEM</td><td>.750</td><td>.696</td><td>.874</td><td>.851</td><td>.935</td><td>.934</td><td>.873</td><td>.864</td><td>.951</td><td>.953</td><td>.879</td><td>.875</td><td>.770</td><td>.712</td></tr></table>
Table 3: The performance evaluation for the link prediction experiment over various datasets.
<table><tr><td rowspan="2"/><td colspan="2">Synthetic(π)</td><td colspan="2">Synthetic $\left( \mu \right)$</td><td colspan="2">College</td><td colspan="2">Contacts</td><td colspan="2">Email</td><td colspan="2">Forum</td><td colspan="2">Hypertext</td></tr><tr><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td><td>ROC</td><td>PR</td></tr><tr><td>LDM</td><td>.562</td><td>.539</td><td>.498</td><td>.642</td><td>.951</td><td>.944</td><td>.860</td><td>.835</td><td>.954</td><td>.948</td><td>.909</td><td>.897</td><td>.819</td><td>.797</td></tr><tr><td>NODE2VEC</td><td>.518</td><td>.506</td><td>.498</td><td>.502</td><td>.705</td><td>.676</td><td>.783</td><td>.716</td><td>.825</td><td>.807</td><td>.635</td><td>.605</td><td>.748</td><td>.739</td></tr><tr><td>CTDNE</td><td>.514</td><td>.526</td><td>.457</td><td>.481</td><td>.666</td><td>.643</td><td>.632</td><td>.623</td><td>.629</td><td>.629</td><td>.621</td><td>.599</td><td>.508</td><td>.532</td></tr><tr><td>PIVEM</td><td>.716</td><td>.689</td><td>.474</td><td>.485</td><td>.891</td><td>.887</td><td>.876</td><td>.884</td><td>.964</td><td>.964</td><td>.894</td><td>.895</td><td>.756</td><td>.767</td></tr></table>
Network Completion. The network completion experiment is a more challenging task than reconstruction. Since we hide 10% of the network, dyads that do contain events are also viewed as non-link pairs, and the temporal models should place such nodes in distant locations of the embedding space. However, it might still be possible to predict these events accurately if the network links form temporal triangle patterns over certain time intervals. In Table 2, we report the AUC-ROC and AUC-PR scores for the network completion experiment. Once more, PIVEM outperforms the baselines, in most cases significantly. We again find evidence supporting the necessity of modeling and tracking temporal networks with time-evolving embedding representations.
Future Prediction. Finally, we examine the performance of the models in the future prediction task. Here, the models are asked to forecast the final 10% of the timeline. For PIVEM, the similarity between nodes is obtained by calculating the intensity function over the timeline of the training set (i.e., from 0 to 0.9), and we keep our previously described strategies for the baselines since they generate embeddings only for the last training time. Table 3 presents the performance of the models. It is noteworthy that while PIVEM outperforms the baselines significantly on the Synthetic($\pi$) network, it does not show promising results on Synthetic($\mu$). Since the first network is compatible with our model, PIVEM successfully learns its dominant link pattern. However, the second network conflicts with our model: it forms a completely different structure in each sub-interval of length 0.1. For the real datasets, we observe mostly on-par results, especially with LDM, since some real networks contain link patterns that become effectively "static" with respect to the future prediction task.
We have previously described how we set the prior coefficient $\lambda$; we now examine the influence of the other hyper-parameters on network reconstruction over the Synthetic($\pi$) dataset.
Influence of dimension size ($D$). We report the AUC-ROC and AUC-PR scores in Figure 1b. When we increase the dimension size, we observe a consistent increase in performance. It is not a surprising
*(Figure 1: hyper-parameter analysis on Synthetic($\pi$): (a) tuning of the prior coefficient $\lambda$, (b) influence of the dimension size $D$, (c) influence of the bin count $B$.)*
Figure 2: Comparisons of the ground truth and learned representations in two-dimensional space.
result because the model's capacity also increases with the dimension. However, the two-dimensional space still provides comparable performance in the experiments, facilitating human insight into networks' complex, evolving structures.
Influence of bin count ($B$). Figure 1c demonstrates the effect of the number of bins on the network reconstruction task. We generated the Synthetic($\pi$) network using 100 bins, and the performance can be seen to stabilize around $2^6$ bins, which indicates that PIVEM reaches sufficient capacity to model the interactions among the nodes.
Latent Embedding Animation. Although many GRL methods show high performance in downstream tasks, they generally require high-dimensional spaces, so a post-processing step has to be applied in order to visualize the node representations in a low-dimensional space. However, such processing distorts the embeddings, which can lead a practitioner to inaccurate conclusions about the data.
As we have seen in the experimental evaluations, our proposed approach successfully learns embeddings in two-dimensional space, and it also produces continuous-time representations. Therefore, it offers the ability to animate how the network evolves through time, which can play a crucial role in grasping the underlying characteristics of a network. As an illustrative example, Figure 2 compares the ground truth representations of Synthetic($\pi$) with the learned ones. The synthetic network consists of small communities of 5 nodes each, indicated by color. Although the problem does not have a unique solution, it can be seen that our model successfully captures the clustering patterns in the network. We refer the reader to the supplementary materials for the full animation.
## 5 Conclusion and Limitations
In this paper, we have proposed a novel continuous-time dynamic network embedding approach, the Piecewise-Velocity Model (PIVEM). Its performance has been examined in various experiments, such as network reconstruction and completion tasks, over various networks and against well-known baselines. We demonstrated that it can accurately embed the nodes into a two-dimensional space. Therefore, it can be directly utilized to animate the learned node embeddings, which can be beneficial for extracting a network's underlying characteristics and foreseeing how it will evolve through time. We also showed that the model scales to large networks.
Although our model successfully learns continuous-time representations, the GP structure with an RBF kernel limits the temporal patterns it can capture. Therefore, we plan to employ different kernels in the prior, such as periodic kernels, instead of the RBF. The optimization strategy of the proposed method might also be improved to better escape local minima. As a possible future direction, the algorithm can be adapted to other graph types, such as directed and multi-layer networks.
## References
[1] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45(2): 167-256, 2003. 1
[2] Bomin Kim, Kevin H Lee, Lingzhou Xue, and Xiaoyue Niu. A review of dynamic network models with latent variables. Statistics surveys, 12:105, 2018. 2
[3] Katsuhiko Ishiguro, Tomoharu Iwata, Naonori Ueda, and Joshua Tenenbaum. Dynamic infinite relational model for time-varying relational data analysis. NeurIPS, 23, 2010. 2
[4] Tue Herlau, Morten Mørup, and Mikkel Schmidt. Modeling temporal evolution and multiscale structure in networks. In International Conference on Machine Learning, pages 960-968. PMLR, 2013. 2, 3
[5] Creighton Heaukulani and Zoubin Ghahramani. Dynamic probabilistic models for latent feature propagation in social networks. In International Conference on Machine Learning, pages 275-283. PMLR, 2013. 2
[6] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In KDD, pages 701-710, 2014. 3, 13
[7] Aditya Grover and Jure Leskovec. Node2Vec: Scalable feature learning for networks. In KDD, pages 855-864, 2016. 3, 7, 13
[8] Daniele Durante and David Dunson. Bayesian logistic gaussian process models for dynamic networks. In Artificial Intelligence and Statistics, pages 194-201. PMLR, 2014. 3
[9] Daniele Durante and David B Dunson. Locally adaptive dynamic networks. The Annals of Applied Statistics, 10(4):2203-2232, 2016. 3
[10] Bomin Kim, Kevin Lee, Lingzhou Xue, and Xiaoyue Niu. A review of dynamic network models with latent variables, 2017. URL https://arxiv.org/abs/1711.10421. 3
[11] Daniel Sewell and Yuguo Chen. Latent space models for dynamic networks. JASA, 110, 2015. 2, 3
[12] Charles Blundell, Jeff Beck, and Katherine A Heller. Modelling reciprocating relationships with hawkes processes. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger, editors, NeurIPS, volume 25. Curran Associates, Inc., 2012. 2
[13] Makan Arastuie, Subhadeep Paul, and Kevin S. Xu. CHIP: A Hawkes process model for continuous-time networks with scalable and consistent estimation, 2019. URL https://arxiv.org/abs/1908.06940. 2
[14] Sylvain Delattre, Nicolas Fournier, and Marc Hoffmann. Hawkes processes on large networks. The Annals of Applied Probability, 26(1):216 - 261, 2016. 2
[15] Xuhui Fan, Bin Li, Feng Zhou, and Scott Sisson. Continuous-time edge modelling using non-parametric point processes. NeurIPS, 34:2319-2330, 2021. 2, 3
[16] Rakshit S. Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. Dyrep: Learning representations over dynamic graphs. In ICLR, 2019. 2
[17] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael M. Bronstein. Temporal graph networks for deep learning on dynamic graphs. CoRR, abs/2006.10637, 2020. URL https://arxiv.org/abs/2006.10637. 2, 3
[18] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social networks, 5(2):109-137, 1983. 2
[19] Krzysztof Nowicki and Tom A. B Snijders. Estimation and prediction for stochastic blockstruc-tures. JASA, 96(455):1077-1087, 2001. 2
[20] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83-90, 1971. 2
[21] Alan G. Hawkes. Point spectra of some mutually exciting point processes. J. R. Stat. Soc, 33 (3):438-443, 1971. 2
[22] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In AAAI, page 381-388, 2006. 2
[23] Creighton Heaukulani and Zoubin Ghahramani. Dynamic probabilistic models for latent feature propagation in social networks. In Sanjoy Dasgupta and David McAllester, editors, PMLR, volume 28, pages 275-283, 2013. 3
[24] Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. Dyrep: Learning representations over dynamic graphs. In ICLR, 2019. 3
[25] Peter D Hoff, Adrian E Raftery, and Mark S Handcock. Latent space approaches to social network analysis. JASA, 97(460):1090-1098, 2002. 3
[26] Nikolaos Nakis, Abdulkadir Çelikkanat, Sune Lehmann Jørgensen, and Morten Mørup. A hierarchical block distance model for ultra low-dimensional graph representations, 2022. 3
[27] Nikolaos Nakis, Abdulkadir Çelikkanat, and Morten Mørup. HM-LDM: A hybrid-membership latent distance model, 2022. 3, 4, 5
[28] Purnamrita Sarkar and Andrew Moore. Dynamic social network analysis using latent space models. In Y. Weiss, B. Schölkopf, and J. Platt, editors, NeurIPS, volume 18, 2005. 3
[29] Roy L. Streit. The Poisson Point Process, pages 11-55. Springer US, Boston, MA, 2010. 4
[30] Maximillian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In NIPS, volume 30, 2017. 4
[31] Pavel N. Krivitsky, Mark S. Handcock, Adrian E. Raftery, and Peter D. Hoff. Representing degree distributions, clustering, and homophily in social networks with latent cluster random effects models. Social Networks, 31(3):204-213, 2009. 5, 7, 13
[32] Nikolaos Nakis, Abdulkadir Çelikkanat, Sune Lehmann Jørgensen, and Morten Mørup. A hierarchical block distance model for ultra low-dimensional graph representations, 2022. 5
|
| 268 |
+
|
| 269 |
+
[33] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005. 5
|
| 270 |
+
|
| 271 |
+
[34] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.6,13
|
| 272 |
+
|
| 273 |
+
[35] C.C. Aggarwal. Linear Algebra and Optimization for Machine Learning: A Textbook. Springer International Publishing, 2020. 6, 15
|
| 274 |
+
|
| 275 |
+
[36] Lorenzo Isella, Juliette Stehlé, Alain Barrat, Ciro Cattuto, Jean-François Pinton, and Wouter Van den Broeck. What's in a crowd? analysis of face-to-face behavioral networks. Journal of Theoretical Biology, 271(1):166-180, 2011. 7, 13
|
| 276 |
+
|
| 277 |
+
[37] Mathieu Génois and Alain Barrat. Can co-location be used as a proxy for face-to-face contacts? EPJ Data Science, 7(1):11, May 2018. 7, 13
|
| 278 |
+
|
| 279 |
+
[38] Tore Opsahl. Triadic closure in two-mode networks: Redefining the global and local clustering coefficients. Social Networks, 35, 06 2010. 7, 13
|
| 280 |
+
|
| 281 |
+
[39] Pietro Panzarasa, Tore Opsahl, and Kathleen M. Carley. Patterns and dynamics of users' behavior and interaction: Network analysis of an online community. Journal of the American Society for Information Science and Technology, 60(5):911-932, 2009. 7, 13
|
| 282 |
+
|
| 283 |
+
[40] Ashwin Paranjape, Austin R. Benson, and Jure Leskovec. Motifs in temporal networks. page 601-610, 2017. 7, 13
|
| 284 |
+
|
| 285 |
+
[41] Peter D Hoff. Bilinear mixed-effects models for dyadic data. JASA, 100(469):286-295, 2005. 7, 13
|
| 286 |
+
|
| 287 |
+
[42] Giang Hoang Nguyen, John Boaz Lee, Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, and Sungchul Kim. Continuous-time dynamic network embeddings. In The Web Conf, page 969-976, 2018.7,13
|
| 288 |
+
|
| 289 |
+
[43] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011. 7
|
| 290 |
+
|
| 291 |
+
[44] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111-3119, 2013.7,13
|
| 292 |
+
|
| 293 |
+
## A Appendix
### A.1 Experiments
We consider all networks used in the experiments as undirected, and the event times of the links are scaled to the interval $\left\lbrack {0,1}\right\rbrack$ for consistency across experiments. We use the finest resolution level of the given input timestamps, such as seconds or milliseconds. We provide a brief summary of the networks below, and various statistics are reported in Table 4. The distribution of the networks' events through time is depicted in Figure 3.
Synthetic datasets. We generate two artificial networks in order to evaluate the behavior of the models in controlled experimental settings. (i) Synthetic $\left( \pi \right)$ is sampled from the prior distribution stated in Subsection 3.2. The hyper-parameters $\mathbf{\beta }, K$, and $B$ are set to 0, 20, and 100, respectively. (ii) Synthetic $\left( \mu \right)$ is constructed based on temporal block structures. The timeline is divided into 10 intervals, and the node set is split into 20 groups. The links within each group are sampled from the Poisson distribution with a constant intensity of 5.
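As an illustration, the Synthetic $\left( \mu \right)$ construction can be sketched as follows. This is a minimal sketch with hypothetical function and variable names; it assumes the stated intensity of 5 applies per unit time within each interval, so the per-interval event count is Poisson with mean $5 \cdot \Delta$:

```python
import numpy as np

def sample_synthetic_mu(n_nodes=100, n_groups=20, n_intervals=10,
                        intensity=5.0, seed=0):
    """Sample within-group events from a piecewise-constant Poisson process.

    For each pair of nodes in the same group and each interval, the event
    count is Poisson(intensity * interval_length), and the event times are
    uniform within the interval (a standard property of Poisson processes).
    """
    rng = np.random.default_rng(seed)
    groups = np.repeat(np.arange(n_groups), n_nodes // n_groups)
    bounds = np.linspace(0.0, 1.0, n_intervals + 1)
    events = {}  # (i, j) -> sorted array of event times in [0, 1]
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if groups[i] != groups[j]:
                continue  # only within-group pairs interact
            times = []
            for b in range(n_intervals):
                length = bounds[b + 1] - bounds[b]
                count = rng.poisson(intensity * length)
                times.extend(rng.uniform(bounds[b], bounds[b + 1], count))
            if times:
                events[(i, j)] = np.sort(times)
    return events

events = sample_synthetic_mu()
```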

Figure 3: Distribution of the links through time.
Table 4: Statistics of networks. $\left| \mathcal{V}\right| :$ Number of nodes, $M :$ Number of pairs having at least one link, $\left| \mathcal{E}\right|$ : Total number of links, ${\left| {\mathcal{E}}_{ij}\right| }_{\max }$ : Max. number of links a pair of nodes has.
<table><tr><td/><td>$\left| \mathcal{V}\right|$</td><td>$M$</td><td>$\left| \mathcal{E}\right|$</td><td>${\left| {\mathcal{E}}_{ij}\right| }_{max}$</td></tr><tr><td>Synthetic $\left( \mu \right)$</td><td>100</td><td>4,889</td><td>180,658</td><td>124</td></tr><tr><td>Synthetic ( $\pi$ )</td><td>100</td><td>3,009</td><td>22,477</td><td>32</td></tr><tr><td>College</td><td>1,899</td><td>13,838</td><td>59,835</td><td>184</td></tr><tr><td>Contacts</td><td>217</td><td>4,274</td><td>78,249</td><td>1,302</td></tr><tr><td>Hypertext</td><td>113</td><td>2,196</td><td>20,818</td><td>1,281</td></tr><tr><td>Email</td><td>986</td><td>16,064</td><td>332,334</td><td>4,992</td></tr><tr><td>Forum</td><td>899</td><td>7,036</td><td>33,686</td><td>171</td></tr></table>
Real networks. The (iii) Hypertext network [36] was built from the radio badge records capturing the interactions of conference attendees over 2.5 days, where each event time indicates 20 seconds of active contact. Similarly, the (iv) Contacts network [37] was generated from the interactions of individuals in an office environment. (v) Forum [38] comprises the activity data of university students on an online social forum system. The (vi) CollegeMsg network [39] contains the private messages exchanged among students on an online social platform. Finally, (vii) Eu-Email [40] was constructed from the e-mails exchanged among members of European research institutions.
Baselines. We compare the performance of our method against three baselines. We include the LDM with Poisson rate and node-specific biases [31, 41], since it is the static method with the formulation closest to ours. We randomly initialize the embeddings and bias terms, and train the model with the Adam optimizer [34] for 500 epochs with a learning rate of 0.1. NODE2VEC (or N2V) [7], a well-known GRL method, relies on the explicit generation of random walks starting from each node in the network; it then learns node embeddings following the SkipGram [44] algorithm, optimizing the softmax function over the produced node sequences for the nodes lying within a fixed window around a chosen center node. It extends the DEEPWALK method [6] by introducing two additional parameters that bias the random walks. In our experiments, we tune the model's parameters (p, q) over $\{ {0.25},{0.5},1,2,4\}$. Since it can run on weighted networks, we also constructed a weighted graph based on the number of links through time and report the best score over both versions of the networks. CTDNE [42] is a dynamic node embedding approach performing temporal random walks over the network; however, it cannot produce instantaneous node representations and yields embeddings only for a single given time, so we used the last time point of the training set to obtain the representations. We chose the recommended values for the common hyper-parameters of NODE2VEC and CTDNE, setting the number of walks, walk length, and window size to 10, 80, and 10, respectively. We used the implementation provided by the StellarGraph Python package to produce the embeddings for CTDNE.
Optimization of the proposed approach. Our objective given in Equation (5) is not a convex function, so the learning strategy we follow is of great importance for escaping local minima and for the quality of the representations. We start by randomly initializing the model's hyper-parameters from $\left\lbrack {-1,1}\right\rbrack$, except for the velocity tensor, which is initially set to 0. We adopt a sequential strategy in learning these parameters. In other words, we first optimize the initial position and bias terms together, $\left\{ {{\mathbf{x}}^{\left( 0\right) },\mathbf{\beta }}\right\}$, for a given number of epochs; then, we include the velocity tensor, $\{ \mathbf{v}\}$, in the optimization process and repeat the training for the same number of epochs. Finally, we add the prior parameters and learn all model hyper-parameters together. We employ the Adam optimizer [34] with a learning rate of 0.1.
For our method, we set the parameter $K = {25}$ and the bin count $B = {100}$ to have enough capacity to track node interactions. For the regularization term $\left( \lambda \right)$ of the prior, we first mask ${20}\%$ of the dyads in the optimization of Equation (5). We then train the model starting with $\lambda = {10}^{6}$ and reduce it to one-tenth after every 100 epochs. This procedure is repeated until $\lambda = {10}^{-6}$, and we choose the $\lambda$ value minimizing the log-likelihood of the masked pairs. The final node embeddings are then obtained by performing this annealing strategy without any mask up to the chosen $\lambda$ value. We repeat this procedure 5 times with different initializations and consider the best-performing run to learn the embeddings. The relative standard deviation of the experiments is always less than 0.5 for all the networks, and we depict the negative log-likelihood of the masked pairs for the annealing strategy with 5 random runs in Figure 4.
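The annealing schedule above can be sketched as follows. This is a minimal sketch in which `train_step` and `masked_nll` are hypothetical placeholders for the model-specific optimization epoch and the held-out (masked-dyad) evaluation:

```python
import numpy as np

def anneal_lambda(train_step, masked_nll, lam_hi=1e6, lam_lo=1e-6,
                  epochs_per_level=100, factor=0.1):
    """Anneal the prior weight lambda from lam_hi down to lam_lo, reducing it
    to one-tenth every `epochs_per_level` epochs, and return the value that
    minimizes the negative log-likelihood of the masked pairs.

    `train_step(lam)` runs one optimization epoch with weight `lam`;
    `masked_nll()` evaluates the held-out (masked) dyads.
    """
    best_lam, best_nll = None, np.inf
    lam = lam_hi
    while lam >= lam_lo * (1 - 1e-12):  # tolerance for float drift
        for _ in range(epochs_per_level):
            train_step(lam)
        nll = masked_nll()
        if nll < best_nll:
            best_lam, best_nll = lam, nll
        lam *= factor
    return best_lam, best_nll
```

The chosen `best_lam` is then reused for a final run without the mask, mirroring the procedure described above.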

Figure 4: Negative log-likelihood of the masked pairs for the annealing strategy applied for tuning $\lambda$ parameter with 5 random runs.
### A.2 Computational Problems and Model Complexity
Log-likelihood function. Note that we need to evaluate the log-intensity term in Equation 5 for each pair $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ and event time ${e}_{ij} \in {\mathcal{E}}_{ij}$. Therefore, the computational cost required for the whole network is bounded by $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}\left| \mathcal{E}\right| }\right)$. However, we can alleviate this by computing certain coefficients once at the beginning of the optimization process. If we define ${\alpha }_{ij} \mathrel{\text{:=}} \left( {{e}_{ij} - {\Delta }_{B}\left( {{b}^{ * } - 1}\right) }\right)$, then the sum over the set ${\mathcal{E}}_{ij}^{{b}^{ * }}$ of all events lying inside the ${b}^{ * }$'th bin (i.e., the events in $\left\lbrack {{\Delta }_{B}\left( {{b}^{ * } - 1}\right) ,{\Delta }_{B}{b}^{ * }}\right)$) can be rewritten as:
$$
\mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}\log {\lambda }_{ij}\left( {e}_{ij}\right) = \mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}\left( {\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( {e}_{ij}\right) - {\mathbf{r}}_{j}\left( {e}_{ij}\right) \end{Vmatrix}}^{2}\right)
$$

$$
= \mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}\left( {\beta }_{i} + {\beta }_{j}\right) - \mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}{\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{b = 1}^{{b}^{ * } - 1}\Delta {\mathbf{v}}_{ij}^{\left( b\right) } + \Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }{\alpha }_{ij}\end{Vmatrix}}^{2}
$$

$$
= \mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}\left( {\beta }_{i} + {\beta }_{j}\right) - \mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}\left( {\alpha }_{ij}^{2}{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }\end{Vmatrix}}^{2} + {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{b = 1}^{{b}^{ * } - 1}\Delta {\mathbf{v}}_{ij}^{\left( b\right) }\end{Vmatrix}}^{2} + 2{\alpha }_{ij}\left\langle \Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{b = 1}^{{b}^{ * } - 1}\Delta {\mathbf{v}}_{ij}^{\left( b\right) },\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }\right\rangle \right)
$$

$$
= \left| {\mathcal{E}}_{ij}^{{b}^{ * }}\right| \left( {\beta }_{i} + {\beta }_{j}\right) - {\alpha }_{2}^{\left( {b}^{ * }\right) }{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }\end{Vmatrix}}^{2} - \left| {\mathcal{E}}_{ij}^{{b}^{ * }}\right| {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{b = 1}^{{b}^{ * } - 1}\Delta {\mathbf{v}}_{ij}^{\left( b\right) }\end{Vmatrix}}^{2} - 2{\alpha }_{1}^{\left( {b}^{ * }\right) }\left\langle \Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{b = 1}^{{b}^{ * } - 1}\Delta {\mathbf{v}}_{ij}^{\left( b\right) },\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }\right\rangle
$$
where ${\alpha }_{1}^{\left( {b}^{ * }\right) } \mathrel{\text{:=}} \mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}{\alpha }_{ij}$ and ${\alpha }_{2}^{\left( {b}^{ * }\right) } \mathrel{\text{:=}} \mathop{\sum }\limits_{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}{\alpha }_{ij}^{2}$. By following the same strategy for each bin, the computational complexity can be reduced to $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}B}\right)$.
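The per-bin coefficients can be cached once per node pair before training starts. A minimal sketch (hypothetical function name; it assumes event times already scaled to $[0, T]$):

```python
import numpy as np

def precompute_bin_coefficients(event_times, B=100, T=1.0):
    """Precompute, for one node pair, the per-bin statistics used in the
    closed-form log-intensity sum: the event count |E^b|, and the moments
    alpha1^(b) = sum_e alpha_e and alpha2^(b) = sum_e alpha_e^2, with
    alpha_e = e - Delta_B * (b - 1) for events e falling in bin b.
    """
    delta = T / B
    e = np.asarray(event_times, dtype=float)
    bins = np.minimum((e / delta).astype(int), B - 1)  # 0-based bin index
    alpha = e - delta * bins                           # offset inside the bin
    counts = np.bincount(bins, minlength=B)
    alpha1 = np.bincount(bins, weights=alpha, minlength=B)
    alpha2 = np.bincount(bins, weights=alpha**2, minlength=B)
    return counts, alpha1, alpha2
```

With these arrays in hand, each evaluation of the log-intensity sum costs $\mathcal{O}(B)$ per pair instead of $\mathcal{O}(|\mathcal{E}_{ij}|)$.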
Since we use the squared Euclidean distance in the integral term of our objective, we can derive an exact formula for its computation (please see Lemma A.3 for the details). We need to evaluate it for all node pairs, so it requires at most $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{2}\right)$ operations. Hence, the complexity of the log-likelihood function is $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}B}\right)$. Instead of optimizing over the whole network at once, we apply a batching strategy over the set of nodes in order to reduce the memory requirements, sampling $\mathcal{S}$ nodes for each epoch. Hence, the overall complexity of the log-likelihood is $\mathcal{O}\left( {{\mathcal{S}}^{2}B\mathcal{I}}\right)$ where $\mathcal{I}$ is the number of epochs.
Computation of the prior function. The covariance matrix, $\boldsymbol{\Sigma} \in {\mathbb{R}}^{{BND} \times {BND}}$, of the prior is defined by $\boldsymbol{\Sigma} \mathrel{\text{:=}} {\lambda }^{2}{\left( {\sigma }_{\boldsymbol{\Sigma}}^{2}\mathbf{I} + \mathbf{K}\right) }^{-1}$ with a scaling factor $\lambda \in \mathbb{R}$ and a noise variance ${\sigma }_{\boldsymbol{\Sigma}}^{2} \in {\mathbb{R}}^{ + }$. The multivariate normal distribution is thus parametrized by a noise term ${\sigma }_{\boldsymbol{\Sigma}}^{2}\mathbf{I}$ and a matrix $\mathbf{K} \in {\mathbb{R}}^{{BND} \times {BND}}$ having a low-rank form. In other words, $\mathbf{K}$ is written as $\mathbf{B} \otimes \mathbf{C} \otimes \mathbf{D}$, where $\mathbf{B}$ is a block-diagonal matrix combining the parameter ${c}_{{\mathbf{x}}^{0}}$ with the RBF kernel $\exp \left( {-{\left( {c}_{b} - {c}_{{b}^{\prime }}\right) }^{2}/{\sigma }_{\mathbf{B}}^{2}}\right) \in {\mathbb{R}}^{B \times B}$ over the bin centers ${c}_{b} \mathrel{\text{:=}} \left( {{t}_{b - 1} + {t}_{b}}\right) /2$. The matrix aiming to capture the node interactions, $\mathbf{C} \mathrel{\text{:=}} {\mathbf{{QQ}}}^{\top } \in {\mathbb{R}}^{N \times N}$, is defined through a low-rank matrix $\mathbf{Q} \in {\mathbb{R}}^{N \times k}$ whose rows equal $1$ $\left( {k \ll N}\right)$, and we set $\mathbf{D} \mathrel{\text{:=}} \mathbf{I}{\mathbf{I}}^{\top } \in {\mathbb{R}}^{D \times D}$. By considering the Cholesky decomposition [35] $\mathbf{B} \mathrel{\text{:=}} {\mathbf{{LL}}}^{\top }$, which exists since $\mathbf{B}$ is symmetric positive semi-definite, we can factorize $\mathbf{K} \mathrel{\text{:=}} {\mathbf{K}}_{f}{\mathbf{K}}_{f}^{\top }$ where ${\mathbf{K}}_{f} \mathrel{\text{:=}} \mathbf{L} \otimes \mathbf{Q} \otimes \mathbf{I}$.
Note that the precision matrix, $\boldsymbol{\Sigma}^{-1}$, can be written using the Woodbury matrix identity [35] as follows:
$$
\boldsymbol{\Sigma}^{-1} = {\lambda }^{-2}{\left( {\sigma }_{\boldsymbol{\Sigma}}^{2}\mathbf{I} + {\mathbf{K}}_{f}{\mathbf{K}}_{f}^{\top }\right) }^{-1} = {\lambda }^{-2}\left( {\sigma }_{\boldsymbol{\Sigma}}^{-2}\mathbf{I} - {\sigma }_{\boldsymbol{\Sigma}}^{-2}{\mathbf{K}}_{f}{\mathbf{R}}^{-1}{\mathbf{K}}_{f}^{\top }{\sigma }_{\boldsymbol{\Sigma}}^{-2}\right)
$$
where the capacitance matrix $\mathbf{R} \mathrel{\text{:=}} {\mathbf{I}}_{BKD} + {\sigma }_{\boldsymbol{\Sigma}}^{-2}{\mathbf{K}}_{f}^{\top }{\mathbf{K}}_{f}$.
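The Woodbury identity can be verified numerically on a small instance of this Kronecker-structured low-rank form. The dimensions below are hypothetical stand-ins for $B$, $N$, $D$, and $K$, and the factors are random rather than model-derived:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical small dimensions; the model's B, N, D, K are much larger.
B, N, D, k = 3, 4, 2, 2
sigma2 = 0.7                                  # noise variance sigma_Sigma^2
L = np.tril(rng.random((B, B))) + np.eye(B)   # stand-in Cholesky factor of B
Q = rng.random((N, k))                        # low-rank factor of C = Q Q^T
Kf = np.kron(L, np.kron(Q, np.eye(D)))        # K_f = L (x) Q (x) I_D

# Direct inverse of the large (BND x BND) matrix.
direct = np.linalg.inv(sigma2 * np.eye(B * N * D) + Kf @ Kf.T)

# Woodbury inverse via the much smaller (BkD x BkD) capacitance matrix R.
R = np.eye(B * k * D) + (Kf.T @ Kf) / sigma2
woodbury = np.eye(B * N * D) / sigma2 - (Kf @ np.linalg.solve(R, Kf.T)) / sigma2**2
```

Only the small system involving $\mathbf{R}$ needs an explicit solve, which is what makes the prior evaluation tractable.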
The log-determinant of $\boldsymbol{\Sigma}$ can also be simplified by applying the matrix determinant lemma [35]:
$$
\log \left( \det \left( \boldsymbol{\Sigma}\right) \right) = \left( {BND}\right) \log \left( {\lambda }^{2}\right) - \log \left( \det \left( {\sigma }_{\boldsymbol{\Sigma}}^{2}{\mathbf{I}}_{BND} + {\mathbf{K}}_{f}{\mathbf{K}}_{f}^{\top }\right) \right)
$$

$$
= \left( {BND}\right) \log \left( {\lambda }^{2}\right) - \log \left( \det \left( {\mathbf{I}}_{BKD} + {\sigma }_{\boldsymbol{\Sigma}}^{-2}{\mathbf{K}}_{f}^{\top }{\mathbf{K}}_{f}\right) \right) - \left( {BND}\right) \log \left( {\sigma }_{\boldsymbol{\Sigma}}^{2}\right)
$$

$$
= \left( {BND}\right) \left( \log \left( {\lambda }^{2}\right) - \log \left( {\sigma }_{\boldsymbol{\Sigma}}^{2}\right) \right) - \log \left( \det \left( \mathbf{R}\right) \right)
$$
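This identity can likewise be checked numerically; the sizes below are hypothetical and `Kf` is a random low-rank factor rather than the model's Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical small sizes standing in for BND and BKD.
BND, BKD = 24, 12
lam, sigma2 = 2.0, 0.7
Kf = rng.random((BND, BKD))                   # low-rank factor K_f
R = np.eye(BKD) + (Kf.T @ Kf) / sigma2        # capacitance matrix

# Sigma = lam^2 (sigma2 I + Kf Kf^T)^{-1}; its log-determinant only needs
# the small determinant det(R), never the large BND x BND determinant.
Sigma = lam**2 * np.linalg.inv(sigma2 * np.eye(BND) + Kf @ Kf.T)
logdet_direct = np.linalg.slogdet(Sigma)[1]
logdet_lemma = BND * (np.log(lam**2) - np.log(sigma2)) - np.linalg.slogdet(R)[1]
```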
Note that the most demanding steps in the computation of the prior are the inverse and determinant calculations and some matrix multiplication operations. Since $\mathbf{R}$ is a matrix of size ${BKD} \times {BKD}$, its inverse and determinant can be found in at most $\mathcal{O}\left( {{B}^{3}{K}^{3}{D}^{3}}\right)$ operations. We also need the term ${\mathbf{K}}_{f}{\mathbf{R}}^{-1}$, which can be computed in $\mathcal{O}\left( {{B}^{3}{D}^{3}{K}^{2}\left| \mathcal{V}\right| }\right)$ steps, so the number of operations required for the prior can be bounded by $\mathcal{O}\left( {{B}^{3}{D}^{3}{K}^{2}\left| \mathcal{V}\right| }\right)$. It is worth noting that we cannot directly apply the batching strategy to the computation of the inverse of the capacitance matrix, $\mathbf{R}$. However, we can compute it once, utilize it in the calculation of the log-prior for different sets of node samples, and recompute it only when we decide to update the parameters again.
To sum up, the complexity of our proposed approach is $\mathcal{O}\left( {B\mathcal{I}{\mathcal{S}}^{2} + {B}^{3}{D}^{3}{K}^{2}\mathcal{S}\mathcal{I}}\right)$ where $\mathcal{S}$ is the batch size and $\mathcal{I}$ is the number of epochs.
### A.3 Theoretical Results
Lemma A.1. For given fixed bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ , the node embeddings, ${\left\{ {\mathbf{r}}_{i}\left( t\right) \right\} }_{i \in \mathcal{V}}$ , learned by optimizing the objective function given in Equation 1 satisfy
$$
\left| \frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}{dt}\right| \leq \sqrt{\left( {\beta }_{i} + {\beta }_{j}\right) - \log \left( {p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }\right) }\;\text{ for all }\left( i, j\right) \in {\mathcal{V}}^{2}
$$
where ${p}_{ij}$ is the probability of having at least ${m}_{ij}$ links between $i$ and $j$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$.
Proof. Let ${X}_{ij} \mathrel{\text{:=}} \left| {{\mathcal{E}}_{ij}\left\lbrack {{t}_{l},{t}_{u}}\right) }\right|$ be the number of links between nodes $i, j \in \mathcal{V}$ following a nonhomogeneous Poisson process with intensity function, ${\lambda }_{ij}\left( t\right)$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ . By Markov’s inequality, it can be written that
$$
{p}_{ij} \mathrel{\text{:=}} \mathbb{P}\left\{ {X}_{ij} \geq {m}_{ij}\right\} \leq \frac{\mathbb{E}\left\lbrack {X}_{ij}\right\rbrack }{{m}_{ij}}
$$

$$
= \frac{1}{{m}_{ij}}{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}\right) {dt}
$$

$$
= \frac{1}{{m}_{ij}}\exp \left( {\beta }_{i} + {\beta }_{j}\right) {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( -{\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}\right) {dt}
$$

$$
\leq \frac{1}{{m}_{ij}}\left( {t}_{u} - {t}_{l}\right) \exp \left( {\beta }_{i} + {\beta }_{j}\right) \exp \left( -\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}{dt}\right)
$$

$$
\leq \frac{1}{{m}_{ij}}\left( {t}_{u} - {t}_{l}\right) \exp \left( {\beta }_{i} + {\beta }_{j}\right) \exp \left( -\frac{1}{{\left( {t}_{u} - {t}_{l}\right) }^{2}}{\left( {\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}{dt}\right) }^{2}\right)
$$
where the last two lines follow from Jensen's inequality. Finally, it can be concluded that
$$
\left| \frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}{dt}\right| \leq \sqrt{\log \left( \exp \left( {\beta }_{i} + {\beta }_{j}\right) \frac{\left( {t}_{u} - {t}_{l}\right) }{{m}_{ij}{p}_{ij}}\right) }
$$

$$
= \sqrt{\left( {\beta }_{i} + {\beta }_{j}\right) - \log \left( {p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }\right) }
$$
Theorem A.2. Let $\mathbf{f}\left( t\right) : \left\lbrack {0, T}\right\rbrack \rightarrow {\mathbb{R}}^{D}$ be a continuous embedding of a node. For any given $\epsilon > 0$, there exists a continuous, piecewise-linear node embedding, $\mathbf{r}\left( t\right)$, satisfying $\parallel \mathbf{f}\left( t\right) - \mathbf{r}\left( t\right) {\parallel }_{2} < \epsilon$ for all $t \in \left\lbrack {0, T}\right\rbrack$, where $\mathbf{r}\left( t\right) \mathrel{\text{:=}} {\mathbf{r}}^{\left( b\right) }\left( t\right)$ for all $\left( {b - 1}\right) {\Delta }_{B} \leq t < b{\Delta }_{B}$, $\mathbf{r}\left( t\right) \mathrel{\text{:=}} {\mathbf{r}}^{\left( B\right) }\left( t\right)$ for $t = T$, and ${\Delta }_{B} = T/B$ for some $B \in {\mathbb{N}}^{ + }$.
Proof. Let $\mathbf{f}\left( t\right) : \left\lbrack {0, T}\right\rbrack \rightarrow {\mathbb{R}}^{D}$ be a continuous embedding so it is also uniformly continuous by the Heine-Cantor theorem since $\left\lbrack {0, T}\right\rbrack$ is a compact set. Then, we can find some $B \in {\mathbb{N}}^{ + }$ such that for every $t,\widetilde{t} \in \left\lbrack {0, T}\right\rbrack$ with $\left| {t - \widetilde{t}}\right| \leq {\Delta }_{B} \mathrel{\text{:=}} T/B$ implies $\parallel \mathbf{f}\left( t\right) - \mathbf{f}\left( \widetilde{t}\right) {\parallel }_{2} < \epsilon /2$ for any given $\epsilon > 0$ .
Let us define ${\mathbf{r}}^{\left( b\right) }\left( t\right) = {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + {\mathbf{v}}_{b}\left( {t - \left( {b - 1}\right) {\Delta }_{B}}\right)$ recursively for each $b \in \{ 1,\ldots , B\}$, where ${\mathbf{r}}^{\left( 0\right) }\left( 0\right) \mathrel{\text{:=}} {\mathbf{x}}_{0} = \mathbf{f}\left( 0\right)$ and ${\mathbf{v}}_{b} \mathrel{\text{:=}} \frac{\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }{{\Delta }_{B}}$. Then it can be seen that ${\mathbf{r}}^{\left( b\right) }\left( {b{\Delta }_{B}}\right) = \mathbf{f}\left( {b{\Delta }_{B}}\right)$ for all $b \in \{ 1,\ldots , B\}$ because
$$
{\mathbf{r}}^{\left( b\right) }\left( {b{\Delta }_{B}}\right) = {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + {\mathbf{v}}_{b}\left( {b{\Delta }_{B} - {\Delta }_{B}\left( {b - 1}\right) }\right)
$$

$$
= {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + {\mathbf{v}}_{b}{\Delta }_{B}
$$

$$
= {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + \left( \frac{\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }{{\Delta }_{B}}\right) {\Delta }_{B}
$$

$$
= {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }\right)
$$

$$
= {\mathbf{r}}^{\left( b - 2\right) }\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) + \left( {\mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) }\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }\right)
$$

$$
= {\mathbf{r}}^{\left( b - 2\right) }\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) }\right)
$$

$$
= \cdots
$$

$$
= {\mathbf{r}}^{\left( 0\right) }\left( 0\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( 0\right) }\right)
$$

$$
= \mathbf{f}\left( {b{\Delta }_{B}}\right)
$$
where the last line follows from the fact that ${\mathbf{r}}^{\left( 0\right) }\left( 0\right) = {\mathbf{x}}_{0} = \mathbf{f}\left( 0\right)$ by definition. Therefore, for any given point $t \in \lbrack 0, T)$ with $b = \left\lfloor {t/{\Delta }_{B}}\right\rfloor + 1$, it can be seen that
$$
\parallel \mathbf{f}\left( t\right) - \mathbf{r}\left( t\right) {\parallel }_{2} = {\begin{Vmatrix}\mathbf{f}\left( t\right) - {\mathbf{r}}^{\left( b\right) }\left( t\right) \end{Vmatrix}}_{2}
$$

$$
= {\begin{Vmatrix}\mathbf{f}\left( t\right) - \left( {\mathbf{r}}^{\left( b - 1\right) }\left( \left( b - 1\right) {\Delta }_{B}\right) + {\mathbf{v}}_{b}\left( t - \left( b - 1\right) {\Delta }_{B}\right) \right) \end{Vmatrix}}_{2}
$$

$$
= {\begin{Vmatrix}\mathbf{f}\left( t\right) - \left( {\mathbf{r}}^{\left( b - 1\right) }\left( \left( b - 1\right) {\Delta }_{B}\right) + \left( \frac{\mathbf{f}\left( b{\Delta }_{B}\right) - \mathbf{f}\left( \left( b - 1\right) {\Delta }_{B}\right) }{{\Delta }_{B}}\right) \left( t - \left( b - 1\right) {\Delta }_{B}\right) \right) \end{Vmatrix}}_{2}
$$

$$
= {\begin{Vmatrix}\left( \mathbf{f}\left( t\right) - {\mathbf{r}}^{\left( b - 1\right) }\left( \left( b - 1\right) {\Delta }_{B}\right) \right) - \left( \mathbf{f}\left( b{\Delta }_{B}\right) - \mathbf{f}\left( \left( b - 1\right) {\Delta }_{B}\right) \right) \left( \frac{t - \left( b - 1\right) {\Delta }_{B}}{{\Delta }_{B}}\right) \end{Vmatrix}}_{2}
$$

$$
\leq {\begin{Vmatrix}\mathbf{f}\left( t\right) - {\mathbf{r}}^{\left( b - 1\right) }\left( \left( b - 1\right) {\Delta }_{B}\right) \end{Vmatrix}}_{2} + {\begin{Vmatrix}\mathbf{f}\left( b{\Delta }_{B}\right) - \mathbf{f}\left( \left( b - 1\right) {\Delta }_{B}\right) \end{Vmatrix}}_{2}
$$

$$
< \frac{\epsilon }{2} + \frac{\epsilon }{2} = \epsilon
$$
where the inequality in the fifth line holds since $\left| \frac{t - \left( {b - 1}\right) {\Delta }_{B}}{{\Delta }_{B}}\right| \leq 1$.
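The construction in the proof can also be illustrated numerically: matching $\mathbf{f}$ at the bin boundaries and interpolating linearly in between makes the approximation error shrink as $B$ grows. The embedding `f` below is a hypothetical smooth curve, not a model output:

```python
import numpy as np

def piecewise_linear_approx(f, T=1.0, B=50):
    """Build the piecewise-linear embedding r(t) of Theorem A.2: r matches f
    at the bin boundaries b * Delta_B and interpolates linearly in between."""
    delta = T / B
    knots = f(np.arange(B + 1) * delta)    # f evaluated at all bin boundaries

    def r(t):
        b = min(int(t / delta), B - 1)     # bin index (b + 1 in the paper)
        w = (t - b * delta) / delta        # linear interpolation weight
        return (1 - w) * knots[b] + w * knots[b + 1]

    return r

# Hypothetical smooth embedding f: [0, 1] -> R^2 (a unit circle).
f = lambda t: np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=-1)
r = piecewise_linear_approx(f, B=50)
grid = np.linspace(0.0, 1.0, 1001)
max_err = max(np.linalg.norm(f(t) - r(t)) for t in grid)
```

For this curve with $B = 50$, the maximum deviation is already well below $10^{-2}$, and it decreases further as $B$ increases.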
Lemma A.3 (Integral Computation). The integral of the intensity function, ${\lambda }_{ij}\left( t\right)$, from ${t}_{l}$ to ${t}_{u}$ is equal to
$$
{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {\beta }_{ij} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}\right) \mathrm{d}t = \frac{\sqrt{\pi }\exp \left( {\beta }_{ij} + {r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}\right) }{2\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}{\left. \operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}}
$$
where ${\beta }_{ij} \mathrel{\text{:=}} {\beta }_{i} + {\beta }_{j}$, $\Delta {\mathbf{x}}_{ij} \mathrel{\text{:=}} {\mathbf{x}}_{i}^{\left( 0\right) } - {\mathbf{x}}_{j}^{\left( 0\right) }$, $\Delta {\mathbf{v}}_{ij} \mathrel{\text{:=}} {\mathbf{v}}_{i}^{\left( 1\right) } - {\mathbf{v}}_{j}^{\left( 1\right) }$, and ${r}_{ij} \mathrel{\text{:=}} \frac{\left\langle \Delta {\mathbf{v}}_{ij},\Delta {\mathbf{x}}_{ij}\right\rangle }{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}$.
Proof.
$$
{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( -{\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}\right) \mathrm{d}t = {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( -{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}^{2}{t}^{2} - 2\left\langle \Delta {\mathbf{x}}_{ij},\Delta {\mathbf{v}}_{ij}\right\rangle t - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}\right) \mathrm{d}t
$$

$$
= {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( -{\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) }^{2} + {r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}\right) \mathrm{d}t \tag{6}
$$
where ${r}_{ij} \mathrel{\text{:=}} \frac{\left\langle \Delta {\mathbf{v}}_{ij},\Delta {\mathbf{x}}_{ij}\right\rangle }{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}$. The substitution $u = \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}$ yields $\mathrm{d}u = \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}\mathrm{d}t$. Furthermore, we have
$$
{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) }^{2}}\right) \mathrm{d}t = \frac{1}{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}{\int }_{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{l} + {r}_{ij}}^{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{u} + {r}_{ij}}\exp \left( {-{u}^{2}}\right) \mathrm{d}u
$$
$$
= \frac{1}{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}\frac{\sqrt{\pi }}{2}\left( {\frac{2}{\sqrt{\pi }}{\int }_{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{l} + {r}_{ij}}^{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{u} + {r}_{ij}}\exp \left( {-{u}^{2}}\right) \mathrm{d}u}\right)
$$
$$
= {\left. \frac{\sqrt{\pi }}{2\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}}\operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}} \tag{7}
$$
By combining Equations 6 and 7, we obtain
$$
{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}}\right) \mathrm{d}t = \exp \left( {{r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) }^{2}}\right) \mathrm{d}t
$$
$$
= \frac{\sqrt{\pi }\exp \left( {{r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) }{2\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}}{\left. \operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}}
$$
Therefore, we can conclude that
$$
{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {{\beta }_{ij} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}}\right) \mathrm{d}t = \frac{\sqrt{\pi }\exp \left( {{\beta }_{ij} + {r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) }{2\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}}{\left. \operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}}
$$
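The closed-form expression above can be sanity-checked numerically. The sketch below (plain Python, with arbitrary toy values for $\beta_{ij}$, $\Delta\mathbf{x}_{ij}$, and $\Delta\mathbf{v}_{ij}$ that are not from the paper) compares the erf-based right-hand side against a midpoint-rule quadrature of the left-hand side.

```python
import math

def closed_form(beta, dx, dv, tl, tu):
    """Right-hand side: sqrt(pi) * exp(beta + r^2 - |dx|^2) / (2 |dv|)
    times the erf difference, with r = <dv, dx> / |dv|."""
    nv = math.sqrt(sum(v * v for v in dv))
    nx2 = sum(x * x for x in dx)
    r = sum(a * b for a, b in zip(dv, dx)) / nv
    coef = math.sqrt(math.pi) * math.exp(beta + r * r - nx2) / (2 * nv)
    return coef * (math.erf(nv * tu + r) - math.erf(nv * tl + r))

def numeric(beta, dx, dv, tl, tu, n=100000):
    """Left-hand side integral via the midpoint rule."""
    h = (tu - tl) / n
    total = 0.0
    for k in range(n):
        t = tl + (k + 0.5) * h
        d2 = sum((x + v * t) ** 2 for x, v in zip(dx, dv))
        total += math.exp(beta - d2)
    return total * h

beta, dx, dv = 0.3, (0.5, -1.0), (1.2, 0.4)   # arbitrary toy values
lhs = numeric(beta, dx, dv, 0.0, 2.0)
rhs = closed_form(beta, dx, dv, 0.0, 2.0)
```

The two values agree up to quadrature error, which shrinks quadratically in the step size of the midpoint rule.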
### A.4 Table of Symbols
The detailed list of the symbols used throughout the manuscript and their corresponding definitions can be found in Table 5.
Table 5: Table of symbols
| Symbol | Description |
|---|---|
| $\mathcal{G}$ | Graph |
| $\mathcal{V}$ | Vertex set |
| $\mathcal{E}$ | Edge set |
| $\mathcal{E}_{ij}$ | Edge set of node pair $(i, j)$ |
| $N$ | Number of nodes |
| $D$ | Dimension size |
| $\mathcal{I}_{T}$ | Time interval |
| $T$ | Time length |
| $B$ | Number of bins |
| $\beta_{i}$ | Bias term of node $i$ |
| $\mathbf{x}^{(0)}$ | Initial position matrix |
| $\mathbf{v}^{(b)}$ | Velocity matrix for bin $b$ |
| $\mathbf{r}_{i}(t)$ | Position of node $i$ at time $t$ |
| $\lambda_{ij}(t)$ | Intensity of node pair $(i, j)$ at time $t$ |
| $e_{ij}$ | An event time of node pair $(i, j)$ |
| $\mathbf{\Sigma}$ | Covariance matrix |
| $\lambda$ | Scaling factor of the covariance |
| $\sigma_{\Sigma}$ | Noise variance |
| $\sigma_{\mathbf{B}}$ | Lengthscale variable of the RBF kernel |
| $\otimes$ | Kronecker product |
| $\mathbf{I}$ | Identity matrix |
| $\mathbf{B}$ | Bin interaction matrix |
| $\mathbf{C}$ | Node interaction matrix |
| $\mathbf{D}$ | Dimension interaction matrix |
| $\mathbf{R}$ | Capacitance matrix |
| $K$ | Latent dimension of $\mathbf{C}$ |
papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_tex/Initial_manuscript.tex
§ PIECEWISE-VELOCITY MODEL FOR LEARNING CONTINUOUS-TIME DYNAMIC NODE REPRESENTATIONS
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
Networks have become indispensable and ubiquitous structures in many fields to model the interactions among different entities, such as friendships in social networks or protein interactions in biological graphs. A major challenge is to understand the structure and dynamics of these systems. Although networks evolve through time, most existing graph representation learning methods target only static networks. Whereas approaches have been developed for the modeling of dynamic networks, there is a lack of efficient continuous-time dynamic graph representation learning methods that can provide accurate network characterization and visualization in low dimensions while explicitly accounting for prominent network characteristics such as homophily and transitivity. In this paper, we propose the Piecewise-Velocity Model (PIVEM) for the representation of continuous-time dynamic networks. It learns dynamic embeddings in which the temporal evolution of nodes is approximated by piecewise linear interpolations based on a latent distance model with piecewise constant node-specific velocities. The model allows for analytically tractable expressions of the associated Poisson process likelihood with scalable inference invariant to the number of events. We further impose a scalable Kronecker-structured Gaussian Process prior on the dynamics, accounting for community structure, temporal smoothness, and disentangled (uncorrelated) latent embedding dimensions optimally learned to characterize the network dynamics. We show that PIVEM can successfully represent network structure and dynamics in ultra-low two- and three-dimensional embedding spaces. We further extensively evaluate the performance of the approach on various networks of different types and sizes and find that it outperforms existing relevant state-of-the-art methods in downstream tasks such as link prediction. In summary, PIVEM enables easily interpretable dynamic network visualizations and characterizations that can further improve our understanding of the intrinsic dynamics of time-evolving networks.
§ 1 INTRODUCTION
With technological advancements in data storage and production systems, we have witnessed massive growth of graph (or network) data in recent years, with many prominent examples including social, technological, and biological networks from diverse disciplines [1]. Graphs provide an elegant way to store and represent the interactions among data points, and machine learning techniques on graphs have thus gained considerable attention to extract meaningful information from these complex systems and perform various predictive tasks. In this regard, Graph Representation Learning (GRL) techniques have become a cornerstone in the field through their exceptional performance in many downstream tasks such as node classification and edge prediction. Unlike classical techniques relying on the extraction and design of handcrafted feature vectors peculiar to given networks, GRL approaches aim to design algorithms that automatically learn features optimally preserving various characteristics of networks in their induced latent space.
Although many networks evolve through time and are liable to modifications in structure with newly arriving nodes or emerging connections, GRL methods have primarily addressed static networks, in other words, a snapshot of the network at a specific time. However, recent years have seen increasing efforts toward modeling dynamic complex networks; see also [2] for a review. Whereas most approaches have concentrated their attention on discrete-time temporal networks, built upon a collection of time-stamped networks (cf. [3-11]), modeling of networks in continuous time has also been studied (cf. [12-15]). These approaches have been based on latent class [3, 4, 12-14] and latent feature modeling approaches [5-11, 15], including advanced dynamic graph neural network representations [16, 17].
Although these procedures have enabled the characterization of evolving networks useful for downstream tasks such as link prediction and node classification, existing dynamic latent feature models are either in discrete time or do not explicitly account for network homophily and transitivity in terms of their latent representations. Whereas latent class models typically provide interpretable representations at the level of groups, latent feature models in general rely on high-dimensional latent representations that are not easily amenable to visualization and interpretation. A further complication of most existing dynamic modeling approaches is their scaling, with complexity typically growing with the number of observed events and the number of network dyads.
This work addresses the problem of embedding nodes in a continuous-time latent space and seeks to accurately model network interaction patterns using low-dimensional scalable representations explicitly accounting for network homophily and transitivity. The main contributions of the paper can be summarized as follows:
* We propose a novel scalable GRL method, the Piecewise-Velocity Model (PIVEM), to flexibly learn continuous-time dynamic node representations.
* We present a framework balancing the trade-off between the smoothness of node trajectories in the latent space and model capacity accounting for the temporal evolution.
* We show that the PIVEM can embed nodes accurately in very low dimensional spaces, i.e., $D = 2$ , such that it serves as a dynamic network visualization tool facilitating human insights into networks' complex, evolving structures.
* The performance of the introduced approach is extensively evaluated in various downstream tasks, such as network reconstruction and link prediction. We show that it outperforms well-known baseline methods on a wide range of datasets. Besides, we propose an efficient model optimization strategy enabling PIVEM to scale to large networks.
Source code and other materials. The datasets, implementation of the method in Python, and all the generated animations can be found at the address: https://tinyurl.com/pivem.
§ 2 RELATED WORK
Work on the dynamic modeling of complex networks has attracted substantial attention in recent years and covers approaches for the modeling of dynamic structures at the level of groups (i.e., latent class models) and dynamic representation learning approaches based on latent feature models, including graph neural networks (GNNs). Whereas most attention has been given to discrete-time dynamic networks, a substantial body of work has also covered continuous-time modeling, as outlined below.
§ 2.1 DYNAMIC LATENT CLASS MODELS
Initial efforts for modeling continuously evolving networks have combined latent class models defined by stochastic block models [18, 19] with Hawkes processes [20, 21]. In the work of [12], co-dependent (through time) Hawkes processes were combined with the Infinite Relational Model [22] (Hawkes IRM), yielding a non-parametric Bayesian approach capable of expressing reciprocity between inferred groups of actors. A drawback of such a model is the computational cost of the imposed Markov chain Monte Carlo optimization, as well as its limitation to modeling only reciprocation effects. Scalability issues were addressed in [13] via the Block Hawkes Model (BHM), which utilizes variational inference and simplifies the Hawkes IRM model by associating only the inferred block structure pairs with a univariate point process. Recently, the BHM model was extended to decouple interactions between different pairs of nodes belonging to the same block pair through the use of independent univariate Hawkes processes, defining the Community Hawkes Independent Pairs model [14]. Whereas the above works have been based on continuous-time modeling of dynamic networks, the dynamic-IRM (dIRM) of [3] focused on the modeling of discrete-time networks by inducing an infinite Hidden Markov Model (IHMM) to account for transitions of nodes between communities over time. In [4], a dynamic hierarchical block model was proposed based on the modeling of change points, admitting dynamic node relocation within a Gibbs fragmentation tree. Despite the various advantages of such models, networks are constrained to be regarded and analyzed at a block level, which in many cases is restrictive.
§ 2.2 DYNAMIC LATENT FEATURE MODELS
Prominent works on node-level representations of continuous-time networks have originally considered feature propagation within the discrete-time network topology [23] or extended the random-walk frameworks of [6] and [7] to the temporal case, yielding the Continuous-Time Dynamic Network Embeddings model (CTDNE), outperforming the aforementioned original approaches in multiple temporal settings. CTDNE provides a single temporal-aware node embedding, meaning that network and node evolution cannot be visualized and explored. A more flexible approach was designed in [24] (DyRep), where temporal node embeddings are learned under a so-called latent mediation process, combining an association process describing the dynamics of the network with a communication process describing the dynamics on the network. The DyRep model uses deep recurrent architectures to parameterize the intensity function of the point process, and thus the embedding space lacks explainability. Graph neural networks (GNNs) can be extended to the analysis of continuous-time networks via the Temporal Graph Network (TGN) [17], where the classical encoder-decoder architecture is coupled with a memory cell.
In the context of latent feature dynamic network models, Gaussian Processes (GPs) have been used to characterize the smoothness of the temporal dynamics. This includes the discrete-time dynamic network model considered in [8], in which latent factors were endowed with a GP prior based on radial basis function kernels imposing temporal smoothness within the latent representation. The approach was extended in [9] to impose stochastic differential equations for the evolution of latent factors. In [15], GPs were used for the modeling of continuous-time dynamic networks based on Poisson and Hawkes processes, respectively, including exogenous as well as endogenous features specified by a radial basis function prior.
Latent Distance Models (LDMs), as proposed in [25], have recently been shown to outperform prominent GRL methods utilizing very low dimensions in the static case [26, 27]. LDMs for temporal networks have mostly been studied in the discrete case [10], considering mainly diffusion dynamics in order to make predictions, as first studied in [28] and extended with popularity and activity effects [11]. While all these models express homophily (a tendency where similar nodes are more likely to connect to each other than dissimilar ones) and transitivity ("a friend of a friend is a friend") in the dynamic case, they fail to account for continuous dynamics.
Our work is inspired by these previous approaches for the modeling of dynamic complex networks. Specifically, we make use of the latent distance model formulation to account for homophily and transitivity, the Poisson process for the characterization of continuous-time dynamics, and a Gaussian Process prior based on the radial basis function kernel to account for temporal smoothness within the latent representation. Inspired by latent class models, we further impose a structured low-rank representation of nodes based on soft-assigning nodes to communities exhibiting similar temporal dynamics. Notably, we exploit how LDMs, as opposed to GNN approaches in general, can provide easily interpretable yet accurate network representations in ultra-low $D = 2$ dimensional spaces, facilitating accurate dynamic network visualization and interpretation.
§ 3 PROPOSED APPROACH
Our main objective is to represent every node of a given network, $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , in a low-dimensional metric space, $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$ , in which the pairwise node proximities are characterized by their distances in a continuous-time latent space (Objective 3.1). Since we address continuous-time dynamic networks, the interactions among nodes can vary through time, with new links appearing or disappearing at any time. More precisely, we will presently consider undirected continuous-time networks:
Definition 3.1. A continuous-time dynamic undirected graph on a time interval ${\mathcal{I}}_{T} \mathrel{\text{ := }} \left\lbrack {0,T}\right\rbrack$ is an ordered pair $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ where $\mathcal{V} = \{ 1,\ldots ,N\}$ is a set of nodes and $\mathcal{E} \subseteq \left\{ \left( {i,j,t}\right) \in {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \mid 1 \leq i < j \leq N\right\}$ is a set of events or edges.
We will use the symbol, $N$ , to denote the number of nodes in the vertex set and ${\mathcal{E}}_{ij}\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq \mathcal{E}$ to indicate the set of edges between nodes $i$ and $j$ occurring on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$
We note that the approach readily extends to directed and bipartite dynamic networks.
§ 3.1 NONHOMOGENEOUS POISSON POINT PROCESSES
Poisson point processes (PPPs) are a natural choice widely used to model the number of random events occurring in time or locations in a spatial space. PPPs are parameterized by a quantity known as the rate or intensity, indicating the average density of points in the underlying space of the Poisson process. If the intensity depends on time or location, the point process is called a nonhomogeneous PPP (Defn. 3.2), and it is typically adopted for applications in which the event points are not uniformly distributed [29].
Definition 3.2. [Nonhomogeneous PPP] A counting process $\{ M\left( t\right) ,t \geq 0\}$ is called a nonhomogeneous Poisson process with intensity function $\lambda \left( t\right) ,t \geq 0$ if (i) $M\left( 0\right) = 0$ ,(ii) $M\left( t\right)$ has independent increments: i.e., $\left( {M\left( {t}_{1}\right) - M\left( {t}_{0}\right) }\right) ,\ldots ,\left( {M\left( {t}_{B}\right) - M\left( {t}_{B - 1}\right) }\right)$ are independent random variables for each $0 \leq {t}_{0} < \cdots < {t}_{B}$ , and (iii) $M\left( {t}_{u}\right) - M\left( {t}_{l}\right)$ is Poisson distributed with mean ${\int }_{{t}_{l}}^{{t}_{u}}\lambda \left( t\right) {dt}$ .
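A nonhomogeneous PPP as in Definition 3.2 can be simulated with the standard Lewis-Shedler thinning algorithm (a classical technique, not part of this paper; the intensity below is an arbitrary toy choice): candidates are drawn from a homogeneous process of rate $\lambda_{\max} \geq \lambda(t)$ and each is kept with probability $\lambda(t)/\lambda_{\max}$.

```python
import math
import random

def sample_nhpp(lam, T, lam_max, rng):
    """Lewis-Shedler thinning: draw candidates from a homogeneous Poisson
    process of rate lam_max on [0, T]; keep candidate t with probability
    lam(t) / lam_max (requires lam(t) <= lam_max for all t)."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)          # next candidate arrival
        if t > T:
            return events
        if rng.random() < lam(t) / lam_max:    # thinning step
            events.append(t)

rng = random.Random(0)
lam = lambda t: 2.0 + math.sin(t)              # toy time-varying intensity
events = sample_nhpp(lam, 100.0, 3.0, rng)
# E[M(T)] = int_0^T lam(t) dt = 200 + (1 - cos(100)), i.e., roughly 200 events
```

Consistent with property (iii) of Definition 3.2, the number of sampled points is Poisson distributed with mean equal to the integrated intensity.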
In this paper, we consider continuous-time dynamic networks such that the events (or links/edges) among nodes can occur at any point in time. As we will examine in the following sections, these interactions do not necessarily exhibit any recurring characteristics; instead, they vary over time in many real networks. In this regard, we assume that the number of links, $M\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack$ , between a pair of nodes $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ follows a nonhomogeneous Poisson point process (NHPP) with intensity function ${\lambda }_{ij}\left( t\right)$ on the time interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ , and for a given network $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , the log-likelihood function can be written as
$$
\mathcal{L}\left( \Omega \right) \mathrel{\text{ := }} \log p\left( {\mathcal{G} \mid \Omega }\right) = \frac{1}{2}\mathop{\sum }\limits_{{\left( {i,j}\right) \in {\mathcal{V}}^{2}}}\left( {\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) - {\int }_{0}^{T}{\lambda }_{ij}\left( t\right) {dt}}\right) \tag{1}
$$
where ${\mathcal{E}}_{ij} \subseteq \mathcal{E}\left\lbrack {0,T}\right\rbrack$ is the set of links of node pair $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ on the timeline ${\mathcal{I}}_{T} \mathrel{\text{ := }} \left\lbrack {0,T}\right\rbrack$ , and $\Omega = {\left\{ {\lambda }_{ij}\right\} }_{1 \leq i < j \leq N}$ indicates the set of intensity functions.
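The per-pair term of the log-likelihood in Equation (1), i.e., the sum of log-intensities at the observed events minus the integrated intensity, can be sketched as follows (the intensity and event times are toy placeholders; the integral is approximated by a midpoint rule here, whereas the paper evaluates it in closed form; the factor $\frac{1}{2}$ in Equation (1) compensates for summing over ordered pairs):

```python
import math

def pair_log_lik(lam, event_times, T, n_grid=20000):
    """Per-pair contribution to Eq. (1): sum of log-intensities at the
    observed event times minus the integrated intensity over [0, T]."""
    log_term = sum(math.log(lam(e)) for e in event_times)
    h = T / n_grid                    # midpoint rule for int_0^T lam(t) dt
    integral = sum(lam((k + 0.5) * h) for k in range(n_grid)) * h
    return log_term - integral

# Toy squared-distance intensity in the spirit of Eq. (3); values arbitrary.
lam = lambda t: math.exp(0.5 - (0.2 + 0.1 * t) ** 2)
ll = pair_log_lik(lam, [0.3, 1.1], T=2.0)
```

Summing such terms over all node pairs yields the total log-likelihood $\mathcal{L}(\Omega)$.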
§ 3.2 PROBLEM FORMULATION
Without loss of generality, it can be assumed that the timeline starts from 0 and is bounded by $T \in {\mathbb{R}}^{ + }$ . Since the interactions among nodes can occur at any time point on ${\mathcal{I}}_{T} = \left\lbrack {0,T}\right\rbrack$ , we would like to identify an accurate continuous-time node representation $\{ r\left( {i,t}\right) {\} }_{\left( {i,t}\right) \in \mathcal{V} \times {\mathcal{I}}_{T}}$ defined using a low-dimensional latent space ${\mathbb{R}}^{D}\left( {D \ll N}\right)$ where $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow {\mathbb{R}}^{D}$ is a map indicating the embedding or representation of node $i \in \mathcal{V}$ at time point $t \in {\mathcal{I}}_{T}$ . We define our objective more formally as follows:
Objective 3.1. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ be a continuous-time dynamic network and ${\lambda }^{ * } : {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \rightarrow \mathbb{R}$ be an unknown intensity function of a nonhomogeneous Poisson point process. For a given metric space $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$ , our purpose is to learn a function or representation $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow \mathrm{X}$ satisfying
$$
\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{d}_{\mathrm{X}}\left( {\mathbf{r}\left( {i,t}\right) ,\mathbf{r}\left( {j,t}\right) }\right) {dt} \approx \frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{\mathbf{\lambda }}^{ * }\left( {i,j,t}\right) {dt} \tag{2}
$$
for all $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ pairs, and for every interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$ .
In this work, we consider the Euclidean metric on a $D$ -dimensional real vector space, $\mathrm{X} \mathrel{\text{ := }} {\mathbb{R}}^{D}$ and the embedding of node $i \in \mathcal{V}$ at time $t \in {\mathcal{I}}_{T}$ will be denoted by ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ .
§ 3.3 PIVEM: PIECEWISE-VELOCITY MODEL FOR LEARNING CONTINUOUS-TIME EMBEDDINGS
We learn continuous-time node representations by employing the canonical exponential link-function defining the intensity function as
$$
{\lambda }_{ij}\left( t\right) \mathrel{\text{ := }} \exp \left( {{\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}}\right) \tag{3}
$$
where ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ and ${\beta }_{i} \in \mathbb{R}$ denote the embedding vector at time $t$ and the bias term of node $i \in \mathcal{V}$ , respectively. For given bias terms, it can be seen by Lemma 3.1 that this definition of the intensity function provides a guarantee for our goal given in Equation (2), and a pair of nodes having a high number of interactions can be positioned close in the latent space. Although the squared Euclidean distance utilized in Equation (3) is not a metric, we impose it as a distance [27, 30].
Lemma 3.1. For given fixed bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ , the node embeddings, ${\left\{ {\mathbf{r}}_{i}\left( t\right) \right\} }_{i \in \mathcal{V}}$ , learned by optimizing the objective function given in Equation (1) satisfy
$$
\left| {\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) }\end{Vmatrix}{dt}}\right| \leq \sqrt{\left( {{\beta }_{i} + {\beta }_{j}}\right) - \log \left( {{p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }}\right) }\;\text{ for all }\left( {i,j}\right) \in {\mathcal{V}}^{2}
$$
where ${p}_{ij}$ is the probability of having more than ${m}_{ij}$ links between $i$ and $j$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ .
Proof. Please see the Appendix for the proof.
Notably, constraining the approximation of the unknown intensity function by a metric space imposes the homophily property (i.e., similar nodes in the graph are placed close to each other in embedding space). When a pair of nodes exhibits a high number of interactions, their average intensity must be high, so the term ${p}_{ij}{m}_{ij}/\left( {{t}_{u} - {t}_{l}}\right)$ in Lemma 3.1 converges to 1, and the average distance between the nodes is bounded by the sum of their bias terms. It can also be seen that the transitivity property holds up to some extent (i.e., if node $i$ is similar to $j$ and $j$ is similar to $k$ , then $i$ should also be similar to $k$ ) since we can bound the squared Euclidean distance [27, 31].
Importantly, for a dynamic embedding, we would like the embeddings of a pair of nodes to be close to each other when they have many interactions during a particular time interval and far away from each other if they have few or no links. Note that the bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ are responsible for node-specific effects such as degree heterogeneity [31, 32], and they provide additional flexibility to the model by acting as scaling factors for the corresponding nodes so that, for instance, a hub node might have a high number of interactions simultaneously without getting close to the others in the latent space.
Since our primary purpose is to learn continuous node representations in a latent space, we define the representation of node $i \in \mathcal{V}$ at time $t$ based on a linear model by ${\mathbf{r}}_{i}\left( t\right) \mathrel{\text{ := }} {\mathbf{x}}_{i}^{\left( 0\right) } + {\mathbf{v}}_{i}t$ . Here, ${\mathbf{x}}_{i}^{\left( 0\right) }$ can be considered the initial position and ${\mathbf{v}}_{i}$ the velocity of the corresponding node. However, the linear model provides only minimal capacity for tracking the nodes and modeling their representations. Therefore, we reinterpret the given timeline ${\mathcal{I}}_{T} \mathrel{\text{ := }} \left\lbrack {0,T}\right\rbrack$ by dividing it into $B$ equally-sized bins, $\left\lbrack {{t}_{b - 1},{t}_{b}}\right) ,\left( {1 \leq b \leq B}\right)$ , such that $\left\lbrack {0,T}\right\rbrack = \left\lbrack {0,{t}_{1}}\right) \cup \cdots \cup \left\lbrack {{t}_{B - 1},{t}_{B}}\right\rbrack$ where ${t}_{0} \mathrel{\text{ := }} 0$ and ${t}_{B} \mathrel{\text{ := }} T$ . By applying the linear model to each subinterval, we obtain a piecewise-linear approximation of general intensity functions, strengthening the model's capacity. As a result, we can write the position of node $i$ at time $t \in {\mathcal{I}}_{T}$ as follows:
$$
{\mathbf{r}}_{i}\left( t\right) \mathrel{\text{ := }} {\mathbf{x}}_{i}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{{b = 1}}^{\left\lfloor t/{\Delta }_{B}\right\rfloor }{\mathbf{v}}_{i}^{\left( b\right) } + \left( {t{\;\operatorname{mod}\;{\Delta }_{B}}}\right) {\mathbf{v}}_{i}^{\left( \left\lfloor t/{\Delta }_{B}\right\rfloor + 1\right) } \tag{4}
$$
where ${\Delta }_{B}$ indicates the bin width, $T/B$ , and $\operatorname{mod}$ is the modulo operation used to compute the remaining time. Note that the piecewise interpretation of the timeline allows us to better track the paths of the nodes in the embedding space, and it can be seen by Theorem 3.2 that we can obtain more accurate trails by augmenting the number of bins.
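Equation (4) amounts to accumulating full-bin displacements and then taking a partial step of length $t \bmod \Delta_B$ in the active bin. A minimal sketch (hypothetical helper, list-based vectors for brevity):

```python
def position(x0, V, t, T):
    """Eq. (4) as code: accumulate full-bin displacements, then a partial
    step of (t mod Delta_B) in the active bin. V holds B velocity vectors."""
    B = len(V)
    dB = T / B                            # bin width Delta_B
    b = min(int(t // dB), B - 1)          # active bin index (t = T -> last)
    r = list(x0)
    for k in range(b):                    # fully traversed bins
        r = [ri + dB * vk for ri, vk in zip(r, V[k])]
    rem = t - b * dB                      # t mod Delta_B
    return [ri + rem * vk for ri, vk in zip(r, V[b])]

x0 = [0.0, 0.0]
V = [[1.0, 0.0], [0.0, 1.0]]              # B = 2 bins on [0, T] with T = 2
```

With these toy values, the node first moves along the first axis during bin 1 and then along the second axis during bin 2, so the trajectory is continuous but changes direction at the bin boundary.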
Theorem 3.2. Let $\mathbf{f}\left( t\right) : \left\lbrack {0,T}\right\rbrack \rightarrow {\mathbb{R}}^{D}$ be a continuous embedding of a node. For any given $\epsilon > 0$ , there exists a continuous, piecewise-linear node embedding, $\mathbf{r}\left( t\right)$ , satisfying $\parallel \mathbf{f}\left( t\right) - \mathbf{r}\left( t\right) {\parallel }_{2} < \epsilon$ for all $t \in \left\lbrack {0,T}\right\rbrack$ where $\mathbf{r}\left( t\right) \mathrel{\text{ := }} {\mathbf{r}}^{\left( b\right) }\left( t\right)$ for all $\left( {b - 1}\right) {\Delta }_{B} \leq t < b{\Delta }_{B},\mathbf{r}\left( t\right) \mathrel{\text{ := }} {\mathbf{r}}^{\left( B\right) }\left( t\right)$ for $t = T$ and ${\Delta }_{B} = T/B$ for some $B \in {\mathbb{N}}^{ + }$ .
Proof. Please see the Appendix for the proof.
Prior probability. In order to control the smoothness of the motion in the latent space, we employ a Gaussian Process (GP) [33] prior over the initial positions ${\mathbf{x}}^{\left( 0\right) } \in {\mathbb{R}}^{N \times D}$ and velocity vectors $\mathbf{v} \in {\mathbb{R}}^{B \times N \times D}$ . Hence, we suppose that $\operatorname{vect}\left( {\mathbf{x}}^{\left( 0\right) }\right) \oplus \operatorname{vect}\left( \mathbf{v}\right) \sim \mathcal{N}\left( {\mathbf{0},\mathbf{\Sigma }}\right)$ where $\mathbf{\Sigma } \mathrel{\text{ := }} {\lambda }^{2}\left( {{\sigma }_{\Sigma }^{2}\mathbf{I} + \mathbf{K}}\right)$ is the covariance matrix with a scaling factor $\lambda \in \mathbb{R}$ , ${\sigma }_{\Sigma } \in \mathbb{R}$ denotes the noise of the covariance, and $\operatorname{vect}\left( \cdot \right)$ is the vectorization operator stacking the columns of its argument into a single vector. To reduce the number of parameters of the prior and enable scalable inference, we define $\mathbf{K}$ as a Kronecker product of three matrices, $\mathbf{K} \mathrel{\text{ := }} \mathbf{B} \otimes \mathbf{C} \otimes \mathbf{D}$ , respectively accounting for temporal-, node-, and dimension-specific covariance structures. Specifically, we define $\mathbf{B} \mathrel{\text{ := }} \left\lbrack {c}_{{\mathbf{x}}^{0}}\right\rbrack \oplus {\left\lbrack \exp \left( -{\left( {c}_{b} - {c}_{\widetilde{b}}\right) }^{2}/2{\sigma }_{\mathbf{B}}^{2}\right) \right\rbrack }_{1 \leq b,\widetilde{b} \leq B}$ as a $\left( {B + 1}\right) \times \left( {B + 1}\right)$ matrix intended to capture the smoothness of velocities across time-bins, where ${c}_{b} = \frac{{t}_{b - 1} + {t}_{b}}{2}$ is the center of the corresponding
bin, and the matrix is constructed by combining the radial basis function (RBF) kernel with a scalar term ${c}_{{\mathbf{x}}^{0}}$ so that the initial position is decoupled from the structure of the velocities. The node-specific matrix, $\mathbf{C} \in {\mathbb{R}}^{N \times N}$ , is constructed as a low-rank product $\mathbf{C} \mathrel{\text{ := }} \mathbf{Q}{\mathbf{Q}}^{\top }$ where each row of $\mathbf{Q} \in {\mathbb{R}}^{N \times k}$ sums to $1$ $\left( {k \ll N}\right)$ , and it aims to extract covariation patterns in the motion of the nodes. Finally, we simply set the dimensionality matrix to the identity, $\mathbf{D} \mathrel{\text{ := }} \mathbf{I} \in {\mathbb{R}}^{D \times D}$ , in order to have uncorrelated dimensions.
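The Kronecker-factored covariance above can be assembled in a few lines. The following NumPy sketch uses small illustrative sizes and a randomly drawn $\mathbf{Q}$ (row-normalized so each row sums to 1); it is not the authors' implementation, only a check that the stated structure yields a valid $(B{+}1)ND \times (B{+}1)ND$ covariance.

```python
import numpy as np

def build_covariance(B=3, N=4, D=2, k=2, sigma_B=1.0, c_x0=1.0,
                     lam=1.0, sigma_S=0.1, seed=0):
    """Sketch of Sigma = lam^2 (sigma_S^2 I + B_mat (x) C (x) D_mat)."""
    rng = np.random.default_rng(seed)
    # temporal part: RBF kernel over bin centers, with the decoupled
    # scalar c_x0 for the initial position (direct sum = block diagonal)
    centers = (np.arange(B) + 0.5) / B
    rbf = np.exp(-(centers[:, None] - centers[None, :]) ** 2
                 / (2 * sigma_B ** 2))
    B_mat = np.zeros((B + 1, B + 1))
    B_mat[0, 0] = c_x0
    B_mat[1:, 1:] = rbf
    # node part: low-rank C = Q Q^T with each row of Q summing to 1
    Q = rng.random((N, k))
    Q /= Q.sum(axis=1, keepdims=True)
    C = Q @ Q.T
    D_mat = np.eye(D)                      # uncorrelated dimensions
    K = np.kron(B_mat, np.kron(C, D_mat))
    return lam ** 2 * (sigma_S ** 2 * np.eye(K.shape[0]) + K)

Sigma = build_covariance()
print(Sigma.shape)  # ((B+1)*N*D, (B+1)*N*D) = (32, 32)
```

Because each Kronecker factor is positive semi-definite and the noise term is strictly positive, the resulting matrix is symmetric positive definite, as a Gaussian covariance must be.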
To sum up, we can express our objective, combining the piecewise velocities with the prior, as follows:
$$
\widehat{\Omega } = \underset{\Omega }{\arg \max }\frac{1}{2}\mathop{\sum }\limits_{{\left( {i,j}\right) \in {\mathcal{V}}^{2}}}\left( {\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) - {\int }_{0}^{T}{\lambda }_{ij}\left( t\right) {dt}}\right) + \log \mathcal{N}\left( {\left\lbrack \begin{matrix} {\mathbf{x}}^{\left( 0\right) } \\ \mathbf{v} \end{matrix}\right\rbrack ;\mathbf{0},\mathbf{\Sigma }}\right) \tag{5}
$$
where $\Omega = \left\{ {\mathbf{\beta },{\mathbf{x}}^{\left( 0\right) },\mathbf{v},{\sigma }_{\Sigma },{\sigma }_{\mathbf{B}},{c}_{{\mathbf{x}}^{0}},\mathbf{Q}}\right\}$ is the set of hyper-parameters, and ${\lambda }_{ij}\left( t\right)$ is the intensity function as defined in Equation (3) based on the node embeddings, ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ .
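A single $(i,j)$ term of the likelihood part of Equation (5) can be sketched as below. Two assumptions are made explicit here: the intensity is taken to be the log-linear form $\lambda_{ij}(t) = \exp(\beta_i + \beta_j - \|\mathbf{r}_i(t) - \mathbf{r}_j(t)\|^2)$, inferred from the text's mention of bias terms and the squared Euclidean distance (Equation (3) itself is outside this excerpt), and the integral, for which the paper has a closed form, is replaced by a simple trapezoidal approximation.

```python
import numpy as np

def pair_log_likelihood(beta_i, beta_j, r_i, r_j, events, T=1.0, grid=1001):
    """One (i, j) term of Eq. (5): sum of log-intensities at the observed
    events minus the integrated intensity over [0, T].

    r_i, r_j: callables t -> position array of shape (D,)."""
    def log_lam(t):
        d = r_i(t) - r_j(t)
        return beta_i + beta_j - d @ d     # assumed form of Eq. (3)
    ts = np.linspace(0.0, T, grid)
    vals = np.exp([log_lam(t) for t in ts])
    # trapezoidal rule stand-in for the paper's closed-form integral
    integral = np.sum((vals[:-1] + vals[1:]) / 2) * (ts[1] - ts[0])
    return sum(log_lam(e) for e in events) - integral

r_i = lambda t: np.array([0.0, 0.0])
r_j = lambda t: np.array([t, 0.0])          # nodes drift apart over time
ll = pair_log_likelihood(0.0, 0.0, r_i, r_j, events=[0.1, 0.2])
print(round(ll, 3))
```

Summing such terms over all dyads and adding the Gaussian log-prior gives the full objective of Equation (5).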
§ 3.4 OPTIMIZATION
Our objective given in Equation (5) is not convex, so the learning strategy we follow matters greatly both for escaping local minima and for the quality of the representations. We start by randomly initializing the model’s hyper-parameters from $\left\lbrack {-1,1}\right\rbrack$ , except for the velocity tensor, which is initially set to 0. We adopt a sequential strategy for learning these parameters. In other words, we first optimize the initial positions and bias terms together, $\left\{ {{\mathbf{x}}^{\left( 0\right) },\mathbf{\beta }}\right\}$ , for a given number of epochs; then, we include the velocity tensor, $\{ \mathbf{v}\}$ , in the optimization process and repeat the training for the same number of epochs. Finally, we add the prior parameters and learn all model hyper-parameters together. We employ the Adam optimizer [34] with a learning rate of 0.1.
Computational issues and complexity. Note that we need to evaluate the log-intensity term in Equation (5) for each pair $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ and event time ${e}_{ij} \in {\mathcal{E}}_{ij}$ . Therefore, the computational cost for the whole network is bounded by $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}\left| \mathcal{E}\right| }\right)$ . However, we can alleviate this cost by pre-computing certain coefficients at the beginning of the optimization process, reducing the complexity to $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}B}\right)$ . We also have an explicit formula for the integral term since we utilize the squared Euclidean distance, so it can be computed in at most $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{2}\right)$ operations. Instead of optimizing over the whole network at once, we apply a batching strategy over the set of nodes in order to reduce the memory requirements, sampling $\mathcal{S}$ nodes for each epoch. Hence, the overall complexity for the log-likelihood function is $\mathcal{O}\left( {{\mathcal{S}}^{2}B\mathcal{I}}\right)$ where $\mathcal{I}$ is the number of epochs and $\mathcal{S} \ll \left| \mathcal{V}\right|$ . Similarly, the prior can be computed in at most $\mathcal{O}\left( {{B}^{3}{D}^{3}{K}^{2}\mathcal{S}}\right)$ operations by using algebraic properties such as the Woodbury matrix identity and the matrix determinant lemma [35]. To sum up, the complexity of the proposed approach is $\mathcal{O}\left( {B{\mathcal{S}}^{2}\mathcal{I} + {B}^{3}{D}^{3}{K}^{2}\mathcal{S}\mathcal{I}}\right)$ (please see the appendix for the derivations and other details).
§ 4 EXPERIMENTS
In this section, we extensively evaluate the performance of the proposed PIecewise-VElocity Model against well-known baselines on challenging tasks over datasets of various sizes and types. We consider all networks as undirected, and the event times of links are scaled to the interval $\left\lbrack {0,1}\right\rbrack$ for consistency across experiments. We use the finest granularity level of the given input timestamps, such as seconds or milliseconds. We provide a brief summary of the networks below; more details and various statistics are reported in Table 4 in the appendix. For all the methods, we learn node embeddings in two-dimensional space $\left( {D = 2}\right)$ since one of the objectives of this work is to produce dynamic node embeddings facilitating human insights into a complex network.
Experimental Setup. We first split the networks into two sets, such that the events occurring in the last 10% of the timeline are held out for prediction. Then, we randomly choose 10% of the node pairs among all possible dyads in the network for the graph completion task, ensuring that each node in the residual network retains at least one event so that the number of nodes stays fixed. If a pair of nodes only contains events in the prediction set and these nodes do not have any other links during the training period, they are removed from the networks.
For conducting the experiments, we generate the labeled dataset of links as follows: For the positive samples, we construct small intervals of length $2 \times {10}^{-3}$ around each event time (i.e., $\left\lbrack {e - {10}^{-3},e + {10}^{-3}}\right\rbrack$ where $e$ is an event time). We randomly sample an equal number of time points and corresponding node pairs to form negative instances. If a sampled event time is not located inside the interval of a positive sample, we follow the same strategy to build an interval for it, and it is considered a negative instance. Otherwise, we sample another time point and dyad. Note that some networks contain a very high number of links, which leads to computational problems; therefore, we subsample ${10}^{4}$ positive and ${10}^{4}$ negative instances whenever a network contains more than this.
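The interval-based sampling above can be sketched as follows. All names are illustrative; the rejection step mirrors the text's rule that a sampled time falling inside a positive interval of the same pair is resampled.

```python
import random

def sample_instances(events, all_pairs, half=1e-3, n_neg=None, seed=0):
    """Labeled-interval construction: each event time e on a pair yields a
    positive interval [e - half, e + half]; negatives are random
    (pair, time) draws, rejected if they fall inside a positive interval
    of the same pair."""
    rng = random.Random(seed)
    positives = [(pair, (e - half, e + half))
                 for pair, es in events.items() for e in es]
    n_neg = n_neg or len(positives)
    negatives = []
    while len(negatives) < n_neg:
        pair = rng.choice(all_pairs)
        t = rng.random()
        if any(lo <= t <= hi for p, (lo, hi) in positives if p == pair):
            continue                      # overlaps a positive: resample
        negatives.append((pair, (t - half, t + half)))
    return positives, negatives

events = {(0, 1): [0.2, 0.5], (1, 2): [0.7]}
pos, neg = sample_instances(events, all_pairs=[(0, 1), (0, 2), (1, 2)])
print(len(pos), len(neg))  # 3 3
```

Capping both lists at $10^4$ instances, as the text describes for the largest networks, would be a simple truncation of the two returned lists.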
Table 1: The performance evaluation for the network reconstruction experiment over various datasets.
| Method (ROC / PR) | Synthetic (π) | Synthetic (μ) | College | Contacts | Email | Forum | Hypertext |
|---|---|---|---|---|---|---|---|
| LDM | .563 / .539 | .669 / .642 | .951 / .944 | .860 / .835 | .954 / .948 | .909 / .897 | .818 / .797 |
| NODE2VEC | .519 / .507 | .503 / .509 | .711 / .655 | .812 / .756 | .853 / .828 | .677 / .619 | .696 / .648 |
| CTDNE | .518 / .522 | .499 / .505 | .689 / .656 | .599 / .584 | .630 / .645 | .643 / .608 | .540 / .545 |
| PIVEM | .762 / .713 | .905 / .869 | .948 / .948 | .938 / .938 | .978 / .977 | .907 / .902 | .830 / .823 |
Synthetic datasets. We generate two artificial networks in order to evaluate the behavior of the models in controlled experimental settings. (i) Synthetic $\left( \pi \right)$ is sampled from the prior distribution stated in Subsection 3.2. The hyper-parameters $\mathbf{\beta }$ , $K$ , and $B$ are set to 0, 20, and 100, respectively. (ii) Synthetic $\left( \mu \right)$ is constructed based on temporal block structures. The timeline is divided into 10 sub-intervals, and the nodes are randomly split into 20 groups for each interval. The links within each group are sampled from the Poisson distribution with a constant intensity of 5.
Real Networks. The (iii) Hypertext network [36] was built from radio badge records capturing the interactions of conference attendees over 2.5 days, where each event indicates 20 seconds of active contact. Similarly, (iv) the Contacts network [37] was generated from the interactions of individuals in an office environment. (v) Forum [38] comprises the activity data of university students on an online social forum system. (vi) CollegeMsg [39] records the private messages among students on an online social platform. Finally, (vii) Eu-Email [40] was constructed from the e-mails exchanged among the members of European research institutions.
Baselines. We compare the performance of our method with three baselines. We include LDM with Poisson rate and node-specific biases [31, 41] since it is a static method with the formulation closest to ours. NODE2VEC (or N2V) [7], a very well-known GRL method, relies on the explicit generation of random walks and learns node embeddings from node proximities within those walks. In our experiments, we tune the model’s parameters $(p, q)$ from $\{ {0.25},{0.5},1,2,4\}$ . Since it can also run on weighted networks, we additionally constructed a weighted graph based on the number of links through time and report the best score over both versions of the networks. CTDNE [42] is a dynamic node embedding approach performing temporal random walks over the network. However, it cannot produce instantaneous node representations; it produces embeddings only for a single given time, so we used the last time point of the training set to obtain the representations. We provide the other details about the parameter settings of the baseline methods in the appendix.
For our method, we set the parameter $K = {25}$ and the bin count $B = {100}$ to have enough capacity to track node interactions. For the regularization term $\left( \lambda \right)$ of the prior, we first mask ${20}\%$ of the dyads in the optimization of Equation (5). Furthermore, we train the model starting with $\lambda = {10}^{6}$ , and then reduce it to one-tenth after every 100 epochs. This procedure is repeated until $\lambda = {10}^{-6}$ , and we choose the $\lambda$ value minimizing the log-likelihood of the masked pairs. The final embeddings are then obtained by performing this annealing strategy without any mask, down to the chosen $\lambda$ value. We repeat this procedure 5 times and take the best-performing run for learning the embeddings. The relative standard deviation of the experiments is always less than 0.5, and Figure 1a shows an illustrative example of tuning $\lambda$ over the Synthetic $\left( \pi \right)$ dataset with 5 random runs.
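The annealing schedule described above is simple enough to write down exactly: start at $\lambda = 10^6$, divide by ten after every 100 epochs, and stop once $\lambda = 10^{-6}$ has been used. A small generator sketch (names illustrative):

```python
def lambda_schedule(start=1e6, stop=1e-6, epochs_per_step=100):
    """Yield (epoch, lam) pairs: lam starts at 1e6 and is divided by ten
    after every 100 epochs until the 1e-6 stage has run."""
    lam, epoch = start, 0
    while lam >= stop / 2:                # /2 guards against float drift
        for _ in range(epochs_per_step):
            yield epoch, lam
            epoch += 1
        lam /= 10.0

sched = list(lambda_schedule())
print(len(sched))   # 13 lambda stages (1e6 .. 1e-6) x 100 epochs each
```

Thirteen stages ($10^6, 10^5, \ldots, 10^{-6}$) of 100 epochs each give 1300 epochs per annealing pass.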
For the performance comparison of the methods, we report the Area Under Curve (AUC) scores for the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves [43]. For LDM and PIVEM, we compute the intensity of a given instance as the similarity measure of the node pair. Since NODE2VEC and CTDNE rely on the SkipGram architecture [44], we use cosine similarity for them.
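Both metrics can be computed directly from raw similarity scores. The sketch below is a minimal NumPy stand-in for the library routines cited in [43]: AUC-ROC as the Mann-Whitney statistic, and AUC-PR as average precision over the ranked positives.

```python
import numpy as np

def auc_scores(pos, neg):
    """AUC-ROC (probability a positive outranks a negative) and AUC-PR
    (average precision) from raw similarity scores."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    # ROC AUC as the Mann-Whitney statistic (ties count one half)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    roc = (greater + 0.5 * ties) / (len(pos) * len(neg))
    # average precision: mean of precision@k over the ranked positives
    scores = np.r_[pos, neg]
    labels = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    ranked = labels[np.argsort(-scores, kind="stable")]
    hits = np.cumsum(ranked)
    ap = np.mean(hits[ranked == 1] / (np.flatnonzero(ranked == 1) + 1))
    return roc, ap

roc, ap = auc_scores([0.9, 0.8, 0.4], [0.7, 0.2, 0.1])
print(round(roc, 3), round(ap, 3))       # 0.889 0.917
```

The scores fed in would be intensities for LDM and PIVEM and cosine similarities for the SkipGram-based baselines, as described above.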
Network Reconstruction. Our goal is to see how accurately a model can capture the interaction patterns among nodes and generate embeddings exhibiting their temporal relationships in a latent space. In this regard, we train the models on the residual network and generate sample sets as described previously. The performance of the models is reported in Table 1. Comparing PIVEM against the baselines, we observe favorable results across all networks, highlighting the importance and ability of PIVEM to account for and detect structure in a continuous-time manner.
Table 2: The performance evaluation for the network completion experiment over various datasets.
| Method (ROC / PR) | Synthetic (π) | Synthetic (μ) | College | Contacts | Email | Forum | Hypertext |
|---|---|---|---|---|---|---|---|
| LDM | .535 / .529 | .646 / .631 | .931 / .926 | .836 / .799 | .948 / .942 | .863 / .858 | .761 / .738 |
| NODE2VEC | .519 / .511 | .747 / .677 | .685 / .637 | .787 / .744 | .818 / .777 | .635 / .592 | .596 / .588 |
| CTDNE | .522 / .527 | .499 / .503 | .647 / .599 | .658 / .656 | .571 / .593 | .617 / .592 | .464 / .485 |
| PIVEM | .750 / .696 | .874 / .851 | .935 / .934 | .873 / .864 | .951 / .953 | .879 / .875 | .770 / .712 |
Table 3: The performance evaluation for the link prediction experiment over various datasets.
| Method (ROC / PR) | Synthetic (π) | Synthetic (μ) | College | Contacts | Email | Forum | Hypertext |
|---|---|---|---|---|---|---|---|
| LDM | .562 / .539 | .498 / .642 | .951 / .944 | .860 / .835 | .954 / .948 | .909 / .897 | .819 / .797 |
| NODE2VEC | .518 / .506 | .498 / .502 | .705 / .676 | .783 / .716 | .825 / .807 | .635 / .605 | .748 / .739 |
| CTDNE | .514 / .526 | .457 / .481 | .666 / .643 | .632 / .623 | .629 / .629 | .621 / .599 | .508 / .532 |
| PIVEM | .716 / .689 | .474 / .485 | .891 / .887 | .876 / .884 | .964 / .964 | .894 / .895 | .756 / .767 |
Network Completion. The network completion experiment is a more challenging task than reconstruction. Since we hide 10% of the network, dyads that do contain events are also viewed as non-link pairs during training, and the temporal models should place such nodes in distant locations of the embedding space. Nevertheless, it might still be possible to predict these events accurately if the network links exhibit temporal triangle patterns within certain time intervals. In Table 2, we report the AUC-ROC and AUC-PR scores for the network completion experiment. Once more, PIVEM outperforms the baselines (in most cases significantly). We again find evidence supporting the necessity of modeling and tracking temporal networks with time-evolving embedding representations.
Future Prediction. Finally, we examine the performance of the models in the future prediction task. Here, the models are asked to forecast the final ${10}\%$ of the timeline. For PIVEM, the similarity between nodes is obtained by calculating the intensity function over the timeline of the training set (i.e., from 0 to 0.9), and we keep our previously described strategies for the baselines since they generate embeddings only for the last training time. Table 3 presents the performances of the models. It is noteworthy that while PIVEM outperforms the baselines significantly on the Synthetic $\left( \pi \right)$ network, it does not show promising results on Synthetic $\left( \mu \right)$ . Since the first network is compatible with our model, it successfully learns the dominant link pattern of the network. However, the second network conflicts with our model: it forms a completely different structure in every sub-interval of length 0.1. For the real datasets, we observe mostly on-par results, especially with LDM, since some real networks contain link patterns that are effectively "static" with respect to the future prediction task.
We have previously described how we set the prior coefficient, $\lambda$ ; we now examine the influence of the other hyper-parameters on the network reconstruction task over the Synthetic $\left( \pi \right)$ dataset.
Influence of dimension size (D). We report the AUC-ROC and AUC-PR scores in Figure 1b. When we increase the dimension size, we observe a steady increase in performance. It is not a surprising
Figure 2: Comparisons of the ground truth and learned representations in two-dimensional space.
result because the model's capacity also grows with the dimension. However, the two-dimensional space still provides comparable performance in the experiments, facilitating human insights into networks' complex, evolving structures.
Influence of bin count (B). Figure 1c demonstrates the effect of the number of bins for the network reconstruction task. Since we generated the Synthetic $\left( \pi \right)$ network with 100 bins, the performance stabilizes around ${2}^{6}$ bins, indicating that PIVEM has reached sufficient capacity to model the interactions among the nodes.
Latent Embedding Animation. Although many GRL methods show high performance in downstream tasks, they generally require high-dimensional spaces, so a post-processing step must later be applied in order to visualize the node representations in a low-dimensional space. However, such processes cause distortions in the embeddings, which can lead a practitioner to inaccurate conclusions about the data.
As we have seen in the experimental evaluations, our proposed approach successfully learns embeddings in the two-dimensional space, and it also produces continuous-time representations. Therefore, it offers the ability to animate how the network evolves through time, which can play a crucial role in grasping the underlying characteristics of the networks. As an illustrative example, Figure 2 compares the ground truth representations of Synthetic $\left( \pi \right)$ with the learned ones. The synthetic network consists of small communities of 5 nodes, and each color indicates one of these groups. Although the problem does not have a unique solution, it can be seen that our model successfully captures the clustering patterns in the network. We refer the reader to the supplementary materials for the full animation.
§ 5 CONCLUSION AND LIMITATIONS
In this paper, we have proposed a novel continuous-time dynamic network embedding approach, namely the Piecewise-Velocity Model (PIVEM). Its performance has been examined in various experiments, such as network reconstruction and completion, over various networks and against well-known baselines. We demonstrated that it can accurately embed nodes into a two-dimensional space. Therefore, it can be directly utilized to animate the learned node embeddings, which can be beneficial for extracting the networks' underlying characteristics and foreseeing how they will evolve through time. We also showed that the model scales to large networks.
Although our model successfully learns continuous-time representations, the temporal patterns it can capture are limited by the GP prior structure. Therefore, we plan to employ kernels other than the RBF, such as periodic kernels, in the prior. The optimization strategy of the proposed method might also be improved to better escape local minima. As a possible future direction, the algorithm can be adapted to other graph types, such as directed and multi-layer networks.

papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_md/Initial_manuscript.md

# On the Expressiveness and Generalization of Hypergraph Neural Networks
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions they can realize. The result is a hierarchy of problems they can solve, defined in terms of hyperparameters such as depth and edge arity. Next, we analyze the learning properties of these neural networks, focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by empirical results.
## 1 Introduction
Reasoning over graph-structured data is an important task in many applications, including molecule analysis, social network modeling, and knowledge graph reasoning [1-3]. While we have seen great success of various relational neural networks, such as Graph Neural Networks [GNNs; 4] and Neural Logic Machines [NLM; 5], in a variety of applications [6-8], we do not yet have a full understanding of how different design parameters, such as the depth of the neural network, affect the expressiveness of these models, or how effectively these models generalize from limited data.
This paper analyzes the expressiveness and generalization of relational neural networks applied to hypergraphs, which are graphs with edges connecting more than two nodes. We formally show "if and only if" conditions for the expressive power with respect to the edge arity: $k$ -ary hypergraph neural networks are sufficient and necessary for realizing FOC- $k$ , a fragment of first-order logic that involves at most $k$ variables. This result is helpful because we can now determine whether a specific hypergraph neural network can solve a problem by understanding what form of logic formula represents the solution to that problem. Next, we formally describe the relationship between expressiveness and non-constant-depth networks. We state a conjecture about the "depth hierarchy" and connect its potential proof to the distributed computing literature. Our results highlight that even when the inputs and outputs of models have only unary and binary relations, allowing intermediate hyperedge representations increases the expressiveness.
Furthermore, we prove that, under certain realistic assumptions, it is possible to train a hypergraph neural network on a finite set of small graphs so that it generalizes to arbitrarily large graphs. This ability is a result of the weight-sharing nature of hypergraph neural networks. We hope our work can serve as a foundation for designing hypergraph neural networks: to solve a specific problem, what arity is needed? What depth is needed? Will the model generalize structurally (i.e., to larger graphs)? Our theoretical results on learning are further supported by experiments that empirically demonstrate the theorems.
## 2 Hypergraph Reasoning Problems and Hypergraph Neural Networks
A hypergraph representation $G$ is a tuple $\left( {V, X}\right)$ , where $V$ is a set of entities (nodes) and $X$ is a set of hypergraph representation functions. Specifically, $X = \left\{ {{X}_{0},{X}_{1},{X}_{2},\cdots ,{X}_{k}}\right\}$ , where ${X}_{j} : \left( {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right) \rightarrow \mathcal{S}$ is a function mapping every tuple of $j$ nodes to a value. We call $j$ the arity of the hyperedge, and $k$ is the max arity of input hyperedges. The range $\mathcal{S}$ can be any set of discrete labels describing relation types, a scalar number (e.g., the length of an edge), or a vector. In general, we will use the arity-0 representation function ${X}_{0}\left( \varnothing \right) \rightarrow \mathcal{S}$ to represent any global properties of the graph as a whole.
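The definition above translates directly into a small data structure: one dictionary per arity, mapping ordered node tuples to values. The sketch below is illustrative only (toy relations, not from the paper's code): arity 0 holds a global label, arity 1 a per-node feature, and arity 2 an adjacency-like relation.

```python
from itertools import permutations

# A hypergraph representation G = (V, X) as in Section 2: X_j maps each
# ordered j-tuple of nodes to a value; arity 0 stores a global property.
V = [0, 1, 2]
X = {
    0: {(): "global-label"},
    1: {(v,): f"color-{v % 2}" for v in V},
    2: {t: int(abs(t[0] - t[1]) == 1) for t in permutations(V, 2)},
}

max_arity = max(X)                 # k, the max arity of input hyperedges
print(max_arity, X[2][(0, 1)], X[2][(0, 2)])  # 2 1 0
```

A graph reasoning function in this framework is then just a map from one such `(V, X)` pair to a new output function `Y` over the same node set.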
A graph reasoning function $f$ is a mapping from a hypergraph representation $G = \left( {V, X}\right)$ to another hyperedge representation function $Y$ on $V$ . As concrete examples, asking whether a graph is fully connected is a graph classification problem, where the output $Y = \left\{ {Y}_{0}\right\}$ and ${Y}_{0}\left( \varnothing \right) \rightarrow {\mathcal{S}}^{\prime } = \{ 0,1\}$ is a global label; finding the set of disconnected subgraphs of size $k$ is a $k$ -ary hyperedge classification problem, where the output $Y = \left\{ {Y}_{k}\right\}$ is a label for each $k$ -ary hyperedge.
There are two main motivations and constructions of a neural network applied to graph reasoning problems: message-passing-based and first-order-logic-inspired. Both approaches construct the computation graph layer by layer. The input to the entire neural network consists of the input features of nodes and hyperedges, while the output of the neural network is the per-node or per-edge prediction of desired properties, depending on the training task.
In a nutshell, within each layer, message-passing-based hypergraph neural networks, such as Higher-Order GNNs [9], perform message passing between each hyperedge and its neighbours. Specifically, the $j$ -th neighbour set of a hyperedge $u = \left( {{x}_{1},{x}_{2},\cdots ,{x}_{i}}\right)$ of arity $i$ is ${N}_{j}\left( u\right) = \left\{ \left( {{x}_{1},{x}_{2},\cdots ,{x}_{j - 1}, r,{x}_{j + 1},\cdots ,{x}_{i}}\right) \right\}$ , where $r \in V$ . The set of all neighbours of hyperedge $u$ is then the union of all the ${N}_{j}$ ’s, for $j = 1,2,\cdots , i$ .
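The neighbour sets $N_j(u)$ can be computed mechanically: replace the $j$-th entry of the tuple with every node in $V$. A minimal sketch (function names are illustrative):

```python
def jth_neighbours(u, V, j):
    """N_j(u): all tuples obtained by replacing the j-th entry of
    hyperedge u (1-indexed, as in the text) with every node r in V."""
    return [u[:j - 1] + (r,) + u[j:] for r in V]

def all_neighbours(u, V):
    """Union of N_1(u), ..., N_i(u) for a hyperedge u of arity i."""
    out = set()
    for j in range(1, len(u) + 1):
        out.update(jth_neighbours(u, V, j))
    return out

V = [0, 1, 2, 3]
u = (0, 1)
print(sorted(jth_neighbours(u, V, 2)))  # [(0, 0), (0, 1), (0, 2), (0, 3)]
print(len(all_neighbours(u, V)))        # 7 (u itself appears in every N_j)
```

Note that $u$ itself belongs to each $N_j(u)$ (taking $r = x_j$), so the union of the two 4-element sets here has 7 distinct tuples.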
On the other hand, first-order-logic-inspired hypergraph neural networks build neural networks that can emulate first-order logic formulas. Neural Logic Machines [NLM; 5] are defined in terms of a set of input hyperedges; each hyperedge of arity $k$ is represented by a vector of (possibly real) values obtained by applying all of the $k$ -ary predicates in the domain to the tuple of vertices it connects. Each layer in an NLM learns to apply a linear transformation with a nonlinear activation and quantification operators (analogous to the for-all $\forall$ and exists $\exists$ quantifiers in first-order logic) on these values. It is easy to prove, by construction, that given a sufficient number of layers and a sufficient maximum arity, NLMs can learn to realize any first-order logic formula. For readers who are not familiar with HO-GNNs [9] and NLMs [5], we include a mathematical summary of their computation graphs in Appendix B. Our analysis starts from the following theorem.
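One way the NLM quantification operators can be realized is with pooling over an object axis: max pooling plays the role of $\exists$ and min pooling the role of $\forall$, while an expand step broadcasts a new object axis to raise arity. The sketch below is a simplified illustration of this idea, not the actual NLM implementation (see [5] for that).

```python
import numpy as np

def expand(x, n):
    """Lift an arity-k tensor (n, ..., c) to arity k+1 by broadcasting a
    fresh object axis in front (one way the expand operator is realized)."""
    return np.broadcast_to(x[None], (n,) + x.shape).copy()

def reduce_(x):
    """Drop the leading object axis with max/min pooling: the neural
    analogue of exists / forall quantification over that variable."""
    return np.concatenate([x.max(axis=0), x.min(axis=0)], axis=-1)

# a binary relation "edge" on 3 nodes, one feature channel
edge = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], float)[..., None]
per_node = reduce_(edge)  # per node y: [exists x. edge(x,y), forall x. edge(x,y)]
print(per_node.shape)     # (3, 2)
```

Interleaving such expand/reduce steps with learned per-tuple linear layers is what lets an NLM mimic the structure of quantified first-order formulas.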
Theorem 2.1. HO-GNNs [9] are equivalent to NLMs in terms of expressiveness. Specifically, a $B$ -ary HO-GNN is equivalent to an NLM applied to $\left( {B + 1}\right)$ -ary hyperedges. Proofs are in Appendix B.3.
Given Theorem 2.1, we can focus our analysis on a single type of hypergraph neural network. Specifically, we will focus on Neural Logic Machines [NLM; 5] because their architecture naturally aligns with first-order logic formula structures, which will aid some of our analysis. An NLM is characterized by the hyperparameters $D$ (depth) and $B$ (maximum arity). We assume that $B$ is a constant, but $D$ may depend on the size of the input graph. We will use $\operatorname{NLM}\left\lbrack {D, B}\right\rbrack$ to denote an NLM family with depth $D$ and max arity $B$ . Other parameters, such as the width of the neural networks, affect the precise details of which functions can be realized, as in a regular neural network, but do not affect the analyses in this extended abstract. Furthermore, we focus on neural networks with bounded precision, and briefly discuss how our results generalize to the unbounded-precision case.
## 3 Expressiveness of Relational Neural Networks
We start from a formal definition of hypergraph neural network expressiveness.
Definition 3.1 (Expressiveness). We say a model family ${\mathcal{M}}_{1}$ is at least as expressive as ${\mathcal{M}}_{2}$ , written as ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$ , if for all ${M}_{2} \in {\mathcal{M}}_{2}$ , there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that ${M}_{1}$ can realize ${M}_{2}$ . A model family ${\mathcal{M}}_{1}$ is more expressive than ${\mathcal{M}}_{2}$ , written as ${\mathcal{M}}_{1} \succ {\mathcal{M}}_{2}$ , if ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$ and there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that no ${M}_{2} \in {\mathcal{M}}_{2}$ can realize ${M}_{1}$ .
Arity Hierarchy We first aim to quantify how the maximum arity $B$ of the network’s representation affects its expressiveness and find that, in short, even if the inputs and outputs of neural networks are of low arity, the higher the maximum arity for intermediate layers, the more expressive the NLM is.
Theorem 3.1 (Arity Hierarchy). For any maximum arity $B$, there exists a depth $D^*$ such that for all $D \geq D^*$, $\operatorname{NLM}[D, B+1]$ is more expressive than $\operatorname{NLM}[D, B]$. This theorem applies to both fixed-precision and unbounded-precision networks.

Proof sketch: Our proof slightly extends that of Morris et al. [9]. First, the set of graphs distinguishable by $\operatorname{NLM}[D, B]$ is bounded by the set of graphs distinguishable by a $D$-round order-$B$ Weisfeiler-Leman test [10]. If no model in $\operatorname{NLM}[D, B]$ can generate different outputs for two distinct hypergraphs $G_1$ and $G_2$, but there exists $M \in \operatorname{NLM}[D, B+1]$ that can, then we can construct a graph classification function $f$ that $\operatorname{NLM}[D, B+1]$ (with some fixed precision) can realize but $\operatorname{NLM}[D, B]$ (even with unbounded precision) cannot.* The full proof is in Appendix C.1.
It is also important to quantify the minimum arity needed to realize certain graph reasoning functions.

Theorem 3.2 (FOL realization bounds). Let $\mathrm{FOC}_B$ denote the fragment of first-order logic with at most $B$ variables, extended with counting quantifiers of the form $\exists^{\geq n}\phi$, which state that there are at least $n$ nodes satisfying formula $\phi$ [11].
- (Upper Bound) Any function $f$ in $\mathrm{FOC}_B$ can be realized by $\operatorname{NLM}[D, B]$ for some $D$.

- (Lower Bound) There exists a function $f \in \mathrm{FOC}_B$ such that for all $D$, $f$ cannot be realized by $\operatorname{NLM}[D, B-1]$.

Proof: The upper bound has been proved by Barceló et al. [12] for $B = 2$. The result generalizes easily to arbitrary $B$ because the counting quantifiers can be realized by sum aggregation. The lower bound can be proved by applying Section 5 of [11], which shows that $\mathrm{FOC}_B$ is equivalent to the $(B-1)$-dimensional WL test in distinguishing non-isomorphic graphs. Given that $\operatorname{NLM}[D, B-1]$ is equivalent to the $(B-2)$-dimensional WL test of graph isomorphism, there must be an $\mathrm{FOC}_B$ formula that distinguishes two non-isomorphic graphs that $\operatorname{NLM}[D, B-1]$ cannot. Hence, $\mathrm{FOC}_B$ cannot be realized by $\operatorname{NLM}[\cdot, B-1]$.
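To make the upper-bound argument concrete: a counting quantifier $\exists^{\geq n}\phi$ can be realized by sum aggregation over the per-node truth values of $\phi$, followed by a threshold. A minimal sketch (the function name and the example predicate vector are ours, not from the paper):

```python
import numpy as np

def exists_at_least(phi_values: np.ndarray, n: int) -> bool:
    """Realize the counting quantifier (exists >= n nodes x with phi(x))
    by sum aggregation over per-node truth values, then a threshold."""
    return bool(phi_values.sum() >= n)

# phi holds for 3 of the 5 nodes
phi = np.array([1, 0, 1, 1, 0])
exists_at_least(phi, 2)  # True: at least 2 nodes satisfy phi
exists_at_least(phi, 4)  # False: only 3 nodes satisfy phi
```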
**Depth Hierarchy** We now study the dependence of the expressiveness of NLMs on the depth $D$. Neural networks are generally defined to have a fixed depth, but allowing the depth to depend on the number of nodes $n = |V|$ in the graph can substantially increase their expressive power. In the following, we define a depth hierarchy by analogy to the time hierarchy in computational complexity theory [13], and we extend our notation to let $\operatorname{NLM}[O(f(n)), B]$ denote the class of adaptive-depth NLMs in which the growth rate of the depth $D$ is bounded by $O(f(n))$.

Conjecture 3.3 (Depth hierarchy). For any maximum arity $B$ and any two functions $f$ and $g$, if $g(n) = o(f(n)/\log n)$, that is, $f$ grows faster than $g$ by more than a logarithmic factor, then fixed-precision $\operatorname{NLM}[O(f(n)), B]$ is more expressive than fixed-precision $\operatorname{NLM}[O(g(n)), B]$.
There is a closely related result for the congested clique model in distributed computing: Korhonen and Suomela [14] proved that $\operatorname{CLIQUE}(g(n)) \varsubsetneq \operatorname{CLIQUE}(f(n))$ if $g(n) = o(f(n))$. Their result does not have the $\log n$ gap because the congested clique model allows $\log n$ bits to be transmitted between nodes at each iteration, whereas a fixed-precision NLM allows only a constant number of bits. The congested clique result cannot be applied to fixed-precision NLMs because the congested clique model assumes an unbounded-precision representation for each individual node.

However, Conjecture 3.3 does not hold for NLMs with unbounded precision, because there is an upper-bound depth $O(n^{B-1})$ on a model's expressive power.† That is, an unbounded-precision NLM cannot achieve stronger expressiveness by increasing its depth beyond $O(n^{B-1})$.
It is important to point out that, to realize a specific graph reasoning function, NLMs with different maximum arity $B$ may require different depth $D$. Fürer [15] provides a general construction for problems that higher-dimensional NLMs can solve in asymptotically smaller depth than lower-dimensional NLMs. In the following we give a concrete example, S-T Connectivity-$k$, which asks whether there is a path from node $S$ to node $T$ in a graph with length at most $k$.

Theorem 3.4 (S-T Connectivity-$k$ with Different Max Arity). For any function $f(k)$ with $f(k) = o(k)$, $\operatorname{NLM}[O(f(k)), 2]$ cannot realize S-T Connectivity-$k$. That is, S-T Connectivity-$k$ requires depth at least $O(k)$ for a relational neural network with a maximum arity of $B = 2$. However, S-T Connectivity-$k$ can be realized by $\operatorname{NLM}[O(\log k), 3]$.

Proof sketch. For any integer $k$, we can construct a graph with two chains of length $k$, so that if we mark two of the four ends as $S$ and $T$, no model in $\operatorname{NLM}[k-1, 2]$ can tell whether $S$ and $T$ are on the same chain. The full proof is described in Appendix C.3.
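The $O(\log k)$-depth, arity-3 construction follows the classic repeated-squaring idea: a ternary intermediate relation lets each iteration compose two reachability relations through a shared middle node, doubling the path length handled per step. A sketch of the underlying computation with plain Boolean matrices (this illustrates the algorithmic idea, not an actual NLM implementation):

```python
import numpy as np

def bool_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Boolean matrix product: out[i, j] = exists u with a[i, u] and b[u, j]."""
    return (a.astype(int) @ b.astype(int)) > 0

def st_connectivity_k(adj: np.ndarray, s: int, t: int, k: int) -> bool:
    """Decide whether there is a path of length <= k from s to t using
    O(log k) Boolean matrix products (exponentiation by squaring of
    adj | I). The quantified middle node u in each product plays the
    role of the third variable that NLM[O(log k), 3] has available."""
    n = adj.shape[0]
    step = adj.astype(bool) | np.eye(n, dtype=bool)  # move one edge or stay
    reach = np.eye(n, dtype=bool)                    # (adj | I)^0
    while k:
        if k & 1:
            reach = bool_matmul(reach, step)
        step = bool_matmul(step, step)
        k >>= 1
    return bool(reach[s, t])

# A chain 0 - 1 - 2 - 3: node 3 is exactly 3 hops from node 0.
chain = np.zeros((4, 4), dtype=bool)
for i in range(3):
    chain[i, i + 1] = chain[i + 1, i] = True
st_connectivity_k(chain, 0, 3, 3)  # True
st_connectivity_k(chain, 0, 3, 2)  # False
```

A binary ($B = 2$) message-passing network, by contrast, can only extend reachability by one edge per layer, which matches the $O(k)$ lower bound in Theorem 3.4.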
There are many important graph reasoning tasks that do not have known depth lower bounds, including all-pair connectivity and shortest distance [16, 17]. In Appendix C.3, we discuss the concrete complexity bounds for a series of graph reasoning problems.
---

* Note that the arity hierarchy applies to fixed-precision and unbounded-precision networks separately. For example, $\operatorname{NLM}[D, B]$ with unbounded precision is incomparable with $\operatorname{NLM}[D, B+1]$ with fixed precision.

† See Appendix C.2 for a formal statement and the proof.
---
## 4 Learning and Generalization in Relational Neural Networks
Given our understanding of which functions can be realized by NLMs, we move on to the problem of learning them: can we effectively train an NLM to solve a desired task, given a sufficient number of input-output examples? In this paper, we show that enumerative training on examples up to some fixed graph size ensures that the trained network generalizes to all graphs larger than those appearing in the training set.
A critical determinant of the generalization ability of NLMs is the aggregation function they use. Specifically, Xu et al. [18] have shown that sum aggregation provides maximum expressiveness for graph neural networks. However, sum aggregation cannot be implemented in fixed-precision models over an arbitrary number of nodes, because as the graph size $n$ increases, the range of the sum also increases.
Definition 4.1 (Fixed-precision aggregation function). An aggregation function is fixed-precision if it maps any finite set of inputs, with values drawn from finite domains, to a fixed finite set of possible output values; that is, the cardinality of the range of the function cannot grow with the number of elements in the input set. Two useful fixed-precision aggregation functions are max, which computes the dimension-wise maximum over the set of input values, and fixed-precision mean, which approximates the dimension-wise mean to a fixed number of decimal places.
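The two aggregation functions named in Definition 4.1 can be sketched in a few lines; the rounding precision is an illustrative parameter of ours:

```python
import numpy as np

def max_aggregate(features: np.ndarray) -> np.ndarray:
    """Dimension-wise maximum over a set of input vectors (rows).
    The range is the (finite) input value domain itself, so the output
    domain does not grow with the number of inputs."""
    return features.max(axis=0)

def fixed_precision_mean(features: np.ndarray, decimals: int = 2) -> np.ndarray:
    """Dimension-wise mean rounded to a fixed number of decimal places,
    so the output is drawn from a fixed finite set of values."""
    return np.round(features.mean(axis=0), decimals)

x = np.array([[0.1, 1.0],
              [0.3, 0.0],
              [0.2, 1.0]])
max_aggregate(x)         # array([0.3, 1. ])
fixed_precision_mean(x)  # array([0.2 , 0.67])
```

Sum aggregation, by contrast, has a range that grows linearly with the number of rows, which is why it is excluded by the definition.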
In order to focus on structural generalization in this section, we consider an enumerative training paradigm. When the input hypergraph representation domain $\mathcal{S}$ is a finite set, we can enumerate the set $\mathcal{G}_{\leq N}$ of all possible input hypergraph representations of size at most $N$: we first enumerate all graph sizes $n \leq N$; for each $n$, we enumerate all possible values assigned to the hyperedges in the input. Given a training size $N$, we enumerate all inputs in $\mathcal{G}_{\leq N}$, associate with each one the corresponding ground-truth output representation, and train the model on these input-output pairs.
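For intuition, with binary-valued edges the enumeration of $\mathcal{G}_{\leq N}$ can be sketched as follows (we specialize to undirected simple graphs for concreteness; the representation as edge sets is ours):

```python
from itertools import combinations, product

def enumerate_graphs(max_size: int):
    """Enumerate all labeled undirected simple graphs with at most
    max_size nodes, as (n, frozenset of edges). With binary-valued
    edges the set is finite for each n, but grows as 2^(n(n-1)/2)."""
    for n in range(1, max_size + 1):
        slots = list(combinations(range(n), 2))   # candidate edges
        for bits in product([0, 1], repeat=len(slots)):
            edges = frozenset(e for e, b in zip(slots, bits) if b)
            yield n, edges

len(list(enumerate_graphs(3)))  # 1 + 2 + 8 = 11 labeled graphs
```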
This has much stronger data requirements than standard sampling-based training in machine learning. In practice, it can be approximated well when the input domain $\mathcal{S}$ is small and the input data distribution is approximately uniform. The enumerative learning setting is studied by the language-identification-in-the-limit community [19], in which it is called complete presentation. It is an interesting learning setting because even if the domain of each individual hyperedge representation is finite, the graph size can grow arbitrarily large, so the number of possible inputs is enumerable but unbounded.
Theorem 4.1 (Fixed-precision generalization under complete presentation). For any hypergraph reasoning function $f$, if it can be realized by a fixed-precision relational neural network family $\mathcal{M}$, then there exists an integer $N$ such that, if we train the model with complete presentation on $\mathcal{G}_{\leq N}$, the set of all input hypergraph representations of size at most $N$, then for all $M \in \mathcal{M}$,

$$
\sum_{G \in \mathcal{G}_{\leq N}} 1\left[ M(G) \neq f(G) \right] = 0 \;\Rightarrow\; \forall G \in \mathcal{G}_{\infty} : M(G) = f(G).
$$
That is, as long as $M$ fits all training examples, it will generalize to all possible hypergraphs in ${\mathcal{G}}_{\infty }$ .
Proof. The key observation is that, for any fixed vector representation length $W$, there are only a finite number of distinct models in a fixed-precision NLM family, independent of the graph size $n$. Let $W_b$ be the number of bits in each intermediate representation of a fixed-precision NLM. There are at most ${(2^{W_b})}^{2^{W_b}}$ different mappings from inputs to outputs. Hence, for a sufficiently large $N$, any two inequivalent models already disagree on some input in $\mathcal{G}_{\leq N}$, so fitting all training examples identifies a correct model in the hypothesis space.
Our results are related to the algorithmic alignment approach [20, 21]. In contrast to their Probably Approximately Correct (PAC) learning bounds on sample efficiency, our expressiveness results directly quantify whether a hypergraph neural network can be trained to realize a specific function. Our generalization theorem applies more generally than their result on learning the Max-Degree function, owing to the fixed-precision assumption.
## 5 Conclusion
In this extended abstract, we have shown the substantial increase in expressive power due to higher-arity relations and increased depth, and have characterized a strong form of structural generalization from training on small graphs to performance on larger ones. We further discuss the relationship between these results and existing work in Appendix A. All theoretical results are supported by the empirical results discussed in Appendix D. Although many questions remain open about the overall generalization capacity of these models in continuous and noisy domains, we believe this work sheds light on their utility and potential for application to a variety of problems.
## References

[1] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In ICML, 2017.

[2] Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In ESWC, 2018.

[3] Qi Liu, Maximilian Nickel, and Douwe Kiela. Hyperbolic graph neural networks. IEEE Transactions on Knowledge and Data Engineering, 2017.

[4] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2008.

[5] Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. Neural logic machines. In ICLR, 2019.

[6] Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.

[7] Christian Merkwirth and Thomas Lengauer. Automatic generation of complementary descriptors with molecular graph networks. Journal of Chemical Information and Modeling, 45(5):1159-1168, 2005.

[8] Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In ICLR, 2020.

[9] Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In AAAI, 2019.

[10] A. A. Leman and B. Weisfeiler. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsiya, 2(9):12-16, 1968.

[11] Jin-Yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389-410, 1992.

[12] Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In ICLR, 2020.

[13] Juris Hartmanis and Richard E. Stearns. On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117:285-306, 1965.

[14] Janne H. Korhonen and Jukka Suomela. Towards a complexity theory for the congested clique. In SPAA, 2018.

[15] Martin Fürer. Weisfeiler-Lehman refinement requires at least a linear number of iterations. In ICALP, 2001.

[16] Mauricio Karchmer and Avi Wigderson. Monotone circuits for connectivity require super-logarithmic depth. SIAM Journal on Discrete Mathematics, 3(2):255-265, 1990.

[17] Shreyas Pai and Sriram V. Pemmaraju. Connectivity lower bounds in broadcast congested clique. In PODC, 2019.

[18] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.

[19] E. Mark Gold. Language identification in the limit. Information and Control, 10(5):447-474, 1967.

[20] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? In ICLR, 2020.

[21] Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. How neural networks extrapolate: From feedforward to graph neural networks. In ICLR, 2021.

[22] Yuan Li, Alexander Razborov, and Benjamin Rossman. On the AC^0 complexity of subgraph isomorphism. SIAM Journal on Computing, 46(3):936-971, 2017.

[23] Benjamin Rossman. Average-Case Complexity of Detecting Cliques. PhD thesis, Massachusetts Institute of Technology, 2010.

[24] Andreas Loukas. What graph neural networks cannot learn: depth vs width. In ICLR, 2020.

[25] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers. In ICLR, 2019.

[26] Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph neural networks. In AAAI, 2019.

[27] Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha Talukdar. HyperGCN: A new method of training graph convolutional networks on hypergraphs. In NeurIPS, 2019.

[28] Song Bai, Feihu Zhang, and Philip H. S. Torr. Hypergraph convolution and hypergraph attention. Pattern Recognition, 110:107637, 2021.

[29] Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, and Huan Liu. Be more with less: Hypergraph attention networks for inductive text classification. In EMNLP, 2020.

[30] Jing Huang and Jie Yang. UniGNN: A unified framework for graph and hypergraph neural networks. In IJCAI, 2021.

[31] Sandra Kiefer and Pascal Schweitzer. Upper bounds on the quantifier depth for graph differentiation in first order logic. In LICS, 2016.

[32] Efim A. Dinic. Algorithm for solution of a problem of maximum flow in networks with power estimation. Soviet Math. Doklady, volume 11, pages 1277-1280, 1970.
## Appendix
The appendix is organized as follows. In Appendix A, we discuss related work. In Appendix B, we formalize the two types of hypergraph neural networks discussed in the main paper and prove their equivalence. In Appendix C, we prove the theorems for the arity hierarchy and provide concrete examples for the expressiveness analyses. Finally, in Appendix D, we include additional experimental results that empirically illustrate the application of the theorems discussed in the paper.
## A Related Work
Solving problems on graphs of arbitrary size is studied in many fields. NLMs can be viewed as circuit families with a constrained architecture. In distributed computation, the congested clique model can be viewed as a family of 2-ary NLMs in which nodes have identities as extra information. Common graph problems, including sub-structure detection [22, 23] and connectivity [16], have been studied for lower bounds in terms of depth, width, and communication. These results have been connected to GNNs to derive expressiveness bounds [24].
Several studies have examined the expressiveness of GNNs and their variants. Xu et al. [18] provide an illuminating characterization of GNN expressiveness in terms of the WL graph isomorphism test. Barceló et al. [12] reviewed GNNs from the logical perspective and rigorously characterized their logical expressiveness with respect to fragments of first-order logic. Dong et al. [5] proposed Neural Logic Machines (NLMs) to reason about higher-order relations, and showed that increasing the order increases expressiveness. It is also possible to gain expressiveness through unbounded computation time, as shown by the work of Dehghani et al. [25] on dynamic halting in transformers.
Interestingly, GNNs may generalize to larger graphs. Xu et al. [20, 21] studied the notion of algorithmic alignment to quantify such structural generalization. Dong et al. [5] provided empirical results showing that NLMs generalize to much larger graphs on certain tasks. Xu et al. [20] analyzed and compared the sample complexity of graph neural networks, which differs from our notion of expressiveness for realizing functions. Xu et al. [21] showed empirically on several problems (e.g., Max-Degree, Shortest Path, and the n-body problem) that algorithmic alignment helps GNNs extrapolate, and theoretically proved the improvement from algorithmic alignment on the Max-Degree problem. In this extended abstract, instead of focusing on specific graph problems, we analyze how GNNs can extrapolate to larger graphs in the general case, based on the assumption of fixed-precision computation.
## B Hypergraph Neural Networks
We now introduce two important hypergraph neural network implementations that can be trained to solve graph reasoning problems: Higher-order Graph Neural Networks [HO-GNN; 9] and Neural Logic Machines [NLM; 5]. They are equivalent to each other in terms of expressiveness. Showing this equivalence allows us to focus the rest of the paper on analyzing a single model type, with the understanding that the conclusions generalize to a broader class of hypergraph neural networks.
### B.1 Higher-order Graph Neural Networks
Higher-order Graph Neural Networks [HO-GNNs; 9] are Graph Neural Networks (GNNs) that apply to hypergraphs. A GNN is usually defined in terms of two message passing operations.
- Edge update: the feature of each edge is updated using the features of its endpoints.

- Node update: the feature of each node is updated using the features of all edges adjacent to it.
However, computing only node-wise and edge-wise features does not handle higher-order relations, such as triangles in the graph. To obtain more expressive power, GNNs have been extended to hypergraphs of higher arity [9]. Specifically, an HO-GNN on a $B$-ary hypergraph maintains features for all $B$-tuples of nodes, and the neighborhood is extended to $B$-tuples accordingly: the feature of tuple $(v_1, v_2, \cdots, v_B)$ is updated using the $|V|$-element multiset (containing one element for each $u \in V$) of $B$-tuples of features
$$
\left( H_{i-1}\left[u, v_2, \cdots, v_B\right], H_{i-1}\left[v_1, u, v_3, \cdots, v_B\right], \cdots, H_{i-1}\left[v_1, \cdots, v_{B-1}, u\right] \right) \tag{B.1}
$$
Figure 1: The overall architecture of our Neural Logic Machines (NLMs). It follows the computation graph of NLM [5] and can be applied to hypergraphs.
where ${H}_{i - 1}\left\lbrack \mathbf{v}\right\rbrack$ is the feature of tuple $\mathbf{v}$ from the previous iteration.
We now formally define this high-dimensional message passing. We denote by $\mathbf{v}$ a $B$-tuple of nodes $(v_1, v_2, \cdots, v_B)$, and generalize the neighborhood to higher dimensions by defining the neighborhood of $\mathbf{v}$ as all node tuples that differ from $\mathbf{v}$ in one position:
$$
\operatorname{Neighbors}(\mathbf{v}, u) = \left( (u, v_2, \cdots, v_B), (v_1, u, v_3, \cdots, v_B), \cdots, (v_1, \cdots, v_{B-1}, u) \right) \tag{B.2}
$$

$$
N(\mathbf{v}) = \{ \operatorname{Neighbors}(\mathbf{v}, u) \mid u \in V \} \tag{B.3}
$$
The message passing scheme then naturally generalizes to high-dimensional features using this high-dimensional neighborhood:
$$
\operatorname{Received}_i[\mathbf{v}] = \sum_{u} \operatorname{NN}_1\left( H_{i-1}[\mathbf{v}]; \operatorname{CONCAT}_{\mathbf{v}' \in \operatorname{Neighbors}(\mathbf{v}, u)} H_{i-1}[\mathbf{v}'] \right) \tag{B.4}
$$
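The tuple neighborhood of Eqs. (B.2) and (B.3) can be sketched directly; the function names are ours:

```python
def neighbors(v: tuple, u):
    """All B-tuples that differ from v in exactly one position,
    with node u substituted there (Eq. B.2)."""
    return tuple(v[:i] + (u,) + v[i + 1:] for i in range(len(v)))

def neighborhood(v: tuple, nodes):
    """N(v): one Neighbors(v, u) entry per node u in V (Eq. B.3)."""
    return {u: neighbors(v, u) for u in nodes}

neighbors((1, 2, 3), 9)  # ((9, 2, 3), (1, 9, 3), (1, 2, 9))
```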
### B.2 Neural Logic Machines
An NLM is a multi-layer neural network that operates on hypergraph representations, in which the hypergraph representation functions are stored as tensors. The input is a hypergraph representation $(V, X)$. There are then several computational layers, each of which produces a hypergraph representation with nodes $V$ and a new set of representation functions. Specifically, a $B$-ary NLM produces hypergraph representation functions with arities from 0 up to a maximum hyperedge arity of $B$. We let $T_{i,j}$ denote the tensor representation of the output at layer $i$ and arity $j$. Each entry in the tensor maps a tuple of node indices $(v_1, v_2, \cdots, v_j)$ to a vector in a latent space $\mathbb{R}^W$. Thus, $T_{i,j}$ is a tensor of $j+1$ dimensions, with the first $j$ dimensions corresponding to the $j$-tuple of nodes and the last being the feature dimension. For convenience, we write $T_{0,\cdot}$ for the input hypergraph representation and $T_{D,\cdot}$ for the output of the NLM.
Fig. 1a shows the overall architecture of NLMs. It has $D \times B$ computation blocks, namely relational reasoning layers (RRLs). Each block $\operatorname{RRL}_{i,j}$, illustrated in Fig. 1b, takes the outputs of neighboring arities from the previous layer, $T_{i-1,j-1}$, $T_{i-1,j}$, and $T_{i-1,j+1}$, and produces $T_{i,j}$. Below we describe each primitive operation in an RRL.
The expand operation takes tensor $T_{i-1,j-1}$ (arity $j-1$) and produces a new tensor $T^E_{i-1,j-1}$ of arity $j$. The reduce operation takes tensor $T_{i-1,j+1}$ (arity $j+1$) and produces a new tensor $T^R_{i-1,j+1}$ of arity $j$. Mathematically,
$$
T^E_{i-1,j-1}\left[v_1, v_2, \cdots, v_j\right] = T_{i-1,j-1}\left[v_1, v_2, \cdots, v_{j-1}\right];
$$

$$
T^R_{i-1,j+1}\left[v_1, v_2, \cdots, v_j\right] = \operatorname{Agg}_{v_{j+1}}\left\{ T_{i-1,j+1}\left[v_1, v_2, \cdots, v_j, v_{j+1}\right] \right\}.
$$
Here, Agg is called the aggregation function of an NLM. For example, a sum aggregation function takes the summation along dimension $j+1$ of the tensor, and a max aggregation function takes the maximum along that dimension.
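With tensors indexed as described above (the first $j$ axes for nodes, the last axis for features), expand and reduce can be sketched in a few lines of numpy; `n` is the number of nodes, and max is used as an example aggregation:

```python
import numpy as np

def expand(t: np.ndarray, n: int) -> np.ndarray:
    """Arity j-1 -> arity j: broadcast over a new node axis v_j,
    inserted just before the feature axis."""
    return np.broadcast_to(np.expand_dims(t, axis=-2),
                           t.shape[:-1] + (n,) + t.shape[-1:])

def reduce_max(t: np.ndarray) -> np.ndarray:
    """Arity j+1 -> arity j: max-aggregate over the last node axis v_{j+1}."""
    return t.max(axis=-2)

n, W = 4, 8
unary = np.random.rand(n, W)   # arity-1 representation
binary = expand(unary, n)      # arity-2, shape (n, n, W)
back = reduce_max(binary)      # arity-1 again, shape (n, W)
```

Since the expanded tensor is constant along the new axis, reducing it immediately recovers the original representation.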
The concat (concatenate) operation $\oplus$ is applied along the vector-representation dimension. The permute operation generates a new tensor of the same arity, but fuses the representations of hyperedges that share the same set of entities in different orders, such as $(v_1, v_2)$ and $(v_2, v_1)$. Mathematically, for a tensor $X$ of arity $j$, if $Y = \operatorname{permute}(X)$, then
$$
Y\left[v_1, v_2, \cdots, v_j\right] = \operatorname{Concat}_{\sigma \in S_j}\left\{ X\left[v_{\sigma_1}, v_{\sigma_2}, \cdots, v_{\sigma_j}\right] \right\},
$$
where $\sigma \in S_j$ ranges over all permutations of $\{1, 2, \cdots, j\}$. $\operatorname{NN}_j$ is a multi-layer perceptron (MLP), with a nonlinearity such as ReLU, applied to each entry of the tensor produced after permutation.
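A sketch of the permute operation for an arity-$j$ tensor; note that the feature axis grows by a factor of $j!$:

```python
import numpy as np
from itertools import permutations

def permute(x: np.ndarray) -> np.ndarray:
    """Concatenate, along the feature axis, the tensor transposed under
    every permutation of its j node axes, so that entries indexed by the
    same node set in different orders see each other's features."""
    j = x.ndim - 1  # last axis is the feature axis
    views = [np.transpose(x, axes=tuple(sigma) + (j,))
             for sigma in permutations(range(j))]
    return np.concatenate(views, axis=-1)

x = np.random.rand(3, 3, 5)  # arity-2 tensor, feature width 5
y = permute(x)               # shape (3, 3, 10): x[v1, v2] ++ x[v2, v1]
```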
It is important to note that we intentionally name the MLPs $\operatorname{NN}_j$ instead of $\operatorname{NN}_{i,j}$. In generalized relational neural networks, for a given arity $j$, the MLPs are shared across all layers $i$. It is straightforward to see that this weight-shared model can realize a non-weight-shared NLM, which uses different MLP weights at different layers, when the number of layers is a constant: with a sufficiently large representation vector, we can simulate the application of different transformations by constructing block-matrix weights. (A more formal proof is in Appendix B.3.) The advantage of this weight sharing is that the network can easily be extended to a recurrent model: for example, we can apply the NLM for a number of layers that is a function of $n$, the number of nodes in the input graph. Thus, we use the terms layers and iterations interchangeably.
Handling higher-arity features and using deeper models usually increases the computational cost. In Appendix B.5, we show that the time and space complexity of $\operatorname{NLM}[D, B]$ is $O(D n^B)$.
Note that even when hyperparameters such as the maximum arity and the number of iterations are fixed, an NLM is still a model family $\mathcal{M}$: the weights of the MLPs will be trained on data. Each model $M \in \mathcal{M}$ is an NLM with a specific set of MLP weights.
### B.3 Expressiveness Equivalence of Relational Neural Networks
Since we study both constant-depth and adaptive-depth graph neural networks, we first prove the following lemma (for general multi-layer neural networks), which simplifies the analysis.
Lemma B.1. A neural network with representation width $W$ that has $D$ different layers $\operatorname{NN}_1, \cdots, \operatorname{NN}_D$ can be realized by a neural network that applies a single layer $\operatorname{NN}'$ for $D$ iterations with width $(D+1)(W+1)$.
Proof. The representation for $\operatorname{NN}'$ is partitioned into $D+1$ segments, each of length $W+1$. Each segment consists of a "flag" element and a $W$-element representation, all initially 0, except for the first segment, where the flag is set to 1 and the representation holds the input.
$\operatorname{NN}'$ contains the weights of all of $\operatorname{NN}_1, \cdots, \operatorname{NN}_D$, where the weights of $\operatorname{NN}_i$ are used to compute the representation in segment $i+1$ from the representation in segment $i$. Additionally, at each iteration, segment $i+1$ is computed only if the flag in segment $i$ is 1, in which case the flag of segment $i+1$ is set to 1. Clearly, after $D$ iterations, the output of $\operatorname{NN}_D$ is the representation in segment $D+1$.
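The construction can be sketched with plain Python functions standing in for the layers; the flag bookkeeping is exactly as in the proof (the simulation harness itself is ours):

```python
def simulate_shared(layers, x):
    """Run Lemma B.1's single shared layer for D iterations.
    State: D+1 segments of (flag, representation); segment i+1 is filled
    by layers[i] once segment i's flag is set."""
    D = len(layers)
    segments = [(1, x)] + [(0, None)] * D  # segment 0 holds the input
    for _ in range(D):  # one application of the shared layer NN'
        new = list(segments)
        for i in range(D):
            flag, rep = segments[i]
            if flag and new[i + 1][0] == 0:
                new[i + 1] = (1, layers[i](rep))
        segments = new
    return segments[D][1]  # output of NN_D lives in the last segment

layers = [lambda v: [2 * e for e in v],   # NN_1
          lambda v: [e + 1 for e in v],   # NN_2
          lambda v: [-e for e in v]]      # NN_3
simulate_shared(layers, [1, 2])  # same as layers[2](layers[1](layers[0]([1, 2])))
```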
|
| 274 |
+
|
| 275 |
+
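As a concrete (hypothetical) illustration of this construction, the sketch below packs two layers into one recurrent, flag-gated update. The function and layer names are ours, and the layers are plain linear maps for brevity; the lemma itself applies to general layers.

```python
import numpy as np

# Sketch of Lemma B.1 (our simplification, not the paper's code): D layers of
# width W are packed into one recurrent layer of width (D+1)*(W+1). Segment i
# holds a 1-bit flag plus a W-dim representation; the shared update pushes
# segment i through layer i's weights into segment i+1 only when the flag of
# segment i is set.
def run_recurrent(layers, x):
    D, W = len(layers), x.shape[0]
    state = np.zeros((D + 1) * (W + 1))
    state[0] = 1.0          # flag of segment 1
    state[1:W + 1] = x      # input stored in segment 1
    for _ in range(D):
        new = state.copy()
        for i in range(D):
            seg, nxt = i * (W + 1), (i + 1) * (W + 1)
            if state[seg] == 1.0 and new[nxt] == 0.0:
                new[nxt] = 1.0  # propagate the flag
                new[nxt + 1:nxt + 1 + W] = layers[i] @ state[seg + 1:seg + 1 + W]
        state = new
    return state[D * (W + 1) + 1:]  # representation of segment D+1

layers = [np.eye(2) * 2, np.array([[0., 1.], [1., 0.]])]  # double, then swap
out = run_recurrent(layers, np.array([1.0, 3.0]))         # -> [6., 2.]
```
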
Due to Lemma B.1, we consider neural networks that recurrently apply the same layer, because (a) they are as expressive as those using layers with different weights, (b) it is easier to analyze a single neural network layer than $D$ layers, and (c) they naturally generalize to neural networks that run for an adaptive number of iterations (e.g., GNNs that run $O\left( {\log n}\right)$ iterations, where $n$ is the size of the input graph).

We first describe a framework for quantifying whether two hypergraph neural network models are equally expressive on regression tasks (which are more general than classification problems). The framework views expressiveness from the perspective of computation. Specifically, we prove the expressiveness equivalence between models by showing that their computations can be aligned.

In complexity theory, we usually show that one problem is at least as hard as another by reducing the latter to the former. Similarly, for the expressiveness of NLMs, we can construct a reduction from model family $\mathcal{A}$ to model family $\mathcal{B}$ to show that $\mathcal{B}$ can realize all computation that $\mathcal{A}$ does, or even more. Formally, we have the following definition.

Definition B.1 (Expressiveness reduction). For two model families $\mathcal{A}$ and $\mathcal{B}$, we say $\mathcal{A}$ can be reduced to $\mathcal{B}$ if and only if there is a function $r : \mathcal{A} \rightarrow \mathcal{B}$ such that for each model instance $A \in \mathcal{A}$, $r\left( A\right)$ and $A$ have the same outputs on all inputs. In this case, we say $\mathcal{B}$ is at least as expressive as $\mathcal{A}$.

Definition B.2 (Expressiveness equivalence). For two model families $\mathcal{A}$ and $\mathcal{B}$, if $\mathcal{A}$ and $\mathcal{B}$ can be reduced to each other, then $\mathcal{A}$ and $\mathcal{B}$ are equally expressive. Note that this definition of expressiveness equivalence generalizes to both classification and regression tasks.

Equivalence between HO-GNNs and NLMs. We will prove the equivalence between HO-GNNs and NLMs by constructing reductions in both directions.

Lemma B.2. A $B$-ary HO-GNN with depth $D$ can be realized by an NLM with maximum arity $B + 1$ and depth ${2D}$.

Proof. We prove Lemma B.2 by showing that one layer of a GNN on $B$-ary hypergraphs can be realized by two NLM layers with maximum arity $B + 1$.

First, a GNN layer maintains features of $B$-tuples, which are stored correspondingly in an NLM layer at dimension $B$. We then realize the message passing scheme using the NLM features of dimensions $B$ and $B + 1$ in two steps.

Recall the message passing scheme generalized to high dimensions (to distinguish the two models, we use $H$ for HO-GNN features and $T$ for NLM features):

$$
{\operatorname{Received}}_{i}\left( \mathbf{v}\right) = \mathop{\sum }\limits_{u}\left( {{\mathrm{{NN}}}_{1}\left( {{H}_{i - 1, B}\left\lbrack \mathbf{v}\right\rbrack ;{\operatorname{CONCAT}}_{{\mathbf{v}}^{\prime } \in \text{ neighbors }\left( {\mathbf{v}, u}\right) }{H}_{i - 1}\left\lbrack {\mathbf{v}}^{\prime }\right\rbrack }\right) }\right) \tag{B.5}
$$

At the first step, the Expand operation raises the dimension to $B + 1$ by appending an unrelated variable $u$ at the end, and the Permute operation can then swap $u$ with each of the elements (or perform no swap). In particular, ${T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack$ will be expanded to

$$
{T}_{i + 1, B + 1}\left\lbrack {u,{v}_{2},{v}_{3},\cdots ,{v}_{B},{v}_{1}}\right\rbrack ,{T}_{i + 1, B + 1}\left\lbrack {{v}_{1}, u,{v}_{3},\cdots ,{v}_{B},{v}_{2}}\right\rbrack ,\cdots ,
$$

$$
{T}_{i + 1, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B - 1}, u,{v}_{B}}\right\rbrack \text{, and }{T}_{i + 1, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B - 1},{v}_{B}, u}\right\rbrack
$$

Hence, ${T}_{i + 1, B + 1}\left\lbrack {{v}_{1},{v}_{2},{v}_{3},\cdots ,{v}_{B}, u}\right\rbrack$ receives the features from

$$
{T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack ,{T}_{i, B}\left\lbrack {u,{v}_{2},{v}_{3},\cdots ,{v}_{B}}\right\rbrack ,{T}_{i, B}\left\lbrack {{v}_{1}, u,{v}_{3},\cdots ,{v}_{B}}\right\rbrack ,\cdots ,{T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B - 1}, u}\right\rbrack
$$

These features match the input of ${\mathrm{{NN}}}_{1}$ in Equation B.5, so in this layer ${\mathrm{{NN}}}_{1}$ can be applied to compute the terms inside the summation.

Then, at the second step, the last element is reduced to obtain what tuple $\mathbf{v}$ should receive, so $\mathbf{v}$ can be updated. Since each HO-GNN layer can be realized by two such NLM layers, each $B$-ary HO-GNN with depth $D$ can be realized by an NLM of maximum arity $\left( {B + 1}\right)$ and depth ${2D}$.

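The Expand, Permute, and Reduce tensor primitives used above can be sketched with plain array operations. This is a minimal illustration with our own variable names (feature width 1, binary relation), not the paper's implementation:

```python
import numpy as np

# Minimal sketch of the three NLM tensor primitives on a binary relation
# T[v1, v2] over n = 4 objects.
n = 4
T = np.random.rand(n, n)

# Expand: append an unrelated variable u, raising arity from 2 to 3.
expanded = np.broadcast_to(T[:, :, None], (n, n, n))  # T'[v1, v2, u] = T[v1, v2]

# Permute: swap u into another position, e.g. T''[v1, u, v2] = T'[v1, v2, u].
permuted = expanded.transpose(0, 2, 1)

# Reduce: aggregate out the last variable. Sum realizes the summation over u
# in Equation B.5; max would instead realize an existential quantifier.
reduced = expanded.sum(axis=-1)
```

Since every slice of `expanded` over `u` is a copy of `T`, `reduced` equals `n * T`, which makes the linear cost of Expand/Reduce in the input size easy to see.
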
To complete the proof, we need to find a reduction from NLMs of maximum arity $B + 1$ to $B$-ary HO-GNNs. The key observation is that the features of $\left( {B + 1}\right)$-tuples in NLMs can only be expanded from sub-tuples, and the expansion and reduction involving $\left( {B + 1}\right)$-tuples can be simulated by the message passing process.

Lemma B.3. The $\left( {B + 1}\right)$-tuple feature ${T}_{i, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B + 1}}\right\rbrack$ can be computed from the following tuples:

$$
\left( {{T}_{i, B}\left\lbrack {{v}_{2},{v}_{3},\cdots ,{v}_{B + 1}}\right\rbrack ,{T}_{i, B}\left\lbrack {{v}_{1},{v}_{3},\cdots ,{v}_{B + 1}}\right\rbrack ,\cdots ,{T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack }\right) .
$$

Proof. Lemma B.3 holds because $\left( {B + 1}\right)$-dimensional representations can either be computed from themselves at the previous iteration, or expanded from $B$-dimensional representations. Since the representations at all previous iterations $j < i$ can be contained in ${T}_{i, B}$, it is sufficient to compute ${T}_{i, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B + 1}}\right\rbrack$ from all its $B$-ary sub-tuples. We now construct the HO-GNN for a given NLM to show the existence of the reduction.

Lemma B.4. An NLM of maximum arity $B + 1$ and depth $D$ can be realized by a $B$-ary HO-GNN with no more than $D$ iterations.

Proof. We can realize the Expand and Reduce operations with only the $B$-dimensional features using the broadcast message passing scheme. Note that Expand and Reduce between $B$-dimensional and $\left( {B + 1}\right)$-dimensional features in the NLM is a special case in which Lemma B.3 is applied.

Let us start with the Expand and Reduce operations between features of dimension $B$ or lower. For each $b$-dimensional feature in the NLM, we keep ${n}^{\underline{b}}\,{n}^{B - b}$${}^{\ddagger}$ copies of it, stored in the representation of every $B$-tuple that has a sub-tuple${}^{§}$ that is a permutation of the $b$-tuple. That is, for each $B$-tuple in the $B$-ary HO-GNN, for each of its sub-tuples of length $b$, we store $b!$ representations corresponding to every permutation of the $b$-tuple in the NLM. Keeping representations for all sub-tuple permutations makes it possible to realize the Permute operation. The Expand operation is thereby realized already, as all features of dimension lower than $B$ are naturally expanded to dimension $B$ by filling in all possible combinations of the remaining elements. Finally, the Reduce operation can be realized using broadcast message passing on a certain position of the tuple.

Now let us move to the special case: the Expand and Reduce operations between features of dimensions $B$ and $B + 1$. Lemma B.3 suggests how the $\left( {B + 1}\right)$-dimensional features are stored in $B$-dimensional representations in GNNs, and we now show how the Reduce can be realized by message passing.

We first bring Lemma B.3 into the HO-GNN message passing, where ${\operatorname{Received}}_{i}\left\lbrack \mathbf{v}\right\rbrack$ equals

$$
\sum \left( {{\mathrm{{NN}}}_{1}\left( {{T}_{i - 1, B}\left\lbrack {{v}_{2},{v}_{3},\cdots ,{v}_{B}, u}\right\rbrack ,{T}_{i - 1, B}\left\lbrack {{v}_{1},{v}_{3},\cdots ,{v}_{B}, u}\right\rbrack ,\cdots ,{T}_{i - 1, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack }\right) }\right)
$$

Note that the last term ${T}_{i - 1, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack$ is contained in ${H}_{i - 1}\left( \mathbf{v}\right)$ in Equation B.5, and the other terms are contained in ${H}_{i - 1}\left( {\mathbf{v}}^{\prime }\right)$ for ${\mathbf{v}}^{\prime } \in$ neighbors$\left( {\mathbf{v}, u}\right)$. Hence, Equation B.5 is sufficient to simulate the Reduce operation.

Theorem B.5. $B$-ary HO-GNNs are equally expressive as NLMs with maximum arity $B + 1$.

Proof. This follows directly by combining Lemma B.2 and Lemma B.4.

### B.4 Expressiveness of hypergraph convolution and attention

There exist other variants of hypergraph neural networks. In particular, hypergraph convolution [26-28], attention [29], and message passing [30] focus on updating node features instead of tuple features through hyperedges. These approaches can be viewed as instances of hypergraph neural networks, and they have smaller time complexity because they do not model all high-arity tuples. However, they are less expressive than the standard hypergraph neural networks with equal max arity.

These approaches can be formulated as two steps at each iteration. At the first step, each hyperedge is updated by the features of the nodes it connects:

$$
{h}_{i, e} = {\mathrm{{AGG}}}_{v \in e}{f}_{i - 1, v} \tag{B.6}
$$

At the second step, each node is updated by the features of the hyperedges connecting it:

$$
{f}_{i, v} = {\mathrm{{AGG}}}_{e \ni v}{h}_{i, e} \tag{B.7}
$$

where ${f}_{i, v}$ is the feature of node $v$ at iteration $i$, and ${h}_{i, e}$ is the aggregated message passed through hyperedge $e$ at iteration $i$.

It is not hard to see that Equation B.6 can be realized by $B$ iterations of NLM layers with Expand operations, where $B$ is the max arity of hyperedges. This can be done by expanding each node feature to every high-arity feature containing the node, and aggregating them at the tuple corresponding to each hyperedge. Then, Equation B.7 can also be realized by $B$ iterations of NLM layers with Reduce operations, as the tuple feature is finally reduced to each single node contained in the tuple.

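The two-step scheme of Equations B.6 and B.7 can be sketched directly. The sketch below uses sum aggregation and omits the learned transforms; the function name and representation are ours, not from any library:

```python
# Sketch of the node -> hyperedge -> node update (Equations B.6/B.7),
# with AGG = sum and no learned per-step transformation.
def hypergraph_round(node_feats, hyperedges):
    # Step 1 (B.6): each hyperedge aggregates the features of its nodes.
    edge_feats = [sum(node_feats[v] for v in e) for e in hyperedges]
    # Step 2 (B.7): each node aggregates the features of incident hyperedges.
    return [
        sum(h for h, e in zip(edge_feats, hyperedges) if v in e)
        for v in range(len(node_feats))
    ]

# Two triangles (0,1,2) and (3,4,5) as 3-ary hyperedges, unit node features:
feats = hypergraph_round([1.0] * 6, [(0, 1, 2), (3, 4, 5)])  # all nodes get 3.0
```

Note that after the round every node carries the same feature, which foreshadows the symmetry counterexample discussed below in this subsection.
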
---

${}^{ \ddagger }{n}^{\underline{k}} = n \times \left( {n - 1}\right) \times \cdots \times \left( {n - k + 1}\right) .$

${}^{§}$ The sub-tuple does not have to be consecutive; it can be any subset of the tuple that preserves the element order.

---

This approach has lower complexity than the GNNs we study applied to hyperedges, because it only requires communication between nodes and the hyperedges connecting to them, which takes $O\left( {\left| V\right| \cdot \left| E\right| }\right)$ time at each iteration. In comparison, NLMs take $O\left( {\left| V\right| }^{B}\right)$ time because NLMs keep features of every tuple of max arity $B$, and allow communication from tuples to tuples instead of between tuples and single nodes. We provide below an example that this approach cannot solve while NLMs can.

Consider a graph with 6 nodes and 6 edges forming two triangles (1, 2, 3) and (4, 5, 6). Because of the symmetry, the representations of all nodes remain identical throughout the hypergraph message passing rounds. Hence, it is impossible for these models to conclude that (1, 2, 3) is a triangle while (4, 2, 3) is not based only on the node representations, because these representations are identical. In contrast, NLMs with max arity 3 can solve this (as the standard triangle detection problem in Table 1).

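The symmetry argument can be checked with a naive color refinement (1-WL style) pass on the two-triangle graph; the sketch and its names are ours:

```python
# Naive color refinement: repeatedly relabel each node by (own color,
# multiset of neighbor colors). On two disjoint triangles every node keeps
# the same color forever, so node-level message passing cannot distinguish
# the tuple (1,2,3) from (4,2,3).
def refine(adj, rounds=3):
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: ids[sig[v]] for v in adj}
    return colors

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2], 4: [5, 6], 5: [4, 6], 6: [4, 5]}
colors = refine(adj)   # all six nodes end with the same color
```
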
### B.5 The Time and Space Complexity of NLMs

Handling high-arity features and using deeper models usually increase the computational cost in terms of time and space. As an instance that uses the architecture of RelNN, an NLM with depth $D$ and max arity $B$ takes $O\left( {D{n}^{B}}\right)$ time when applied to graphs of size $n$. This is because both the Expand and Reduce operations have time complexity linear in the input size (which is $O\left( {n}^{B}\right)$ at each iteration). If we need to record the computation history (which is typically the case when training the network with back-propagation), the space complexity is the same as the time complexity.

GNNs applied to $(B - 1)$-ary hyperedges with depth $D$ are equally expressive as RelNNs with depth $O\left( D\right)$ and max arity $B$. Although only up-to-$(B - 1)$-ary features are kept in their architecture, the broadcast message passing scheme scales up the complexity by a factor of $O\left( n\right)$, so they also have time and space complexity $O\left( {D{n}^{B}}\right)$. Here the width of the feature tensors $W$ is treated as a constant.

## C Arity and Depth Hierarchy: Proofs and Analysis

### C.1 Proof of Theorem 3.1: Arity Hierarchy.

[9] have connected high-dimensional GNNs with high-dimensional WL tests. Specifically, they showed that $B$-ary HO-GNNs are equally expressive as the $B$-dimensional WL test on the graph isomorphism test problem. In Theorem B.5 we proved that $B$-ary HO-GNNs are equivalent to NLMs of maximum arity $B + 1$ in terms of expressiveness. Hence, an NLM of maximum arity $B + 1$ can distinguish two non-isomorphic graphs if and only if the $B$-dimensional WL test can distinguish them.

However, Cai et al. [11] provided a construction that generates, for every $B$, a pair of non-isomorphic graphs that cannot be distinguished by the $(B - 1)$-dimensional WL test but can be distinguished by the $B$-dimensional WL test. Let ${G}_{B}^{1}$ and ${G}_{B}^{2}$ be such a pair of graphs.

Since NLMs of maximum arity $B + 1$ are equally expressive as $B$-ary HO-GNNs, there must be such an NLM that classifies ${G}_{B}^{1}$ and ${G}_{B}^{2}$ into different labels. However, such an NLM cannot be realized by any NLM of maximum arity $B$, because the latter are proven to have identical outputs on ${G}_{B}^{1}$ and ${G}_{B}^{2}$.

In the other direction, NLMs of maximum arity $B + 1$ can directly realize NLMs of maximum arity $B$, which completes the proof.

### C.2 Upper Depth Bound for Unbounded-Precision NLM.

The idea for proving an upper bound on depth is to connect NLMs to the WL test and to use the $O\left( {n}^{B}\right)$ upper bound on the number of iterations of the $B$-dimensional test [31]; FOC formulas are the key connection.

For any fixed $n$, the $B$-dimensional WL test divides all graphs of size $n$, ${\mathcal{G}}_{ = n}$, into a set of equivalence classes $\left\{ {{\mathcal{C}}_{1},{\mathcal{C}}_{2},\cdots ,{\mathcal{C}}_{m}}\right\}$, where two graphs belong to the same class if they cannot be distinguished by the WL test. We have shown that NLMs of maximum arity $\left( {B + 1}\right)$ must have the same output for all graphs in the same equivalence class. Thus, any NLM of maximum arity $B + 1$ can be viewed as a labeling over ${\mathcal{C}}_{1},\cdots ,{\mathcal{C}}_{m}$.

As stated by Cai et al. [11], the $B$-dimensional WL test is as powerful as ${\mathrm{{FOC}}}_{B + 1}$ in differentiating graphs. Combined with the $O\left( {n}^{B}\right)$ upper bound on WL test iterations, for each ${\mathcal{C}}_{i}$ there must be an ${\mathrm{{FOC}}}_{B + 1}$ formula of quantifier depth $O\left( {n}^{B}\right)$ that exactly recognizes ${\mathcal{C}}_{i}$ over ${\mathcal{G}}_{ = n}$.

Finally, with unbounded precision, for any $f\left( n\right)$, NLMs of maximum arity $B + 1$ and depth $f\left( n\right)$ can compute all ${\mathrm{{FOC}}}_{B + 1}$ formulas with quantifier depth $f\left( n\right)$. Note that there are finitely many such formulas because the thresholds of the counting quantifiers are bounded by $n$.

For any graph in some class ${\mathcal{C}}_{i}$, the class can be determined by evaluating these FOC formulas, and then the label is determined. Therefore, any NLM of maximum arity $B + 1$ can be realized by an NLM of maximum arity $B + 1$ and depth $O\left( {n}^{B}\right)$.

### C.3 Graph Problems

<table><tr><td>$B = 4$</td><td>4-Clique Detection NLM $\left\lbrack {O\left( 1\right) ,4}\right\rbrack$</td><td>4-Clique Count NLM $\left\lbrack {O\left( 1\right) ,4}\right\rbrack$</td></tr><tr><td>$B = 3$</td><td>Triangle Detection NLM $\left\lbrack {O\left( 1\right) ,3}\right\rbrack$ Bipartiteness NLM[O( $\log n$ ), 3]* All-Pair Connectivity NLM[O( $\log n$ ),3] ${}^{ \star }$ All-Pair Connectivity- $k$ NLM ${\left\lbrack O\left( \log k\right) ,3\right\rbrack }^{ \star }$</td><td>All-Pair Distance NLM ${\left\lbrack O\left( \log n\right) ,3\right\rbrack }^{ \star }$</td></tr><tr><td>$B = 2$</td><td>${\mathrm{{FOC}}}_{2}$ Realization NLM $\left\lbrack {\cdot ,2}\right\rbrack \left\lbrack {12}\right\rbrack$ 3/4-Link Detection NLM[O(1), 2] S-T Connectivity NLM $\left\lbrack {O\left( n\right) ,2}\right\rbrack$ S-T Connectivity- $k$ NLM $\left\lbrack {O\left( k\right) ,2}\right\rbrack$</td><td>S-T Distance NLM $\left\lbrack {O\left( n\right) ,2}\right\rbrack$ Max Degree NLM $\left\lbrack {O\left( 1\right) ,2}\right\rbrack$ Max Flow NLM ${\left\lbrack O\left( {n}^{3}\right) ,2\right\rbrack }^{ \star }$</td></tr><tr><td>$B = 1$</td><td>Node Color Majority: NLM[O(1), 1]</td><td>Count Red Nodes: NLM $\left\lbrack {O\left( 1\right) ,1}\right\rbrack$</td></tr><tr><td/><td>Classification Tasks</td><td>Regression Tasks</td></tr></table>

Table 1: The minimum depth and arity of NLMs for solving graph classification and regression tasks. The * symbol indicates that these are conjectured lower bounds.

We list a number of examples of graph classification and regression tasks, providing their definitions and the currently best known NLMs for learning them from data. For some of the problems, we also show why they cannot be solved by simpler models, or indicate this as an open problem.

Node Color Majority. Each node is assigned a color $c \in \mathcal{C}$, where $\mathcal{C}$ is a finite set of colors. The model needs to predict which color the most nodes have.

Using a single layer with sum aggregation, the model can count the number of nodes of color $c$ for each $c \in \mathcal{C}$ in its global representation.

Count Red Nodes. Each node is assigned a color of red or blue. The model needs to count the number of red nodes.

Similarly, using a single layer with sum aggregation, the model can count the number of red nodes in its global representation.

3-Link Detection. Given an unweighted, undirected graph, the model needs to detect whether there is a triple of nodes (a, b, c) such that $a \neq c$ and (a, b) and (b, c) are edges.

This is equivalent to checking whether there exists a node with degree at least 2. We can use a Reduce operation with sum aggregation to compute the degree of each node, and then use a Reduce operation with max aggregation to check whether the maximum node degree is greater than or equal to 2.

Note that this cannot be done with 1 layer, because the edge information is necessary for the problem, and it requires at least 2 layers to be passed to the global representation.

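The two-layer solution above can be sketched in plain Python (our own helper in place of learned layers):

```python
# Sketch of the two-layer 3-link detector: one sum-Reduce for degrees,
# one max-Reduce for the global answer.
def has_3_link(n, edges):
    deg = [0] * n
    for a, b in edges:       # layer 1: Reduce with sum -> node degrees
        deg[a] += 1
        deg[b] += 1
    return max(deg) >= 2     # layer 2: Reduce with max -> global bit
```
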
4-Link Detection. Given an unweighted undirected graph, the model needs to detect whether there is a 4-tuple of nodes (a, b, c, d) such that $a \neq c, b \neq d$ and $\left( {a, b}\right) ,\left( {b, c}\right) ,\left( {c, d}\right)$ are edges (note that a triangle is also a 4-link).

This problem is equivalent to checking whether there is an edge between two nodes with degrees $\geq 2$. We can first reduce the edge information to compute the degree of each node, and then expand it back to 2-dimensional representations, so that we can check for each edge whether the degrees of its endpoints are $\geq 2$. The results are then reduced to the global representation with an existential quantifier (realized by max aggregation) in 2 layers.

Triangle Detection. Given an unweighted undirected graph, the model is asked to determine whether there is a triangle in the graph, i.e., a tuple (a, b, c) such that $\left( {a, b}\right) ,\left( {b, c}\right) ,\left( {c, a}\right)$ are all edges.

This problem can be solved by NLM [4, 3]: we first expand the edges to 3-dimensional representations and determine for each 3-tuple whether it forms a triangle. The results of the 3-tuples require 3 layers to be passed to the global representation.

We can prove that Triangle Detection indeed requires breadth at least 3. Let $k$-regular graphs be graphs in which each node has degree $k$. Consider two $k$-regular graphs, both with $n$ nodes, such that exactly one of them contains a triangle${}^{1}$. NLMs of breadth 2 have been proven to be no stronger than the WL test at distinguishing graphs, and thus cannot distinguish these two graphs (the WL test cannot distinguish any two $k$-regular graphs of equal size).

4-Clique Detection and Counting. Given an undirected graph, check the existence of, or count the number of, tuples (a, b, c, d) such that there is an edge between every pair of nodes in the tuple.

This problem can be easily solved by an NLM with breadth 4 that first expands the edge information to 4-dimensional representations, and determines for each tuple whether it is a 4-clique. The information of all 4-tuples is then reduced 4 times to the global representation (sum aggregation can be used for counting).

Though we did not find an explicit counter-example construction for detecting 4-cliques with NLMs of breadth 3, we conjecture that this problem cannot be solved by NLMs of breadth 3 or lower.

Connectivity. The connectivity problems are defined on unweighted undirected graphs. The S-T connectivity problem provides two nodes $S$ and $T$ (labeled with specific colors), and the model needs to predict whether they are connected by some path. The all-pair connectivity problem requires the model to answer this for every pair of nodes. Connectivity-$k$ problems have the additional requirement that the distance between the pair of nodes cannot exceed $k$.

S-T connectivity-$k$ can be solved by an NLM of breadth 2 with $k$ iterations. Assume $S$ is colored with color $c$; at every iteration, every node with color $c$ spreads the color to its neighbors. After $k$ iterations, it is sufficient to check whether $T$ has color $c$.

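The color-spreading argument can be sketched as follows (our own minimal version, with one loop round per NLM iteration):

```python
# Sketch of S-T connectivity-k by color spreading: S starts with color c,
# and each round every colored node passes the color to its neighbors.
def st_connectivity_k(n, edges, s, t, k):
    colored = {s}
    for _ in range(k):
        colored |= {b for a, b in edges if a in colored}
        colored |= {a for a, b in edges if b in colored}
    return t in colored   # does T carry color c after k rounds?
```
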
With NLMs of breadth 3, we can use $O\left( {\log k}\right)$ matrix multiplications to solve connectivity-$k$ between every pair of nodes, since matrix multiplication can naturally be realized by NLMs of breadth 3 with two layers. All-pair connectivity problems can thus be solved with $O\left( {\log k}\right)$ layers.

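The $O\left( {\log k}\right)$ bound comes from repeated squaring of the boolean reachability matrix; each squaring doubles the path length covered. A minimal sketch (our names, integer matmul thresholded back to boolean):

```python
import numpy as np

# All-pair connectivity-k via O(log k) squarings of the reachability matrix.
def all_pair_connectivity_k(adj, k):
    # reach[i, j] == True iff there is a path of length <= steps from i to j
    reach = adj.astype(bool) | np.eye(len(adj), dtype=bool)
    steps = 1
    while steps < k:
        reach = (reach.astype(int) @ reach.astype(int)) > 0  # doubles steps
        steps *= 2
    return reach

adj = np.zeros((4, 4), dtype=bool)
for a, b in [(0, 1), (1, 2), (2, 3)]:
    adj[a, b] = adj[b, a] = True
reach = all_pair_connectivity_k(adj, 4)   # nodes 0 and 3 are within distance 4
```
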
Theorem C.1 (S-T connectivity-$k$ with NLMs). S-T connectivity-$k$ cannot be solved by an NLM of maximum arity 2 within $o\left( k\right)$ iterations.

Proof. We construct two graphs, each with ${2k}$ nodes ${u}_{1},\cdots ,{u}_{k},{v}_{1},\cdots ,{v}_{k}$. In both graphs, there are edges $\left( {{u}_{i},{u}_{i + 1}}\right)$ and $\left( {{v}_{i},{v}_{i + 1}}\right)$ for $1 \leq i \leq k - 1$, i.e., there are two paths of length $k$. We then set $S = {u}_{1}, T = {u}_{k}$ in the first graph and $S = {u}_{1}, T = {v}_{k}$ in the second.

We analyze GNNs, since NLMs are proven to be equivalent to them up to scaling the depth by a constant factor. Now consider the node refinement process where each node $x$ is refined by the multiset of labels of $x$'s neighbors and the multiset of labels of $x$'s non-neighbors.

Let ${C}_{j}^{\left( i\right) }\left( x\right)$ be the label of $x$ in graph $j$ after $i$ iterations. At the beginning, WLOG, we have

$$
{C}_{1}^{\left( 0\right) }\left( {u}_{1}\right) = 1,\;{C}_{1}^{\left( 0\right) }\left( {u}_{k}\right) = 2,\;{C}_{2}^{\left( 0\right) }\left( {u}_{1}\right) = 1,\;{C}_{2}^{\left( 0\right) }\left( {v}_{k}\right) = 2
$$

and all other nodes are labeled 0.

Then we can prove by induction that after $i \leq \frac{k}{2} - 1$ iterations, for $1 \leq t \leq i + 1$ we have

$$
{C}_{1}^{\left( i\right) }\left( {u}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {u}_{t}\right) ,\;{C}_{1}^{\left( i\right) }\left( {v}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {v}_{t}\right)
$$

$$
{C}_{1}^{\left( i\right) }\left( {u}_{k - t + 1}\right) = {C}_{2}^{\left( i\right) }\left( {v}_{k - t + 1}\right) ,\;{C}_{1}^{\left( i\right) }\left( {v}_{k - t + 1}\right) = {C}_{2}^{\left( i\right) }\left( {u}_{k - t + 1}\right)
$$

---

${}^{1}$ Such constructions are common. One example is $k = 2, n = 6$, where the graph may consist of two separate triangles or one hexagon.

---

and for $i + 2 \leq t \leq k - i - 1$ we have

$$
{C}_{1}^{\left( i\right) }\left( {u}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {u}_{t}\right) ,\;{C}_{1}^{\left( i\right) }\left( {v}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {v}_{t}\right)
$$

This holds because, before $\frac{k}{2}$ iterations have run, the multiset of all node labels is identical for the two graphs (call it ${S}^{\left( i\right) }$). Hence each node $x$ is actually refined by its neighbors and ${S}^{\left( i\right) }$, which is the same for all nodes. Therefore, until $\frac{k}{2}$ iterations have run and the messages from $S$ and $T$ finally meet in the first graph, the GNN cannot distinguish the two graphs, and thus cannot solve connectivity with distance $k - 1$.

Max Degree. The max degree problem gives a graph and asks the model to output the maximum degree of its nodes.

As mentioned for 3-link detection, one layer for computing the degree of each node and another layer for taking the max over nodes is sufficient.

Max Flow. The Max Flow problem gives a directed graph with capacities on edges, and indicates two nodes $S$ and $T$. The model is then asked to compute the value of the max flow from $S$ to $T$.

Notice that the Breadth-First Search (BFS) component in Dinic's algorithm [32] can be implemented on NLMs, as it does not require node identities (all newly visited nodes can augment to their non-visited neighbors in parallel). Since the BFS runs for $O\left( n\right)$ iterations, and Dinic's algorithm runs BFS $O\left( {n}^{2}\right)$ times, max flow can be solved by NLMs within $O\left( {n}^{3}\right)$ iterations.

Distance. Given a graph with weighted edges, compute the length of the shortest path between a specified node pair (S-T Distance) or between all node pairs (All-pair Distance).

Similar to the Connectivity problems, but the Distance problems additionally record the minimum distance from $S$ (for S-T) or between every node pair (for All-pair), which can be updated using the min operator (using min-plus matrix multiplication for the All-pair case).

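Min-plus (tropical) matrix multiplication, the primitive referenced above, can be sketched with broadcasting; the helper name is ours:

```python
import numpy as np

# Min-plus "multiplication": C[i, j] = min over m of (A[i, m] + B[m, j]).
# Squaring the distance matrix O(log n) times yields all-pair shortest paths.
def min_plus(A, B):
    return (A[:, :, None] + B[None, :, :]).min(axis=1)

INF = float("inf")
D = np.array([[0, 1, INF],
              [1, 0, 2],
              [INF, 2, 0]])
D2 = min_plus(D, D)   # shortest paths using at most 2 edges
```
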
## D Experiments

We now study how our theoretical results on model expressiveness and learning apply to relational neural networks trained with gradient descent on practically meaningful problems. We begin by describing two synthetic benchmarks: graph substructure detection and relational reasoning.

The graph substructure detection dataset contains several tasks of predicting whether the input graph contains a sub-graph with a specific structure. The tasks are: 3-link (length-3 path), 4-link, triangle, and 4-clique. These are important graph properties with many potential applications.

The relational reasoning dataset is composed of two family-relationship prediction tasks and two connectivity-prediction tasks, all of which are binary edge classification tasks. In the family-relationship prediction tasks, the input contains the mother and father relationships, and the task is to predict the grandparent and uncle relationships between all pairs of entities. In the connectivity-prediction tasks, the input is the set of edges of an undirected graph, and the task is to predict, for all pairs of nodes, whether they are connected by a path of length $\leq 4$ (connectivity-4) and whether they are connected by a path of arbitrary length (connectivity). The data generation for all datasets is included in Appendix D.

### D.1 Experiment Setup

For all problems, we use 800 training samples, 100 validation samples, and 300 test samples for each different $n$ on which we test the models.

We now provide the details of how we synthesize the data. For most of the problems, we generate the graph by randomly selecting from all potential edges, i.e., the Erdős-Rényi model. We sample the number of edges around $n$, ${2n}$, $n\log n$, and ${n}^{2}/2$. For all problems, with ${50}\%$ probability the graph is first divided into 2, 3, 4, or 5 parts with an equal number of components, where we use the first generated component to fill in the edges of the remaining components; some random edges are added afterwards. This makes the data contain more isomorphic sub-graphs, which we found challenging empirically.

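A hedged sketch of this generation procedure (the parameters, branch probabilities, and helper names here are our guesses for illustration, not the authors' exact script):

```python
import random

# Sketch: Erdos-Renyi sampling, with probability 0.5 replaced by several
# copies of one component plus a few extra random edges.
def generate_graph(n, m, seed=0):
    rng = random.Random(seed)
    all_pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
    if rng.random() < 0.5:
        parts = rng.choice([2, 3, 4, 5])
        size = n // parts
        part_pairs = [(a, b) for a in range(size) for b in range(a + 1, size)]
        component = rng.sample(part_pairs, min(m // parts, len(part_pairs)))
        # replicate the first component's edge pattern into every part
        edges = {(a + i * size, b + i * size)
                 for i in range(parts) for a, b in component}
        edges |= set(rng.sample(all_pairs, 3))  # a few extra random edges
        return sorted(edges)
    return sorted(rng.sample(all_pairs, m))     # plain Erdos-Renyi graph

g = generate_graph(12, 12)
```
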
Substructure Detection. To generate a graph that does not contain a certain substructure, we randomly add edges when reaching a maximal graph not containing the substructure or reaching the
|
| 522 |
+
|
| 523 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Agg.</td><td colspan="2">3-link</td><td colspan="2">4-link</td><td colspan="2">triangle</td><td colspan="2">4-clique</td></tr><tr><td>$n = {10}$</td><td>$n = {30}$</td><td>$n = {10}$</td><td>$n = {30}$</td><td>$n = {10}$</td><td>$n = {30}$</td><td>$n = {10}$</td><td>$n = {30}$</td></tr><tr><td rowspan="2">1-ary GNN</td><td>Max</td><td>${70.0}_{\pm 0.0}$</td><td>${82.7}_{\pm 0.0}$</td><td>${92.0}_{\pm 0.0}$</td><td>${91.7}_{\pm 0.0}$</td><td>${73.7}_{\pm 3.2}$</td><td>${50.2}_{\pm 1.8}$</td><td>${55.3}_{\pm 4.0}$</td><td>${46.2}_{\pm 1.3}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${89.4}_{\pm 0.4}$</td><td>${100.0}_{\pm 0.0}$</td><td>${86.1}_{\pm 1.2}$</td><td>${77.7}_{\pm 8.5}$</td><td>${48.6}_{\pm 1.6}$</td><td>${53.7}_{\pm 0.6}$</td><td>${55.2}_{\pm 0.8}$</td></tr><tr><td rowspan="2">2-ary NLM</td><td>Max</td><td>${65.3}_{\pm 0.6}$</td><td>${54.0}_{\pm 0.6}$</td><td>${93.0}_{\pm 0.0}$</td><td>${95.7}_{\pm 0.0}$</td><td>${51.0}_{\pm 1.7}$</td><td>${49.2}_{\pm 0.4}$</td><td>${55.0}_{\pm 0.0}$</td><td>${45.7}_{\pm 0.0}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${88.3}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${67.4}_{\pm 16.4}$</td><td>${82.0}_{\pm 2.6}$</td><td>${48.3}_{\pm 0.0}$</td><td>${53.0}_{\pm 0.0}$</td><td>${54.4}_{\pm 1.5}$</td></tr><tr><td rowspan="2">2-ary GNN</td><td>Max</td><td>${78.7}_{\pm 0.6}$</td><td>${76.0}_{\pm 17.3}$</td><td>${97.7}_{\pm 4.0}$</td><td>${98.6}_{\pm 2.5}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${55.0}_{\pm 0.0}$</td><td>${45.7}_{\pm 0.0}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${51.2}_{\pm 7.9}$</td><td>${100.0}_{\pm 0.0}$</td><td>${45.7}_{\pm 7.6}$</td><td>${100.0}_{\pm 0.0}$</td><td>${49.2}_{\pm 1.0}$</td><td>${61.0}_{\pm 5.6}$</td><td>${54.3}_{\pm 0.0}$</td></tr><tr><td rowspan="2">3-ary NLM</td><td>Max</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${59.0}_{\pm 6.9}$</td><td>${45.9}_{\pm 0.4}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${87.6}_{\pm 11.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${65.4}_{\pm 14.3}$</td><td>${100.0}_{\pm 0.0}$</td><td>${80.6}_{\pm 8.8}$</td><td>${73.7}_{\pm 13.8}$</td><td>${53.3}_{\pm 8.8}$</td></tr><tr><td rowspan="2">3-ary GNN</td><td>Max</td><td>${79.0}_{\pm 0.0}$</td><td>${86.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${84.0}_{\pm 0.0}$</td><td>${93.3}_{\pm 0.0}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${84.1}_{\pm 18.6}$</td><td>${100.0}_{\pm 0.0}$</td><td>${61.1}_{\pm 15.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${95.1}_{\pm 7.3}$</td><td>${80.5}_{\pm 0.7}$</td><td>${66.2}_{\pm 19.6}$</td></tr><tr><td rowspan="2">4-ary NLM</td><td>Max</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${82.0}_{\pm 1.7}$</td><td>${93.1}_{\pm 0.2}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${59.1}_{\pm 5.3}$</td><td>${100.0}_{\pm 0.0}$</td><td>${67.7}_{\pm 24.1}$</td><td>${100.0}_{\pm 0.0}$</td><td>${82.1}_{\pm 12.8}$</td><td>${84.0}_{\pm 0.0}$</td><td>${67.0}_{\pm 18.9}$</td></tr></table>
|
| 524 |
+
|
| 525 |
+
Table 2: Overall accuracy on relational reasoning problems. All models are trained on $n = {10}$ and tested on both $n = {10}$ and $n = {30}$. The standard error of each value is computed based on three random seeds.
|
| 526 |
+
|
| 527 |
+
edge limit. To generate a graph that does contain a certain substructure, we first generate one that does not, and then randomly replace present edges with missing ones until we detect the substructure in the graph. This aims to change the label from "No" to "Yes" while minimizing the change to the overall graph properties; we found that data generated using edge replacement is much more difficult for neural networks than graphs randomly generated from scratch.
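The edge-replacement procedure can be sketched as follows. This is our own illustrative Python, not the paper's code: the function names and the use of triangle detection as the example substructure are our assumptions.

```python
import itertools
import random

def has_triangle(n, edges):
    # Substructure detector: any 3 mutually connected nodes.
    adj = set(edges) | {(b, a) for a, b in edges}
    return any((a, b) in adj and (b, c) in adj and (a, c) in adj
               for a, b, c in itertools.combinations(range(n), 3))

def random_graph_without(n, m, detect, rng):
    # Rejection-sample an m-edge graph that does NOT contain the substructure.
    all_pairs = list(itertools.combinations(range(n), 2))
    while True:
        edges = set(rng.sample(all_pairs, m))
        if not detect(n, edges):
            return edges

def positive_by_edge_swap(n, m, detect, rng):
    # Start from a negative example, then repeatedly swap one present edge
    # with one missing edge until the substructure appears.  The edge count
    # (and hence most global graph statistics) stays unchanged.
    edges = random_graph_without(n, m, detect, rng)
    all_pairs = set(itertools.combinations(range(n), 2))
    while not detect(n, edges):
        edges.remove(rng.choice(sorted(edges)))
        edges.add(rng.choice(sorted(all_pairs - edges)))
    return edges
```

Because a swap preserves the number of edges, positive and negative examples share the same edge density, which is what makes this data harder than graphs generated from scratch.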
|
| 528 |
+
|
| 529 |
+
Family Tree. We generate the family trees using an algorithm modified from [5]. We add people to the family one by one. When a person is added, with probability $p$ we try to find a single woman and a single man, get them married, and let the new person be their child; otherwise the new person is introduced as a non-related person. Every new person is marked as single, and their gender is set with a coin flip.
|
| 530 |
+
|
| 531 |
+
We adjust $p$ based on the ratio of the single population: $p = {0.7}$ when more than ${40}\%$ of the population is single, $p = {0.3}$ when less than ${20}\%$ is single, and $p = {0.5}$ otherwise.
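A minimal sketch of this generator, combining both paragraphs above. The record layout (dicts with gender/single/parents fields) and all names are our assumptions:

```python
import random

def sample_family_tree(n_people, rng):
    people = []

    def marriage_p():
        # Adaptive probability p from the current ratio of single people:
        # 0.7 above 40% singles, 0.3 below 20%, 0.5 otherwise.
        if not people:
            return 0.5
        ratio = sum(p["single"] for p in people) / len(people)
        if ratio > 0.40:
            return 0.7
        if ratio < 0.20:
            return 0.3
        return 0.5

    for _ in range(n_people):
        person = {"gender": rng.choice("FM"), "single": True, "parents": None}
        if rng.random() < marriage_p():
            women = [p for p in people if p["single"] and p["gender"] == "F"]
            men = [p for p in people if p["single"] and p["gender"] == "M"]
            if women and men:
                w, m = rng.choice(women), rng.choice(men)
                w["single"] = m["single"] = False  # get them married
                person["parents"] = (w, m)         # new person is their child
        people.append(person)
    return people
```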
|
| 532 |
+
|
| 533 |
+
Connectivity. For connectivity problems, we use a generation method similar to that for substructure detection. We sample the query pairs so that the labels are balanced.
|
| 534 |
+
|
| 535 |
+
### D.2 Model Implementation Details
|
| 536 |
+
|
| 537 |
+
For all models, we use a hidden dimension of 128, except for the 3-ary HO-GNN and the 4-ary NLM, where we use a hidden dimension of 64.
|
| 538 |
+
|
| 539 |
+
All models have 4 layers, each with its own parameters, except for connectivity, where we use recurrent models that apply the second layer $k$ times, with $k$ sampled from the integers in $\left\lbrack {2\log n,3\log n}\right\rbrack$. These depths are provably sufficient for solving the problems (unless the model itself cannot solve them).
|
| 540 |
+
|
| 541 |
+
All models are trained for 100 epochs using the Adam optimizer with learning rate $3 \times {10}^{-4}$, decaying at epochs 50 and 80.
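The resulting step-decay schedule can be written as a small helper. The decay factor `gamma = 0.1` is our assumption; the text states only the base rate and the milestone epochs:

```python
def lr_at(epoch, base=3e-4, milestones=(50, 80), gamma=0.1):
    # Step decay: the rate is multiplied by gamma at each milestone epoch.
    # NOTE: gamma = 0.1 is a hypothetical choice, not stated in the text.
    lr = base
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```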
|
| 542 |
+
|
| 543 |
+
We varied the depth, the hidden dimension, and the activation function of the different models. We select a sufficient hidden dimension and depth for every model and problem (i.e., we stop when increasing the depth or hidden dimension does not increase the accuracy). We tried linear, ReLU, and Sigmoid activation functions; ReLU performed the best across all combinations of models and tasks.
|
| 544 |
+
|
| 545 |
+
### D.3 Results
|
| 546 |
+
|
| 547 |
+
Our main results on all datasets are shown in Table 2 and Table 3. We empirically compare relational neural networks with different maximum arities $B$, different model architectures (GNN and NLM), and different aggregation functions (max and sum). All models use sigmoidal activation for all MLPs. For each task on both datasets we train on a set of small graphs and test the trained model on both small and large graphs (e.g., $n = {10}$ and $n = {30}$). We summarize the findings below.
|
| 548 |
+
|
| 549 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Agg.</td><td colspan="2">grand parent</td><td colspan="2">uncle</td><td colspan="2">connectivity-4</td><td colspan="2">connectivity</td></tr><tr><td>$n = {20}$</td><td>$n = {80}$</td><td>$n = {20}$</td><td>$n = {80}$</td><td>$n = {10}$</td><td>$n = {80}$</td><td>$n = {10}$</td><td>$n = {80}$</td></tr><tr><td rowspan="2">1-ary GNN</td><td>Max</td><td>${84.0}_{\pm 0.3}$</td><td>${64.8}_{\pm 0.0}$</td><td>${93.6}_{\pm 0.3}$</td><td>${66.1}_{\pm 0.0}$</td><td>${72.6}_{\pm 3.6}$</td><td>${67.5}_{\pm 0.5}$</td><td>${85.6}_{\pm 0.3}$</td><td>${75.1}_{\pm 1.9}$</td></tr><tr><td>Sum</td><td>${84.7}_{\pm 0.1}$</td><td>${64.4}_{\pm 0.0}$</td><td>${94.3}_{\pm 0.2}$</td><td>${66.2}_{\pm 0.0}$</td><td>${79.6}_{\pm 0.1}$</td><td>${68.3}_{\pm 0.1}$</td><td>${87.1}_{\pm 0.3}$</td><td>${75.0}_{\pm 0.2}$</td></tr><tr><td rowspan="2">2-ary NLM</td><td>Max</td><td>${82.3}_{\pm 0.5}$</td><td>${65.6}_{\pm 0.1}$</td><td>${93.1}_{\pm 0.0}$</td><td>${66.6}_{\pm 0.0}$</td><td>${91.2}_{\pm 0.2}$</td><td>${51.0}_{\pm 0.6}$</td><td>${88.9}_{\pm 2.6}$</td><td>${67.1}_{\pm 4.8}$</td></tr><tr><td>Sum</td><td>${82.9}_{\pm 0.1}$</td><td>${64.6}_{\pm 0.1}$</td><td>${93.4}_{\pm 0.0}$</td><td>${66.7}_{\pm 0.2}$</td><td>${96.0}_{\pm 0.4}$</td><td>${68.3}_{\pm 0.5}$</td><td>${84.0}_{\pm 0.0}$</td><td>${71.9}_{\pm 0.0}$</td></tr><tr><td rowspan="2">2-ary GNN</td><td>Max</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${84.0}_{\pm 0.0}$</td><td>${71.9}_{\pm 0.0}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${35.7}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${33.9}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${51.3}_{\pm 5.3}$</td><td>${84.0}_{\pm 0.0}$</td><td>${71.9}_{\pm 0.0}$</td></tr><tr><td rowspan="2">3-ary NLM</td><td>Max</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.1}$</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${35.7}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${50.8}_{\pm 29.4}$</td><td>${100.0}_{\pm 0.0}$</td><td>${77.8}_{\pm 11.8}$</td><td>${100.0}_{\pm 0.0}$</td><td>${88.2}_{\pm 8.0}$</td></tr><tr><td rowspan="2">3-ary ${\mathrm{{NLM}}}_{\mathrm{{HE}}}$</td><td>Max</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>Sum</td><td>${100.0}_{\pm 0.0}$</td><td>${35.7}_{\pm 0.0}$</td><td>${100.0}_{\pm 0.0}$</td><td>${33.8}_{\pm 29.4}$</td><td>N/A</td><td>N/A</td><td>N/A</td><td>N/A</td></tr></table>
|
| 550 |
+
|
| 551 |
+
Table 3: Overall accuracy on relational reasoning problems. Models for family-relationship prediction are trained on $n = {20}$, while models for connectivity problems are trained on $n = {10}$. All models are tested on $n = {80}$. The standard error of each value is computed based on three random seeds. The 3-ary NLMs marked with "HE" take hyperedges as inputs, where each family is represented by a 3-ary hyperedge instead of two parent-child edges; the results are similar to those with binary edges.
|
| 552 |
+
|
| 553 |
+
Expressiveness. We have shown a theoretical equivalence in expressiveness between GNNs and NLMs applied to hypergraphs: a GNN applied to $B$-ary hyperedges is equivalent to a $\left( {B + 1}\right)$-ary NLM. Tables 2 and 3 further suggest their similar performance on tasks when trained with gradient descent.
|
| 554 |
+
|
| 555 |
+
Formally, triangle detection requires NLMs with at least $B = 3$ to solve. Accordingly, all NLMs with arity $B = 2$ fail on this task, while models with $B = 3$ perform well. Formally, 4-clique is realizable by NLMs with maximum arity $B = 4$, but we failed to reliably train models to reach perfect accuracy on this problem. The cause of this behavior is not yet clear.
|
| 556 |
+
|
| 557 |
+
Structural generalization. We discussed the structural generalization properties of NLMs in Section 4, in a learning setting based on fixed-precision networks and enumerative training. This setting can be approximated by training NLMs with max aggregation and sigmoidal activation on sufficient data.
|
| 558 |
+
|
| 559 |
+
We run a case study on the connectivity-4 problem to examine how generalization performance changes as the test graph size grows. Figure 2 shows how these models generalize as the graph size increases from 10 to 80. From the curves we can see that only models with sufficient expressiveness reach ${100}\%$ accuracy on graphs of the training size, and among them the models using max aggregation generalize to larger graphs with no performance drop. The 2-ary GNN and 3-ary NLM with max aggregation have both sufficient expressiveness and better generalization: they achieve ${100}\%$ accuracy on the original graph size and generalize perfectly to larger graphs.
|
| 560 |
+
|
| 561 |
+

|
| 562 |
+
|
| 563 |
+
Figure 2: How the performance of models drops when generalizing to larger graphs on the connectivity-4 problem (trained on graphs of size 10).
|
| 564 |
+
|
papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,107 @@
|
| 1 |
+
§ ON THE EXPRESSIVENESS AND GENERALIZATION OF HYPERGRAPH NEURAL NETWORKS
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Anonymous Affiliation
|
| 6 |
+
|
| 7 |
+
Anonymous Email
|
| 8 |
+
|
| 9 |
+
§ ABSTRACT
|
| 10 |
+
|
| 11 |
+
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions that they can realize. Our result is a hierarchy of problems they can solve, defined in terms of hyperparameters such as depth and edge arity. Next, we analyze the learning properties of these neural networks, focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by empirical results.
|
| 12 |
+
|
| 13 |
+
§ 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Reasoning over graph-structured data is an important task in many applications, including molecule analysis, social network modeling, and knowledge graph reasoning [1-3]. While we have seen great success of various relational neural networks, such as Graph Neural Networks [GNNs; 4] and Neural Logic Machines [NLM; 5], in a variety of applications [6-8], we do not yet have a full understanding of how different design parameters, such as the depth of the neural network, affect the expressiveness of these models, or how effectively these models generalize from limited data.
|
| 16 |
+
|
| 17 |
+
This paper analyzes the expressiveness and generalization of relational neural networks applied to hypergraphs, which are graphs with edges connecting more than two nodes. We formally show the "if and only if" conditions for their expressive power with respect to the edge arity: $k$-ary hypergraph neural networks are sufficient and necessary for realizing FOC-$k$, a fragment of first-order logic involving at most $k$ variables. This is a helpful result because we can now determine whether a specific hypergraph neural network can solve a problem by understanding what form of logic formula can represent the solution to this problem. Next, we formally describe the relationship between expressiveness and non-constant-depth networks. We state a conjecture about the "depth hierarchy," and connect a potential proof of this conjecture to the distributed computing literature. Our results highlight that even when the inputs and outputs of models have only unary and binary relations, allowing intermediate hyperedge representations increases the expressiveness.
|
| 18 |
+
|
| 19 |
+
Furthermore, we prove that, under certain realistic assumptions, it is possible to train a hypergraph neural network on a finite set of small graphs so that it generalizes to arbitrarily large graphs. This ability is the result of the weight-sharing nature of hypergraph neural networks. We hope our work can serve as a foundation for designing hypergraph neural networks: to solve a specific problem, what arity is needed? What depth is needed? Will the model generalize structurally (i.e., to larger graphs)? Our theoretical results on learning are further supported by experiments that empirically demonstrate the theorems.
|
| 20 |
+
|
| 21 |
+
§ 2 HYPERGRAPH REASONING PROBLEMS AND HYPERGRAPH NEURAL NETWORKS
|
| 22 |
+
|
| 23 |
+
A hypergraph representation $G$ is a tuple $\left( {V,X}\right)$, where $V$ is a set of entities (nodes), and $X$ is a set of hypergraph representation functions. Specifically, $X = \left\{ {{X}_{0},{X}_{1},{X}_{2},\cdots ,{X}_{k}}\right\}$, where ${X}_{j} : \left( {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right) \rightarrow \mathcal{S}$ is a function mapping every tuple of $j$ nodes to a value. We call $j$ the arity of the hyperedge and $k$ the maximum arity of input hyperedges. The range $\mathcal{S}$ can be any set of discrete labels that describe relation types, a scalar number (e.g., the length of an edge), or a vector. In general, we will use the arity-0 representation function ${X}_{0}\left( \varnothing \right) \rightarrow \mathcal{S}$ to represent any global properties of the graph as a whole.
|
| 24 |
+
|
| 25 |
+
A graph reasoning function $f$ is a mapping from a hypergraph representation $G = \left( {V,X}\right)$ to another hyperedge representation function $Y$ on $V$. As concrete examples, asking whether a graph is fully connected is a graph classification problem, where the output $Y = \left\{ {Y}_{0}\right\}$ and ${Y}_{0}\left( \varnothing \right) \rightarrow {\mathcal{S}}^{\prime } = \{ 0,1\}$ is a global label; finding the set of disconnected subgraphs of size $k$ is a $k$-ary hyperedge classification problem, where the output $Y = \left\{ {Y}_{k}\right\}$ is a label for each $k$-ary hyperedge.
|
| 26 |
+
|
| 27 |
+
There are two main motivations and constructions of a neural network applied to graph reasoning problems: message-passing-based and first-order-logic-inspired. Both approaches construct the computation graph layer by layer. The input to the entire neural network consists of the input features of nodes and hyperedges, while the output of the neural network is the per-node or per-edge prediction of desired properties, depending on the training task.
|
| 28 |
+
|
| 29 |
+
In a nutshell, within each layer, message-passing-based hypergraph neural networks, Higher-Order GNNs [9], perform message passing between each hyperedge and its neighbours. Specifically, the $j$-th neighbour set of a hyperedge $u = \left( {{x}_{1},{x}_{2},\cdots ,{x}_{i}}\right)$ of arity $i$ is ${N}_{j}\left( u\right) = \left\{ \left( {{x}_{1},{x}_{2},\cdots ,{x}_{j - 1},r,{x}_{j + 1},\cdots ,{x}_{i}}\right) \right\}$, where $r \in V$. Then, the set of all neighbours of hyperedge $u$ is the union of the ${N}_{j}$'s for $j = 1,2,\cdots ,i$.
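The neighbour-set definition can be illustrated with a short sketch (our own, with hypothetical names), where hyperedges are plain tuples of node indices:

```python
def neighbour_sets(u, V):
    # N_j(u): all tuples obtained by replacing the j-th slot of hyperedge u
    # with an arbitrary node r in V.
    return [[u[:j] + (r,) + u[j + 1:] for r in V] for j in range(len(u))]

def all_neighbours(u, V):
    # The neighbourhood of u is the union of the N_j's over j = 1..arity(u).
    return {t for nj in neighbour_sets(u, V) for t in nj}
```

For a binary edge `u = (0, 1)` over nodes `{0, 1, 2}`, the neighbourhood consists of all tuples differing from `u` in at most one slot.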
|
| 30 |
+
|
| 31 |
+
On the other hand, first-order-logic-inspired hypergraph neural networks consider building neural networks that can emulate first-order logic formulas. Neural Logic Machines [NLM; 5] are defined in terms of a set of input hyperedges; each hyperedge of arity $k$ is represented by a vector of (possibly real) values obtained by applying all of the $k$-ary predicates in the domain to the tuple of vertices it connects. Each layer in an NLM learns to apply a linear transformation with a nonlinear activation and quantification operators (analogous to the for-all $\forall$ and exists $\exists$ quantifiers in first-order logic) on these values. It is easy to prove, by construction, that given a sufficient number of layers and a sufficient maximum arity, NLMs can learn to realize any first-order logic formula. For readers who are not familiar with HO-GNNs [9] and NLMs [5], we include a mathematical summary of their computation graphs in Appendix B. Our analysis starts from the following theorem.
|
| 32 |
+
|
| 33 |
+
Theorem 2.1. HO-GNNs [9] are equivalent to NLMs in terms of expressiveness. Specifically, a $B$-ary HO-GNN is equivalent to an NLM applied to $\left( {B + 1}\right)$-ary hyperedges. Proofs are in Appendix B.3.
|
| 34 |
+
|
| 35 |
+
Given Theorem 2.1, we can focus our analysis on just one type of hypergraph neural network. Specifically, we will focus on Neural Logic Machines [NLM; 5] because their architecture naturally aligns with first-order logic formula structures, which will aid some of our analysis. An NLM is characterized by the hyperparameters $D$ (depth) and $B$ (maximum arity). We assume that $B$ is a constant, but $D$ can depend on the size of the input graph. We will use $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ to denote an NLM family with depth $D$ and maximum arity $B$. Other parameters, such as the width of the neural networks, affect the precise details of what functions can be realized, as in a regular neural network, but do not affect the analyses in this extended abstract. Furthermore, we will focus on neural networks with bounded precision, and briefly discuss how our results generalize to the unbounded-precision case.
|
| 36 |
+
|
| 37 |
+
§ 3 EXPRESSIVENESS OF RELATIONAL NEURAL NETWORKS
|
| 38 |
+
|
| 39 |
+
We start from a formal definition of hypergraph neural network expressiveness.
|
| 40 |
+
|
| 41 |
+
Definition 3.1 (Expressiveness). We say a model family ${\mathcal{M}}_{1}$ is at least as expressive as ${\mathcal{M}}_{2}$, written as ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$, if for all ${M}_{2} \in {\mathcal{M}}_{2}$, there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that ${M}_{1}$ can realize ${M}_{2}$. A model family ${\mathcal{M}}_{1}$ is more expressive than ${\mathcal{M}}_{2}$, written as ${\mathcal{M}}_{1} \succ {\mathcal{M}}_{2}$, if ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$ and there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that no ${M}_{2} \in {\mathcal{M}}_{2}$ can realize ${M}_{1}$.
|
| 42 |
+
|
| 43 |
+
Arity Hierarchy We first aim to quantify how the maximum arity $B$ of the network’s representation affects its expressiveness and find that, in short, even if the inputs and outputs of neural networks are of low arity, the higher the maximum arity for intermediate layers, the more expressive the NLM is.
|
| 44 |
+
|
| 45 |
+
Theorem 3.1 (Arity Hierarchy). For any maximum arity $B$ , there exists a depth ${D}^{ * }$ such that: $\forall D \geq {D}^{ * },\operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ is more expressive than $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ . This theorem applies to both fixed-precision and unbounded-precision networks.
|
| 46 |
+
|
| 47 |
+
Proof sketch: Our proof slightly extends the proof of Morris et al. [9]. First, the set of graphs distinguishable by $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ is bounded by graphs distinguishable by a $D$ -round order- $B$ Weisfeiler-Leman test [10]. If models in NLM $\left\lbrack {D,B}\right\rbrack$ cannot generate different outputs for two distinct
hypergraphs ${G}_{1}$ and ${G}_{2}$ , but there exists $M \in \operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ that can generate different outputs for ${G}_{1}$ and ${G}_{2}$ , then we can construct a graph classification function $f$ that $\operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ (with some fixed precision) can realize but $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ (even with unbounded precision) cannot.* The full proof is described in Appendix C.1.
|
| 50 |
+
|
| 51 |
+
It is also important to quantify the minimum arity for realizing certain graph reasoning functions.
|
| 52 |
+
|
| 53 |
+
Theorem 3.2 (FOL realization bounds). Let ${\mathrm{{FOC}}}_{B}$ denote a fragment of first order logic with at most $B$ variables, extended with counting quantifiers of the form ${\exists }^{ \geq n}\phi$ , which state that there are at least $n$ nodes satisfying formula $\phi$ [11].
|
| 54 |
+
|
| 55 |
+
* (Upper Bound) Any function $f$ in ${\mathrm{{FOC}}}_{B}$ can be realized by $\mathrm{{NLM}}\left\lbrack {D,B}\right\rbrack$ for some $D$ .
|
| 56 |
+
|
| 57 |
+
* (Lower Bound) There exists a function $f \in {\mathrm{{FOC}}}_{B}$ such that for all $D$, $f$ cannot be realized by $\operatorname{NLM}\left\lbrack {D,B - 1}\right\rbrack$.
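As a concrete illustration of a counting quantifier, the following sketch (our own; all names are hypothetical) evaluates $\exists^{\geq n} x . \phi(x)$ on a toy graph, using "node $y$ has at least $n$ neighbours" as the ${\mathrm{{FOC}}}_{2}$ property $\exists^{\geq n} x . \mathrm{edge}(x, y)$:

```python
def counting_exists(n, nodes, phi):
    # The counting quantifier E^{>=n} x . phi(x):
    # true iff at least n nodes satisfy phi.
    return sum(1 for x in nodes if phi(x)) >= n

# Toy graph: edges 0-1, 1-2, 1-3 over nodes {0, 1, 2, 3}.
edges = {(0, 1), (1, 2), (1, 3)}
nodes = range(4)

def adjacent_to(y):
    # phi(x) = edge(x, y), treating edges as undirected.
    return lambda x: (x, y) in edges or (y, x) in edges
```

Here node 1 has three neighbours, so $\exists^{\geq 2} x . \mathrm{edge}(x, 1)$ holds, while node 0 has only one.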
|
| 58 |
+
|
| 59 |
+
Proof: The upper bound part of the claim has been proved by Barceló et al. [12] for $B = 2$. The result generalizes easily to arbitrary $B$ because the counting quantifiers can be realized by sum aggregation. The lower bound part can be proved by applying Section 5 of [11], in which they show that ${\mathrm{{FOC}}}_{B}$ is equivalent to the $(B - 1)$-dimensional WL test in distinguishing non-isomorphic graphs. Given that $\mathrm{{NLM}}\left\lbrack {D,B - 1}\right\rbrack$ is equivalent to the $(B - 2)$-dimensional WL test of graph isomorphism, there must be an ${\mathrm{{FOC}}}_{B}$ formula that distinguishes two non-isomorphic graphs that $\operatorname{NLM}\left\lbrack {D,B - 1}\right\rbrack$ cannot. Hence, ${\mathrm{{FOC}}}_{B}$ cannot be realized by $\mathrm{{NLM}}\left\lbrack {\cdot ,B - 1}\right\rbrack$.
|
| 60 |
+
|
| 61 |
+
Depth Hierarchy We now study the dependence of the expressiveness of NLMs on depth $D$ . Neural networks are generally defined to have a fixed depth, but allowing them to have a depth that is dependent on the number of nodes $n = \left| V\right|$ in the graph can substantially increase their expressive power. In the following, we define a depth hierarchy by analogy to the time hierarchy in computational complexity theory [13], and we extend our notation to let $\operatorname{NLM}\left\lbrack {O\left( {f\left( n\right) }\right) ,B}\right\rbrack$ denote the class of adaptive-depth NLMs in which the growth-rate of depth $D$ is bounded by $O\left( {f\left( n\right) }\right)$ .
|
| 62 |
+
|
| 63 |
+
Conjecture 3.3 (Depth hierarchy). For any maximum arity $B$ , for any two functions $f$ and $g$ , if $g\left( n\right) = o\left( {f\left( n\right) /\log n}\right)$ , that is, $f$ grows logarithmically more quickly than $g$ , then fixed-precision $\operatorname{NLM}\left\lbrack {O\left( {f\left( n\right) }\right) ,B}\right\rbrack$ is more expressive than fixed-precision $\operatorname{NLM}\left\lbrack {O\left( {g\left( n\right) }\right) ,B}\right\rbrack$ .
|
| 64 |
+
|
| 65 |
+
There is a closely related result for the congested clique model in distributed computing, where [14] proved that $\operatorname{CLIQUE}\left( {g\left( n\right) }\right) \varsubsetneq \operatorname{CLIQUE}\left( {f\left( n\right) }\right)$ if $g\left( n\right) = o\left( {f\left( n\right) }\right)$. This result does not have the $\log n$ gap because the congested clique model allows $\log n$ bits to be transmitted between nodes at each iteration, while a fixed-precision NLM allows only a constant number of bits. The reason why the result on the congested clique cannot be applied to fixed-precision NLMs is that the congested clique assumes an unbounded-precision representation for each individual node.
|
| 66 |
+
|
| 67 |
+
However, Conjecture 3.3 does not hold for NLMs with unbounded precision, because there is an upper bound depth $O\left( {n}^{B - 1}\right)$ for a model’s expressive power. ${}^{ \dagger }$ That is, an unbounded-precision NLM cannot achieve stronger expressiveness by increasing its depth beyond $O\left( {n}^{B - 1}\right)$.
|
| 68 |
+
|
| 69 |
+
It is important to point out that, to realize a specific graph reasoning function, NLMs with different maximum arities $B$ may require different depths $D$. Fürer [15] provides a general construction for problems that higher-dimensional NLMs can solve in asymptotically smaller depth than lower-dimensional NLMs. In the following we give a concrete example: S-T Connectivity-$k$, which asks whether there is a path from $S$ to $T$ in a graph with length $\leq k$.
|
| 70 |
+
|
| 71 |
+
Theorem 3.4 (S-T Connectivity-$k$ with Different Max Arity). For any function $f\left( k\right)$, if $f\left( k\right) = o\left( k\right)$, $\operatorname{NLM}\left\lbrack {O\left( {f\left( k\right) }\right) ,2}\right\rbrack$ cannot realize S-T Connectivity-$k$. That is, S-T Connectivity-$k$ requires depth at least $\Omega \left( k\right)$ for a relational neural network with a maximum arity of $B = 2$. However, S-T Connectivity-$k$ can be realized by $\operatorname{NLM}\left\lbrack {O\left( {\log k}\right) ,3}\right\rbrack$.
|
| 72 |
+
|
| 73 |
+
Proof sketch. For any integer $k$ , we can construct a graph with two chains of length $k$ , so that if we mark two of the four ends as $S$ or $T$ , any $\operatorname{NLM}\left\lbrack {k - 1,2}\right\rbrack$ cannot tell whether $S$ and $T$ are on the same chain. The full proof is described in Appendix C.3.
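The $O(\log k)$-depth construction with arity 3 can be understood via the doubling identity $\mathrm{conn}_{2t}(x, y) = \exists z . \mathrm{conn}_t(x, z) \wedge \mathrm{conn}_t(z, y)$, which a 3-ary layer can express. A sketch of this via boolean matrix squaring (our own illustration, with hypothetical names; it computes reachability within $2^{\lceil \log_2 k \rceil} \geq k$ steps, assuming $k \geq 2$):

```python
import math

def reachable_within(adj, k):
    # conn_1: adjacency plus self-loops, so shorter paths also count.
    n = len(adj)
    conn = [[i == j or adj[i][j] for j in range(n)] for i in range(n)]
    # Doubling: conn_{2t}(x, y) = exists z . conn_t(x, z) and conn_t(z, y).
    # After ceil(log2 k) squarings (k >= 2 assumed), conn covers every path
    # of length <= 2^ceil(log2 k) >= k.
    for _ in range(math.ceil(math.log2(k))):
        conn = [[any(conn[x][z] and conn[z][y] for z in range(n))
                 for y in range(n)] for x in range(n)]
    return conn
```

Each squaring is one use of a ternary intermediate relation, so $O(\log k)$ such steps suffice, matching the theorem; an exact-length-$k$ variant would additionally track distances.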
|
| 74 |
+
|
| 75 |
+
There are many important graph reasoning tasks that do not have known depth lower bounds, including all-pair connectivity and shortest distance [16, 17]. In Appendix C.3, we discuss the concrete complexity bounds for a series of graph reasoning problems.
|
| 76 |
+
|
| 77 |
+
${}^{ * }$ Note that the arity hierarchy applies to fixed-precision and unbounded-precision networks separately. For example, $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ with unbounded precision is incomparable with $\operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ with fixed precision.
|
| 78 |
+
|
| 79 |
+
${}^{ \dagger }$ See Appendix C.2 for a formal statement and the proof.
|
| 80 |
+
|
| 81 |
+
§ 4 LEARNING AND GENERALIZATION IN RELATIONAL NEURAL NETWORKS
|
| 82 |
+
|
| 83 |
+
Given our understanding of what functions can be realized by NLMs, we move on to the problem of learning them: can we effectively learn an NLM to solve a desired task given a sufficient number of input-output examples? In this paper, we show that applying enumerative training with examples up to some fixed graph size can ensure that the trained neural network will generalize to all graphs, including those larger than any appearing in the training set.
|
| 84 |
+
|
| 85 |
+
A critical determinant of the generalization ability for NLMs is the aggregation function they use. Specifically, Xu et al. [18] have shown that using sum as the aggregation function provides maximum expressiveness for graph neural networks. However, sum aggregation cannot be implemented in fixed-precision models with an arbitrary number of nodes, because as the graph size $n$ increases, the range of the sum aggregation also increases.
|
| 86 |
+
|
| 87 |
+
Definition 4.1 (Fixed-precision aggregation function). An aggregation function is fixed precision if it maps from any finite set of inputs with values drawn from finite domains to a fixed finite set of possible output values; that is, the cardinality of the range of the function cannot grow with the number of elements in the input set. Two useful fixed-precision aggregation functions are max, which computes the dimension-wise maximum over the set of input values, and fixed-precision mean, which approximates the dimension-wise mean to a fixed decimal place.
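The two fixed-precision aggregators from Definition 4.1, contrasted with sum, can be sketched as follows (our own illustration over lists of feature vectors):

```python
def max_agg(values):
    # Dimension-wise maximum: the output range never grows with the
    # number of aggregated elements, so this is fixed precision.
    return [max(col) for col in zip(*values)]

def fixed_precision_mean(values, decimals=2):
    # Mean rounded to a fixed number of decimal places, so the output
    # domain stays finite regardless of how many elements are aggregated.
    n = len(values)
    return [round(sum(col) / n, decimals) for col in zip(*values)]

def sum_agg(values):
    # For contrast: sum's range grows with the number of inputs,
    # so it is NOT fixed precision.
    return [sum(col) for col in zip(*values)]
```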
|
| 88 |
+
|
| 89 |
+
In order to focus on structural generalization in this section, we consider an enumerative training paradigm. When the input hypergraph representation domain $\mathcal{S}$ is a finite set, we can enumerate the set ${\mathcal{G}}_{ < N}$ of all possible input hypergraph representations of size bounded by $N$ . We first enumerate all graph sizes $n \leq N$ ; for each $n$ , we enumerate all possible values assigned to the hyperedges in the input. Given training size $N$ , we enumerate all inputs in ${\mathcal{G}}_{ \leq N}$ , associate with each one the corresponding ground-truth output representation, and train the model with these input-output pairs.
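For a binary edge domain $\mathcal{S} = \{0, 1\}$, the enumeration of ${\mathcal{G}}_{\leq N}$ described above can be sketched as follows (our own illustration; the function name is hypothetical):

```python
import itertools

def enumerate_graphs(max_n, labels=(0, 1)):
    # Enumerate every input representation of size n <= max_n, where each
    # potential binary edge carries a label from the finite domain
    # S = {0, 1} (edge absent / present).
    for n in range(1, max_n + 1):
        pairs = list(itertools.combinations(range(n), 2))
        for assignment in itertools.product(labels, repeat=len(pairs)):
            yield n, dict(zip(pairs, assignment))
```

The count grows as $\sum_n 2^{\binom{n}{2}}$, which is why the set is enumerable but unbounded as the graph size grows.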
|
| 90 |
+
|
| 91 |
+
This has much stronger data requirements than the standard sampling-based training mechanisms in machine learning. In practice, this can be approximated well when the input domain $\mathcal{S}$ is small and the input data distribution is approximately uniformly distributed. The enumerative learning setting is studied by the language identification in the limit community [19], in which it is called complete presentation. This is an interesting learning setting because even if the domain for each individual hyperedge representation is finite, as the graph size can go arbitrarily large, the number of possible inputs is enumerable but unbounded.
|
| 92 |
+
|
| 93 |
+
Theorem 4.1 (Fixed-precision generalization under complete presentation). For any hypergraph reasoning function $f$, if it can be realized by a fixed-precision relational neural network model family $\mathcal{M}$, then there exists an integer $N$ such that, if we train the model with complete presentation on ${\mathcal{G}}_{ \leq N}$, the set of all input hypergraph representations with size at most $N$, then for all $M \in \mathcal{M}$,
|
| 94 |
+
|
| 95 |
+
$$
|
| 96 |
+
\mathop{\sum }\limits_{{G \in {\mathcal{G}}_{ \leq N}}}1\left\lbrack {M\left( G\right) \neq f\left( G\right) }\right\rbrack = 0 \Rightarrow \forall G \in {\mathcal{G}}_{\infty } : M\left( G\right) = f\left( G\right) .
|
| 97 |
+
$$
|
| 98 |
+
|
| 99 |
+
That is, as long as $M$ fits all training examples, it will generalize to all possible hypergraphs in ${\mathcal{G}}_{\infty }$ .
|
| 100 |
+
|
| 101 |
+
Proof. The key observation is that, for any fixed representation width, there are only a finite number of distinct models in a fixed-precision NLM family, independent of the graph size $n$. Let ${W}_{b}$ be the number of bits in each intermediate representation of a fixed-precision NLM. There are at most ${\left( {2}^{{W}_{b}}\right) }^{{2}^{{W}_{b}}}$ different mappings from inputs to outputs. Hence, if $N$ is sufficiently large, the training examples distinguish the correct model from all other models in the finite hypothesis space.
Our results are related to the algorithmic alignment approach [20, 21]. In contrast to their Probably Approximately Correct (PAC) learning bounds for sample efficiency, our expressiveness results directly quantify whether a hypergraph neural network can be trained to realize a specific function. Our generalization theorem also applies more generally than their result on learning the Max-Degree function, thanks to the fixed-precision assumption.
§ 5 CONCLUSION
In this extended abstract, we have shown the substantial increase in expressive power afforded by higher-arity relations and increasing depth, and have characterized a strong form of structural generalization from training on small graphs to performance on larger ones. We further discuss the relationship between these results and existing work in Appendix A. All theoretical results are supported by the empirical results discussed in Appendix D. Although many questions remain open about the overall generalization capacity of these models in continuous and noisy domains, we believe this work sheds light on their utility and potential for application in a variety of problems.
papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_md/Initial_manuscript.md
# Beyond 1-WL with Local Ego-Network Encodings
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
Identifying similar network structures is key to capture graph isomorphisms and learn representations that exploit structural information encoded in graph data. This work shows that ego-networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler-Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego-networks into sparse vectors that enrich Message Passing (MP) Graph Neural Networks (GNNs) beyond 1-WL expressivity. We describe formally the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on seven GNN architectures.
## 1 Introduction
Novel approaches for learning on graphs have appeared in recent years within the machine learning community [1]. Notably, the introduction of Graph Convolutional Networks [2, 3] led to a broad body of research aiming to efficiently capture network interactions, leveraging spectral information [4], scaling beyond seen nodes [5], or generalizing attention to Graph Neural Networks (GNNs) [6]. Underlying this family of GNN models is the Message Passing (MP) mechanism [7]. In MP-GNNs, a node is represented by iteratively aggregating feature 'messages' from its neighbours based on edge connectivity, with successful applications on several domains [7-11]. However, recently it has been shown that message passing limits the representational power of GNNs, which are bound by the Weisfeiler-Lehman (1-WL) test [12]. As such, MP-GNNs cannot reach the expressivity of $k$ -dimensional WL generalizations [13,14] ( $k$ -WL) or analogous MATLANG [15,16] languages. Casting MP-GNN representations in these terms [17] has driven recent research towards expressivity.
To improve expressivity, recent approaches extend message passing by leveraging topological information from cell complexes [18], extending the message-passing mechanism with sub-graph information [19-22], propagating messages through $k$ network hops [23], introducing relative positioning information for network vertices [24], or using higher-order $k$ -vertex tuples to reach $k$ -WL expressivity [14]. In another direction, methods such as Provably Powerful Graph Networks (PPGN) [25] are guaranteed to be as expressive as the 3-WL test, at cubic time and quadratic memory costs. More recently, GNNML3 [26] introduced a network architecture with the same memory and time costs as MP-GNNs, but experimentally capable of 3-WL expressivity, introducing spectral information through a preprocessing step with cubic worst-case time complexity.
The aforementioned approaches improve expressivity by extending MP-GNN architectures, often evaluating on standardized benchmarks [27-29]. However, identifying the optimal approach on novel domains remains unclear and requires costly architecture search. In this work, we present IGEL, an Inductive Graph Encoding of Local information allowing MP-GNN and Deep Neural Network (DNN) models to go beyond 1-WL expressivity without modifying model architectures. IGEL is closely related to the Weisfeiler-Lehman isomorphism test, and produces inductive representations of vertex structures that can be introduced into MP-GNN models. IGEL reframes capturing beyond-1-WL information as a pre-processing step that simply extends node attributes, irrespective of model architecture.
## 2 IGEL: Ego-Networks as Sparse Inductive Representations
Given a graph $G = \left( {V, E}\right)$ , we define $n = \left| V\right|$ and $m = \left| E\right|$ ; ${d}_{G}\left( v\right)$ is the degree of a node $v$ in $G$ , and ${d}_{\max }$ is the maximum degree. For $u, v \in V$ , ${l}_{G}\left( {u, v}\right)$ is their shortest-path distance, and $\operatorname{diam}\left( G\right) = \max \left( {{l}_{G}\left( {u, v}\right) \mid u, v \in V}\right)$ is the diameter of $G$ . ${\mathcal{N}}_{G}^{\alpha }\left( v\right)$ is the set of neighbours of $v$ in $G$ up to distance $\alpha$ (Equation 1), and ${\mathcal{E}}_{v}^{\alpha }$ is the $\alpha$ -depth ego-network centered on $v$ (Equation 2):
$$
{\mathcal{N}}_{G}^{\alpha }\left( v\right) = \left\{ {u \mid u \in V \land {l}_{G}\left( {u, v}\right) \leq \alpha }\right\} ,
$$
Let $\{\!\{ \cdot \}\!\}$ denote a lexicographically-ordered multi-set. Algorithm 1 shows the 1-WL test, where hash maps a multi-set to an equivalence class shared by all nodes with matching multi-set encodings after a 1-WL iteration. The output of 1-WL is in ${\mathbb{N}}^{n}$ , mapping each node to a color, bounded by $n$ distinct colors if each node is uniquely colored. Higher-order variants of the WL test (denoted $k$ -WL) operate on $k$ -tuples of vertices, such that colors are assigned to $k$ -vertex tuples. If two graphs ${G}_{1},{G}_{2}$ are not distinguishable by the $k$ -WL test (that is, their coloring histograms match), they are $k$ -WL equivalent, denoted ${G}_{1}{ \equiv }_{k\text{-WL}}{G}_{2}$ . Due to the hashing step, 1-WL does not preserve distance information in the encoding, and perturbations (e.g., a different color in a neighbour) produce different node-level representations. IGEL addresses both limitations, improving expressivity in the process.
### 2.1 The IGEL Algorithm
Intuitively, IGEL encodes a vertex $v$ with the multi-set of ordered degree sequences at each distance within ${\mathcal{E}}_{v}^{\alpha }$ , executing 1-WL for $\alpha$ steps with two modifications. First, the hashing step is removed and replaced by computing the union of multi-sets across steps $\left( \cup \right)$ ; second, the iteration number is explicitly introduced in the representation, with the output multi-set ${e}_{v}^{\alpha }$ shown in Algorithm 2.
In order to be used as vertex features, the multi-set can be represented as a sparse vector ${\operatorname{IGEL}}_{\text{vec }}^{\alpha }\left( v\right)$ , where the $i$ -th index contains the frequency of the path-length and degree pair $\left( {\lambda ,\delta }\right)$ with $f\left( {\lambda ,\delta }\right) = i$ . Degrees greater than ${d}_{\max }$ are capped to ${d}_{\max }$ , and vector indices are given by a bijective function $f : \left( {\mathbb{N},\mathbb{N}}\right) \mapsto \mathbb{N}$ , as shown in Figure 1:
${\operatorname{IGEL}}_{\text{vec }}^{\alpha }{\left( v\right) }_{i} = \left| \left\{ {\left( {\lambda ,\delta }\right) \in {e}_{v}^{\alpha }\text{ s.t. }f\left( {\lambda ,\delta }\right) = i}\right\} \right| .$
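One concrete choice of $f$ , once degrees are capped at ${d}_{\max }$ , is the base- $\left( {{d}_{\max } + 1}\right)$ layout $f\left( {\lambda ,\delta }\right) = \lambda \cdot \left( {{d}_{\max } + 1}\right) + \delta$ . A minimal sketch of the vectorization under this assumed index layout (the paper only requires $f$ to be bijective, so this is an illustration, not the authors' choice):

```python
from collections import Counter

def igel_vec(pairs, d_max):
    """Sparse frequency vector over (path-length, degree) pairs.
    f(lam, delta) = lam * (d_max + 1) + delta is injective once
    degrees are capped at d_max (an assumed index layout)."""
    freq = Counter((lam, min(delta, d_max)) for lam, delta in pairs)
    return {lam * (d_max + 1) + delta: c for (lam, delta), c in freq.items()}

# Pairs from the Figure 1 caption, with d_max = 4.
vec = igel_vec([(0, 2), (1, 2), (1, 4), (2, 3), (2, 3), (2, 4)], d_max=4)
print(vec)  # {2: 1, 7: 1, 9: 1, 13: 2, 14: 1}
```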
${G}_{1} = \left( {{V}_{1},{E}_{1}}\right)$ and ${G}_{2} = \left( {{V}_{2},{E}_{2}}\right)$ are IGEL-equivalent for $\alpha$ if the sorted multi-set containing node representations is the same for ${G}_{1}$ and ${G}_{2}$ :
${G}_{1}{ \equiv }_{\text{IGEL }}^{\alpha }{G}_{2} \Leftrightarrow$
$\left\{ \left\{ {{e}_{{v}_{1}}^{\alpha } : \forall {v}_{1} \in {V}_{1}}\right\} \right\} = \left\{ \left\{ {{e}_{{v}_{2}}^{\alpha } : \forall {v}_{2} \in {V}_{2}}\right\} \right\} .$(1)
$$
{\mathcal{E}}_{v}^{\alpha } = \left( {{V}^{\prime },{E}^{\prime }}\right) \subseteq G\text{, s.t. } u \in {V}^{\prime } \Leftrightarrow u \in {\mathcal{N}}_{G}^{\alpha }\left( v\right) \text{ and } \left( {u, w}\right) \in {E}^{\prime } \Leftrightarrow \left( {u, w}\right) \in E \land u, w \in {V}^{\prime }\text{.} \tag{2}
$$
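Both ${\mathcal{N}}_{G}^{\alpha }\left( v\right)$ and ${\mathcal{E}}_{v}^{\alpha }$ follow from a single breadth-first traversal. A minimal sketch, assuming graphs stored as adjacency-list dicts (a representation not fixed by the paper):

```python
from collections import deque

def distances_from(v, adj):
    """BFS shortest-path distances l_G(v, .) in an unweighted graph."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def ego_network(v, adj, alpha):
    """Induced subgraph on N_G^alpha(v): nodes within distance alpha of v,
    plus every edge between two such nodes (Equation 2)."""
    dist = distances_from(v, adj)
    nodes = {u for u, d in dist.items() if d <= alpha}
    edges = {frozenset((u, w)) for u in nodes for w in adj[u] if w in nodes}
    return nodes, edges
```

For example, on the path graph 0-1-2-3, `ego_network(0, adj, 2)` keeps nodes {0, 1, 2} and the two edges among them.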
IGEL performs 1-WL-style tests within ${\mathcal{E}}_{v}^{\alpha }$ : it is a variant of the 1-WL algorithm shown in Algorithm 1, executed within each node's $\alpha$ -depth ego-network.

Figure 1: IGEL encoding of the green vertex. The dashed region denotes ${\mathcal{E}}_{v}^{\alpha }\left( {\alpha = 2}\right)$ . The green vertex is at distance 0, blue vertices at 1, and red vertices at 2. Labels show degrees in ${\mathcal{E}}_{v}^{\alpha }$ . The frequency of $\left( {\lambda ,\delta }\right)$ tuples forming ${\operatorname{IGEL}}_{\text{vec }}^{\alpha }\left( v\right)$ is: $\{ \left( {0,2}\right) : 1,\left( {1,2}\right) : 1,\left( {1,4}\right) : 1,\left( {2,3}\right) : 2,\left( {2,4}\right) : 1\}$ .
Algorithm 1 1-WL (Color refinement).
---
Input: $G = \left( {V, E}\right)$
1: ${c}_{v}^{0} \mathrel{\text{:=}} \operatorname{hash}\left( \left\{ \left\{ {{d}_{G}\left( v\right) }\right\} \right\} \right) \forall v \in V$
do
${c}_{v}^{i + 1} \mathrel{\text{:=}} \operatorname{hash}\left( \left\{ \left\{ {{c}_{u}^{i} : \mathop{\forall }\limits_{{u \neq v}}u \in {\mathcal{N}}_{G}^{1}\left( v\right) }\right\} \right\} \right)$
while ${c}_{v}^{i} \neq {c}_{v}^{i - 1}$
Output: ${c}_{v}^{i} : V \mapsto \mathbb{N}$
---
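Colour refinement as in Algorithm 1 can be sketched in a few lines. A minimal sketch: we fold each node's own colour into the update (a standard variant of the hashing step) and stop once the colour partition no longer refines:

```python
def wl_colors(adj):
    """1-WL colour refinement sketch: start from degree colours and
    iteratively hash each node's colour with its sorted neighbour colours."""
    color = {v: hash((len(adj[v]),)) for v in adj}
    while True:
        new = {v: hash((color[v], tuple(sorted(color[u] for u in adj[v]))))
               for v in adj}
        # Refinement is monotone: stop when the number of colour classes
        # no longer grows.
        if len(set(new.values())) == len(set(color.values())):
            return color
        color = new
```

On a 3-node path the two endpoints share a colour while the middle node gets its own; on a cycle every node receives the same colour.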
Algorithm 2 IGEL Encoding.
---
Input: $G = \left( {V, E}\right) ,\alpha : \mathbb{N}$
${e}_{v}^{0} \mathrel{\text{:=}} \left\{ {\{ \left( {0,{d}_{G}\left( v\right) }\right) \} }\right\} \forall v \in V$
for $i \mathrel{\text{:=}} 1$ to $\alpha$ do
${e}_{v}^{i} \mathrel{\text{:=}} {e}_{v}^{i - 1} \cup \{ \{ \left( {i,{d}_{{\mathcal{E}}_{v}^{\alpha }}\left( u\right) }\right) : \forall u \in {\mathcal{N}}_{G}^{\alpha }\left( v\right) \mid {l}_{G}\left( {u, v}\right) = i\} \}$
end for
Output: ${e}_{v}^{\alpha } : V \mapsto \{ \{ \left( {\mathbb{N},\mathbb{N}}\right) \} \}$
---
Space complexity. IGEL's worst-case space complexity is $\mathcal{O}\left( {\alpha \cdot n \cdot {d}_{\max }}\right)$ , conservatively assuming that every node requires ${d}_{\max }$ entries at each of the $\alpha$ depths from the center of the ego-network.
Time complexity. Each vertex has at most ${d}_{\max }$ neighbours, so the $\alpha$ iterations imply traversing geometrically larger ego-networks with ${\left( {d}_{\max }\right) }^{\alpha }$ vertices, upper-bounded by $m$ . Thus IGEL's time complexity is $\mathcal{O}\left( {n \cdot \min \left( {m,{\left( {d}_{\max }\right) }^{\alpha }}\right) }\right)$ , reaching $\mathcal{O}\left( {n \cdot m}\right)$ when $\alpha \geq \operatorname{diam}\left( G\right)$ .
## 3 Theoretical and Experimental Findings
First, we analyze IGEL's expressive power with respect to 1-WL and recent improvements. Second, we measure the impact of IGEL as an additional input to enrich existing MP-GNN architectures.
### 3.1 Expressivity: Which Graphs are IGEL-Distinguishable?
In this section, we discuss the increased expressivity of IGEL with respect to 1-WL, and identify expressivity upper-bounds for graphs that are indistinguishable under MATLANG and the 3-WL test.
- Relationship to 1-WL. IGEL is capable of distinguishing graphs that are indistinguishable by the 1-WL test, e.g., $d$ -regular graphs. A graph is $d$ -regular if all of its nodes have degree $d$ . $d$ -regular graphs with equal cardinality are indistinguishable by 1-WL: for any pair of $d$ -regular graphs ${G}_{1}$ and ${G}_{2}$ such that $\left| {V}_{1}\right| = \left| {V}_{2}\right|$ , ${G}_{1}{ \equiv }_{1\text{-WL}}{G}_{2}$ (see Appendix A for details).

Figure 2: IGEL encodings for two cospectral 4-regular graphs from [30]. IGEL distinguishes 4 kinds of structures within the graphs (associated with every node as a, b, c, and d). The two graphs can be distinguished since the encoded structures and their frequencies do not match.
However, there exist $d$ -regular graphs that can be distinguished by IGEL, as shown in Figure 2. Since both graphs are 4-regular, tracing Algorithm 1 shows that the 1-WL test assigns the same color to all nodes and stabilizes after one iteration. In contrast, IGEL with $\alpha = 1$ identifies 4 kinds of structures with different frequencies between the graphs, and is thus able to distinguish them.
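A smaller instance of the same phenomenon, using 2-regular rather than 4-regular graphs: a 6-cycle and two disjoint triangles receive identical 1-WL colourings (every node gets the same colour in both), yet their IGEL encodings at $\alpha = 2$ differ. A runnable sketch, assuming adjacency-list dicts:

```python
from collections import Counter, deque

def igel(v, adj, alpha):
    """Multi-set of (distance, ego-network degree) pairs around v."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    ego = {u for u, d in dist.items() if d <= alpha}
    deg = {u: sum(1 for w in adj[u] if w in ego) for u in ego}
    return Counter((dist[u], deg[u]) for u in ego)

# C6: a 6-cycle; T2: two disjoint triangles. Both are 2-regular on 6
# nodes, so 1-WL assigns every node the same colour in both graphs.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
t2 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}

# In C6 a node sees two distance-2 neighbours (ego-degree 1); in the
# triangles it sees no distance-2 neighbours at all.
assert igel(0, c6, 2) == Counter({(0, 2): 1, (1, 2): 2, (2, 1): 2})
assert igel(0, t2, 2) == Counter({(0, 2): 1, (1, 2): 2})
```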
- Expressivity upper bounds. We identify an upper expressivity bound for IGEL: the method fails to distinguish, e.g., Strongly Regular Graphs (Definition 1) with equal parameters (Theorem 1; see Appendix B for details). Definition 1. An $n$ -vertex $d$ -regular graph is strongly regular, denoted $\operatorname{SRG}\left( {n, d,\beta ,\gamma }\right)$ , if every pair of adjacent vertices has $\beta$ common neighbours, and every pair of non-adjacent vertices has $\gamma$ common neighbours.
Theorem 1. IGEL cannot distinguish SRGs when $n, d$ , and $\beta$ are the same, for any values of $\gamma$ (equal or otherwise). IGEL with $\alpha = 1$ can only distinguish SRGs with different values of $n, d$ , or $\beta$ , while IGEL with $\alpha = 2$ can only distinguish SRGs with different values of $n$ or $d$ .
Our findings show that IGEL is a powerful representation, capable of distinguishing 1-WL-equivalent graphs such as those in Figure 2, which, being cospectral, are known to be expressible only in strictly more powerful MATLANG sub-languages than the one matching 1-WL [16]. Additionally, the upper bound on Strongly Regular Graphs is a hard ceiling on expressivity, since SRGs are known to be indistinguishable by 3-WL [31]. IGEL thus shares the experimental expressivity upper bound of recent methods such as GNNML3 [26]. Furthermore, IGEL provably reaches comparable expressivity on SRGs to sub-graph methods implemented within MP-GNN architectures (see Appendix B, subsection B.2), such as Nested GNNs [21] and GNN-AK [22], which are known to be no less powerful than 3-WL, and the ESAN framework when leveraging ego-networks with root-node flags as its subgraph sampling policy (EGO+) [19], which is as powerful as 3-WL.
### 3.2 Experimental Evaluation
We evaluate ${\operatorname{IGEL}}_{\text{vec }}^{\alpha }\left( v\right)$ as a method of producing architecture-agnostic vertex features on five tasks: graph classification, isomorphism detection, graphlet counting, link prediction, and node classification.
Experimental Setup. We reproduce the setup of [26], introducing IGEL as features on graph classification, isomorphism detection, and graphlet counting, and comparing performance with and without IGEL on six GNN architectures. We also evaluate IGEL on link prediction against transductive baselines, and on node classification as an additional feature for MLPs without message passing.
Notation. The following formatting denotes significant (as per paired t-tests) positive, negative, and insignificant differences after introducing IGEL, with the best results per task / dataset underlined.
Table 1: Per-model graph classification accuracy metrics on TU data sets. Each cell shows the average accuracy of the model and data set in that row and column, with IGEL (left) and without IGEL (right).
<table><tr><td>Model</td><td>Enzymes</td><td>Mutag</td><td>Proteins</td><td>PTC</td></tr><tr><td>MLP</td><td>${41.10} > {26.18}^{\circ}$</td><td>${87.61} > {84.61}^{\circ}$</td><td>${75.43} \sim {75.01}$</td><td>${64.59} > {62.79}^{\circ}$</td></tr><tr><td>GCN</td><td>${54.48} > {48.60}^{\circ}$</td><td>${89.61} > {85.42}^{\circ}$</td><td>${75.67} > {74.50}^{\circ}$</td><td>${65.76} \sim {65.21}$</td></tr><tr><td>GAT</td><td>${54.88} \sim {54.95}$</td><td>${90.00} > {86.14}^{\circ}$</td><td>${73.44} > {70.51}^{\circ}$</td><td>${66.29} \sim {66.29}$</td></tr><tr><td>GIN</td><td>${54.77} > {53.44}^{*}$</td><td>${89.56} \sim {88.33}$</td><td>${73.32} > {72.05}^{\circ}$</td><td>${61.44} \sim {60.21}$</td></tr><tr><td>Chebnet</td><td>${61.88} \sim {62.23}$</td><td>${91.44} > {88.33}^{\circ}$</td><td>${74.30} > {66.94}^{\circ}$</td><td>${64.79} \sim {63.87}$</td></tr><tr><td>GNNML3</td><td>${61.42} < {62.79}^{\circ}$</td><td>${92.50} > {91.47}^{*}$</td><td>${75.54} > {62.32}^{\circ}$</td><td>${64.26} < {66.10}^{\circ}$</td></tr><tr><td colspan="5">*: $p < {0.01}$ ; ${}^{\circ}$ : $p < {0.0001}$</td></tr></table>
Table 2: Mean $\pm$ stddev of the best IGEL-augmented graph classification model and reported results on $k$ -hop, GSN, and ESAN from $\left\lbrack {{19},{20},{23}}\right\rbrack$ . Best performing baselines underlined.
<table><tr><td>Model</td><td>Mutag</td><td>Proteins</td><td>PTC</td></tr><tr><td>IGEL (best)</td><td>${92.5} \pm {1.2}$</td><td>${75.7} \pm {0.3}$</td><td>${66.3} \pm {1.3}$</td></tr><tr><td>$k$ -hop [23] ${}^{ \dagger }$</td><td>${87.9} \pm {1.2}^{\diamond }$</td><td>${75.3} \pm {0.4}$</td><td>-</td></tr><tr><td>GSN [20] ${}^{ \dagger }$</td><td>${92.2} \pm {7.5}$</td><td>${76.6} \pm {5.0}$</td><td>${68.2} \pm {7.2}$</td></tr><tr><td>ESAN [19] ${}^{ \dagger }$</td><td>${91.1} \pm {7.0}$</td><td>${76.7} \pm {4.1}$</td><td>${69.2} \pm {6.5}$</td></tr></table>
$\dagger$ : Results as reported by $\left\lbrack {{19},{20},{23}}\right\rbrack$ .
- Graph Classification. Table 1 shows graph classification results on the TU molecule data sets [28]. We evaluate differences in mean accuracy over 10 runs with (left) and without (right) IGEL. We do not tune network hyper-parameters, and establish statistical significance through paired t-tests, with $p < {0.01}$ (*) and $p < {0.0001}$ ( ${}^{\circ}$ ). Our results show that introducing IGEL improves the performance of all MP-GNN models on the Mutag and Proteins data sets. On the Enzymes and PTC data sets, results are mixed: for all models other than GNNML3, IGEL either significantly improves accuracy (for MLPNet, GCN, and GIN on Enzymes), or does not have a negative impact on performance.
In Table 2, we compare the best IGEL results from Table 1 with reported results for expressive baselines: $k$ -hop GNNs [23], GSNs [20], and ESAN [19]. All results are comparable to IGEL except Mutag, where IGEL significantly outperforms $k$ -hop with $p < {0.0001}$ . When comparing IGEL and best performing baselines for every data set, no differences are statistically significant $\left( {p > {0.01}}\right)$ .
- Isomorphism Detection & Graphlet Counting. Adding IGEL to the six models in Table 1 on the EXP [32] graph isomorphism task produces significant improvements: with IGEL, all GNN models distinguish all non-isomorphic yet 1-WL-equivalent EXP graph pairs, versus 50% accuracy without IGEL (i.e., random guessing). Likewise, IGEL significantly improves GNN performance on the RandomGraph data set [33] when counting triangles, tailed triangles, and the custom 1-WL graphlets proposed by [26] (see detailed results in Appendix C).
- Link Prediction & Node Classification. We test IGEL on edge- and node-level tasks to assess its use as a baseline in non-GNN settings. On a transductive link prediction task, we train DeepWalk-style [34] embeddings of IGEL encodings, rather than node identities, on the Facebook and CA-AstroPh graphs [35]. Modelling link prediction as an edge-level binary classification task, IGEL-derived embeddings outperform transductive baselines, measuring 0.976 vs. 0.968 (Facebook) and 0.984 vs. 0.937 (CA-AstroPh) AUC for IGEL vs. node2vec [36]. On multi-label node classification on PPI [5], we train an MLP (i.e., no message passing) with node features and IGEL encodings. Our MLP achieves better micro-F1 (0.850) with $\alpha = 1$ than MP-GNN architectures such as GraphSAGE (0.768, as reported in [6]), but underperforms a 3-layer GAT (0.973 micro-F1 from [6]).
- Experimental Summary. Introducing IGEL yields performance comparable to state-of-the-art methods without architectural modifications, including strong expressivity-focused baselines such as GNNML3, $k$ -hop, GSN, and ESAN. Furthermore, IGEL achieves this at a lower computational cost than, for instance, GNNML3, which requires an $\mathcal{O}\left( {n}^{3}\right)$ eigen-decomposition step to introduce spectral channels. Finally, IGEL can also be used in transductive settings (link prediction) and node-level tasks (node classification), outperforming strong transductive baselines and enhancing models without message passing, such as MLPs. As such, we believe IGEL is an attractive baseline with a clear relationship to the 1-WL test that improves MP-GNN expressivity without costly architecture search.
## 4 Conclusions
We presented IGEL, a novel vertex representation algorithm for unattributed graphs that allows MP-GNN architectures to go beyond 1-WL expressivity. We showed that IGEL is related to, and more expressive than, the 1-WL test, and formally proved an expressivity upper bound on certain families of Strongly Regular Graphs. Finally, our experimental results indicate that introducing IGEL into existing MP-GNN architectures yields performance comparable to state-of-the-art methods, without architectural modifications and at lower computational cost than other approaches.
## References
[1] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. ArXiv, abs/2104.13478, 2021. 1
[2] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pages 2014-2023, New York, USA, 2016. 1
[3] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR, 2017. 1
[4] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. 1
[5] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30, 2017. 1, 4
[6] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018. 1, 4
[7] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70, page 1263-1272, 2017. 1
[8] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM International Conference on Knowledge Discovery & Data Mining, pages 974-983, 2018.
[9] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
[10] Bidisha Samanta, Abir De, Gourhari Jana, Vicenç Gómez, Pratim Chattaraj, Niloy Ganguly, and Manuel Gomez-Rodriguez. NEVAE: A deep generative model for molecular graphs. Journal of Machine Learning Research, 21(114):1-33, 2020.
[11] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and koray kavukcuoglu. Interaction networks for learning about objects, relations and physics. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. 1
[12] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. 1
[13] Martin Grohe. Descriptive Complexity, Canonisation, and Definable Graph Structure Theory. Lecture Notes in Logic. Cambridge University Press, 2017. 1
[14] Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):4602-4609, Jul. 2019. 1
[15] Robert Brijder, Floris Geerts, Jan Van Den Bussche, and Timmy Weerwag. On the expressive power of query languages for matrices. ACM Trans. Database Syst., 44(4), oct 2019. 1
[16] Floris Geerts. On the expressive power of linear algebra on graphs. Theory of Computing Systems, 65:1-61, 01 2021. 1, 3
[17] Christopher Morris, Yaron Lipman, Haggai Maron, Bastian Rieck, Nils M. Kriege, Martin Grohe, Matthias Fey, and Karsten Borgwardt. Weisfeiler and Leman go machine learning: The story so far. Weisfeiler and Leman go Machine Learning: The Story so far, 2021. 1
[18] Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Liò, Guido F Montufar, and Michael Bronstein. Weisfeiler and Lehman go cellular: CW networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 2625-2640. Curran Associates, Inc., 2021. 1
[19] Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. In International Conference on Learning Representations, 2022. 1, 3, 4
[20] Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M. Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting, 2021. 4
[21] Muhan Zhang and Pan Li. Nested graph neural networks. arXiv preprint arXiv:2110.13197, 2021. 3
[22] Lingxiao Zhao, Wei Jin, Leman Akoglu, and Neil Shah. From stars to subgraphs: Uplifting any GNN with local structure awareness. In International Conference on Learning Representations, 2022. 1, 3
[23] Giannis Nikolentzos, George Dasoulas, and Michalis Vazirgiannis. k-hop graph neural networks. Neural Networks, 130:195-205, 2020. 1, 4
[24] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 7134- 7143, Long Beach, California, USA, 09-15 Jun 2019. PMLR. 1
[25] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. 1
[26] Muhammet Balcilar, Pierre Héroux, Benoit Gaüzère, Pascal Vasseur, Sébastien Adam, and Paul Honeine. Breaking the limits of message passing graph neural networks. In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021. 1, 3, 4, 10
|
| 206 |
+
|
| 207 |
+
[27] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020. 1
|
| 208 |
+
|
| 209 |
+
[28] Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. In ${ICML}$ 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020), 2020. URL www.graphlearning.io. 4
|
| 210 |
+
|
| 211 |
+
[29] Jiaxuan You, Rex Ying, and Jure Leskovec. Design space for graph neural networks. In NeurIPS, 2020. 1
|
| 212 |
+
|
| 213 |
+
[30] Edwin R Van Dam and Willem H Haemers. Which graphs are determined by their spectrum? Linear Algebra and its Applications, 373:241-272, 2003. 3
|
| 214 |
+
|
| 215 |
+
[31] V. Arvind, Frank Fuhlbrück, Johannes Köbler, and Oleg Verbitsky. On weisfeiler-leman invariance: Subgraph counts and related graph properties. Journal of Computer and System Sciences, 113:42-59, 2020. 3
|
| 216 |
+
|
| 217 |
+
[32] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 2112-2118. International Joint Conferences on Artificial Intelligence Organization, 8 2021.4
|
| 218 |
+
|
| 219 |
+
[33] Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 10383-10395. Curran Associates, Inc., 2020. 4, 10
|
| 220 |
+
|
| 221 |
+
[34] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701-710, 2014. 4
|
| 222 |
+
|
| 223 |
+
[35] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection, June 2014. URL http://snap.stanford.edu/data.4
|
| 224 |
+
|
| 225 |
+
[36] Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855-864, 2016. 4
|
| 226 |
+
|
| 227 |
+
## A 1-WL Expressivity and Regular Graphs.

Remark 1 shows that 1-WL, as defined in Algorithm 1, cannot distinguish $d$-regular graphs:

Remark 1. Let $G_1$ and $G_2$ be two $d$-regular graphs such that $|V_1| = |V_2|$. Tracing Algorithm 1, all vertices in $V_1, V_2$ share the same initial color due to $d$-regularity: $\forall v \in V_1 \cup V_2; c_v^0 = \operatorname{hash}(\{\{d\}\})$. After the first color refinement iteration, consider the colorings of $G_1$ and $G_2$:

- $\forall v_1 \in V_1; c_{v_1}^1 \mathrel{\text{:=}} \operatorname{hash}\left(\left\{\left\{c_{u_1}^0 : \mathop{\forall}\limits_{u_1 \neq v_1} u_1 \in \mathcal{N}_{G_1}^1(v_1)\right\}\right\}\right)$,

- $\forall v_2 \in V_2; c_{v_2}^1 \mathrel{\text{:=}} \operatorname{hash}\left(\left\{\left\{c_{u_2}^0 : \mathop{\forall}\limits_{u_2 \neq v_2} u_2 \in \mathcal{N}_{G_2}^1(v_2)\right\}\right\}\right)$.

Since $\forall v_1 \in V_1, v_2 \in V_2; d = |\mathcal{N}_{G_1}^1(v_1)| = |\mathcal{N}_{G_2}^1(v_2)|$, substituting $c_{v_1}^1, c_{v_2}^1$ in the next iteration step yields $\{\{\operatorname{hash}(c_{v_1}^1) : \forall v_1 \in V_1\}\} = \{\{\operatorname{hash}(c_{v_2}^1) : \forall v_2 \in V_2\}\}$. Thus, on any pair of $d$-regular graphs with equal cardinality, 1-WL stabilizes after one iteration and produces equal colorings for all nodes on both graphs, regardless of whether $G_1$ and $G_2$ are isomorphic, as Figure 2 shows.

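Remark 1 can be checked mechanically. The sketch below (our own minimal illustration; the graphs and helper names are not from the paper) runs color refinement on two non-isomorphic 3-regular graphs on six vertices, $K_{3,3}$ and the triangular prism, and confirms that their color histograms coincide:

```python
from collections import Counter

def wl_refine(adj, iterations=3):
    # Initial color: the node degree, as in Algorithm 1.
    colors = {v: len(adj[v]) for v in adj}
    for _ in range(iterations):
        # Recolor by the node's color and the sorted multi-set of
        # neighbour colors (hash stands in for the paper's hash function).
        colors = {v: hash((colors[v],
                           tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())  # graph-level color histogram

# K_{3,3}: bipartite and triangle-free.
k33 = {v: [u for u in range(6) if (u < 3) != (v < 3)] for v in range(6)}
# Triangular prism: triangles {0,1,2} and {3,4,5} joined by a matching.
prism = {0: [1, 2, 3], 1: [0, 2, 4], 2: [0, 1, 5],
         3: [4, 5, 0], 4: [3, 5, 1], 5: [3, 4, 2]}

# Both are 3-regular with 6 vertices: 1-WL colors every node identically,
# so the histograms match even though the graphs are not isomorphic.
assert wl_refine(k33) == wl_refine(prism)
```

Replacing the opaque hash with multi-set unions and explicit iteration numbers is precisely the change IGEL makes in Algorithm 2.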
## B Proof of Theorem 1.

In this appendix, we prove Theorem 1, showing that IGEL cannot distinguish certain pairs of SRGs with equal parameters $n$ (cardinality), $d$ (degree), $\beta$ (shared edges between adjacent nodes), and $\gamma$ (shared edges between non-adjacent nodes). Let $\{\{\cdot\}\}^{d}$ denote a repeated multi-set in which every item appears $d$ times, and let $e_G^{\alpha} = \{\{e_v^{\alpha} : \forall v \in V\}\}$ be short-hand notation for the IGEL encoding of $G$, defined as the sorted multi-set containing the IGEL encodings of all nodes in $G$.

Proof. Per Remark 2 and Remark 3, SRGs have a maximum diameter of two, and IGEL encodings are equal for all $\alpha \geq \operatorname{diam}(G)$. Thus, given $G = \operatorname{SRG}(n, d, \beta, \gamma)$, only $\alpha \in \{1, 2\}$ produce different encodings of $G$. It can be shown that $e_v^1$ can only distinguish different values of $n$, $d$, and $\beta$, and $e_v^2$ can only distinguish different values of $n$ and $d$:

- Let $\alpha = 1$: $\forall v \in V, \mathcal{E}_v^1 = (V', E')$ s.t. $V' = \mathcal{N}_G^1(v)$. Since $G$ is $d$-regular, $v$ is the center of $\mathcal{E}_v^1$ and has $d$ neighbours. By the definition of SRGs, each of the $d$ neighbours of $v$ shares $\beta$ neighbours with $v$, plus an edge with $v$. Thus, for any SRGs $G_1, G_2$ where $n_1 = n_2$, $d_1 = d_2$, and $\beta_1 = \beta_2$, we have $e_{G_1}^1 = e_{G_2}^1$, as expanding $e_v^1$ in Algorithm 2 yields:

$$
e_v^1 = \{\{(0, d)\}\} \cup \{\{(1, \beta + 1)\}\}^{d}
$$

- Let $\alpha = 2$: $\forall v \in V, \mathcal{E}_v^2 = G$, as $\forall u \in V, u \in \mathcal{N}_G^2(v)$ when $\operatorname{diam}(G) \leq 2$. $G$ is $d$-regular, so $\forall v \in V, d = d_{\mathcal{E}_v^2}(v) = d_G(v)$. Thus, for any SRGs $G_1, G_2$ s.t. $n_1 = n_2$ and $d_1 = d_2$, we have $e_{G_1}^2 = e_{G_2}^2$, containing $n$ equal $e_v^2$ encodings by expanding Algorithm 2:

$$
e_v^2 = \{\{(0, d)\}\} \cup \{\{(1, d)\}\}^{d} \cup \{\{(2, d)\}\}^{n - d - 1}
$$

Thus, IGEL cannot distinguish pairs of SRGs when $n$, $d$, and $\beta$ are the same, for any values of $\gamma$ (equal or different between the pair). With $\alpha = 1$, IGEL can only distinguish SRGs with different values of $n$, $d$, and $\beta$, while with $\alpha = 2$ it can only distinguish SRGs with different values of $n$ and $d$.

We note that it is straightforward to extend IGEL so that different values of $\gamma$ can be distinguished. We explore one possible extension in subsection B.2.

### B.1 Additional Remarks Used in the Proof of Theorem 1.

Remark 2. For any $G = \operatorname{SRG}(n, d, \beta, \gamma)$, $\operatorname{diam}(G) \leq 2$.

Note that by the definition of SRGs, $n$ affects cardinality while $d$ and $\beta$ control adjacent vertex connectivity at 1 hop. For $\gamma$, we have to consider two cases, $\gamma \geq 1$ and $\gamma = 0$:

- Let $\gamma \geq 1$: by definition, $\forall u, v \in V$ s.t. $(u, v) \notin E, \exists w \in V$ s.t. $(u, w) \in E \land (v, w) \in E$. Thus, $\forall (u, v) \in E, l_G(u, v) = 1$ and $\forall (u, v) \notin E, l_G(u, v) = 2$.

- Let $\gamma = 0$: $\forall u, v \in V$, if $(u, v) \notin E$ then $\nexists w \in V$ s.t. $(u, w) \in E \land (v, w) \in E$, as $w$ would be a common neighbour of $u$ and $v$. Then, $\forall u, v, w \in V$ s.t. $(u, v) \in E, (u, w) \in E \Leftrightarrow (v, w) \in E$; hence, only nodes and their neighbours can be in common. Thus, $\forall u, v \in V$ s.t. $u \neq v, l_G(u, v) = 1$.

Given both scenarios, we can conclude that for any $\gamma \in \mathbb{N}$, $\forall u, v \in V, l_G(u, v) \leq 2$, and thus $\operatorname{diam}(G) \leq 2$.

Remark 3. For any finite graph $G$, there is a finite range of $\alpha \in \mathbb{N}$ over which IGEL encodings differ across values of $\alpha$. For values of $\alpha$ at least as large as the diameter of the graph (that is, $\alpha \geq \operatorname{diam}(G)$), it holds that $e_v^{\alpha} = e_v^{\alpha + 1}$ as $\mathcal{E}_v^{\alpha} = \mathcal{E}_v^{\alpha + 1} = G$.

### B.2 Improving Expressivity on the $\gamma$ Parameter.

IGEL as presented is unable to distinguish between any values of $\gamma$ in SRGs. However, IGEL can be trivially extended to distinguish between pairs of SRGs that differ in $\gamma$, bringing it to parity with methods such as the EGO+ policy in ESAN, NGNNs, and GNN-AK.

Intuitively, IGEL cannot distinguish $\gamma$ because its $(\lambda, \delta)$ tuples cannot represent relationships between vertices at different distances (e.g. the $\gamma$ parameter). The structural feature definition may be extended to compute degrees between 'distance layers' in the sub-graphs, addressing this pitfall. This means modifying $e_v^i$ in Algorithm 2:

$$
e_v^i = e_v^{i - 1} \cup \left\{\left\{\rho(u, v) : \forall u \in \mathcal{N}_G^{\alpha}(v) \mid l_G(u, v) \in \{i, i + 1\}\right\}\right\}
$$

where:

$$
\rho(u, v) = \left(l_{\mathcal{E}_v^{\alpha}}(u, v), d_{\mathcal{E}_v^{\alpha}}^{0}(u, v), d_{\mathcal{E}_v^{\alpha}}^{1}(u, v)\right)
$$

and $d_G^p(u, v)$ generalizes $d_G(u)$ to count the edges of $u$ incident to vertices at relative distance $p$ from $v$ in $G = (V, E)$:

$$
d_G^p(u, v) = \left|\left\{(u, w) \in E : w \in V \text{ s.t. } l_G(v, w) = l_G(v, u) + p\right\}\right|.
$$

It can be shown that this definition of $e_v^i$ is strictly more powerful at distinguishing SRGs, by expanding Algorithm 2 with $\alpha = 2$:

$$
e_v^2 = \{\{(0, 0, d)\}\} \cup \{\{(1, \beta, \gamma)\}\}^{d} \cup \{\{(2, d - \gamma, 0)\}\}^{n - d - 1}
$$

Proof. For any $G = \operatorname{SRG}(n, d, \beta, \gamma)$, $\forall v \in V, l_{\mathcal{E}_v^2}(v, v) = 0$ and there are $d$ edges towards its neighbours; thus the root is encoded as $(0, 0, d)$. Each neighbour $u$ is at $l_{\mathcal{E}_v^2}(u, v) = 1$, with $\beta$ edges to other neighbours of $v$ and $\gamma$ edges to vertices not adjacent to $v$; thus $(1, \beta, \gamma)$, where $d = 1 + \beta + \gamma$. By definition, every vertex $w \in V$ s.t. $(v, w) \notin E$ has $\gamma$ neighbours shared with $v$, and $d$ neighbours overall. Per Remark 2, the maximum diameter of $G$ is two, hence $l_{\mathcal{E}_v^2}(v, w) = 2$ and, for any such $w$, the representation is $(2, d - \gamma, 0)$.
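As a sanity check of the expansion above (our own illustration, not an experiment from the paper), the Paley graph on 13 vertices is an SRG(13, 6, 2, 3); computing the extended triples, under the reading that $d^p(u, v)$ counts edges of $u$ towards vertices at distance $l_G(v, u) + p$ from $v$, recovers the closed form:

```python
from collections import Counter

# Paley graph on 13 vertices: an SRG(n=13, d=6, beta=2, gamma=3).
qr = {pow(x, 2, 13) for x in range(1, 13)}  # quadratic residues mod 13
adj = {v: [u for u in range(13) if (u - v) % 13 in qr] for v in range(13)}

def dist(u, v):
    # SRG diameters are at most 2 (Remark 2): distances are 0, 1, or 2.
    return 0 if u == v else (1 if (u - v) % 13 in qr else 2)

def extended_encoding(v):
    """Extended e_v^2: multi-set of (l(u,v), d^0(u,v), d^1(u,v)) triples."""
    triples = []
    for u in range(13):
        l = dist(v, u)
        d0 = sum(1 for w in adj[u] if dist(v, w) == l)      # same layer
        d1 = sum(1 for w in adj[u] if dist(v, w) == l + 1)  # next layer
        triples.append((l, d0, d1))
    return Counter(triples)

n, d, beta, gamma = 13, 6, 2, 3
expected = Counter({(0, 0, d): 1,
                    (1, beta, gamma): d,
                    (2, d - gamma, 0): n - d - 1})
assert all(extended_encoding(v) == expected for v in range(13))
```

The assertion matches the claimed expansion term by term: one root triple, $d$ neighbour triples, and $n - d - 1$ triples for non-adjacent vertices.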
## C Extended Results on Isomorphism Detection and Graphlet Counting.

In this section we summarize additional results on isomorphism detection and graphlet counting.

### C.1 Isomorphism Detection.

We provide a detailed breakdown of isomorphism detection performance after introducing IGEL in Table 3, complementing our summary in subsection 3.2.

- Graph8c. On the Graph8c dataset${}^{1}$, introducing IGEL significantly reduces the number of graph pairs erroneously identified as isomorphic for all MP-GNN models, as shown in Table 3. Furthermore, with IGEL, a linear baseline that applies a sum readout over input feature vectors and projects onto a 10-component space fails on only 1571 non-isomorphic pairs, fewer errors than GCNs (4196) or GATs (1827) make without IGEL. Additionally, we find that all Graph8c graphs can be distinguished if the IGEL encodings for $\alpha = 1$ and $\alpha = 2$ are concatenated. We do not explore the expressivity of combinations of $\alpha$ in this work, but hypothesize that concatenated encodings over several values of $\alpha$ may be more expressive.

Table 3: Graph isomorphism detection results. The +IGEL column denotes whether IGEL is used in the configuration. For Graph8c, we report the number of graph pairs erroneously detected as isomorphic. For EXP Classify, we show the accuracy of distinguishing non-isomorphic graphs in a binary classification task.

<table><tr><td>Model</td><td>+IGEL</td><td>Graph8c (#Errors)</td><td>EXP Classify (Accuracy)</td></tr><tr><td rowspan="2">Linear</td><td>No</td><td>6.242M</td><td>50%</td></tr><tr><td>Yes</td><td>1571</td><td>97.25%</td></tr><tr><td rowspan="2">MLP</td><td>No</td><td>293K</td><td>50%</td></tr><tr><td>Yes</td><td>1487</td><td>100%</td></tr><tr><td rowspan="2">GCN</td><td>No</td><td>4196</td><td>50%</td></tr><tr><td>Yes</td><td>5</td><td>100%</td></tr><tr><td rowspan="2">GAT</td><td>No</td><td>1827</td><td>50%</td></tr><tr><td>Yes</td><td>5</td><td>100%</td></tr><tr><td rowspan="2">GIN</td><td>No</td><td>571</td><td>50%</td></tr><tr><td>Yes</td><td>5</td><td>100%</td></tr><tr><td rowspan="2">Chebnet</td><td>No</td><td>44</td><td>50%</td></tr><tr><td>Yes</td><td>1</td><td>100%</td></tr><tr><td rowspan="2">GNNML3</td><td>No</td><td>0</td><td>100%</td></tr><tr><td>Yes</td><td>0</td><td>100%</td></tr></table>

- Empirical Results on Strongly Regular Graphs. We also evaluate IGEL on SR25${}^{2}$, which contains 15 Strongly Regular graphs with 25 vertices each, known to be indistinguishable by 3-WL. With SR25, we empirically validate Theorem 1. [26] showed that none of the models in our benchmark distinguish any of the 105 non-isomorphic graph pairs in SR25. As expected from Theorem 1, introducing IGEL does not improve distinguishability.

### C.2 Graphlet Counting.

We evaluate IGEL on a (regression) graphlet${}^{3}$ counting task, minimizing Mean Squared Error (MSE) on normalized graphlet counts${}^{4}$. Table 4 shows the results of introducing IGEL in 5 graphlet counting tasks on the RandomGraph data set [33]. Statistically significant differences $(p < 0.0001)$ are shown in bold green, with the best (lowest MSE) per-graphlet results underlined.

Introducing IGEL improves counting performance on triangles, tailed triangles, and the custom 1-WL graphlets proposed by [26]. Star graphlets can be identified by all baselines, and IGEL only produces statistically significant improvements for the Linear baseline.

Table 4: Graphlet counting results. Cells contain mean test-set MSE (lower is better), with statistically significant differences highlighted.

<table><tr><td>Model</td><td>+IGEL</td><td>Star</td><td>Triangle</td><td>Tailed Tri.</td><td>4-Cycle</td><td>Custom</td></tr><tr><td rowspan="2">Linear</td><td>No</td><td>1.60E-01</td><td>3.41E-01</td><td>2.82E-01</td><td>2.03E-01</td><td>5.11E-01</td></tr><tr><td>Yes</td><td>4.23E-03</td><td>4.38E-03</td><td>1.85E-02</td><td>1.36E-01</td><td>5.25E-02</td></tr><tr><td rowspan="2">MLP</td><td>No</td><td>2.66E-06</td><td>2.56E-01</td><td>1.60E-01</td><td>1.18E-01</td><td>4.54E-01</td></tr><tr><td>Yes</td><td>8.31E-05</td><td>5.69E-05</td><td>5.57E-05</td><td>7.64E-02</td><td>2.34E-04</td></tr><tr><td rowspan="2">GCN</td><td>No</td><td>4.72E-04</td><td>2.42E-01</td><td>1.35E-01</td><td>1.11E-01</td><td>1.54E-03</td></tr><tr><td>Yes</td><td>8.26E-04</td><td>1.25E-03</td><td>4.15E-03</td><td>7.32E-02</td><td>1.17E-03</td></tr><tr><td rowspan="2">GAT</td><td>No</td><td>4.15E-04</td><td>2.35E-01</td><td>1.28E-01</td><td>1.11E-01</td><td>2.85E-03</td></tr><tr><td>Yes</td><td>4.52E-04</td><td>6.22E-04</td><td>7.77E-04</td><td>7.33E-02</td><td>6.66E-04</td></tr><tr><td rowspan="2">GIN</td><td>No</td><td>3.17E-04</td><td>2.26E-01</td><td>1.22E-01</td><td>1.11E-01</td><td>2.69E-03</td></tr><tr><td>Yes</td><td>6.09E-04</td><td>1.03E-03</td><td>2.72E-03</td><td>6.98E-02</td><td>2.18E-03</td></tr><tr><td rowspan="2">Chebnet</td><td>No</td><td>5.79E-04</td><td>1.71E-01</td><td>1.12E-01</td><td>8.95E-02</td><td>2.06E-03</td></tr><tr><td>Yes</td><td>3.81E-03</td><td>7.88E-04</td><td>2.10E-03</td><td>7.90E-02</td><td>2.05E-03</td></tr><tr><td rowspan="2">GNNML3</td><td>No</td><td>8.90E-05</td><td>2.36E-04</td><td>2.91E-04</td><td>6.82E-04</td><td>9.86E-04</td></tr><tr><td>Yes</td><td>9.29E-04</td><td>2.19E-04</td><td>4.23E-04</td><td>6.98E-04</td><td>4.17E-04</td></tr></table>

Notably, the Linear baseline plus IGEL outperforms MP-GNNs without IGEL on star, triangle, tailed triangle, and custom 1-WL graphlets. With IGEL, the MLP baseline outperforms all other models, including GNNML3, on the triangle, tailed-triangle, and custom 1-WL graphlets.

Since the Linear and MLP baselines do not use message passing, we believe raw IGEL encodings may be sufficient to identify certain graph structures even with simple linear models. For all graphlets except 4-cycles, introducing IGEL yields performance similar to GNNML3 at lower pre-processing and model training/inference costs, as IGEL obviates the need for costly eigen-decomposition and can be used in simple models that only perform graph-level readouts without message passing.

---

${}^{1}$ Simple 8-vertex graphs from: http://users.cecs.anu.edu.au/~bdm/data/graphs.html

${}^{2}$ SRG(25, 12, 5, 6) graphs from: http://users.cecs.anu.edu.au/~bdm/data/graphs.html

${}^{3}$ 3-stars, triangles, tailed triangles and 4-cycles, plus a custom 1-WL graphlet proposed in [26].

${}^{4}$ Counts are stddev-normalized so that MSE values are comparable across graphlet types, following [26].

---

papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_tex/Initial_manuscript.tex
§ BEYOND 1-WL WITH LOCAL EGO-NETWORK ENCODINGS

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

§ ABSTRACT

Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit structural information encoded in graph data. This work shows that ego-networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler-Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego-networks into sparse vectors that enrich Message Passing (MP) Graph Neural Networks (GNNs) beyond 1-WL expressivity. We formally describe the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on seven GNN architectures.

§ 1 INTRODUCTION

Novel approaches for learning on graphs have appeared in recent years within the machine learning community [1]. Notably, the introduction of Graph Convolutional Networks [2, 3] led to a broad body of research aiming to efficiently capture network interactions, leveraging spectral information [4], scaling beyond seen nodes [5], or generalizing attention to Graph Neural Networks (GNNs) [6]. Underlying this family of GNN models is the Message Passing (MP) mechanism [7]. In MP-GNNs, a node is represented by iteratively aggregating feature 'messages' from its neighbours based on edge connectivity, with successful applications in several domains [7-11]. However, it has recently been shown that message passing limits the representational power of GNNs, which are bound by the Weisfeiler-Lehman (1-WL) test [12]. As such, MP-GNNs cannot reach the expressivity of $k$-dimensional WL generalizations [13, 14] ($k$-WL) or analogous MATLANG [15, 16] languages. Casting MP-GNN representations in these terms [17] has driven recent research towards expressivity.

To improve expressivity, recent approaches extend message passing by leveraging topological information from cell complexes [18], extending the message-passing mechanism with sub-graph information [19-22], propagating messages through $k$ network hops [23], introducing relative positioning information for network vertices [24], or using higher-order $k$-vertex tuples to reach $k$-WL expressivity [14]. In another direction, methods such as Provably Powerful Graph Networks (PPGN) [25] are guaranteed to be as expressive as the 3-WL test at cubic time and quadratic memory costs. More recently, GNNML3 [26] introduced a network architecture with memory and time costs equal to MP-GNNs, but empirically capable of 3-WL expressivity, introducing spectral information through a preprocessing step with cubic worst-case time complexity.

The aforementioned approaches improve expressivity by extending MP-GNN architectures, often evaluating on standardized benchmarks [27-29]. However, identifying the optimal approach on novel domains remains unclear and requires costly architecture search. In this work, we present IGEL, an Inductive Graph Encoding of Local information allowing MP-GNN and Deep Neural Network (DNN) models to go beyond 1-WL expressivity without modifying model architectures. IGEL is closely related to the Weisfeiler-Lehman isomorphism test, and produces inductive representations of vertex structures that can be introduced into MP-GNN models. IGEL reframes capturing 1-WL information, irrespective of model architecture, as a pre-processing step that simply extends node attributes.

§ 2 IGEL: EGO-NETWORKS AS SPARSE INDUCTIVE REPRESENTATIONS.

Given a graph $G = (V, E)$, we define $n = |V|$ and $m = |E|$; $d_G(v)$ is the degree of a node $v$ in $G$, and $d_{\max}$ is the maximum degree. For $u, v \in V$, $l_G(u, v)$ is their shortest distance, and $\operatorname{diam}(G) = \max(l_G(u, v) \mid u, v \in V)$ is the diameter of $G$. $\mathcal{N}_G^{\alpha}(v)$ is the set of neighbours of $v$ in $G$ up to distance $\alpha$ (Equation 1), and $\mathcal{E}_v^{\alpha}$ is the $\alpha$-depth ego-network centered on $v$ (Equation 2):

$$
\mathcal{N}_G^{\alpha}(v) = \left\{u \mid u \in V \land l_G(u, v) \leq \alpha\right\}, \tag{1}
$$

Let $\{\{\cdot\}\}$ denote a lexicographically-ordered multi-set. Algorithm 1 shows the 1-WL test, where hash maps a multi-set to an equivalence class shared by all nodes with matching multi-set encodings after a 1-WL iteration. The output of 1-WL is a coloring in $\mathbb{N}^n$, mapping each node to a color, bounded by $n$ distinct colors if each node is uniquely colored. Higher-order variants of the WL test (denoted $k$-WL) operate on $k$-tuples of vertices, such that colors are assigned to $k$-vertex tuples. If two graphs $G_1, G_2$ are not distinguishable by the $k$-WL test (that is, their coloring histograms match), they are $k$-WL equivalent, denoted $G_1 \equiv_{k\text{-WL}} G_2$. Due to the hashing step, 1-WL does not preserve distance information in the encoding, and perturbations (e.g. a different color in a neighbour) produce different node-level representations. IGEL addresses both limitations, improving expressivity in the process.

§ 2.1 THE IGEL ALGORITHM

Intuitively, IGEL encodes a vertex $v$ with the multi-set of ordered degree sequences at each distance up to $\alpha$, applying 1-WL-style iterations with two modifications. First, the hashing step is removed and replaced by computing the union of multi-sets across steps $(\cup)$; second, the iteration number is explicitly introduced in the representation, with the output multi-set $e_v^{\alpha}$ shown in Algorithm 2.

In order to be used as vertex features, the multi-set can be represented as a sparse vector $\operatorname{IGEL}_{\text{vec}}^{\alpha}(v)$, where the $i$-th index contains the frequency of the path-length and degree pair $(\lambda, \delta)$ mapped to $i$. Degrees greater than $d_{\max}$ are capped to $d_{\max}$, and vector indices are output by a bijective function $f : (\mathbb{N}, \mathbb{N}) \mapsto \mathbb{N}$, as shown in Figure 1:

$\operatorname{IGEL}_{\text{vec}}^{\alpha}(v)_i = \left|\left\{\left\{(\lambda, \delta) \in e_v^{\alpha} \text{ s.t. } f(\lambda, \delta) = i\right\}\right\}\right|.$
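As a small sketch (our own; the pairing $f(\lambda, \delta) = \lambda \cdot (d_{\max} + 1) + \delta$ is just one possible bijective choice, not necessarily the paper's), the frequencies from Figure 1 can be packed into a sparse vector as follows:

```python
def igel_vec(freqs, d_max, alpha):
    """Sparse IGEL vector: index f(l, d) = l * (d_max + 1) + min(d, d_max),
    value = frequency of the (path length, degree) pair."""
    vec = [0] * ((alpha + 1) * (d_max + 1))
    for (l, d), count in freqs.items():
        vec[l * (d_max + 1) + min(d, d_max)] = count
    return vec

# Frequencies of (lambda, delta) tuples from Figure 1.
freqs = {(0, 2): 1, (1, 2): 1, (1, 4): 1, (2, 3): 2, (2, 4): 1}
vec = igel_vec(freqs, d_max=4, alpha=2)
assert sum(vec) == 6 and vec[0 * 5 + 2] == 1 and vec[2 * 5 + 3] == 2
```

Capping degrees at $d_{\max}$ keeps the vector length fixed at $(\alpha + 1)(d_{\max} + 1)$ regardless of the input graph.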

$G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ are IGEL-equivalent for $\alpha$ if the sorted multi-set containing node representations is the same for $G_1$ and $G_2$:

$$
G_1 \equiv_{\text{IGEL}}^{\alpha} G_2 \Leftrightarrow \left\{\left\{e_{v_1}^{\alpha} : \forall v_1 \in V_1\right\}\right\} = \left\{\left\{e_{v_2}^{\alpha} : \forall v_2 \in V_2\right\}\right\}. \tag{3}
$$

$$
\mathcal{E}_v^{\alpha} = (V', E') \subseteq G \text{, s.t. } u \in \mathcal{N}_G^{\alpha}(v) \Leftrightarrow u \in V', (u, v) \in E' \subseteq E \Leftrightarrow u, v \in V'. \tag{2}
$$


As such, IGEL can be seen as a variant of the 1-WL algorithm shown in Algorithm 1, executed for $\alpha$ iterations within $\mathcal{E}_v^{\alpha}$.

Figure 1: IGEL encoding of the green vertex. The dashed region denotes $\mathcal{E}_v^{\alpha}$ $(\alpha = 2)$. The green vertex is at distance 0, blue vertices at 1, and red vertices at 2. Labels show degrees in $\mathcal{E}_v^{\alpha}$. The frequency of $(\lambda, \delta)$ tuples forming $\operatorname{IGEL}_{\text{vec}}^{\alpha}(v)$ is: $\{(0, 2) : 1, (1, 2) : 1, (1, 4) : 1, (2, 3) : 2, (2, 4) : 1\}$.

Algorithm 1 1-WL (Color refinement).

Input: $G = (V, E)$

1: $c_v^0 \mathrel{\text{:=}} \operatorname{hash}(\{\{d_G(v)\}\})\ \forall v \in V$

2: do

3: $\quad c_v^{i + 1} \mathrel{\text{:=}} \operatorname{hash}\left(\left\{\left\{c_u^i : \mathop{\forall}\limits_{u \neq v} u \in \mathcal{N}_G^1(v)\right\}\right\}\right)$

4: while $c_v^i \neq c_v^{i - 1}$

Output: $c_v^i : V \mapsto \mathbb{N}$

Algorithm 2 IGEL Encoding.

Input: $G = (V, E),\ \alpha : \mathbb{N}$

1: $e_v^0 \mathrel{\text{:=}} \{\{(0, d_G(v))\}\}\ \forall v \in V$

2: for $i \mathrel{\text{:=}} 1$ to $\alpha$ do

3: $\quad e_v^i \mathrel{\text{:=}} e_v^{i - 1} \cup \left\{\left\{(i, d_{\mathcal{E}_v^{\alpha}}(u)) : \forall u \in \mathcal{N}_G^{\alpha}(v) \mid l_G(u, v) = i\right\}\right\}$

4: end for

Output: $e_v^{\alpha} : V \mapsto \{\{(\mathbb{N}, \mathbb{N})\}\}$
Space complexity. IGEL’s worst case space complexity is $\mathcal{O}\left( {\alpha \cdot n \cdot {d}_{\max }}\right)$ , conservatively assuming that every node will require ${d}_{\max }$ parameters at every $\alpha$ depth from the center of the ego-network.
Time complexity. For IGEL, each vertex has ${d}_{\max }$ neighbours where the $\alpha$ iterations imply traversing through geometrically larger ego-networks with ${\left( {d}_{\max }\right) }^{\alpha }$ vertices, upper bounded by $m$ . Thus IGEL’s time complexity follows $\mathcal{O}\left( {n \cdot \min \left( {m,{\left( {d}_{\max }\right) }^{\alpha }}\right) }\right)$ , with $\mathcal{O}\left( {n \cdot m}\right)$ when $\alpha \geq \operatorname{diam}\left( G\right)$ .
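For concreteness, a minimal Python sketch of Algorithm 2 (the graph is assumed given as an adjacency dict; `igel_encoding` is an illustrative name, not the authors' implementation):

```python
from collections import Counter, deque

def igel_encoding(adj, v, alpha):
    """IGEL encoding of vertex v: multiset of (distance, ego-degree) pairs.

    adj: dict mapping each vertex to a set of neighbours.
    """
    # BFS from v, truncated at depth alpha, gives l(u, v) within E_v^alpha.
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == alpha:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    # Degree of u restricted to the ego-network E_v^alpha.
    ego_deg = lambda u: sum(1 for w in adj[u] if w in dist)
    return Counter((dist[u], ego_deg(u)) for u in dist)
```

On the 6-cycle versus two disjoint triangles, both 2-regular and hence 1-WL equivalent, the encodings with $\alpha = 1$ already differ: $\{(0,2){:}1, (1,1){:}2\}$ per vertex on the cycle versus $\{(0,2){:}1, (1,2){:}2\}$ per vertex on a triangle.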
§ 3 THEORETICAL AND EXPERIMENTAL FINDINGS
First, we analyze IGEL's expressive power with respect to 1-WL and recent improvements. Second, we measure the impact of IGEL as an additional input to enrich existing MP-GNN architectures.
§ 3.1 EXPRESSIVITY: WHICH GRAPHS ARE IGEL-DISTINGUISHABLE?
In this section, we discuss the increased expressivity of IGEL with respect to 1-WL, and identify expressivity upper-bounds for graphs that are indistinguishable under MATLANG and the 3-WL test.
* Relationship to 1-WL. IGEL is capable of distinguishing graphs that are indistinguishable by the 1-WL test, e.g., $d$-regular graphs. A graph is $d$-regular if all nodes have degree $d$. $d$-regular graphs with equal cardinality are indistinguishable by 1-WL. Specifically, for any pair of $d$-regular graphs ${G}_{1}$ and ${G}_{2}$ such that $\left| {V}_{1}\right| = \left| {V}_{2}\right|$, ${G}_{1}{ \equiv }_{1\text{-WL}}{G}_{2}$ (see Appendix A for details).
Figure 2: IGEL encodings for two cospectral 4-regular graphs from [30]. IGEL distinguishes 4 kinds of structures within the graphs (associated with every node as a, b, c, and d). The two graphs can be distinguished since the encoded structures and their frequencies do not match.
However, there exist $d$-regular graphs that can be distinguished by IGEL, as shown in Figure 2. Since each graph is $d$-regular, tracing Algorithm 1 shows that the 1-WL test assigns the same color to all nodes and stabilizes after one iteration. In contrast, IGEL with $\alpha = 1$ identifies 4 kinds of structures with different frequencies between the graphs, and is thus able to distinguish them.
* Expressivity upper bounds. We identify an upper expressivity bound for IGEL: the method fails to distinguish, e.g., Strongly Regular Graphs (Definition 1) with equal parameters (Theorem 1; see Appendix B for details). Definition 1. An $n$-vertex $d$-regular graph is strongly regular, denoted $\operatorname{SRG}\left( {n,d,\beta ,\gamma }\right)$, if adjacent vertices have $\beta$ neighbours in common, and non-adjacent vertices have $\gamma$ neighbours in common.
Theorem 1. IGEL cannot distinguish SRGs when $n$, $d$, and $\beta$ coincide, for any values of $\gamma$ (equal or otherwise). IGEL with $\alpha = 1$ can only distinguish SRGs with different values of $n$, $d$, or $\beta$, while IGEL with $\alpha = 2$ can only distinguish SRGs with different values of $n$ or $d$.
Our findings show that IGEL is a powerful representation, capable of distinguishing 1-WL equivalent graphs such as those in Figure 2, which, as cospectral graphs, are known to be expressible only in strictly more powerful MATLANG sub-languages than 1-WL [16]. Additionally, the upper bound on Strongly Regular Graphs is a hard ceiling on expressivity, since SRGs are known to be indistinguishable by 3-WL [31]. IGEL shares the experimental expressivity upper bound of recent methods such as GNNML3 [26]. Furthermore, IGEL can provably reach comparable expressivity on SRGs with respect to sub-graph methods implemented within MP-GNN architectures (see Appendix B, subsection B.2), such as Nested GNNs [21] and GNN-AK [22], which are known to be no less powerful than 3-WL, and the ESAN framework when leveraging ego-networks with root-node flags as a subgraph sampling policy (EGO+) [19], which is as powerful as 3-WL.
§ 3.2 EXPERIMENTAL EVALUATION
We evaluate ${\operatorname{IGEL}}_{\text{ vec }}^{\alpha }\left( v\right)$ as a method of producing architecture-agnostic vertex features on five tasks: graph classification, isomorphism detection, graphlet counting, link prediction, and node classification.
Experimental Setup. We reproduce results from [26], introducing IGEL as features on graph classification, isomorphism detection, and graphlet counting, comparing performance with and without IGEL on six GNN architectures. We also evaluate IGEL on link prediction against transductive baselines, and on node classification as an additional feature used in MLPs without message passing.
Notation. In the tables below, '>' ('<') denotes a significant (as per paired t-tests) positive (negative) difference after introducing IGEL, and '∼' an insignificant difference; the best results per task / dataset are underlined in the original formatting.
Table 1: Per-model graph classification accuracy metrics on TU data sets. Each cell shows the average accuracy of the model and data set in that row and column, with IGEL (left) and without IGEL (right).
| Model | Enzymes | Mutag | Proteins | PTC |
| --- | --- | --- | --- | --- |
| MLP | 41.10 > 26.18° | 87.61 > 84.61° | 75.43 ∼ 75.01 | 64.59 > 62.79° |
| GCN | 54.48 > 48.60° | 89.61 > 85.42° | 75.67 > 74.50° | 65.76 ∼ 65.21 |
| GAT | 54.88 ∼ 54.95 | 90.00 > 86.14° | 73.44 > 70.51° | 66.29 ∼ 66.29 |
| GIN | 54.77 > 53.44* | 89.56 ∼ 88.33 | 73.32 > 72.05° | 61.44 ∼ 60.21 |
| Chebnet | 61.88 ∼ 62.23 | 91.44 > 88.33° | 74.30 > 66.94° | 64.79 ∼ 63.87 |
| GNNML3 | 61.42 < 62.79° | 92.50 > 91.47* | 75.54 > 62.32° | 64.26 < 66.10° |

\* $p < 0.01$, ° $p < 0.0001$.
Table 2: Mean $\pm$ stddev of the best IGEL-augmented graph classification model and reported results on $k$ -hop, GSN, and ESAN from $\left\lbrack {{19},{20},{23}}\right\rbrack$ . Best performing baselines underlined.
| Model | Mutag | Proteins | PTC |
| --- | --- | --- | --- |
| IGEL (best) | 92.5 ± 1.2 | 75.7 ± 0.3 | 66.3 ± 1.3 |
| $k$-hop [23]† | 87.9 ± 1.2◇ | 75.3 ± 0.4 | – |
| GSN [20]† | 92.2 ± 7.5 | 76.6 ± 5.0 | 68.2 ± 7.2 |
| ESAN [19]† | 91.1 ± 7.0 | 76.7 ± 4.1 | 69.2 ± 6.5 |

†: Results as reported by [19, 20, 23].
— Graph Classification. Table 1 shows graph classification results on the TU molecule data sets [28]. We evaluate differences in mean accuracy between 10 runs with (left) / without (right) IGEL. We do not tune network hyper-parameters, and establish statistical significance through paired t-tests with $p < 0.01$ (*) and $p < 0.0001$ (°). Our results show that introducing IGEL improves the performance of all MP-GNN models on the Mutag and Proteins data sets. On the Enzymes and PTC data sets, results are mixed: for all models other than GNNML3, IGEL either significantly improves accuracy (for MLP, GCN, and GIN on Enzymes), or does not have a negative impact on performance.
In Table 2, we compare the best IGEL results from Table 1 with reported results for expressive baselines: $k$ -hop GNNs [23], GSNs [20], and ESAN [19]. All results are comparable to IGEL except Mutag, where IGEL significantly outperforms $k$ -hop with $p < {0.0001}$ . When comparing IGEL and best performing baselines for every data set, no differences are statistically significant $\left( {p > {0.01}}\right)$ .
* Isomorphism Detection & Graphlet Counting. Adding IGEL to the six models in Table 1 on the EXP [32] graph isomorphism task produces significant improvements: with IGEL, all GNN models distinguish all non-isomorphic yet 1-WL equivalent EXP graph pairs, vs. 50% accuracy without IGEL (i.e. random guessing). Likewise, IGEL significantly improves GNN performance on the RandomGraph data set [33] when counting triangles, tailed triangles, and the custom 1-WL graphlets proposed by [26] (see detailed results in Appendix C).
* Link Prediction & Node Classification. We test IGEL on edge- and node-level tasks to assess its use as a baseline in non-GNN settings. On a transductive link prediction task, we train DeepWalk [34] style embeddings of IGEL encodings rather than node identities on the Facebook and CA-AstroPh graphs [35]. Modelling link prediction as an edge-level binary classification task, IGEL-derived embeddings outperform transductive baselines, measuring 0.976 vs. 0.968 (Facebook) and 0.984 vs. 0.937 (CA-AstroPh) AUC for IGEL vs. node2vec [36]. On multi-label node classification on PPI [5], we train an MLP (i.e. no message passing) with node features and IGEL encodings. Our MLP shows better micro-F1 (0.850) when $\alpha = 1$ than MP-GNN architectures such as GraphSAGE (0.768, as reported in [6]), but underperforms compared to a 3-layer GAT (0.973 micro-F1 from [6]).
* Experimental Summary. Introducing IGEL yields performance comparable to state-of-the-art methods without architectural modifications, including strong baseline models focused on WL expressivity such as GNNML3, $k$-hop, GSN, or ESAN. Furthermore, IGEL achieves this at a lower computational cost than, for instance, GNNML3, which requires an $\mathcal{O}\left( {n}^{3}\right)$ eigen-decomposition step to introduce spectral channels. Finally, IGEL can also be used in transductive settings (link prediction) as well as node-level tasks (node classification), outperforming strong transductive baselines and enhancing models without message passing, such as MLPs. As such, we believe IGEL is an attractive baseline with a clear relationship to the 1-WL test that can be used to improve MP-GNN expressivity without the need for costly architecture search.
§ 4 CONCLUSIONS
We presented IGEL, a novel vertex representation algorithm on unattributed graphs that allows MP-GNN architectures to go beyond 1-WL expressivity. We showed that IGEL is related to, and more expressive than, the 1-WL test, and formally proved an expressivity upper bound on certain families of Strongly Regular Graphs. Finally, our experimental results indicate that introducing IGEL in existing MP-GNN architectures yields performance comparable to state-of-the-art methods, without architectural modifications and at lower computational cost than other approaches.
papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_md/Initial_manuscript.md
# Continuous Neural Algorithmic Planners
Anonymous Author(s)

Anonymous Affiliation

Anonymous Email
## Abstract
Neural algorithmic reasoning studies the problem of learning classical algorithms with neural networks, especially with a focus on graph architectures. A recent proposal, XLVIN, reaps the benefits of using a graph neural network that simulates the value iteration algorithm in deep reinforcement learning agents. It allows model-free planning without access to privileged information about the environment, which is usually unavailable. However, XLVIN only supports discrete action spaces, and hence does not apply trivially to most tasks of real-world interest. We expand XLVIN to continuous action spaces by discretization, and evaluate several selective expansion policies to deal with the large planning graphs that result. Our proposal, CNAP, demonstrates how neural algorithmic reasoning can make a measurable impact in higher-dimensional continuous control settings, such as MuJoCo, bringing gains in low-data settings and outperforming model-free baselines.
## 1 Introduction
Neural networks are capable of learning directly from high-dimensional unstructured input data, tackling the input constraints that limit classical algorithms from solving more complex problems. However, neural networks often require large amounts of data to train and suffer from poor generalization and interpretability. On the other hand, algorithms intrinsically generalize and provide mathematical provability with performance guarantees. The complementary relationship motivates the topic of neural algorithmic reasoning to study the problem of learning classical algorithms with neural networks [1].
Recent works focus on utilizing Graph Neural Networks (GNNs) [2-4] for algorithmic reasoning tasks due to the close algorithmic alignment that was proven to bring better sample efficiency and generalization ability $\left\lbrack {5,6}\right\rbrack$ . Besides shortest-path and spanning-tree algorithms, there have been a number of successful applications by aligning GNNs with classical algorithms, covering a range of problems such as bipartite matching [7], min-cut problem [8], and Travelling Salesman Problem [9].
We look at the application of using a GNN that simulates the value iteration algorithm [10] in deep reinforcement learning agents. Value iteration [11] is a dynamic programming algorithm that guarantees to solve a reinforcement learning problem but is traditionally inhibited by its requirement of tabulated inputs. Earlier works [12-16] introduced value iteration as an inductive bias to facilitate the agents to perform implicit planning, without the need of explicitly invoking a planning algorithm, but were found to suffer from an algorithmic bottleneck [17]. Conversely, eXecuted Latent Value Iteration Net (XLVIN) [17] was proposed to leverage a value-iteration-behaving GNN [10] by adopting the neural algorithmic framework [1]. XLVIN is able to learn under a low-data regime, tackling the algorithmic bottleneck suffered by other implicit planners.
One particular difficulty of implicit planners is handling a continuous action space. XLVIN uses a transition model to build a planning graph, over which the pre-trained GNN can execute value iteration in a latent space. So far, it only applies to environments with small and discrete action spaces. The limitation is that the construction of the planning graph requires an enumeration of all possible actions - starting from the current state and expanding for a number of hops equal to the planning horizon. The graph size quickly explodes as the dimensionality of the action space increases. Moreover, a continuous action space results in an infinite pool of action choices, making the construction of a planning graph infeasible.
Nevertheless, continuous control is of significant importance, as most simulation or robotics control tasks [18] have continuous action spaces by design. High complexity also naturally arises as the problem moves towards more powerful real-world domains. To extend such an agent powered by neural algorithmic reasoning to complex continuous control problems, we propose Continuous Neural Algorithmic Planner (CNAP). It generalizes XLVIN to continuous action spaces by discretizing them through binning. Moreover, CNAP handles the large planning graph by following a sampling policy that carefully selects actions during the neighbor expansion stage. Choosing which actions to sample is critical as the graph built determines where the GNN would simulate value iteration computation, and ultimately influences the planning performance.
In addition, the discreteness of the graph neural network simulating the value iteration update rule contrasts with the continuous action space, corresponding to continuous edges between states. CNAP also presents a novel setup for neural algorithmic reasoning, where the downstream task does not fully align with the algorithm studied. This opens a new path for the direction, going beyond the current standard of precise application of learned classical graph algorithms.
We confirm the feasibility of CNAP on a continuous relaxation of a classical low-dimensional control task, where we can still fully expand all of the binned actions after discretization. Then, we apply CNAP to general MuJoCo [19] environments with complex continuous dynamics, where expanding the planning graph by taking all actions is impossible. By expanding the application scope from simple discrete control to complex continuous control, we show that such an intelligent agent with algorithmic reasoning power can be applied to tasks with more real-world interests.
## 2 Background
### 2.1 Markov Decision Process (MDP)
A reinforcement learning problem can be formally described using the MDP framework. At each time step $t \in \{ 0,1,\ldots , T\}$, the agent performs an action ${a}_{t} \in \mathcal{A}$ given the current state ${s}_{t} \in \mathcal{S}$. This spawns a transition into a new state ${s}_{t + 1} \in \mathcal{S}$ according to the transition probability $p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$, and produces a reward ${r}_{t} = r\left( {{s}_{t},{a}_{t}}\right)$. A policy $\pi \left( {{a}_{t} \mid {s}_{t}}\right)$ guides an agent by specifying the probability of choosing an action ${a}_{t}$ given a state ${s}_{t}$. The trajectory $\tau$ is the sequence of states and actions the agent takes, $\left( {{s}_{0},{a}_{0},\ldots ,{s}_{T},{a}_{T}}\right)$. We define the infinite horizon discounted return as $R\left( \tau \right) = \mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t}$, where $\gamma \in \left\lbrack {0,1}\right\rbrack$ is the discount factor. The goal of an agent is to maximize the overall return by finding the optimal policy ${\pi }^{ * } = {\operatorname{argmax}}_{\pi }{\mathbb{E}}_{\tau \sim \pi }\left\lbrack {R\left( \tau \right) }\right\rbrack$. We can measure the desirability of a state $s$ using the state-value function ${V}^{ * }\left( s\right) = {\mathbb{E}}_{\tau \sim {\pi }^{ * }}\left\lbrack {R\left( \tau \right) \mid {s}_{t} = s}\right\rbrack$.
### 2.2 Value Iteration
Value iteration is a dynamic programming algorithm that computes the optimal policy and its value function given a tabulated MDP that perfectly describes the environment. It randomly initializes ${V}^{ * }\left( s\right)$ and iteratively updates the value function of each state $s$ using the Bellman optimality equation [11]:
$$
{V}_{i + 1}^{ * }\left( s\right) = \mathop{\max }\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s, a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s, a}\right) {V}_{i}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{1}
$$
and we can extract the optimal policy using:
$$
{\pi }^{ * }\left( s\right) = \mathop{\operatorname{argmax}}\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s, a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s, a}\right) {V}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{2}
$$
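The two updates above can be sketched on a toy tabulated MDP (array layout and names are illustrative, not from the paper):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a, s, s2]: transition probabilities; R[s, a]: rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality update (Eq. 1): Q[a, s] = r(s, a) + gamma * E[V(s')].
        Q = R.T + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            # Greedy policy extraction (Eq. 2).
            return V_new, Q.argmax(axis=0)
        V = V_new
```

On a two-state MDP where action 1 in state 0 yields reward 1 and leads to an absorbing zero-reward state, this converges to $V^* = [1, 0]$ with the greedy policy choosing action 1 in state 0.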
### 2.3 Message-Passing GNN
Graph Neural Networks (GNNs) generalize traditional deep learning techniques onto graph-structured data [20][21]. A message-passing GNN [3] iteratively updates its node feature ${\overrightarrow{h}}_{s}$ by aggregating messages from its neighboring nodes. At each timestep $t$ , a message can be computed between each connected pair of nodes via a message function $M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right)$ , where ${\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}$ is the edge feature. A node receives messages from all its connected neighbors $\mathcal{N}\left( s\right)$ and aggregates them via a permutation-invariant operator $\oplus$ that produces the same output regardless of the spatial permutation of the inputs. The aggregated message ${\overrightarrow{m}}_{s}^{t}$ of a node $s$ can be formulated as:
$$
{\overrightarrow{m}}_{s}^{t} = {\bigoplus }_{{s}^{\prime } \in \mathcal{N}\left( s\right) }M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right) \tag{3}
$$
The node feature ${\overrightarrow{h}}_{s}^{t}$ is then transformed via an update function $U$ :
$$
{\overrightarrow{h}}_{s}^{t + 1} = U\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{m}}_{s}^{t}}\right) \tag{4}
$$
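A minimal NumPy sketch of one such step, with sum aggregation and linear message/update functions (the function names and the choice of tanh are illustrative assumptions):

```python
import numpy as np

def mp_step(H, edges, W_msg, W_upd):
    """One message-passing step. H: (n, k) node features; edges: (src, dst) pairs."""
    M = np.zeros_like(H)
    for s_src, s_dst in edges:
        # Message from neighbour s' to s (Eq. 3); here a linear function of h_{s'}.
        M[s_dst] += H[s_src] @ W_msg
    # Update every node from its own feature and its aggregated message (Eq. 4).
    return np.tanh(H @ W_upd + M)
```

The `+=` over incoming edges is the permutation-invariant aggregator $\oplus$; replacing it with a max or mean yields other common variants.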
### 2.4 Neural Algorithmic Reasoning
A dynamic programming (DP) algorithm breaks down a problem into smaller sub-problems, and recursively computes the optimal solutions. A DP algorithm has the general form:
$$
\text{Answer}\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack = \text{DP-Update}\left( {\{ \text{Answer}\left\lbrack {k - 1}\right\rbrack \left\lbrack j\right\rbrack \} , j = 1\ldots n}\right) \tag{5}
$$
The alignment between GNNs and DP can be seen by mapping the node representation ${\overrightarrow{h}}_{s}$ to Answer$\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack$, and the aggregation step of the GNN to DP-Update.
An algorithmic alignment framework was proposed by [5], where they proved that GNNs could simulate dynamic programming algorithms efficiently with good sample complexity. Furthermore, [6] showed that imitating the individual steps and intermediate outputs of graph algorithms using GNNs can generalize well into out-of-distribution data.
## 3 Related Work
### 3.1 Continuous action space
A common technique for dealing with continuous control problems is to discretize the action space, converting them into discrete control problems. However, discretization leads to an explosion in action space size. [22] proposed to use a policy with a factorized distribution across action dimensions, and proved it effective on high-dimensional complex tasks with on-policy optimization algorithms. Moreover, we can sample a subset of actions during node expansion when constructing a planning graph. Sampled MuZero [23] extended MuZero [24] with a sample-based policy based on parameter reuse for policy iteration algorithms. Instead, our work constructs a graph for a neural algorithmic reasoner to execute the value iteration algorithm, where the sampled actions directly participate in the Bellman optimality equation (1).
### 3.2 Large-scale graphs
Sampling modules [25] are introduced into GNN architectures to deal with large-scale graphs as a result of neighbor explosion from stacking multiple layers. The unrolling process to construct a planning graph requires node-level sampling. Previous work GraphSAGE [26] introduces a fixed size of node expansion procedure into GCN [2]. This is followed by PinSage [27], which uses a random-walk-based GCN to perform importance-based sampling. However, our work looks at sampling under an implicit planning context, where the importance of each node in sampling is more difficult to understand due to the lack of an exact description of the environment dynamics. Furthermore, sampling in a multi-dimensional action space also requires more careful thinking in the decision-making process.
## 4 Architecture
Our architecture uses XLVIN as a starting point, which we introduce first. This is followed by a discussion of the challenges that arise from extending neural algorithmic implicit planners to the continuous action space and the approaches we proposed to address them.
### 4.1 XLVIN modules
Given the observation space $\mathbf{S}$ and the action space $\mathcal{A}$ , we let the dimension of state embeddings in the latent space be $k$ . The XLVIN architecture can be broken down into four modules:
Figure 1: XLVIN modules
Encoder $\left( {z : S \rightarrow {\mathbb{R}}^{k}}\right)$ : A 3-layer MLP which encodes the raw observation from the environment $s \in \mathbf{S}$ into a state embedding ${\overrightarrow{h}}_{s} = z\left( s\right)$ in the latent space.
Transition $\left( {T : {\mathbb{R}}^{k} \times \mathcal{A} \rightarrow {\mathbb{R}}^{k}}\right)$ : A 3-layer MLP, with layer norm before the last layer, that takes two inputs: the state embedding of an observation $z\left( s\right) \in {\mathbb{R}}^{k}$, and an action $a \in \mathcal{A}$. It predicts the next state embedding $z\left( {s}^{\prime }\right) \in {\mathbb{R}}^{k}$, where ${s}^{\prime }$ is the next state transitioned into when the agent performs action $a$ in the current state $s$.
Executor $\left( {X : {\mathbb{R}}^{k} \times {\mathbb{R}}^{\left| \mathcal{A}\right| \times k} \rightarrow {\mathbb{R}}^{k}}\right)$ : A message-passing GNN pre-trained to simulate each individual step of the value iteration algorithm, following the set-up in [10]. Given the current state embedding ${\overrightarrow{h}}_{s}$, a graph is constructed by enumerating all possible actions $a \in \mathcal{A}$ as edges to expand, and then using the Transition module to predict the next state embeddings as neighbors $\mathcal{N}\left( {\overrightarrow{h}}_{s}\right)$. Finally, the Executor output is an updated state embedding ${\overrightarrow{\mathcal{X}}}_{s} = X\left( {{\overrightarrow{h}}_{s},\mathcal{N}\left( {\overrightarrow{h}}_{s}\right) }\right)$.
Policy and Value $\left( {P : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow {\left\lbrack 0,1\right\rbrack }^{\left| \mathcal{A}\right| }\text{ and }V : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow \mathbb{R}}\right)$ : The Policy module is a linear layer that takes the outputs from the Encoder and Executor, i.e. the state embedding ${\overrightarrow{h}}_{s}$ and the updated state embedding ${\overrightarrow{\mathcal{X}}}_{s}$, and produces a categorical distribution corresponding to the estimated policy, $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$. The Value module is also a linear layer that takes the same inputs and produces the estimated state-value function, $V\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$.
The training procedure follows the XLVIN paper [17], and Proximal Policy Optimization (PPO) [28] is used to train the model, apart from the Executor. We use the PPO implementation and hyperparameters by [29]. The Executor is pre-trained as shown in [10] and directly plugged in.
### 4.2 Discretization of the continuous action space
Assume the continuous action space $\mathcal{A}$ has $D$ dimensions. Given the number of action bins $N$, $\mathcal{A}$ is discretized into evenly spaced discrete action bins. That is, in each dimension $i \in \{ 1,\ldots , D\}$, ${\mathcal{A}}_{i} = \left\lbrack {{v}_{1},{v}_{2}}\right\rbrack$ is converted to $\left\{ {{a}_{i}^{1},{a}_{i}^{2},\ldots ,{a}_{i}^{N}}\right\}$ where ${a}_{i}^{k} = \left\lbrack {v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}\left( {k - 1}\right) ,\;{v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}k\right)$, and the upper bound is taken inclusively when $k = N$. For each action bin ${a}_{i}^{k}$, the median value is chosen as the action to take.
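The binning scheme can be sketched as follows (hypothetical helpers, not the paper's implementation):

```python
import numpy as np

def bin_medians(v1, v2, n_bins):
    """Midpoint (median) of each of the n_bins equal-width bins over [v1, v2]."""
    edges = np.linspace(v1, v2, n_bins + 1)
    return (edges[:-1] + edges[1:]) / 2

def bin_index(a, v1, v2, n_bins):
    """Bin of a continuous action a in [v1, v2]; upper bound taken inclusively."""
    k = int((a - v1) / (v2 - v1) * n_bins)
    return min(k, n_bins - 1)
```

For example, `bin_medians(-1.0, 1.0, 4)` yields the four representative actions `[-0.75, -0.25, 0.25, 0.75]`.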
Challenge: The discretization of a multi-dimensional continuous action space leads to a combinatorial explosion in action space size. The explosion results in two bottlenecks in the architecture: (i) the Policy module that produces the action probabilities and (ii) the construction of the GNN graph, which requires an enumeration of all possible actions. Below, we address the two bottlenecks respectively.
### 4.3 Factorized joint policy
Assume each action $\overrightarrow{a} \in \mathcal{A}$ has $D$ dimensions, and each dimension has $N$ discrete action bins. A naive policy ${\pi }^{ * }\left( {\overrightarrow{a} \mid s}\right)$ produces a categorical distribution over ${N}^{D}$ possible actions. To tackle this challenge, we follow the factorized joint policy proposed in [22]:
$$
|
| 118 |
+
{\pi }^{ * }\left( {\overrightarrow{a} \mid s}\right) = \mathop{\prod }\limits_{{i = 1}}^{D}{\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right) \tag{6}
$$
Figure 2: (a) Factorized joint policy on an action space with dimension of two. (b) Sampling methods when constructing the graph in Executor.
As illustrated in Figure 2(a), a factorized joint policy $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ is a linear layer with an output dimension of $N \cdot D$. It approximates $D$ policies simultaneously. Each policy ${\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right)$ indicates the probability of choosing an action ${a}_{i} \in {\mathcal{A}}_{i}$ in the ${i}^{\text{th}}$ dimension, where $\left| {\mathcal{A}}_{i}\right| = N$. This deals with the exponential explosion of action bins due to discretization: the growth in output size is now linear. Note there is a trade-off in the choice of $N$, as a larger number of action bins retains more information from the continuous action space, but it also implies larger graphs and hence higher computation costs. We provide an ablation study on the impact of this choice in the evaluation.
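A minimal NumPy sketch of such a factorized head (the reshape layout, shapes, and random weights are illustrative assumptions; the paper's layer is part of a trained network):

```python
import numpy as np

def factorized_policy(h_s, W, b, n_bins, dims):
    """Map a state embedding h_s to D independent categorical
    distributions over n_bins actions each, from one linear layer with
    n_bins * dims outputs (instead of one distribution over N**D actions)."""
    logits = h_s @ W + b                   # shape (n_bins * dims,)
    logits = logits.reshape(dims, n_bins)  # one row of logits per dimension
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)  # per-dimension softmax

rng = np.random.default_rng(0)
N, D, H = 11, 6, 32                       # action bins, action dims, embedding size
W, b = rng.normal(size=(H, N * D)), np.zeros(N * D)
probs = factorized_policy(rng.normal(size=H), W, b, N, D)
# Each row is a valid distribution; a joint action is one bin index per dimension.
assert probs.shape == (D, N) and np.allclose(probs.sum(axis=1), 1.0)
action = np.array([rng.choice(N, p=p) for p in probs])
```

The joint probability of `action` is then the product of the per-dimension probabilities, matching Equation (6).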
### 4.4 Neighbor sampling methods
As shown in Figure 2(b), the second bottleneck occurs when constructing a graph on which to execute the pre-trained GNN. It treats each state as a node, then enumerates all possible actions ${\overrightarrow{a}}_{i} \in \mathcal{A}$ to connect neighbors via approximating $\overrightarrow{h}\left( s\right) \overset{{\overrightarrow{a}}_{i}}{ \rightarrow }\overrightarrow{h}\left( {s}_{i}^{\prime }\right)$. Therefore, each node has degree $\left| \mathcal{A}\right|$, and the graph size grows even faster as it expands deeper. To tackle this challenge, instead of using all possible actions, we propose to use a neighbor sampling method that chooses a subset of actions to expand. The important question is which actions to select. The pre-trained GNN uses the constructed graph to simulate value iteration behavior and predict the state-value function. Hence, it is critical that our sampling includes the actions that produce a good approximation of the state-value function.
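To make the scale concrete, a quick back-of-the-envelope computation using the MuJoCo setting from our experiments ($N = 11$ bins, $D = 6$ dimensions); the sample budget $K = 16$ here is purely illustrative:

```python
# Nodes in a fully expanded planning tree of depth T: sum_{t<=T} |A|**t,
# with |A| = N**D after binning, versus K-neighbor sampling: sum_{t<=T} K**t.
N, D, K, T = 11, 6, 16, 2
full = sum((N ** D) ** t for t in range(T + 1))
sampled = sum(K ** t for t in range(T + 1))
print(N ** D)    # 1771561 discrete actions per node
print(full)      # ~3.1e12 nodes already at depth 2
print(sampled)   # 273 nodes with K = 16
```

Even a two-hop planning horizon is infeasible without sampling, which motivates the strategies below.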
Below, we propose four possible methods to sample $K$ actions from $\mathcal{A}$ , where $K \ll \left| \mathcal{A}\right|$ is a fixed number, under the context of value-iteration-based planning.
#### 4.4.1 Gaussian methods
The Gaussian distribution is a common baseline policy distribution for continuous action spaces, and it is straightforward to interpret. Furthermore, it discourages extreme actions while encouraging neutral ones with some level of continuity, which suits the requirements of many planning problems. We propose two variants of sampling policy based on the Gaussian distribution.
(1) Manual-Gaussian: A Gaussian distribution is used to randomly sample action values in each dimension ${a}_{i} \in {\mathcal{A}}_{i}$, which are stacked together as a final action vector $\overrightarrow{a} = {\left\lbrack {a}_{0},\ldots ,{a}_{D - 1}\right\rbrack }^{T} \in \mathcal{A}$. We repeat this $K$ times to sample a subset of $K$ action vectors. We set the mean $\mu = N/2$ and standard deviation $\sigma = N/4$, where $N$ is the number of discrete action bins. These two parameters are chosen to spread a reasonable distribution over $\left\lbrack {0, N - 1}\right\rbrack$. Outliers and non-integers are rounded to the nearest whole number within the range $\left\lbrack {0, N - 1}\right\rbrack$.
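A minimal NumPy sketch of this sampler, working directly in bin-index space (the helper name is ours):

```python
import numpy as np

def manual_gaussian_sample(n_bins, dims, k, rng):
    """Sample k action vectors of bin indices: one Gaussian draw per
    dimension with mean N/2 and std N/4, rounded and clipped to [0, N-1]."""
    raw = rng.normal(loc=n_bins / 2, scale=n_bins / 4, size=(k, dims))
    return np.clip(np.rint(raw), 0, n_bins - 1).astype(int)

rng = np.random.default_rng(0)
actions = manual_gaussian_sample(n_bins=11, dims=6, k=16, rng=rng)
assert actions.shape == (16, 6)
assert actions.min() >= 0 and actions.max() <= 10
```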
(2) Learned-Gaussian: The two parameters manually chosen in the previous method pose a constraint on placing the median action in each dimension as the most likely. Here instead, two fully-connected linear layers are used to separately estimate the mean $\mu$ and standard deviation $\sigma$ . They take the state embedding ${\overrightarrow{h}}_{s}$ from Encoder and output parameter estimations for each dimension. We use the reparameterization trick [30] to make the sampling differentiable.
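The reparameterization trick can be sketched as follows (a NumPy illustration with random weights; in the actual agent the two layers are trained and gradients flow through $\mu$ and $\sigma$ because the noise is drawn outside the computation):

```python
import numpy as np

def reparameterized_sample(h_s, W_mu, W_sigma, rng):
    """Differentiable Gaussian sampling: sample = mu + sigma * eps, where
    eps is parameter-free noise, so mu and sigma stay differentiable."""
    mu = h_s @ W_mu                      # per-dimension mean
    sigma = np.exp(h_s @ W_sigma)        # positive std via exp
    eps = rng.standard_normal(mu.shape)  # noise independent of parameters
    return mu + sigma * eps

rng = np.random.default_rng(0)
H, D = 32, 6                             # embedding size, action dimensions
h_s = rng.normal(size=H)
a = reparameterized_sample(h_s, 0.1 * rng.normal(size=(H, D)),
                           0.1 * rng.normal(size=(H, D)), rng)
assert a.shape == (D,)
```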
#### 4.4.2 Parameter reuse
Gaussian methods still impose a fixed form on the sampling distribution, which may not necessarily fit the task. Previous work [23] studied a similar action sampling problem. They reasoned that since the actions selected by the policy are expected to be more valuable, we can directly use the policy for sampling.
(3) Reuse-Policy: We can reuse the Policy layer $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ to sample the actions when we expand the graph in Executor. This is equivalent to using the policy distribution ${\pi }^{ * } = p\left( {\overrightarrow{a} \mid s}\right)$ as the neighbor sampling distribution. However, the second input ${\overrightarrow{\mathcal{X}}}_{s}$ to the Policy layer comes from Executor, which is not available at the time of constructing the graph. We fill it with ${\overrightarrow{\mathcal{X}}}_{s} = \overrightarrow{0}$ as a placeholder.
#### 4.4.3 Learn to expand
Lastly, we can also use a separate layer to learn the neighbor sampling distribution.
(4) Learned-Sampling: This uses a fully-connected linear layer that consumes ${\overrightarrow{h}}_{s}$ and produces an output dimension of $N \cdot D$. It is expected to learn the optimal neighbor sampling distribution in a factorized joint manner, the same as in Figure 2(a). The outputs are logits for $D$ categorical distributions, where we use Gumbel-Softmax [31] to differentiably sample an action in each dimension, together producing $\overrightarrow{a} = {\left\lbrack {a}_{1},\ldots ,{a}_{D}\right\rbrack }^{T}$.
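A minimal NumPy sketch of Gumbel-Softmax sampling over the $D$ factorized categorical distributions (the temperature $\tau = 0.5$ and the random logits are illustrative choices):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Draw a differentiable (relaxed) categorical sample per row of
    logits using the Gumbel-Softmax trick."""
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                    # Gumbel(0, 1) noise
    y = (logits + g) / tau
    z = np.exp(y - y.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)   # soft one-hot per row

rng = np.random.default_rng(0)
D, N = 6, 11
logits = rng.normal(size=(D, N))       # D categorical distributions over N bins
soft = gumbel_softmax(logits, tau=0.5, rng=rng)
hard = soft.argmax(axis=-1)            # discrete bin index per dimension
assert soft.shape == (D, N) and np.allclose(soft.sum(axis=-1), 1.0)
assert hard.shape == (D,)
```

The soft samples keep the sampler differentiable during training, while the argmax yields the discrete actions used to expand the graph.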
## 5 Results
### 5.1 Classic Control
To evaluate the performance of CNAP agents, we first ran the experiments on a relatively simple MountainCarContinuous-v0 environment from OpenAI Gym Classic Control suite [32], where the action space was one-dimensional. The training of the agent used PPO under 20 rollouts with 5 training episodes each, so the training consumed 100 episodes in total.
We compared two variants of CNAP agents: "CNAP-B" had its Executor pre-trained on a type of binary graph that aimed to simulate the bi-directional control of the car, and "CNAP-R" had its Executor pre-trained on random synthetic Erdős-Rényi graphs. In Table 1, we compared both CNAP agents against a "PPO Baseline" agent that consisted of only the Encoder and Policy/Tail modules. Both CNAP agents outperformed the baseline agent in this environment, indicating the success of extending XLVIN to continuous settings via binning.
Table 1: Mean rewards for MountainCarContinuous-v0 using PPO Baseline and two variants of CNAP agents. All three agents ran on 10 action bins, and were trained on 100 episodes in total. Both CNAP agents executed one step of value iteration. The reward was averaged over 100 episodes and 10 seeds.
<table><tr><td>Model</td><td>MountainCarContinuous-v0</td></tr><tr><td>PPO Baseline</td><td>$- {4.96} \pm {1.24}$</td></tr><tr><td>CNAP-B</td><td>${55.73} \pm {45.10}$</td></tr><tr><td>CNAP-R</td><td>$\mathbf{{63.41}} \pm {37.89}$</td></tr></table>
#### 5.1.1 Effect of GNN width and depth
In Tables 2 and 3, we varied two hyperparameters of the CNAP agents. In Table 2, we varied the number of action bins into which the continuous action space was discretized. In Table 3, we varied the number of GNN steps, corresponding to the number of steps we simulated in the value iteration algorithm. The two hyperparameters controlled the width and depth of the constructed GNN graphs, respectively. The two agents performed best with 10 action bins and one GNN step. We note that the number of training samples might not be sufficient for larger graph width and depth. Also, a deeper graph required repeatedly applying the Transition module, where imprecision may accumulate, leading to inappropriate state embeddings and hence less desirable results.
Table 2: Mean rewards for MountainCarContinuous-v0 using Baseline and CNAP agents by varying the number of action bins, i.e., the width of the graph. The results were averaged over 100 episodes and 10 seeds.
<table><tr><td>Model</td><td>Action Bins</td><td>MountainCar-Continuous</td></tr><tr><td rowspan="3">PPO</td><td>5</td><td>$- {2.16} \pm {1.25}$</td></tr><tr><td>10</td><td>$- {4.96} \pm {1.24}$</td></tr><tr><td>15</td><td>$- {3.95} \pm {0.77}$</td></tr><tr><td rowspan="3">CNAP-B</td><td>5</td><td>${29.46} \pm {57.57}$</td></tr><tr><td>10</td><td>${55.73} \pm {45.10}$</td></tr><tr><td>15</td><td>${22.79} \pm {41.24}$</td></tr><tr><td rowspan="3">CNAP-R</td><td>5</td><td>${20.32} \pm {53.13}$</td></tr><tr><td>10</td><td>$\mathbf{{63.41}} \pm {37.89}$</td></tr><tr><td>15</td><td>${26.21} \pm {46.44}$</td></tr></table>
Table 3: Mean rewards for MountainCarContinuous-v0 using CNAP agents by varying the number of GNN steps, i.e., the depth of the graph. The results were averaged over 100 episodes and 10 seeds.
<table><tr><td>Model</td><td>GNN Steps</td><td>MountainCar-Continuous</td></tr><tr><td rowspan="3">CNAP-B</td><td>1</td><td>${55.73} \pm {45.10}$</td></tr><tr><td>2</td><td>${46.93} \pm {44.13}$</td></tr><tr><td>3</td><td>${40.58} \pm {48.20}$</td></tr><tr><td rowspan="3">CNAP-R</td><td>1</td><td>$\mathbf{{63.41}} \pm {37.89}$</td></tr><tr><td>2</td><td>${34.49} \pm {47.77}$</td></tr><tr><td>3</td><td>${43.61} \pm {46.16}$</td></tr></table>
### 5.2 MuJoCo
We then ran experiments on more complex environments from OpenAI Gym's MuJoCo suite [19, 32] to evaluate how CNAPs could handle the large increase in scale. Unlike the Classic Control suite, the MuJoCo environments have higher dimensions in both their observation and action spaces. We started by evaluating CNAP agents in two environments with relatively low action dimensions, and then moved on to two more environments with much higher dimensions. The discretization of the continuous action space also implied a combinatorial explosion in the action space, resulting in a large graph constructed for the GNN. We used the proposed factorized joint policy from Section 4.3 and the neighbor sampling methods from Section 4.4 to address these limitations.
#### 5.2.1 On low-dimensional environments
In Figure 3, we experimented with the four sampling methods discussed in Section 4.4 on Swimmer-v2 (action space dimension of 2) and HalfCheetah-v2 (action space dimension of 6). We chose the number of action bins $N = 11$ for all the experiments following [22], where the best performance on MuJoCo environments was obtained when $7 \leq N \leq 15$. In all cases, CNAP outperformed the baseline in final performance. Moreover, Manual-Gaussian and Reuse-Policy were the most promising sampling strategies, as they also demonstrated faster learning, hence better sample efficiency. This pointed to the benefits of parameter reuse and the synergistic improvement between learning to act and learning to sample relevant neighbors, as well as the power of a well-chosen manual distribution. We also note that choosing a manual distribution can become non-trivial as the task becomes more complex, especially if choosing the average values for each dimension is not the most desirable. Our work acts as a proof-of-concept of sampling strategies and leaves the choice of parameters for future studies.
#### 5.2.2 On high-dimensional environments
We then further evaluated the scalability of CNAP agents in more complex environments where the dimensionality of the action space was significantly larger, while retaining a relatively low-data regime ($10^6$ actor steps). In Figure 4, we compared all the previously proposed CNAP methods on two environments with highly complex dynamics, both having an action space dimension of 17. In the Humanoid task, all variants of CNAPs outperformed PPO, acquiring knowledge significantly faster.
Figure 3: Average rewards over time for CNAP (red) and PPO baseline (blue), in Swimmer (action dimension=2) and HalfCheetah (action dimension=6), using different sampling methods. In Swimmer, CNAP with sampling methods was compared with the original version expanding all actions (green). In (a), the actions were sampled using a Gaussian distribution with mean $= N/2$ and std $= N/4$, where $N$ was the number of action bins used to discretize the continuous action space. In (b), two linear layers were used to learn the mean and std, respectively. In (c), the Policy layer was reused to sample actions to expand. In (d), a separate linear layer was used to learn the optimal neighbor sampling distribution. The mean rewards were averaged over 100 episodes, and the learning curve was aggregated from 5 seeds.
Particularly, we found that nonparametric approaches to sampling the graph in CNAP (e.g. manual Gaussian and policy reuse) acquired this knowledge significantly faster than any other CNAP approach tested. This supplements our previous results well, and further testifies to the improved learning stability when the sampling process does not contain additional parameters to optimise.
We also evaluated all of the methods considered against PPO on the HumanoidStandup task, with all methods learning to sit up and no apparent distinction in the rate of acquisition. However, we provide some qualitative evidence that the solution found by CNAP appears to be more robust in the way this knowledge is acquired (see Appendix A).
Figure 4: Average rewards over time for CNAP (red) and PPO baseline (blue), in Humanoid (action dimension=17) and HumanoidStandup (action dimension=17), using Manual-Gaussian and Reuse-Policy sampling methods.
#### 5.2.3 Qualitative interpretation
We captured video recordings of the interactions between the agents and the environments to provide a qualitative interpretation of the results above. We chose to look at selected frames taken at equal time intervals from one episode after the last training iteration by CNAP (Manual-Gaussian) and PPO Baseline, respectively.
Figure 5: Selected frames of two agents in HalfCheetah
From Figure 5's HalfCheetah task, we can see that the agent instructed by PPO Baseline fell over quickly and never managed to turn back over. However, CNAP's agent could balance well and kept running forward. This observation could support the higher average episodic rewards gained by CNAP agents than by PPO Baseline in Figure 3.
Figure 6: Selected frames of two agents in Humanoid
Similarly, in Figure 6's Humanoid task, PPO Baseline's humanoid stayed stationary and lost balance quickly, while CNAP's humanoid could walk forward in small steps. This observation aligned with the results in Figure 4 where the gain from CNAP was significant.
The selected frames for the Swimmer and HumanoidStandup tasks are attached in Appendix A. We note that, although quantitatively the CNAP agent did not differentiate itself from PPO Baseline in the HumanoidStandup task as shown in Figure 4, for the trajectories we observed, it successfully remained in a sitting position, while the PPO Baseline fell quickly.
## 6 Conclusion
We present CNAP, a method that generalizes implicit planners to continuous action spaces for the first time. In particular, we study implicit planners based on neural algorithmic reasoners and the unstudied implications of not having precise alignment between the learned graph algorithm and the setup where the executor is applied. To deal with the challenges in building the planning tree, as a result of the continuous, high-dimensional nature of the action space, we combine previous advancements in XLVIN with binning, as well as parametric and non-parametric neighbor sampling strategies. We evaluate the agent against its model-free variant, observing its efficiency in low-data settings and consistently better performance than the baseline. Moreover, this paves the way for extending other implicit planners to continuous action spaces and studying neural algorithmic reasoning beyond strict applications of graph algorithms.
## References

[1] Petar Veličković and Charles Blundell. Neural algorithmic reasoning. arXiv preprint arXiv:2105.02761, 2021.

[2] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.

[3] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1263-1272. PMLR, 2017. URL http://proceedings.mlr.press/v70/gilmer17a.html.

[4] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.

[5] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? In 8th International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rJxbJeHFPS.

[6] Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In 8th International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkgKOOEtvS.

[7] Dobrik Georgiev and Pietro Liò. Neural bipartite matching. CoRR, abs/2005.11304, 2020. URL https://arxiv.org/abs/2005.11304.

[8] Pranjal Awasthi, Abhimanyu Das, and Sreenivas Gollapudi. Beyond GNNs: A sample efficient architecture for graph problems, 2021. URL https://openreview.net/forum?id=Px7xIKHjmMS.

[9] Chaitanya K. Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent, and Xavier Bresson. Learning TSP requires rethinking generalization. CoRR, abs/2006.07054, 2020. URL https://arxiv.org/abs/2006.07054.

[10] Andreea Deac, Pierre-Luc Bacon, and Jian Tang. Graph neural induction of value iteration. arXiv preprint arXiv:2009.12604, 2020.

[11] Richard Bellman. Dynamic Programming. Dover Publications, 1957. ISBN 9780486428093.

[12] Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration networks. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, NIPS, pages 2146-2154, 2016.

[13] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. In NIPS, pages 6118-6128, 2017.

[14] Sufeng Niu, Siheng Chen, Hanyu Guo, Colin Targonski, Melissa C. Smith, and Jelena Kovacevic. Generalized value iteration networks: Life beyond lattices. In AAAI, pages 6246-6253. AAAI Press, 2018.

[15] Gregory Farquhar, Tim Rocktäschel, Maximilian Igl, and Shimon Whiteson. TreeQN and ATreeC: Differentiable tree-structured models for deep reinforcement learning. In ICLR, 2018.

[16] Lisa Lee, Emilio Parisotto, Devendra Singh Chaplot, Eric Xing, and Ruslan Salakhutdinov. Gated path planning networks. In International Conference on Machine Learning, pages 2947-2955. PMLR, 2018.

[17] Andreea Deac, Petar Veličković, Ognjen Milinkovic, Pierre-Luc Bacon, Jian Tang, and Mladen Nikolic. Neural algorithmic reasoners are implicit planners. In Advances in Neural Information Processing Systems 34, pages 15529-15542, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/82e9e7a12665240d13d0b928be28f230-Abstract.html.

[18] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. J. Mach. Learn. Res., 17:39:1-39:40, 2016. URL http://jmlr.org/papers/v17/15-522.html.

[19] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. IEEE, 2012. doi: 10.1109/IROS.2012.6386109.

[20] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. doi: 10.1109/TNN.2008.2005605.

[21] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. CoRR, abs/2104.13478, 2021. URL https://arxiv.org/abs/2104.13478.

[22] Yunhao Tang and Shipra Agrawal. Discretizing continuous action space for on-policy optimization. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 5981-5988. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6059.

[23] Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Mohammadamin Barekatain, Simon Schmitt, and David Silver. Learning and planning in complex action spaces. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pages 4476-4486. PMLR, 2021. URL http://proceedings.mlr.press/v139/hubert21a.html.

[24] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020.

[25] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. CoRR, abs/1812.08434, 2018. URL http://arxiv.org/abs/1812.08434.

[26] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. CoRR, abs/1706.02216, 2017. URL http://arxiv.org/abs/1706.02216.

[27] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. CoRR, abs/1806.01973, 2018. URL http://arxiv.org/abs/1806.01973.

[28] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.

[29] Ilya Kostrikov. PyTorch implementations of reinforcement learning algorithms. https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail, 2018.

[30] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes, 2013. URL https://arxiv.org/abs/1312.6114.

[31] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax, 2016. URL https://arxiv.org/abs/1611.01144.

[32] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
## A Appendix
### A.1 Selected frames for Swimmer and HumanoidStandup tasks
Figure 7: Selected frames of two agents in Swimmer
As seen in Figure 7, CNAP could fold itself slightly faster than PPO Baseline in this episode and swam more quickly.
Figure 8: Selected frames of two agents in HumanoidStandup
We then noticed that although the quantitative performances of PPO Baseline and CNAP were similar in the HumanoidStandup task, Figure 8 revealed some different results. Neither agent managed to stand up, explaining why the episodic rewards were similar numerically. However, the PPO Baseline agent lost balance and fell back to the ground, while the CNAP agent remained sitting, trying to get up. Therefore, CNAP qualitatively performed better in this example.
papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_tex/Initial_manuscript.tex
§ CONTINUOUS NEURAL ALGORITHMIC PLANNERS
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
Neural algorithmic reasoning studies the problem of learning classical algorithms with neural networks, especially with a focus on graph architectures. A recent proposal, XLVIN, reaps the benefits of using a graph neural network that simulates the value iteration algorithm in deep reinforcement learning agents. It allows model-free planning without access to privileged information about the environment, which is usually unavailable. However, XLVIN only supports discrete action spaces, and is hence not trivially applicable to most tasks of real-world interest. We expand XLVIN to continuous action spaces by discretization, and evaluate several selective expansion policies to deal with the large planning graphs. Our proposal, CNAP, demonstrates how neural algorithmic reasoning can make a measurable impact in higher-dimensional continuous control settings, such as MuJoCo, bringing gains in low-data settings and outperforming model-free baselines.
§ 1 INTRODUCTION
Neural networks are capable of learning directly from high-dimensional unstructured input data, tackling the input constraints that limit classical algorithms from solving more complex problems. However, neural networks often require large amounts of data to train and suffer from poor generalization and interpretability. On the other hand, algorithms intrinsically generalize and provide mathematical provability with performance guarantees. The complementary relationship motivates the topic of neural algorithmic reasoning to study the problem of learning classical algorithms with neural networks [1].
Recent works focus on utilizing Graph Neural Networks (GNNs) [2-4] for algorithmic reasoning tasks due to the close algorithmic alignment that was proven to bring better sample efficiency and generalization ability [5, 6]. Besides shortest-path and spanning-tree algorithms, there have been a number of successful applications by aligning GNNs with classical algorithms, covering a range of problems such as bipartite matching [7], the min-cut problem [8], and the Travelling Salesman Problem [9].
We look at the application of using a GNN that simulates the value iteration algorithm [10] in deep reinforcement learning agents. Value iteration [11] is a dynamic programming algorithm that is guaranteed to solve a reinforcement learning problem but is traditionally inhibited by its requirement of tabulated inputs. Earlier works [12-16] introduced value iteration as an inductive bias to enable agents to perform implicit planning, without explicitly invoking a planning algorithm, but were found to suffer from an algorithmic bottleneck [17]. Conversely, eXecuted Latent Value Iteration Net (XLVIN) [17] was proposed to leverage a value-iteration-behaving GNN [10] by adopting the neural algorithmic framework [1]. XLVIN is able to learn under a low-data regime, tackling the algorithmic bottleneck suffered by other implicit planners.
One particular difficulty of implicit planners is handling a continuous action space. XLVIN uses a transition model to build a planning graph, over which the pre-trained GNN can execute value iteration in a latent space. So far, it only applies to environments with small and discrete action spaces. The limitation is that the construction of the planning graph requires an enumeration of all possible actions - starting from the current state and expanding for a number of hops equal to the planning horizon. The graph size quickly explodes as the dimensionality of the action space increases. Moreover, a continuous action space results in an infinite pool of action choices, making the construction of a planning graph infeasible.
Nevertheless, continuous control is of significant importance, as most simulation and robotics control tasks [18] have continuous action spaces by design. High complexity also naturally arises as the problem moves towards more powerful real-world domains. To extend such an agent, powered by neural algorithmic reasoning, to complex continuous control problems, we propose the Continuous Neural Algorithmic Planner (CNAP). It generalizes XLVIN to continuous action spaces by discretizing them through binning. Moreover, CNAP handles the large planning graph by following a sampling policy that carefully selects actions during the neighbor expansion stage. Choosing which actions to sample is critical, as the constructed graph determines where the GNN simulates the value iteration computation, and ultimately influences planning performance.
In addition, the discreteness of the graph neural network simulating the value iteration update rule contrasts with the continuous action space, which corresponds to continuous edges between states. CNAP thus also presents a novel setup for neural algorithmic reasoning, where the downstream task does not fully align with the algorithm studied. This opens a new path for the field, going beyond the current standard of precisely applying learned classical graph algorithms.
We confirm the feasibility of CNAP on a continuous relaxation of a classical low-dimensional control task, where we can still fully expand all of the binned actions after discretization. Then, we apply CNAP to general MuJoCo [19] environments with complex continuous dynamics, where expanding the planning graph over all actions is impossible. By expanding the application scope from simple discrete control to complex continuous control, we show that such an agent with algorithmic reasoning power can be applied to tasks of greater real-world interest.
§ 2 BACKGROUND
§ 2.1 MARKOV DECISION PROCESS (MDP)
A reinforcement learning problem can be formally described using the MDP framework. At each time step $t \in \{ 0,1,\ldots ,T\}$, the agent performs an action ${a}_{t} \in \mathcal{A}$ given the current state ${s}_{t} \in \mathcal{S}$. This spawns a transition into a new state ${s}_{t + 1} \in \mathcal{S}$ according to the transition probability $p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$, and produces a reward ${r}_{t} = r\left( {{s}_{t},{a}_{t}}\right)$. A policy $\pi \left( {{a}_{t} \mid {s}_{t}}\right)$ guides an agent by specifying the probability of choosing an action ${a}_{t}$ given a state ${s}_{t}$. The trajectory $\tau$ is the sequence of states and actions the agent takes, $\left( {{s}_{0},{a}_{0},\ldots ,{s}_{T},{a}_{T}}\right)$. We define the infinite-horizon discounted return as $R\left( \tau \right) = \mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t}$, where $\gamma \in \left\lbrack {0,1}\right\rbrack$ is the discount factor. The goal of an agent is to maximize the overall return by finding the optimal policy ${\pi }^{ * } = {\operatorname{argmax}}_{\pi }{\mathbb{E}}_{\tau \sim \pi }\left\lbrack {R\left( \tau \right) }\right\rbrack$. We can measure the desirability of a state $s$ using the state-value function ${V}^{ * }\left( s\right) = {\mathbb{E}}_{\tau \sim {\pi }^{ * }}\left\lbrack {R\left( \tau \right) \mid {s}_{t} = s}\right\rbrack$.
§ 2.2 VALUE ITERATION
Value iteration is a dynamic programming algorithm that computes the optimal policy and its value function given a tabulated MDP that perfectly describes the environment. It randomly initializes ${V}^{ * }\left( s\right)$ and iteratively updates the value function of each state $s$ using the Bellman optimality equation [11]:
$$
{V}_{i + 1}^{ * }\left( s\right) = \mathop{\max }\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s,a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s,a}\right) {V}_{i}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{1}
$$
and we can extract the optimal policy using:
$$
{\pi }^{ * }\left( s\right) = \mathop{\operatorname{argmax}}\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s,a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s,a}\right) {V}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{2}
$$
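For reference, the two updates above can be sketched for a small tabulated MDP in a few lines of NumPy. This is a minimal illustration of the classical algorithm, not the paper's implementation; the array shapes are our own convention.

```python
import numpy as np

def value_iteration(p, r, gamma=0.9, iters=100):
    """Tabulated value iteration.

    p: transition probabilities p(s' | s, a), shape (S, A, S)
    r: rewards r(s, a), shape (S, A)
    Returns the value function V and the greedy policy pi.
    """
    S, A, _ = p.shape
    V = np.zeros(S)
    for _ in range(iters):
        # Bellman optimality update, Eq. (1): maximize over actions the
        # immediate reward plus the discounted expected next-state value
        V = (r + gamma * p @ V).max(axis=1)
    # Greedy policy extraction, Eq. (2)
    pi = (r + gamma * p @ V).argmax(axis=1)
    return V, pi
```

For instance, on a toy two-state MDP where one action always yields reward 1 and the other reward 0, the values converge to $1/(1-\gamma)$.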
§ 2.3 MESSAGE-PASSING GNN
Graph Neural Networks (GNNs) generalize traditional deep learning techniques to graph-structured data [20][21]. A message-passing GNN [3] iteratively updates each node's features ${\overrightarrow{h}}_{s}$ by aggregating messages from neighboring nodes. At each timestep $t$, a message is computed between each connected pair of nodes via a message function $M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right)$, where ${\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}$ is the edge feature. A node receives messages from all its connected neighbors $\mathcal{N}\left( s\right)$ and aggregates them via a permutation-invariant operator $\oplus$, which produces the same output regardless of the ordering of its inputs. The aggregated message ${\overrightarrow{m}}_{s}^{t}$ of a node $s$ can be formulated as:
$$
{\overrightarrow{m}}_{s}^{t} = {\bigoplus }_{{s}^{\prime } \in \mathcal{N}\left( s\right) }M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right) \tag{3}
$$
The node feature ${\overrightarrow{h}}_{s}^{t}$ is then transformed via an update function $U$ :
$$
{\overrightarrow{h}}_{s}^{t + 1} = U\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{m}}_{s}^{t}}\right) \tag{4}
$$
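Equations (3)-(4) amount to a short loop over edges. The following NumPy sketch uses summation as the permutation-invariant aggregator $\oplus$; the shapes and the toy message/update functions in the usage note are our own illustrative choices, not the paper's model.

```python
import numpy as np

def message_passing_step(h, edges, e_feat, M, U):
    """One round of message passing, Eqs. (3)-(4).

    h:      node features, shape (n, k)
    edges:  iterable of (s_prime, s) pairs; messages flow s' -> s
    e_feat: dict mapping (s_prime, s) -> edge feature
    M, U:   message and update functions
    """
    n, _ = h.shape
    m = np.zeros_like(h)
    for s_prime, s in edges:
        # Eq. (3): aggregate incoming messages with a sum
        m[s] += M(h[s], h[s_prime], e_feat[(s_prime, s)])
    # Eq. (4): update every node from its old features and its message
    return np.array([U(h[s], m[s]) for s in range(n)])
```

For example, with the identity-style functions `M = lambda hs, hn, e: hn` and `U = lambda hs, m: hs + m`, a single edge `(0, 1)` simply adds node 0's features onto node 1.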
§ 2.4 NEURAL ALGORITHMIC REASONING
A dynamic programming (DP) algorithm breaks a problem down into smaller sub-problems and recursively computes their optimal solutions. A DP algorithm has the general form:
$$
\text{ Answer }\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack = \text{ DP-Update }\left( {\{ \text{ Answer }\left\lbrack {k - 1}\right\rbrack \left\lbrack j\right\rbrack \} ,j = 1\ldots n}\right) \tag{5}
$$
The alignment between GNNs and DP can be seen by mapping the node representations ${\overrightarrow{h}}_{s}$ to Answer$\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack$, and the aggregation step of the GNN to DP-Update.
An algorithmic alignment framework was proposed by [5], who proved that GNNs can simulate dynamic programming algorithms efficiently with good sample complexity. Furthermore, [6] showed that imitating the individual steps and intermediate outputs of graph algorithms using GNNs generalizes well to out-of-distribution data.
§ 3 RELATED WORK
§ 3.1 CONTINUOUS ACTION SPACE
A common technique for dealing with continuous control problems is to discretize the action space, converting them into discrete control problems. However, discretization leads to an explosion in the size of the action space. [22] proposed a policy with a factorized distribution across action dimensions, and proved it effective on high-dimensional complex tasks with on-policy optimization algorithms. Moreover, we can sample a subset of actions during node expansion when constructing a planning graph. Sampled MuZero [23] extended MuZero [24] with a sample-based policy based on parameter reuse for policy iteration algorithms. Instead, our work constructs a graph for a neural algorithmic reasoner to execute the value iteration algorithm, where the sampled actions directly participate in the Bellman optimality equation (1).
§ 3.2 LARGE-SCALE GRAPHS
Sampling modules [25] have been introduced into GNN architectures to deal with the neighbor explosion that results from stacking multiple layers on large-scale graphs. The unrolling process used to construct a planning graph requires node-level sampling. The earlier GraphSAGE [26] introduces a fixed-size node expansion procedure into GCNs [2]. It is followed by PinSage [27], which uses a random-walk-based GCN to perform importance-based sampling. However, our work looks at sampling in an implicit planning context, where the importance of each node is harder to assess due to the lack of an exact description of the environment dynamics. Furthermore, sampling in a multi-dimensional action space also requires more care in the decision-making process.
§ 4 ARCHITECTURE
Our architecture uses XLVIN as a starting point, which we introduce first. This is followed by a discussion of the challenges that arise from extending neural algorithmic implicit planners to continuous action spaces, and the approaches we propose to address them.
§ 4.1 XLVIN MODULES
Given the observation space $\mathbf{S}$ and the action space $\mathcal{A}$ , we let the dimension of state embeddings in the latent space be $k$ . The XLVIN architecture can be broken down into four modules:
Figure 1: XLVIN modules
Encoder $\left( {z : \mathbf{S} \rightarrow {\mathbb{R}}^{k}}\right)$: A 3-layer MLP which encodes a raw observation from the environment, $s \in \mathbf{S}$, into a state embedding ${\overrightarrow{h}}_{s} = z\left( s\right)$ in the latent space.
Transition $\left( {T : {\mathbb{R}}^{k} \times \mathcal{A} \rightarrow {\mathbb{R}}^{k}}\right)$: A 3-layer MLP, with layer normalization applied before the last layer, that takes two inputs: the state embedding of an observation $z\left( s\right) \in {\mathbb{R}}^{k}$, and an action $a \in \mathcal{A}$. It predicts the next state embedding $z\left( {s}^{\prime }\right) \in {\mathbb{R}}^{k}$, where ${s}^{\prime }$ is the next state reached when the agent performs action $a$ in the current state $s$.
Executor $\left( {X : {\mathbb{R}}^{k} \times {\mathbb{R}}^{\left| \mathcal{A}\right| \times k} \rightarrow {\mathbb{R}}^{k}}\right)$: A message-passing GNN pre-trained to simulate each individual step of the value iteration algorithm, following the set-up in [10]. Given the current state embedding ${\overrightarrow{h}}_{s}$, a graph is constructed by enumerating all possible actions $a \in \mathcal{A}$ as edges to expand, and then using the Transition module to predict the next state embeddings as neighbors $\mathcal{N}\left( {\overrightarrow{h}}_{s}\right)$. Finally, the Executor outputs an updated state embedding ${\overrightarrow{\mathcal{X}}}_{s} = X\left( {{\overrightarrow{h}}_{s},\mathcal{N}\left( {\overrightarrow{h}}_{s}\right) }\right)$.
Policy and Value $\left( {P : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow {\left\lbrack 0,1\right\rbrack }^{\left| \mathcal{A}\right| }\text{ and }V : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow \mathbb{R}}\right)$: The Policy module is a linear layer that takes the outputs of the Encoder and Executor, i.e. the state embedding ${\overrightarrow{h}}_{s}$ and the updated state embedding ${\overrightarrow{\mathcal{X}}}_{s}$, and produces a categorical distribution corresponding to the estimated policy, $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$. The Value (tail) module is also a linear layer that takes the same inputs and produces the estimated state-value function, $V\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$.
The training procedure follows the XLVIN paper [17], and Proximal Policy Optimization (PPO) [28] is used to train the model, apart from the Executor. We use the PPO implementation and hyperparameters by [29]. The Executor is pre-trained as shown in [10] and directly plugged in.
§ 4.2 DISCRETIZATION OF THE CONTINUOUS ACTION SPACE
Assume the continuous action space $\mathcal{A}$ has $D$ dimensions. Given a number of action bins $N$, $\mathcal{A}$ is discretized into evenly spaced discrete action bins. That is, in each dimension $i \in \{ 1,\ldots ,D\}$, ${\mathcal{A}}_{i} = \left\lbrack {{v}_{1},{v}_{2}}\right\rbrack$ is converted to $\left\{ {{a}_{i}^{1},{a}_{i}^{2},\ldots ,{a}_{i}^{N}}\right\}$, where ${a}_{i}^{k} = \left\lbrack {{v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}\left( {k - 1}\right) ,\;{v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}k}\right)$, and the upper bound is taken inclusively when $k = N$. For each action bin ${a}_{i}^{k}$, the median value is chosen as the action to take.
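For a single dimension, the binning above can be sketched as follows. These are illustrative helpers of our own (the bin midpoint stands in for the median of a uniform bin), not the paper's code.

```python
import numpy as np

def bin_actions(v1, v2, n_bins):
    """Split [v1, v2] into n_bins evenly spaced bins and return the
    representative (midpoint/median) action value of each bin."""
    edges = np.linspace(v1, v2, n_bins + 1)
    return (edges[:-1] + edges[1:]) / 2

def bin_index(a, v1, v2, n_bins):
    """Map a continuous action a to its bin index in [0, n_bins - 1];
    the upper bound v2 falls inclusively into the last bin."""
    k = int((a - v1) / (v2 - v1) * n_bins)
    return min(max(k, 0), n_bins - 1)
```

For example, splitting $[-1, 1]$ into four bins yields representative actions $-0.75, -0.25, 0.25, 0.75$.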
Challenge: The discretization of a multi-dimensional continuous action space leads to a combinatorial explosion in action space size. The explosion results in two bottlenecks in the architecture: (i) the Policy module that produces the action probabilities and (ii) the construction of the GNN graph, which requires an enumeration of all possible actions. Below, we address the two bottlenecks respectively.
§ 4.3 FACTORIZED JOINT POLICY
Assume each action $\overrightarrow{a} \in \mathcal{A}$ has $D$ dimensions, and each dimension has $N$ discrete action bins. A naive policy ${\pi }^{ * }\left( {\overrightarrow{a} \mid s}\right)$ produces a categorical distribution over ${N}^{D}$ possible actions. To tackle this challenge, we adopt the factorized joint policy proposed in [22]:
$$
{\pi }^{ * }\left( {\overrightarrow{a} \mid s}\right) = \mathop{\prod }\limits_{{i = 1}}^{D}{\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right) \tag{6}
$$
Figure 2: (a) Factorized joint policy on an action space with dimension of two. (b) Sampling methods when constructing the graph in Executor.
As illustrated in Figure 2(a), the factorized joint policy $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ is a linear layer with an output dimension of $N \times D$. It approximates $D$ policies simultaneously. Each policy ${\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right)$ gives the probability of choosing an action ${a}_{i} \in {\mathcal{A}}_{i}$ in the ${i}^{\text{th}}$ dimension, where $\left| {\mathcal{A}}_{i}\right| = N$. This deals with the exponential explosion of action bins due to discretization: the growth is now linear. Note there is a trade-off in the choice of $N$, as a larger number of action bins retains more information from the continuous action space, but also implies larger graphs and hence higher computation costs. We provide an ablation study on the impact of this choice in the evaluation.
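Concretely, sampling from such a factorized head reduces to $D$ independent categorical draws. A small NumPy sketch (our own shapes, standing in for the trained linear layer):

```python
import numpy as np

def sample_factorized(logits, rng):
    """Draw one action vector from a factorized joint policy, Eq. (6).

    logits: shape (D, N) -- one categorical distribution per action
            dimension, over N action bins.
    Returns D bin indices, one per dimension.
    """
    # numerically stable softmax, applied per dimension
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = z / z.sum(axis=1, keepdims=True)
    # one independent categorical draw per dimension
    return np.array([rng.choice(p.size, p=p) for p in probs])
```

The head's output grows as $N \times D$ instead of $N^D$; e.g. $N = 11$ and $D = 17$ give 187 logits rather than $11^{17}$ joint probabilities.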
§ 4.4 NEIGHBOR SAMPLING METHODS
As shown in Figure 2(b), the second bottleneck occurs when constructing a graph on which to execute the pre-trained GNN. It treats each state as a node, then enumerates all possible actions ${\overrightarrow{a}}_{i} \in \mathcal{A}$ to connect neighbors via the approximation $\overrightarrow{h}\left( s\right) \overset{{\overrightarrow{a}}_{i}}{ \rightarrow }\overrightarrow{h}\left( {s}_{i}^{\prime }\right)$. Therefore, each node has degree $\left| \mathcal{A}\right|$, and the graph size grows even faster as it expands deeper. To tackle this challenge, instead of using all possible actions, we propose a neighbor sampling method that chooses a subset of actions to expand. The important question is which actions to select. The pre-trained GNN uses the constructed graph to simulate value iteration behavior and predict the state-value function. Hence, it is critical that our sample includes actions that produce a good approximation of the state-value function.
Below, we propose four possible methods to sample $K$ actions from $\mathcal{A}$ , where $K \ll \left| \mathcal{A}\right|$ is a fixed number, under the context of value-iteration-based planning.
§ 4.4.1 GAUSSIAN METHODS
The Gaussian distribution is a common baseline policy distribution for continuous action spaces, and it is straightforward to interpret. Furthermore, it discourages extreme actions while encouraging neutral ones with some level of continuity, which suits the requirements of many planning problems. We propose two variants of sampling policy based on the Gaussian distribution.
(1) Manual-Gaussian: A Gaussian distribution is used to randomly sample an action value in each dimension ${a}_{i} \in {\mathcal{A}}_{i}$, and the values are stacked together into a final action vector $\overrightarrow{a} = {\left\lbrack {a}_{0},\ldots ,{a}_{D - 1}\right\rbrack }^{T} \in \mathcal{A}$. We repeat this $K$ times to sample a subset of $K$ action vectors. We set the mean $\mu = N/2$ and standard deviation $\sigma = N/4$, where $N$ is the number of discrete action bins. These two parameters are chosen to spread a reasonable distribution over $\left\lbrack {0,N - 1}\right\rbrack$. Outliers and non-integers are rounded to the nearest whole number within the range $\left\lbrack {0,N - 1}\right\rbrack$.
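A minimal sketch of this sampler (our own helper; the rounding and clipping follow the description above):

```python
import numpy as np

def manual_gaussian_sample(n_bins, dims, k, rng):
    """Sample k action vectors of `dims` bin indices each, drawing
    every component from N(mu=n_bins/2, sigma=n_bins/4), then
    rounding and clipping into the valid range [0, n_bins - 1]."""
    raw = rng.normal(n_bins / 2, n_bins / 4, size=(k, dims))
    return np.clip(np.rint(raw), 0, n_bins - 1).astype(int)
```

With $N = 10$, for instance, most sampled bin indices fall near the middle of $[0, 9]$, discouraging extreme actions as intended.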
(2) Learned-Gaussian: The two manually chosen parameters of the previous method constrain the distribution to treat the median action of each dimension as the most likely. Here, instead, two fully-connected linear layers are used to separately estimate the mean $\mu$ and standard deviation $\sigma$. They take the state embedding ${\overrightarrow{h}}_{s}$ from the Encoder and output parameter estimates for each dimension. We use the reparameterization trick [30] to make the sampling differentiable.
§ 4.4.2 PARAMETER REUSE
Gaussian methods still impose a fixed form on the sampling distribution, which may not fit the task. Previous work [23] studied a similar action sampling problem. They reasoned that since the actions selected by the policy are expected to be more valuable, we can directly use the policy for sampling.
(3) Reuse-Policy: We can reuse the Policy layer $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ to sample the actions when we expand the graph in the Executor. This is equivalent to using the policy distribution ${\pi }^{ * } = p\left( {\overrightarrow{a} \mid s}\right)$ as the neighbor sampling distribution. However, the second input to the Policy layer, ${\overrightarrow{\mathcal{X}}}_{s}$, comes from the Executor and is not available at the time of constructing the graph. We fill it in with the placeholder ${\overrightarrow{\mathcal{X}}}_{s} = \overrightarrow{0}$.
§ 4.4.3 LEARN TO EXPAND
Lastly, we can also use a separate layer to learn the neighbor sampling distribution.
(4) Learned-Sampling: This uses a fully-connected linear layer that consumes ${\overrightarrow{h}}_{s}$ and produces an output of dimension $N \cdot D$. It is expected to learn the optimal neighbor sampling distribution in a factorized joint manner, as in Figure 2(a). The outputs are logits for $D$ categorical distributions, and we use Gumbel-Softmax [31] for differentiable sampling of actions in each dimension, together producing $\overrightarrow{a} = {\left\lbrack {a}_{1},\ldots ,{a}_{D}\right\rbrack }^{T}$.
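The Gumbel-Softmax relaxation used here can be sketched, forward pass only, in plain NumPy. This is illustrative; a real implementation would run inside the autodiff framework so that gradients flow through the relaxed sample.

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Relaxed categorical sample: returns a point on the simplex
    that approaches a one-hot vector as the temperature tau -> 0."""
    # Gumbel(0, 1) noise via inverse transform sampling
    u = rng.uniform(1e-12, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    y = (logits + g) / tau
    y = y - y.max()  # for numerical stability
    e = np.exp(y)
    return e / e.sum()
```

During graph construction, each of the $D$ dimensions would draw one such relaxed sample from its $N$ logits.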
§ 5 RESULTS
§ 5.1 CLASSIC CONTROL
To evaluate the performance of CNAP agents, we first ran experiments on the relatively simple MountainCarContinuous-v0 environment from the OpenAI Gym Classic Control suite [32], where the action space is one-dimensional. The agent was trained with PPO over 20 rollouts of 5 training episodes each, so training consumed 100 episodes in total.
We compared two variants of CNAP agents: "CNAP-B" had its Executor pre-trained on a type of binary graph that aimed to simulate the bi-directional control of the car, and "CNAP-R" had its Executor pre-trained on random synthetic Erdős-Rényi graphs. In Table 1, we compared both CNAP agents against a "PPO Baseline" agent that consisted of only the Encoder and Policy/Tail modules. Both the CNAP agents outperformed the baseline agent for this environment, indicating the success of extending XLVIN onto continuous settings via binning.
Table 1: Mean rewards for MountainCarContinuous-v0 using PPO Baseline and two variants of CNAP agents. All three agents ran on 10 action bins, and were trained on 100 episodes in total. Both CNAP agents executed one step of value iteration. The reward was averaged over 100 episodes and 10 seeds.
| Model | MountainCarContinuous-v0 |
| --- | --- |
| PPO Baseline | $-4.96 \pm 1.24$ |
| CNAP-B | $55.73 \pm 45.10$ |
| CNAP-R | $\mathbf{63.41} \pm 37.89$ |
§ 5.1.1 EFFECT OF GNN WIDTH AND DEPTH
In Tables 2 and 3, we varied two hyperparameters of the CNAP agents. In Table 2, we varied the number of action bins into which the continuous action space was discretized. In Table 3, we varied the number of GNN steps, corresponding to the number of steps of the value iteration algorithm we simulated. The two hyperparameters control the width and depth of the constructed GNN graphs, respectively. The two agents performed best with 10 action bins and one GNN step. We note that the number of training samples might not be sufficient for larger graph width and depth. Also, a deeper graph requires repeatedly applying the Transition module, where imprecision can accumulate, leading to inappropriate state embeddings and hence less desirable results.
Table 2: Mean rewards for MountainCarContinuous-v0 using PPO Baseline and CNAP agents, varying the number of action bins, i.e., the width of the graph. The results were averaged over 100 episodes and 10 seeds.
| Model | Action Bins | MountainCar-Continuous |
| --- | --- | --- |
| PPO | 5 | $-2.16 \pm 1.25$ |
| PPO | 10 | $-4.96 \pm 1.24$ |
| PPO | 15 | $-3.95 \pm 0.77$ |
| CNAP-B | 5 | $29.46 \pm 57.57$ |
| CNAP-B | 10 | $55.73 \pm 45.10$ |
| CNAP-B | 15 | $22.79 \pm 41.24$ |
| CNAP-R | 5 | $20.32 \pm 53.13$ |
| CNAP-R | 10 | $\mathbf{63.41} \pm 37.89$ |
| CNAP-R | 15 | $26.21 \pm 46.44$ |
Table 3: Mean rewards for MountainCarContinuous-v0 using CNAP agents, varying the number of GNN steps, i.e., the depth of the graph. The results were averaged over 100 episodes and 10 seeds.
| Model | GNN Steps | MountainCar-Continuous |
| --- | --- | --- |
| CNAP-B | 1 | $55.73 \pm 45.10$ |
| CNAP-B | 2 | $46.93 \pm 44.13$ |
| CNAP-B | 3 | $40.58 \pm 48.20$ |
| CNAP-R | 1 | $\mathbf{63.41} \pm 37.89$ |
| CNAP-R | 2 | $34.49 \pm 47.77$ |
| CNAP-R | 3 | $43.61 \pm 46.16$ |
§ 5.2 MUJOCO
We then ran experiments on more complex environments from OpenAI Gym's MuJoCo suite [19, 32] to evaluate how CNAPs handle the increase in scale. Unlike the Classic Control suite, the MuJoCo environments have higher-dimensional observation and action spaces. We started by evaluating CNAP agents in two environments with relatively low action dimensions, and then moved on to two environments with much higher dimensions. The discretization of the continuous action space also implies a combinatorial explosion of the action space, resulting in a large graph for the GNN. We used the factorized joint policy from Section 4.3 and the neighbor sampling methods from Section 4.4 to address these limitations.
§ 5.2.1 ON LOW-DIMENSIONAL ENVIRONMENTS
In Figure 3, we experimented with the four sampling methods discussed in Section 4.4 on Swimmer-v2 (action space dimension 2) and HalfCheetah-v2 (action space dimension 6). We took the number of action bins $N = {11}$ for all experiments, following [22], where the best performance on MuJoCo environments was obtained with $7 \leq N \leq {15}$. In all cases, CNAP outperformed the baseline in final performance. Moreover, Manual-Gaussian and Reuse-Policy were the most promising sampling strategies, as they also demonstrated faster learning and hence better sample efficiency. This points to the benefits of parameter reuse and the synergistic improvement between learning to act and learning to sample relevant neighbors, as well as the power of a well-chosen manual distribution. We also note that choosing a manual distribution can become non-trivial as the task grows more complex, especially if the average value in each dimension is not the most desirable. Our work acts as a proof-of-concept of sampling strategies and leaves the choice of parameters for future studies.
§ 5.2.2 ON HIGH-DIMENSIONAL ENVIRONMENTS
We then further evaluated the scalability of CNAP agents in more complex environments, where the dimensionality of the action space is significantly larger, while retaining a relatively low-data regime ($10^6$ actor steps). In Figure 4, we compared all the previously proposed CNAP methods on two environments with highly complex dynamics, both having an action space dimension of 17. In the Humanoid task, all variants of CNAP outperformed PPO, acquiring knowledge significantly faster.
Figure 3: Average rewards over time for CNAP (red) and the PPO baseline (blue) in Swimmer (action dimension 2) and HalfCheetah (action dimension 6), using different sampling methods. In Swimmer, CNAP with sampling methods is also compared with the original version that expands all actions (green). In (a), actions were sampled from a Gaussian distribution with mean $N/2$ and std $N/4$, where $N$ is the number of action bins used to discretize the continuous action space. In (b), two linear layers were used to learn the mean and std, respectively. In (c), the Policy layer was reused to sample the actions to expand. In (d), a separate linear layer was used to learn the optimal neighbor sampling distribution. The mean rewards were averaged over 100 episodes, and the learning curve was aggregated over 5 seeds.
Particularly, we found that nonparametric approaches to sampling the graph in CNAP (e.g. manual Gaussian and policy reuse) acquired this knowledge significantly faster than any other CNAP approach tested. This supplements our previous results well, and further testifies to the improved learning stability when the sampling process does not contain additional parameters to optimise.
We also evaluated all of the considered methods against PPO on the HumanoidStandup task, with all methods learning to sit up and no apparent distinction in the rate of acquisition. However, we provide some qualitative evidence that the solution found by CNAP appears to be more robust in the way this knowledge is acquired (see Appendix A).
Figure 4: Average rewards over time for CNAP (red) and PPO baseline (blue), in Humanoid (action dimension=17) and HumanoidStandup (action dimension=17), using Manual-Gaussian and Reuse-Policy sampling methods.
§ 5.2.3 QUALITATIVE INTERPRETATION
We captured video recordings of the interactions between the agents and the environments to provide a qualitative interpretation of the results above. We look at frames selected at equal time intervals from one episode after the last training iteration, for CNAP (Manual-Gaussian) and the PPO Baseline respectively.
Figure 5: Selected frames of two agents in HalfCheetah
In Figure 5's HalfCheetah task, we can see that the agent controlled by the PPO Baseline fell over quickly and never managed to turn back over. In contrast, CNAP's agent balanced well and kept running forward. This observation supports the higher average episodic rewards gained by CNAP agents over the PPO Baseline in Figure 3.
Figure 6: Selected frames of two agents in Humanoid
Similarly, in Figure 6's Humanoid task, the PPO Baseline's humanoid stayed stationary and lost balance quickly, while CNAP's humanoid could walk forward in small steps. This observation aligns with the results in Figure 4, where the gain from CNAP was significant.
The selected frames for the Swimmer and HumanoidStandup tasks are attached in Appendix A. We note that, although the CNAP agent was not quantitatively distinguishable from the PPO Baseline on the HumanoidStandup task, as shown in Figure 4, in the trajectories we observed it successfully remained in a sitting position, while the PPO Baseline's agent fell quickly.
§ 6 CONCLUSION
We present CNAP, a method that generalizes implicit planners to continuous action spaces for the first time. In particular, we study implicit planners based on neural algorithmic reasoners and the unstudied implications of not having precise alignment between the learned graph algorithm and the setup where the executor is applied. To deal with the challenges in building the planning tree, as a result of the continuous, high-dimensional nature of the action space, we combine previous advancements in XLVIN with binning, as well as parametric and non-parametric neighbor sampling strategies. We evaluate the agent against its model-free variant, observing its efficiency in low-data settings and consistently better performance than the baseline. Moreover, this paves the way for extending other implicit planners to continuous action spaces and studying neural algorithmic reasoning beyond strict applications of graph algorithms.
papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_md/Initial_manuscript.md

# Transfer Learning using Spectral Convolutional Autoencoders on Semi-Regular Surface Meshes

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

## Abstract

The underlying dynamics and patterns of 3D surface meshes deforming over time can be discovered by unsupervised learning, especially autoencoders, which calculate low-dimensional embeddings of the surfaces. To study the deformation patterns of unseen shapes by transfer learning, we want to train an autoencoder that can analyze new surface meshes without training a new network. However, most state-of-the-art autoencoders cannot handle meshes of different connectivity and therefore offer limited or no transfer learning capacity to new meshes. Moreover, their reconstruction errors increase strongly compared to the errors on the training shapes. To address this, we propose a novel spectral CoSMA (Convolutional Semi-Regular Mesh Autoencoder) network. This patch-based approach is combined with a surface-aware training. It reconstructs surfaces not presented during training and generalizes the deformation behavior of the surfaces' patches. The novel approach reconstructs unseen meshes from different datasets in superior quality compared to state-of-the-art autoencoders that have been trained on these shapes. Our transfer learning reconstruction errors are 40% lower than those from models learned directly on the data. Furthermore, baseline autoencoders detect deformation patterns of unseen mesh sequences only for the whole shape. In contrast, due to the employed regional patches and stable reconstruction quality, we can localize where on the surfaces these deformation patterns manifest.

## 1 Introduction

We study the deformation of surfaces in 3D, which discretize human bodies, animals, or work pieces from computer-aided engineering. Using autoencoders as a method for unsupervised learning, we analyze and detect patterns in the deformation behavior by calculating low-dimensional features. Since surface deformation is locally described by the same physical rules, we want to study the deformation patterns of unseen shapes by transfer learning. That means an autoencoder should be able to analyze new surface meshes without being trained again.

While two-dimensional surfaces embedded in $\mathbb{R}^3$ are locally homeomorphic to two-dimensional space, they are of non-Euclidean nature. Their representation by surface meshes lacks the regularity of the pixels describing images, which is so convenient for 2D CNNs [1]. This is why existing methods for unsupervised learning on irregular surface meshes depend on the mesh connectivity when defining pooling or convolutional operators. For this reason, a trained mesh autoencoder cannot be applied to a surface that is represented by a different mesh, although the local deformation behavior might be similar.

The authors of [2] presented a mesh autoencoder for semi-regular meshes of different sizes. The semi-regular surface representations enforce some local mesh regularity and are made up of regularly meshed patches, as illustrated in Figure 1, which allows the application of their patch-wise approach to meshes of different sizes. However, the reconstruction error increases by a factor of 4 when applying their mesh autoencoder to new meshes and shapes that have not been used during training. This limits the method's application to unseen shapes.

Figure 1: Remeshing of the horse template mesh. In the semi-regular mesh, the boundaries of the regularly meshed patches are highlighted in gray.

Additionally, baseline mesh autoencoders for deforming shapes do not provide an understanding or explanation of which surface areas lead to the patterns in the embedding space. The embeddings represent the entire shape. Nevertheless, when identifying and analyzing deformation patterns, it is of particular relevance where on the surfaces these patterns manifest.

Our work remedies these gaps by adopting the patch-based framework for semi-regular meshes and choosing a spectral graph convolutional filter [3], which projects vertex features to the Laplacian eigenvector basis, in combination with a surface-aware training. Since the spectral filters consider the entire patch, the network generalizes better than a spatial approach, whose filters consider smaller $n$-ring neighborhoods. This improves the quality and smoothness of the reconstruction results when applied to unknown meshes, and the errors are 40% lower than errors from models learned directly on the data. Although spectral graph neural network methods require fixed mesh connectivity, our mesh-independent approach is not limited by this constraint, because the filters are applied to the regular substructures of semi-regular mesh representations of the surfaces. Furthermore, our patch-based approach allows us to correlate patch-wise embeddings with the embedding of the entire shape. This way we localize and understand where on the surfaces the deformation patterns, which are visible in the low-dimensional representation, manifest.

The research objectives can be summarized as a) the definition of a spectral convolutional autoencoder for semi-regular meshes (spectral CoSMA) and a surface-aware training loss, b) thereby improving the generalization capability, transfer learning, and runtime of baseline mesh autoencoders, and c) localizing the deformation patterns visible in the low-dimensional embedding on the surfaces.

In section 2, we discuss work related to learning features from meshed geometry. In section 3, we present relevant characteristics of surface meshes for CNNs and introduce the semi-regular remeshing, followed by the definition of our spectral CoSMA in section 4. Results for different datasets containing meshes with different connectivity are presented in section 5.

## 2 Related Work

### 2.1 Convolutional Networks for Surfaces

Surfaces are generally represented either in the form of point clouds or by a surface mesh, which is defined by faces connecting vertices to each other. We only consider the representation via meshes, because their faces describe the underlying surface [4, 5]. Surface meshes can be viewed as graphs, and hence graph-based convolutional methods are often applied to meshes.

Generally, convolutional networks for graphs can be separated into spectral and spatial ones, of which [1, 6, 7] give an overview. Spatial convolutional methods for graphs aggregate features based on a node's spatial relations, which allows generalization across different mesh connectivities [7, 8]. Spectral approaches, on the other hand, interpret information on the vertices as a signal propagating along the vertices. They exploit the connection between the graph Laplacian and the Fourier basis: vertex features are projected to the Laplacian eigenvector basis, where filters are applied [9]. Instead of explicitly computing Laplacian eigenvectors, the authors of [3] use truncated Chebyshev polynomials, and [10] uses only first-order Chebyshev polynomials. These spectral methods require fixed connectivity of the graph; otherwise, the adjacency matrix and consequently the Laplacian eigenvector basis change.

Furthermore, there are network architectures designed only for surface meshes, e.g. DiffusionNet [11] and HodgeNet [12], which are applied for classification, mesh segmentation, and shape correspondence. Nevertheless, these architectures cannot be implemented directly into autoencoders, because of missing mesh pooling operators.

### 2.2 Neural Networks for Semi-Regular Surface Meshes

Semi-regular triangular surface meshes, also known as meshes with subdivision connectivity, come with a regular local structure and a hierarchical multi-resolution structure. In section 3.2, we provide a more detailed definition. The spatial CoSMA [2] and SubdivNet [13] take advantage of the local regularity of the patches by defining efficient mesh-independent pooling operators and using 2D convolution. By inputting the patches separately into the network, [2] can define an autoencoder pipeline that is independent of the mesh size. [13] apply self-parametrization using the MAPS algorithm [14] to remesh watertight manifold meshes without boundaries. [2], on the other hand, apply a remeshing algorithm that works for meshes with boundaries and coarser base meshes.

### 2.3 Mesh Convolutional Autoencoders

The first convolutional mesh autoencoder (CoMA) was introduced in [15]. Its authors defined mesh downsampling and upsampling layers for pooling and unpooling, which are combined with spectral convolutional filters using truncated Chebyshev polynomials as in [3]. The Neural 3D Morphable Models (Neural3DMM) network presented in [4] improves those results using spiral convolutional layers. The authors of [16] apply CoMA to different datasets and slightly improve the down- and upsampling layers. By manually choosing latent vertices for the embedding space, [17] define an autoencoder that allows interpolating in the latent space. All the above-mentioned mesh convolutional autoencoders work only for meshes of the same size and connectivity, because the pooling and/or convolutional layers depend on the adjacency matrix. The authors of [2] showed that the latter methods are not able to learn data with greater global variations, in comparison to their patch-based approach, which generalizes and reconstructs the deformed meshes in superior quality. Additionally, their architecture can be applied to unseen meshes of different sizes. The MeshCNN architecture [5] can be implemented as an encoder and decoder. Nevertheless, its pooling is feature-dependent and therefore the embeddings can be of different significance.

## 3 Handling Surface Meshes by Neural Networks

The irregularity of surface meshes gives rise to difficulties when handling them with a neural network. These are explained in this section, followed by the motivation and definition of semi-regular meshes.

### 3.1 Irregularity of Surface Meshes

CNNs in 2D [18, 19] apply the same local filters to local neighborhoods of selected pixels of the image. Because of the global grid structure of the image (defined by the x- and y-axes), the filters of constant shape can be shifted horizontally and vertically, and the local neighborhoods are of regular connectivity. CNNs work so efficiently for images because they are translation equivariant and therefore equivariant to the global symmetry of images [20].

The intrinsic dimension of surface meshes is also 2, because they represent a two-dimensional surface. Nevertheless, surface meshes lack global regularities: they are not defined along a global grid, local neighborhoods can have any size and arrangement as long as they are locally Euclidean, and the distance between neighbors is not fixed.

One cannot enforce a regular mesh discretization, which would lead to an underlying global grid, for every surface in $\mathbb{R}^3$ [21]. This is why [2, 22] proposed to enforce a similar structure in the local neighborhoods by choosing a semi-regular representation of the surface. In this way, an efficient application of convolution on surface meshes becomes possible. Note that remeshing the polygonal mesh only changes the representation of the objects: the considered surface embedded in $\mathbb{R}^3$ is the same, but now represented by a different discrete approximation.

Figure 2: Resolution of the regularly meshed patches inside the spectral CoSMA. The encoder pools the patches twice by undoing subdivision. In the decoder, the unpooling increases the resolution again by subdivision. The orange vertices are the vertices from the irregular base mesh. Red and purple vertices have been created during the first and second refinement steps.

### 3.2 Definition of Semi-Regular Meshes

We consider semi-regular meshes in order to mitigate the problems caused by the irregularity of surface meshes while still allowing a flexible surface representation (see Figure 1). Following the definitions in [23], we call a surface mesh semi-regular if we can convert it to a low-resolution mesh by iteratively merging four triangular faces into one. Consequently, all vertices of the semi-regular mesh, except for the ones remaining in the low-resolution mesh, are regular (i.e. have six neighbors). Vice versa, the regular subdivision of a possibly irregular low-resolution mesh yields a semi-regular mesh. Such a regular subdivision can be achieved by inserting a vertex on each edge and splitting each original triangular face into 4 sub-triangles. [13, 24] refer to this property as Loop subdivision connectivity of the semi-regular mesh. The subdivision connectivity makes semi-regular meshes particularly useful for multiresolution analysis and directly implies a suitable local pooling operator on semi-regular meshes (see section 4).
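
To make the subdivision bookkeeping concrete, the following sketch (our own illustration, not code from the paper) counts vertices, edges, and faces over regular 1-to-4 subdivision steps on a closed triangle mesh. The Euler characteristic is preserved, and the face count grows by a factor of 4 per step.

```python
def subdivide_counts(V, E, F):
    """One regular (1-to-4) subdivision step on a closed triangle mesh:
    a new vertex on every edge, every edge split in two plus three interior
    edges per face, and every face split into four sub-triangles."""
    return V + E, 2 * E + 3 * F, 4 * F

# Icosahedron: V=12, E=30, F=20, Euler characteristic V - E + F = 2.
V, E, F = 12, 30, 20
for _ in range(2):
    V, E, F = subdivide_counts(V, E, F)
print(V, E, F, V - E + F)  # 162 480 320 2
```

For meshes with boundaries the edge and vertex counts differ slightly; the sketch only illustrates the closed case.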

### 3.3 Remeshing

We apply the remeshing from [2], because other algorithms, e.g. Neural Subdivision [22] or MAPS [14], only work for closed surfaces without boundaries and fail for base meshes as coarse as ours. The algorithm iteratively subdivides a coarse approximation of the original irregular mesh (see Figure 1). The resulting semi-regular mesh is fitted to the original mesh using gradient descent on a loss function based on the chamfer distance. The refinement level $rl$ states the number of times each face of the coarse base mesh is iteratively subdivided. The number of faces in the final semi-regular mesh is $n_F^{\text{semireg}} = 4^{rl} \cdot n_F^c$, with $n_F^c$ being the number of faces describing the coarse base mesh. We choose the refinement level $rl = 4$, which leads to finer meshes compared to [2], who chose $rl = 3$.

After the remeshing, all vertices that are newly created during the subdivision have six neighbors. Therefore, the resulting mesh is semi-regular, i.e. has subdivision connectivity.
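
The chamfer-based fitting objective can be illustrated with a minimal symmetric chamfer distance between two point samples. This is a sketch of the general idea only; the actual remeshing loss in [2] may contain additional terms.

```python
import torch

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (n, 3) and b (m, 3):
    for each point, the squared distance to its nearest neighbor in the
    other set, averaged over both directions."""
    d = torch.cdist(a, b)  # (n, m) pairwise Euclidean distances
    return (d.min(dim=1).values ** 2).mean() + (d.min(dim=0).values ** 2).mean()

pts = torch.rand(100, 3)
print(chamfer_distance(pts, pts).item())  # ~0 for identical sets
```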

## 4 Spectral CoSMA

The network handles the regional patches separately, which allows us to handle meshes of different sizes. We describe how the graph convolution is combined with the padding and the surface-aware loss as well as the pooling of the patches, and how one takes advantage of the semi-regular meshing. The building blocks are put together to define the spectral CoSMA (Spectral Convolutional Semi-Regular Mesh Autoencoder).

### 4.1 Spectral Chebyshev Convolutional Filters

We apply fast Chebyshev filters [3], as in [15], with the distinction that we use them to perform spectral convolutions on the regional patches instead of the entire mesh. We justify this different convolution on the patches, compared to [2], by the intuition that spectral filters encode information of a whole patch and the general characteristics of its deformations, whereas spatial convolution considers just the local neighborhood around a vertex.

We use the formulation of [3] for convolving over our regularly meshed patches. We perform a spectral decomposition using spectral filters and apply convolutions directly in the frequency space.

The spectral filters are approximated by truncated Chebyshev polynomials, which avoid explicitly computing the Laplacian eigenvectors and, by this means, reduce the computational complexity.

The decomposition using spectral filters depends on the adjacency matrix, which restricts the transfer learning of spectral graph convolution to meshes of the same connectivity. Nevertheless, the adjacency matrix of the patches of our semi-regular meshes is always the same for a given refinement level. This allows us to train the filters for all patches together and to apply them to unseen meshes.
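
The patch-wise filtering can be sketched as the standard Chebyshev recursion from [3] applied to a dense, rescaled patch Laplacian. This is a minimal re-implementation for illustration; in practice an optimized layer such as PyTorch Geometric's `ChebConv` would be used.

```python
import torch

def cheb_conv(x, L_scaled, weight):
    """y = sum_k T_k(L~) x W_k with the Chebyshev recursion
    T_k(L~) = 2 L~ T_{k-1}(L~) - T_{k-2}(L~).
    x: (n, f_in) vertex features of one patch,
    L_scaled: (n, n) rescaled Laplacian 2 L / lambda_max - I,
    weight: (K, f_in, f_out) filter coefficients."""
    out = x @ weight[0]
    Tx_prev, Tx = x, L_scaled @ x
    if weight.shape[0] > 1:
        out = out + Tx @ weight[1]
    for k in range(2, weight.shape[0]):
        Tx_prev, Tx = Tx, 2 * L_scaled @ Tx - Tx_prev
        out = out + Tx @ weight[k]
    return out
```

Because all patches of one refinement level share the same Laplacian, a single `weight` tensor serves every patch.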

### 4.2 Pooling and Padding of the Regular Patches

We apply the patch-wise average pooling and unpooling from [2], which take advantage of the multi-scale structure of the semi-regular meshes. The subdivision connectivity guarantees that every 4 faces can be uniformly pooled to 1. The remaining vertices take the average of their own value and the values of the neighboring vertices that are removed. The unpooling operator subdivides the faces, and the newly created vertices are assigned the average value of the neighboring vertices from the lower-resolution mesh patch. A similar pooling and unpooling operator is also applied by [13], where the information is saved on the faces.
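
The average pooling step can be sketched with precomputed index tensors (an illustrative data layout of our own, not the paper's implementation): each kept coarse-level vertex averages its own feature with those of the fine-level neighbors removed by undoing one subdivision step.

```python
import torch

def patch_avg_pool(x, keep_idx, neigh_idx, neigh_mask):
    """x: (n, f) fine-level vertex features of one patch;
    keep_idx: (m,) vertices surviving one inverse subdivision step;
    neigh_idx: (m, k) removed fine-level neighbors of each kept vertex (padded);
    neigh_mask: (m, k) 1.0 where neigh_idx is valid, 0.0 for padding."""
    own = x[keep_idx]                                  # (m, f)
    neigh = (x[neigh_idx] * neigh_mask.unsqueeze(-1)).sum(dim=1)
    count = neigh_mask.sum(dim=1, keepdim=True) + 1.0  # +1: the vertex itself
    return (own + neigh) / count

x = torch.tensor([[1.0], [3.0], [5.0]])
pooled = patch_avg_pool(x, torch.tensor([0]),
                        torch.tensor([[1, 2]]), torch.tensor([[1.0, 1.0]]))
print(pooled)  # tensor([[3.]]) -- (1 + 3 + 5) / 3
```

Unpooling works in the opposite direction: each newly inserted vertex is assigned the average of its coarse-level neighbors.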

The padding is crucial for the network to consider the regional patches in a larger context. Since the network handles the patches separately, we consider the features of the neighboring patches in a padding of size 2, as in [2]. If the vertices are boundary vertices, we pad the patch with the boundary vertices' features.

### 4.3 Network Architecture

While using specialized pooling and convolution techniques for the regular patches, the general structure of our network architecture is inspired by [2, 15]. Our autoencoder architecture combines spectral Chebyshev convolutional filters with the described pooling technique to process the padded regular patches of a semi-regular mesh. The autoencoder compresses every padded patch, which corresponds to one face of the low-resolution mesh, from $\mathbb{R}^{276 \times 3}$ ($rl = 4$) to an $hr = 10$ dimensional latent vector and reconstructs the original padded patch from the latent vector.

The encoder consists of two blocks, each containing a Chebyshev convolutional layer followed by an average pooling layer and an exponential linear unit (ELU) as activation function [25]. The output of the second encoding block is mapped to the latent space by a fully connected layer.

The decoder mirrors the structure of the encoder by first applying a fully connected layer, which transforms the latent space vector back to a regular triangle representation with refinement level $rl = 2$. Afterward, two decoding blocks, each consisting of an unpooling layer followed by a convolutional layer, transform the coarse triangle representation back to the original padded patch representation. Finally, another Chebyshev convolutional layer is applied without activation function to reconstruct the original patch coordinates by reducing the number of features to three dimensions.

All Chebyshev convolutional layers use $K = 6$ Chebyshev polynomials. Table 3 in the supplementary material gives a detailed view of the structure of the network together with the parameter numbers per layer, which sum up to 23,053. Figure 2 illustrates the patch sizes inside the autoencoder. Note that we are able to handle non-manifold edges of the coarse base mesh, because the patches, whose interiors by construction have only manifold edges, are fed separately. The code will be provided as supplementary material.

This spectral CoSMA architecture can handle all surface meshes that have been remeshed into a semi-regular mesh representation of the same refinement level. Thanks to the remeshing and the separate handling of the regional padded patches, this workflow is independent of the original irregular mesh connectivity.
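
The overall patch pipeline can be sketched at the shape level as follows. This is only a structural illustration: the Chebyshev layers are replaced by per-vertex linear maps, the subdivision-based (un)pooling by fixed averaging matrices `P_down`/`P_up`, and the coarse-level vertex count of 21 is a placeholder; only the 276 input vertices and the 10-dimensional latent size come from the paper.

```python
import torch
import torch.nn as nn

N4, N2, LATENT = 276, 21, 10  # padded-patch vertices at rl=4 / rl=2, latent dim

class PatchAutoencoderSketch(nn.Module):
    """Structural sketch of the patch-wise autoencoder: encode per vertex,
    pool to the coarse level, map to a 10-dim latent, then mirror back."""
    def __init__(self, P_down, P_up):
        super().__init__()
        self.P_down, self.P_up = P_down, P_up          # (N2, N4), (N4, N2)
        self.enc = nn.Sequential(nn.Linear(3, 32), nn.ELU())
        self.to_latent = nn.Linear(N2 * 32, LATENT)
        self.from_latent = nn.Linear(LATENT, N2 * 32)
        self.dec = nn.Linear(32, 3)

    def forward(self, x):                              # x: (N4, 3)
        h = self.P_down @ self.enc(x)                  # "pool" to coarse level
        z = self.to_latent(h.reshape(-1))              # 10-dim patch embedding
        h = self.from_latent(z).reshape(N2, 32)
        return self.dec(self.P_up @ h), z              # reconstruction, latent

P_down = torch.full((N2, N4), 1.0 / N4)                # placeholder pooling
P_up = torch.full((N4, N2), 1.0 / N2)                  # placeholder unpooling
rec, z = PatchAutoencoderSketch(P_down, P_up)(torch.randn(N4, 3))
print(rec.shape, z.shape)  # torch.Size([276, 3]) torch.Size([10])
```

Because every patch runs through the same weights, the parameter count does not grow with the number of patches, which is what keeps the model small and mesh-size independent.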

### 4.4 Surface-Aware Loss Calculation

The authors of the patch-based spatial CoSMA [2] employ a patch-wise mean squared error as the training loss. However, that loss calculation does not keep track of the multiple appearances of vertices in the patch boundaries. Therefore, it is not surface-aware, and during training the error on the patch boundaries is weighted higher than in the interior of the patches. By weighting the vertex-wise error in the training loss by the vertices' number of appearances in the different patches, we employ a surface-aware error for training. This reduces the P2S error, as visible in the ablation study.
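
One way to read this weighting (our interpretation, not the authors' code) is to divide each vertex's squared error by its number of appearances across patches, so a boundary vertex shared by several patches contributes the same total weight as an interior vertex:

```python
import torch

def surface_aware_mse(pred, target, multiplicity):
    """Patch-wise squared error where every vertex error is divided by the
    number of patches the vertex appears in.
    pred, target: (n, 3) stacked patch vertices; multiplicity: (n,) counts."""
    per_vertex = ((pred - target) ** 2).sum(dim=1)
    return (per_vertex / multiplicity).mean()

pred = torch.zeros(2, 3)
target = torch.ones(2, 3)
mult = torch.tensor([1.0, 2.0])  # second vertex shared by two patches
print(surface_aware_mse(pred, target, mult).item())  # (3/1 + 3/2) / 2 = 2.25
```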

Table 1: Point to surface (P2S) errors $(\times 10^{-2})$ between reconstructed unseen semi-regular meshes ($rl = 4$) and the original irregular mesh, with standard deviations over three different training runs. [4, 13, 15] have to be trained per mesh; we and [2] train one network for all three animals in the GALLOP dataset. *: the elephant has not been seen by the network during training.

| Mesh Class | CoMA [15] | Neural3DMM [4] | SubdivNet [13] | Spatial CoSMA [2] | Ours |
|---|---|---|---|---|---|
| FAUST | 0.7073 ± 1.751 | 0.4064 ± 0.921 | 2.8190 ± 4.699 | 0.0224 ± 0.045 | **0.0031** ± 0.006 |
| Horse | 0.0053 ± 0.017 | 0.0096 ± 0.045 | 0.0113 ± 0.025 | 0.0078 ± 0.012 | **0.0022** ± 0.005 |
| Camel | 0.0075 ± 0.023 | 0.0145 ± 0.056 | 0.0113 ± 0.024 | 0.0091 ± 0.014 | **0.0030** ± 0.006 |
| Elephant | 0.0101 ± 0.031 | 0.0147 ± 0.057 | 0.0145 ± 0.032 | 0.0316 ± 0.068* | 0.0054 ± 0.012* |

Table 2: P2S errors $(\times 10^{-2})$ for three different training runs. Additionally, the Euclidean P2S error (in cm) is given. *: the entire YARIS dataset has not been seen by the network during training.

| Dataset | Component Lengths | Spatial CoSMA [2] Test P2S | Spatial CoSMA [2] Eucl. E. | Ours Test P2S | Ours Eucl. E. |
|---|---|---|---|---|---|
| TRUCK | 135-370 cm | 0.0660 ± 0.117 | 2.76 cm | 0.0013 ± 0.003 | 0.26 cm |
| YARIS* | 21-91 cm | 0.2061 ± 0.438 | 0.84 cm | 0.0375 ± 0.088 | 0.31 cm |

## 5 Experiments

We test our spectral CoSMA for semi-regular meshes, using a setup similar to [2], on four different datasets and compare the achieved reconstruction errors to state-of-the-art surface mesh autoencoders.

### 5.1 Datasets

GALLOP: The dataset contains triangular meshes representing a motion sequence with 48 timesteps of a galloping horse, elephant, and camel [26]. The galloping movement is similar, but the meshes representing the surfaces of the three animals differ in connectivity and number of vertices. This is why the baseline autoencoders have to be trained three times. The surface approximations are remeshed to semi-regular meshes with refinement level $rl = 4$ for each animal. The new meshes are still of different connectivity, but all are made up of regional regular patches. Table 6 lists the resulting numbers of vertices. We normalize the semi-regular meshes to $[-1, 1]$ as in [2]. Before inputting the data to the CoSMAs, every patch is translated to zero mean. We use the first 70% of the galloping sequence of the horse and camel for training. The architecture is tested on the remaining 30% and the whole sequence of the elephant, which is never seen during the training of the CoSMAs.

FAUST: The dataset contains 100 meshes [27], which are in correspondence with each other. The irregular surface meshes represent 10 different bodies in 10 different poses. For the experiments, we consider two unknown poses of all bodies (20% of the data) as the testing set. The meshes are remeshed and normalized in the same way as for the GALLOP dataset.

TRUCK and YARIS: In a car crash simulation, the car components, which are generally represented by surface meshes, often deform in different patterns. Every component is discretized by a surface mesh, while the local deformation is described by the same physical rules. Following [2], the TRUCK dataset contains 32 completed frontal crash simulations and 6 components, and the YARIS dataset contains 10 simulations and 10 components. 30 simulations and 70% of the timesteps of the TRUCK dataset are included in the training set. The remaining samples from the TRUCK dataset and the entire YARIS dataset, representing a different car, are used for testing. For this setup, the authors of [2, 28] detect patterns in the deformation of the TRUCK and YARIS components. We normalize the meshes that discretize the car components to zero mean and range $[-1, 1]$, preserving the ratio of the coordinates. Every patch is translated to zero mean.

Figure 3: Reconstructed unknown FAUST pose and elephant test sample at $t = 43$ by CoMA, Neural3DMM, the SubdivNet autoencoder, the spatial CoSMA, and our network. The P2S error of the reconstructed faces is highlighted. More reconstruction examples are given in the supplementary material. *: the elephant's mesh has not been presented during training to the spatial CoSMA and our network.

### 5.2 Training Details

We train the network (implemented in PyTorch [29] and PyTorch Geometric [30]) with the adaptive learning rate optimization algorithm [31]. For the GALLOP and the FAUST dataset, we use a learning rate of 0.0001 and train for 150 epochs using a batch size of 100. For the TRUCK data, we choose a batch size of 100 combined with a learning rate of 0.001 for 300 epochs, since the variation inside the dataset is higher. We minimize the surface-aware loss between the original and reconstructed regional patches of the surface mesh without considering the padding. To augment the data in the case of the GALLOP and the FAUST dataset, we rotate the regional patches by $0^\circ$, $120^\circ$, and $240^\circ$.

Our architecture requires at least 50% fewer parameters than the CoMA, Neural3DMM, and SubdivNet networks, because for increasing $rl$ and consequently finer meshes, the CoSMAs require only a few additional parameters in the linear layers (compare Tables 6 and 7 in the supplementary material). This is because the patches share the convolutional filters' parameters. The spectral CoSMA requires 15% fewer parameters than the spatial CoSMA. The runtime analysis and an ablation study justifying the parameter choices are provided in the supplementary material.

### 5.3 Reconstructions of the Meshes

The mean squared error between true and reconstructed vertices of the semi-regular mesh allows a comparison of different methods only if the same remeshing result is used. In contrast to [2], we compare the reconstructed semi-regular mesh directly to the original irregular surface mesh by calculating a point-to-surface (P2S) error: we average the squared distances between the vertices of the semi-regular mesh and their orthogonal projections onto the surface described by the irregular mesh. This allows us to compare the reconstruction errors when using different remeshing results or refinements.

Besides CoMA [15] and Neural3DMM [4], we use an additional baseline semi-regular mesh autoencoder using our network's architecture with the pooling and convolutional layers from SubdivNet [13] to process the entire meshes. In Table 1 we compare the autoencoders on the GALLOP and FAUST datasets in terms of the P2S errors of reconstructed test samples, whose 3D coordinates lie in the range $[-1, 1]$. Our network reduces the test reconstruction error for the GALLOP and FAUST datasets by more than 50% and 80%, respectively, if the shape is presented to the autoencoder during training. For unknown poses from the FAUST dataset, the limbs' positions are reconstructed inaccurately by the CoMA, Neural3DMM, and SubdivNet autoencoders. Especially if the pose is not similar to the training poses, their reconstruction fails, as Figure 3 illustrates.
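
The P2S error can be approximated with a short Monte-Carlo sketch: sample the reference triangles uniformly via barycentric coordinates and take, for each query vertex, the squared distance to the nearest sample. The paper uses exact orthogonal projections onto the surface; this sampling variant is a simple illustrative stand-in.

```python
import numpy as np

def p2s_error(points, tri_verts, samples_per_face=2000, rng=None):
    """Approximate point-to-surface (P2S) error.
    points: (n, 3) query vertices; tri_verts: (m, 3, 3) triangle corners."""
    rng = rng or np.random.default_rng(0)
    u = rng.random((tri_verts.shape[0], samples_per_face, 2))
    fold = u.sum(-1) > 1
    u[fold] = 1 - u[fold]               # fold the unit square into the triangle
    w = np.stack([1 - u[..., 0] - u[..., 1], u[..., 0], u[..., 1]], axis=-1)
    surf = np.einsum('msk,mkd->msd', w, tri_verts).reshape(-1, 3)
    d2 = ((points[:, None, :] - surf[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

tri = np.array([[[0.0, 0, 0], [1, 0, 0], [0, 1, 0]]])  # one triangle in z=0
print(p2s_error(np.array([[0.2, 0.2, 0.5]]), tri))     # close to 0.25
```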

Figure 4: Reconstructed front beams from the TRUCK (length of 150 cm) at time $t = 24$ (test sample) from two crash simulations representing different deformation behavior, and from the YARIS (length of 65 cm) at $t = 15$. The average Euclidean P2S error (in cm) of the faces is highlighted.
|
| 152 |
+
|
| 153 |
+
The spectral CoSMA's reconstructions are generally smoother than the ones from the spatial CoSMA, which reduces the reconstruction errors. Figure 7 in the supplementary material shows that the reconstructed patch using spectral filters, which encode the connectivity of the whole patch in the Chebyshev polynomials, is smoother than the spatial reconstruction, where the convolutional kernels only consider the close neighborhood. Because the spatial CoSMA uses ${hr} = 8$ and no surface-aware loss, we also list our reconstruction errors using these parameters in the ablation study for a complete comparison.
|
| 154 |
+
|
| 155 |
+
Transfer Learning to Meshes with New Connectivity: Our spectral CoSMA and the spatial CoSMA are the only networks that can reconstruct an unseen shape of different connectivity. The elephant's mesh has never been presented to our network, nevertheless, our reconstruction error is lower. Even though trained on the elephant, the baselines' reconstructions are worse and unstable in the legs, as Figure 3 illustrates. The spatial CoSMA's reconstructions of the unseen elephant are inferior to all the other networks, although the reconstructions of the known camel and horse are of similar quality to the other baselines. This highlights the improved transfer learning and generalization capability of the new spectral approach.
|
| 156 |
+
|
| 157 |
+
Since the TRUCK and YARIS datasets contain 16 different meshes, the reconstruction results are compared between the CoSMA architectures. In Table 2 we present the average P2S errors for the TRUCK and YARIS dataset between the components scaled to range $\left\lbrack {-1,1}\right\rbrack$ and in cm. The entire YARIS dataset has never been presented to the network during training. The results on the YARIS in Figure 4 also show that our network not only reconstructs smoother surfaces in comparison to the spatial CoSMA but also has higher transfer learning capacities.
A comparison of the results for refinement levels $rl = 3$ and $rl = 4$ for the TRUCK and YARIS datasets (see Table 8 in the supplementary material) shows the stability of the results from our spectral CoSMA. For the spatial CoSMA, on the other hand, the reconstruction quality decreases when increasing the refinement level. This is due to its fixed kernel size of 2: since the mesh is finer, the neighborhoods considered by a spatial filter with kernel size 2 cover smaller areas of the surface. The spectral CoSMA considers the entire patch in its spectral representation, so an increase in the refinement level does not impair the reconstruction quality.
### 5.4 Low-dimensional Embedding
We project the patch-wise hidden representations of size $hr$ into two-dimensional space using the linear dimensionality reduction method Principal Component Analysis (PCA) [32]. We then compare these patch-wise results to the 2D embedding over time of the whole shape, which is obtained by concatenating the hidden patch-wise representations and then applying PCA.
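A minimal numpy sketch of this two-level PCA; all array shapes and the random codes `Z` are hypothetical placeholders for the network's hidden representations:

```python
import numpy as np

def pca_2d(X):
    """Project rows of X (n_samples, n_features) onto the first two
    principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, sorted by singular value.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T  # (n_samples, 2) embedding

# Hypothetical sizes: T timesteps, P patches, hidden representation hr.
T, P, hr = 40, 96, 10
Z = np.random.default_rng(0).normal(size=(T, P, hr))  # stand-in patch codes

per_patch = [pca_2d(Z[:, j, :]) for j in range(P)]    # one 2D curve per patch
whole_shape = pca_2d(Z.reshape(T, P * hr))            # concatenate, then PCA
```

Each entry of `per_patch` and the `whole_shape` array give one 2D point per timestep, i.e. the time-dependent embedding curves compared in the following paragraph.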
The time-dependent embedding for the unseen elephant from the GALLOP dataset exhibits a periodic galloping sequence, visualized in Figure 5 (a). We compare how similar the 2D patch-wise embeddings are to the 2D embedding of the entire shape, to determine how important the deformation of each patch is for the general deformation behavior of the whole shape. The patch-wise distance is visualized in Figure 5 (b) and its calculation is detailed in the supplementary material. We notice that this distance is lowest for the body and legs, which define the elephant's gallop, whereas the movement of the head does not follow the periodic pattern.
Figure 5: (a) 2D embedding of the low-dimensional representation of the whole elephant over time. (b) Distance of the patch-wise embeddings to the embedding of the whole shape. (c) Patch-wise score for the TRUCK's front beam from Figure 4 at $t = 24$. Only patches with a high score manifest the deformation in two patterns, which is visible in the example patches with high and low scores. The embedding's colors encode timestep and branch.
For the TRUCK and YARIS datasets, the goal is the detection of clusters corresponding to different deformation patterns in the components' embeddings. This speeds up the analysis of car crash simulations, since relations between model parameters and the deformation behavior are discovered more easily [28, 33]. In the 2D visualizations for the TRUCK components, we detect two clusters corresponding to different deformation behaviors, and our patch-based approach allows us to identify the patches that contribute most to this separation. For each patch, we define a score, which equals the accuracy of an SVM (between 0.5 and 1) that classifies the observed two deformation patterns of the entire component from the patch's embedding, see Figure 5 (c). The highlighted patches correspond to the left part of the beam, where the deformation is visibly different for two different TRUCK simulations in Figure 4. Note that this comparison of patch- and shape-embeddings does not lead to significant results for the spatial CoSMA [2] because of the instability of its results.
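The scoring idea can be sketched as follows; since this is only an illustration, a nearest-centroid linear classifier stands in for the paper's SVM, and the 2D embeddings are synthetic:

```python
import numpy as np

def patch_score(emb, labels):
    """Accuracy of a simple linear (nearest-centroid) classifier that
    predicts the deformation branch from a patch's 2D embedding.
    Stand-in for the SVM used in the paper; clamping to max(acc, 1-acc)
    keeps the binary score in [0.5, 1]."""
    c0 = emb[labels == 0].mean(axis=0)
    c1 = emb[labels == 1].mean(axis=0)
    pred = (np.linalg.norm(emb - c1, axis=1)
            < np.linalg.norm(emb - c0, axis=1)).astype(int)
    acc = (pred == labels).mean()
    return max(acc, 1.0 - acc)

rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 50)
# A discriminative patch: the two branches occupy separate regions.
sep = np.concatenate([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
# An uninformative patch: both branches overlap completely.
flat = rng.normal(0, 0.5, (100, 2))
print(patch_score(sep, labels))   # near 1.0 (separable)
print(patch_score(flat, labels))  # near 0.5 (uninformative)
```

Patches whose embeddings separate the two branches well thus receive a score close to 1 and are the ones highlighted in Figure 5 (c).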
For the YARIS, which has never been seen by the network during training, we also visualize the low-dimensional representation of different components in 2D using PCA. We detect a deformation pattern in the front beams that splits the simulation set into two clusters, see Figure 9 in the supplementary material, a result similar to that of [2], who used a nonlinear dimensionality reduction.
## 6 Conclusion
We have introduced a novel spectral mesh autoencoder pipeline for the analysis of deforming 3D semi-regular surface meshes with different connectivity. This allows us to generate high-quality reconstructions of unseen meshes that have not been presented during training. In fact, the reconstruction quality for unknown meshes with our spectral CoSMA is higher than with baseline autoencoders that have seen the meshes during training. This improved transfer learning capability and reconstruction quality motivate the future analysis of generative models for the patch-based approach. For high-quality generative results, we also plan to improve the remeshing procedure to focus more on detailed structures. Currently, the loss of smaller detailed geometric structures in the remeshing has little effect on the results, since we want to detect behavioral patterns in the low-dimensional representations of the global deformation.
Additionally, we provide an understanding and interpretation of which surface areas lead to the patterns in the embedding space. We speculate that this information per patch could be used in further analysis. We also plan to apply the architecture to other tasks such as shape matching and segmentation.
## References
[1] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. 1, 2
[2] Sara Hahner and Jochen Garcke. Mesh convolutional autoencoder for semi-regular meshes of different sizes. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 885-894, 2022. 1, 3, 4, 5, 6, 7, 9, 12, 14, 15
[3] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, volume 29, pages 3844-3852, 2016. 2, 3, 4
[4] Giorgos Bouritsas, Sergiy Bokhnyak, Stylianos Ploumpis, Stefanos Zafeiriou, and Michael Bronstein. Neural 3D morphable models: Spiral convolutional networks for 3D shape representation learning and generation. Proceedings of the IEEE International Conference on Computer Vision, pages 7212-7221, 2019. doi: 10.1109/ICCV.2019.00731. 2, 3, 6, 7, 14, 15
[5] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. MeshCNN: A network with an edge. ACM Transactions on Graphics, 38(4):1-12, jul 2019. doi: 10.1145/3306346.3322959. 2, 3
[6] Michael M. Bronstein, Joan Bruna, Yann Lecun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34 (4):18-42, 2017. doi: 10.1109/MSP.2017.2693418. 2
[7] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020. doi: 10.1109/tnnls.2020.2978386. 2
[8] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. 34th International Conference on Machine Learning, 3:2053-2070, 2017. 2
[9] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and deep locally connected networks on graphs. 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings, pages 1-14, 2014. 2
[10] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, pages 1-14, 2016. 2, 12
[11] Nicholas Sharp, Souhaib Attaiki, Keenan Crane, and Maks Ovsjanikov. DiffusionNet: Discretization agnostic learning on surfaces. ACM Transactions on Graphics, 41(3):1-16, 2022. 3
[12] Dmitriy Smirnov and Justin Solomon. HodgeNet: Learning spectral geometry on triangle meshes. ACM Transactions on Graphics, 40(4):1-11, jul 2021. doi: 10.1145/3450626.3459797. 3
[13] Shi-Min Hu, Zheng-Ning Liu, Meng-Hao Guo, Jun-Xiong Cai, Jiahui Huang, Tai-Jiang Mu, and Ralph R. Martin. Subdivision-based mesh convolution networks. ACM Transactions on Graphics, 41(3):1-16, 2022. 3, 4, 5, 6, 7, 14, 15
[14] Aaron W.F. Lee, Wim Sweldens, Peter Schröder, Lawrence Cowsar, and David Dobkin. MAPS: Multiresolution adaptive parameterization of surfaces. Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1998, pages 95-104, 1998. doi: 10.1145/280814.280828. 3, 4
[15] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black. Generating 3D faces using convolutional mesh autoencoders. Proceedings of the European Conference on Computer Vision, pages 704-720, 2018. doi: 10.1007/978-3-030-01219-9_43. 3, 4, 5, 6, 7, 14, 15
[16] Yu Jie Yuan, Yu Kun Lai, Jie Yang, Qi Duan, Hongbo Fu, and Lin Gao. Mesh variational autoencoders with edge contraction pooling. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, volume 2020-June, pages 1105-1112. IEEE Computer Society, jun 2020. doi: 10.1109/CVPRW50498.2020.00145. 3
[17] Yi Zhou, Chenglei Wu, Zimo Li, Chen Cao, Yuting Ye, Jason Saragih, Hao Li, and Yaser Sheikh. Fully convolutional mesh autoencoder using efficient spatially varying kernels. In Advances in Neural Information Processing Systems, volume 33, pages 9251-9262, 2020. 3
[18] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. 3
[19] Yann LeCun, Lionel D. Jackel, Brian Boser, John S. Denker, Henry P. Graf, Isabelle Guyon, Don Henderson, Richard E. Howard, and William Hubbard. Handwritten digit recognition: applications of neural network chips and automatic learning. IEEE Communications Magazine, 27(11):41-46, nov 1989. doi: 10.1109/35.41400. 3
[20] Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral CNN. 36th International Conference on Machine Learning, 2019-June:2357-2371, 2019. 3
[21] Luitzen Egbertus Jan Brouwer. Über Abbildung von Mannigfaltigkeiten. Mathematische Annalen, 71(4), dec 1912. doi: 10.1007/BF01456812. 3
[22] Hsueh Ti Derek Liu, Vladimir G. Kim, Siddhartha Chaudhuri, Noam Aigerman, and Alec Jacobson. Neural subdivision. ACM Transactions on Graphics, 39(4):1-16, jul 2020. doi: 10.1145/3386569.3392418. 3, 4
[23] Frédéric Payan, Céline Roudet, and Basile Sauvage. Semi-regular triangle remeshing: A comprehensive study. Computer Graphics Forum, 34(1):86-102, 2015. doi: 10.1111/cgf.12461. 4
[24] Charles Loop. Smooth subdivision surfaces based on triangles. Master's thesis, The University of Utah, jan 1987. 4
[25] Djork Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, pages 1-14, 2016. 5
[26] Robert W. Sumner and Jovan Popović. Deformation transfer for triangle meshes. ACM Transactions on Graphics, 23(3):399-405, 2004. doi: 10.1145/1186562.1015736. 6
[27] Federica Bogo, Javier Romero, Matthew Loper, and Michael J. Black. FAUST: Dataset and evaluation for 3D mesh registration. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3794-3801, 2014. doi: 10.1109/CVPR.2014.491. 6
[28] Sara Hahner, Rodrigo Iza-Teran, and Jochen Garcke. Analysis and prediction of deforming 3D shapes using oriented bounding boxes and LSTM autoencoders. In Artificial Neural Networks and Machine Learning, pages 284-296. Springer International Publishing, 2020. 6, 9
[29] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32, pages 8026-8037, 2019. 7
[30] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. arXiv preprint arXiv:1903.02428, mar 2019. 7
[31] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pages 1-15, 2015. 7
[32] Karl Pearson. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572, nov 1901. doi: 10.1080/14786440109462720. 8
[33] Bastian Bohn, Jochen Garcke, Rodrigo Iza-Teran, Alexander Paprotny, Benjamin Peherstorfer, Ulf Schepsmeier, and Clemens August Thole. Analysis of car crash simulation data with nonlinear machine learning methods. Procedia Computer Science, 18:621-630, 2013. doi: 10.1016/j.procs.2013.05.226. 9
[34] Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3D point clouds. 35th International Conference on Machine Learning, ICML 2018, 1:67-85, 2018. 13
## A Supplementary Material
## Code and Detailed Network Architecture
As an addition to the architecture's description in section 4 and its visualization in Figure 2, we give a detailed distribution of parameters over the hexagonal convolutional, fully connected, and pooling layers in Table 3. We provide the code through an anonymized repository: https://anonymous.4open.science/r/spectralCoSMA-6156/README.md.
Table 3: Structure of the autoencoder for refinement level $rl = 4$, number of Chebyshev polynomials $K = 6$, and hidden representation of size $hr = 10$. The bullets $\bullet$ denote the batch size. The data's last dimension is the number of vertices considered for each padded patch.
| Encoder Layer | Output Shape | Param. | Decoder Layer | Output Shape | Param. |
|---|---|---|---|---|---|
| Input | $(\bullet, 3, 267)$ | 0 | Fully Connected | $(\bullet, 2^5, 15)$ | 5280 |
| ChebConv | $(\bullet, 2^4, 267)$ | 304 | Unpooling | $(\bullet, 2^5, 78)$ | 0 |
| Pooling | $(\bullet, 2^4, 78)$ | 0 | ChebConv | $(\bullet, 2^5, 78)$ | 6176 |
| ChebConv | $(\bullet, 2^5, 78)$ | 3104 | Unpooling | $(\bullet, 2^4, 267)$ | 0 |
| Pooling | $(\bullet, 2^5, 15)$ | 0 | ChebConv | $(\bullet, 2^4, 267)$ | 3088 |
| Fully Connected | $(\bullet, 10)$ | 4810 | ChebConv | $(\bullet, 3, 267)$ | 291 |
## Ablation Study
We perform an ablation study to justify some of the design and parameter choices in our spectral CoSMA architecture. In Table 4, we report the P2S errors on the FAUST dataset and the elephant from the GALLOP dataset after 50 epochs of training. The accuracy degrades for at least one of the two datasets when we reduce the degree $K$ of the Chebyshev polynomials, reduce the size of the hidden representation ${hr}$ , reduce the number of output channels of the convolutional layers, or change the Chebyshev Graph Convolution to the Graph Convolution from [10], who use only first-order Chebyshev polynomials. For the latter change, the networks are trained for 100 epochs.
We also list the P2S errors for a training without the surface-aware training loss, using instead the patch-wise mean squared error and a hidden representation of size $hr = 8$ as in [2]. These networks are trained for 150 epochs, as in the main experiments.
Table 4: Ablation study of our parameter choices based on P2S errors $(\times 10^{-2})$ for 2 training runs.
| Model | P2S Error FAUST | P2S Error Elephant |
|---|---|---|
| full | **0.0031 ± 0.006** | 0.0054 ± 0.012 |
| $hr = 8$ | 0.0053 ± 0.010 | 0.0083 ± 0.016 |
| $K = 4$ | 0.0031 ± 0.006 | 0.0055 ± 0.012 |
| $2^3$ and $2^4$ channels | 0.0031 ± 0.006 | 0.0060 ± 0.013 |
| GCN [10] | 0.0032 ± 0.006 | 0.0056 ± 0.012 |
| Patch-wise train MSE | 0.0033 ± 0.006 | 0.0074 ± 0.015 |
| $hr = 8$ and patch-wise train MSE as in [2] | 0.0041 ± 0.007 | 0.0085 ± 0.016 |
## Runtime Analysis
Our spectral CoSMA has a runtime per epoch similar to the spatial CoSMA for $rl = 4$; see Table 5 for the GALLOP and FAUST datasets. For $rl = 3$ the runtime is reduced by 50%, because the spectral CoSMA's runtime scales with the refinement level.
For a more detailed comparison, we illustrate the validation error per epoch in Figure 6 when training both networks with the patch-wise training error. It shows that the spectral CoSMA converges in six times fewer epochs than the spatial CoSMA. This means that the total training time of a spectral CoSMA on the GALLOP and FAUST datasets is reduced by more than 75% for $rl = 4$. The training has been conducted on an Nvidia Tesla V100.
Figure 6: Training error (vertex-to-vertex mean squared error measured for each patch) per epoch for the GALLOP dataset and $rl = 4$ for the training of the CoSMA networks.
Table 5: Runtime of the different CoSMAs per epoch when training on the GALLOP and FAUST datasets using a batch size of 100.
| Mesh Class | Spatial CoSMA, $rl = 3$ | Spatial CoSMA, $rl = 4$ | Ours, $rl = 3$ | Ours, $rl = 4$ |
|---|---|---|---|---|
| FAUST | 17.3 sec | 18.7 sec | 6.9 sec | 11.8 sec |
| GALLOP | 16.7 sec | 17.8 sec | 10.1 sec | 17.2 sec |
## Additional Reconstructed Samples
We provide additional reconstructed samples from the GALLOP and FAUST datasets in Figure 8. Additionally, Figure 7 compares reconstructed patches from the two CoSMA approaches. It is visible that the reconstruction from the novel spectral CoSMA is smoother.
## 2D Visualizations of the Embeddings
Figure 9 shows the embeddings in the low-dimensional space for two YARIS front beams. The beams deform in two different branches, which manifests in the embedding.
For the GALLOP dataset, we calculate a distance between the patch-wise embeddings and the embedding of the entire shape, to determine how important the patch's deformation is for the general deformation behavior of the whole shape. We interpolate and densely subsample the lines connecting the embedding points of consecutive timesteps. Between the sampled points ${p}_{i}^{s}$ describing the deformation of the entire shape over time and the sampled points ${p}_{j}^{p}$ from the patch’s embedding, we calculate a chamfer distance, since the embedding shape is cyclic. The chamfer distance [34] measures the average squared distance between each point ${p}_{i}^{s}$ to its nearest neighbor from all points ${p}_{j}^{p}$ and vice versa. Therefore the distance is the lowest for circle-like patch-wise embeddings.
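A minimal numpy sketch of this distance, assuming synthetic circular curves in place of the PCA embeddings (`resample_polyline` and `chamfer` are hypothetical helper names):

```python
import numpy as np

def resample_polyline(points, samples_per_edge=20, closed=True):
    """Densely subsample the lines connecting consecutive embedding
    points (closed, since the galloping embedding is cyclic)."""
    pts = np.vstack([points, points[:1]]) if closed else points
    t = np.linspace(0.0, 1.0, samples_per_edge, endpoint=False)[:, None]
    segs = [(1 - t) * pts[i] + t * pts[i + 1] for i in range(len(pts) - 1)]
    return np.vstack(segs)

def chamfer(a, b):
    """Symmetric chamfer distance [34]: average squared distance from
    each point in a to its nearest neighbor in b, and vice versa."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Synthetic 2D embeddings over time: a circle and a slightly shifted copy.
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
shape_emb = np.stack([np.cos(t), np.sin(t)], axis=1)   # whole-shape curve
patch_emb = shape_emb + np.array([0.1, 0.0])           # one patch's curve

dist = chamfer(resample_polyline(shape_emb), resample_polyline(patch_emb))
```

Circle-like patch embeddings that trace the same cycle as the whole-shape embedding yield a distance near zero, matching the observation above.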
Figure 7: Comparison of reconstructed patches of the CoSMA networks.
Figure 8: Additional reconstructed unknown FAUST pose and reconstructed horse test sample at $t = 39$ by CoMA, Neural3DMM, the SubdivNet autoencoder, the spatial CoSMA, and our network, with highlighted P2S error.
Figure 9: Spectral CoSMA embeddings of the YARIS front beams for 10 simulations, which deform in two branches. Color encodes timestep and branch.
## Model Parameters and Reconstruction Errors for Refinement Level 3
For the baselines and our spectral CoSMA, we list the number of trainable parameters of the models for the different meshes at refinement levels $rl = 3$ and $rl = 4$. Increasing the refinement level by one increases the number of faces by a factor of four.
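This factor-of-four rule can be stated as a tiny helper (the function name is hypothetical; only the growth rule comes from the text):

```python
def faces_at_level(base_faces: int, rl: int) -> int:
    """Each subdivision step splits every triangle into four children,
    so a base mesh with base_faces triangles has base_faces * 4**rl
    faces at refinement level rl."""
    return base_faces * 4 ** rl

print(faces_at_level(24, 3))   # a 24-face base mesh at rl = 3
print(faces_at_level(24, 4))   # four times as many faces at rl = 4
```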
Table 6: Number of vertices per mesh and trainable parameters for the reconstruction of semi-regular meshes using refinement level 4.
| Mesh Class | #Vertices (irregular) | #Vertices (semi-regular) | CoMA [15] | Neural 3DMM [4] | SubdivNet [13] | Spatial CoSMA [2] | Ours |
|---|---|---|---|---|---|---|---|
| FAUST | 6890 | 12,772 | 46,379 | 426,195 | 879,857 | 26,888 | 23,053 |
| Horse | 8,431 | 14,745 | 50,731 | 459,987 | 1,010,417 | 26,888 | 23,053 |
| Camel | 21,887 | 12,802 | 46,923 | 430,419 | 879,857 | 26,888 | 23,053 |
| Elephant | 42,321 | 15,362 | 52,363 | 472,659 | 1,053,937 | 26,888 | 23,053 |
Table 7: Comparison of the number of parameters for meshes of refinement level 3 from [2].
| Mesh Class | CoMA [15] | Neural 3DMM [4] | Spatial CoSMA [2] | Ours |
|---|---|---|---|---|
| FAUST | 26,795 | 276,275 | 18,184 | 16,235 |
| Horse | 27,339 | 280,499 | 18,184 | 16,235 |
| Camel | 26,795 | 292,659 | 18,184 | 16,235 |
| Elephant | 27,339 | 296,883 | 18,184 | 16,235 |
Table 8: Point-to-surface (P2S) errors $(\times 10^{-2})$ between the reconstructed unseen semi-regular meshes ($rl = 3$) and the original irregular mesh, with their standard deviations over three different training runs. Additionally, the average Euclidean vertex-wise error (in cm) is given.
*: the entire YARIS dataset has not been seen by the network during training.
| Dataset | Component Lengths | Spatial CoSMA [2]: Test P2S | Spatial CoSMA [2]: Eucl. E. | Ours: Test P2S | Ours: Eucl. E. |
|---|---|---|---|---|---|
| TRUCK | 135-370 cm | 0.0443 ± 0.071 | 2.23 cm | 0.0043 ± 0.009 | 0.43 cm |
| YARIS* | 21-91 cm | 0.1784 ± 0.380 | 0.80 cm | 0.0458 ± 0.090 | 0.37 cm |
papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_tex/Initial_manuscript.tex
§ TRANSFER LEARNING USING SPECTRAL CONVOLUTIONAL AUTOENCODERS ON SEMI-REGULAR SURFACE MESHES
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
The underlying dynamics and patterns of 3D surface meshes deforming over time can be discovered by unsupervised learning, especially autoencoders, which calculate low-dimensional embeddings of the surfaces. To study the deformation patterns of unseen shapes by transfer learning, we want to train an autoencoder that can analyze new surface meshes without training a new network. Here, most state-of-the-art autoencoders cannot handle meshes of different connectivity and therefore have little to no transfer learning capacity for new meshes. Also, reconstruction errors increase strongly in comparison to the errors for the training shapes. To address this, we propose a novel spectral CoSMA (Convolutional Semi-Regular Mesh Autoencoder) network. This patch-based approach is combined with a surface-aware training. It reconstructs surfaces not presented during training and generalizes the deformation behavior of the surfaces' patches. The novel approach reconstructs unseen meshes from different datasets in superior quality compared to state-of-the-art autoencoders that have been trained on these shapes. Our transfer learning reconstruction errors are $40\%$ lower than those from models learned directly on the data. Furthermore, baseline autoencoders detect deformation patterns of unseen mesh sequences only for the whole shape. In contrast, due to the employed regional patches and stable reconstruction quality, we can localize where on the surfaces these deformation patterns manifest.
§ 1 INTRODUCTION
We study the deformation of surfaces in 3D, which discretize human bodies, animals, or workpieces from computer-aided engineering. Using autoencoders as a method for unsupervised learning, we analyze and detect patterns in the deformation behavior by calculating low-dimensional features. Since surface deformation is locally described by the same physical rules, we want to study the deformation patterns of unseen shapes by transfer learning. That means an autoencoder should be able to analyze new surface meshes without being trained again.
While two-dimensional surfaces embedded in $\mathbb{R}^3$ are locally homeomorphic to two-dimensional space, they are of non-Euclidean nature. Their representation by surface meshes lacks the regularity of the pixels describing images, which is so convenient for 2D CNNs [1]. This is why existing methods for unsupervised learning on irregular surface meshes depend on the mesh connectivity when defining pooling or convolutional operators. For this reason, a trained mesh autoencoder cannot be applied to a surface that is represented by a different mesh, although the local deformation behavior might be similar.
The authors of [2] presented a mesh autoencoder for semi-regular meshes of different sizes. The semi-regular surface representations enforce some local mesh regularity and are made up of regularly meshed patches as illustrated in Figure 1, which allows the application of their patch-wise approach to meshes of different sizes. However, the reconstruction quality decreases by a factor of 4 when applying their mesh autoencoder to new meshes and shapes that have not been used during training. This limits the method's application for unseen shapes.
Figure 1: Remeshing of the horse template mesh. In the semi-regular mesh, the boundaries of the regularly meshed patches are highlighted in gray.
Additionally, baseline mesh autoencoders for deforming shapes do not provide an understanding or explanation about which surface areas lead to the patterns in the embedding space. The embeddings represent the entire shape. Nevertheless, when identifying and analyzing deformation patterns, it is of particular relevance where on the surfaces these patterns manifest.
Our work remedies these gaps by adopting the patch-based framework for semi-regular meshes and choosing a spectral graph convolutional filter [3] projecting vertex features to the Laplacian eigenvector basis in combination with a surface-aware training. Since the spectral filters consider the entire patch, the network generalizes better in comparison to a spatial approach, whose filters consider smaller $n$ -ring neighborhoods. This improves the quality and smoothness of the reconstruction results when being applied to unknown meshes and the errors are ${40}\%$ lower than errors from models learned directly on the data. Although spectral graph neural network methods require fixed mesh connectivity, our mesh-independent approach is not limited by this constraint. This is because the filters are applied to the regular substructures of semi-regular mesh representations of the surfaces. Furthermore, our patch-based approach allows us to correlate patch-wise embeddings with the embedding of the entire shape. This way we localize and understand where on the surfaces the deformation patterns, which are visible in the low-dimensional representation, manifest.
The research objectives can be summarized as a) the definition of a spectral convolutional autoencoder for semi-regular meshes (spectral CoSMA) and a surface-aware training loss, by this means b) improving the generalization capability, transfer learning and runtime of baseline mesh autoencoders, and c) localizing the deformation patterns visible in the low-dimensional embedding on the surfaces.
In section 2, we discuss work related to learning features from meshed geometry. In section 3, we present relevant characteristics of surface meshes for CNNs and introduce the semi-regular remeshing, followed by the definition of our spectral CoSMA in section 4. Results for different datasets containing meshes with different connectivity are presented in section 5.
§ 2 RELATED WORK
§ 2.1 CONVOLUTIONAL NETWORKS FOR SURFACES
Surfaces are generally represented either as point clouds or by a surface mesh, which is defined by faces connecting vertices to each other. We only consider the representation via meshes, because their faces describe the underlying surface [4, 5]. Surface meshes can be viewed as graphs, and hence graph-based convolutional methods are often applied to meshes.
Generally, convolutional networks for graphs can be separated into spectral and spatial ones, of which [1, 6, 7] give an overview. Spatial convolutional methods for graphs aggregate features based on a node's spatial relations, which allows generalization across different mesh connectivities [7, 8]. Spectral approaches, on the other hand, interpret information on the vertices as a signal propagating along the vertices. They exploit the connection between the graph Laplacian and the Fourier basis: vertex features are projected onto the Laplacian eigenvector basis, where filters are applied [9]. Instead of explicitly computing Laplacian eigenvectors, the authors of [3] use truncated Chebyshev polynomials, and [10] uses only first-order Chebyshev polynomials. These spectral methods require fixed connectivity of the graph; otherwise, the adjacency matrix and consequently the Laplacian eigenvector basis change.
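To make the spectral viewpoint concrete, the following minimal NumPy sketch (our illustration, not code from any of the cited works; the filter function `g` and the use of the combinatorial Laplacian are assumptions) projects vertex features onto the Laplacian eigenbasis, scales them there, and projects back:

```python
import numpy as np

def spectral_filter(X, A, g):
    """Filter vertex features X (n x f) on a graph with adjacency matrix A
    by applying a spectral function g to the Laplacian eigenvalues."""
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian L = D - A
    lam, U = np.linalg.eigh(L)              # U: graph Fourier basis
    # transform to the spectral domain, filter, and transform back
    return U @ (g(lam)[:, None] * (U.T @ X))
```

With `g` identically 1 the features pass through unchanged, showing that the operation is a change of basis; a learned filter replaces `g` by a parametrized function of the eigenvalues.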
Furthermore, there are network architectures designed solely for surface meshes, e.g. DiffusionNet [11] and HodgeNet [12], which are used for classification, mesh segmentation, and shape correspondence. Nevertheless, these architectures cannot be used directly in autoencoders because they lack mesh pooling operators.
§ 2.2 NEURAL NETWORKS FOR SEMI-REGULAR SURFACE MESHES
Semi-regular triangular surface meshes, also known as meshes with subdivision connectivity, come with a regular local structure and a hierarchical multi-resolution structure. In section 3.2, we provide a more detailed definition. The spatial CoSMA [2] and SubdivNet [13] take advantage of the local regularity of the patches by defining efficient mesh-independent pooling operators and using 2D convolution. By feeding the patches separately into the network, [2] define an autoencoder pipeline that is independent of the mesh size. [13] apply self-parametrization using the MAPS algorithm [14] to remesh watertight manifold meshes without boundaries. [2], on the other hand, apply a remeshing algorithm that works for meshes with boundaries and coarser base meshes.
§ 2.3 MESH CONVOLUTIONAL AUTOENCODERS
The first convolutional mesh autoencoder (CoMA) was introduced in [15]. The authors introduced mesh downsampling and upsampling layers for pooling and unpooling, combined with spectral convolutional filters using truncated Chebyshev polynomials as in [3]. The Neural 3D Morphable Models (Neural3DMM) network presented in [4] improves those results using spiral convolutional layers. The authors of [16] apply CoMA to different datasets and slightly improve the downsampling and upsampling layers. By manually choosing latent vertices for the embedding space, [17] define an autoencoder that allows interpolation in the latent space. All the above-mentioned mesh convolutional autoencoders work only for meshes of the same size and connectivity, because the pooling and/or convolutional layers depend on the adjacency matrix. The authors of [2] showed that the latter methods are not able to learn data with greater global variations in comparison to their patch-based approach, which generalizes and reconstructs the deformed meshes at superior quality. Additionally, their architecture can be applied to unseen meshes of different sizes. The MeshCNN architecture [5] can be implemented as an encoder and decoder; nevertheless, its pooling is feature-dependent, and therefore the embeddings can be of different significance.
§ 3 HANDLING SURFACE MESHES BY NEURAL NETWORKS
The irregularity of surface meshes gives rise to difficulties when handling them with a neural network. These are explained in this section, followed by the motivation and definition of semi-regular meshes.
§ 3.1 IRREGULARITY OF SURFACE MESHES
CNNs in 2D [18, 19] apply the same local filters to local neighborhoods of selected pixels of the image. Because of the global grid structure (defined by the x- and y-axes) of the image, filters of constant shape can be shifted horizontally and vertically, and the local neighborhoods are of regular connectivity. CNNs work so efficiently for images because they are translation equivariant and therefore equivariant to the global symmetry of images [20].
The intrinsic dimension of surface meshes is also 2, because they represent a two-dimensional surface. Nevertheless, surface meshes lack global regularity: they are not defined along a global grid, local neighborhoods can have any size and arrangement as long as they are locally Euclidean, and the distance between neighbors is not fixed.
One cannot enforce a regular mesh discretization for every surface in $\mathbb{R}^3$, which would lead to an underlying global grid [21]. This is why [2, 22] proposed to enforce a similar structure in the local neighborhoods by choosing a semi-regular representation of the surface. In this way, an efficient application of convolution on surface meshes becomes possible. Note that remeshing the polygonal mesh only changes the representation of the objects. The considered surface embedded in $\mathbb{R}^3$ is the same, but now represented by a different discrete approximation.
Figure 2: Resolution of the regularly meshed patches inside the spectral CoSMA. The encoder pools the patches twice by undoing subdivision. In the decoder, the unpooling increases the resolution again by subdivision. The orange vertices are the vertices from the irregular base mesh. Red and purple vertices have been created during the first and second refinement steps.
§ 3.2 DEFINITION OF SEMI-REGULAR MESHES
We consider semi-regular meshes in order to mitigate the problems caused by the irregularity of surface meshes while still allowing a flexible surface representation (see Figure 1). Following the definitions in [23], we call a surface mesh semi-regular if we can convert it to a low-resolution mesh by iteratively merging four triangular faces into one. Consequently, all vertices of the semi-regular mesh, except for the ones remaining in the low-resolution mesh, are regular (i.e. have six neighbors). Vice versa, the regular subdivision of a possibly irregular low-resolution mesh yields a semi-regular mesh. Such a regular subdivision is achieved by inserting a vertex on each edge and splitting each original triangular face into four sub-triangles. [13, 24] refer to this property as Loop subdivision connectivity of the semi-regular mesh. The subdivision connectivity makes semi-regular meshes particularly useful for multiresolution analysis and directly implies a suitable local pooling operator on semi-regular meshes (see section 4).
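The combinatorics of one such subdivision step can be checked with a short sketch (our illustration; the icosahedron is just a convenient closed base mesh): each step adds one vertex per edge, replaces every edge by two and adds three interior edges per face, and quadruples the faces, preserving the Euler characteristic.

```python
def subdivide_counts(n_v, n_e, n_f):
    """Vertex/edge/face counts after one regular (Loop-connectivity)
    subdivision step of a triangle mesh."""
    return n_v + n_e, 2 * n_e + 3 * n_f, 4 * n_f

# Icosahedron (12 vertices, 30 edges, 20 faces) as a closed base mesh.
v, e, f = 12, 30, 20
for _ in range(2):                 # two refinement steps
    v, e, f = subdivide_counts(v, e, f)
    assert v - e + f == 2          # Euler characteristic of the sphere
print(v, e, f)                     # -> 162 480 320
```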
§ 3.3 REMESHING
We apply the remeshing from [2], because other algorithms, e.g. Neural Subdivision [22] or MAPS [14], only work for closed surfaces without boundaries and fail for base meshes as coarse as ours. The algorithm iteratively subdivides a coarse approximation of the original irregular mesh (see Figure 1). The resulting semi-regular mesh is fitted to the original mesh using gradient descent on a loss function based on the chamfer distance. The refinement level $rl$ specifies the number of times each face of the coarse base mesh is iteratively subdivided. The number of faces in the final semi-regular mesh is $n_F^{\text{semireg}} = 4^{rl} \cdot n_F^c$, with $n_F^c$ being the number of faces of the coarse base mesh. We choose the refinement level $rl = 4$, which leads to finer meshes compared to [2], who chose $rl = 3$.
After the remeshing, all vertices that are newly created during the subdivision have six neighbors. Therefore, the resulting mesh is semi-regular or has subdivision connectivity.
§ 4 SPECTRAL COSMA
The network handles the regional patches separately, which allows us to process meshes of different sizes. We describe how the graph convolution is combined with the padding, the surface-aware loss, and the pooling of the patches, and how one takes advantage of the semi-regular meshing. These building blocks are combined to define the spectral CoSMA (Spectral Convolutional Semi-Regular Mesh Autoencoder).
§ 4.1 SPECTRAL CHEBYSHEV CONVOLUTIONAL FILTERS
We apply fast Chebyshev filters [3], as in [15], with the distinction that we use them to perform spectral convolutions on the regional patches instead of the entire mesh. We justify this choice of convolution on the patches, in contrast to [2], by the intuition that spectral filters encode information of a whole patch and the general characteristics of its deformations, whereas spatial convolution considers just the local neighborhood around a vertex.
We use the formulation of [3] for convolving over our regularly meshed patches: we perform a spectral decomposition and apply the convolutions directly in frequency space.
The spectral filters are approximated by truncated Chebyshev polynomials, which avoid explicitly computing the Laplacian eigenvectors and thereby reduce the computational complexity.
The decomposition using spectral filters depends on the adjacency matrix, which restricts the transfer of spectral graph convolutions to meshes of the same connectivity. Nevertheless, the adjacency matrix of the patches of our semi-regular meshes is always the same for a given refinement level. This allows us to train the filters on all patches together and to apply them to unseen meshes.
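The underlying computation can be sketched as follows (our simplified illustration with scalar coefficients; in the network each Chebyshev coefficient is a learned weight matrix, and [3] uses a rescaled Laplacian so the polynomials stay well-conditioned). The filter is evaluated with the three-term recurrence instead of an eigendecomposition:

```python
import numpy as np

def cheb_conv(X, A, theta):
    """y = sum_k theta[k] * T_k(L_scaled) @ X, using the Chebyshev recurrence
    T_k = 2 * L_scaled @ T_{k-1} - T_{k-2}; requires len(theta) >= 2."""
    L = np.diag(A.sum(axis=1)) - A                  # combinatorial Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_s = 2.0 * L / lam_max - np.eye(len(A))        # spectrum rescaled to [-1, 1]
    t_prev, t_cur = X, L_s @ X                      # T_0 X and T_1 X
    out = theta[0] * t_prev + theta[1] * t_cur
    for k in range(2, len(theta)):
        t_prev, t_cur = t_cur, 2.0 * L_s @ t_cur - t_prev
        out = out + theta[k] * t_cur
    return out
```

Because the recurrence only involves (sparse) matrix-vector products, no eigenvectors are ever computed; the paper uses $K = 6$ polynomials per layer.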
§ 4.2 POOLING AND PADDING OF THE REGULAR PATCHES
We apply the patch-wise average pooling and unpooling from [2], which takes advantage of the multi-scale structure of the semi-regular meshes. The subdivision connectivity guarantees that every four faces can be uniformly pooled into one. The remaining vertices take the average of their own value and the values of the neighboring vertices that are removed. The unpooling operator subdivides the faces, and the newly created vertices are assigned the average value of the neighboring vertices from the lower-resolution mesh patch. A similar pooling and unpooling operator is also applied by [13], where the information is stored on the faces.
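A minimal sketch of the two operators (our illustration; the index maps `kept_to_removed` and `new_to_parents`, which in the real pipeline follow from the subdivision hierarchy, are hypothetical inputs here):

```python
import numpy as np

def pool_patch(values, kept_to_removed):
    """Average pooling: each kept coarse vertex averages its own value with
    the values of the removed fine-level neighbours assigned to it."""
    return np.array([np.mean([values[v]] + [values[r] for r in removed], axis=0)
                     for v, removed in kept_to_removed.items()])

def unpool_patch(coarse, new_to_parents):
    """Unpooling by subdivision: each newly inserted vertex receives the
    average of the coarse vertices spanning its edge."""
    new = [np.mean([coarse[p] for p in parents], axis=0) for parents in new_to_parents]
    return np.concatenate([coarse, np.array(new)], axis=0)
```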
The padding is crucial for the network to consider the regional patches in a larger context. Since the network handles the patches separately, we include the features of the neighboring patches in a padding of size 2, as in [2]. For patches at the mesh boundary, we pad with the features of the boundary vertices themselves.
§ 4.3 NETWORK ARCHITECTURE
While using specialized pooling and convolution techniques for the regular patches, the general structure of our network architecture is inspired by [2, 15]. Our autoencoder architecture combines spectral Chebyshev convolutional filters with the described pooling technique to process the padded regular patches of a semi-regular mesh. The autoencoder compresses every padded patch, which corresponds to one face of the low-resolution mesh, from $\mathbb{R}^{276 \times 3}$ ($rl = 4$) to an $hr = 10$ dimensional latent vector and reconstructs the original padded patch from the latent vector.
The encoder consists of two blocks, each containing a Chebyshev convolutional layer followed by an average pooling layer and an exponential linear unit (ELU) activation [25]. The output of the second encoding block is mapped to the latent space by a fully connected layer.
The decoder mirrors the structure of the encoder by first applying a fully connected layer, which transforms the latent vector back to a regular triangle representation with refinement level $rl = 2$. Afterward, two decoding blocks, each consisting of an unpooling layer followed by a convolutional layer, transform the coarse triangle representation back to the original padded patch representation. Finally, another Chebyshev convolutional layer without activation function reconstructs the original patch coordinates by reducing the number of features to three dimensions.
All Chebyshev convolutional layers use $K = 6$ Chebyshev polynomials. Table 3 in the supplementary material gives a detailed view of the structure of the network together with the parameter counts per layer, which sum up to 23,053. Figure 2 illustrates the patch sizes inside the autoencoder. Note that we are able to handle non-manifold edges of the coarse base mesh because the patches, whose interiors by construction have only manifold edges, are fed separately. The code will be provided as supplementary material.
This spectral CoSMA architecture can handle all surface meshes that have been remeshed into a semi-regular representation of the same refinement level. Thanks to the remeshing and the patch-wise processing of the regional padded patches, the workflow is independent of the original irregular mesh connectivity.
§ 4.4 SURFACE-AWARE LOSS CALCULATION
The authors of the patch-based spatial CoSMA [2] employ a patch-wise mean squared error as the training loss. However, that loss does not keep track of the multiple appearances of vertices on the patch boundaries. Therefore, it is not surface-aware, and during training the error on the patch boundaries is weighted higher than in the interior of the patches. By weighting each vertex-wise error in the training loss by the number of times the vertex appears in the different patches, we obtain a surface-aware training error. This reduces the P2S error, as shown in the ablation study.
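The reweighting can be sketched as follows (our illustration, assuming a precomputed per-vertex count of how often each vertex occurs across patches):

```python
import numpy as np

def surface_aware_mse(pred, target, appearances):
    """MSE over vertices where each squared error is down-weighted by the
    number of patches the vertex appears in, so that shared patch-boundary
    vertices do not contribute more than interior vertices."""
    w = 1.0 / np.asarray(appearances, dtype=float)
    sq_err = ((pred - target) ** 2).sum(axis=-1)   # per-vertex squared error
    return float((w * sq_err).sum() / w.sum())
```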
Table 1: Point to surface (P2S) errors ($\times 10^{-2}$) between reconstructed unseen semi-regular meshes ($rl = 4$) and the original irregular mesh, with standard deviations over three different training runs. [4, 13, 15] have to be trained per mesh; we and [2] train one network for all three animals in the GALLOP dataset. *: the elephant has not been seen by the network during training.

| Mesh Class | CoMA [15] | Neural3DMM [4] | SubdivNet [13] | Spatial CoSMA [2] | Ours |
| --- | --- | --- | --- | --- | --- |
| FAUST | 0.7073 ± 1.751 | 0.4064 ± 0.921 | 2.8190 ± 4.699 | 0.0224 ± 0.045 | **0.0031 ± 0.006** |
| Horse | 0.0053 ± 0.017 | 0.0096 ± 0.045 | 0.0113 ± 0.025 | 0.0078 ± 0.012 | **0.0022 ± 0.005** |
| Camel | 0.0075 ± 0.023 | 0.0145 ± 0.056 | 0.0113 ± 0.024 | 0.0091 ± 0.014 | **0.0030 ± 0.006** |
| Elephant | 0.0101 ± 0.031 | 0.0147 ± 0.057 | 0.0145 ± 0.032 | 0.0316 ± 0.068* | 0.0054 ± 0.012* |
Table 2: P2S errors ($\times 10^{-2}$) for three different training runs. Additionally, the Euclidean P2S error (in cm) is given. *: the entire YARIS dataset has not been seen by the network during training.

| Dataset | Component Lengths | Spatial CoSMA [2] Test P2S | Spatial CoSMA [2] Eucl. E. | Ours Test P2S | Ours Eucl. E. |
| --- | --- | --- | --- | --- | --- |
| TRUCK | 135-370 cm | 0.0660 ± 0.117 | 2.76 cm | 0.0013 ± 0.003 | 0.26 cm |
| YARIS* | 21-91 cm | 0.2061 ± 0.438 | 0.84 cm | 0.0375 ± 0.088 | 0.31 cm |
§ 5 EXPERIMENTS
We test our spectral CoSMA for semi-regular meshes using a setup similar to [2] on four different datasets and compare our achieved reconstruction errors to state-of-the-art surface mesh autoencoders.
§ 5.1 DATASETS
GALLOP: The dataset contains triangular meshes representing a motion sequence with 48 timesteps of a galloping horse, elephant, and camel [26]. The galloping movement is similar, but the meshes representing the surfaces of the three animals differ in connectivity and number of vertices. This is why the baseline autoencoders have to be trained three times. The surface approximations are remeshed to semi-regular meshes with refinement level $rl = 4$ for each animal. The new meshes are still of different connectivity, but all are made up of regional regular patches. Table 6 lists the resulting numbers of vertices. We normalize the semi-regular meshes to $[-1, 1]$ as in [2]. Before inputting the data to the CoSMAs, every patch is translated to zero mean. We use the first 70% of the galloping sequence of the horse and camel for training. The architecture is tested on the remaining 30% and on the whole sequence of the elephant, which is never seen by the CoSMAs during training.
FAUST: The dataset contains 100 meshes [27], which are in correspondence with each other. The irregular surface meshes represent 10 different bodies in 10 different poses. For the experiments, we place two unknown poses of all bodies (20% of the data) in the testing set. The meshes are remeshed and normalized in the same way as for the GALLOP dataset.
TRUCK and YARIS: In a car crash simulation, the car components, which are generally represented by surface meshes, often deform in different patterns. Every component is discretized by a surface mesh, while the local deformation is described by the same physical rules. Following [2], the TRUCK dataset contains 32 completed frontal crash simulations and 6 components, and the YARIS dataset contains 10 simulations and 10 components. 30 simulations and 70% of the timesteps of the TRUCK dataset are included in the training set. The remaining samples from the TRUCK dataset and the entire YARIS dataset, representing a different car, are used for testing. For this setup, the authors of [2, 28] detect patterns in the deformation of the TRUCK and YARIS components. We normalize the meshes that discretize the car components to zero mean and range $[-1, 1]$ while preserving the coordinates' aspect ratio. Every patch is translated to zero mean.
Figure 3: Reconstructed unknown FAUST pose and elephant test sample at $t = 43$ by CoMA, Neural3DMM, the SubdivNet autoencoder, the spatial CoSMA, and our network. The P2S error of the reconstructed faces is highlighted. More reconstruction examples are given in the supplementary material. *: The elephant's mesh has not been presented during training to the spatial CoSMA and our network.
§ 5.2 TRAINING DETAILS
We train the network (implemented in PyTorch [29] and PyTorch Geometric [30]) with the adaptive learning rate optimization algorithm [31]. For the GALLOP and FAUST datasets, we use a learning rate of 0.0001 and train for 150 epochs using a batch size of 100. For the TRUCK data, we choose a batch size of 100 combined with a learning rate of 0.001 for 300 epochs, since the variation inside the dataset is higher. We minimize the surface-aware loss between the original and reconstructed regional patches of the surface mesh without considering the padding. To augment the data in the case of the GALLOP and FAUST datasets, we rotate the regional patches by $0^\circ$, $120^\circ$, and $240^\circ$.
Our architecture requires at least 50% fewer parameters than the CoMA, Neural3DMM, and SubdivNet networks, because for increasing $rl$ and consequently finer meshes, the CoSMAs require only a few additional parameters in the linear layers (compare Tables 6 and 7 in the supplementary material). This is because the convolutional filters' parameters are shared across the patches. The spectral CoSMA requires 15% fewer parameters than the spatial CoSMA. A runtime analysis and an ablation study justifying parameter choices are provided in the supplementary material.
§ 5.3 RECONSTRUCTIONS OF THE MESHES
The mean squared error between true and reconstructed vertices of the semi-regular mesh allows a comparison of different methods only if the same remeshing result is used. In contrast to [2], we compare the reconstructed semi-regular mesh directly to the original irregular surface mesh by calculating a point to surface (P2S) error: we average the squared distances between the vertices of the semi-regular mesh and their orthogonal projections onto the surface described by the irregular mesh. This allows us to compare reconstruction errors across different remeshing results or refinement levels.
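Such a P2S error can be sketched with the standard closest-point-on-triangle routine (our illustration following Ericson's well-known Voronoi-region formulation; the paper's actual evaluation code may differ):

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), via Voronoi-region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                      # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                      # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab              # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                      # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac              # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge bc
    denom = va + vb + vc                              # interior: barycentric
    return a + (vb / denom) * ab + (vc / denom) * ac

def p2s(points, triangles):
    """Average squared distance from each point to its nearest triangle."""
    return float(np.mean([
        min(np.sum((p - closest_point_on_triangle(p, *t)) ** 2) for t in triangles)
        for p in points
    ]))
```

For large meshes a spatial acceleration structure would replace the brute-force minimum over triangles; the sketch only shows the metric itself.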
Besides CoMA [15] and Neural3DMM [4], we use an additional baseline semi-regular mesh autoencoder that uses our network's architecture with the pooling and convolutional layers from SubdivNet [13] to process the entire meshes. In Table 1 we compare the autoencoders on the GALLOP and FAUST datasets in terms of the P2S errors of reconstructed test samples, whose 3D coordinates lie in the range $[-1, 1]$. Our network reduces the test reconstruction error for the GALLOP and FAUST datasets by more than 50% and 80%, respectively, if the shape is presented to the autoencoder during training. For unknown poses from the FAUST dataset, the limbs' positions are reconstructed inaccurately by the CoMA, Neural3DMM, and SubdivNet autoencoders. Especially if the pose is not similar to the training poses, their reconstruction fails, as Figure 3 illustrates.
Figure 4: Reconstructed front beams from the TRUCK (length of 150 cm) at time $t = 24$ (test sample) from two crash simulations representing different deformation behavior, and from the YARIS (length of 65 cm) at $t = 15$. The average Euclidean P2S error (in cm) of the faces is highlighted.
The spectral CoSMA's reconstructions are generally smoother than those of the spatial CoSMA, which reduces the reconstruction errors. Figure 7 in the supplementary material shows that the patch reconstructed using spectral filters, which encode the connectivity of the whole patch in the Chebyshev polynomials, is smoother than the spatial reconstruction, where the convolutional kernels only consider the close neighborhood. Because the spatial CoSMA uses $hr = 8$ and no surface-aware loss, we also list our reconstruction errors using these parameters in the ablation study for a complete comparison.
Transfer Learning to Meshes with New Connectivity: Our spectral CoSMA and the spatial CoSMA are the only networks that can reconstruct an unseen shape of different connectivity. The elephant's mesh has never been presented to our network; nevertheless, our reconstruction error is lower. Even though trained on the elephant, the baselines' reconstructions are worse and unstable in the legs, as Figure 3 illustrates. The spatial CoSMA's reconstructions of the unseen elephant are inferior to those of all the other networks, although its reconstructions of the known camel and horse are of similar quality to the other baselines. This highlights the improved transfer learning and generalization capability of the new spectral approach.
Since the TRUCK and YARIS datasets contain 16 different meshes, the reconstruction results are compared between the CoSMA architectures. In Table 2 we present the average P2S errors for the TRUCK and YARIS datasets, both for the components scaled to the range $[-1, 1]$ and in cm. The entire YARIS dataset has never been presented to the network during training. The results on the YARIS in Figure 4 also show that our network not only reconstructs smoother surfaces in comparison to the spatial CoSMA but also has higher transfer learning capacity.
A comparison of the results for refinement levels $rl = 3$ and $rl = 4$ on the TRUCK and YARIS datasets (see Table 8 in the supplementary material) shows the stability of the results of our spectral CoSMA. For the spatial CoSMA, on the other hand, the reconstruction quality decreases when increasing the refinement level. This is due to its fixed kernel size of 2: since the mesh is finer, the neighborhoods considered by a spatial filter of kernel size 2 cover smaller areas of the surface. The spectral CoSMA considers the entire patches in their spectral representation. Therefore, an increase in the refinement level does not impair the reconstruction quality.
§ 5.4 LOW-DIMENSIONAL EMBEDDING
We project the patch-wise hidden representations of size $hr$ into two-dimensional space using the linear dimensionality reduction method Principal Component Analysis (PCA) [32]. Then we compare these patch-wise results to the 2D embedding over time of the whole shape, which we obtain by concatenating the hidden patch-wise representations and then applying PCA.
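The PCA projection can be sketched in a few lines (our illustration via SVD; the paper cites [32] and may use a library implementation):

```python
import numpy as np

def pca_2d(H):
    """Project the rows of H (samples x features) onto the first two
    principal components of the mean-centered data."""
    Hc = H - H.mean(axis=0)
    _, _, Vt = np.linalg.svd(Hc, full_matrices=False)   # rows of Vt: components
    return Hc @ Vt[:2].T
```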
The time-dependent embedding of the unseen elephant from the GALLOP dataset exhibits a periodic galloping sequence, visualized in Figure 5 (a). We compare how similar the 2D patch-wise embeddings are to the 2D embedding of the entire shape, to determine how important the deformation of a patch is for the general deformation behavior of the whole shape. The patch-wise distance is visualized in Figure 5 (b), and its calculation is detailed in the supplementary material. We notice that this distance is lowest for the body and legs, which define the elephant's gallop, whereas the movement of the head does not follow the periodic pattern.
Figure 5: (a) 2D embedding of the low-dimensional representation of the whole elephant over time. (b) Highlighting the distance of the patch-wise embeddings to the embedding of the whole shape. (c) Patch-wise score for the TRUCK's front beam from Figure 4 at $t = 24$. Only the patch with the high score manifests the deformation in two patterns. This is visible in the example patches with high and low scores. The embedding's colors encode timestep and branch.
For the TRUCK and YARIS datasets, the goal is the detection of clusters corresponding to different deformation patterns in the components' embeddings. This speeds up the analysis of car crash simulations, since relations between model parameters and the deformation behavior are discovered more easily [28, 33]. In the 2D visualizations for the TRUCK components, we detect two clusters corresponding to different deformation behaviors, and our patch-based approach allows us to identify the patches that contribute most to this. For each patch, we define a score equal to the accuracy of an SVM (between 0.5 and 1) that classifies the observed two deformation patterns of the entire component from the patch's embedding, see Figure 5 (c). The highlighted patches correspond to the left part of the beam, where the deformation is visibly different for the two TRUCK simulations in Figure 4. Note that this comparison of patch- and shape-embeddings does not lead to significant results for the spatial CoSMA [2] because of the instability of its results.
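The per-patch score can be sketched as follows (our illustration using scikit-learn's `SVC`; the kernel choice and the use of training accuracy are assumptions not specified in the text):

```python
import numpy as np
from sklearn.svm import SVC

def patch_score(patch_embedding_2d, pattern_labels):
    """Accuracy of an SVM that predicts the component-level deformation
    pattern (two classes) from a single patch's 2D embedding."""
    clf = SVC(kernel="linear").fit(patch_embedding_2d, pattern_labels)
    return clf.score(patch_embedding_2d, pattern_labels)
```

A score near 1 means the patch's embedding alone separates the two deformation patterns; a score near 0.5 means it carries no information about them.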
For the YARIS, which has never been seen by the network during training, we also visualize the low-dimensional representation of different components in 2D using PCA. We detect a deformation pattern in the front beams that splits the simulation set into two clusters, see Figure 9 in the supplementary material, a result similar to that of [2], who used a nonlinear dimensionality reduction.
§ 6 CONCLUSION
We have introduced a novel spectral mesh autoencoder pipeline for the analysis of deforming 3D semi-regular surface meshes with different connectivity. This allows us to generate high-quality reconstructions of unseen meshes that have not been presented during training. In fact, the reconstruction quality for unknown meshes with our spectral CoSMA is higher than with baseline autoencoders that have seen the meshes during training. This improved transfer learning capability and reconstruction quality motivate the future analysis of generative models for the patch-based approach. For high-quality generative results, we also plan to improve the remeshing procedure to focus more on detailed structures. Right now, the loss of smaller detailed geometric structures in the remeshing has little effect on the results, since we aim to detect behavioral patterns in the low-dimensional representations of the global deformation.
Additionally, we provide an understanding and interpretation of which surface areas lead to the patterns in the embedding space. We speculate that this information per patch could be used in further analysis. We also plan to apply the architecture to other tasks such as shape matching and segmentation.
papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,269 @@
# Expectation Complete Graph Representations Using Graph Homomorphisms
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Anonymous Affiliation
|
| 6 |
+
|
| 7 |
+
Anonymous Email
|
| 8 |
+
|
| 9 |
+
## Abstract
|
| 10 |
+
|
| 11 |
+
We propose and study a practical graph embedding that in expectation is able to distinguish all non-isomorphic graphs and can be computed in polynomial time. The embedding is based on Lovász' characterization of graph isomorphism through an infinite dimensional vector of homomorphism counts. Recent work has studied the expressiveness of graph embeddings by comparing their ability to distinguish graphs to that of the Weisfeiler-Leman hierarchy. While previous methods have either limited expressiveness or are computationally impractical, we devise efficient sampling-based alternatives that are maximally expressive in expectation. We empirically evaluate our proposed embeddings and show competitive results on several benchmark graph learning tasks.
|
| 12 |
+
|
| 13 |
+
## 1 Introduction
|
| 14 |
+
|
| 15 |
+
We study novel efficient and expressive graph embeddings based on Lovász' characterisation of graph isomorphism through homomorphism counts. While most practical graph embeddings drop the property of completeness, that is, the ability to distinguish all non-isomorphic graphs, in favour of runtime, we devise efficient embeddings that retain completeness in expectation. To achieve that, we sample pattern graphs in a particular way, simultaneously guaranteeing completeness and polynomial runtime in expectation. We discuss related work, in particular the relationship to the $k$-dimensional Weisfeiler-Leman isomorphism test, and show first results on benchmark datasets.
|
| 16 |
+
|
| 17 |
+
While subgraph counts are also a reasonable choice for expectation complete graph embeddings, they have multiple drawbacks compared to homomorphism counts. Most importantly, from a computational perspective, computing subgraph counts even for simple patterns such as trees or paths is NP-hard [Alon et al., 1995; Marx and Pilipczuk, 2014], while we can compute homomorphism counts efficiently [Díaz et al., 2002] as long as the pattern graphs have small treewidth, a measure of 'tree-likeness'. In particular, all known exact algorithms for subgraph isomorphism have a runtime exponential in the pattern size or the maximum degree of the pattern even for small treewidth, which is one of the main reasons why the graphlet kernel [Shervashidze et al., 2009] and similar fixed-pattern-based approaches [Bouritsas et al., 2022] only count subgraphs up to size around 5.
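To make the distinction concrete, the following pure-Python sketch (our own illustration, not the paper's implementation) counts arbitrary and injective vertex maps by brute force; injective homomorphisms correspond to subgraph occurrences up to automorphisms of the pattern:

```python
from itertools import product

def maps(F_edges, F_n, G_edges, G_n, injective):
    """Count maps V(F) -> V(G) that send every pattern edge to an edge of G,
    optionally requiring the map to be injective. Exponential brute force."""
    adj = {frozenset(e) for e in G_edges}
    return sum(
        (not injective or len(set(phi)) == F_n)
        and all(frozenset({phi[u], phi[v]}) in adj for u, v in F_edges)
        for phi in product(range(G_n), repeat=F_n)
    )

# path with two edges mapped into a triangle
path2 = [(0, 1), (1, 2)]
triangle = [(0, 1), (1, 2), (0, 2)]
print(maps(path2, 3, triangle, 3, injective=False))  # 12 homomorphisms
print(maps(path2, 3, triangle, 3, injective=True))   # 6 injective ones
```

The brute force is fine for illustration only; efficient counting of homomorphisms exploits the treewidth of the pattern, which has no analogue for subgraph counts.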
|
| 18 |
+
|
| 19 |
+
Probably most important from a conceptual perspective is the relationship of homomorphism counts to the cut distance [Borgs et al., 2006; Lovász, 2012]. The cut distance is a well-studied and important distance on graphs that captures global structural as well as sampling-based local information. It is well known that the distance given by (potentially approximated and sampled) homomorphism counts is close to the cut distance and hence has similar favourable properties. The cut distance, and hence homomorphism counts, capture the behaviour of all permutation-invariant functions on graphs. For an ongoing discussion about the importance of the cut distance and homomorphism counts in the context of graph learning, see Dell et al. [2018], Grohe [2020], and Hoang and Maehara [2020].
|
| 20 |
+
|
| 21 |
+
Completeness in expectation essentially implies one powerful fact which no deterministic embedding with bounded expressiveness can guarantee: repetition will make the embedding more expressive eventually. If the graph embedding is complete in expectation it is guaranteed that sampling more patterns will eventually increase its expressiveness.
|
| 22 |
+
|
| 23 |
+
## 2 Complete Graph Embeddings
|
| 24 |
+
|
| 25 |
+
The graph isomorphism problem is a classical problem in graph theory and its computational complexity is a major open problem [Babai, 2016]. Following the classical result of Lovász [1967], two graphs are isomorphic if and only if they have the same infinite dimensional homomorphism count vectors. This provides a strong graph embedding for graph classification tasks [Barceló et al., 2021; Dell et al., 2018; Hoang and Maehara, 2020].
|
| 26 |
+
|
| 27 |
+
A graph $G = \left( {V\left( G\right) , E\left( G\right) }\right)$ consists of a set $V\left( G\right)$ of vertices and a set $E\left( G\right) \subseteq \{ e \subseteq V\left( G\right) \mid \left| e\right| = 2\}$ of edges. The size of a graph is the number of its vertices. In the following, $F$ and $G$ denote graphs, where $F$ represents a pattern graph and $G$ a graph in our training set. A homomorphism $\varphi : V\left( F\right) \rightarrow V\left( G\right)$ is a map that respects edges, i.e., $\{ v, w\} \in E\left( F\right) \Rightarrow \{ \varphi \left( v\right) ,\varphi \left( w\right) \} \in E\left( G\right)$ . An isomorphism is a bijective homomorphism whose inverse is also a homomorphism. We say that a distribution $\mathcal{D}$ over a countable domain $\mathcal{X}$ has full support if each $x \in \mathcal{X}$ has nonzero probability.
|
| 28 |
+
|
| 29 |
+
Let ${\mathcal{G}}_{n}$ be the set of all finite graphs of size at most $n$ , let $\hom \left( {F, G}\right)$ denote the number of homomorphisms of $F$ to $G$ for arbitrary graphs, and let ${\varphi }_{n}\left( G\right) = \hom \left( {{\mathcal{G}}_{n}, G}\right) = {\left( \hom \left( F, G\right) \right) }_{F \in {\mathcal{G}}_{n}}$ denote the Lovász vector of $G$ for ${\mathcal{G}}_{n}$ . Lovász [1967] proved the following classical theorem.
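For intuition, $\hom(F, G)$ can be computed by brute force for tiny graphs. The sketch below is our own illustration (exponential in $|V(F)|$, so it is not the efficient treewidth-based algorithm of Díaz et al. [2002]); it simply checks every vertex map:

```python
from itertools import product

def hom(F_edges, F_n, G_edges, G_n):
    """Brute-force count of homomorphisms from pattern F to graph G.
    Graphs are given as an edge list plus a vertex count; vertices are 0..n-1."""
    adj = {frozenset(e) for e in G_edges}
    count = 0
    for phi in product(range(G_n), repeat=F_n):
        # phi is a homomorphism iff every pattern edge lands on an edge of G
        if all(frozenset({phi[u], phi[v]}) in adj for u, v in F_edges):
            count += 1
    return count

triangle = [(0, 1), (1, 2), (0, 2)]
print(hom([(0, 1)], 2, triangle, 3))  # 6: a single edge maps onto any ordered pair
```

Each entry of the Lovász vector $\varphi_n(G)$ is one such count, indexed by a pattern $F \in \mathcal{G}_n$.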
|
| 30 |
+
|
| 31 |
+
Theorem 1 (Lovász [1967]). Two arbitrary graphs $G, H \in {\mathcal{G}}_{n}$ are isomorphic iff ${\varphi }_{n}\left( G\right) = {\varphi }_{n}\left( H\right)$ .
|
| 32 |
+
|
| 33 |
+
We can define a simple kernel on ${\mathcal{G}}_{n}$ with the canonical inner product using ${\varphi }_{n}$ .
|
| 34 |
+
|
| 35 |
+
Definition 2 (Complete Lovász kernel). Let ${k}_{{\varphi }_{n}}\left( {G, H}\right) = \left\langle {{\varphi }_{n}\left( G\right) ,{\varphi }_{n}\left( H\right) }\right\rangle$ .
|
| 36 |
+
|
| 37 |
+
Note that ${k}_{{\varphi }_{n}}$ is a complete graph kernel [Gärtner et al.,2003] on ${\mathcal{G}}_{n}$ , i.e., ${k}_{{\varphi }_{n}}$ can be used to distinguish non-isomorphic graphs of size $n$ . Similarly, we define complete graph embeddings.
|
| 38 |
+
|
| 39 |
+
Definition 3. Let $\varphi : \mathcal{G} \rightarrow X$ be a permutation-invariant graph embedding from a family of graphs $\mathcal{G}$ to a vector space $X$ . We call $\varphi$ complete (on $\mathcal{G}$ ) if $\varphi \left( G\right) \neq \varphi \left( H\right)$ for all non-isomorphic $G, H \in \mathcal{G}$ .
|
| 40 |
+
|
| 41 |
+
When studying graph embeddings and graph kernels we face a tradeoff between efficiency and expressiveness: complete graph representations are unlikely to be computable in polynomial time [Gärtner et al., 2003] and hence most practical graph representations drop completeness in favour of polynomial runtime. In our work, we study random graph representations: while they drop completeness and are efficiently computable, they retain a slightly weaker yet desirable property, completeness in expectation.
|
| 42 |
+
|
| 43 |
+
Definition 4. A graph embedding ${\varphi }_{X}$ , which depends on a random variable $X$ , is complete in expectation if the graph embedding given by the expectation, ${\mathbb{E}}_{X}\left\lbrack {{\varphi }_{X}\left( \cdot \right) }\right\rbrack$ , is complete.
|
| 44 |
+
|
| 45 |
+
Similarly, we say that the corresponding kernel ${k}_{X}\left( {G, H}\right) = \left\langle {{\varphi }_{X}\left( G\right) ,{\varphi }_{X}\left( H\right) }\right\rangle$ is complete in expectation. We can use Lovász' isomorphism theorem to devise graph embeddings that are complete in expectation. For that, let ${e}_{F} \in {\mathbb{R}}^{{\mathcal{G}}_{n}}$ be the '$F$th' standard basis unit-vector of ${\mathbb{R}}^{{\mathcal{G}}_{n}}$.
|
| 46 |
+
|
| 47 |
+
Theorem 5. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{n}$ with full support and $G \in {\mathcal{G}}_{n}$ . Then the graph embedding ${\varphi }_{F}\left( G\right) = \hom \left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel $k$ are complete in expectation.
|
| 48 |
+
|
| 49 |
+
### 2.1 Expectation Complete Embeddings and Kernels on ${\mathcal{G}}_{\infty }$
|
| 50 |
+
|
| 51 |
+
In this section, we generalise the previous result to the set of all finite graphs ${\mathcal{G}}_{\infty }$ . Theorem 1 holds for $G, H \in {\mathcal{G}}_{\infty }$ and the mapping ${\varphi }_{\infty }$ that maps each $G \in {\mathcal{G}}_{\infty }$ to an infinite-dimensional vector. The resulting vector space, however, is not a Hilbert space with the usual inner product. To see this, consider any graph $G$ that has at least one edge. Then $\hom \left( {{P}_{n}, G}\right) \geq 2$ for every path ${P}_{n}$ of length $n \in \mathbb{N}$ . Thus, the inner product $\left\langle {{\varphi }_{\infty }\left( G\right) ,{\varphi }_{\infty }\left( G\right) }\right\rangle$ is not finite.
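The divergence argument can be checked numerically: every path has exactly two homomorphisms into the single-edge graph $K_2$, since the image of the first vertex can be chosen freely and every subsequent vertex is forced to alternate. A minimal sketch (function name is ours):

```python
from itertools import product

def hom_path_into_edge(path_len):
    """Count homomorphisms of the path with `path_len` edges into a single edge K2."""
    count = 0
    for phi in product((0, 1), repeat=path_len + 1):
        # consecutive path vertices must map to the two distinct endpoints of K2
        if all(phi[i] != phi[i + 1] for i in range(path_len)):
            count += 1
    return count

# hom(P_n, K2) = 2 for every n, so the squared norm of the full Lovász
# vector of any graph containing an edge diverges
print([hom_path_into_edge(n) for n in range(1, 6)])  # [2, 2, 2, 2, 2]
```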
|
| 52 |
+
|
| 53 |
+
To define a kernel on ${\mathcal{G}}_{\infty }$ without fixing a maximum size of graphs, i.e., restricting to ${\mathcal{G}}_{n}$ for some $n \in \mathbb{N}$ , we define the countable-dimensional vector ${\bar{\varphi }}_{\infty }\left( G\right) = {\left( {\hom }_{\left| V\left( G\right) \right| }\left( F, G\right) \right) }_{F \in {\mathcal{G}}_{\infty }}$ where
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
{\hom }_{\left| V\left( G\right) \right| }\left( {F, G}\right) = \left\{ \begin{array}{ll} \hom \left( {F, G}\right) & \text{ if }\left| {V\left( F\right) }\right| \leq \left| {V\left( G\right) }\right| , \\ 0 & \text{ if }\left| {V\left( F\right) }\right| > \left| {V\left( G\right) }\right| . \end{array}\right.
|
| 57 |
+
$$
|
| 58 |
+
|
| 59 |
+
That is, ${\bar{\varphi }}_{\infty }\left( G\right)$ is the projection of ${\varphi }_{\infty }\left( G\right)$ to the subspace that gives us the homomorphism counts for all graphs of size at most that of $G$ . Note that this is a well-defined map of graphs to a subspace of the ${\ell }^{2}$ space, i.e., sequences ${\left( {x}_{i}\right) }_{i}$ over $\mathbb{R}$ with $\mathop{\sum }\limits_{i}{\left| {x}_{i}\right| }^{2} < \infty$ . Hence, the kernel given by the canonical inner product ${\bar{k}}_{\infty }\left( {G, H}\right) = \left\langle {{\bar{\varphi }}_{\infty }\left( G\right) ,{\bar{\varphi }}_{\infty }\left( H\right) }\right\rangle$ is finite and positive semi-definite. Note that we can rewrite
|
| 60 |
+
|
| 61 |
+
${\bar{k}}_{\infty }\left( {G, H}\right) = {k}_{\min }\left( {G, H}\right) = \left\langle {{\varphi }_{{n}^{\prime }}\left( G\right) ,{\varphi }_{{n}^{\prime }}\left( H\right) }\right\rangle$ where ${n}^{\prime } = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ . While the first hunch might be to count patterns up to size $\max \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ , this is not necessary to guarantee completeness. Moreover, the corresponding map ${k}_{\max }$ is not even positive semi-definite.
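A brute-force sketch of the truncated kernel follows. For simplicity it enumerates labelled patterns rather than isomorphism classes, so it is an illustrative variant of $k_{\min}$, not the exact kernel from the text; all names are ours:

```python
from itertools import product, combinations

def hom(F_edges, F_n, G_edges, G_n):
    # brute-force homomorphism count (exponential; for illustration only)
    adj = {frozenset(e) for e in G_edges}
    return sum(
        all(frozenset({phi[u], phi[v]}) in adj for u, v in F_edges)
        for phi in product(range(G_n), repeat=F_n)
    )

def labelled_patterns(max_n):
    # all labelled graphs on 1..max_n vertices; a stand-in for the
    # pattern set up to isomorphism used in the text
    for m in range(1, max_n + 1):
        slots = list(combinations(range(m), 2))
        for mask in range(2 ** len(slots)):
            yield [slots[i] for i in range(len(slots)) if mask >> i & 1], m

def k_min(G, H):
    # inner product of Lovász vectors truncated at min(|V(G)|, |V(H)|)
    n_min = min(G[1], H[1])
    return sum(hom(F, m, *G) * hom(F, m, *H) for F, m in labelled_patterns(n_min))

triangle = ([(0, 1), (1, 2), (0, 2)], 3)
path3 = ([(0, 1), (1, 2)], 3)
print(k_min(triangle, path3) == k_min(path3, triangle))  # symmetric: True
```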
|
| 62 |
+
|
| 63 |
+
Lemma 6. ${k}_{\min }$ is a complete kernel on ${\mathcal{G}}_{\infty }$ .
|
| 64 |
+
|
| 65 |
+
Given a sample of graphs $S$ , we note that for $n = \mathop{\max }\limits_{{G \in S}}\left| {V\left( G\right) }\right|$ we only need to consider patterns up to size $n$ .${}^{1}$ As the number of graphs of a given size $n$ is superexponential in $n$ , it is impractical to compute all such counts. Hence, we propose to resort to sampling.
|
| 66 |
+
|
| 67 |
+
Theorem 7. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{\infty }$ with full support and $G \in {\mathcal{G}}_{\infty }$ . Then ${\bar{\varphi }}_{F}\left( G\right) =$ ${\hom }_{\left| V\left( G\right) \right| }\left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel are complete in expectation.
|
| 68 |
+
|
| 69 |
+
### 2.2 Sampling multiple patterns
|
| 70 |
+
|
| 71 |
+
Sampling just one pattern $F$ will not result in a practical graph embedding. Thus, we propose to sample $\ell$ patterns ${F}_{1},\ldots ,{F}_{\ell } \sim \mathcal{D}$ i.i.d. and construct the embedding ${\varphi }^{\ell }\left( G\right) \in {\mathbb{N}}_{0}^{\ell }$ with ${\left( {\varphi }^{\ell }\left( G\right) \right) }_{i} = \hom \left( {{F}_{i}, G}\right)$ if $\left| {V\left( {F}_{i}\right) }\right| \leq \left| {V\left( G\right) }\right|$ and 0 otherwise for all $i \in \left\lbrack \ell \right\rbrack$ . Note that, for the dot product it holds that ${\varphi }^{\ell }{\left( G\right) }^{T}{\varphi }^{\ell }\left( H\right) = \mathop{\sum }\limits_{{i = 1}}^{\ell }\left\langle {{\bar{\varphi }}_{{F}_{i}}\left( G\right) ,{\bar{\varphi }}_{{F}_{i}}\left( H\right) }\right\rangle$ as long as we do not sample patterns twice. ${}^{2}$
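The resulting $\ell$-dimensional embedding can be sketched as follows, with brute-force counting and three hypothetical fixed patterns standing in for sampled ones:

```python
from itertools import product

def hom(F_edges, F_n, G_edges, G_n):
    # brute-force homomorphism count (illustration only)
    adj = {frozenset(e) for e in G_edges}
    return sum(
        all(frozenset({phi[u], phi[v]}) in adj for u, v in F_edges)
        for phi in product(range(G_n), repeat=F_n)
    )

def embed(G, patterns):
    """phi^ell(G): one homomorphism count per pattern, zeroed out
    whenever the pattern has more vertices than G."""
    return [hom(F, m, *G) if m <= G[1] else 0 for F, m in patterns]

# hypothetical sampled patterns: an edge, a wedge, and a 4-clique
patterns = [
    ([(0, 1)], 2),
    ([(0, 1), (1, 2)], 3),
    ([(a, b) for a in range(4) for b in range(a + 1, 4)], 4),
]
triangle = ([(0, 1), (1, 2), (0, 2)], 3)
print(embed(triangle, patterns))  # [6, 12, 0] -- the 4-clique is larger than G
```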
|
| 72 |
+
|
| 73 |
+
## 3 Computing Embeddings in Expected Polynomial Time
|
| 74 |
+
|
| 75 |
+
A graph embedding that is complete in expectation must be efficiently computable to be practical. In this section, we describe our main result achieving polynomial runtime in expectation. The best known algorithms [Díaz et al.,2002] to exactly compute $\hom \left( {F, G}\right)$ take time
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right) \tag{1}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
where $\operatorname{tw}\left( F\right)$ is the treewidth of the pattern graph $F$ . Thus, a straightforward sampling strategy to achieve polynomial runtime in expectation is to give decreasing probability mass to patterns with higher treewidth. Unfortunately, in the case of ${\mathcal{G}}_{\infty }$ this is not possible.
|
| 82 |
+
|
| 83 |
+
Lemma 8. There exists no distribution $\mathcal{D}$ with full support on ${\mathcal{G}}_{\infty }$ such that the expected runtime of Eq. (1) becomes polynomial in $\left| {V\left( G\right) }\right|$ for all $G \in {\mathcal{G}}_{\infty }$ .
|
| 84 |
+
|
| 85 |
+
To resolve this issue we have to take the size of the largest graph in our sample into account. For a given sample $S \subseteq {\mathcal{G}}_{n}$ of graphs, where $n$ is the maximum number of vertices in $S$ , we can construct simple distributions achieving polynomial time in expectation.
|
| 86 |
+
|
| 87 |
+
Theorem 9. There exists a distribution $\mathcal{D}$ such that computing the expectation complete graph embedding ${\bar{\varphi }}_{X}\left( G\right)$ takes polynomial time in $\left| {V\left( G\right) }\right|$ in expectation for all $G \in {\mathcal{G}}_{n}$ .
|
| 88 |
+
|
| 89 |
+
Proof (sketch). We first draw a treewidth upper bound $k$ from an appropriate distribution. For example, a Poisson distribution with parameter $\lambda = \mathcal{O}\left( \log n / n\right)$ is sufficient. We have to ensure that each possible graph with treewidth up to $k$ gets a nonzero probability of being drawn. For that we first draw a $k$ -tree, a maximal graph of treewidth $k$ , and then take a random subgraph of it.
|
| 90 |
+
|
| 91 |
+
Note that we do not require that the patterns are sampled uniformly at random. It merely suffices that each pattern has a nonzero probability of being drawn. To satisfy a runtime of $\mathcal{O}\left( {\left| V\left( G\right) \right| }^{d + 1}\right)$ in expectation, for example, a Poisson distribution with $\lambda \leq \frac{1 + d\log n}{n}$ is sufficient.
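The treewidth-bound sampling step can be sketched as follows, assuming the choice $\lambda = (1 + d \log n)/n$ from the text; the Poisson sampler is Knuth's multiplication method, which is adequate for small $\lambda$, and all names are ours:

```python
import math
import random

def sample_treewidth_bound(n, d, rng):
    """Draw a treewidth upper bound k ~ Poisson(lambda) with
    lambda = (1 + d*log n)/n, which keeps E[n^(k+1)] in O(n^(d+1))."""
    lam = (1 + d * math.log(n)) / n
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        # Knuth's method: multiply uniforms until the product drops below e^(-lambda)
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(0)
draws = [sample_treewidth_bound(100, 2, rng) for _ in range(10_000)]
print(sum(draws) / len(draws))  # close to lambda, roughly 0.1 for n=100, d=2
```

Most draws are 0 or 1, so most sampled patterns are trees or have treewidth at most 2, while larger treewidths still receive nonzero probability.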
|
| 92 |
+
|
| 93 |
+
## 4 Related Work
|
| 94 |
+
|
| 95 |
+
The $k$ -dimensional Weisfeiler-Leman (WL) test and the Lovász vector restricted to patterns up to treewidth $k$ are equally expressive [Dell et al.,2018; Dvořák,2010]. We propose an efficiently computable embedding matching the expressiveness of $k$ -WL, and hence also MPNNs and $k$ -GNNs [Morris et al., 2019; Xu et al., 2019], in expectation, see Appendix D.
|
| 96 |
+
|
| 97 |
+
Dell et al. [2018] proposed a complete graph kernel based on homomorphism counts related to our ${k}_{\min }$ kernel. Instead of implicitly restricting the embedding to only a finite number of patterns, as we do, they weigh the homomorphism counts such that the inner product defined on the whole Lovász vectors converges. However, Dell et al. [2018] do not discuss runtime aspects and so, our approach can be seen as an efficient sampling-based alternative to their weighted kernel.
|
| 98 |
+
|
| 99 |
+
---
|
| 100 |
+
|
| 101 |
+
${}^{1}$ Actually, it is sufficient to go up to the size of the second largest graph.
|
| 102 |
+
|
| 103 |
+
${}^{2}$ Note that it does not affect the expressiveness results if we sample a pattern multiple times.
|
| 104 |
+
|
| 105 |
+
---
|
| 106 |
+
|
| 107 |
+
Table 1: Cross-validation accuracies on benchmark datasets
|
| 108 |
+
|
| 109 |
+
<table><tr><td>method</td><td>MUTAG</td><td>IMDB-BIN</td><td>IMDB-MULTI</td><td>PAULUS25</td><td>CSL</td></tr><tr><td>GHC-tree</td><td>${89.28} \pm {8.26}$</td><td>${72.10} \pm {2.62}$</td><td>${48.60} \pm {4.40}$</td><td>${7.14} \pm {0.00}$</td><td>${10.00} \pm {0.00}$</td></tr><tr><td>GHC-cycle</td><td>${87.81} \pm {7.46}$</td><td>${70.93} \pm {4.54}$</td><td>${47.41} \pm {3.67}$</td><td>${7.14} \pm {0.00}$</td><td>${100.00} \pm {0.00}$</td></tr><tr><td>GNTK</td><td>${89.46} \pm {7.03}$</td><td>${75.61} \pm {3.98}$</td><td>${51.91} \pm {3.56}$</td><td>${7.14} \pm {0.00}$</td><td>${10.00} \pm {0.00}$</td></tr><tr><td>GIN</td><td>${89.40} \pm {5.60}$</td><td>${70.70} \pm {1.10}$</td><td>${43.20} \pm {2.00}$</td><td>${7.14} \pm {0.00}$</td><td>${10.00} \pm {0.00}$</td></tr><tr><td>ours (SVM)</td><td>${86.85} \pm {1.28}$</td><td>${69.83} \pm {0.15}$</td><td>${47.31} \pm {0.46}$</td><td>${100.00} \pm {0.00}$</td><td>${38.89} \pm {11.18}$</td></tr><tr><td>ours (MLP)</td><td>${88.33} \pm {1.11}$</td><td>${70.37} \pm {0.85}$</td><td>${48.75} \pm {0.20}$</td><td>${49.84} \pm {6.74}$</td><td>${11.78} \pm {1.54}$</td></tr></table>
|
| 110 |
+
|
| 111 |
+
Using graph homomorphism counts as a feature embedding for graph learning tasks was proposed before by Hoang and Maehara [2020]. They discuss various aspects of homomorphism counts important for learning tasks, in particular, universality aspects and their power to capture certain properties of the graph, such as bipartiteness. Instead of relying on sampled patterns, which we use to guarantee completeness in expectation, they propose to use a fixed number of small pattern graphs. This limits the practical usage of their approach for computational complexity reasons. In their experiments the authors only use tree and cycle patterns up to size 6 and 8, respectively, whereas we allow patterns of arbitrary size and treewidth while guaranteeing polynomial runtime in expectation. Similarly to Hoang and Maehara [2020], we use the computed embeddings as features for a kernel SVM (with RBF kernel) and an MLP.
|
| 112 |
+
|
| 113 |
+
Instead of embedding the whole graph into a vector of homomorphism counts, Barceló et al. [2021] proposed to use rooted homomorphism counts as node features in conjunction with a graph neural network (GNN). They discuss the required patterns to be as or more expressive than the $k$ -WL test. We achieve this in expectation when selecting an appropriate sampling distribution.
|
| 114 |
+
|
| 115 |
+
Wu et al. [2019] adapted random Fourier features [Rahimi and Recht, 2007] to graphs and proposed a sampling-based variant of the global alignment graph kernel. Similar sampling-based ideas were discussed before for the graphlet kernel [Shervashidze et al., 2009] and frequent-subtree kernels [Welke et al., 2015]. None of these works, however, discusses expressiveness aspects.
|
| 116 |
+
|
| 117 |
+
## 5 Experiments
|
| 118 |
+
|
| 119 |
+
We performed preliminary experiments on several benchmark datasets. To this end, we sample a fixed number $\ell = {30}$ of patterns as described in Appendix A and compute the sampled min kernel as described in Section 3. Table 1 shows averaged accuracies of SVM and MLP classifiers trained on our feature sets. We follow the experimental design of Hoang and Maehara [2020] and compare to their published results. Even with as few as 30 features, the results of our approach are comparable to the competitors on real-world datasets. Furthermore, it is interesting to note that an SVM with RBF kernel and our features performs perfectly on the PAULUS25 dataset, i.e., it is able to decide isomorphism for the strongly regular graphs in this dataset. It also shows good performance, although with high deviation, on the CSL dataset, where only the method specifically designed for this dataset, GHC-cycle, performs well. We also included GNTK [Du et al., 2019] and GIN [Xu et al., 2019].
|
| 120 |
+
|
| 121 |
+
## 6 Conclusion
|
| 122 |
+
|
| 123 |
+
As future work, we will investigate approximate counts to make our implementation more efficient [Beaujean et al., 2021]. It is unclear how this affects expressiveness, as we lose permutation-invariance. Going beyond expressiveness results, our goal is to further study graph similarities suitable for graph learning, such as the cut distance as proposed by Grohe [2020]. Finally, instead of sampling patterns from a fixed distribution, a more promising variant is to adapt the sampling process in a sample-dependent manner. One could, for example, draw new patterns until each graph in the sample has a unique embedding (up to isomorphism) or at least until we can distinguish 1-WL classes. Alternatively, we could pre-compute frequent or interesting patterns and use them to adapt the distribution. Such approaches would employ the power of randomisation to select a fitting graph representation in a data-driven manner, instead of relying on a finite set of fixed and pre-determined patterns as in previous work [Barceló et al., 2021; Bouritsas et al., 2022].
|
| 124 |
+
|
| 125 |
+
References
|
| 126 |
+
|
| 127 |
+
Noga Alon, Raphael Yuster, and Uri Zwick. Color-coding. J. ACM, 42(4):844-856, 1995. 1
|
| 128 |
+
|
| 129 |
+
László Babai. Graph isomorphism in quasipolynomial time. In STOC, 2016. 2
|
| 130 |
+
|
| 131 |
+
Pablo Barceló, Floris Geerts, Juan Reutter, and Maksimilian Ryschkov. Graph Neural Networks with Local Graph Parameters. In NeurIPS, 2021. 2, 4
|
| 132 |
+
|
| 133 |
+
Paul Beaujean, Florian Sikora, and Florian Yger. Graph homomorphism features: Why not sample? In Graph Embedding and Mining (GEM) Workshop at ECMLPKDD, 2021. 4
|
| 134 |
+
|
| 135 |
+
Christian Borgs, Jennifer Chayes, László Lovász, Vera T Sós, Balázs Szegedy, and Katalin Vesztergombi. Graph limits and parameter testing. In STOC, 2006. 1
|
| 136 |
+
|
| 137 |
+
Giorgos Bouritsas, Fabrizio Frasca, Stefanos P Zafeiriou, and Michael Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 1, 4
|
| 138 |
+
|
| 139 |
+
Saverio Caminiti, Emanuele G Fusco, and Rossella Petreschi. Bijective linear time coding and decoding for k-trees. Theory of Computing Systems, 46(2):284-300, 2010. 7
|
| 140 |
+
|
| 141 |
+
Radu Curticapean, Holger Dell, and Dániel Marx. Homomorphisms are a good basis for counting small subgraphs. In STOC, 2017. 7
|
| 142 |
+
|
| 143 |
+
Holger Dell, Martin Grohe, and Gaurav Rattan. Lovász meets Weisfeiler and Leman. In ICALP, 2018. 1, 2, 3, 4, 8
|
| 144 |
+
|
| 145 |
+
Simon S Du, Kangcheng Hou, Russ R Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. In NeurIPS, 2019. 4
|
| 146 |
+
|
| 147 |
+
Zdeněk Dvořák. On recognizing graphs by numbers of homomorphisms. J. Graph Theory, 64(4):330-342, 2010. 3, 8
|
| 148 |
+
|
| 149 |
+
Josep Díaz, Maria Serna, and Dimitrios M. Thilikos. Counting h-colorings of partial k-trees. Theoretical Computer Science, 281(1):291-309, 2002. ISSN 0304-3975. 1, 3, 8
|
| 150 |
+
|
| 151 |
+
Thomas Gärtner, Peter A. Flach, and Stefan Wrobel. On graph kernels: Hardness results and efficient alternatives. In COLT, 2003. 2
|
| 152 |
+
|
| 153 |
+
Martin Grohe. Word2vec, node2vec, graph2vec, x2vec: Towards a theory of vector embeddings of structured data. In PODS, 2020. 1, 4
|
| 154 |
+
|
| 155 |
+
NT Hoang and Takanori Maehara. Graph homomorphism convolution. In ICML, 2020. 1, 2, 4, 7
|
| 156 |
+
|
| 157 |
+
László Lovász. Operations with structures. Acta Mathematica Hungaria, 18:321-328, 1967. 2
|
| 158 |
+
|
| 159 |
+
László Lovász. Large Networks and Graph Limits, volume 60 of Colloquium Publications. American Mathematical Society, 2012. ISBN 978-0-8218-9085-1. 1
|
| 160 |
+
|
| 161 |
+
Dániel Marx and Michal Pilipczuk. Everything you always wanted to know about the parameterized complexity of subgraph isomorphism (but were afraid to ask). In International Symposium on Theoretical Aspects of Computer Science, 2014. 1
|
| 162 |
+
|
| 163 |
+
Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In AAAI, 2019. 3
|
| 164 |
+
|
| 165 |
+
Siqi Nie, Cassio P de Campos, and Qiang Ji. Learning bounded tree-width Bayesian networks via sampling. In European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, 2015. 7
|
| 166 |
+
|
| 167 |
+
Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS, 2007. 4
|
| 168 |
+
|
| 169 |
+
Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten Borgwardt. Efficient graphlet kernels for large graph comparison. In AISTATS, 2009. 1, 4
|
| 170 |
+
|
| 171 |
+
Pascal Welke, Tamás Horváth, and Stefan Wrobel. Probabilistic frequent subtree kernels. In International Workshop on New Frontiers in Mining Complex Patterns, 2015. 4
|
| 172 |
+
|
| 173 |
+
Lingfei Wu, Ian En-Hsu Yen, Zhen Zhang, Kun Xu, Liang Zhao, Xi Peng, Yinglong Xia, and Charu Aggarwal. Scalable global alignment graph kernel using random features: From node embedding to graph embedding. In KDD, 2019. 4
|
| 174 |
+
|
| 175 |
+
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019. 3, 4
|
| 176 |
+
|
| 177 |
+
Jaemin Yoo, U Kang, Mauro Scanagatta, Giorgio Corani, and Marco Zaffalon. Sampling subgraphs with guaranteed treewidth for accurate and efficient graphical inference. In WSDM, 2020. 7
|
| 178 |
+
|
| 179 |
+
## A Sampling details
|
| 180 |
+
|
| 181 |
+
Given a pattern size $N \in \mathbb{N}$ , we first draw a treewidth upper bound $k < N$ from some distribution. Then we want to sample any graph with treewidth at most $k$ with a nonzero probability. A natural strategy is to first sample a $k$ -tree, which is a maximal graph with treewidth $k$ , and then take a random subgraph of it. Uniform sampling of $k$ -trees is described by Nie et al. [2015] and Caminiti et al. [2010]. Alternatively, the strategy of Yoo et al. [2020] is also possible. Note that we only have to guarantee that each pattern has a nonzero probability of being sampled; the distribution does not have to be uniform. While guaranteed uniform sampling would be preferable, we resort to a simple sampling scheme that is easy to implement. We achieve a nonzero probability for each pattern of at most a given treewidth $k$ by first constructing a random $k$ -tree $P$ through its tree decomposition: we uniformly draw a tree $T$ on $N - k$ vertices, choose a root, and create $P$ as the (unique up to isomorphism) $k$ -tree that has $T$ as its tree decomposition. We then randomly remove edges from that $k$ -tree i.i.d. with fixed probability (currently set to 0.1). This ensures that each subgraph of $P$ will be created with nonzero probability.
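A simplified sketch of such a pattern sampler: instead of drawing the tree decomposition explicitly as described above, it grows a random $k$-tree incrementally (each new vertex is joined to a random existing $k$-clique, another standard way to realise a $k$-tree) and then deletes edges i.i.d.; all names are ours and this is not the paper's exact scheme:

```python
import random
from itertools import combinations

def sample_pattern(n, k, p_drop=0.1, rng=None):
    """Sample a pattern of treewidth at most k: grow a random k-tree on n
    vertices, then delete each edge independently with probability p_drop.
    Every subgraph of some k-tree has nonzero probability of being produced."""
    rng = rng or random.Random()
    assert n >= k + 1
    base = tuple(range(k + 1))
    edges = {frozenset(e) for e in combinations(base, 2)}  # initial (k+1)-clique
    k_cliques = list(combinations(base, k))  # cliques a new vertex may attach to
    for v in range(k + 1, n):
        clique = rng.choice(k_cliques)
        edges.update(frozenset({v, u}) for u in clique)
        # v together with any (k-1)-subset of the chosen clique forms a new k-clique
        k_cliques.extend(tuple(sorted(s + (v,))) for s in combinations(clique, k - 1))
    kept = [e for e in edges if rng.random() >= p_drop]
    return sorted(tuple(sorted(e)) for e in kept)

rng = random.Random(7)
edges = sample_pattern(8, 2, p_drop=0.0, rng=rng)
# a 2-tree on 8 vertices has C(3,2) + (8-3)*2 = 13 edges before deletion
print(len(edges))  # 13
```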
|
| 182 |
+
|
| 183 |
+
## B Implementation details
|
| 184 |
+
|
| 185 |
+
The Python code and information to reproduce our experiments can be found online ${}^{3}$ . These sources will be made accessible on GitHub. We rely on the C++ code of Curticapean et al. [2017] ${}^{4}$ to efficiently compute homomorphism counts. While the code computes a tree decomposition itself, we decided to simply provide it with our tree decomposition of the $k$ -tree, which we compute anyway, to make the computation more efficient. Additionally, we use the cross-validation-based evaluation with SVM and MLP of Hoang and Maehara [2020] ${}^{5}$ .
|
| 186 |
+
|
| 187 |
+
## C Proofs
|
| 188 |
+
|
| 189 |
+
Theorem 5. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{n}$ with full support and $G \in {\mathcal{G}}_{n}$ . Then the graph embedding ${\varphi }_{F}\left( G\right) = \hom \left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel $k$ are complete in expectation.
|
| 190 |
+
|
| 191 |
+
Proof. Let $\mathcal{D}$ and ${\varphi }_{F}$ with $F \sim \mathcal{D}$ as stated and $G \in {\mathcal{G}}_{n}$ . Then
|
| 192 |
+
|
| 193 |
+
$$
|
| 194 |
+
g = {\mathbb{E}}_{F}\left\lbrack {{\varphi }_{F}\left( G\right) }\right\rbrack = \mathop{\sum }\limits_{{{F}^{\prime } \in {\mathcal{G}}_{n}}}\Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime }, G}\right) {e}_{{F}^{\prime }}.
|
| 195 |
+
$$
|
| 196 |
+
|
| 197 |
+
The vector $g$ has the entries ${\left( g\right) }_{{F}^{\prime }} = \Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime }, G}\right)$ . Let ${G}^{\prime }$ be a graph that is non-isomorphic to $G$ and let ${g}^{\prime } = {\mathbb{E}}_{F}\left\lbrack {{\varphi }_{F}\left( {G}^{\prime }\right) }\right\rbrack$ accordingly. By Theorem 1 we know that $\hom \left( {{\mathcal{G}}_{n}, G}\right) \neq$ $\hom \left( {{\mathcal{G}}_{n},{G}^{\prime }}\right)$ . Thus, there is an ${F}^{\prime }$ such that $\hom \left( {{F}^{\prime }, G}\right) \neq \hom \left( {{F}^{\prime },{G}^{\prime }}\right)$ . By definition of $\mathcal{D}$ we have that $\Pr \left( {F = {F}^{\prime }}\right) > 0$ and hence $\Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime }, G}\right) \neq \Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime },{G}^{\prime }}\right)$ which implies $g \neq {g}^{\prime }$ . That shows that ${\mathbb{E}}_{F}\left\lbrack {{\varphi }_{F}\left( \cdot \right) }\right\rbrack$ is complete and concludes the proof.
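The key step of the proof, the existence of a witness pattern ${F}^{\prime }$ with differing homomorphism counts, can be checked exhaustively for small graphs. The following is our own illustration over labelled patterns:

```python
from itertools import product, combinations

def hom(F_edges, F_n, G_edges, G_n):
    # brute-force homomorphism count (illustration only)
    adj = {frozenset(e) for e in G_edges}
    return sum(
        all(frozenset({phi[u], phi[v]}) in adj for u, v in F_edges)
        for phi in product(range(G_n), repeat=F_n)
    )

def labelled_patterns(max_n):
    # all labelled graphs on 1..max_n vertices
    for m in range(1, max_n + 1):
        slots = list(combinations(range(m), 2))
        for mask in range(2 ** len(slots)):
            yield [slots[i] for i in range(len(slots)) if mask >> i & 1], m

# triangle vs. path on 3 vertices: some pattern must witness non-isomorphism
triangle = ([(0, 1), (1, 2), (0, 2)], 3)
path3 = ([(0, 1), (1, 2)], 3)
witness = [
    (F, m) for F, m in labelled_patterns(3)
    if hom(F, m, *triangle) != hom(F, m, *path3)
]
print(len(witness) > 0)  # True: e.g. the triangle pattern itself, with counts 6 vs 0
```

Since every witness pattern receives nonzero probability under a full-support distribution, the expected embeddings of the two graphs differ.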
|
| 198 |
+
|
| 199 |
+
Lemma 6. ${k}_{\min }$ is a complete kernel on ${\mathcal{G}}_{\infty }$ .
|
| 200 |
+
|
| 201 |
+
Proof. Let $G, H \in {\mathcal{G}}_{\infty }$ . We have to show that
|
| 202 |
+
|
| 203 |
+
$$
|
| 204 |
+
{\bar{\varphi }}_{\infty }\left( G\right) = {\bar{\varphi }}_{\infty }\left( H\right) \Leftrightarrow G \cong H,
|
| 205 |
+
$$
|
| 206 |
+
|
| 207 |
+
where $G \cong H$ indicates that $G$ and $H$ are isomorphic. There are two cases:
|
| 208 |
+
|
| 209 |
+
$\left| {V\left( G\right) }\right| = \left| {V\left( H\right) }\right|$ : Then, by Theorem 1 we have ${\varphi }_{N}\left( G\right) = {\varphi }_{N}\left( H\right)$ iff $G \cong H$ for $N = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \} = \left| {V\left( G\right) }\right| = \left| {V\left( H\right) }\right| .$
|
| 210 |
+
|
| 211 |
+
$\left| {V\left( G\right) }\right| \neq \left| {V\left( H\right) }\right|$ : Let w.l.o.g. $0 < \left| {V\left( G\right) }\right| < \left| {V\left( H\right) }\right|$ . Let $P$ be the graph on exactly one vertex. Then $\hom \left( {P, G}\right) < \hom \left( {P, H}\right)$ , i.e., we can distinguish graphs on different numbers of vertices using homomorphism counts. As $\min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \} \geq 1$ , we have $P \in {\mathcal{G}}_{\left| V\left( G\right) \right| }$ and hence ${\varphi }_{\left| V\left( G\right) \right| }\left( G\right) \neq {\varphi }_{\left| V\left( G\right) \right| }\left( H\right)$ . The other direction follows directly from the fact that homomorphism counts are invariant under isomorphism.
|
| 212 |
+
|
| 213 |
+
Theorem 7. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{\infty }$ with full support and $G \in {\mathcal{G}}_{\infty }$ . Then ${\bar{\varphi }}_{F}\left( G\right) =$ ${\hom }_{\left| V\left( G\right) \right| }\left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel are complete in expectation.
---

${}^{3}$ https://drive.google.com/file/d/1kCDSORcLgpDWNdfJz2xIShWENTLVPgSe/view

${}^{4}$ https://github.com/ChristianLebeda/HomSub

${}^{5}$ https://github.com/gear/graph-homomorphism-network

---
Proof. We can apply the same arguments as in Theorem 5 to show that the expected embeddings of two graphs $G, H$ are equal iff their Lovász vectors restricted to size ${n}^{\prime } = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ are equal. By Lemma 6 we know that the latter can only happen if the two graphs are isomorphic.
Lemma 8. There exists no distribution $\mathcal{D}$ with full support on ${\mathcal{G}}_{\infty }$ such that the expected runtime of Eq. (1) becomes polynomial in $\left| {V\left( G\right) }\right|$ for all $G \in {\mathcal{G}}_{\infty }$ .
Proof. Let $\mathcal{D}$ be such a distribution and let ${\mathcal{D}}^{\prime }$ be the marginal distribution on the treewidths of the graphs given by ${p}_{k} = \mathop{\Pr }\limits_{{F \sim \mathcal{D}}}\left( {\operatorname{tw}\left( F\right) = k}\right) > 0$ . Let $G$ be a given input graph in the sample with $n = \left| {V\left( G\right) }\right|$ . Díaz et al. [2002] have shown that computing $\hom \left( {F, G}\right)$ takes time $\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right)$ . Assume for the purpose of contradiction that we can guarantee an expected polynomial runtime (ignoring the $\left| {V\left( F\right) }\right|$ and constant factors for simplicity):
$$
{\mathbb{E}}_{F \sim \mathcal{D}}\left\lbrack {n}^{\operatorname{tw}\left( F\right) + 1}\right\rbrack = \mathop{\sum }\limits_{{k = 1}}^{\infty }{p}_{k}{n}^{k + 1} \leq C{n}^{c}
$$
for some constants $C, c \in \mathbb{N}$ . Then for all $k \geq c$ , it must hold that ${p}_{k}{n}^{k + 1} \leq C{n}^{c}$ , as all summands are positive. However, for any fixed $k \geq c$ and large enough $n$ the left-hand side is larger than the right-hand side, a contradiction.
Theorem 9. There exists a distribution $\mathcal{D}$ such that computing the expectation complete graph embedding ${\bar{\varphi }}_{X}\left( G\right)$ takes polynomial time in $\left| {V\left( G\right) }\right|$ in expectation for all $G \in {\mathcal{G}}_{n}$ .
Proof. Let $G \in {\mathcal{G}}_{n}$ . Draw a treewidth upper bound $k$ from a Poisson distribution with parameter $\lambda$ to be determined later. Select a distribution ${\mathcal{D}}_{n, k}$ which has full support on all graphs with treewidth up to $k$ and size up to $n$ , for example, the one described in Appendix A. Using the algorithm of [Díaz et al., 2002] this gives, for some constant $C \in \mathbb{N}$ , an expected runtime of
${\mathbb{E}}_{k \sim \operatorname{Poi}\left( \lambda \right) , F \sim {\mathcal{D}}_{n, k}}\left\lbrack {C\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right\rbrack \leq {\mathbb{E}}_{k \sim \operatorname{Poi}\left( \lambda \right) }\left\lbrack {C{n}^{k + 2}}\right\rbrack = \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{\lambda }^{k}{e}^{-\lambda }}{k!}C{n}^{k + 2} = \frac{C{n}^{2}}{{e}^{\lambda }}{e}^{\lambda n}.$
We need to bound the right hand side by some polynomial $D{n}^{d}$ for some constants $D, d \in \mathbb{N}$ . By rearranging terms we see that
$$
\lambda \leq \frac{\ln \frac{D}{C} + \left( {d - 2}\right) \ln n}{n - 1} = \mathcal{O}\left( \frac{\log n}{n}\right)
$$
is sufficient.
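The rearrangement behind this bound is routine; spelled out for completeness, starting from the expected-runtime expression above:

$$
\frac{C{n}^{2}}{{e}^{\lambda }}{e}^{\lambda n} \leq D{n}^{d}
\;\Longleftrightarrow\; {e}^{\lambda \left( {n - 1}\right) } \leq \frac{D}{C}{n}^{d - 2}
\;\Longleftrightarrow\; \lambda \leq \frac{\ln \frac{D}{C} + \left( {d - 2}\right) \ln n}{n - 1}.
$$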
## D Matching the expressiveness of $k$ -WL in expectation
We devise a graph embedding matching the expressiveness of the $k$ -WL test in expectation.
Theorem 10. Let $\mathcal{D}$ be a distribution with full support on the set of graphs with treewidth up to $k$ . The resulting graph embedding ${\varphi }_{F}^{k\text{-}{WL}}\left( \cdot \right)$ with $F \sim \mathcal{D}$ has the same expressiveness as the $k$ -WL test in expectation. Furthermore, there is a specific such distribution such that we can compute ${\varphi }_{F}^{k\text{-}{WL}}\left( G\right)$ in expected polynomial time $\mathcal{O}\left( {\left| V\left( G\right) \right| }^{k + 1}\right)$ for all $G \in {\mathcal{G}}_{\infty }$ .
Proof. Let ${\mathcal{T}}_{k}$ be the set of graphs with treewidth up to $k$ and $\mathcal{D}$ be a distribution with full support on ${\mathcal{T}}_{k}$ . Then by the same arguments as in Theorem 5, the expected embeddings of two graphs $G$ and $H$ are equal iff their Lovász vectors restricted to patterns in ${\mathcal{T}}_{k}$ are equal. By Dvořák [2010] and Dell et al. [2018] the latter happens iff $k$ -WL returns the same color histogram for both graphs. This proves the first claim.
For the second claim note that the worst-case runtime for any pattern $F \in {\mathcal{T}}_{k}$ is $\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{k + 1}}\right)$ by Díaz et al. [2002]. However, the equivalence between homomorphism counts on ${\mathcal{T}}_{k}$ and $k$ -WL requires inspecting patterns $F$ of all sizes, in particular, also larger than the size $n$ of the input graph. To remedy this, we can draw the pattern size $m$ from some distribution with bounded expectation and full support on $\mathbb{N}$ . For example, the geometric distribution $m \sim \operatorname{Geom}\left( p\right)$ with any parameter $p \in \left( {0,1}\right)$ and expectation $\mathbb{E}\left\lbrack m\right\rbrack = \frac{1}{1 - p}$ is sufficient. By linearity of expectation then
$$
\mathbb{E}\left\lbrack {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right\rbrack = \mathcal{O}\left( {\left| V\left( G\right) \right| }^{k + 1}\right) .
$$
Note that Lemma 8 does not apply to the embedding ${\varphi }_{F}^{k\text{-}{WL}}\left( \cdot \right)$ . In particular, the distribution used to guarantee polynomial expected runtime is independent of $n$ and can be used for all of ${\mathcal{G}}_{\infty }$ .
papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,141 @@
§ EXPECTATION COMPLETE GRAPH REPRESENTATIONS USING GRAPH HOMOMORPHISMS
Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

§ ABSTRACT
We propose and study a practical graph embedding that in expectation is able to distinguish all non-isomorphic graphs and can be computed in polynomial time. The embedding is based on Lovász' characterization of graph isomorphism through an infinite dimensional vector of homomorphism counts. Recent work has studied the expressiveness of graph embeddings by comparing their ability to distinguish graphs to that of the Weisfeiler-Leman hierarchy. While previous methods have either limited expressiveness or are computationally impractical, we devise efficient sampling-based alternatives that are maximally expressive in expectation. We empirically evaluate our proposed embeddings and show competitive results on several benchmark graph learning tasks.
§ 1 INTRODUCTION
We study novel efficient and expressive graph embeddings based on Lovász' characterisation of graph isomorphism through homomorphism counts. While most practical graph embeddings drop the property of completeness, that is, the ability to distinguish all non-isomorphic graphs, in favour of runtime, we devise efficient embeddings that retain completeness in expectation. To achieve that, we sample pattern graphs in a particular way, simultaneously guaranteeing completeness and polynomial runtime in expectation. We discuss related work, in particular the relationship to the $k$ -dimensional Weisfeiler-Leman isomorphism test, and show first results on benchmark datasets.

While subgraph counts are also a reasonable choice for expectation complete graph embeddings, they have multiple drawbacks compared to homomorphism counts. Most importantly, from a computational perspective, computing subgraph counts even for simple patterns such as trees or paths is NP-hard [Alon et al., 1995; Marx and Pilipczuk, 2014], while we can compute homomorphism counts efficiently [Díaz et al., 2002] as long as the pattern graphs have small treewidth, a measure of 'tree-likeness'. In particular, all known exact algorithms for subgraph isomorphism have a runtime exponential in the pattern size or the maximum degree of the pattern even for small treewidth, one of the main reasons why the graphlet kernel [Shervashidze et al., 2009] and similar fixed pattern based approaches [Bouritsas et al., 2022] only count subgraphs up to size around 5.

Probably most important from a conceptual perspective is the relationship of homomorphism counts to the cut distance [Borgs et al., 2006; Lovász, 2012]. The cut distance is a well-studied and important distance on graphs that captures global structural as well as sampling-based local information. It is well known that the distance given by (potentially approximated and sampled) homomorphism counts is close to the cut distance and hence has similar favourable properties. The cut distance, and hence homomorphism counts, capture the behaviour of all permutation-invariant functions on graphs. For an ongoing discussion about the importance of the cut distance and homomorphism counts in the context of graph learning, see Dell et al. [2018], Grohe [2020], and Hoang and Maehara [2020].

Completeness in expectation implies one powerful fact that no deterministic embedding with bounded expressiveness can guarantee: if the graph embedding is complete in expectation, then sampling more patterns is guaranteed to eventually increase its expressiveness.
§ 2 COMPLETE GRAPH EMBEDDINGS
The graph isomorphism problem is a classical problem in graph theory and its computational complexity is a major open problem [Babai, 2016]. Following the classical result of Lovász [1967], two graphs are isomorphic if and only if they have the same infinite dimensional homomorphism count vectors. This provides a strong graph embedding for graph classification tasks [Barceló et al., 2021; Dell et al., 2018; Hoang and Maehara, 2020].

A graph $G = \left( {V\left( G\right) ,E\left( G\right) }\right)$ consists of a set $V\left( G\right)$ of vertices and a set $E\left( G\right) = \{ e \subseteq V\left( G\right) : \left| e\right| = 2\}$ of edges. The size of a graph is the number of its vertices. In the following $F$ and $G$ denote graphs, where $F$ represents a pattern graph and $G$ a graph in our training set. A homomorphism $\varphi : V\left( F\right) \rightarrow V\left( G\right)$ is a map that respects edges, i.e., $\{ v,w\} \in E\left( F\right) \Rightarrow \{ \varphi \left( v\right) ,\varphi \left( w\right) \} \in E\left( G\right)$ . An isomorphism is a bijective homomorphism whose inverse is also a homomorphism. We say that a distribution $\mathcal{D}$ over a countable domain $\mathcal{X}$ has full support if each $x \in \mathcal{X}$ has nonzero probability.

Let ${\mathcal{G}}_{n}$ be the set of all finite graphs of size at most $n$ , let $\hom \left( {F,G}\right)$ denote the number of homomorphisms from $F$ to $G$ for arbitrary graphs, and let ${\varphi }_{n}\left( G\right) = \hom \left( {{\mathcal{G}}_{n},G}\right) = {\left( \hom \left( F,G\right) \right) }_{F \in {\mathcal{G}}_{n}}$ denote the Lovász vector of $G$ for ${\mathcal{G}}_{n}$ . Lovász [1967] proved the following classical theorem.
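As an illustration of the quantities just defined, the following sketch counts homomorphisms by brute force over all vertex maps. The function name and graph encoding are ours; this is only feasible for tiny graphs and is not the dynamic-programming algorithm of Díaz et al. [2002].

```python
from itertools import product

def hom(F_edges, F_n, G_edges, G_n):
    """Brute-force hom(F, G): try all |V(G)|**|V(F)| vertex maps and
    count those that send every edge of F to an edge of G."""
    G_adj = {frozenset(e) for e in G_edges}
    count = 0
    for phi in product(range(G_n), repeat=F_n):
        if all(frozenset((phi[u], phi[v])) in G_adj for u, v in F_edges):
            count += 1
    return count

K3 = [(0, 1), (1, 2), (0, 2)]           # triangle
print(hom([(0, 1)], 2, K3, 3))          # edge -> triangle: 3 * 2 = 6 maps
print(hom([(0, 1), (1, 2)], 3, K3, 3))  # path on 3 vertices -> triangle
```

Note that a map collapsing an edge to a single vertex yields a size-1 `frozenset`, which is never in `G_adj`, so edge-respecting is checked correctly.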
Theorem 1 (Lovász [1967]). Two arbitrary graphs $G,H \in {\mathcal{G}}_{n}$ are isomorphic iff ${\varphi }_{n}\left( G\right) = {\varphi }_{n}\left( H\right)$ .
We can define a simple kernel on ${\mathcal{G}}_{n}$ with the canonical inner product using ${\varphi }_{n}$ .

Definition 2 (Complete Lovász kernel). Let ${k}_{{\varphi }_{n}}\left( {G,H}\right) = \left\langle {{\varphi }_{n}\left( G\right) ,{\varphi }_{n}\left( H\right) }\right\rangle$ .

Note that ${k}_{{\varphi }_{n}}$ is a complete graph kernel [Gärtner et al., 2003] on ${\mathcal{G}}_{n}$ , i.e., ${k}_{{\varphi }_{n}}$ can be used to distinguish non-isomorphic graphs of size at most $n$ . Similarly, we define complete graph embeddings.

Definition 3. Let $\varphi : \mathcal{G} \rightarrow X$ be a permutation-invariant graph embedding from a family of graphs $\mathcal{G}$ to a vector space $X$ . We call $\varphi$ complete (on $\mathcal{G}$ ) if $\varphi \left( G\right) \neq \varphi \left( H\right)$ for all non-isomorphic $G,H \in \mathcal{G}$ .
When studying graph embeddings and graph kernels we face a tradeoff between efficiency and expressiveness: complete graph representations are unlikely to be computable in polynomial time [Gärtner et al., 2003], and hence most practical graph representations drop completeness in favour of polynomial runtime. In our work, we study random graph representations. While they drop completeness and are efficiently computable, they retain a slightly weaker yet desirable property: completeness in expectation.

Definition 4. A graph embedding ${\varphi }_{X}$ , which depends on a random variable $X$ , is complete in expectation if the graph embedding given by the expectation, ${\mathbb{E}}_{X}\left\lbrack {{\varphi }_{X}\left( \cdot \right) }\right\rbrack$ , is complete.

Similarly, we say that the corresponding kernel ${k}_{X}\left( {G,H}\right) = \left\langle {{\varphi }_{X}\left( G\right) ,{\varphi }_{X}\left( H\right) }\right\rangle$ is complete in expectation. We can use Lovász' isomorphism theorem to devise graph embeddings that are complete in expectation. For that let ${e}_{F} \in {\mathbb{R}}^{{\mathcal{G}}_{n}}$ be the $F$ th standard basis unit-vector of ${\mathbb{R}}^{{\mathcal{G}}_{n}}$ .

Theorem 5. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{n}$ with full support and $G \in {\mathcal{G}}_{n}$ . Then the graph embedding ${\varphi }_{F}\left( G\right) = \hom \left( {F,G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel $k$ are complete in expectation.
§ 2.1 EXPECTATION COMPLETE EMBEDDINGS AND KERNELS ON ${\mathcal{G}}_{\infty }$
In this section, we generalise the previous result to the set of all finite graphs ${\mathcal{G}}_{\infty }$ . Theorem 1 holds for $G,H \in {\mathcal{G}}_{\infty }$ and the mapping ${\varphi }_{\infty }$ that maps each $G \in {\mathcal{G}}_{\infty }$ to an infinite-dimensional vector. The resulting vector space, however, is not a Hilbert space with the usual inner product. To see this, consider any graph $G$ that has at least one edge. Then $\hom \left( {{P}_{n},G}\right) \geq 2$ for every path ${P}_{n}$ of length $n \in \mathbb{N}$ . Thus, the inner product $\left\langle {{\varphi }_{\infty }\left( G\right) ,{\varphi }_{\infty }\left( G\right) }\right\rangle$ is not finite.

To define a kernel on ${\mathcal{G}}_{\infty }$ without fixing a maximum size of graphs, i.e., restricting to ${\mathcal{G}}_{n}$ for some $n \in \mathbb{N}$ , we define the countable-dimensional vector ${\bar{\varphi }}_{\infty }\left( G\right) = {\left( {\hom }_{\left| V\left( G\right) \right| }\left( F,G\right) \right) }_{F \in {\mathcal{G}}_{\infty }}$ where
$$
{\hom }_{\left| V\left( G\right) \right| }\left( {F,G}\right) = \left\{ \begin{array}{ll} \hom \left( {F,G}\right) & \text{ if }\left| {V\left( F\right) }\right| \leq \left| {V\left( G\right) }\right| , \\ 0 & \text{ if }\left| {V\left( F\right) }\right| > \left| {V\left( G\right) }\right| . \end{array}\right.
$$
That is, ${\bar{\varphi }}_{\infty }\left( G\right)$ is the projection of ${\varphi }_{\infty }\left( G\right)$ to the subspace that gives us the homomorphism counts for all graphs of size at most that of $G$ . Note that this is a well-defined map of graphs to a subspace of the ${\ell }^{2}$ space, i.e., sequences ${\left( {x}_{i}\right) }_{i}$ over $\mathbb{R}$ with $\mathop{\sum }\limits_{i}{\left| {x}_{i}\right| }^{2} < \infty$ . Hence, the kernel given by the canonical inner product ${\bar{k}}_{\infty }\left( {G,H}\right) = \left\langle {{\bar{\varphi }}_{\infty }\left( G\right) ,{\bar{\varphi }}_{\infty }\left( H\right) }\right\rangle$ is finite and positive semi-definite. Note that we can rewrite

${\bar{k}}_{\infty }\left( {G,H}\right) = {k}_{\min }\left( {G,H}\right) = \left\langle {{\varphi }_{{n}^{\prime }}\left( G\right) ,{\varphi }_{{n}^{\prime }}\left( H\right) }\right\rangle$ where ${n}^{\prime } = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ . While the first hunch might be to count patterns up to size $\max \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ , this is not necessary to guarantee completeness. Moreover, the corresponding map ${k}_{\max }$ is not even positive semi-definite.
Lemma 6. ${k}_{\min }$ is a complete kernel on ${\mathcal{G}}_{\infty }$ .
Given a sample of graphs $S$ , we note that for $n = \mathop{\max }\limits_{{G \in S}}\left| {V\left( G\right) }\right|$ we only need to consider patterns up to size $n$ .${}^{1}$ As the number of graphs of a given size $n$ is superexponential in $n$ , it is impractical to compute all such counts. Hence, we propose to resort to sampling.
Theorem 7. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{\infty }$ with full support and $G \in {\mathcal{G}}_{\infty }$ . Then ${\bar{\varphi }}_{F}\left( G\right) =$ ${\hom }_{\left| V\left( G\right) \right| }\left( {F,G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel are complete in expectation.
§ 2.2 SAMPLING MULTIPLE PATTERNS
Sampling just one pattern $F$ will not result in a practical graph embedding. Thus, we propose to sample $\ell$ patterns ${F}_{1},\ldots ,{F}_{\ell } \sim \mathcal{D}$ i.i.d. and construct the embedding ${\varphi }^{\ell }\left( G\right) \in {\mathbb{N}}_{0}^{\ell }$ with ${\left( {\varphi }^{\ell }\left( G\right) \right) }_{i} = \hom \left( {{F}_{i},G}\right)$ if $\left| {V\left( {F}_{i}\right) }\right| \leq \left| {V\left( G\right) }\right|$ and 0 otherwise for all $i \in \left\lbrack \ell \right\rbrack$ . Note that for the dot product it holds that ${\varphi }^{\ell }{\left( G\right) }^{T}{\varphi }^{\ell }\left( H\right) = \mathop{\sum }\limits_{{i = 1}}^{\ell }\left\langle {{\bar{\varphi }}_{{F}_{i}}\left( G\right) ,{\bar{\varphi }}_{{F}_{i}}\left( H\right) }\right\rangle$ as long as we do not sample patterns twice. ${}^{2}$
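This construction can be sketched as follows, using a brute-force homomorphism counter (helper names are ours; feasible only for tiny patterns, a stand-in for the efficient treewidth-based algorithm):

```python
from itertools import product

def hom(F_edges, F_n, G_edges, G_n):
    """Brute-force hom(F, G) over all vertex maps (only for tiny graphs)."""
    G_adj = {frozenset(e) for e in G_edges}
    return sum(
        all(frozenset((phi[u], phi[v])) in G_adj for u, v in F_edges)
        for phi in product(range(G_n), repeat=F_n)
    )

def embed(G_edges, G_n, patterns):
    """phi^l(G): entry i is hom(F_i, G), or 0 when F_i has more vertices than G."""
    return [hom(F_e, F_n, G_edges, G_n) if F_n <= G_n else 0
            for F_e, F_n in patterns]

# three 'sampled' patterns: a single vertex, an edge, and a triangle
patterns = [([], 1), ([(0, 1)], 2), ([(0, 1), (1, 2), (0, 2)], 3)]
K3 = [(0, 1), (1, 2), (0, 2)]
print(embed(K3, 3, patterns))  # [3, 6, 6]
```

Patterns larger than the input graph contribute a 0 entry, mirroring the truncated count ${\hom }_{\left| V\left( G\right) \right| }$ defined above.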
§ 3 COMPUTING EMBEDDINGS IN EXPECTED POLYNOMIAL TIME

A graph embedding that is complete in expectation must be efficiently computable to be practical. In this section, we describe our main result achieving polynomial runtime in expectation. The best known algorithms [Díaz et al., 2002] to exactly compute $\hom \left( {F,G}\right)$ take time
$$
\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right) \tag{1}
$$
where $\operatorname{tw}\left( F\right)$ is the treewidth of the pattern graph $F$ . Thus, a straightforward sampling strategy to achieve polynomial runtime in expectation is to give decreasing probability mass to patterns with higher treewidth. Unfortunately, in the case of ${\mathcal{G}}_{\infty }$ this is not possible.
Lemma 8. There exists no distribution $\mathcal{D}$ with full support on ${\mathcal{G}}_{\infty }$ such that the expected runtime of Eq. (1) becomes polynomial in $\left| {V\left( G\right) }\right|$ for all $G \in {\mathcal{G}}_{\infty }$ .
To resolve this issue we have to take the size of the largest graph in our sample into account. For a given sample $S \subseteq {\mathcal{G}}_{n}$ of graphs, where $n$ is the maximum number of vertices in $S$ , we can construct simple distributions achieving polynomial time in expectation.
Theorem 9. There exists a distribution $\mathcal{D}$ such that computing the expectation complete graph embedding ${\bar{\varphi }}_{X}\left( G\right)$ takes polynomial time in $\left| {V\left( G\right) }\right|$ in expectation for all $G \in {\mathcal{G}}_{n}$ .
Proof sketch. We first draw a treewidth upper bound $k$ from an appropriate distribution. For example, a Poisson distribution with parameter $\lambda = \mathcal{O}\left( \frac{\log n}{n}\right)$ is sufficient. We have to ensure that each possible graph with treewidth up to $k$ gets a nonzero probability of being drawn. For that we first draw a $k$ -tree, a maximal graph of treewidth $k$ , and then take a random subgraph of it.
Note that we do not require that the patterns are sampled uniformly at random. It merely suffices that each pattern has a nonzero probability of being drawn. To satisfy a runtime of $\mathcal{O}\left( {\left| V\left( G\right) \right| }^{d + 1}\right)$ in expectation, for example, a Poisson distribution with $\lambda \leq \frac{1 + d\log n}{n}$ is sufficient.
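To illustrate why such a small $\lambda$ keeps the computation cheap, here is a sketch (the sampler is our own helper, using Knuth's inversion method) showing that with $\lambda = \log n / n$ the drawn treewidth bound is almost always 0:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's inversion method for Poisson sampling; fine for small lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

n = 1000                   # size of the largest graph in the sample
lam = math.log(n) / n      # lambda = O(log n / n)
rng = random.Random(0)
ks = [sample_poisson(lam, rng) for _ in range(10_000)]
# almost all sampled treewidth bounds are 0, so most patterns are forests
print(max(ks), sum(ks) / len(ks))
```

With $\lambda \approx 0.007$ the expensive high-treewidth patterns are sampled so rarely that the expected runtime stays polynomial, matching the calculation in the proof of Theorem 9.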
§ 4 RELATED WORK

The $k$ -dimensional Weisfeiler-Leman (WL) test and the Lovász vector restricted to patterns up to treewidth $k$ are equally expressive [Dell et al., 2018; Dvořák, 2010]. We propose an efficiently computable embedding matching the expressiveness of $k$ -WL, and hence also MPNNs and $k$ -GNNs [Morris et al., 2019; Xu et al., 2019], in expectation, see Appendix D.

Dell et al. [2018] proposed a complete graph kernel based on homomorphism counts related to our ${k}_{\min }$ kernel. Instead of implicitly restricting the embedding to only a finite number of patterns, as we do, they weigh the homomorphism counts such that the inner product defined on the whole Lovász vectors converges. However, Dell et al. [2018] do not discuss runtime aspects, and so our approach can be seen as an efficient sampling-based alternative to their weighted kernel.

${}^{1}$ Actually, it is sufficient to go up to the size of the second largest graph.

${}^{2}$ Note that it does not affect the expressiveness results if we sample a pattern multiple times.
Table 1: Cross-validation accuracies on benchmark datasets
| method | MUTAG | IMDB-BIN | IMDB-MULTI | PAULUS25 | CSL |
| --- | --- | --- | --- | --- | --- |
| GHC-tree | ${89.28} \pm {8.26}$ | ${72.10} \pm {2.62}$ | ${48.60} \pm {4.40}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
| GHC-cycle | ${87.81} \pm {7.46}$ | ${70.93} \pm {4.54}$ | ${47.41} \pm {3.67}$ | ${7.14} \pm {0.00}$ | ${100.00} \pm {0.00}$ |
| GNTK | ${89.46} \pm {7.03}$ | ${75.61} \pm {3.98}$ | ${51.91} \pm {3.56}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
| GIN | ${89.40} \pm {5.60}$ | ${70.70} \pm {1.10}$ | ${43.20} \pm {2.00}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
| ours (SVM) | ${86.85} \pm {1.28}$ | ${69.83} \pm {0.15}$ | ${47.31} \pm {0.46}$ | ${100.00} \pm {0.00}$ | ${38.89} \pm {11.18}$ |
| ours (MLP) | ${88.33} \pm {1.11}$ | ${70.37} \pm {0.85}$ | ${48.75} \pm {0.20}$ | ${49.84} \pm {6.74}$ | ${11.78} \pm {1.54}$ |
Using graph homomorphism counts as a feature embedding for graph learning tasks was proposed before by Hoang and Maehara [2020]. They discuss various aspects of homomorphism counts important for learning tasks, in particular, universality aspects and their power to capture certain properties of the graph, such as bipartiteness. Instead of relying on sampling patterns, which we use to guarantee completeness in expectation, they propose to use a fixed number of small pattern graphs. This limits the practical usage of their approach for computational complexity reasons. In their experiments the authors only use tree and cycle patterns up to size 6 and 8, respectively, whereas we allow patterns of arbitrary size and treewidth, guaranteeing polynomial runtime in expectation. Similarly to Hoang and Maehara [2020], we use the computed embeddings as features for a kernel SVM (with RBF kernel) and an MLP.

Instead of embedding the whole graph into a vector of homomorphism counts, Barceló et al. [2021] proposed to use rooted homomorphism counts as node features in conjunction with a graph neural network (GNN). They discuss the required patterns to be as or more expressive than the $k$ -WL test. We achieve this in expectation when selecting an appropriate sampling distribution.

Wu et al. [2019] adapted random Fourier features [Rahimi and Recht, 2007] to graphs and proposed a sampling-based variant of the global alignment graph kernel. Similar sampling-based ideas were discussed before for the graphlet kernel [Shervashidze et al., 2009] and frequent-subtree kernels [Welke et al., 2015]. None of these works, however, discusses expressiveness aspects.
§ 5 EXPERIMENTS
We performed preliminary experiments on benchmark datasets. To this end, we sample a fixed number $\ell = {30}$ of patterns as described in Appendix A and compute the sampled min kernel as described in Section 3. Table 1 shows averaged accuracies of SVM and MLP classifiers trained on our feature sets. We follow the experimental design of Hoang and Maehara [2020] and compare to their published results. Even with as few as 30 features, the results of our approach are comparable to the competitors on real-world datasets. Furthermore, it is interesting to note that an SVM with RBF kernel and our features performs perfectly on the PAULUS25 dataset, i.e., it is able to decide isomorphism for the strongly regular graphs in this dataset. It also shows good performance, although with high deviation, on the CSL dataset, where only the method specifically designed for this dataset, GHC-cycle, performs well. We also included GNTK [Du et al., 2019] and GIN [Xu et al., 2019].
§ 6 CONCLUSION
As future work, we will investigate approximate counts to make our implementation more efficient [Beaujean et al., 2021]. It is unclear how this affects expressiveness, as we lose permutation-invariance. Going beyond expressiveness results, our goal is to further study graph similarities suitable for graph learning, such as the cut distance as proposed by Grohe [2020]. Finally, instead of sampling patterns from a fixed distribution, a more promising variant is to adapt the sampling process in a sample-dependent manner. One could, for example, draw new patterns until each graph in the sample has a unique embedding (up to isomorphism) or at least until we can distinguish 1-WL classes. Alternatively, we could pre-compute frequent or interesting patterns and use them to adapt the distribution. Such approaches would employ the power of randomisation to select a fitting graph representation in a data-driven manner, instead of relying on a finite set of fixed and pre-determined patterns as in previous work [Barceló et al., 2021; Bouritsas et al., 2022].
papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,532 @@
# ScatterSample: Diversified Label Sampling for Data Efficient Graph Neural Network Learning

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

## Abstract

What target labels are most effective for graph neural network (GNN) training? In some applications where GNNs excel, like drug design or fraud detection, labeling new instances is expensive. We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting. ScatterSample employs a sampling module termed DiverseUncertainty to collect instances with large uncertainty from different regions of the sample space for labeling. To ensure diversification of the selected nodes, DiverseUncertainty clusters the high-uncertainty nodes and selects a representative node from each cluster. Our ScatterSample algorithm is further supported by rigorous theoretical analysis demonstrating its advantage over standard active sampling methods that simply maximize uncertainty without diversifying the samples. In particular, we show that ScatterSample is able to efficiently reduce the model uncertainty over the whole sample space. Our experiments on five datasets show that ScatterSample significantly outperforms the other GNN active learning baselines; specifically, it reduces the sampling cost by up to 50% while achieving the same test accuracy.

## 1 Introduction

How can we spot the most effective labeled nodes for GNN training? Graph neural networks (GNNs) [KW16; Vel+17; Wu+19a], which employ non-linear and parameterized feature propagation [ZG02] to compute graph representations, have been widely employed in a broad range of learning tasks and achieve state-of-the-art performance in node classification, link prediction and graph classification. Training GNNs for node classification in the supervised setting typically requires a large number of labeled examples, so that the GNN can learn from diverse node features and node connectivity patterns. However, labeling costs can be high, which limits the number of node labels that can be acquired. For example, GNNs can be used to assist drug design, but evaluating the properties of a molecule is time consuming: it usually takes one to two weeks with current simulation tools, not to mention the cost of laboratory experiments.

Active learning (AL) aims at maximizing the generalization performance under a constrained labeling budget [Set09]. AL algorithms choose which training instances to use as labeled targets to maximize the performance of the learned model. Previous research on AL algorithms for GNN training can be categorized by whether the method takes the model weights into account (model aware) or can be applied to any model (model agnostic). Model agnostic algorithms label a representative subset of the nodes such that the labeled nodes cover the whole sample space [Wu+19b; Zha+21]. Model aware AL algorithms leverage the GNN model to compute the node uncertainty, which combines both the input features and the graph structure [CZC17; Gao+18], and then pick the nodes with the highest uncertainty.

However, maximizing the uncertainty of the labeled nodes may not balance the exploration and exploitation of the classification boundary [KVAG19]. For example, if a group of nodes lies close to the classification boundary but is clustered in a small region of the graph, labeling only the most uncertain nodes explores just that specific region of the boundary, while other regions are ignored and the classification boundary is not well explored. Thus, our first main contribution is to simultaneously consider the node uncertainty and the diversification of the uncertain nodes over the sample space.

Challenges of diversifying uncertain nodes. Graph data present additional challenges for diversifying the uncertain nodes. Diversification requires modeling the sample space using carefully chosen node representations. However, a suitable node representation faces two challenges.

Challenge 1: The sample space for graph data requires a representation that takes both the graph structure and the node features into account (see Sec. 4.2).

Challenge 2: The representation should be robust to the model trained so far, and not be biased by the limited amount of available labels.

Our approach. We develop ScatterSample for data-efficient GNN learning. ScatterSample allows us to explore the classification boundary while exploiting the nodes with the highest uncertainty. To diversify the uncertain samples on graph-structured data, ScatterSample includes a DiverseUncertainty module addressing the two challenges above, which clusters the uncertain node representations over the whole sample space.

Our Contributions. The contributions of our work are the following.

- Insight: ScatterSample is the first method that proposes and implements diversification of the uncertain samples for data-efficient GNN learning.

Figure 1: ScatterSample wins: test accuracy vs. sampling ratio on the ogbn-products dataset (62M edges).

- Effectiveness: We evaluate ScatterSample on five different graph datasets, where ScatterSample saves up to 50% of the labeling cost while achieving the same test accuracy as state-of-the-art baselines.

- Theoretical Guarantees: Our theoretical analysis proves the superiority of ScatterSample over the standard uncertainty-sampling method (see Theorem 5.1). Simulation results further confirm our theory.

## 2 Related Work

This section reviews uncertainty-based active learning research and the application of active learning to GNNs.

Active Learning (AL): Active learning aims at selecting a subset of training data as labeling targets such that the model performance is optimized [Set09; Han+14]. Uncertainty sampling is a major approach to active learning, which labels a group of samples to maximally reduce the model uncertainty. To achieve this goal, uncertainty sampling selects samples around the decision boundary [THTS05]. Uncertainty sampling has also been applied in deep learning, where researchers have proposed different ways to measure the uncertainty of samples. For example, Ducoffe and Precioso [DP18] developed a margin-based method which uses the distance from a sample to its nearest adversarial example to approximate the distance to the decision boundary.

AL and GNNs: AL with GNNs requires incorporating the graph structure into node selection. Wu et al. [Wu+19b] use the propagated features followed by K-Medoids clustering of nodes to select a group of representative instances. Zhang et al. [Zha+21] measure the importance of nodes by combining diversity and influence scores. However, the above approaches do not account for the learned GNN model, which may limit the generalization performance. Uncertainty sampling has also been used to select nodes. Cai et al. [CZC17] propose to use a weighted average of the node uncertainty, graph centrality and information density scores. Gao et al. [Gao+18] further propose a different approach that combines the three features with multi-armed bandit techniques. Although useful, these approaches aim to choose nodes with the highest uncertainty and may struggle if the selected nodes are clustered in a small region of the graph, which does not provide good graph coverage. Our work addresses this limitation by diversifying the selected nodes based on the graph structure.

## 3 Preliminaries

Problem Statement. Consider a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is the set of nodes with $N = \left| \mathcal{V}\right|$ and $\mathcal{E}$ is the set of edges. The set of nodes is divided into the training set ${\mathcal{V}}_{\text{train }}$ , validation set ${\mathcal{V}}_{\text{valid }}$ and testing set ${\mathcal{V}}_{\text{test }}$ . Each node ${v}_{n} \in \mathcal{V}$ is associated with a feature vector ${\mathbf{x}}_{n} \in {\mathbb{R}}^{d}$ and a label ${y}_{n} \in \{ 1,2,\ldots , C\}$ . Let $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ be the feature matrix of all the nodes in the graph, where the $n$ -th row of $\mathbf{X}$ corresponds to ${v}_{n}$ , and $\mathbf{y} = \left( {{y}_{1},{y}_{2},\ldots ,{y}_{N}}\right) \in {\mathbb{R}}^{N}$ is the vector containing all the labels. To learn the labels of the nodes, we train a GNN model $M$ which maps the graph $\mathcal{G}$ and $\mathbf{X}$ to the prediction of labels $\widehat{\mathbf{y}}$ .

Active Learning: Active learning picks a subset of nodes $S \subset {\mathcal{V}}_{\text{train }}$ from the training set and queries their labels ${\mathbf{y}}_{S}$ . A GNN model ${M}_{S}$ is trained with respect to the feature matrix $\mathbf{X}$ and ${\mathbf{y}}_{S}$ . Given the sampling budget $B$ , the goal of active learning is to find a set $S\left( {\left| S\right| \leq B}\right)$ such that the generalization loss is minimized, i.e.

$$
\underset{S : \left| S\right| \leq B}{\arg \min }{\mathbb{E}}_{{v}_{n} \in {\mathcal{V}}_{\text{test }}}\left( {\ell \left( {{y}_{n}, f\left( {{\mathbf{x}}_{n} \mid \mathcal{G},{M}_{S}}\right) }\right) }\right) .
$$

### 3.1 Graph neural networks and message passing

In this section we present the basic operation of a GNN at layer $l$ . Under the message passing paradigm, the layer updates of most GNN models can be interpreted as message vectors that are exchanged among neighbors over the edges of the graph.

In the following, let ${\mathbf{h}}_{v}^{\left( l\right) } \in {\mathbb{R}}^{{d}_{1}}$ be the hidden representation for node $v$ at layer $l$ . Consider a message function $\phi$ combining the hidden representations of nodes $v, u$ . Using the message vectors of the neighboring edges, the node representations are updated as follows

$$
{\mathbf{h}}_{v}^{\left( l + 1\right) } = \psi \left( {{\mathbf{h}}_{v}^{\left( l\right) },\rho \left( \left\{ {\phi \left( {{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{h}}_{u}^{\left( l\right) }}\right) : \left( {u, v}\right) \in \mathcal{E}}\right\} \right) }\right) \tag{1}
$$

where $\rho$ is a reduce function used to aggregate the messages coming from the neighbors of $v$ , and $\psi$ is an update function defined on each node to update the hidden node representation for layer $l + 1$ . By defining $\phi ,\rho ,\psi$ , different GNN models can be instantiated [KW16; DBV16; Bro+17; IMG20]. These functions are also parameterized by learnable matrices that are updated during training.
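
As an illustration of the update in (1), the following numpy sketch implements one layer with a concrete, illustrative choice of $\phi ,\rho ,\psi$ : the message is the neighbor's hidden state, the reduce is a mean, and the update is a linear map with a ReLU. All function and variable names here are our own, not from the paper.

```python
import numpy as np

def gnn_layer(H, edges, W_self, W_neigh):
    """One message-passing layer in the form of Eq. (1).

    Illustrative choice: phi(h_v, h_u) = h_u (the message is the neighbor's
    state), rho = mean over incoming messages, psi = ReLU of a linear update.
    Eq. (1) leaves all three functions abstract.
    """
    N = H.shape[0]
    agg = np.zeros_like(H)
    deg = np.zeros(N)
    for u, v in edges:            # each edge (u, v) carries a message u -> v
        agg[v] += H[u]
        deg[v] += 1.0
    agg /= np.maximum(deg, 1.0)[:, None]   # rho: mean (isolated nodes keep 0)
    return np.maximum(H @ W_self + agg @ W_neigh, 0.0)   # psi: ReLU update
```

Different instantiations of $\phi ,\rho ,\psi$ (e.g. sum instead of mean, attention-weighted messages) would recover other GNN variants cited above.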

## 4 Proposed method: ScatterSample

We propose the ScatterSample algorithm, which dynamically samples a set of diverse nodes with large uncertainties in order to explore the classification boundary more efficiently during GNN training. At each round, our method calculates the uncertainty of all nodes with the GNN model trained so far. Then, ScatterSample clusters the most uncertain nodes and selects nodes from each cluster to obtain diverse samples. The labels of the selected nodes are queried and used as supervision to continue training the GNN model for the next round. This section explains our method in detail.

### 4.1 Selecting the uncertain nodes

The uncertainty of a node is measured by its information entropy. Given the GNN model trained at the $t$ -th sampling round, ScatterSample first computes the information entropy ${\phi }_{\text{entropy }}\left( {v}_{n}\right)$ of nodes in ${\mathcal{V}}_{\text{train }}$ based on the current GNN model, i.e.

$$
{\phi }_{\text{entropy }}\left( {v}_{n}\right) = - \mathop{\sum }\limits_{{j = 1}}^{C}\log \left( {\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X}, M}\right\rbrack }\right) \mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X}, M}\right\rbrack \tag{2}
$$

where $\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X}, M}\right\rbrack$ is the probability that node ${v}_{n}$ belongs to class $j$ given the GNN model $M$ . Then, ScatterSample ranks all the nodes in order of decreasing uncertainty, and places the ones with the largest information entropy into a candidate set ${\mathcal{C}}_{t} \subset {\mathcal{V}}_{\text{train }}$ . Unlike traditional AL techniques that select training targets solely based on uncertainty, we then move on to pick a diverse subset of the uncertain nodes over the sample space.
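
A minimal sketch of this step, assuming the GNN's softmax outputs are available as a matrix of class probabilities (function names are illustrative):

```python
import numpy as np

def entropy_scores(probs, eps=1e-12):
    """Eq. (2): information entropy of each node's predicted class
    distribution. `probs` has shape (N, C), with rows summing to one."""
    p = np.clip(probs, eps, 1.0)   # clip to avoid log(0)
    return -np.sum(p * np.log(p), axis=1)

def candidate_set(probs, r, B_t):
    """C_t: indices of the r * B_t nodes with the largest entropy,
    where r is the sampling redundancy and B_t the round budget."""
    scores = entropy_scores(probs)
    k = int(r * B_t)
    return np.argsort(-scores)[:k]
```

A node with a near-uniform class distribution gets entropy close to $\log C$, while a confidently classified node gets entropy close to zero.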

### 4.2 Diversifying uncertain nodes

Our goal is to ensure the diversity of the nodes selected for labeling, by exploring the node distribution over the sample space. At this point, the question naturally arises: how do we model the sample space? We need a node representation to define the space, based on which we can measure distances between samples. A straightforward approach is to use the GNN embedding space, since the classification boundary is directly depicted there. However, GNN embeddings fail to address the two challenges raised in the introduction.

First, with active learning, only a limited number of labeled nodes are available in the initial stages. Hence, only the already labeled nodes may have reliable GNN embeddings, which biases the subsequent samples. Second, GNN embeddings for node classification may not carry enough information for diversification. GNNs usually do not have an MLP layer connecting to the output, and the final GNN outputs of uncertain nodes are not diverse enough, since highly uncertain nodes tend to have similar class probabilities (close to uniform). Conversely, embeddings of intermediate GNN layers may have an appropriate dimension but lack information about the expanded ego-network.

These drawbacks are confirmed in Sec. 6.2, where we show that using GNN embeddings as proxy representations leads to a performance drop. Moreover, different from other machine learning problems, the nodes are correlated with each other, so we also need to take the graph structure into account when diversifying the samples. Hence, to address all these considerations we employ a $k$ -step propagation of the original node features based on the graph structure as a proxy representation for the nodes. The $k$ -step propagation of node features ${\mathbf{X}}^{\left( k\right) } = \left( {{\mathbf{x}}_{1}^{\left( k\right) },{\mathbf{x}}_{2}^{\left( k\right) },\ldots ,{\mathbf{x}}_{N}^{\left( k\right) }}\right)$ is defined as follows

$$
{\mathbf{X}}^{\left( k\right) } \mathrel{\text{:=}} \mathbf{S}{\mathbf{X}}^{\left( k - 1\right) } \tag{3}
$$

where $\mathbf{S}$ is the normalized adjacency matrix, and ${\mathbf{X}}^{\left( 0\right) }$ are the initial node features. The operation in (3) is efficient and amenable to a mini-batch implementation, and such representations are well known to succinctly encode the node feature distribution and graph structure. Next, we calculate the proxy representations for the candidate high-uncertainty nodes in the set ${\mathcal{C}}_{t}$ . To maximize the diversity of the samples, we cluster the proxy representations of ${\mathcal{C}}_{t}$ into ${B}_{t}$ clusters using $k$ -means++ [AV06], and select the node closest to each cluster center (in ${L}_{2}$ distance) for labeling. One node is selected from each cluster, which amounts to ${B}_{t}$ samples.
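
The propagation in (3) and the subsequent cluster-then-pick step can be sketched as follows; the lightweight k-means ($k$-means++ seeding plus a few Lloyd iterations) stands in for a library implementation, and all names are illustrative:

```python
import numpy as np

def propagate(S, X, k):
    """Eq. (3): k-step feature propagation X^(k) = S^k X,
    where S is the normalized adjacency matrix."""
    Xk = X.astype(float)
    for _ in range(k):
        Xk = S @ Xk
    return Xk

def diverse_pick(Z, B, n_iter=10, seed=0):
    """Cluster the rows of Z into B clusters (k-means++ seeding followed by
    Lloyd steps) and return, per cluster center, the index of the closest
    row in L2 distance. Minimal sketch of the selection step."""
    rng = np.random.default_rng(seed)
    centers = [Z[rng.integers(len(Z))]]
    while len(centers) < B:                        # k-means++ seeding
        d2 = np.min([((Z - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Z[rng.choice(len(Z), p=d2 / d2.sum())])
    C = np.stack(centers)
    for _ in range(n_iter):                        # Lloyd iterations
        assign = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(B):
            if np.any(assign == j):
                C[j] = Z[assign == j].mean(axis=0)
    dist = ((Z[:, None] - C[None]) ** 2).sum(-1)   # (N, B)
    return np.argmin(dist, axis=0)                 # one node per center
```

In ScatterSample, `Z` would hold the propagated features of the candidate set ${\mathcal{C}}_{t}$ and `B` the round budget ${B}_{t}$.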

Algorithm 1 ScatterSample Algorithm

---
Input: ${\mathcal{V}}_{\text{train }}$ , GNN model $M$ , number of propagation layers $k$ , number of sampling rounds $T$ , sampling redundancy $r$ , initial sampling budget ${B}_{0}$ and total sampling budget $B$ .
Initialize $S = \varnothing$
Compute ${\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{V}}_{\text{train }}$ as in (3).
Initial Sampling:
Use $k$ -means++ to cluster $\left\{ {\mathbf{x}}_{n}^{\left( k\right) }\right\}$ into ${B}_{0}$ clusters.
Add the node closest to each cluster center to $S$ .
Query the labels of nodes ${v}_{n} \in S$ , denoted by ${\mathbf{y}}_{S}$ .
Train model $M$ using $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$ .
Dynamic Sampling:
Initialize sampling round $t = 1$
while $t < T$ do
  Let ${B}_{t} = \min \left( {B - \left| S\right| ,\left( {B - {B}_{0}}\right) /T}\right)$
  Use the DiverseUncertainty algorithm to select ${S}_{t}$
  Query the labels of ${S}_{t}$ , and update $S = S \cup {S}_{t}$ .
  Train model $M$ over $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$ . Update $t = t + 1$ .
end while
---

Clearly, the size of the candidate set satisfies $\left| {\mathcal{C}}_{t}\right| \geq {B}_{t}$ ; however, deciding how many candidate nodes to choose from is important. We parameterize the size as a multiple of the number of selected nodes, namely $\left| {\mathcal{C}}_{t}\right| = r{B}_{t}$ , where $r > 1$ is the sampling redundancy. If $r$ is too small, the selected nodes are closer to the classification boundary (have larger information entropy) but may not be diverse enough. On the other hand, if $r$ is too large, the set will be diverse, but the selected nodes may be far away from the classification boundary. Therefore, it is critical to pick a suitable $r$ to strike a balance between diversity and uncertainty. We leave the discussion of choosing $r$ to Sec. 6.2. Besides empirical validation on five real datasets (see Sec. 6), our diversification approach is theoretically motivated (see Sec. 5).

The pseudocode of ScatterSample is shown in Algorithm 1. ScatterSample is a multi-round sampling scheme, which includes an initial sampling step and dynamic sampling steps. ScatterSample

Algorithm 2 DiverseUncertainty Algorithm

---
Input: ${\mathcal{V}}_{\text{train }},\left\{ {{\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{C}}_{t}}\right\} , r,{B}_{t}$
Compute ${\phi }_{\text{entropy }}\left( v\right) \forall v \in {\mathcal{V}}_{\text{train }}$ ; see (2).
${\mathcal{C}}_{t} \leftarrow \left\{ {r{B}_{t}\text{ nodes with largest }{\phi }_{\text{entropy }}\left( v\right) }\right\}$ .
Use $k$ -means++ to cluster the ${\mathbf{x}}_{n}^{\left( k\right) }$ (for all $n \in {\mathcal{C}}_{t}$ ) into ${B}_{t}$ clusters.
${S}_{t} \leftarrow \varnothing$
for $j = 1,2,\ldots ,{B}_{t}$ do
  Compute the cluster center ${\mathbf{v}}_{j}$ of cluster $j$
  Pick node $x \leftarrow \arg \mathop{\min }\limits_{{n \in {\mathcal{C}}_{t}}}\begin{Vmatrix}{{\mathbf{x}}_{n}^{\left( k\right) } - {\mathbf{v}}_{j}}\end{Vmatrix}$
  ${S}_{t} \leftarrow {S}_{t} \cup \{ x\}$
end for
Return ${S}_{t}$
---

first computes the $k$ -step feature propagation of all the nodes in the training set using (3), and clusters them into ${B}_{0}$ clusters, where ${B}_{0}$ is the initial sampling budget. Then, ScatterSample picks the nodes closest to the cluster centers as the initial training samples and queries their labels. The purpose of clustering the $k$ -step feature propagations is to force the initial training set to spread out over the whole sample space. This also helps explore the classification boundary: if the initial sampled nodes are not diverse enough, we cannot picture the classification boundary in regions far away from the initial training samples. ScatterSample then repeats the dynamic sampling described in Algorithm 2 until the sampling budget $B$ is exhausted. The next section fortifies our diversification method with theoretical guarantees.

## 5 Theoretical analysis

In Sec. 6.2, we show that DiverseUncertainty performs significantly better than MaxUncertainty. In this section, we provide theoretical analysis and simulation results to demonstrate the benefits of DiverseUncertainty and explain why the MaxUncertainty algorithm may fail. The results presented here give a theoretical basis for the superiority of our method, as established in the experiments in Sec. 6.

### 5.1 Analysis setup

For the analysis, we employ the Gaussian Process (GP) model [O'H78]. GP models offer a flexible approach to modeling complex functions and are robust to small sample sizes [See04]. Moreover, the uncertainty of a prediction is easily computed with a GP model. Like neural networks and GNNs, GPs interpolate the observed samples, but they provide a robust framework that is amenable to analysis.

Assume the label ${y}_{i} \in \mathbb{R}$ depends on the propagated features ${\mathbf{x}}_{i}^{\left( k\right) }$ through a GP model: $\left( {\mathbf{y} \mid {\mathbf{X}}^{\left( k\right) }}\right) \sim N\left( {\mathbf{1}\mu ,\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right) }\right)$ , where $\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right)$ is the Gaussian kernel matrix. The kernel is parameterized by ${\mathbf{K}}_{ij}\left( {\mathbf{X}}^{\left( k\right) }\right) = K\left( {{\mathbf{x}}_{i}^{\left( k\right) },{\mathbf{x}}_{j}^{\left( k\right) }}\right) = \exp \left( {-\frac{1}{2}{\left( {\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }\right) }^{T}{\mathbf{\Sigma }}^{-1}\left( {{\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }}\right) }\right)$ , where $\mathbf{\Sigma } = \operatorname{diag}\left( {{\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{d}}\right)$ . Consider that the sample space of ${\mathbf{x}}^{\left( k\right) }$ can be clustered into $m$ clusters ${\mathcal{S}}_{1},{\mathcal{S}}_{2},\ldots ,{\mathcal{S}}_{m}$ , and denote the cluster centers by ${\mathbf{c}}_{1},{\mathbf{c}}_{2},\ldots ,{\mathbf{c}}_{m}$ . Without loss of generality, order the cluster radii ${d}_{1} \leq {d}_{2} \leq \cdots \leq {d}_{m}$ . The clusters are well separated: the distance between cluster centers is at least $\delta$ , i.e. $\mathop{\min }\limits_{{i \neq j}}{\begin{Vmatrix}{\mathbf{c}}_{i} - {\mathbf{c}}_{j}\end{Vmatrix}}_{2} \geq \delta$ with $\delta > 2{d}_{m}$ . Moreover, we assume that no single cluster dominates the sample space, ${d}_{m}^{2} \leq \tau \mathop{\sum }\limits_{{j = 1}}^{{m - 1}}{d}_{j}^{2}$ , and that the samples are uniformly distributed over the clusters.
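
For intuition, the GP posterior predictive variance, which plays the role of the uncertainty in this analysis, can be computed as follows. This is a minimal sketch with a scalar length-scale $\theta$ (the setup above allows a diagonal $\mathbf{\Sigma}$); names are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, theta=1.0):
    """Gaussian kernel from Sec. 5.1 with scalar length-scale theta:
    K(x, x') = exp(-||x - x'||^2 / (2 * theta))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * theta))

def gp_posterior_variance(X_lab, X_query, theta=1.0, jitter=1e-8):
    """Posterior variance at the query points given labeled points X_lab:
    var(x*) = K(x*, x*) - k*^T K^{-1} k*. This is the uncertainty that
    active sampling tries to reduce."""
    K = rbf_kernel(X_lab, X_lab, theta) + jitter * np.eye(len(X_lab))
    Ks = rbf_kernel(X_query, X_lab, theta)
    sol = Ks @ np.linalg.inv(K)        # k*^T K^{-1}
    return 1.0 - np.einsum('ij,ij->i', sol, Ks)
```

The variance collapses to (near) zero at a labeled point and approaches the prior variance 1 far away from all labeled points, which is why uncertainty grows with distance from the labeled cluster centers.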

### 5.2 MaxUncertainty vs DiverseUncertainty

Here, we show that DiverseUncertainty can achieve significantly smaller mean squared error (MSE) than MaxUncertainty. Without loss of generality, we consider $m$ clusters and the following two selection rules.

- MaxUncertainty: Select the ${2m}$ most uncertain samples.

- DiverseUncertainty: Select the 2 most uncertain samples from each cluster.
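
The two selection rules can be sketched as follows, given a per-sample uncertainty vector and cluster assignments (names are illustrative):

```python
import numpy as np

def max_uncertainty(uncert, m):
    """MaxUncertainty: the 2m most uncertain samples overall."""
    return np.argsort(-uncert)[: 2 * m]

def diverse_uncertainty(uncert, cluster_of, m):
    """DiverseUncertainty: the 2 most uncertain samples in each cluster."""
    picks = []
    for c in range(m):
        idx = np.where(cluster_of == c)[0]
        picks.extend(idx[np.argsort(-uncert[idx])[:2]])
    return np.array(picks)
```

When one cluster contains all the highest-uncertainty samples, the first rule spends the whole budget there, while the second still covers every cluster.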

Figure 2: The area enclosed by the blue circles is the sample space of the propagated features (2D case). The green stars are the nodes sampled during initial sampling (cluster centers). The red stars are the nodes sampled during uncertainty sampling. (a) MaxUncertainty picks the nodes with the largest uncertainty, which is equivalent to sampling the boundary of cluster 2. (b) DiverseUncertainty diversifies the clustered nodes, and samples the boundary of both clusters.

Before presenting the theory, we illustrate the operation of our method and of MaxUncertainty in Figure 2. ScatterSample first clusters the samples in the propagated feature space (blue circles in Figure 2), and selects the nodes closest to the cluster centers for initial training (green stars in Figure 2). Then, during the dynamic sampling steps, we compute the uncertainty using equation 4. The MaxUncertainty approach selects the nodes with the largest uncertainty. Under our setup, this is equivalent to sampling nodes at the boundary of the largest cluster, since the distance to the cluster center is the dominant factor of the uncertainty (Figure 2(a)). DiverseUncertainty instead diversifies the high-uncertainty nodes, which is equivalent to sampling from the boundary of each cluster (Figure 2(b)). The red stars in Figure 2 show the nodes labeled during the uncertainty sampling stage. Since the MaxUncertainty algorithm only labels nodes in cluster 2, cluster 1 is ignored and its prediction uncertainty cannot be reduced. On the contrary, DiverseUncertainty samples nodes from both clusters 1 and 2, and thus reduces the prediction uncertainty in both clusters.

The following theorem quantifies the relationship between the MSEs of the two algorithms under the setup of Sec. 5.1.

Theorem 5.1. Consider the case of feature dimension $d = 1$ . With the above notation and assumptions, let ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{2\theta }}\right\rbrack$ . If ${d}_{m}^{2} \geq {d}_{m - 1}^{2} + 4\log \theta$ and $\delta \geq {d}_{m} + \max \left( {\sqrt{{d}_{m}^{2} + \theta \log \left( {9m}\right) },{2\theta }\log \left( \frac{3\sqrt{m}}{1 - {r}_{m}}\right) }\right)$ , then

$$
\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty }}\right) } \geq \frac{1}{2\left( {1 + \tau }\right) }\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3} = \frac{1}{\tau + 1}O\left( \theta \right) .
$$

Proof: The complete proof is included in Appendix B.

Theorem 5.1 suggests that when the GP function is smooth enough (large $\theta$ ), MaxUncertainty has a larger MSE than DiverseUncertainty. A large $\theta$ implies a close correlation between the labels of nodes that are near each other, which is common in most graph datasets, where samples clustered together usually have similar labels. Thus, DiverseUncertainty achieves a smaller MSE in this case.

Table 1: Statistics of graph datasets used in experiments.

<table><tr><td>Data</td><td>#Nodes</td><td>#Train Nodes</td><td>#Edges</td><td>#Classes</td></tr><tr><td>Cora</td><td>2,708</td><td>1,208</td><td>5,429</td><td>7</td></tr><tr><td>Citeseer</td><td>3,327</td><td>1,827</td><td>4,732</td><td>6</td></tr><tr><td>Pubmed</td><td>19,717</td><td>18,217</td><td>44,328</td><td>3</td></tr><tr><td>Corafull</td><td>19,793</td><td>18,293</td><td>126,842</td><td>70</td></tr><tr><td>ogbn-products</td><td>2,449,029</td><td>196,615</td><td>61,859,149</td><td>47</td></tr></table>

## 6 Experiments

We evaluate the performance of ScatterSample on five different datasets.

Datasets. We evaluate the different methods on the Cora, Citeseer, Pubmed, Corafull [KW16], and ogbn-products [Hu+20] datasets (Table 1). Except for ogbn-products, we do not keep the original data split: nodes that are not in the validation or testing sets (which follow the split in the dgl package "dgl.data" [Wan+19]) are added to the training set. Labels can only be queried from the training set.

Baselines. For different sampling budgets $B$ , we compare the test accuracy of ScatterSample with the following graph active learning baselines:

- Random sampling: Select $B$ nodes uniformly at random from ${\mathcal{V}}_{\text{train }}$ .

- AGE [CZC17]: AGE computes a score combining node centrality, information density, and uncertainty, and selects the $B$ nodes with the highest scores.

- ANRMAB [Gao+18]: ANRMAB learns the combination weights of the three metrics used by AGE with a multi-armed bandit method.

- FeatProp [Wu+19b]: FeatProp clusters the feature propagations into $B$ clusters and picks the nodes closest to the cluster centers.

- Grain [Zha+21]: Grain scores each node by a weighted average of its influence score and diversity score, and selects the top $B$ nodes with the largest scores. Grain includes two different node selection approaches, Grain (ball-D) and Grain (NN-D).

- ScatterSample: For the small-scale graph datasets (Cora, Citeseer), we set the initial sampling budget to $3\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ and sample $1\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ each round during the dynamic sampling period. For the medium-scale datasets (Pubmed and Corafull), we set the initial sampling budget to $1\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ and sample ${0.5}\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ each dynamic sampling round. For the large-scale dataset (ogbn-products), the initial sampling budget is ${0.2}\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ , and each dynamic sampling round selects ${0.05}\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ nodes.
|
| 236 |
+
|
| 237 |
+
GNN setup. We train a 2-layer GCN network with hidden layer dimension $= {64}$ for Cora, Cite-seer and Pubmed, and $= {128}$ for Corafull and obgn-products. To train the GNN, we follow the standard random neighbor sampling where for each node [HYL17], we randomly sample 5 neighbors for the convolution operation in each layer. We use the function in "dgl" package to train the GNNs [Wan+19].
|
| 238 |
+
|
| 239 |
+
### 6.1 Performance Results
|
| 240 |
+
|
| 241 |
+
We compare the performance of different active graph neural network learning algorithms under different labeling budgets(B). We parameterize the labeling budget $B$ equal to a certain proportion of the nodes in the training set $\left( {B = r\left| {\mathcal{V}}_{\text{train }}\right| }\right)$ . For Cora and Citeseer, we vary $r$ from 5% to ${15}\%$ in increment of $2\%$ ; for Pubmed and Corafull, $r$ is varied from $3\%$ to ${10}\%$ ; for ogbn-product dataset, we vary the $r$ from 0.3% to 1%. The performance of the active learning algorithms are measured with the test accuracy.
|
| 242 |
+
|
| 243 |
+
Accuracy. Figure 3 shows the test accuracy of baselines trained on different proportions of the selected nodes. ScatterSample improves the test accuracy and consistently outperforms other baselines in all the datasets. In Citeseer, ScatterSample requires 9% of the node labels to achieve test accuracy 74.2%, while the best alternative baselines "Grain (ball-D)" and "Grain (NN-D)" need to label 15% of nodes to achieve similar accuracy, which corresponds to a ${40}\%$ savings of the labeling cost. Similarly, in PubMed and ogbn-products, ScatterSample achieves a 50% labeling cost reduction compared to the best alternative baseline.
|
| 244 |
+
|
| 245 |
+
Efficiency. Here, we compare the computation time among the methods that use the graph structure and node features to select the samples namely, ScatterSample, "Grain (ball-D)" and "Grain (NN-D)". We use the ogbn-products dataset to perform comparisons. ScatterSample takes less than 8 hours to determine the labeling nodes and train the GNN, while the Grain algorithm requires more than 240 hours. Grain requires $\mathcal{O}\left( {n}^{2}\right)$ complexity to calculate the scores of all nodes, which is prohibitive complexity in large graphs.
|
| 246 |
+
|
| 247 |
+
Complexity analysis. The computation complexity of DiverseUncertainty is $O\left( {\left| E\right| + r * {B}_{t}^{2}}\right)$ . It is because ScatterSample includes two parts: 1) computing the node representations with complexity $O\left( \left| E\right| \right)$ where $\left| E\right|$ is the number of edges and 2) cluster the the uncertain nodes where the complexity is $O\left( {r{B}_{t}^{2}}\right)$ . Since both $r$ and ${B}_{t}$ are small, $r{B}_{t}^{2} < \left| E\right|$ , our method does not add a lot of extra burden compared to the model training time.
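As a concrete sketch, the two-stage selection (take the top $r \cdot B_t$ uncertain nodes as the candidate set, then diversify with $k$-means++) can be written in a few lines. The function name, its arguments, and the use of scikit-learn's k-means++ are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def diverse_uncertainty(prop_feats, uncertainty, budget, redundancy=10, seed=0):
    """Sketch of DiverseUncertainty: pick `budget` diverse nodes among the
    top `redundancy * budget` most uncertain ones."""
    # Step 1: candidate set C_t = top r * B_t nodes by uncertainty.
    candidates = np.argsort(-uncertainty)[: redundancy * budget]
    # Step 2: k-means++ clustering of the candidates' propagated features.
    km = KMeans(n_clusters=budget, init="k-means++", n_init=10,
                random_state=seed).fit(prop_feats[candidates])
    # Step 3: label the candidate closest to each cluster center.
    chosen = []
    for c in range(budget):
        members = candidates[km.labels_ == c]
        dists = np.linalg.norm(prop_feats[members] - km.cluster_centers_[c],
                               axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)
```

The quadratic $r B_t^2$ term in the complexity comes from the clustering step, which only ever touches the $r \cdot B_t$ candidates rather than the whole graph.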


Figure 3: ScatterSample (blue) wins consistently: comparison of the test accuracy of active GNN learning algorithms at different labeling budgets. The $x$-axis shows # labeled nodes / # nodes in the training set.

### 6.2 Ablation Study

The MaxDiversity algorithm of ScatterSample needs to determine the size of the candidate set ${\mathcal{C}}_{t}$ before selecting a subset ${S}_{t}$ from ${\mathcal{C}}_{t}$ for labeling. Hence, the sampling redundancy $r$ and the algorithm used to cluster the nodes in ${\mathcal{C}}_{t}$ both affect the performance of ScatterSample. In this section, we evaluate the effect of both factors.



Figure 4: Compare the performance under different sampling redundancy $r$. When $r = 1$, DiverseUncertainty reduces to the MaxUncertainty method.

Sampling redundancy $r$: Recall from Algorithm 1 that the sampling redundancy $r$ controls the size of the candidate set ${\mathcal{C}}_{t}$ relative to the size of the sampled set ${S}_{t}$. When $r = 1$, ScatterSample reduces to the standard MaxUncertainty algorithm, and Figure 4 shows that sampling only the most uncertain nodes is significantly worse than DiverseUncertainty. On the Citeseer dataset, DiverseUncertainty outperforms MaxUncertainty by over 7% when the sampling ratio is 5%. Therefore, to achieve good test accuracy, $r$ should be carefully selected. Figure 4 suggests that as $r$ increases, the test accuracy rises quickly at first and then decreases slowly.

Sensitivity to initial sampling ratio: During the initial sampling stage, DiverseUncertainty samples ${B}_{0}$ nodes to train the initial model, and this initially trained model affects the nodes sampled during the dynamic sampling period. We test the effect of the initial sampling ratio on the Cora and Citeseer datasets. We vary the initial sampling ratio from 2% to 4%, and Figure A5 shows that DiverseUncertainty is robust to the choice of initial sampling ratio.

Diverse uncertainty algorithms: Besides the sampling algorithm used by DiverseUncertainty, there are other algorithms to pick representative nodes from the candidate set ${\mathcal{C}}_{t}$. First, we evaluate three algorithms to cluster and select the propagated features:

- Random select: randomly pick the nodes ${S}_{t}$ from ${\mathcal{C}}_{t}$.

- DiverseUncertainty: use $k$-means++ to cluster the nodes in ${\mathcal{C}}_{t}$ and select the nodes closest to the cluster centers.

- Random round-robin algorithm [Cit+21]: use the cluster labels from the initial sampling period (the initial sampling period clusters all the nodes in ${\mathcal{V}}_{\text{train}}$), then follow Algorithm A3 (see Appendix) to select ${S}_{t}$ from ${\mathcal{C}}_{t}$.

Figure A6 suggests that the $k$-means++ clustering algorithm achieves better test accuracy in most cases than random selection or the random round-robin algorithm (see Appendix). Moreover, compared to the random sampling algorithm, $k$-means++ is more robust as the sampling ratio increases: the test accuracy of $k$-means++ keeps increasing in most cases, while the test accuracy of the random sampling algorithm fluctuates more.

Another factor that affects test performance is the clustering target. Besides the propagated features (which are used by MaxDiversity), we can also cluster the input features or the embedding vectors. Since the GNN models we use do not have a fully connected layer before the output, we cannot use the output of the second-to-last layer as the embedding; hence, we use the GNN output as the embedding vector for clustering. Figure A7 shows that clustering the propagated features consistently outperforms clustering the other two targets; on the Citeseer dataset in particular, it wins by up to 5%. To conclude, the $k$-means++ clustering algorithm achieves the best performance among the selection methods, and clustering the propagated features is better than clustering the other targets. Thus, DiverseUncertainty uses $k$-means++ to cluster the propagated features to pick ${S}_{t}$ from ${\mathcal{C}}_{t}$.
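For concreteness, the propagated features can be precomputed without any training, e.g. as $k$-step graph convolutions $\hat{S}^{k}\mathbf{X}$ with the SGC-style symmetric normalization. The normalization choice here is an assumption for illustration, since this section does not pin it down.

```python
import numpy as np

def propagated_features(A, X, k=2):
    """k-step feature propagation X^(k) = (D^-1/2 (A+I) D^-1/2)^k X.
    A: dense adjacency matrix, X: node feature matrix."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):
        X = S @ X                             # one propagation step
    return X
```

Because this matrix only depends on the graph and the inputs, the clustering target stays fixed across sampling rounds and can be cached.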
## 7 Empirical validation of the theorem

In this section, we perform a simulation analysis to demonstrate that ScatterSample reduces the MSE compared to the greedy uncertainty sampling approach.

Graph Simulation Setup. Let the dimension of the input feature be $d = 1$. Simulate $\mathbf{X}$ from two different clusters, where $\left( {X \mid {C}_{1}}\right) \sim$ Uniform(-15, -5) and $\left( {X \mid {C}_{2}}\right) \sim$ Uniform(8, 12). In our simulation, we randomly generated 100 nodes for each cluster. Each node is randomly connected to two other nodes in the same cluster. Moreover, for the edges between clusters, we set a probability threshold $r$ such that $\mathrm{P}\left\lbrack {{V}_{i} \in {C}_{1}\text{ connects to a node} \in {C}_{2}}\right\rbrack = r$ (see Appendix D for details).

Label of nodes. The label of a node depends on its propagated features. First compute the 1-layer feature propagation of each node, ${\mathbf{X}}^{\left( 1\right) }$. Then, the label of the $i$-th node is ${y}_{i} = {\left| {X}_{i}^{\left( 1\right) }\right| }^{2}$. Because the two cluster centers are equally distant from 0, the label function is symmetric around 0.

Node sampling. During the initial sampling step, we label the nodes closest to the cluster centers and train the GP function. To sample uncertain nodes:

- MaxUncertainty: Label the 8 nodes with the largest uncertainty.

- DiverseUncertainty: Collect the top 80 nodes with the largest uncertainty into the candidate set. Then, use $k$-means++ to cluster the nodes in the candidate set into 8 clusters, and label the 8 nodes closest to the cluster centers.

MaxUncertainty and DiverseUncertainty each use their newly labeled nodes to update the GP function. Finally, the trained GP function predicts the node labels, and we compute the corresponding MSE.

Figure A8 in the Appendix shows that MaxUncertainty has a larger MSE than the DiverseUncertainty algorithm. For MaxUncertainty, since most of the labeled nodes come from cluster 1, the MSE on cluster 1 is significantly smaller than that on cluster 2, while for DiverseUncertainty the MSEs on clusters 1 and 2 are comparable. As $r$ increases, there are more and more edges between clusters and the propagated features become less separated; some high-uncertainty nodes from cluster 1 then lie very close to cluster 2, which helps MaxUncertainty learn the labels of nodes from cluster 2. Thus, we observe that $\frac{\text{MSE of MaxUncertainty}}{\text{MSE of DiverseUncertainty}}$ keeps decreasing as $r$ increases. When $r$ is very large, clusters 1 and 2 merge into one cluster, and the MSEs of the two methods no longer differ significantly.
## 8 Conclusion

Learning a GNN model with a limited labeling budget is an important but challenging problem. In this paper:

- We propose a novel data-efficient GNN learning algorithm, ScatterSample, which efficiently diversifies the uncertain nodes and achieves better test accuracy than recent baselines.

- We provide theoretical guarantees: Theorem 5.1 proves the advantage of ScatterSample over MaxUncertainty sampling.

- Experiments on real data show that ScatterSample can save up to ${50}\%$ of the labeling budget at the same test accuracy.

We envision that ScatterSample will inspire future research on combining uncertainty sampling and representation sampling (diversifying).
## References

[AV06] D. Arthur and S. Vassilvitskii. "k-means++: The advantages of careful seeding". Tech. rep. Stanford, 2006

[Bro+17] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. "Geometric deep learning: going beyond euclidean data". In: IEEE Signal Processing Magazine 34.4 (2017), pp. 18-42

[CZC17] H. Cai, V. W. Zheng, and K. C.-C. Chang. "Active learning for graph embedding". In: arXiv preprint arXiv:1705.05085 (2017)

[Cit+21] G. Citovsky, G. DeSalvo, C. Gentile, L. Karydas, A. Rajagopalan, A. Rostamizadeh, and S. Kumar. "Batch Active Learning at Scale". In: Advances in Neural Information Processing Systems 34 (2021)

[DBV16] M. Defferrard, X. Bresson, and P. Vandergheynst. "Convolutional neural networks on graphs with fast localized spectral filtering". In: Advances in Neural Information Processing Systems. Barcelona, Spain, 2016, pp. 3844-3852

[DP18] M. Ducoffe and F. Precioso. "Adversarial active learning for deep networks: a margin based approach". In: arXiv preprint arXiv:1802.09841 (2018)

[Gao+18] L. Gao, H. Yang, C. Zhou, J. Wu, S. Pan, and Y. Hu. "Active discriminative network representation learning". In: IJCAI International Joint Conference on Artificial Intelligence. 2018

[HYL17] W. L. Hamilton, R. Ying, and J. Leskovec. "Inductive representation learning on large graphs". In: Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017, pp. 1025-1035

[Han+14] S. Hanneke et al. "Theory of disagreement-based active learning". In: Foundations and Trends® in Machine Learning 7.2-3 (2014), pp. 131-309

[Hu+20] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. "Open Graph Benchmark: Datasets for Machine Learning on Graphs". In: arXiv preprint arXiv:2005.00687 (2020)

[IMG20] V. N. Ioannidis, A. G. Marques, and G. B. Giannakis. "Tensor Graph Convolutional Networks for Multi-Relational and Robust Learning". In: IEEE Transactions on Signal Processing 68 (2020), pp. 6535-6546

[KW16] T. N. Kipf and M. Welling. "Semi-supervised classification with graph convolutional networks". In: arXiv preprint arXiv:1609.02907 (2016)

[KVAG19] A. Kirsch, J. Van Amersfoort, and Y. Gal. "BatchBALD: Efficient and diverse batch acquisition for deep bayesian active learning". In: Advances in Neural Information Processing Systems 32 (2019), pp. 7026-7037

[O'H78] A. O'Hagan. "Curve fitting and optimal design for prediction". In: Journal of the Royal Statistical Society: Series B (Methodological) 40.1 (1978), pp. 1-24

[See04] M. Seeger. "Gaussian processes for machine learning". In: International Journal of Neural Systems 14.02 (2004), pp. 69-106

[Set09] B. Settles. "Active learning literature survey". In: (2009)

[THTS05] G. Tur, D. Hakkani-Tür, and R. E. Schapire. "Combining active and semi-supervised learning for spoken language understanding". In: Speech Communication 45.2 (2005), pp. 171-186

[Vel+17] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. "Graph attention networks". In: arXiv preprint arXiv:1710.10903 (2017)

[Wan+19] M. Wang et al. "Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks". In: arXiv preprint arXiv:1909.01315 (2019)

[Wu+19a] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. "Simplifying graph convolutional networks". In: International Conference on Machine Learning. PMLR. 2019, pp. 6861-6871

[Wu+19b] Y. Wu, Y. Xu, A. Singh, Y. Yang, and A. Dubrawski. "Active learning for graph neural networks via node feature propagation". In: arXiv preprint arXiv:1910.07567 (2019)

[Zha+21] W. Zhang, Z. Yang, Y. Wang, Y. Shen, Y. Li, L. Wang, and B. Cui. "Grain: Improving data efficiency of graph neural networks via diversified influence maximization". In: arXiv preprint arXiv:2108.00219 (2021)

[ZG02] X. Zhu and Z. Ghahramani. "Learning from labeled and unlabeled data with label propagation". In: (2002)
## A Estimation and prediction of the GP model

Given the assumptions and notation above, the likelihood of the GP model can be written as:

$$
f\left( {\mathbf{y} \mid \mu ,{\sigma }^{2},\mathbf{\theta }}\right) \propto \exp \left\lbrack {-\frac{1}{2{\sigma }^{2}}{\left( \mathbf{y} - \mathbf{1}\mu \right) }^{T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}\left( {\mathbf{y} - \mathbf{1}\mu }\right) }\right\rbrack .
$$

Here, we assume $\mathbf{\theta } = \left( {{\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{d}}\right)$ is a known parameter, and only $\mu$ and ${\sigma }^{2}$ remain to be fitted. The MLEs of $\mu$ and ${\sigma }^{2}$ are $\widehat{\mu } = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\mathbf{y}}_{i}$ and ${\widehat{\sigma }}^{2} = \frac{1}{n}{\left( \mathbf{y} - \mathbf{1}\widehat{\mu }\right) }^{T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}\left( {\mathbf{y} - \mathbf{1}\widehat{\mu }}\right)$.

Given a testing point ${\mathbf{x}}_{ * }^{\left( k\right) }$, the GP model fitted on $D$ predicts the response $f\left( {\mathbf{x}}_{ * }^{\left( k\right) }\right) \sim N\left( {{\mu }^{ * },{\sigma }^{*2}}\right)$, where

$$
{\mu }^{ * } = \widehat{\mu } + {k}^{*T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}\left( {\mathbf{y} - \mathbf{1}\widehat{\mu }}\right) \;\text{ and }\;{\sigma }^{*2} = {\widehat{\sigma }}^{2}\left( {1 - {k}^{*T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}{k}^{ * }}\right) \tag{4}
$$

$$
{k}^{ * } = {\left\lbrack K\left( {\mathbf{x}}_{1},{\mathbf{x}}^{ * }\right) , K\left( {\mathbf{x}}_{2},{\mathbf{x}}^{ * }\right) ,\ldots , K\left( {\mathbf{x}}_{n},{\mathbf{x}}^{ * }\right) \right\rbrack }^{T} \in {\mathbb{R}}^{n \times 1}
$$
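A minimal numpy sketch of the posterior prediction in equation 4, assuming a Gaussian kernel $K(x, x') = \exp[-\|x - x'\|^2/\theta]$ and the plug-in estimates $\widehat{\mu}$ and ${\widehat{\sigma}}^2$ above (the function name is illustrative):

```python
import numpy as np

def gp_predict(X, y, x_star, theta=1.0):
    """GP posterior mean and variance at x_star (equation 4)."""
    def kern(a, b):
        return np.exp(-np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
                      / theta)
    n = len(y)
    K_inv = np.linalg.inv(kern(X, X))
    mu_hat = y.mean()                          # plug-in MLE of mu
    resid = y - mu_hat
    sigma2_hat = resid @ K_inv @ resid / n     # plug-in MLE of sigma^2
    k_star = kern(X, x_star[None, :])[:, 0]
    mean = mu_hat + k_star @ K_inv @ resid
    var = sigma2_hat * (1.0 - k_star @ K_inv @ k_star)
    return mean, var
```

At a training point, $k^*$ equals a column of $\mathbf{K}$, so the posterior mean reproduces the observed label and the variance vanishes, which is the noise-free interpolation property used in Appendix B.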
## B Proof of Theorem 5.1

Before proving Theorem 5.1, we first provide some preliminary results on the Gaussian kernel matrix.

### B.1 Preliminaries on the Gaussian kernel matrix

Lemma B.1. Let $\mathbf{K}$ be the Gaussian kernel matrix of the vectors $\left( {{\mathbf{c}}_{1},{\mathbf{c}}_{2},\ldots ,{\mathbf{c}}_{m}}\right)$. Since $\mathop{\min }\limits_{{i \neq j}}\begin{Vmatrix}{{\mathbf{c}}_{i} - {\mathbf{c}}_{j}}\end{Vmatrix} > \delta$, we have ${\mathbf{K}}_{ij} < \exp \left\lbrack {-\frac{{\delta }^{2}}{\theta }}\right\rbrack$. Denote $\epsilon = \exp \left\lbrack {-\frac{{\delta }^{2}}{\theta }}\right\rbrack$. Then, ${\mathbf{K}}_{ij}^{-1} > - \epsilon$ if $i \neq j$, and $1 < {\mathbf{K}}_{ii}^{-1} < 1 + \left( {m - 1}\right) {\epsilon }^{2}$.

Proof. Let $\mathbf{K} = \mathbf{I} + \mathbf{A}$. By the Neumann series, ${\mathbf{K}}^{-1} = \mathbf{I} + \mathop{\sum }\limits_{{t = 1}}^{\infty }{\left( -1\right) }^{t}{\mathbf{A}}^{t}$. Thus, ${\mathbf{K}}_{ij}^{-1} > - {\mathbf{A}}_{ij} > - \epsilon$ for $i \neq j$, and $1 < {\mathbf{K}}_{ii}^{-1} < 1 + {\left( {\mathbf{A}}^{2}\right) }_{ii} < 1 + \left( {m - 1}\right) {\epsilon }^{2}$.
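Lemma B.1 is easy to spot-check numerically. The centers, $\theta$, and $\delta$ below are illustrative choices (with $\delta$ a strict lower bound on the minimum pairwise distance of 4), not values from the paper.

```python
import numpy as np

# Gaussian kernel matrix of well-separated 1-d centers and its inverse.
theta = 8.0
centers = np.array([0.0, 4.0, 9.0, 15.0])
m = len(centers)
delta = 3.5                                # strict lower bound on min distance
eps = np.exp(-delta**2 / theta)
K = np.exp(-((centers[:, None] - centers[None, :]) ** 2) / theta)
K_inv = np.linalg.inv(K)
# Lemma B.1: off-diagonals of K^-1 exceed -eps; diagonals lie in
# (1, 1 + (m - 1) * eps^2).
off_diag = K_inv[~np.eye(m, dtype=bool)]
diag = K_inv.diagonal()
```

The bound is loose when the centers are barely separated, which is exactly the regime the $\delta$ condition in Theorem 5.1 rules out.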
### B.2 MaxUncertainty samples ${2m}$ nodes from cluster $m$

During the initial sampling stage, the nodes at the cluster centers are sampled. Then, the variance of a sample $x$ is

$$
\operatorname{Var}\left( {f\left( x\right) }\right) = {\sigma }^{2}\left( {1 - {\mathbf{k}}^{T}{\mathbf{K}}^{-1}\mathbf{k}}\right) , \tag{5}
$$

where $\mathbf{k} = \left( {K\left( {x,{c}_{1}}\right) , K\left( {x,{c}_{2}}\right) ,\ldots , K\left( {x,{c}_{m}}\right) }\right)$ and $\mathbf{K} = \mathbf{K}\left( \mathbf{c}\right)$ is the Gaussian kernel matrix of $\mathbf{c} = \left( {{c}_{1},{c}_{2},\ldots ,{c}_{m}}\right)$.

For a node $x$ from cluster $i$, $\operatorname{Var}\left( {f\left( x\right) }\right)$ is monotone increasing as $x$ moves from the cluster center to the boundary. Let $\omega = \exp \left\lbrack {-\frac{{\left( \delta - {d}_{m}\right) }^{2}}{2\theta }}\right\rbrack$. Since $\left| {x - {c}_{j}}\right| \geq \delta - {d}_{i} \geq \delta - {d}_{m}$ for $j \neq i$, we naturally have ${\mathbf{k}}_{j} < \omega$. Then, following Lemma B.1,

$$
\mathbf{k}{\left( x,\mathbf{c}\right) }^{T}{\mathbf{K}}^{-1}\mathbf{k}\left( {x,\mathbf{c}}\right) \geq \exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack {\mathbf{K}}_{ii}^{-1} > \exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack \tag{6}
$$

With equation 6, we can upper bound the variance of a node $x$ from cluster $i$.

In the next step, we lower bound the variance of $x$ at the boundary of cluster $m$ (the largest cluster), and show that its variance is strictly larger than that of nodes from the other clusters:

$$
\mathbf{k}{\left( x,\mathbf{c}\right) }^{T}{\mathbf{K}}^{-1}\mathbf{k}\left( {x,\mathbf{c}}\right) < \left( {\exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack + \left( {m - 1}\right) {\omega }^{2}}\right) \left\lbrack {1 + \left( {m - 1}\right) {\epsilon }^{2}}\right\rbrack , \tag{7}
$$

Since $\delta \geq {d}_{m} + \sqrt{{d}_{m}^{2} + \theta \log \left( {9m}\right) }$, we have $\left( {m - 1}\right) {\epsilon }^{2} < \left( {m - 1}\right) {\omega }^{2} \leq \frac{1}{9}\exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack < \frac{1}{9}\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack$. Hence,

$$
\text{RHS of equation 7} \leq 2\left( {\exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack + \left( {m - 1}\right) {\omega }^{2}}\right) \leq \frac{1}{2}\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack + 2\left( {m - 1}\right) {\omega }^{2} < \frac{13}{18}\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack \tag{8}
$$

The RHS of equation 7 is strictly smaller than the LHS of equation 6. Therefore, the uncertainty of nodes at the boundary of cluster $m$ is larger than the uncertainty of nodes from the other clusters. In our case, the feature dimension is 1 and only 2 points lie exactly at the boundary of cluster $m$; however, since the nodes are continuously distributed, MaxUncertainty will pick the other $2\left( {m - 1}\right)$ nodes close to the boundary of cluster $m$.
### B.3 Bounding the MSE of MaxUncertainty and DiverseUncertainty

From the previous section, the boundary nodes of cluster $m$ have the largest uncertainty, so MaxUncertainty will sample ${2m}$ nodes from cluster $m$. To lower bound the MSE of MaxUncertainty, we consider the other $\left( {m - 1}\right)$ clusters. Since the Gaussian process model does not have noise, the MSE of the prediction is equal to its variance.

Let $\mathbf{h} = \left( {\mathbf{c},\mathbf{s}}\right) \in {\mathbb{R}}^{3m}$, where $\mathbf{h}$ are the sampled nodes and $\mathbf{s} \in {\mathbb{R}}^{2m}$ are the ${2m}$ nodes sampled during the dynamic sampling stage. Denote by $\mathbf{K}\left( \mathbf{h}\right)$ the Gaussian kernel matrix of $\mathbf{h}$. Let $t = \left| {x - {c}_{i}}\right|$ be the distance from node $x$ to its cluster center. Then

$$
\mathbf{k}{\left( x,\mathbf{h}\right) }^{T}{\mathbf{K}}^{-1}\left( \mathbf{h}\right) \mathbf{k}\left( {x,\mathbf{h}}\right) \leq \left\lbrack {1 + m{\epsilon }^{2} + {2m}{\omega }^{2}}\right\rbrack \left( {\exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack + {3m}{\omega }^{2}}\right) \leq \left( {1 + {3m}{\omega }^{2}}\right) \left( {\exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack + {3m}{\omega }^{2}}\right) \tag{9}
$$

Moreover, we have ${\mathbb{E}}_{t}\left( {\exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack }\right) \leq \frac{1}{2}\left( {1 + \exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack }\right)$. Let ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{4\theta }}\right\rbrack$ and $a = \exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack$. Then

$$
\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }, x \in {\mathcal{S}}_{i}}\right) > {\sigma }^{2}\left\lbrack {\left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}}\right\rbrack . \tag{10}
$$

Hence, $\operatorname{MSE}\left( {f\left( x\right) \mid \text{MaxUncertainty}}\right) > {\sigma }^{2}\mathop{\sum }\limits_{{i = 1}}^{{m - 1}}\frac{{d}_{i}^{2}}{\parallel \mathbf{d}{\parallel }^{2}}\left\lbrack {\left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}}\right\rbrack$. Let $h\left( {r}_{i}^{2}\right) = \left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}$; $h$ is concave in ${d}_{i}^{2}$. Thus,

$$
h\left( {r}_{m}^{2}\right) \leq \tau \mathop{\sum }\limits_{{i = 1}}^{{m - 1}}h\left( {r}_{i}^{2}\right) . \tag{11}
$$

Hence, $\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) > \frac{{\sigma }^{2}}{1 + \tau }\mathop{\sum }\limits_{{i = 1}}^{m}\frac{{d}_{i}^{2}}{\parallel \mathbf{d}{\parallel }^{2}}\left\lbrack {\left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}}\right\rbrack$.

Next, we upper bound the MSE of DiverseUncertainty. Each cluster labels 2 nodes at the cluster boundary, so for a node $x$ from cluster $i$, the distance from $x$ to the closest labeled point is smaller than $\frac{{d}_{i}}{2}$. Hence,

$$
\mathbf{k}{\left( x,\mathbf{h}\right) }^{T}{\mathbf{K}}^{-1}\left( \mathbf{h}\right) \mathbf{k}\left( {x,\mathbf{h}}\right) \geq \exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack + \exp \left\lbrack {-\frac{{\left( {d}_{i} - t\right) }^{2}}{\theta }}\right\rbrack - 2\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack \exp \left\lbrack {-\frac{{t}^{2} + {\left( {d}_{i} - t\right) }^{2}}{2\theta }}\right\rbrack \geq \frac{2{r}_{i}}{1 + {r}_{i}^{2}}. \tag{12}
$$

Thus, we have $\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty, } x \in {\mathcal{S}}_{i}}\right) \leq {\sigma }^{2}\frac{{\left( 1 - {r}_{i}\right) }^{2}}{1 + {r}_{i}^{2}}$ and $\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty }}\right) \leq {\sigma }^{2}\mathop{\sum }\limits_{{i = 1}}^{m}\frac{{d}_{i}^{2}}{\parallel \mathbf{d}{\parallel }^{2}}\frac{{\left( 1 - {r}_{i}\right) }^{2}}{1 + {r}_{i}^{2}}$.

Moreover, $\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty, } x \in {\mathcal{S}}_{i}}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty, } x \in {\mathcal{S}}_{i}}\right) } \geq \frac{1 + {r}_{i}^{2}}{1 - {r}_{i}}\left( {\frac{1}{2} + \frac{a}{6}}\right) - \frac{\left( 1 + {r}_{i}^{2}\right) }{{\left( 1 - {r}_{i}\right) }^{2}}\left( {\frac{2a}{3} + \frac{{a}^{2}}{9}}\right)$. Since $\delta \geq {d}_{m} + {2\theta }\log \left( \frac{3\sqrt{m}}{1 - {r}_{m}}\right)$, we have $a \leq {\left( 1 - {r}_{i}\right) }^{2}$ for all $i = 1,2,\ldots , m$. Thus, $\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty, } x \in {\mathcal{S}}_{i}}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty, } x \in {\mathcal{S}}_{i}}\right) } \geq \frac{1}{2}\frac{1 + {r}_{i}^{2}}{1 - {r}_{i}} - \frac{8}{3} \geq \frac{1}{2}\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3}.$

Now, we can lower bound $\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{MaxUncertainty}}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{DiverseUncertainty}}\right) }$ over the whole sample space:

$$
\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty }}\right) } \geq \frac{1}{2\left( {1 + \tau }\right) }\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3}
$$

Moreover, when $\theta$ is large, ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{4\theta }}\right\rbrack \approx 1 - \frac{{d}_{i}^{2}}{4\theta }$. Thus, $\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty }}\right) } \geq \frac{1}{1 + \tau }O\left( \theta \right)$.
## C Ablation Experiments

### C.1 Detail of round-robin algorithm

Algorithm 3 Random Round-robin Algorithm

---

1: Input: cluster label ${cl}_{i}$ of each node $i \in {\mathcal{V}}_{\text{train}}$, where ${cl}_{i} \in \{ 1,2,\ldots , m\}$; candidate set ${\mathcal{C}}_{t}$; number of nodes to label ${B}_{t}$.
2: Use the cluster labels to split ${\mathcal{C}}_{t}$ into clusters ${A}_{1},{A}_{2},\ldots ,{A}_{m}$. Without loss of generality, $\left| {A}_{1}\right| \leq \left| {A}_{2}\right| \leq \ldots \leq \left| {A}_{m}\right|$.
3: ${S}_{t} = \varnothing$
4: for $i = 1,2,\ldots ,{B}_{t}$ do
5: &nbsp;&nbsp; for $j = 1,2,\ldots , m$ do
6: &nbsp;&nbsp;&nbsp;&nbsp; if ${A}_{j} \neq \varnothing$ then
7: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Uniformly select $x$ from ${A}_{j}$ at random
8: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ${A}_{j} \leftarrow {A}_{j} \smallsetminus \{ x\}$, ${S}_{t} \leftarrow {S}_{t} \cup \{ x\}$
9: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; break
10: &nbsp;&nbsp;&nbsp;&nbsp; end if
11: &nbsp;&nbsp; end for
12: end for
13: return ${S}_{t}$

---
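A direct Python transcription of Algorithm 3 (the function name is illustrative). Note that, as transcribed, the `break` restarts each sweep at the smallest pool, so the smallest clusters are drained first and minority clusters are guaranteed representation; the round-robin of [Cit+21] may rotate differently.

```python
import random

def random_round_robin(cluster_label, candidates, budget, seed=0):
    """Select up to `budget` nodes from `candidates`, one per draw from
    the first nonempty per-cluster pool (pools sorted by size)."""
    rng = random.Random(seed)
    # Split the candidate set C_t into per-cluster pools A_1..A_m.
    pools = {}
    for v in candidates:
        pools.setdefault(cluster_label[v], []).append(v)
    order = sorted(pools, key=lambda c: len(pools[c]))  # |A_1| <= ... <= |A_m|
    selected = []
    for _ in range(budget):
        for c in order:
            if pools[c]:
                selected.append(pools[c].pop(rng.randrange(len(pools[c]))))
                break
    return selected
```

When the budget exceeds the candidate set, the algorithm simply returns every candidate.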
### C.2 Sensitivity to initial sampling ratio



Figure 5: Compare different initial sampling ratios for Cora (left) and Citeseer (right)

### C.3 Compare sampling algorithms



Figure 6: Compare different sampling algorithms to collect ${S}_{t}$ from the candidate set ${\mathcal{C}}_{t}$.

### C.4 Compare clustering algorithms



Figure 7: Compare clustering different targets to select ${S}_{t}$ from the candidate set ${\mathcal{C}}_{t}$.

## D Empirical validation of theory

Graph Simulation Setup. Let the dimension of the input feature be $d = 1$. Simulate $\mathbf{X}$ from two different clusters, where $\left( {X \mid {C}_{1}}\right) \sim$ Uniform(-15, -5) and $\left( {X \mid {C}_{2}}\right) \sim$ Uniform(8, 12). In our simulation, we randomly generated 100 nodes for each cluster. Then, we simulate the edges between nodes. The edges fall into two categories: edges within clusters and edges between clusters. To simulate the edges within clusters, for each node we randomly select two other nodes from the same cluster as its neighbors. For the edges between clusters, we set a probability threshold $r$ such that $\mathrm{P}\left\lbrack {{V}_{i} \in {C}_{1}\text{ connects to a node} \in {C}_{2}}\right\rbrack = r$. For each node ${V}_{i} \in {C}_{1}$, we generate an indicator variable ${I}_{i} \sim$ Bernoulli($r$) to determine whether ${V}_{i}$ is connected to cluster 2 (${V}_{i}$ is connected to cluster 2 if ${I}_{i} = 1$). If so, we randomly pick a node from cluster 2 and connect it to ${V}_{i}$.
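The generator above can be sketched as follows. The 1-layer mean propagation used for the labels is one plausible reading of "feature propagation" in Section 7 and is an assumption, as are the function name and arguments.

```python
import numpy as np

def simulate_graph(r, n_per_cluster=100, seed=0):
    """Two-cluster simulation graph of Appendix D: X|C1 ~ U(-15,-5),
    X|C2 ~ U(8,12); two random same-cluster neighbors per node, and
    each C1 node links to a random C2 node with probability r."""
    rng = np.random.default_rng(seed)
    n = 2 * n_per_cluster
    X = np.concatenate([rng.uniform(-15, -5, n_per_cluster),
                        rng.uniform(8, 12, n_per_cluster)])
    A = np.zeros((n, n))
    for cluster in (range(0, n_per_cluster), range(n_per_cluster, n)):
        for i in cluster:                       # within-cluster edges
            others = [j for j in cluster if j != i]
            for j in rng.choice(others, size=2, replace=False):
                A[i, j] = A[j, i] = 1
    for i in range(n_per_cluster):              # between-cluster edges
        if rng.random() < r:                    # I_i ~ Bernoulli(r)
            j = n_per_cluster + rng.integers(n_per_cluster)
            A[i, j] = A[j, i] = 1
    # Labels y_i = |X_i^(1)|^2, with a 1-layer mean propagation
    # (including the node itself) as the assumed propagation operator.
    deg = A.sum(axis=1) + 1
    X1 = (A @ X + X) / deg
    y = X1 ** 2
    return A, X, y
```

Sweeping `r` from 0 upward reproduces the regime studied in Figure 8, where growing inter-cluster connectivity blurs the propagated features.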
|
| 528 |
+
|
| 529 |
+

|
| 530 |
+
|
| 531 |
+
Figure 8: Compare the MSEs of Uncertainty and DiverseUncertainty algorithms under different correlation levels between clusters.
|
| 532 |
+
|
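The simulation setup above can be sketched in NumPy as follows. This is a minimal sketch under our own assumptions: the threshold value `r = 0.1`, the RNG seed, and all variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_cluster, r = 100, 0.1  # r: cross-cluster connection probability (assumed value)

# Node features: cluster 1 ~ Uniform(-15, -5), cluster 2 ~ Uniform(8, 12)
x1 = rng.uniform(-15, -5, n_per_cluster)
x2 = rng.uniform(8, 12, n_per_cluster)
X = np.concatenate([x1, x2])  # d = 1 feature per node
cluster = np.array([0] * n_per_cluster + [1] * n_per_cluster)

edges = set()
# Within-cluster edges: each node picks two other nodes from its own cluster
for offset in (0, n_per_cluster):
    for i in range(n_per_cluster):
        others = [j for j in range(n_per_cluster) if j != i]
        for j in rng.choice(others, size=2, replace=False):
            edges.add((offset + i, offset + int(j)))

# Between-cluster edges: node i in cluster 1 connects to cluster 2 w.p. r
for i in range(n_per_cluster):
    if rng.random() < r:  # I_i ~ Bernoulli(r)
        j = n_per_cluster + rng.integers(n_per_cluster)
        edges.add((i, int(j)))
```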
papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,316 @@
§ SCATTERSAMPLE: DIVERSIFIED LABEL SAMPLING FOR DATA-EFFICIENT GRAPH NEURAL NETWORK LEARNING

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

§ ABSTRACT

What target labels are most effective for graph neural network (GNN) training? In some applications where GNNs excel, such as drug design or fraud detection, labeling new instances is expensive. We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting. ScatterSample employs a sampling module termed DiverseUncertainty to collect instances with large uncertainty from different regions of the sample space for labeling. To ensure diversification of the selected nodes, DiverseUncertainty clusters the high-uncertainty nodes and selects the representative nodes from each cluster. Our ScatterSample algorithm is further supported by rigorous theoretical analysis demonstrating its advantage over standard active sampling methods that simply maximize uncertainty without diversifying the samples. In particular, we show that ScatterSample is able to efficiently reduce the model uncertainty over the whole sample space. Our experiments on five datasets show that ScatterSample significantly outperforms the other GNN active learning baselines; specifically, it reduces the sampling cost by up to $\mathbf{{50}}\%$ while achieving the same test accuracy.

§ 1 INTRODUCTION
How can we spot the most effective labeled nodes for GNN training? Graph neural networks (GNNs) [KW16; Vel+17; Wu+19a], which employ non-linear and parameterized feature propagation [ZG02] to compute graph representations, have been widely employed in a broad range of learning tasks and achieve state-of-the-art performance in node classification, link prediction and graph classification. Training GNNs for node classification in the supervised learning setup typically requires a large number of labeled examples, so that the GNN can learn from diverse node features and node connectivity patterns. However, labeling costs can be expensive, which inhibits the possibility of acquiring a large number of node labels. For example, GNNs can be used to assist drug design, but evaluating the properties of a molecule is time consuming: it usually takes one to two weeks with current simulation tools, not to mention the cost of laboratory experiments.

Active learning (AL) aims at maximizing the generalization performance under a constrained labeling budget [Set09]. AL algorithms choose which training instances to use as labeled targets to maximize the performance of the learned model. Previous research on AL algorithms for GNN training can be categorized by whether the AL method takes the model weights into account (model aware) or can be applied to any model (model agnostic). Model-agnostic algorithms label a representative subset of the nodes such that the labeled nodes cover the whole sample space [Wu+19b; Zha+21]. Model-aware AL algorithms leverage the GNN model to compute the node uncertainty, which combines both the input features and the graph structure [CZC17; Gao+18]. AL then picks the nodes with the highest uncertainty.

However, maximizing the uncertainty of the labeled nodes may not balance the exploration and exploitation of the classification boundary [KVAG19]. For example, if a group of nodes is close to the classification boundary but clustered in a small region of the graph, labeling only the most uncertain nodes explores that specific region of the classification boundary while other regions are ignored, and the classification boundary is not well explored. Thus, our first main contribution is to simultaneously consider the node uncertainty and the diversification of the uncertain nodes over the sample space.
Challenges of diversifying uncertain nodes. Graph data present additional challenges for diversifying the uncertain nodes. Diversification requires modeling the sample space using carefully selected representations for the nodes. However, a suitable node representation faces two challenges.

Challenge 1: The sample space for graph data requires a representation that takes both the graph structure and the node features into account (see Sec. 4.2).

Challenge 2: The representation should be robust to the model trained so far, and not be biased by the limited amount of available labels.

Our approach. We develop ScatterSample for data-efficient GNN learning. ScatterSample allows us to explore the classification boundary while exploiting the nodes with the highest uncertainty. To diversify the uncertain samples on graph-structured data, ScatterSample includes a DiverseUncertainty module that addresses the two challenges above by clustering the uncertain nodes' representations over the whole sample space.

Our Contributions. The contributions of our work are the following.

* Insight: ScatterSample is the first method that proposes and implements diversification of the uncertain samples for data-efficient GNN learning.

<graphics>

Figure 1: ScatterSample wins: test accuracy vs. sampling ratio on the ogbn-products dataset (62M edges).

* Effectiveness: We evaluate ScatterSample on five different graph datasets, where ScatterSample saves up to ${50}\%$ labeling cost while still achieving the same test accuracy as state-of-the-art baselines.

* Theoretical Guarantees: Our theoretical analysis proves the superiority of ScatterSample over the standard uncertainty-sampling method (see Theorem 5.1). Simulation results further confirm our theory.
§ 2 RELATED WORK

This section reviews uncertainty-based active learning research and implementations of active learning in GNNs.

Active Learning (AL): Active learning aims at selecting a subset of training data as labeling targets such that the model performance is optimized [Set09; Han+14]. Uncertainty sampling is one major approach to active learning, which labels a group of samples to maximally reduce the model uncertainty. To achieve this goal, uncertainty sampling selects samples around the decision boundary [THTS05]. Uncertainty sampling has also been applied in the deep learning field, and researchers have proposed different methods to measure the uncertainty of samples. For example, Ducoffe and Precioso [DP18] developed a margin-based method which uses the distance from a sample to its smallest adversarial sample to approximate the distance to the decision boundary.

AL and GNNs: AL with GNNs requires taking the graph structure information into account during node selection. Wu et al. [Wu+19b] use the propagated features followed by K-Medoids clustering of nodes to select a group of representative instances. Zhang et al. [Zha+21] measure the importance of nodes by combining diversity and influence scores. However, the above approaches do not account for the learned GNN model, which may limit the generalization performance. Uncertainty sampling has also been implemented to select nodes. Cai et al. [CZC17] propose to use a weighted average of the node uncertainty, graph centrality and information density scores. Gao et al. [Gao+18] further propose a different approach to combine the three features with multi-armed bandit techniques. Although useful, these approaches aim to choose the nodes with the highest uncertainty and may be challenged if the selected nodes are clustered in a small region of the graph, which does not provide good graph coverage. Our work addresses this limitation by diversifying the selected nodes based on the graph structure.
§ 3 PRELIMINARIES

Problem Statement. Consider a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$, where $\mathcal{V}$ is the set of $N = \left| \mathcal{V}\right|$ nodes and $\mathcal{E}$ is the set of edges. The set of nodes is divided into the training set ${\mathcal{V}}_{\text{ train }}$, validation set ${\mathcal{V}}_{\text{ valid }}$ and testing set ${\mathcal{V}}_{\text{ test }}$. Each node ${v}_{n} \in \mathcal{V}$ is associated with a feature vector ${\mathbf{x}}_{n} \in {\mathbb{R}}^{d}$ and a label ${y}_{n} \in \{ 1,2,\ldots ,C\}$. Let $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ be the feature matrix of all the nodes in the graph, where the $n$-th row of $\mathbf{X}$ corresponds to ${v}_{n}$, and let $\mathbf{y} = \left( {{y}_{1},{y}_{2},\ldots ,{y}_{N}}\right) \in {\mathbb{R}}^{N}$ be the vector containing all the labels. To learn the labels of the nodes, we train a GNN model $M$ which maps the graph $\mathcal{G}$ and $\mathbf{X}$ to the prediction of labels $\widehat{\mathbf{y}}$.

Active Learning: Active learning picks a subset of nodes $S \subset {\mathcal{V}}_{\text{ train }}$ from the training set and queries their labels ${\mathbf{y}}_{S}$. A GNN model ${M}_{S}$ is trained with respect to the feature matrix $\mathbf{X}$ and ${\mathbf{y}}_{S}$. Given the sampling budget $B$, the goal of active learning is to find a set $S\left( {\left| S\right| \leq B}\right)$ such that the generalization loss is minimized, i.e.

$$
\underset{S : \left| S\right| \leq B}{\arg \min }{\mathbb{E}}_{{v}_{n} \in {\mathcal{V}}_{\text{ test }}}\left( {\ell \left( {{y}_{n},f\left( {{\mathbf{x}}_{n} \mid \mathcal{G},{M}_{S}}\right) }\right) }\right) .
$$
§ 3.1 GRAPH NEURAL NETWORKS AND MESSAGE PASSING

In this section we present the basic operation of a GNN at layer $l$. Under the message passing paradigm, the layer updates of most GNN models can be interpreted as message vectors that are exchanged among neighbors over the edges of the graph.

In the following, let ${\mathbf{h}}_{v}^{\left( l\right) } \in {\mathbb{R}}^{{d}_{1}}$ be the hidden representation for node $v$ at layer $l$, and let $\phi$ be a message function combining the hidden representations of nodes $v,u$. Using the message vectors of neighboring edges, the node representations are updated as follows

$$
{\mathbf{h}}_{v}^{\left( l + 1\right) } = \psi \left( {{\mathbf{h}}_{v}^{\left( l\right) },\rho \left( \left\{ {\phi \left( {{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{h}}_{u}^{\left( l\right) }}\right) : \left( {u,v}\right) \in \mathcal{E}}\right\} \right) }\right) \tag{1}
$$

where $\rho$ is a reduce function used to aggregate the messages coming from the neighbors of $v$ and $\psi$ is an update function defined on each node to update the hidden node representation for layer $l + 1$. By defining $\phi ,\rho ,\psi$, different GNN models can be instantiated [KW16; DBV16; Bro+17; IMG20]. These functions are also parameterized by learnable matrices that are updated during training.
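The update in (1) can be sketched as a single NumPy layer. This is an illustrative instantiation, not a specific model from the paper: we assume $\phi$ passes the neighbor state through unchanged, $\rho$ is the mean, and $\psi$ is a linear update followed by ReLU.

```python
import numpy as np

def gnn_layer(H, edges, W_self, W_neigh):
    """One message-passing layer: h_v' = relu(W_self h_v + W_neigh * mean of neighbor messages)."""
    N, _ = H.shape
    agg = np.zeros_like(H)
    deg = np.zeros(N)
    for u, v in edges:        # message phi(h_v, h_u) = h_u, sent along edge (u, v)
        agg[v] += H[u]
        deg[v] += 1
    agg /= np.maximum(deg, 1)[:, None]   # reduce rho = mean over incoming messages
    return np.maximum(H @ W_self + agg @ W_neigh, 0)  # update psi, here linear + ReLU

# Tiny example: 3 nodes on a path 0-1-2, edges in both directions
H = np.eye(3)
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
W = np.eye(3)
H1 = gnn_layer(H, edges, W, W)
```

With identity weights, node 1 ends up with its own one-hot feature plus the mean of its two neighbors' features, which illustrates how structure mixes into the representation.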
§ 4 PROPOSED METHOD: SCATTERSAMPLE

We propose the ScatterSample algorithm, which dynamically samples a set of diverse nodes with large uncertainties in order to more efficiently explore the classification boundary during GNN training. At each round, our method calculates the uncertainty of all nodes with the GNN model trained so far. Then, ScatterSample clusters the top uncertain nodes and selects nodes from each cluster to obtain diverse samples. The labels of the selected nodes are queried and used as supervision to continue training the GNN model for the next round. This section explains our method in detail.

§ 4.1 SELECTING THE UNCERTAIN NODES
The uncertainty of a node is measured by its information entropy. Given the trained GNN model at the $t$-th sampling round, ScatterSample first computes the information entropy ${\phi }_{\text{ entropy }}\left( {v}_{n}\right)$ of the nodes in ${\mathcal{V}}_{\text{ train }}$ based on the current GNN model, i.e.

$$
{\phi }_{\text{ entropy }}\left( {v}_{n}\right) = - \mathop{\sum }\limits_{{j = 1}}^{C}\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X},M}\right\rbrack \log \left( {\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X},M}\right\rbrack }\right) \tag{2}
$$

where $\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X},M}\right\rbrack$ is the probability that node ${v}_{n}$ belongs to class $j$ given the GNN model $M$. Then, ScatterSample ranks all the nodes in order of decreasing uncertainty, and picks the ones with the largest information entropy into a candidate set ${\mathcal{C}}_{t} \subset {\mathcal{V}}_{\text{ train }}$. Unlike traditional AL techniques that select training targets solely based on uncertainty, we then move on to pick a diverse subset of the uncertain nodes over the sampling space.
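The entropy ranking step can be sketched as follows: given the model's class probabilities, compute (2) per node and keep the most uncertain ones as candidates. Function and variable names are ours.

```python
import numpy as np

def entropy_candidates(probs, n_candidates):
    """Rank nodes by predictive entropy (Eq. 2) and return the indices of the
    n_candidates most uncertain ones. probs: (N, C) class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)          # avoid log(0)
    ent = -(p * np.log(p)).sum(axis=1)      # phi_entropy per node
    return np.argsort(-ent)[:n_candidates]  # largest entropy first

probs = np.array([[0.50, 0.50],   # maximally uncertain
                  [0.90, 0.10],
                  [0.99, 0.01]])
cand = entropy_candidates(probs, 2)
```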
§ 4.2 DIVERSIFYING UNCERTAIN NODES

Our goal is to ensure the diversity of the nodes selected for labeling, by exploring the node distribution over the sample space. At this point, the question naturally arises: how do we model the sample space? We need a representation for nodes to define the space, based on which we can measure the distances between samples. A straightforward approach is to use the GNN embedding space, since the classification boundary is directly depicted there. However, GNN embeddings fail to address the two challenges raised in the introduction.

First, with active learning, only a limited number of labeled nodes are available in the initial stages. Hence, only the already labeled nodes may have reliable GNN embeddings, which biases the subsequent samples. Second, GNN embeddings for node classification may not carry enough information for diversification. GNNs usually do not have an MLP layer connecting to the output, and the final GNN outputs of uncertain nodes are not diverse enough, since highly uncertain nodes may have similar class probabilities (class probabilities close to uniform). Conversely, embeddings of intermediate GNN layers may have an appropriate dimension but lack information about the expanded ego-network.

These drawbacks are confirmed in Sec. 6.2, where we show that using GNN embeddings as proxy representations leads to a performance drop. Moreover, different from other machine learning problems, the nodes are correlated with each other, so we also need to take the graph structure into account when diversifying the samples. Hence, to address all these considerations we employ a $k$-step propagation of the original node features based on the graph structure as a proxy representation for the nodes. The $k$-step propagation of nodes ${\mathbf{X}}^{\left( k\right) } = \left( {{\mathbf{x}}_{1}^{\left( k\right) },{\mathbf{x}}_{2}^{\left( k\right) },\ldots ,{\mathbf{x}}_{N}^{\left( k\right) }}\right)$ is defined as follows
$$
{\mathbf{X}}^{\left( k\right) } \mathrel{\text{ := }} {\mathbf{{SX}}}^{\left( k - 1\right) } \tag{3}
$$
where $\mathbf{S}$ is the normalized adjacency matrix, and ${\mathbf{X}}^{\left( 0\right) }$ are the initial node features. The operation in (3) is efficient and amenable to a mini-batch implementation. Such representations are well known to succinctly encode the node feature distribution and graph structure. Next, we calculate the proxy representations for the candidate high-uncertainty nodes in the set ${\mathcal{C}}_{t}$. To maximize the diversity of the samples, we cluster the proxy representations in ${\mathcal{C}}_{t}$ using $k$-means++ into ${B}_{t}$ clusters [AV06], and select the nodes closest to the cluster centers (in ${L}_{2}$ distance) for labeling. One node from each cluster is selected, which amounts to ${B}_{t}$ samples.
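The propagation in (3) can be sketched with dense NumPy arrays. The paper only says $\mathbf{S}$ is the normalized adjacency matrix; the symmetric normalization with self-loops below is a common choice and our assumption.

```python
import numpy as np

def propagate(X, A, k):
    """k-step feature propagation X^(k) = S^k X, with S the symmetrically
    normalized adjacency (self-loops added; a common convention, assumed here)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    S = A_hat / np.sqrt(np.outer(d, d))  # D^{-1/2} (A + I) D^{-1/2}
    for _ in range(k):
        X = S @ X
    return X

# Two connected nodes: one step averages their (self-loop-augmented) features
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0], [0.0]])
Xk = propagate(X, A, k=1)
```

In practice $\mathbf{S}$ would be kept sparse; the dense form is just for illustration.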
Algorithm 1 ScatterSample Algorithm

1: Input: ${\mathcal{V}}_{\text{ train }}$, GNN model $M$, number of propagation layers $k$, number of sampling rounds $T$, sampling redundancy $r$, initial sampling budget ${B}_{0}$ and total sampling budget $B$.
2: Initialize $S = \varnothing$
3: Compute ${\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{V}}_{\text{ train }}$ as in (3).
4: Initial Sampling:
5: Use $k$-means++ to cluster $\left\{ {\mathbf{x}}_{n}^{\left( k\right) }\right\}$ into ${B}_{0}$ clusters.
6: Add the node closest to the cluster center of each cluster to $S$.
7: Query the labels of nodes ${v}_{n} \in S$, denoted by ${\mathbf{y}}_{S}$.
8: Train model $M$ using $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$.
9: Dynamic Sampling:
10: Initialize sampling round $t = 1$
11: while $t < T$ do
12:   Let ${B}_{t} = \min \left( {B - \left| S\right| ,\left( {B - {B}_{0}}\right) /T}\right)$
13:   Use the DiverseUncertainty algorithm to select ${S}_{t}$
14:   Query the labels of ${S}_{t}$, and update $S = S \cup {S}_{t}$.
15:   Train model $M$ over $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$. Update $t = t + 1$.
16: end while
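The per-round budget rule in Algorithm 1 can be sketched as a tiny helper (integer budgets are our assumption; the paper writes the schedule with real division):

```python
def round_budget(B, B0, T, n_labeled):
    """Per-round budget from Algorithm 1: B_t = min(B - |S|, (B - B0) / T),
    so rounds get an even share of the post-initial budget, capped by what is left."""
    return min(B - n_labeled, (B - B0) // T)
```

For example, with a total budget of 100, an initial budget of 20 and 8 rounds, each round gets 10 labels until the remaining budget runs below that share.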
Clearly, the size of the candidate set satisfies $\left| {\mathcal{C}}_{t}\right| \geq {B}_{t}$; however, deciding how many candidate nodes to choose from is important. We parameterize the size as a multiple of the number of selected nodes, namely $\left| {\mathcal{C}}_{t}\right| = r{B}_{t}$, where $r > 1$ is the sampling redundancy. If $r$ is too small, the selected nodes are closer to the classification boundary (have larger information entropy) but may not be diverse enough. On the other hand, if $r$ is too large, the set will be diverse, but the selected nodes may be far away from the classification boundary. Therefore, it is critical to pick a suitable $r$ to achieve a sweet spot between diversity and uncertainty. We leave the discussion of choosing $r$ to Sec. 6.2. Besides empirical validation with experiments on five real datasets (see Sec. 6), our diversification approach is theoretically motivated (see Sec. 5).

The pseudo code of ScatterSample is shown in Algorithm 1. ScatterSample is a multi-round sampling scheme, which includes an initial sampling step and dynamic sampling steps. ScatterSample
Algorithm 2 DiverseUncertainty Algorithm

Input: ${\mathcal{V}}_{\text{ train }},\left\{ {{\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{C}}_{t}}\right\} ,r,{B}_{t}$
Compute ${\phi }_{\text{ entropy }}\left( v\right) \forall v \in {\mathcal{V}}_{\text{ train }}$; see (2).
${\mathcal{C}}_{t} \leftarrow \left\{ {r{B}_{t}\text{ nodes with largest }{\phi }_{\text{ entropy }}\left( v\right) }\right\}$.
Use $k$-means++ to cluster the ${\mathbf{x}}_{n}^{\left( k\right) }$ (for all $n \in {\mathcal{C}}_{t}$) into ${B}_{t}$ clusters.
${S}_{t} \leftarrow \varnothing$
for $j = 1,2,\ldots ,{B}_{t}$ do
  Compute the cluster center ${\mathbf{v}}_{j}$ of cluster $j$
  Pick node $x \leftarrow \arg \mathop{\min }\limits_{{n \in {\mathcal{C}}_{t}}}\begin{Vmatrix}{{\mathbf{x}}_{n}^{\left( k\right) } - {\mathbf{v}}_{j}}\end{Vmatrix}$
  ${S}_{t} \leftarrow {S}_{t} \cup \{ x\}$
end for
Return ${S}_{t}$
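Algorithm 2 can be sketched end to end as follows. This is a simplified sketch: we run plain Lloyd's k-means with a naive initialization for brevity, whereas the paper uses k-means++ seeding; all names are ours.

```python
import numpy as np

def diverse_uncertainty(Xk, entropy, B_t, r, n_iter=20):
    """Sketch of Algorithm 2: keep the r*B_t most uncertain nodes, cluster their
    propagated features into B_t clusters, and return the node nearest each center."""
    cand = np.argsort(-entropy)[: r * B_t]   # candidate set C_t
    Z = Xk[cand]
    centers = Z[:B_t].astype(float).copy()   # naive init (paper: k-means++)
    for _ in range(n_iter):                  # Lloyd iterations
        assign = np.argmin(((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(B_t):
            if (assign == j).any():
                centers[j] = Z[assign == j].mean(axis=0)
    # pick the node closest to each center (L2 distance)
    picks = [int(cand[np.argmin(((Z - c) ** 2).sum(-1))]) for c in centers]
    return sorted(set(picks))

# Four high-entropy nodes forming two feature clusters, two low-entropy nodes
Xk = np.array([[0.0], [0.1], [10.0], [10.1], [50.0], [51.0]])
entropy = np.array([0.9, 1.0, 0.8, 0.95, 0.1, 0.2])
selected = diverse_uncertainty(Xk, entropy, B_t=2, r=2)
```

On this toy input, one node is returned from each of the two uncertain clusters, while the low-entropy nodes 4 and 5 never enter the candidate set.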
first computes the $k$-step feature propagation of all the nodes in the training set using (3), and clusters the results into ${B}_{0}$ clusters, where ${B}_{0}$ is the initial sampling budget. Then, ScatterSample picks the nodes closest to the cluster centers as the initial training samples and queries their labels. The purpose of clustering the $k$-step feature propagations is to make the initial training set spread out over the whole sample space. This also helps explore the classification boundary: if the initial sampled nodes are not diverse enough, we cannot picture the classification boundary in regions that are far away from the initial training samples. ScatterSample then repeats the dynamic sampling described in Algorithm 2 until the sampling budget $B$ is exhausted. The next section fortifies our diversification method with theoretical guarantees.
§ 5 THEORETICAL ANALYSIS

In Sec. 6.2, we show empirically that DiverseUncertainty is significantly better than the MaxUncertainty algorithm. In this section, we provide theoretical analysis and simulation results to demonstrate the benefits of DiverseUncertainty and explain why the MaxUncertainty algorithm may fail. The results presented here give a theoretical basis for the superiority of our method as established in the experiments in Sec. 6.
§ 5.1 ANALYSIS SETUP

For the analysis, we employ the Gaussian Process (GP) model [O'H78]. GP models offer a flexible approach to model complex functions and are robust to small sample sizes [See04]. Moreover, the uncertainty of the prediction can be easily computed with a GP model. Like neural network models and GNNs, GPs interpolate the observed samples, but they provide a robust framework that is amenable to analysis.

Assume the label ${y}_{i} \in \mathbb{R}$ depends on the propagated features ${\mathbf{x}}_{i}^{\left( k\right) }$ through a GP model: $\left( {\mathbf{y} \mid {\mathbf{X}}^{\left( k\right) }}\right) \sim N\left( {\mathbf{1}\mu ,\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right) }\right)$, where $\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right)$ is the Gaussian kernel matrix. The kernel is parameterized by ${\mathbf{K}}_{ij}\left( {\mathbf{X}}^{\left( k\right) }\right) = K\left( {{\mathbf{x}}_{i}^{\left( k\right) },{\mathbf{x}}_{j}^{\left( k\right) }}\right) = \exp \left( {-\frac{1}{2}{\left( {\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }\right) }^{T}{\mathbf{\Sigma }}^{-1}\left( {{\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }}\right) }\right)$, where $\mathbf{\Sigma } = \operatorname{diag}\left( {{\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{d}}\right)$. Consider that the sample space of ${\mathbf{x}}^{\left( k\right) }$ can be clustered into $m$ clusters ${\mathcal{S}}_{1},{\mathcal{S}}_{2},\ldots ,{\mathcal{S}}_{m}$, with cluster centers ${\mathbf{c}}_{1},{\mathbf{c}}_{2},\ldots ,{\mathbf{c}}_{m}$ and, without loss of generality, cluster radii ${d}_{1} \leq {d}_{2} \leq \cdots \leq {d}_{m}$. The clusters are well separated: the distance between cluster centers is larger than $\delta$, i.e. $\mathop{\min }\limits_{{i \neq j}}{\begin{Vmatrix}{\mathbf{c}}_{i} - {\mathbf{c}}_{j}\end{Vmatrix}}_{2} \geq \delta$ with $\delta > 2{d}_{m}$. Moreover, we assume that no cluster dominates the sample space, i.e. ${d}_{m}^{2} \leq \tau \mathop{\sum }\limits_{{j = 1}}^{{m - 1}}{d}_{j}^{2}$, and that the samples are uniformly distributed over the clusters.
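The GP predictive variance under this kernel, which is the uncertainty that sampling targets, can be sketched for the 1-D case ($d = 1$, $\Sigma = \theta$). Noise-free interpolation with a small jitter for numerical stability is our assumption; names are ours.

```python
import numpy as np

def rbf_kernel(a, b, theta):
    """Gaussian kernel K_ij = exp(-(a_i - b_j)^2 / (2 theta)) for 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * theta))

def gp_posterior_var(x_train, x_test, theta, jitter=1e-8):
    """Posterior variance at x_test given labeled points x_train:
    var(x) = K(x, x) - k(x, X) K(X, X)^{-1} k(X, x)."""
    K = rbf_kernel(x_train, x_train, theta) + jitter * np.eye(len(x_train))
    k_star = rbf_kernel(x_test, x_train, theta)
    return 1.0 - np.einsum('ij,jk,ik->i', k_star, np.linalg.inv(K), k_star)

# One labeled node per cluster: variance collapses at the labeled point,
# but stays near the prior far from both clusters
x_train = np.array([-10.0, 10.0])
x_test = np.array([-10.0, 0.0])
var = gp_posterior_var(x_train, x_test, theta=4.0)
```

This is the mechanism behind Figure 2: points far from every labeled node retain high variance, so labeling within only one cluster leaves the other cluster's variance untouched.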
§ 5.2 MAXUNCERTAINTY VS DIVERSEUNCERTAINTY

Here, we show that DiverseUncertainty achieves a significantly smaller mean squared error (MSE) than MaxUncertainty. Without loss of generality we consider $m$ clusters and the following definitions.

* MaxUncertainty: Select the ${2m}$ most uncertain samples.

* DiverseUncertainty: Select the 2 most uncertain samples from each cluster.

<graphics>

Figure 2: The area enclosed by the blue circles is the sample space of propagated features (2D case). The green stars are the nodes sampled during initial sampling (cluster centers). The red stars are the nodes sampled during uncertainty sampling. (a) MaxUncertainty picks the nodes with the largest uncertainty, which is equivalent to sampling the boundary of cluster 2. (b) DiverseUncertainty diversifies the clustered nodes, and samples the boundary of both clusters.
Before presenting the theory, we illustrate the operation of our method and of MaxUncertainty in Figure 2. ScatterSample first clusters the samples in the propagated feature space (blue circles in Figure 2), and selects the nodes closest to the cluster centers for initial training (green stars in Figure 2). Then, during the dynamic sampling steps, we compute the uncertainty using equation 4. The MaxUncertainty approach selects the nodes with the largest uncertainty; under our setup, this is equivalent to sampling nodes at the boundary of the largest cluster, since the distance to the cluster center is the most important factor of uncertainty (Figure 2(a)). DiverseUncertainty instead diversifies the high-uncertainty nodes, which is equivalent to sampling from the boundary of each cluster (Figure 2(b)). The red stars in Figure 2 show the nodes labeled during the uncertainty sampling stage. Since the MaxUncertainty algorithm only labels nodes in cluster 2, cluster 1 is ignored and its prediction uncertainty cannot be reduced. On the contrary, DiverseUncertainty samples nodes from both clusters 1 and 2, and can thus reduce the prediction uncertainty in both clusters.

The following theorem quantifies the relationship between the MSEs of both algorithms under the setup of Sec. 5.1.
Theorem 5.1. Consider the case where the feature dimension $d = 1$. With the above notation and assumptions, let ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{2\theta }}\right\rbrack$. If ${d}_{m}^{2} \geq {d}_{m - 1}^{2} + 4\log \theta$ and $\delta \geq {d}_{m} + \max \left( {\sqrt{{d}_{m}^{2} + \theta \log \left( {9m}\right) },{2\theta }\log \left( \frac{3\sqrt{m}}{1 - {r}_{m}}\right) }\right)$, we have

$$
\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty }}\right) } \geq \frac{1}{2\left( {1 + \tau }\right) }\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3} = \frac{1}{\tau + 1}O\left( \theta \right) .
$$

Proof: The complete proof is included in Appendix B.

Theorem 5.1 suggests that when the GP function is smooth enough (large $\theta$), MaxUncertainty has a larger MSE than DiverseUncertainty. A large $\theta$ implies a close correlation between the labels of nodes that are close to each other, which is common in graph datasets, where samples clustered together usually have similar labels. Thus, DiverseUncertainty achieves a smaller MSE in this case.
Table 1: Statistics of graph datasets used in experiments.

Data           #Nodes      #Train Nodes   #Edges       #Classes
Cora           2,708       1,208          5,429        7
Citeseer       3,327       1,827          4,732        6
Pubmed         19,717      18,217         44,328       3
Corafull       19,793      18,293         126,842      70
ogbn-products  2,449,029   196,615        61,859,149   47
§ 6 EXPERIMENTS

We evaluate the performance of ScatterSample on five different datasets.

Datasets. We evaluate the different methods on the Cora, Citeseer, Pubmed, Corafull [KW16], and ogbn-products [Hu+20] datasets (Table 1). Except for ogbn-products, we do not keep the original split into training and testing sets: the validation and testing sets follow the split in the dgl package "dgl.data" [Wan+19], and all remaining nodes are added to the training set. Labels can only be queried from the training set.
Baselines. For different sampling budget $B$ , we compare the test accuracy of ScatterSample with the following graph active learning baselines:
|
| 235 |
+
|
| 236 |
+
* Random sampling. Select $B$ nodes uniformly at random from ${\mathcal{V}}_{\text{ train }}$ .
|
| 237 |
+
|
| 238 |
+
* AGE [CZC17]: AGE computes a score which combines the node centrality, information density, and uncertainty, to select $B$ nodes with the highest scores.
|
| 239 |
+
|
| 240 |
+
* ANRMAB [Gao+18]: ANRMAB learns the combination weights of the three metrics used by AGE with multi-armed bandit method.
|
| 241 |
+
|
| 242 |
+
* FeatProp: FeatProp [Wu+19b] clusters the propagated features into $B$ clusters and picks the nodes closest to the cluster centers.

* Grain: Grain [Zha+21] scores each node by a weighted average of an influence score and a diversity score, and selects the top $B$ nodes with the largest scores. Grain includes two different node-selection approaches, Grain (ball-D) and Grain (NN-D).

* ScatterSample: For the small-scale graph datasets (Cora, Citeseer), we set the initial sampling budget to $3\% \cdot \left| {\mathcal{V}}_{\text{ train }}\right|$ and sample $1\% \cdot \left| {\mathcal{V}}_{\text{ train }}\right|$ each round during the dynamic sampling period. For the medium-scale datasets (Pubmed and Corafull), we set the initial sampling budget to $1\% \cdot \left| {\mathcal{V}}_{\text{ train }}\right|$ and sample ${0.5}\% \cdot \left| {\mathcal{V}}_{\text{ train }}\right|$ each dynamic sampling round. For the large-scale dataset (ogbn-products), the initial sampling budget is ${0.2}\% \cdot \left| {\mathcal{V}}_{\text{ train }}\right|$ , and each dynamic sampling round selects ${0.05}\% \cdot \left| {\mathcal{V}}_{\text{ train }}\right|$ nodes.

GNN setup. We train a 2-layer GCN network with hidden layer dimension ${64}$ for Cora, Citeseer and Pubmed, and ${128}$ for Corafull and ogbn-products. To train the GNN, we follow standard random neighbor sampling [HYL17], where for each node we randomly sample 5 neighbors for the convolution operation in each layer. We use the functions in the "dgl" package to train the GNNs [Wan+19].

§ 6.1 PERFORMANCE RESULTS
We compare the performance of different active graph neural network learning algorithms under different labeling budgets $B$ . We parameterize the labeling budget as a proportion of the nodes in the training set $\left( {B = r\left| {\mathcal{V}}_{\text{ train }}\right| }\right)$ . For Cora and Citeseer, we vary $r$ from 5% to 15% in increments of 2%; for Pubmed and Corafull, $r$ is varied from 3% to 10%; for the ogbn-products dataset, we vary $r$ from 0.3% to 1%. The performance of the active learning algorithms is measured by test accuracy.

Accuracy. Figure 3 shows the test accuracy of baselines trained on different proportions of the selected nodes. ScatterSample improves the test accuracy and consistently outperforms other baselines in all the datasets. In Citeseer, ScatterSample requires 9% of the node labels to achieve test accuracy 74.2%, while the best alternative baselines "Grain (ball-D)" and "Grain (NN-D)" need to label 15% of nodes to achieve similar accuracy, which corresponds to a ${40}\%$ savings of the labeling cost. Similarly, in PubMed and ogbn-products, ScatterSample achieves a 50% labeling cost reduction compared to the best alternative baseline.
Efficiency. Here, we compare the computation time among the methods that use the graph structure and node features to select the samples, namely ScatterSample, "Grain (ball-D)" and "Grain (NN-D)". We use the ogbn-products dataset to perform the comparison. ScatterSample takes less than 8 hours to determine the nodes to label and train the GNN, while the Grain algorithm requires more than 240 hours. Grain needs $\mathcal{O}\left( {n}^{2}\right)$ time to calculate the scores of all nodes, which is prohibitive for large graphs.

Complexity analysis. The computation complexity of DiverseUncertainty is $O\left( {\left| E\right| + r{B}_{t}^{2}}\right)$ . This is because ScatterSample consists of two parts: 1) computing the node representations, with complexity $O\left( \left| E\right| \right)$ where $\left| E\right|$ is the number of edges, and 2) clustering the uncertain nodes, with complexity $O\left( {r{B}_{t}^{2}}\right)$ . Since both $r$ and ${B}_{t}$ are small, $r{B}_{t}^{2} < \left| E\right|$ , so our method adds little overhead compared to the model training time.

Figure 3: ScatterSample (blue) wins consistently: comparison of the test accuracy of active GNN learning algorithms at different labeling budgets. The $x$ -axis shows # labeled nodes / # nodes in the training set.

§ 6.2 ABLATION STUDY
The MaxDiversity algorithm of ScatterSample needs to determine the size of the candidate set ${\mathcal{C}}_{t}$ before selecting a subset ${S}_{t}$ from ${\mathcal{C}}_{t}$ for labeling. Hence, the sampling redundancy $r$ and the algorithm used to cluster the nodes in ${\mathcal{C}}_{t}$ both affect the performance of ScatterSample. In this section, we evaluate the effect of both factors.

Figure 4: Comparison of performance under different sampling redundancy $r$ . When $r = 1$ , DiverseUncertainty reduces to the MaxUncertainty method.

Sampling redundancy $r$ : Recall from Algorithm 1 that the sampling redundancy $r$ controls the size of the candidate set ${\mathcal{C}}_{t}$ relative to the size of the sampled set ${S}_{t}$ . When $r = 1$ , ScatterSample reduces to the standard MaxUncertainty algorithm, and Figure 4 shows that sampling only the most uncertain nodes is significantly worse than DiverseUncertainty. For the Citeseer dataset, DiverseUncertainty outperforms MaxUncertainty by over 7% when the sampling ratio is 5%. Therefore, to achieve a good test accuracy, $r$ should be carefully selected. Figure 4 suggests that as $r$ increases, the test accuracy rises quickly at first and then decreases slowly.

Sensitivity to initial sampling ratio: During the initial sampling stage, DiverseUncertainty samples ${B}_{0}$ nodes to train the initial model, which in turn affects the nodes sampled during the dynamic sampling period. We test the effect of different initial sampling ratios on the Cora and Citeseer datasets. We vary the initial sampling ratio from 2% to 4%, and Figure A5 shows that DiverseUncertainty is robust to the choice of initial sampling ratio.

Diverse uncertainty algorithms: Besides the sampling algorithm used by DiverseUncertainty, there are other algorithms to pick the representative nodes ${S}_{t}$ from the candidate set ${\mathcal{C}}_{t}$ . We first evaluate three algorithms to cluster and select the propagated features.

* Random select: randomly pick nodes ${S}_{t}$ from ${\mathcal{C}}_{t}$ .
* DiverseUncertainty: use $k$ -means++ to cluster the nodes in ${\mathcal{C}}_{t}$ and select the nodes closest to the cluster centers.

* Random round-robin algorithm [Cit+21]: use the cluster labels from the initial sampling period (the initial sampling period clusters all the nodes in ${\mathcal{V}}_{\text{ train }}$ ), then follow Algorithm A3 (see Appendix) to select ${S}_{t}$ from ${\mathcal{C}}_{t}$ .

Figure A6 suggests that the $k$ -means++ clustering algorithm achieves better test accuracy in most cases than random selection or the random round-robin algorithm (see Appendix). Moreover, $k$ -means++ is more robust than random selection as the sampling ratio increases: the test accuracy of $k$ -means++ keeps increasing in most cases, while that of random selection fluctuates more.

Another factor that affects the test performance is the clustering target. Besides the propagated features (used by MaxDiversity), we can also cluster the input features or the embedding vectors. Since the GNN models typically used do not have a fully connected layer before the output, we cannot use the output of the second-to-last layer as the embedding; hence, we use the GNN output as the embedding vector for clustering. Figure A7 shows that clustering the propagated features consistently outperforms clustering the other two targets; on Citeseer, it outperforms them by up to 5%. To conclude, the $k$ -means++ clustering algorithm achieves the best performance among the selection methods, and clustering the propagated features is better than clustering the other targets. Thus, DiverseUncertainty uses $k$ -means++ to cluster the propagated features when picking ${S}_{t}$ from ${\mathcal{C}}_{t}$ .

§ 7 EMPIRICAL VALIDATION OF THEOREM
In this section, we perform a simulation analysis to demonstrate that ScatterSample reduces the MSE compared to the greedy uncertainty sampling approach.

Graph Simulation Setup. Let the dimension of the input feature be $d = 1$ . We simulate $\mathbf{X}$ from two different clusters, where $\left( {X \mid {C}_{1}}\right) \sim$ Uniform $(-15, -5)$ and $\left( {X \mid {C}_{2}}\right) \sim$ Uniform $(8, 12)$ . In our simulation, we randomly generated 100 nodes for each cluster. Each node is randomly connected to two other nodes in the same cluster. Moreover, for the edges between clusters, we set a probability threshold $r$ such that $\mathrm{P}\left\lbrack {{V}_{i} \in {C}_{1}\text{ connects to a node } \in {C}_{2}}\right\rbrack = r$ (see Appendix D for details).

Label of nodes. The label of a node depends on its propagated features. First, compute the 1-layer feature propagation of each node, ${\mathbf{X}}^{\left( 1\right) }$ . Then, the label of the $i$ -th node is ${y}_{i} = {\left| {X}_{i}^{\left( 1\right) }\right| }^{2}$ . Because the two cluster centers are equidistant from 0, the label function is also symmetric around 0.

Node sampling. During the initial sampling step, label the nodes closest to the cluster centers and train the GP function. To sample uncertain nodes,
* MaxUncertainty: Label the 8 nodes with largest uncertainty.
* DiverseUncertainty: Collect the top 80 nodes with largest uncertainty into the candidate set. Then, use $k$ -means++ to cluster the nodes in the candidate set into 8 clusters. Label the 8 nodes closest to the cluster centers.
MaxUncertainty and DiverseUncertainty each use their newly labeled nodes to update the GP function. Finally, the trained GP function predicts the node labels, and we compute the corresponding MSE.

Figure A8 in the Appendix shows that MaxUncertainty has larger MSE than the DiverseUncertainty algorithm. For MaxUncertainty, since most of the labeled nodes come from cluster 1, the MSE of cluster 1 is significantly smaller than that of cluster 2, while for DiverseUncertainty the MSEs of clusters 1 and 2 are comparable. As $r$ increases, there are more and more edges between clusters and the propagated features are less separated. Hence, some high-uncertainty nodes from cluster 1 lie very close to cluster 2, which helps MaxUncertainty learn the labels of nodes from cluster 2. Thus, we observe that $\frac{\text{ MSE of MaxUncertainty }}{\text{ MSE of DiverseUncertainty }}$ keeps decreasing as $r$ increases. When $r$ is very large, clusters 1 and 2 merge into one cluster, and the MSEs of the two methods no longer differ significantly.

§ 8 CONCLUSION
Learning a GNN model with a limited labeling budget is an important but challenging problem. In this paper:

* We propose a novel data efficient GNN learning algorithm, ScatterSample, which efficiently diversifies the uncertain nodes and achieves better test accuracy than recent baselines.
* We provide theoretical guarantees: Theorem 5.1 proves the advantage of ScatterSample over MaxUncertainty sampling.
* Experiments on real data show that ScatterSample can save up to 50% of the labeling budget for the same test accuracy.

We envision ScatterSample will inspire future research of combining uncertainty sampling and representation sampling (diversifying).
papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_md/Initial_manuscript.md
# Diffusion Models for Graphs Benefit From Discrete State Spaces
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
Denoising diffusion probabilistic models and score matching models have proven to be very powerful for generative tasks. While these approaches have also been applied to the generation of discrete graphs, they have, so far, relied on continuous Gaussian perturbations. Instead, in this work, we suggest using discrete noise for the forward Markov process. This ensures that in every intermediate step the graph remains discrete. Compared to the previous approach, our experimental results on four datasets and multiple architectures show that using a discrete noising process results in higher-quality generated samples, with average MMDs reduced by a factor of 1.5. Furthermore, the number of denoising steps is reduced from 1000 to 32, leading to a 30 times faster sampling procedure.

## 1 Introduction

Score-based [1] and denoising diffusion probabilistic models (DDPMs) [2, 3] have recently achieved striking results in generative modeling and in particular in image generation. Instead of learning a complex model that generates samples in a single pass (like a Generative Adversarial Network (GAN) [4] or a Variational Auto-Encoder (VAE) [5]), a diffusion model is a parameterized Markov chain trained to reverse an iterative predefined process that gradually transforms a sample into pure noise. Although diffusion processes have been proposed for both continuous [6] and discrete [7] state spaces, their use for graph generation has so far focused on Gaussian diffusion processes, which operate in a continuous state space [8, 9].

This contribution suggests adapting the denoising procedure to an actual graph distribution and using discrete noise, leading to a random graph model. We describe this procedure based on the Discrete DDPM framework proposed by Austin et al. [7], Hoogeboom et al. [10]. Our experiments show that using discrete noise greatly reduces the number of denoising steps that are needed and improves the sample quality. We also suggest the use of a simple expressive graph neural network architecture [11] for denoising, which, while bringing expressivity benefits, contrasts with more complicated architectures currently used for graph denoising [8].
## 2 Related Work
Traditionally, graph generation has been studied through the lens of random graph models [12-14]. While this approach is insufficient to model many real-world graph distributions, it is useful to create synthetic datasets and provides a useful abstraction. In fact, we will use Erdős-Rényi graphs [12] to model the prior distribution of our diffusion process.
Due to their larger number of parameters and expressive power, deep generative models have achieved better results in modeling complex graph distributions. The most successful graph generative models can be divided into two categories: a) auto-regressive graph generative models, which generate the graph sequentially node-by-node [15, 16], and b) one-shot generative models, which generate the whole graph in a single forward pass [8, 9, 17-21]. While auto-regressive models can generate graphs with hundreds or even thousands of nodes, they can suffer from mode collapse [20, 21]. One-shot graph generative models are more resilient to mode collapse but are more challenging to train and still do not scale easily beyond tens of nodes. Recently, one-shot generation has been scaled up to graphs of hundreds of nodes thanks to spectral conditioning [21], suggesting that good conditioning can largely benefit graph generation. Still, the suggested training procedure is cumbersome, as it involves 3 different intertwined Generative Adversarial Networks (GANs). Finally, Variational Auto-Encoders (VAEs) have also been studied for graph generation but remain difficult to train, as the loss function needs to be permutation invariant [22], which can necessitate an expensive graph matching step [17].

In contrast, score-based models [8, 9] have the potential to provide both the simple, stable training objective of auto-regressive models and the good graph distribution coverage of one-shot models. Niu et al. [8] provided the first score-based model for graph generation by directly using the score-based model formulation of Song and Ermon [1] and additionally accounting for the permutation equivariance of graphs. Jo et al. [9] extended this to featured graph generation by formulating the problem as a system of two stochastic differential equations, one for feature generation and one for adjacency generation; the graph and the features are then generated in parallel. This approach provided promising results for molecule generation. Results on slightly larger graphs were also improved but remained imperfect. Importantly, both contributions rely on a continuous Gaussian noise process and use a thousand denoising steps to achieve good results, which makes graph generation slow.

As shown by Song et al. [6], score matching is tightly related to denoising diffusion probabilistic models [3], which provide a more flexible formulation, more easily amenable to graph generation. In particular, for the noisy samples to remain discrete graphs, the perturbations need to be discrete. Such discrete diffusion has been successfully used for quantized image generation [23, 24] and text generation [25]. Diffusion using the multinomial distribution was proposed in Hoogeboom et al. [10]. Then, Austin et al. [7] extended the previous work by Hoogeboom et al. [10], Song et al. [26] and provided a general recipe for denoising diffusion models in discrete state spaces, which mainly requires the specification of a doubly stochastic Markov transition matrix $\mathbf{Q}$ that ensures the Markov process conserves probability mass and converges to a stationary distribution. In the next section, we describe a formulation of this perturbation matrix $\mathbf{Q}$ leading to the Erdős-Rényi random graphs.

## 3 Discrete Diffusion for Simple Graphs
Diffusion models [2] are generative models based on a forward and a reverse Markov process. The forward process, denoted $q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right)$ , generates a sequence of increasingly noisier latent variables ${\mathbf{A}}_{t}$ , from the initial sample ${\mathbf{A}}_{0}$ to white noise ${\mathbf{A}}_{T}$ . Here the sample ${\mathbf{A}}_{0}$ and the latent variables ${\mathbf{A}}_{t}$ are adjacency matrices. The learned reverse process ${p}_{\theta }\left( {\mathbf{A}}_{0 : T}\right) = p\left( {\mathbf{A}}_{T}\right) \mathop{\prod }\limits_{{t = 1}}^{T}{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ attempts to progressively denoise the latent variable ${\mathbf{A}}_{t}$ in order to produce samples from the desired distribution. Here we focus on simple graphs, but the approach extends in a straightforward manner to different edge types. We use the model from [10] and, for convenience, adopt the representation of [7] for our discrete process.

### 3.1 Forward Process
Let the row vector ${\mathbf{a}}_{t}^{ij} \in \{ 0,1{\} }^{2}$ be the one-hot encoding of the $\left( {i, j}\right)$ element of the adjacency matrix ${\mathbf{A}}_{t}$ . Here $t \in \left\lbrack {0, T}\right\rbrack$ denotes the timestep of the process, where ${\mathbf{A}}_{0}$ is a sample from the data distribution and ${\mathbf{A}}_{T}$ is an Erdős-Rényi random graph. The forward process is described as repeated multiplication of each adjacency element row vector, ${\mathbf{a}}_{t}^{ij} = {\mathbf{a}}_{t - 1}^{ij}{\mathbf{Q}}_{t}$ , with a doubly stochastic matrix ${\mathbf{Q}}_{t}$ . Note that the forward process is independent for each edge/non-edge $i \neq j$ . The matrix ${\mathbf{Q}}_{t} \in {\mathbb{R}}^{2 \times 2}$ is modeled as

$$
{\mathbf{Q}}_{t} = \left\lbrack \begin{matrix} 1 - {\beta }_{t} & {\beta }_{t} \\ {\beta }_{t} & 1 - {\beta }_{t} \end{matrix}\right\rbrack , \tag{1}
$$

where ${\beta }_{t}$ is the probability of flipping the edge state ${}^{1}$ . This formulation ${}^{2}$ has the advantage of allowing direct sampling at any timestep of the diffusion process without computing any previous timesteps. Indeed, the matrix ${\overline{\mathbf{Q}}}_{t} = \mathop{\prod }\limits_{{i \leq t}}{\mathbf{Q}}_{i}$ can be expressed in the form of (1) with ${\beta }_{t}$ replaced by ${\bar{\beta }}_{t} = \frac{1}{2} - \frac{1}{2}\mathop{\prod }\limits_{{i \leq t}}\left( {1 - 2{\beta }_{i}}\right)$ . Eventually, we want the probability ${\bar{\beta }}_{t} \in \left\lbrack {0,{0.5}}\right\rbrack$ to vary from 0 (unperturbed sample) to 0.5 (pure noise). In this contribution, we limit ourselves to symmetric graphs and therefore only need to model the upper triangular part of the adjacency matrix. The noise is sampled i.i.d. over all of the edges.

---

${}^{1}$ Note that two different $\beta$ 's could be used for edges and non-edges. This case is left for future work.

${}^{2}$ Note that we use a different parametrization for (1) than [10]. To recover the original formulation, one can simply divide all ${\beta }_{t}$ by 2.

---

### 3.2 Reverse Process
To sample from the data distribution, the forward process needs to be reversed. Therefore, we need to estimate $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ . In our case, using the Markov property of the forward process this can be rewritten as (see Appendix A for derivation):
$$
q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) } . \tag{2}
$$

Note that (2) is entirely defined by ${\beta }_{t}$ , ${\bar{\beta }}_{t}$ , and ${\mathbf{A}}_{0}$ (see Appendix A, Equation 4).

### 3.3 Loss
Diffusion models are typically trained to minimize a variational upper bound on the negative log-likelihood. This bound can be expressed as (see Appendix C or [3, Equation 5]):
$$
{L}_{\mathrm{{vb}}}\left( {\mathbf{A}}_{0}\right) \mathrel{\text{:=}} {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\left\lbrack \underset{{L}_{T}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) }} + \mathop{\sum }\limits_{{t = 1}}^{T}{\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\underset{{L}_{t}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) }}\underset{{L}_{0}}{\underbrace{\, - \,{\mathbb{E}}_{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }}\right\rbrack
$$

Practically, the model is trained to directly minimize the losses ${L}_{t}$ , i.e. the KL divergence ${D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right)$ by using the tractable parametrization of $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ from (2). Note that the discrete setting of the selected noise distribution prevents training the model to approximate the gradient of the distribution as done by score-matching graph generative models [8, 9].
Parametrization of the reverse process. While it is possible to predict the logits of ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ directly in order to minimize ${L}_{\mathrm{{vb}}}$ , we follow [3, 7, 10] and use a network ${\mathrm{{nn}}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ that predicts the logits of the distribution ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ . This parametrization is known to stabilize the training procedure. To minimize ${L}_{\mathrm{{vb}}}$ , (2) can then be used to recover ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ from ${\mathbf{A}}_{0}$ and ${\mathbf{A}}_{t}$ .

Alternate loss. Many implementations of DDPMs find it beneficial to use alternative losses. For instance, [3] derived a simplified loss function that reweights the ELBO, and hybrid losses have been used in [27] and [7]. As shown in Appendix D, using the parametrization ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ , one can express the term ${L}_{t}$ as ${L}_{t} = - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right)$ . Empirically, we found that minimizing

$$
{L}_{\text{simple }} \mathrel{\text{:=}} - {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\mathop{\sum }\limits_{{t = 1}}^{T}\left( {1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}}\right) \cdot {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\log {p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) \tag{3}
$$

leads to stable training and better results. Note that this loss equals the cross-entropy loss between ${\mathbf{A}}_{0}$ and ${\operatorname{nn}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ . The re-weighting $1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}$ , which assigns linearly more importance to the less noisy samples, has been proposed in [23, Equation 7].
### 3.4 Sampling
For each loss, we use a specific sampling algorithm. For both approaches, we start by sampling each edge independently from a Bernoulli distribution with probability $p = 1/2$ (Erdős-Rényi random graph). Then, for the ${L}_{\mathrm{{vb}}}$ loss, we follow Ho et al. [3] and iteratively reverse the chain by Bernoulli-sampling from ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ until we obtain our sample from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right)$ . For the loss function ${L}_{\text{simple }}$ , we sample ${\mathbf{A}}_{0}$ directly from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ at each step $t$ and obtain ${\mathbf{A}}_{t - 1}$ by sampling again from $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right)$ . The two approaches are described algorithmically in Appendix E.

The values of ${\bar{\beta }}_{t}$ are selected following a simple linear schedule for our reverse process [2]. We found this to work similarly well as other options such as a cosine schedule [27]. Note that in this case ${\beta }_{t}$ can be obtained from ${\bar{\beta }}_{t}$ in a straightforward manner (see Appendix B).

## 4 Experiments
We compare our graph discrete diffusion approach to the original score-based approach proposed by Niu et al. [8]. Models using this original formulation are denoted by score. We follow the training and evaluation setup used by previous contributions $\left\lbrack {8,9,{15},{19}}\right\rbrack$ . More details can be found in Appendix G. For evaluation, we compute MMD metrics from [15] between the generated graphs and the test set, namely, the degree distribution, the clustering coefficient, and the 4-node orbit counts. To demonstrate the efficiency of the discrete parameterization, the discrete models only use 32 denoising steps, while the score-based models use 1000 denoising steps, as originally proposed. We compare two architectures: 1. EDP-GNN as introduced by Niu et al. [8], and 2. a simpler and more expressive provably powerful graph network (PPGN) [11]. See Appendix F for a more detailed description of the architectures.
<table><tr><td rowspan="2">Model</td><td colspan="4">Community</td><td colspan="4">Ego</td><td rowspan="2">Total</td></tr><tr><td>Deg.</td><td>Clus.</td><td>Orb.</td><td>Avg.</td><td>Deg.</td><td>Clus.</td><td>Orb.</td><td>Avg.</td></tr><tr><td>GraphRNN ${}^{ \dagger }$</td><td>0.030</td><td>0.030</td><td>0.010</td><td>0.017</td><td>0.040</td><td>0.050</td><td>0.060</td><td>0.050</td><td>0.033</td></tr><tr><td>${\mathrm{{GNF}}}^{ \dagger }$</td><td>0.120</td><td>0.150</td><td>0.020</td><td>0.097</td><td>0.010</td><td>0.030</td><td>0.001</td><td>0.014</td><td>0.055</td></tr><tr><td>EDP-Score ${}^{ \dagger }$</td><td>0.006</td><td>0.127</td><td>0.018</td><td>0.050</td><td>0.010</td><td>0.025</td><td>0.003</td><td>0.013</td><td>0.031</td></tr><tr><td>SDE-Score ${}^{ \dagger }$</td><td>0.045</td><td>0.086</td><td>0.007</td><td>0.046</td><td>0.021</td><td>0.024</td><td>0.007</td><td>0.017</td><td>0.032</td></tr><tr><td>EDP-Score ${}^{3}$</td><td>0.016</td><td>0.810</td><td>0.110</td><td>0.320</td><td>0.04</td><td>0.064</td><td>0.005</td><td>0.037</td><td>0.178</td></tr><tr><td>PPGN-Score</td><td>0.081</td><td>0.237</td><td>0.284</td><td>0.200</td><td>0.019</td><td>0.049</td><td>0.005</td><td>0.025</td><td>0.113</td></tr><tr><td>PPGN ${L}_{\mathrm{{vb}}}$</td><td>0.023</td><td>0.061</td><td>0.015</td><td>0.033</td><td>0.025</td><td>0.039</td><td>0.019</td><td>0.027</td><td>0.03</td></tr><tr><td>PPGN ${L}_{\text{simple }}$</td><td>0.019</td><td>0.044</td><td>0.005</td><td>0.023</td><td>0.018</td><td>0.026</td><td>0.003</td><td>0.016</td><td>0.019</td></tr><tr><td>EDP ${L}_{\text{simple }}$</td><td>0.024</td><td>0.04</td><td>0.012</td><td>0.026</td><td>0.019</td><td>0.031</td><td>0.017</td><td>0.022</td><td>0.024</td></tr></table>
Table 1: MMD results for the Community and the Ego datasets. All values are averaged over 5 runs with 1024 generated samples without any sub-selection. The "Total" column denotes the average MMD over all of the 6 measurements. The best results of the "Avg." and "Total" columns are shown in bold. $\dagger$ marks the results taken from the original papers.
<table><tr><td rowspan="2">Model</td><td colspan="4">SBM-27</td><td colspan="4">Planar-60</td><td rowspan="2">Total</td></tr><tr><td>Deg.</td><td>Clus.</td><td>Orb.</td><td>Avg.</td><td>Deg.</td><td>Clus.</td><td>Orb.</td><td>Avg.</td></tr><tr><td>EDP-Score</td><td>0.014</td><td>0.800</td><td>0.190</td><td>0.334</td><td>1.360</td><td>1.904</td><td>0.534</td><td>1.266</td><td>0.8</td></tr><tr><td>PPGN ${L}_{\text{simple }}$</td><td>0.007</td><td>0.035</td><td>0.072</td><td>0.038</td><td>0.029</td><td>0.039</td><td>0.036</td><td>0.035</td><td>0.036</td></tr><tr><td>EDP ${L}_{\text{simple }}$</td><td>0.046</td><td>0.184</td><td>0.064</td><td>0.098</td><td>0.017</td><td>1.928</td><td>0.785</td><td>0.910</td><td>0.504</td></tr></table>
Table 2: MMD results for the SBM-27 and the Planar- 60 datasets.
Table 1 shows the results for two datasets used by Niu et al. [8], Community-small $\left( {{12} \leq n \leq {20}}\right)$ and Ego-small $\left( {4 \leq n \leq {18}}\right)$ . To better compare our approach to traditional score-based graph generation, in Table 2 we additionally perform experiments on slightly more challenging datasets with larger graphs: a stochastic-block-model (SBM) dataset with three communities and ${24} \leq n \leq {27}$ nodes in total, and a planar dataset with $n = {60}$ nodes. Detailed information on the datasets can be found in Appendix H. Additional details concerning the evaluation setup are provided in Appendix G.4.
Results. In Table 1, we observe that the proposed discrete diffusion process using the ${L}_{\mathrm{{vb}}}$ loss and the PPGN model leads to slightly improved average MMDs over the competitors. The ${L}_{\text{simple }}$ loss further improves the result over ${L}_{\mathrm{{vb}}}$ . The fact that the EDP-${L}_{\text{simple }}$ model has significantly lower MMD values than the EDP-Score model is a strong indication that the proposed loss and the discrete formulation, rather than the PPGN architecture, are the cause of the improvement. This improvement comes with the additional benefit that sampling is greatly accelerated (about 30 times), as the number of timesteps is reduced from 1000 to 32. Table 2 shows that the proposed discrete formulation is even more beneficial as graph size and complexity increase. PPGN-Score even becomes infeasible to run in this setting due to the prohibitively expensive sampling procedure. A qualitative evaluation of the generated graphs is performed in Appendix I. Visually, the ${L}_{\text{simple }}$ loss leads to the best samples.
## 5 Conclusion
In this work, we demonstrated that discrete diffusion can increase sample quality and greatly improve the efficiency of denoising diffusion for graph generation. While the approach was presented for simple graphs with non-attributed edges, it could also be extended to cover graphs with edge attributes.
---
${}^{3}$ The discrepancy with the SDE-Score ${}^{ \dagger }$ results comes from the fact that we were unable to reproduce the original results using the code provided by the authors. Interestingly, their code leads to good results when used with our discrete formulation and ${L}_{\text{simple }}$ loss, improving over the results reported in their contribution.
---
## References
[1] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019. 1, 2
[2] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015. 1, 2, 4
[3] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020. 1, 2, 3
[4] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020. 1
[5] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 2014. 1
[6] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. 1, 2
[7] Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. Advances in Neural Information Processing Systems, 34:17981-17993, 2021. 1, 2, 3
[8] Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, and Stefano Ermon. Permutation invariant graph generation via score-based generative modeling. In International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 4474-4484, Online, 26-28 Aug 2020. PMLR. 1, 2, 3, 4, 9, 10, 12, 13, 14, 15
[9] Jaehyeong Jo, Seul Lee, and Sung Ju Hwang. Score-based generative modeling of graphs via the system of stochastic differential equations. In Proceedings of the International Conference on Machine Learning (ICML), 2022. 1, 2, 3, 4
[10] Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems, 34:12454-12465, 2021. 1, 2, 3
[11] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In Advances in Neural Information Processing Systems, pages 2156-2167, 2019. 1, 4, 9
[12] Paul Erdős, Alfréd Rényi, et al. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17-60, 1960. 1, 10
[13] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social networks, 5(2):109-137, 1983.
[14] Justin Eldridge, Mikhail Belkin, and Yusu Wang. Graphons, mergeons, and so on! In Advances in Neural Information Processing Systems, pages 2307-2315, 2016. 1
[15] Jiaxuan You, Rex Ying, Xiang Ren, William L Hamilton, and Jure Leskovec. Graphrnn: Generating realistic graphs with deep auto-regressive models. In ICML, 2018. 1, 4
[16] Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Will Hamilton, David K Duvenaud, Raquel Urtasun, and Richard Zemel. Efficient graph generation with graph recurrent attention networks. In Advances in Neural Information Processing Systems, pages 4255-4265, 2019. 1
[17] Martin Simonovsky and Nikos Komodakis. Graphvae: Towards generation of small graphs using variational autoencoders. In International conference on artificial neural networks, pages 412-422. Springer, 2018. 1, 2
[18] Nicola De Cao and Thomas Kipf. MolGAN: An implicit generative model for small molecular graphs. ICML 2018 workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.
[19] Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, and Kevin Swersky. Graph normalizing flows. Advances in Neural Information Processing Systems, 32:13578-13588, 2019. 4
[20] Igor Krawczuk, Pedro Abranches, Andreas Loukas, and Volkan Cevher. Gg-gan: A geometric graph generative adversarial network. 2020. 1
[21] Karolis Martinkus, Andreas Loukas, Nathanaël Perraudin, and Roger Wattenhofer. Spectre: Spectral conditioning helps to overcome the expressivity limits of one-shot graph generators. In Proceedings of the International Conference on Machine Learning (ICML), 2022. 1, 2, 10, 11
[22] Clement Vignac and Pascal Frossard. Top-n: Equivariant set and graph generation without exchangeability. In International Conference on Learning Representations, 2022. 2
[23] Sam Bond-Taylor, Peter Hessey, Hiroshi Sasaki, Toby P. Breckon, and Chris G. Willcocks. Unleashing transformers: Parallel token prediction with discrete absorbing diffusion for fast high-resolution image generation from vector-quantized codes. In European Conference on Computer Vision (ECCV), 2022. 2, 3
[24] Patrick Esser, Robin Rombach, Andreas Blattmann, and Bjorn Ommer. Imagebart: Bidirectional context with multinomial diffusion for autoregressive image synthesis. Advances in Neural Information Processing Systems, 34:3518-3532, 2021. 2
[25] Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. Autoregressive diffusion models. In International Conference on Learning Representations, 2022. 2
[26] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2020. 2
[27] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162-8171. PMLR, 2021. 3, 4
[28] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. 9
[29] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. Collective classification in network data articles. AI Magazine, 29:93-106, 09 2008. doi: 10.1609/aimag.v29i3.2157. 10
[30] Der-Tsai Lee and Bruce J Schachter. Two algorithms for constructing a delaunay triangulation. International Journal of Computer & Information Sciences, 9(3):219-242, 1980. 11
## A Reverse Process Derivations
In this appendix, we provide the derivation of the reverse probability $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ . Using Bayes' rule, we obtain
$$
q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) \cdot q\left( {{\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }
$$
$$
= \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \cdot q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) q\left( {\mathbf{A}}_{0}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) \cdot q\left( {\mathbf{A}}_{0}\right) }
$$
$$
= q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \cdot \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) },
$$
where we use the fact that $q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) = q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right)$ since ${\mathbf{A}}_{t}$ is independent of ${\mathbf{A}}_{0}$ given ${\mathbf{A}}_{t - 1}$ .
This reverse probability is entirely determined by ${\beta }_{t}$ and ${\bar{\beta }}_{t}$ . For the $\left( {i, j}\right)$ entry of $\mathbf{A}$ (denoted ${\mathbf{A}}^{ij}$ ), we obtain:
$$
q\left( {{\mathbf{A}}_{t - 1}^{ij} = 1 \mid {\mathbf{A}}_{t}^{ij},{\mathbf{A}}_{0}^{ij}}\right) = \left\{ \begin{array}{ll} \left( {1 - {\beta }_{t}}\right) \cdot \frac{\left( 1 - {\bar{\beta }}_{t - 1}\right) }{1 - {\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 1,{\mathbf{A}}_{0}^{ij} = 1 \\ \left( {1 - {\beta }_{t}}\right) \cdot \frac{{\bar{\beta }}_{t - 1}}{{\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 1,{\mathbf{A}}_{0}^{ij} = 0 \\ {\beta }_{t} \cdot \frac{\left( 1 - {\bar{\beta }}_{t - 1}\right) }{{\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 0,{\mathbf{A}}_{0}^{ij} = 1 \\ {\beta }_{t} \cdot \frac{{\bar{\beta }}_{t - 1}}{1 - {\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 0,{\mathbf{A}}_{0}^{ij} = 0 \end{array}\right. \tag{4}
$$
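As a sanity check of (4), the sketch below (our own, with illustrative values for $\beta_t$ and ${\bar{\beta}}_{t-1}$) compares the closed-form posterior against a direct application of Bayes' rule to the Bernoulli flip kernels.

```python
import itertools

def q_step(a_next, a_prev, beta):
    # One-step forward kernel: the edge flips with probability beta.
    return 1.0 - beta if a_next == a_prev else beta

def posterior_eq4(a_t, a_0, beta_t, bb_t, bb_tm1):
    # q(A_{t-1}^{ij} = 1 | A_t^{ij}, A_0^{ij}) as given in Eq. (4).
    if a_t == 1 and a_0 == 1:
        return (1 - beta_t) * (1 - bb_tm1) / (1 - bb_t)
    if a_t == 1 and a_0 == 0:
        return (1 - beta_t) * bb_tm1 / bb_t
    if a_t == 0 and a_0 == 1:
        return beta_t * (1 - bb_tm1) / bb_t
    return beta_t * bb_tm1 / (1 - bb_t)

# Illustrative values: per-step and cumulative flip probabilities.
beta_t, bb_tm1 = 0.1, 0.2
bb_t = bb_tm1 * (1 - beta_t) + (1 - bb_tm1) * beta_t  # composed flip prob.

# Bayes rule: q(A_{t-1}=1 | A_t, A_0) = q(A_t | A_{t-1}=1) q(A_{t-1}=1 | A_0) / q(A_t | A_0)
for a_t, a_0 in itertools.product([0, 1], repeat=2):
    num = q_step(a_t, 1, beta_t) * q_step(1, a_0, bb_tm1)
    den = q_step(a_t, a_0, bb_t)
    assert abs(posterior_eq4(a_t, a_0, beta_t, bb_t, bb_tm1) - num / den) < 1e-12
```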
## B Conversion of ${\bar{\beta }}_{t}$ to ${\beta }_{t}$
The selected linear schedule provides us with the values of ${\bar{\beta }}_{t}$ . In this appendix, we derive an expression for ${\beta }_{t}$ in terms of ${\bar{\beta }}_{t}$ , which allows easy computation of (2). By definition, we have ${\overline{\mathbf{Q}}}_{t} = {\overline{\mathbf{Q}}}_{t - 1}{\mathbf{Q}}_{t}$ , which is equivalent to
$$
\left( \begin{matrix} 1 - {\bar{\beta }}_{t - 1} & {\bar{\beta }}_{t - 1} \\ {\bar{\beta }}_{t - 1} & 1 - {\bar{\beta }}_{t - 1} \end{matrix}\right) \left( \begin{matrix} 1 - {\beta }_{t} & {\beta }_{t} \\ {\beta }_{t} & 1 - {\beta }_{t} \end{matrix}\right) = \left( \begin{matrix} 1 - {\bar{\beta }}_{t} & {\bar{\beta }}_{t} \\ {\bar{\beta }}_{t} & 1 - {\bar{\beta }}_{t} \end{matrix}\right)
$$
Selecting the equality of the entries in the first row and first column, we obtain the following equation
$$
\left( {1 - {\bar{\beta }}_{t - 1}}\right) \left( {1 - {\beta }_{t}}\right) + {\bar{\beta }}_{t - 1}{\beta }_{t} = 1 - {\bar{\beta }}_{t},
$$
which, after some arithmetic, provides us with the desired answer
$$
{\beta }_{t} = \frac{{\bar{\beta }}_{t - 1} - {\bar{\beta }}_{t}}{2{\bar{\beta }}_{t - 1} - 1}.
$$
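The conversion can be checked numerically. The sketch below assumes a linear schedule on ${\bar{\beta }}_{t}$ from 0 to 1/2 over $T = 32$ steps (matching the number of denoising steps used in our experiments), recovers the per-step ${\beta }_{t}$, and verifies that composing the per-step flips reproduces ${\bar{\beta }}_{t}$.

```python
import numpy as np

T = 32
# Linear schedule on the cumulative flip probability beta_bar, going from
# 0 at t = 0 to 1/2 at t = T (uniform Erdos-Renyi noise at the end).
beta_bar = np.linspace(0.0, 0.5, T + 1)

# Per-step flip probabilities recovered from the cumulative schedule:
# beta_t = (beta_bar_{t-1} - beta_bar_t) / (2 * beta_bar_{t-1} - 1).
beta = (beta_bar[:-1] - beta_bar[1:]) / (2.0 * beta_bar[:-1] - 1.0)

# Sanity check: composing the per-step flips reproduces beta_bar.
acc = 0.0
for t in range(T):
    acc = acc * (1.0 - beta[t]) + (1.0 - acc) * beta[t]
    assert abs(acc - beta_bar[t + 1]) < 1e-12
```

Note that the final step has ${\beta }_{T} = 1/2$: once the chain reaches uniform noise, one more uniform flip keeps it there.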
## C ELBO Derivation
The general Evidence Lower Bound (ELBO) formula states that
$$
\log \left( {{p}_{\theta }\left( x\right) }\right) \geq {\mathbb{E}}_{z \sim q}\left\lbrack {\log \left( \frac{p\left( {x, z}\right) }{q\left( z\right) }\right) }\right\rbrack
$$
for any distribution $q$ and latent $z$ . In our case, we use ${\mathbf{A}}_{1 : T}$ as the latent variable and obtain
$$
- \log \left( {{p}_{\theta }\left( {\mathbf{A}}_{0}\right) }\right) \leq {\mathbb{E}}_{{\mathbf{A}}_{1 : T} \sim q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) }\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {\mathbf{A}}_{0 : T}\right) }{q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack \mathrel{\text{:=}} {L}_{\mathrm{{vb}}}\left( {\mathbf{A}}_{0}\right)
$$
We define ${L}_{\mathrm{{vb}}} = \mathbb{E}\left\lbrack {{L}_{\mathrm{{vb}}}\left( {\mathbf{A}}_{0}\right) }\right\rbrack$ and obtain
$$
{L}_{\mathrm{{vb}}} = {\mathbb{E}}_{q\left( {\mathbf{A}}_{0 : T}\right) }\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {\mathbf{A}}_{0 : T}\right) }{q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack
$$
$$
= {\mathbb{E}}_{q}\left\lbrack {-\log \left( {{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) - \mathop{\sum }\limits_{{t = 1}}^{T}\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) }\right) }\right\rbrack
$$
$$
= {\mathbb{E}}_{q}\left\lbrack {-\log \left( {{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) - \mathop{\sum }\limits_{{t = 2}}^{T}\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) }\right) - \log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack
$$
$$
= {\mathbb{E}}_{q}\left\lbrack {-\log \left( {{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) - \mathop{\sum }\limits_{{t = 2}}^{T}\log \left( {\frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) } \cdot \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }}\right) - \log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack \tag{5}
$$
$$
= {\mathbb{E}}_{q}\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }{q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) }\right) - \mathop{\sum }\limits_{{t = 2}}^{T}\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\right) - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }\right\rbrack
$$
$$
= {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\left\lbrack {{D}_{KL}\left( {q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) + \mathop{\sum }\limits_{{t = 2}}^{T}{\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) }\right.
$$
$$
\left. {-{\mathbb{E}}_{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }\right\rbrack
$$
where (5) follows from
$$
q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) q\left( {{\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }
$$
$$
= \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }.
$$
## D Simple Loss
Using the parametrization ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ , we can simplify the KL divergence in the term ${L}_{t}$ :
$$
{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) = {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\right) }\right\rbrack
$$
$$
= {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\left\lbrack {-\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right) }\right\rbrack
$$
$$
= - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right)
$$
We note that this term corresponds to the cross-entropy between the distribution ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ and the ground truth ${\mathbf{A}}_{0}$ .
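In practice, this means the per-step loss reduces to an edgewise binary cross-entropy. A minimal numpy sketch (our own helper, operating on the upper triangle of a simple undirected graph's adjacency matrix):

```python
import numpy as np

def l_simple(p_a0_given_at, a0, eps=1e-12):
    # Edgewise cross-entropy between predicted edge probabilities
    # p_theta(A_0 | A_t) and the ground-truth adjacency A_0, averaged
    # over the upper triangle (the graph is simple and undirected).
    iu = np.triu_indices(a0.shape[0], k=1)
    p, a = p_a0_given_at[iu], a0[iu]
    return float(np.mean(-(a * np.log(p + eps) + (1 - a) * np.log(1 - p + eps))))

a0 = np.array([[0.0, 1.0], [1.0, 0.0]])
pred = np.array([[0.0, 0.9], [0.9, 0.0]])
loss = l_simple(pred, a0)  # low loss: prediction close to the ground truth
```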
## E Sampling Algorithms
In Algorithms 1 and 2, we provide an algorithmic description of the two sampling approaches described in Section 3.4. ${\mathcal{B}}_{p = 1/2}$ denotes the Bernoulli distribution with parameter $p = 1/2$ , which corresponds to the Erdős-Rényi random graph model.
Algorithm 1 Sampling for ${L}_{\mathrm{{vb}}}$
---
$\forall i, j \mid i > j : {\mathbf{A}}_{T}^{ij} \sim {\mathcal{B}}_{p = 1/2}$
for $t = T,\ldots ,1$ do
Compute ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$
${\mathbf{A}}_{t - 1} \sim {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$
end for
---
Algorithm 2 Sampling for ${L}_{\text{simple }}$
---
$\forall i, j \mid i > j : {\mathbf{A}}_{T}^{ij} \sim {\mathcal{B}}_{p = 1/2}$
for $t = T,\ldots ,1$ do
${\widetilde{\mathbf{A}}}_{0} \sim {p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$
${\mathbf{A}}_{t - 1} \sim q\left( {{\mathbf{A}}_{t - 1} \mid {\widetilde{\mathbf{A}}}_{0}}\right)$
end for
---
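The ${L}_{\text{simple }}$ sampler of Algorithm 2 can be sketched in a few lines of numpy. The model call is a placeholder: a trained network (e.g. a PPGN) would predict edge probabilities, and a constant prediction stands in here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 32, 8
beta_bar = np.linspace(0.0, 0.5, T + 1)  # cumulative flip probabilities

def model_p_a0(a_t, t):
    # Placeholder for p_theta(A_0 | A_t): a real model would predict
    # per-edge probabilities from the noisy graph and the noise level.
    return np.full_like(a_t, 0.5, dtype=float)

def sample_l_simple():
    # Algorithm 2: at each step, predict A_0, then re-noise it to t - 1.
    a = (rng.random((n, n)) < 0.5).astype(float)          # A_T ~ B_{p=1/2}
    for t in range(T, 0, -1):
        p0 = model_p_a0(a, t)
        a0_hat = (rng.random((n, n)) < p0).astype(float)  # ~ p_theta(A_0 | A_t)
        flip = rng.random((n, n)) < beta_bar[t - 1]       # ~ q(A_{t-1} | A_0)
        a = np.where(flip, 1.0 - a0_hat, a0_hat)
    # A real implementation works on the upper triangle only; we
    # symmetrize at the end and remove self-loops for a simple graph.
    a = np.triu(a, 1)
    return a + a.T
```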
## F Models
### F.1 Edgewise Dense Prediction Graph Neural Network (EDP-GNN)
The EDP-GNN model introduced by Niu et al. [8] extends GIN [28] to work with multi-channel adjacency matrices. This means that a GIN graph neural network is run on multiple different adjacency matrices (channels) and the different outputs are concatenated to produce new node embeddings:
$$
{\mathbf{X}}_{c}^{{\left( k + 1\right) }^{\prime }} = {\widetilde{\mathbf{A}}}_{c}^{\left( k\right) }{\mathbf{X}}^{\left( k\right) } + \left( {1 + \epsilon }\right) {\mathbf{X}}^{\left( k\right) },
$$
$$
{\mathbf{X}}^{\left( k + 1\right) } = \operatorname{Concat}\left( {\mathbf{X}}_{c}^{{\left( k + 1\right) }^{\prime }}\right. \text{for}\left. {c \in \left\{ {1,\ldots ,{C}^{\left( k + 1\right) }}\right\} }\right) \text{,}
$$
where $\mathbf{X} \in {\mathbb{R}}^{n \times h}$ is the node embedding matrix with hidden dimension $h$ and ${C}^{\left( k\right) }$ is the number of channels in the input multi-channel adjacency matrix ${\widetilde{\mathbf{A}}}^{\left( k\right) } \in {\mathbb{R}}^{{C}^{\left( k\right) } \times n \times n}$ , at layer $k$ . The adjacency matrices for the next layer are produced using the node embeddings:
$$
{\widetilde{\mathbf{A}}}_{\cdot , i, j}^{\left( k + 1\right) } = \operatorname{MLP}\left( {{\widetilde{\mathbf{A}}}_{\cdot , i, j}^{\left( k\right) },{\mathbf{X}}_{i},{\mathbf{X}}_{j}}\right) .
$$
For the first layer, EDP-GNN computes two adjacency matrix channels ${\widetilde{\mathbf{A}}}^{\left( 0\right) }$ : the original input adjacency $\mathbf{A}$ and its complement $\mathbf{1}{\mathbf{1}}^{T} - \mathbf{A}$ . Node degrees are used as the initial node features, ${\mathbf{X}}^{\left( 0\right) } = \mathop{\sum }\limits_{i}{\mathbf{A}}_{i}$ .
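The multi-channel update can be sketched as follows. This is our own minimal numpy version, omitting the MLPs and the noise conditioning:

```python
import numpy as np

def edp_gnn_layer(A_channels, X, eps=0.0):
    # Multi-channel GIN-style update (sketch): run the aggregation on each
    # adjacency channel and concatenate the per-channel node embeddings.
    outs = [Ac @ X + (1 + eps) * X for Ac in A_channels]
    return np.concatenate(outs, axis=-1)

# Random simple graph on 5 nodes.
A = (np.random.rand(5, 5) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T

X0 = A.sum(axis=1, keepdims=True)  # node degrees as initial features
X1 = edp_gnn_layer([A, np.ones((5, 5)) - A], X0)
# X1 stacks one embedding block per channel: shape (5, 2) here
```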
To produce the final output, the outputs of all intermediate layers are concatenated:
$$
\widetilde{\mathbf{A}} = {\operatorname{MLP}}_{\text{out }}\left( {\operatorname{Concat}\left( {{\widetilde{\mathbf{A}}}^{\left( k\right) }\text{ for }k \in \{ 1,\ldots , K\} }\right) }\right) .
$$
The final layer always has only one output channel, such that ${\mathbf{A}}_{\left( t\right) } = \operatorname{EDP-GNN}\left( {\mathbf{A}}_{\left( t - 1\right) }\right)$ .
To condition the model on the given noise level ${\bar{\beta }}_{t}$ , noise-level-dependent scale and bias parameters ${\mathbf{\alpha }}_{t}$ and ${\gamma }_{t}$ are introduced to each layer $f$ of every MLP:
$$
f\left( {\widetilde{\mathbf{A}}}_{\cdot , i, j}\right) = \operatorname{activation}\left( {\left( {\mathbf{W}{\widetilde{\mathbf{A}}}_{\cdot , i, j} + \mathbf{b}}\right) {\mathbf{\alpha }}_{t} + {\mathbf{\gamma }}_{t}}\right) .
$$
### F.2 Provably Powerful Graph Network (PPGN)
The input to the PPGN model used is the adjacency matrix ${\mathbf{A}}_{t}$ concatenated with the diagonal matrix ${\overline{\mathbf{\beta }}}_{t} \cdot \mathbf{I}$ , resulting in an input tensor ${\mathbf{X}}_{in} \in {\mathbb{R}}^{n \times n \times 2}$ . The output tensor is ${\mathbf{X}}_{\text{out }} \in {\mathbb{R}}^{n \times n \times 1}$ , where each ${\left\lbrack {\mathbf{X}}_{\text{out }}\right\rbrack }_{ij}$ represents $p\left( {{\left\lbrack {\mathbf{A}}_{0}\right\rbrack }_{ij} \mid {\left\lbrack {\mathbf{A}}_{t}\right\rbrack }_{ij}}\right)$ .
Our PPGN implementation, which closely follows Maron et al. [11], is structured as follows. Let $\mathbf{P}$ denote the PPGN model; then
$$
\mathbf{P}\left( {\mathbf{X}}_{\text{in }}\right) \mathrel{\text{:=}} \left( {{l}_{\text{out }} \circ C}\right) \left( {\mathbf{X}}_{\text{in }}\right) \tag{6}
$$
$$
C : {\mathbb{R}}^{n \times n \times 2} \rightarrow {\mathbb{R}}^{n \times n \times \left( {d \cdot h}\right) } \tag{7}
$$
$$
C\left( {\mathbf{X}}_{in}\right) \mathrel{\text{:=}} \operatorname{Concat}\left( {\left( {{B}_{d} \circ \ldots \circ {B}_{1}}\right) \left( {\mathbf{X}}_{in}\right) ,\left( {{B}_{d - 1} \circ \ldots \circ {B}_{1}}\right) \left( {\mathbf{X}}_{in}\right) ,\ldots ,{B}_{1}\left( {\mathbf{X}}_{in}\right) }\right) \tag{8}
$$
The set $\left\{ {{B}_{1},\ldots ,{B}_{d}}\right\}$ consists of $d$ different powerful layers implemented as proposed by Maron et al. [11]. We pass the input through increasing numbers of these powerful layers and concatenate their respective outputs into one tensor of size $n \times n \times \left( {d \cdot h}\right)$ . These powerful layers are functions with signatures:
$$
\forall {B}_{i} \in \left\{ {{B}_{2},\ldots ,{B}_{d}}\right\} ,{B}_{i} : {\mathbb{R}}^{n \times n \times h} \rightarrow {\mathbb{R}}^{n \times n \times h} \tag{9}
$$
$$
{B}_{1} : {\mathbb{R}}^{n \times n \times 2} \rightarrow {\mathbb{R}}^{n \times n \times h}. \tag{10}
$$
Finally, we use an MLP to reduce the dimensionality of each matrix element down to 1, so that we can treat the output as an adjacency matrix.
$$
{l}_{\text{out }} : {\mathbb{R}}^{d \cdot h} \rightarrow {\mathbb{R}}^{1}, \tag{11}
$$
where ${l}_{\text{out }}$ is applied to each element ${\left\lbrack C\left( {\mathbf{X}}_{in}\right) \right\rbrack }_{i, j}$ of the tensor $C\left( {\mathbf{X}}_{in}\right)$ over all its $d \cdot h$ channels. It reduces the number of channels down to a single one, which represents $p\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ .
## G Training Setup
### G.1 EDP-GNN
The model training setup and hyperparameters for EDP-GNN were taken directly from [8]. We used 4 message-passing steps for each GIN and stacked 5 EDP-GNN layers, for which the maximum number of channels is always set to 4 and the maximum number of node features to 16. We use 32 denoising steps for all datasets besides Planar-60, where we used 256, as opposed to the 6 noise levels with 1000 sampling steps per level used in the score-based approach.
### G.2 PPGN
The PPGN model used for the Ego-small, Community-small, and SBM-27 datasets consists of 6 layers $\left\{ {{B}_{1},\ldots ,{B}_{6}}\right\}$ . After each powerful layer, we apply instance normalization. The hidden dimension was set to 16. For the Planar-60 dataset, we used 8 layers and a hidden dimension of 128. We used a batch size of 64 for all datasets and the Adam optimizer with the following parameters: learning rate 0.001, betas (0.9, 0.999), and weight decay 0.999.
### G.3 Model Selection
We performed a simple model selection where the model which achieves the best training loss is saved and used to generate graphs for testing. We also investigated the use of a validation split and computation of MMD scores versus this validation split for model selection, but we did not find this to produce better results while adding considerable computational overhead.

### G.4 Additional Details on Experimental Setup

Here we provide some details concerning the experimental setup for the results in Tables 1 and 2.

Details for the MMD results in Table 1: From the original paper by Niu et al. [8], it is unclear whether model selection was used for GNF, GraphRNN, and EDP-Score. The SDE-Score results in the first section are sampled after training for 5000 epochs, and no model selection was used. Due to compute limitations on the PPGN model, the results for PPGN ${L}_{\mathrm{{vb}}}$ are taken after epoch 900 instead of the 5000 epochs used for SDE-Score and EDP-Score. The models for PPGN ${L}_{\text{simple }}$ and EDP ${L}_{\text{simple }}$ were trained for 2500 epochs.
Details for the MMD results in Table 2: All EDP-GNN models were trained until epoch 5000, and the PPGN implementation was trained until epoch 2500.

## H Datasets

In this appendix, we describe the 4 datasets used in our experiments.

Ego-small: This dataset is composed of 200 graphs of 4-18 nodes from the Citeseer network (Sen et al. [29]). The dataset is available in the repository ${}^{4}$ of Niu et al. [8].
Community-small: This dataset consists of 100 graphs with 12 to 20 nodes. The graphs are generated in two steps. First, two communities of equal size are generated using the Erdős–Rényi model [12] with parameter $p = {0.7}$. Then edges are randomly added between the nodes of the two communities with probability $p = {0.05}$. The dataset is taken directly from the repository of Niu et al. [8].
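The two-step recipe above can be sketched as follows (our illustration using only the Python standard library, not the authors' code; the function name is hypothetical):

```python
# Sketch of the Community-small recipe: two equal-size Erdős–Rényi
# communities with p = 0.7, plus inter-community edges with p = 0.05.
import random

def community_small_graph(n_total, p_intra=0.7, p_inter=0.05, seed=0):
    rng = random.Random(seed)
    half = n_total // 2
    edges = set()
    for i in range(n_total):
        for j in range(i + 1, n_total):
            same = (i < half) == (j < half)  # both endpoints in the same community?
            p = p_intra if same else p_inter
            if rng.random() < p:
                edges.add((i, j))
    return edges

edges = community_small_graph(16)
# Each edge is stored once (i < j) with endpoints in [0, 16).
assert all(0 <= i < j < 16 for i, j in edges)
```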
SBM-27: This dataset consists of 200 graphs with 24 to 27 nodes generated using the Stochastic-Block-Model (SBM) with three communities. We use the implementation provided by Martinkus et al. [21]. The parameters used are ${p}_{\text{intra }} = {0.85},{p}_{\text{inter }} = {0.046875}$, where ${p}_{\text{intra }}$ stands for the intra-community (i.e. for nodes within the same community) edge probability and ${p}_{\text{inter }}$ stands for the inter-community (i.e. for nodes from different communities) edge probability. The number of nodes for each of the 3 communities is randomly drawn from $\{ 7,8,9\}$. In expectation, these parameters generate 3 edges between each pair of communities.
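A minimal SBM sketch in the same spirit (our illustration, not the implementation of Martinkus et al. [21]; the function name is hypothetical). Note that with two communities of 8 nodes each, the expected inter-community edge count is $8 \cdot 8 \cdot 0.046875 = 3$, matching the statement above:

```python
# Sketch of the SBM-27 recipe: three communities with sizes drawn from
# {7, 8, 9}, intra-community edge probability 0.85, inter 0.046875.
import random

def sbm_27_graph(p_intra=0.85, p_inter=0.046875, seed=0):
    rng = random.Random(seed)
    sizes = [rng.choice([7, 8, 9]) for _ in range(3)]
    # block[i] = community index of node i
    block = [c for c, s in enumerate(sizes) for _ in range(s)]
    n = len(block)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = p_intra if block[i] == block[j] else p_inter
            if rng.random() < p:
                edges.add((i, j))
    return edges, sizes

edges, sizes = sbm_27_graph()
assert len(sizes) == 3 and all(s in (7, 8, 9) for s in sizes)
```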
---
${}^{4}$ https://github.com/ermongroup/GraphScoreMatching
---
Planar-60: This dataset consists of 200 randomly generated planar graphs with 60 nodes. We use the implementation provided by Martinkus et al. [21]. To generate a graph, 60 points are first sampled uniformly at random on the ${\left\lbrack 0,1\right\rbrack }^{2}$ plane. Then the graph is generated by applying Delaunay triangulation to these points [30].
## I Visualization of Sampled Graphs

In the following pages, we provide a visual comparison of graphs generated by the different models.

Figure 1: Sample graphs from the training set of the Ego-small dataset.

Figure 2: Sample graphs generated with the model EDP-Score [8] for the Ego-small dataset.

Figure 3: Sample graphs generated with the PPGN ${L}_{\mathrm{{vb}}}$ model for the Ego-small dataset.

Figure 4: Sample graphs generated with the EDP ${L}_{\text{simple }}$ model for the Ego-small dataset.

Figure 5: Sample graphs from the training set of the Community dataset.

Figure 6: Sample graphs generated with the model EDP-Score [8] for the Community dataset.


Figure 7: Sample graphs generated with the PPGN ${L}_{\mathrm{{vb}}}$ model for the Community dataset.

Figure 8: Sample graphs generated with the EDP ${L}_{\text{simple }}$ model for the Community dataset.

Figure 9: Sample graphs from the training set of the Planar-60 dataset.

Figure 10: Sample graphs generated with the model EDP-Score [8] for the Planar-60 dataset.

Figure 11: Sample graphs generated with the PPGN ${L}_{\text{simple }}$ model for the Planar-60 dataset.

Figure 12: Sample graphs from the training set of the SBM-27 dataset.

Figure 13: Sample graphs generated with the model EDP-Score [8] for the SBM-27 dataset.

Figure 14: Sample graphs generated with the PPGN ${L}_{\text{simple }}$ model for the SBM-27 dataset.

papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,159 @@

§ DIFFUSION MODELS FOR GRAPHS BENEFIT FROM DISCRETE STATE SPACES

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

§ ABSTRACT

Denoising diffusion probabilistic models and score matching models have proven to be very powerful for generative tasks. While these approaches have also been applied to the generation of discrete graphs, they have, so far, relied on continuous Gaussian perturbations. Instead, in this work, we suggest using discrete noise for the forward Markov process. This ensures that in every intermediate step the graph remains discrete. Compared to the previous approach, our experimental results on four datasets and multiple architectures show that using a discrete noising process results in higher-quality generated samples, indicated by average MMDs reduced by a factor of 1.5. Furthermore, the number of denoising steps is reduced from 1000 to 32, leading to a 30 times faster sampling procedure.

§ 1 INTRODUCTION

Score-based [1] and denoising diffusion probabilistic models (DDPMs) [2, 3] have recently achieved striking results in generative modeling and in particular in image generation. Instead of learning a complex model that generates samples in a single pass (like a Generative Adversarial Network [4] (GAN) or a Variational Auto-Encoder [5] (VAE)), a diffusion model is a parameterized Markov Chain trained to reverse an iterative predefined process that gradually transforms a sample into pure noise. Although diffusion processes have been proposed for both continuous [6] and discrete [7] state spaces, their use for graph generation has only focused on Gaussian diffusion processes which operate in the continuous state space $\left\lbrack {8,9}\right\rbrack$ .
This contribution suggests adapting the denoising procedure to an actual graph distribution and using discrete noise, leading to a random graph model. We describe this procedure based on the Discrete DDPM framework proposed by Austin et al. [7], Hoogeboom et al. [10]. Our experiments show that using discrete noise greatly reduces the number of denoising steps that are needed and improves the sample quality. We also suggest the use of a simple expressive graph neural network architecture [11] for denoising, which, while bringing expressivity benefits, contrasts with more complicated architectures currently used for graph denoising [8].

§ 2 RELATED WORK

Traditionally, graph generation has been studied through the lens of random graph models [12-14]. While this approach is insufficient to model many real-world graph distributions, it is useful to create synthetic datasets and provides a useful abstraction. In fact, we will use Erdős-Rényi graphs [12] to model the prior distribution of our diffusion process.
Due to their larger number of parameters and expressive power, deep generative models have achieved better results in modeling complex graph distributions. The most successful graph generative models can be divided into two different techniques: a) auto-regressive graph generative models, which generate the graph sequentially node-by-node [15, 16], and b) one-shot generative models, which generate the whole graph in a single forward pass [8, 9, 17-21]. While auto-regressive models can generate graphs with hundreds or even thousands of nodes, they can suffer from mode collapse $\left\lbrack {{20},{21}}\right\rbrack$. One-shot graph generative models are more resilient to mode collapse but are more challenging to train while still not scaling easily beyond tens of nodes. Recently, one-shot generation has been scaled up to graphs of hundreds of nodes thanks to spectral conditioning [21], suggesting that good conditioning can largely benefit graph generation. Still, the suggested training procedure is cumbersome as it involves 3 different intertwined Generative Adversarial Networks (GANs). Finally, Variational Auto-Encoders (VAEs) have also been studied to generate graphs but remain difficult to train, as the loss function needs to be permutation invariant [22], which can necessitate an expensive graph matching step [17].
In contrast, the score-based models $\left\lbrack {8,9}\right\rbrack$ have the potential to provide both a simple, stable training objective, similar to the auto-regressive models, and the good graph distribution coverage provided by the one-shot models. Niu et al. [8] provided the first score-based model for graph generation by directly using the score-based model formulation by Song and Ermon [1] and additionally accounting for the permutation equivariance of graphs. Jo et al. [9] extended this to featured graph generation by formulating the problem as a system of two stochastic differential equations, one for feature generation and one for adjacency generation. The graph and the features are then generated in parallel. This approach provided promising results for molecule generation. Results on slightly larger graphs were also improved but remained imperfect. Importantly, both contributions rely on a continuous Gaussian noise process and use a thousand denoising steps to achieve good results, which makes for slow graph generation.
As shown by Song et al. [6], score matching is tightly related to denoising diffusion probabilistic models [3], which provide a more flexible formulation, more easily amenable to graph generation. In particular, for the noisy samples to remain discrete graphs, the perturbations need to be discrete. Such discrete diffusion has been successfully used for quantized image generation [23, 24] and text generation [25]. Diffusion using the multinomial distribution was proposed in Hoogeboom et al. [10]. Then, Austin et al. [7] extended the previous work by Hoogeboom et al. [10], Song et al. [26] and provided a general recipe for denoising diffusion models in discrete state-spaces, which mainly requires the specification of a doubly-stochastic Markov transition matrix $\mathbf{Q}$, which ensures the Markov process conserves probability mass and converges to a stationary distribution. In the next section, we describe a formulation of this perturbation matrix $\mathbf{Q}$ leading to Erdős–Rényi random graphs.

§ 3 DISCRETE DIFFUSION FOR SIMPLE GRAPHS

Diffusion models [2] are generative models based on a forward and a reverse Markov process. The forward process, denoted $q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right)$, generates a sequence of increasingly noisier latent variables ${\mathbf{A}}_{t}$, from the initial sample ${\mathbf{A}}_{0}$ to white noise ${\mathbf{A}}_{T}$. Here the sample ${\mathbf{A}}_{0}$ and the latent variables ${\mathbf{A}}_{t}$ are adjacency matrices. The learned reverse process ${p}_{\theta }\left( {\mathbf{A}}_{0 : T}\right) = p\left( {\mathbf{A}}_{T}\right) \mathop{\prod }\limits_{{t = 1}}^{T}{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ attempts to progressively denoise the latent variable ${\mathbf{A}}_{t}$ in order to produce samples from the desired distribution. Here we will focus on simple graphs, but the approach can be extended in a straightforward manner to account for different edge types. We use the model from [10] and, for convenience, adopt the representation of [7] for our discrete process.

§ 3.1 FORWARD PROCESS

Let the row vector ${\mathbf{a}}_{t}^{ij} \in \{ 0,1{\} }^{2}$ be the one-hot encoding of the $(i, j)$ element of the adjacency matrix ${\mathbf{A}}_{t}$. Here $t \in \left\lbrack {0,T}\right\rbrack$ denotes the timestep of the process, where ${\mathbf{A}}_{0}$ is a sample from the data distribution and ${\mathbf{A}}_{T}$ is an Erdős–Rényi random graph. The forward process is described as repeated multiplication of each such row vector, ${\mathbf{a}}_{t}^{ij} = {\mathbf{a}}_{t - 1}^{ij}{\mathbf{Q}}_{t}$, with a doubly stochastic matrix ${\mathbf{Q}}_{t}$. Note that the forward process is independent for each edge/non-edge $i \neq j$. The matrix ${\mathbf{Q}}_{t} \in {\mathbb{R}}^{2 \times 2}$ is

modeled as

$$
{\mathbf{Q}}_{t} = \left\lbrack \begin{matrix} 1 - {\beta }_{t} & {\beta }_{t} \\ {\beta }_{t} & 1 - {\beta }_{t} \end{matrix}\right\rbrack , \tag{1}
$$
where ${\beta }_{t}$ is the probability of changing the edge state ${}^{1}$. This formulation ${}^{2}$ has the advantage of allowing direct sampling at any timestep of the diffusion process without computing any previous timesteps. Indeed, the matrix ${\overline{\mathbf{Q}}}_{t} = \mathop{\prod }\limits_{{i < t}}{\mathbf{Q}}_{i}$ can be expressed in the form of (1) with ${\beta }_{t}$ replaced by ${\bar{\beta }}_{t} = \frac{1}{2} - \frac{1}{2}\mathop{\prod }\limits_{{i < t}}\left( {1 - 2{\beta }_{i}}\right)$. Eventually, we want the probability ${\bar{\beta }}_{t} \in \left\lbrack {0,{0.5}}\right\rbrack$ to vary from 0 (unperturbed sample) to 0.5 (pure noise). In this contribution, we limit ourselves to symmetric graphs and therefore only need to model the upper triangular part of the adjacency matrix. The noise is sampled i.i.d. over all of the edges.
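The closed form for ${\bar{\beta }}_{t}$ can be checked numerically by composing the $2 \times 2$ transition matrices of (1) directly (a minimal sketch, not code from the paper; the schedule below is arbitrary):

```python
# Verify that composing flip matrices Q_t = [[1-b, b], [b, 1-b]] yields a
# matrix of the same form with flip probability 1/2 - 1/2 * prod(1 - 2*b_i).
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

betas = [0.01 * (t + 1) for t in range(10)]  # arbitrary increasing schedule

Qbar = [[1.0, 0.0], [0.0, 1.0]]
for b in betas:
    Qbar = matmul2(Qbar, [[1 - b, b], [b, 1 - b]])

prod = 1.0
for b in betas:
    prod *= 1 - 2 * b
bbar = 0.5 - 0.5 * prod

assert abs(Qbar[0][1] - bbar) < 1e-12  # off-diagonal = cumulative flip prob.
assert abs(Qbar[0][0] - (1 - bbar)) < 1e-12
```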
${}^{1}$ Note that two different $\beta$ ’s could be used for edges and non-edges. This case is left for future work.
${}^{2}$ Note that we use a different parametrization for (1) than [10]. To recover the original formulation, one can simply divide all ${\beta }_{t}$ by 2.

§ 3.2 REVERSE PROCESS

To sample from the data distribution, the forward process needs to be reversed. Therefore, we need to estimate $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ . In our case, using the Markov property of the forward process this can be rewritten as (see Appendix A for derivation):
$$
q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }. \tag{2}
$$
Note that (2) is entirely defined by ${\beta }_{t}$, ${\bar{\beta }}_{t}$, and ${\mathbf{A}}_{0}$ (see Appendix A, Equation 4).
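For a single binary edge state, the posterior in (2) reduces to normalizing the product of two matrix entries, $q\left( {a}_{t} \mid {a}_{t-1}\right)$ from ${\mathbf{Q}}_{t}$ and $q\left( {a}_{t-1} \mid {a}_{0}\right)$ from ${\overline{\mathbf{Q}}}_{t-1}$. A minimal sketch (our illustration, not the paper's code; function names are hypothetical):

```python
# Per-edge posterior q(a_{t-1} | a_t, a_0) from Equation (2), with binary
# edge states and flip matrices parameterized by the flip probability.
def flip(p):
    """Transition matrix [[1-p, p], [p, 1-p]] for flip probability p."""
    return [[1 - p, p], [p, 1 - p]]

def posterior(a_t, a_0, beta_t, bbar_prev):
    Q_t = flip(beta_t)           # one-step kernel q(a_t | a_{t-1})
    Qbar_prev = flip(bbar_prev)  # cumulative kernel q(a_{t-1} | a_0)
    unnorm = [Q_t[k][a_t] * Qbar_prev[a_0][k] for k in (0, 1)]
    z = sum(unnorm)              # the denominator q(a_t | a_0) in (2)
    return [u / z for u in unnorm]

p = posterior(a_t=1, a_0=0, beta_t=0.1, bbar_prev=0.2)
assert abs(sum(p) - 1.0) < 1e-12
# Here the posterior favors a_{t-1} = 1: the flip from a_0 = 0 most likely
# happened before the final step, since beta_t is small.
assert p[1] > p[0]
```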

§ 3.3 LOSS

Diffusion models are typically trained to minimize a variational upper bound on the negative log-likelihood. This bound can be expressed as (see Appendix C or [3, Equation 5]):
$$
{L}_{\mathrm{{vb}}}\left( {\mathbf{A}}_{0}\right) \mathrel{\text{ := }} {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\left\lbrack \underset{{L}_{T}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) }}\right.
$$

$$
\left. + \mathop{\sum }\limits_{{t = 1}}^{T}{\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\underset{{L}_{t}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) }}\underset{{L}_{0}}{\underbrace{-{\mathbb{E}}_{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }}\right\rbrack
$$
Practically, the model is trained to directly minimize the losses ${L}_{t}$ , i.e. the KL divergence ${D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right)$ by using the tractable parametrization of $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ from (2). Note that the discrete setting of the selected noise distribution prevents training the model to approximate the gradient of the distribution as done by score-matching graph generative models [8, 9].
Parametrization of the reverse process. While it is possible to predict the logits of ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ in order to minimize ${L}_{\mathrm{{vb}}}$, we follow $\left\lbrack {3,7,{10}}\right\rbrack$ and use a network ${\mathrm{{nn}}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ that predicts the logits of the distribution ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$. This parametrization is known to stabilize the training procedure. To minimize ${L}_{\mathrm{{vb}}}$, (2) can be used to recover ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ from ${\mathbf{A}}_{0}$ and ${\mathbf{A}}_{t}$.
Alternate loss. Many implementations of DDPMs have found it beneficial to use alternative losses. For instance, [3] derived a simplified loss function that reweights the ELBO. Hybrid losses have been used in [27] and [7]. As shown in Appendix D, using the parametrization ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$, one can express the term ${L}_{t}$ as ${L}_{t} = - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right)$. Empirically, we found that minimizing
$$
{L}_{\text{ simple }} \mathrel{\text{ := }} - {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\mathop{\sum }\limits_{{t = 1}}^{T}\left( {1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}}\right) \cdot {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\log {p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) \tag{3}
$$
leads to stable training and better results. Note that this loss equals the cross-entropy loss between ${\mathbf{A}}_{0}$ and ${\operatorname{nn}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ . The re-weighting $1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}$ , which assigns linearly more importance to the less noisy samples, has been proposed in [23, Equation 7].
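To illustrate the reweighting, a minimal sketch of the per-timestep weights in (3) (our illustration, assuming a linear ${\bar{\beta }}_{t}$ schedule; all values here are examples, not the paper's configuration):

```python
# The weight 1 - 2*bbar_t + 1/T assigns linearly more importance to the
# less noisy timesteps (small bbar_t).
import math

T = 32
bbar = [0.5 * (t + 1) / T for t in range(T)]  # linear schedule up to 0.5

weights = [1 - 2 * b + 1 / T for b in bbar]

assert all(w > 0 for w in weights)
assert weights[0] > weights[-1]  # less noisy timesteps get more weight

# The inner expectation is a plain cross-entropy between A_0 and nn_theta(A_t);
# e.g. for one edge whose true state gets predicted probability 0.9 at t = 1:
loss_t1 = -weights[0] * math.log(0.9)
assert loss_t1 > 0
```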

§ 3.4 SAMPLING

For each loss, we use a specific sampling algorithm. For both approaches, we start by sampling each edge independently from a Bernoulli distribution with probability $p = 1/2$ (Erdős–Rényi random graph). Then, for the ${L}_{\mathrm{{vb}}}$ loss, we follow Ho et al. [3] and iteratively reverse the chain by Bernoulli-sampling from ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ until we obtain our sample from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right)$. For the loss function ${L}_{\text{ simple }}$, we sample ${\mathbf{A}}_{0}$ directly from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ at each step $t$ and obtain ${\mathbf{A}}_{t - 1}$ by sampling again from $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right)$. The two approaches are described algorithmically in Appendix E.
The values of ${\bar{\beta }}_{t}$ are selected following a simple linear schedule for our reverse process [2]. We found this works similarly well to other options such as the cosine schedule [27]. Note that in this case ${\beta }_{t}$ can be obtained from ${\bar{\beta }}_{t}$ in a straightforward manner (see Appendix B).
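Appendix B is not reproduced here, but one way to recover ${\beta }_{t}$ from a ${\bar{\beta }}_{t}$ schedule follows directly from the product formula of Section 3.1, since $1 - 2{\bar{\beta }}_{t} = \prod_{i \leq t}\left( 1 - 2{\beta }_{i}\right)$. A minimal sketch (our derivation, not the paper's code):

```python
# Given a linear schedule for bbar_t, recover the per-step beta_t from
# (1 - 2*bbar_t) = prod_{i<=t} (1 - 2*beta_i), i.e.
# beta_t = (bbar_t - bbar_{t-1}) / (1 - 2*bbar_{t-1}).
T = 32
bbar = [0.5 * t / T for t in range(T + 1)]  # bbar_0 = 0, bbar_T = 0.5

betas = [(bbar[t] - bbar[t - 1]) / (1 - 2 * bbar[t - 1])
         for t in range(1, T + 1)]

# Check: accumulating the recovered betas reproduces the schedule.
prod = 1.0
for t, b in enumerate(betas, start=1):
    prod *= 1 - 2 * b
    assert abs((0.5 - 0.5 * prod) - bbar[t]) < 1e-12
```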

§ 4 EXPERIMENTS

We compare our graph discrete diffusion approach to the original score-based approach proposed by Niu et al. [8]. Models using this original formulation are denoted by score. We follow the training and evaluation setup used by previous contributions $\left\lbrack {8,9,{15},{19}}\right\rbrack$ . More details can be found in Appendix G. For evaluation, we compute MMD metrics from [15] between the generated graphs and the test set, namely, the degree distribution, the clustering coefficient, and the 4-node orbit counts. To demonstrate the efficiency of the discrete parameterization, the discrete models only use 32 denoising steps, while the score-based models use 1000 denoising steps, as originally proposed. We compare two architectures: 1. EDP-GNN as introduced by Niu et al. [8], and 2. a simpler and more expressive provably powerful graph network (PPGN) [11]. See Appendix F for a more detailed description of the architectures.

| Model | Community Deg. | Community Clus. | Community Orb. | Community Avg. | Ego Deg. | Ego Clus. | Ego Orb. | Ego Avg. | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GraphRNN ${}^{ \dagger }$ | 0.030 | 0.030 | 0.010 | 0.017 | 0.040 | 0.050 | 0.060 | 0.050 | 0.033 |
| GNF ${}^{ \dagger }$ | 0.120 | 0.150 | 0.020 | 0.097 | 0.010 | 0.030 | 0.001 | 0.014 | 0.055 |
| EDP-Score ${}^{ \dagger }$ | 0.006 | 0.127 | 0.018 | 0.050 | 0.010 | 0.025 | 0.003 | 0.013 | 0.031 |
| SDE-Score ${}^{ \dagger }$ | 0.045 | 0.086 | 0.007 | 0.046 | 0.021 | 0.024 | 0.007 | 0.017 | 0.032 |
| EDP-Score ${}^{3}$ | 0.016 | 0.810 | 0.110 | 0.320 | 0.04 | 0.064 | 0.005 | 0.037 | 0.178 |
| PPGN-Score | 0.081 | 0.237 | 0.284 | 0.200 | 0.019 | 0.049 | 0.005 | 0.025 | 0.113 |
| PPGN ${L}_{\mathrm{{vb}}}$ | 0.023 | 0.061 | 0.015 | 0.033 | 0.025 | 0.039 | 0.019 | 0.027 | 0.03 |
| PPGN ${L}_{\text{ simple }}$ | 0.019 | 0.044 | 0.005 | 0.023 | 0.018 | 0.026 | 0.003 | 0.016 | 0.019 |
| EDP ${L}_{\text{ simple }}$ | 0.024 | 0.04 | 0.012 | 0.026 | 0.019 | 0.031 | 0.017 | 0.022 | 0.024 |

Table 1: MMD results for the Community and the Ego datasets. All values are averaged over 5 runs with 1024 generated samples without any sub-selection. The "Total" column denotes the average MMD over all of the 6 measurements. The best results of the "Avg." and "Total" columns are shown in bold. $\dagger$ marks the results taken from the original papers.


| Model | SBM-27 Deg. | SBM-27 Clus. | SBM-27 Orb. | SBM-27 Avg. | Planar-60 Deg. | Planar-60 Clus. | Planar-60 Orb. | Planar-60 Avg. | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EDP-Score | 0.014 | 0.800 | 0.190 | 0.334 | 1.360 | 1.904 | 0.534 | 1.266 | 0.8 |
| PPGN ${L}_{\text{ simple }}$ | 0.007 | 0.035 | 0.072 | 0.038 | 0.029 | 0.039 | 0.036 | 0.035 | 0.036 |
| EDP ${L}_{\text{ simple }}$ | 0.046 | 0.184 | 0.064 | 0.098 | 0.017 | 1.928 | 0.785 | 0.910 | 0.504 |

Table 2: MMD results for the SBM-27 and the Planar-60 datasets.

Table 1 shows the results for two datasets, Community-small $\left( {{12} \leq n \leq {20}}\right)$ and Ego-small $\left( {4 \leq n \leq {18}}\right)$, used by Niu et al. [8]. To better compare our approach to traditional score-based graph generation, in Table 2 we additionally perform experiments on slightly more challenging datasets with larger graphs: namely, a stochastic-block-model (SBM) dataset with three communities, whose graphs consist of ${24} \leq n \leq {27}$ nodes in total, and a planar dataset with $n = {60}$ nodes. Detailed information on the datasets can be found in Appendix H. Additional details concerning the evaluation setup are provided in Appendix G.4.
Results. In Table 1, we observe that the proposed discrete diffusion process using the ${L}_{\mathrm{{vb}}}$ loss and PPGN model leads to slightly improved average MMDs over the competitors. The ${L}_{\text{ simple }}$ loss further improves the results over ${L}_{\mathrm{{vb}}}$. The fact that the EDP ${L}_{\text{ simple }}$ model has significantly lower MMD values than the EDP-Score model is a strong indication that the proposed loss and the discrete formulation, rather than the PPGN architecture, are the cause of the improvement. This improvement comes with the additional benefit that sampling is greatly accelerated (30 times), as the number of timesteps is reduced from 1000 to 32. Table 2 shows that the proposed discrete formulation is even more beneficial when graph size and complexity increase. The PPGN-Score model even becomes infeasible to run in this setting, due to the prohibitively expensive sampling procedure. A qualitative evaluation of the generated graphs is performed in Appendix I. Visually, the ${L}_{\text{ simple }}$ loss leads to the best samples.

§ 5 CONCLUSION

In this work, we demonstrated that discrete diffusion can increase sample quality and greatly improve the efficiency of denoising diffusion for graph generation. While the approach was presented for simple graphs with non-attributed edges, it could also be extended to cover graphs with edge attributes.
${}^{3}$ The discrepancy with the SDE-Score ${}^{ \dagger }$ results comes from the fact that we were unable to reproduce their results using the code provided by the authors. Strangely, their code leads to good results when used with our discrete formulation and ${L}_{\text{ simple }}$ loss, improving over the results reported in their contribution.

papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,244 @@

# De Bruijn goes Neural: Causality-Aware Graph Neural Networks for Time Series Data on Dynamic Graphs

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

## Abstract

We introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold in the causal topology of dynamic graphs, which is determined by causal walks, i.e. temporally ordered sequences of links by which nodes can influence each other over time. Our architecture builds on multiple layers of higher-order De Bruijn graphs, an iterative line graph construction where nodes in a De Bruijn graph of order $k$ represent walks of length $k - 1$, while edges represent walks of length $k$. We develop a graph neural network architecture that utilizes De Bruijn graphs to implement a message passing scheme that follows non-Markovian dynamics, which enables us to learn patterns in the causal topology of a dynamic graph. Addressing the issue that De Bruijn graphs with different orders $k$ can be used to model the same data set, we further apply statistical model selection to determine the optimal graph topology to be used for message passing. An evaluation on synthetic and empirical data sets suggests that DBGNNs can leverage temporal patterns in dynamic graphs, which substantially improves the performance in a supervised node classification task.
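The iterative construction described above can be sketched in a few lines (our illustration, not the DBGNN implementation; the function name is hypothetical): given a set of causal walks, each walk of length $k$ contributes an edge between its two overlapping sub-walks of length $k - 1$.

```python
# Build the order-k De Bruijn graph from a collection of walks: nodes are
# tuples of k graph nodes (walks of length k-1), and consecutive overlapping
# tuples within a walk are connected by an edge (a walk of length k).
def de_bruijn_graph(walks, k):
    edges = set()
    for walk in walks:
        for i in range(len(walk) - k):
            u = tuple(walk[i : i + k])          # first k nodes of the window
            v = tuple(walk[i + 1 : i + k + 1])  # window shifted by one step
            edges.add((u, v))
    return edges

# Two causal walks observed in a toy dynamic graph:
walks = [["a", "b", "c"], ["a", "b", "d"]]
g2 = de_bruijn_graph(walks, k=2)
assert (("a", "b"), ("b", "c")) in g2
assert (("a", "b"), ("b", "d")) in g2
assert len(g2) == 2
```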

## 1 Introduction

Graph Neural Networks (GNNs) [1, 2] have become a cornerstone for the application of deep learning to data with a non-Euclidean, relational structure. Different flavors of GNNs have been shown to be highly efficient for tasks like node classification, representation learning, link prediction, cluster detection, or graph classification. The popularity of GNNs is largely due to the abundance of data that can be represented as graphs, i.e. as a set of nodes with pairwise connections represented as links. However, we increasingly have access to time-resolved data that not only capture which nodes are connected to each other, but also when and in which temporal order those connections occur. A number of works in computer science, network science, and interdisciplinary physics have highlighted how the temporal dimension of dynamic graphs, i.e. the timing and ordering of links, influences the causal topology of networked systems, i.e. which nodes can possibly influence each other over time [3-5]. In a nutshell, if an undirected link $(a, b)$ between two nodes $a$ and $b$ occurs before an undirected link $(b, c)$, node $a$ can causally influence node $c$ via node $b$. If the temporal ordering of those two links is reversed, node $a$ cannot influence node $c$ via $b$ due to the directionality of the arrow of time. This simple example shows that the arrow of time in dynamic graphs limits possible causal influences between nodes beyond what we would expect based on the mere topology of links.
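The toy example above can be checked with a small time-respecting reachability routine (our illustration, not code from the paper; the function name is hypothetical):

```python
# With links (a,b) at time 1 and (b,c) at time 2, a can influence c via b;
# reversing the timestamps removes that causal walk.
def can_influence(temporal_edges, src, dst):
    """True if a time-respecting walk src -> ... -> dst exists, where each
    hop must use a strictly later timestamp than the arrival time."""
    reached = {src: 0}  # reached[node] = earliest arrival time at node
    for t, u, v in sorted(temporal_edges):
        for x, y in ((u, v), (v, u)):  # links are undirected
            if x in reached and reached[x] < t:
                if y not in reached or t < reached[y]:
                    reached[y] = t
    return dst in reached

assert can_influence([(1, "a", "b"), (2, "b", "c")], "a", "c") is True
assert can_influence([(2, "a", "b"), (1, "b", "c")], "a", "c") is False
```

The single pass over time-sorted edges suffices here because any time-respecting walk traverses its links in increasing temporal order.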
Beyond such toy examples, a number of recent studies in network science, computer science, and interdisciplinary physics have shown that the temporal ordering of links in real time series data on graphs has non-trivial consequences for the properties of networked systems, e.g. for reachability and percolation [6, 7], diffusion and epidemic spreading [8, 9], and node rankings and community structures [10]. It has further been shown that this interesting aspect of dynamic graphs can be understood using a variant of De Bruijn graphs [11], i.e. static higher-order graphical models [9, 12, 13] of causal paths that capture both the temporal and the topological dimension of time series data on graphs.
While the generalization of network analysis techniques like node centrality measures and community detection [10, 12], or graph embedding [14] to such higher-order models has been successful, to the best of our knowledge no generalizations of Graph Neural Networks to higher-order De Bruijn graphs have been proposed [15, 16]. Such a generalization bears several promises: First, it could enable us to apply well-known and efficient gradient-based learning techniques in a static neural network architecture that is able to learn patterns in the causal topology of dynamic graphs that are due to the temporal ordering of links. Second, by making the temporal ordering of links in time-stamped data a first-class citizen of graph neural networks, this generalization could also be an interesting approach to incorporate a necessary condition for causality into state-of-the-art geometric deep learning techniques, which often lack meaningful ways to represent time. Finally, a combination of higher-order De Bruijn graph models with graph neural networks enables us to apply frequentist and Bayesian techniques to learn the "optimal" order of a De Bruijn graph model for a given time series, providing new ways to combine statistical learning and model selection with graph neural networks.
Addressing this gap, our work generalizes graph neural networks to higher-order De Bruijn graph models of causal paths in time-stamped data on dynamic graphs. We obtain a novel causality-aware graph neural network architecture for time series data that makes the following contributions:
- We develop a graph neural network architecture that generalizes message passing to multiple layers of higher-order De Bruijn graphs. The resulting De Bruijn Graph Neural Network (DBGNN) architecture leads to a non-Markovian message passing, whose dynamics matches correlations in the temporal ordering of links, thus enabling us to learn patterns that shape the causal topology of dynamic graphs.
- We evaluate our proposed architecture both in empirical and synthetically generated dynamic graphs and compare its performance to graph neural networks as well as (time-aware) graph representation learning techniques. We find that our method yields superior node classification performance.
- We combine this architecture with statistical model selection to infer the optimal higher order of a De Bruijn graph. This yields a two-step learning process, where (i) we first learn a parsimonious De Bruijn graph model that neither under- nor overfits patterns in a dynamic graph, and (ii) we apply message passing and gradient-based optimization to the inferred graph in order to address graph learning tasks like node classification or representation learning.
Our work builds on a, to the best of our knowledge, novel combination of (i) statistical model selection to infer optimal higher-order graphical models for causal paths in dynamic graphs, and (ii) gradient-based learning in a neural network architecture that uses the inferred higher-order graphical models as message passing layers. Thanks to this approach, our architecture performs message passing in an optimal graph model for the causal paths in a given dynamic graph. The results of our evaluation confirm that this explicit regularization of the message passing layers enables us to considerably improve performance in a node classification task. The remainder of this paper is structured as follows: In section 2 we introduce the background of our work and formally state the problem that we address, in section 3 we introduce the De Bruijn graph neural network architecture, in section 4 we experimentally validate our method in synthetic and empirical data on dynamic graphs, and in section 5 we summarize our contributions and highlight opportunities for future research. We have implemented our architecture based on the graph learning library PyTorch Geometric [17] and release the code of our experiments as an Open Source package ${}^{1}$ .
## 2 Background and Problem Statement
Basic definitions We consider a dynamic graph $G^{\mathcal{T}} = (V, E^{\mathcal{T}})$ with a (static) set of nodes $V$ and time-stamped (directed) edges $(v, w; t) \in E^{\mathcal{T}} \subseteq V \times V \times \mathbb{N}$ , where, without loss of generality, integer timestamps $t$ represent the instantaneous time at which a pair of nodes $v, w$ is connected [4]. While many real-world network data sets exhibit such timestamps, for the application of graph neural networks we often consider a time-aggregated projection $G = (V, E)$ along the time axis, where a (static) edge $(v, w) \in E$ exists iff $\exists t \in \mathbb{N} : (v, w; t) \in E^{\mathcal{T}}$ . We can further consider edge weights $w : E \rightarrow \mathbb{N}$ defined as $w(v, w) := |\{ t \in \mathbb{N} : (v, w; t) \in E^{\mathcal{T}} \}|$ , i.e. we use $w(v, w)$ to count the number of temporal activations of the edge $(v, w)$ .
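As a minimal sketch of these definitions (our own illustration; the helper name `aggregate` is hypothetical), the time-aggregated projection and its edge weights can be computed directly from the time-stamped edge list:

```python
from collections import Counter

# Time-aggregated projection G = (V, E) of a dynamic graph, with edge weights
# w(v, w) counting the number of temporal activations of each static edge.
def aggregate(temporal_edges):
    """temporal_edges: iterable of (u, v, t) triples."""
    weights = Counter((u, v) for (u, v, t) in temporal_edges)
    nodes = {x for (u, v, t) in temporal_edges for x in (u, v)}
    return nodes, dict(weights)

E_T = [("a", "b", 1), ("a", "b", 5), ("b", "c", 2)]
V, w = aggregate(E_T)
assert V == {"a", "b", "c"}
assert w[("a", "b")] == 2 and w[("b", "c")] == 1  # (a, b) activated twice
```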
A key motivation for the study of graphs as models for complex systems is that, apart from direct interactions captured by edges $(v, w)$ , they facilitate the study of indirect interactions between nodes via paths or walks in a graph. Formally, we define a walk $v_0, v_1, \ldots, v_l$ of length $l$ in a graph $G = (V, E)$ as any sequence of nodes $v_i \in V$ such that $(v_{i-1}, v_i) \in E$ for $i = 1, \ldots, l$ . The length $l$ of a walk captures the number of traversed edges, i.e. each node $v \in V$ is a walk of length zero, while each edge $(v, w)$ is a walk of length one. We further call a walk $v_0, v_1, \ldots, v_l$ a path of length $l$ from $v_0$ to $v_l$ iff $v_i \neq v_j$ for $i \neq j$ , i.e. a path is a walk that visits distinct nodes.
---
${}^{1}$ link blinded in review version
---
Causal walks and paths in dynamic graphs In a static graph $G = (V, E)$ , the topology, i.e. which nodes can directly and indirectly influence each other via edges, walks, or paths, is completely determined by the edges $E$ . This is different for dynamic graphs, which can be understood by extending the definition of walks and paths to causal concepts that respect the arrow of time:
Definition 1. For a dynamic graph $G^{\mathcal{T}} = (V, E^{\mathcal{T}})$ , we call a node sequence $v_0, v_1, \ldots, v_l$ a causal walk iff the following two conditions hold: (i) $(v_{i-1}, v_i; t_i) \in E^{\mathcal{T}}$ for $i = 1, \ldots, l$ and (ii) $0 < t_j - t_i \leq \delta$ for $i < j$ and some $\delta > 0$ .
The first condition ensures that nodes in a dynamic graph can only indirectly influence each other via a causal walk if a corresponding walk exists in the time-aggregated graph. Due to $0 < t_j - t_i$ for $i < j$ , the second condition ensures that time-stamped edges in a causal walk occur in the correct chronological order, i.e. timestamps are monotonically increasing [3, 4]. As an example, two time-stamped edges $(a, b; 1), (b, c; 2)$ constitute a causal walk by which information from node $a$ starting at time $t_1 = 1$ can reach node $c$ at time $t_2 = 2$ via node $b$ , while the same edges in reverse temporal order $(a, b; 2), (b, c; 1)$ do not constitute a causal walk. While chronological ordering alone does not impose an upper bound on the time difference between consecutive time-stamped edges constituting a causal walk, it is often reasonable to define a time limit $\delta > 0$ , i.e. a time difference beyond which consecutive edges are not considered to contribute to a causal walk. As an example, two time-stamped edges $(a, b; 1), (b, c; 100)$ constitute a causal walk by which information from node $a$ starting at time $t_1 = 1$ can reach node $c$ at time $t_2 = 100$ via node $b$ for $\delta = 150$ , while they do not constitute a causal walk for $\delta = 5$ . This time-limited notion of causal or time-respecting walks is characteristic of many real networked systems in which processes or agents have a finite time scale or "memory", which rules out arbitrarily long gaps between consecutive causal interactions [4, 5]. Analogous to the definition in a static network, we finally define a causal path $v_0, v_1, \ldots, v_l$ of length $l$ from node $v_0$ to node $v_l$ as a causal walk with $v_i \neq v_j$ for $i \neq j$ .
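Condition (ii) of Definition 1 can be checked mechanically. The following sketch (our illustration; `is_causal_walk` is a hypothetical helper) tests the pairwise gap condition $0 < t_j - t_i \leq \delta$ for a candidate sequence of time-stamped edges:

```python
# Check condition (ii) of Definition 1: all timestamps along the walk must be
# strictly increasing, and every pairwise gap t_j - t_i (i < j) must stay
# within delta. Condition (i), topological feasibility, is assumed here.
def is_causal_walk(stamped_edges, delta):
    """stamped_edges: list of (u, v, t) triples traversed in walk order."""
    times = [t for (_, _, t) in stamped_edges]
    return all(0 < times[j] - times[i] <= delta
               for i in range(len(times)) for j in range(i + 1, len(times)))

walk = [("a", "b", 1), ("b", "c", 100)]
assert is_causal_walk(walk, delta=150)       # gap of 99 is within delta
assert not is_causal_walk(walk, delta=5)     # gap of 99 exceeds delta
assert not is_causal_walk(list(reversed(walk)), delta=150)  # wrong order
```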
Non-Markovian characteristics of dynamic graphs The above definition of causal walks and paths in dynamic graphs has important consequences for our understanding of the topology of dynamic graphs, i.e. which nodes can directly and indirectly influence each other via walks or paths. Moreover, it has important consequences for graph learning and network analysis tasks such as node ranking, cluster detection, or embedding [9, 10, 12, 13, 18]. This additional complexity of dynamic graphs is due to the fact that the topology of a static graph $G = (V, E)$ can be fully understood based on the transitive hull of edges, i.e. the presence of two edges $(u, v) \in E$ and $(v, w) \in E$ implies that nodes $u$ and $w$ can indirectly influence each other via a walk or path, which we denote as $u \rightarrow^{*} w$ . This not only enables us to use standard algorithms, e.g. to calculate (shortest) paths, but also implies that we can use matrix powers, eigenvalues, and eigenvectors to analyze topological properties of a graph. In contrast, in dynamic graphs the chronological order of time-stamped edges can break transitivity, i.e. $(u, v; t) \in E^{\mathcal{T}}$ and $(v, w; t') \in E^{\mathcal{T}}$ does not necessarily imply $u \rightarrow^{*} w$ , which invalidates such graph analytic approaches [13].
To study the question how correlations in the temporal ordering of time-stamped edges influence the causal topology of a dynamic graph, we can take a statistical modelling perspective. We can, for instance, consider causal walks as sequences of random variables that can be modelled via a Markov chain of order $k$ over a discrete state space $V$ [12]. In other words, we model the sequence of nodes $v_0, \ldots, v_l$ on causal walks as $P(v_i \mid v_{i-k}, \ldots, v_{i-1})$ , where $k - 1$ is the length of the "memory" of the Markov chain. For $k = 1$ we have a memoryless, first-order Markov chain model $P(v_i \mid v_{i-1})$ , where the next node on the walk exclusively depends on the current node. From the perspective of dynamic graphs with time-stamped link sequences, this corresponds to a case where the causal walks of the dynamic graph are exclusively determined by the topology (and possibly frequency) of edges, i.e. there are no correlations in the temporal ordering of time-stamped edges and the causal topology of the dynamic graph matches the topology of the corresponding time-aggregated graph. If we need a Markov order $k > 1$ , the sequence of nodes traversed by causal walks exhibits memory, i.e. the next node on a walk not only depends on the current one but also on the history of past interactions. The presence of such higher-order correlations in dynamic graphs is associated with more complex causal topologies that (i) cannot be reduced to the topology of the associated time-aggregated network, and (ii) have interesting implications for spreading and diffusion processes and spectral properties [9], node centralities [12], and community structures [10].
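The difference between first- and higher-order models can be made concrete by counting transitions on observed causal walks. The sketch below (our illustration; `transition_counts` is a hypothetical helper) shows how a $k = 2$ model distinguishes histories that a $k = 1$ model collapses:

```python
from collections import Counter

# Estimate k-th order transition counts from a set of causal walks (node
# sequences). For k = 1 the next node depends only on the current node; for
# k = 2 it additionally depends on the previously visited node.
def transition_counts(walks, k):
    counts = Counter()
    for walk in walks:
        for i in range(k, len(walk)):
            counts[(tuple(walk[i - k:i]), walk[i])] += 1
    return counts

walks = [["a", "b", "c"], ["d", "b", "d"]]
first = transition_counts(walks, k=1)
second = transition_counts(walks, k=2)
assert first[(("b",), "c")] == 1        # b -> c, regardless of history
assert second[(("a", "b"), "c")] == 1   # b -> c only after visiting a
assert second[(("d", "b"), "c")] == 0   # never b -> c after visiting d
```

Here the memory reveals a correlation (after `a`, node `b` always continues to `c`) that is invisible at first order.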
Higher-order De Bruijn graph models of causal topologies The use of higher-order Markov chain models for causal paths leads to an interesting novel view on the relationship between graph models and time series data on dynamic graphs. In this view, the common (weighted) time-aggregated graph representation of time-stamped edges corresponds to a first-order graphical model, where edge weights capture the statistics of edges, i.e. causal paths of length one. A normalization of edge weights in this graph yields a first-order Markov model of causal walks in a dynamic graph. Similarly, a graphical representation of a higher-order Markov chain model of causal walks can be used to capture non-Markovian patterns in the temporal sequence of time-stamped edges. However, different from higher-order Markov chain models of general categorical sequences, a higher-order model of causal paths in dynamic graphs must account for the fact that the set of possible causal paths is constrained by the topology of the corresponding static graph (i.e. condition (i) in Definition 1). To account for this, we define a higher-order De Bruijn graph model of causal walks [11]:
Definition 2 ( $k$ -th order De Bruijn graph model). For a dynamic graph $G^{\mathcal{T}} = (V, E^{\mathcal{T}})$ and $k \in \mathbb{N}$ , a $k$ -th order De Bruijn graph model of causal paths in $G^{\mathcal{T}}$ is a graph $G^{(k)} = (V^{(k)}, E^{(k)})$ , with $u := (u_0, u_1, \ldots, u_{k-1}) \in V^{(k)}$ a causal walk of length $k - 1$ in $G^{\mathcal{T}}$ and $(u, v) \in E^{(k)}$ iff (i) $v = (v_1, \ldots, v_k)$ with $u_i = v_i$ for $i = 1, \ldots, k - 1$ and (ii) $u \oplus v = (u_0, \ldots, u_{k-1}, v_k)$ a causal walk of length $k$ in $G^{\mathcal{T}}$ .
We note that any two adjacent nodes $u, v \in V^{(k)}$ in a $k$ -th order De Bruijn graph $G^{(k)}$ represent two causal walks of length $k - 1$ that overlap in exactly $k - 1$ nodes, i.e. each edge $(u, v) \in E^{(k)}$ represents a causal walk of length $k$ . We can further use edge weights $w : E^{(k)} \rightarrow \mathbb{N}$ to capture the frequencies of causal paths of length $k$ . The (weighted) time-aggregated graph $G$ of a dynamic graph trivially corresponds to a first-order De Bruijn graph, where (i) nodes are causal walks of length zero and (ii) edges $E = E^{(1)}$ capture causal walks of length one (i.e. edges) in $G^{\mathcal{T}}$ . To construct a second-order De Bruijn graph $G^{(2)}$ we can perform a line graph transformation of a static graph $G = G^{(1)}$ , where each edge $((u_0, u_1), (u_1, u_2)) \in E^{(2)}$ captures a causally ordered sequence of two edges $(u_0, u_1; t)$ and $(u_1, u_2; t')$ . A $k$ -th order De Bruijn graph can be constructed by a repeated line graph transformation of a static graph $G$ . Hence, De Bruijn graphs can be viewed as a generalization of common graph models to a higher-order, static graphical model of causal walks of length $k$ , where walks of length $l$ in $G^{(k)}$ model causal walks of length $k + l - 1$ in $G^{\mathcal{T}}$ [9, 13].
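Given a set of observed causal walks, the weighted $k$ -th order De Bruijn graph of Definition 2 can be built by sliding a window of $k + 1$ nodes over each walk. This is our own sketch (the helper `de_bruijn_graph` is a hypothetical name), not the paper's implementation:

```python
from collections import Counter

# Build the weighted edge set of a k-th order De Bruijn graph from causal
# walks: nodes are tuples of k graph nodes (causal walks of length k - 1);
# an edge (u, v) exists when u and v overlap in k - 1 positions, and its
# weight counts how often the corresponding causal walk of length k occurs.
def de_bruijn_graph(causal_walks, k):
    edges = Counter()
    for walk in causal_walks:
        for i in range(len(walk) - k):
            u = tuple(walk[i:i + k])          # causal walk of length k - 1
            v = tuple(walk[i + 1:i + k + 1])  # shifted by one node
            edges[(u, v)] += 1
    return edges

E2 = de_bruijn_graph([["a", "b", "c"], ["a", "b", "c"]], k=2)
assert E2[(("a", "b"), ("b", "c"))] == 2  # the causal walk a, b, c seen twice
```

For `k=1` this construction reduces to the weighted time-aggregated graph, matching the correspondence noted above.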
De Bruijn graphs have interesting mathematical properties that connect them to trajectories of subshifts of finite type as well as to dynamical systems and ergodic theory [19]. For the purpose of our work, they provide the advantage that we can use $k$ -th order De Bruijn graphs to model the causal topology in dynamic graphs. We illustrate this in fig. 1, which shows two dynamic graphs with four nodes and 33 time-stamped links. These dynamic graphs only differ in terms of the temporal ordering of edges, i.e. they have the same (first-order) weighted time-aggregated graph representation (center). Moreover, this first-order representation wrongly suggests that node $A$ can influence node $C$ by a path via node $B$ . While this is true in the dynamic graph on the right (see red causal paths), no corresponding causal path from $A$ via $B$ to $C$ exists in the dynamic graph on the left. A second-order De Bruijn graph model (bottom left and right) captures the fact that the causal path from $A$ via $B$ to $C$ is absent in the left example. This shows that, different from commonly used static graph representations, the edges of a $k$ -th order De Bruijn graph with $k > 1$ are sensitive to the temporal ordering of time-stamped edges. Hence, static higher-order De Bruijn graphs can be used to model the causal topology in a dynamic graph. We can view a $k$ -th order De Bruijn graph in analogy to a $k$ -th order Markov model, where a directed link from node $(u_0, \ldots, u_{k-1})$ to node $(u_1, \ldots, u_k)$ captures a transition from node $u_{k-1}$ to node $u_k$ in the underlying graph, with a memory of the previously visited nodes $u_0, \ldots, u_{k-2}$ .
This approach has been used to analyze how the causal topology of dynamic graphs influences node rankings [10, 12], the modelling of random walks and diffusion [9], community detection [10, 18], and time-aware static graph embedding [14, 20]. Moreover, several works have proposed heuristic, frequentist, and Bayesian methods to infer the optimal order of higher-order graph models of causal paths given time series data on dynamic graphs [10, 12, 21, 22].
Problem Statement and Research Gap The works above provide the background for the generalization of graph neural networks to higher-order De Bruijn graph models of causal walks in dynamic graphs, which we propose in the following section. Following the terminology in the network science community, higher-order De Bruijn graph models can be seen as one particular type of higher-order network models [13, 23, 24], which capture (causally-ordered) sequences of interactions between more than two nodes, rather than dyadic edges. They complement other types of popular higher-order network models (like, e.g. hypergraphs, simplicial complexes, or motif-based adjacency matrices)
Figure 1: Simple example for two dynamic graphs with four nodes and 33 directed time-stamped edges (top left and right). The two graphs only differ in terms of the temporal ordering of edges. Frequency and topology of edges are identical, i.e. they have the same first-order time-aggregated weighted graph representation (center). Due to the arrow of time, causal walks and paths differ in the two dynamic graphs: Assuming $\delta = 1$ , in the left dynamic graph node $A$ cannot causally influence $C$ via $B$ , while such a causal path is possible in the right graph. A second-order De Bruijn graph representation of causal walks in the two graphs (bottom left and right) captures this difference in the causal topology. Building on such causality-aware graphical models, in our work we define a graph neural network architecture that is able to learn patterns in the causal topology of dynamic graphs.
that consider (unordered) non-dyadic interactions in static networks, and which have been used to generalize graph neural networks to non-dyadic interactions [25, 26].
To the best of our knowledge, De Bruijn graph models have not been combined with recent advances in graph neural networks. Closing this gap, we propose a causality-aware graph convolutional network architecture that uses an augmented message passing scheme [27] in higher-order De Bruijn graphs to capture patterns in the causal topology of dynamic graphs.
## 3 De Bruijn Graph Neural Network Architecture
We now introduce the De Bruijn Graph Neural Network (DBGNN) architecture with an augmented message passing scheme [27] whose dynamics matches the non-Markovian characteristics of dynamic graphs, which is the key contribution of our work. While we build on the message passing proposed for Graph Convolutional Networks (GCN) [28], it is easy to generalize our architecture to other message passing schemes. Our approach is based on the following three steps, which yield an easy-to-implement and scalable class of graph neural networks for time series and sequential data on graphs: First, we use time series data on dynamic graphs to calculate statistics of causal walks of different lengths $k$ and use these statistics to select a higher-order De Bruijn graph model for the causal topology of the dynamic graph. This step is parameter-free, i.e. we can use statistical learning techniques to infer an optimal graph model for the causal topology directly from the time series data, without the need for hyperparameter tuning or cross-validation. Second, we define a graph convolutional network that builds on neural message passing in the higher-order De Bruijn graphs inferred in step one. The hidden layers of the resulting graph convolutional network yield meaningful latent representations of patterns in the causal topology of a dynamic graph. Third, since the nodes in a $k$ -th order De Bruijn graph model correspond to walks (i.e. sequences) of nodes of length $k - 1$ , we implement an additional bipartite layer that maps the latent space representations of those sequences to nodes in the original graph. In the following, we provide a detailed description of these three steps:
Inference of Optimal Higher-Order De Bruijn Graph Model The first step in the DBGNN architecture is the inference of a higher-order De Bruijn graph model for the causal topology in a given dynamic graph data set. For this, we use Definition 1 to calculate the statistics of causal walks of different lengths $k$ for a given maximum time difference $\delta$ . We note that this can be achieved using efficient window-based algorithms [29, 30]. The statistics of causal walks in the dynamic graph allow us to apply the model selection technique proposed in [12], which yields the optimal order of a De Bruijn graph model given the statistics of causal walks (or paths). The resulting (static) higher-order De Bruijn graph model is the basis for our extension of the message passing scheme to dynamic graphs with non-Markovian characteristics.
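To give an intuition for the underlying walk statistics, the following naive sketch (our illustration, deliberately quadratic; the efficient window-based algorithms of [29, 30] follow the same idea) extracts causal walks of length two within a maximum time difference $\delta$ :

```python
# Naive extraction of causal walks of length two: every pair of time-stamped
# edges (u, v; t) and (v, y; s) with 0 < s - t <= delta forms a causal walk
# u -> v -> y. Efficient implementations use a sliding time window instead
# of this O(m^2) double loop.
def causal_walks_len2(temporal_edges, delta):
    walks = []
    for (u, v, t) in temporal_edges:
        for (x, y, s) in temporal_edges:
            if x == v and 0 < s - t <= delta:
                walks.append((u, v, y))
    return walks

edges = [("a", "b", 1), ("b", "c", 2), ("b", "c", 10)]
assert causal_walks_len2(edges, delta=1) == [("a", "b", "c")]
assert len(causal_walks_len2(edges, delta=9)) == 2  # both b-c events qualify
```

The resulting walk counts are exactly the sufficient statistics that the order-selection step operates on.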
Message passing in higher-order De Bruijn graphs Standard message passing algorithms in graph neural networks use the topology of a graph to propagate (and smooth) features across nodes, thus generating hidden features that incorporate patterns in the topology of a graph. To additionally incorporate patterns in the causal topology of a dynamic graph, we perform message passing in multiple layers of higher-order De Bruijn graphs. Assuming a $k$ -th order De Bruijn graph model $G^{(k)} = (V^{(k)}, E^{(k)})$ as defined in Definition 2, the input to the first layer $l = 0$ is a set of $k$ -th order node features $\mathbf{h}^{k,0} = \{\overrightarrow{h}_1^{k,0}, \overrightarrow{h}_2^{k,0}, \ldots, \overrightarrow{h}_N^{k,0}\}$ , for $\overrightarrow{h}_i^{k,0} \in \mathbb{R}^{H^0}$ , where $N = |V^{(k)}|$ and $H^0$ is the dimensionality of initial node features. The De Bruijn graph message passing layer uses the causal topology to learn a new set of hidden representations for higher-order nodes $\mathbf{h}^{k,1} = \{\overrightarrow{h}_1^{k,1}, \overrightarrow{h}_2^{k,1}, \ldots, \overrightarrow{h}_N^{k,1}\}$ , with $\overrightarrow{h}_i^{k,1} \in \mathbb{R}^{H^1}$ for each $k$ -th order node $i$ (corresponding to a causal walk of length $k - 1$ ). For layer $l$ , we define the update rule of the message passing as:
$$
\overrightarrow{h}_{v}^{k, l} = \sigma \left( \mathbf{W}^{k, l} \sum_{\{ u \in V^{(k)} : (u, v) \in E^{(k)} \} \cup \{ v \}} \frac{w(u, v) \cdot \overrightarrow{h}_{u}^{k, l - 1}}{\sqrt{S(v) \cdot S(u)}} \right) , \tag{1}
$$
where $\overrightarrow{h}_u^{k, l-1}$ is the previous hidden representation of node $u \in V^{(k)}$ , $w(u, v)$ is the weight of edge $(u, v) \in E^{(k)}$ (capturing the frequency of the corresponding causal walk as explained in section 2), $\mathbf{W}^{k, l} \in \mathbb{R}^{H^l \times H^{l-1}}$ are trainable weight matrices, $S(v) := \sum_{u \in V^{(k)}} w(u, v)$ is the sum of weights of incoming edges of node $v$ , and $\sigma$ is a non-linear activation function. Since the message passing is performed on a higher-order De Bruijn graph, we obtain a non-Markovian (or rather higher-order Markovian) message passing dynamics, i.e. we perform a Laplacian smoothing that follows the non-Markovian patterns in the causal walks in the underlying dynamic graph. Different from standard, static graph neural networks that ignore the temporal dimension of dynamic graphs, this enables our architecture to incorporate temporal patterns that shape the causal topology, i.e. which nodes in a dynamic graph can influence each other directly and indirectly based on the temporal ordering of time-stamped edges (and the arrow of time).
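The update rule can be sketched in a few lines of numpy. This is our own minimal illustration, not the paper's implementation; in particular we assume unit self-loop weights for the $\cup \{v\}$ term and include the self-loop in the in-weight sums $S$ :

```python
import numpy as np

# One DBGNN message passing layer (sketch of Eq. (1)) on a k-th order
# De Bruijn graph with N higher-order nodes. A[u, v] = w(u, v) is the
# weighted adjacency matrix; H holds the previous hidden representations
# (one row per higher-order node); W is the trainable weight matrix.
def dbgnn_layer(A, H, W, sigma=np.tanh):
    A = A + np.eye(A.shape[0])          # unit self-loops: the "∪ {v}" term
    S = A.sum(axis=0)                   # in-weight sums S(v)
    norm = A / np.sqrt(np.outer(S, S))  # w(u, v) / sqrt(S(u) * S(v))
    return sigma(norm.T @ H @ W.T)      # aggregate over in-neighbors u of v

A = np.array([[0.0, 2.0],               # one weighted edge u -> v
              [0.0, 0.0]])
H = np.eye(2)                           # one-hot input features
W = np.eye(2)                           # identity weights for illustration
out = dbgnn_layer(A, H, W)
assert out.shape == (2, 2)              # one H^l-dim row per higher-order node
```

Because the adjacency matrix is that of the De Bruijn graph rather than the time-aggregated graph, this single matrix operation already realizes the higher-order Markovian smoothing described above.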
First-order message passing and bipartite projection layer While the (static) topology of edges influences the (possible) causal walks and thus the edges in the $k$ -th order De Bruijn graph, it is important to note that, since it operates on nodes $V^{(k)}$ in the higher-order graph, the message passing outlined above does not allow us to incorporate information on the first-order topology. To address this issue, we additionally include message passing in the (static) time-aggregated weighted graph $G$ , which can be done in parallel to the message passing in the higher-order De Bruijn graph. The $g$ layers of this first-order message passing (whose formal definition we omit as it simply uses the GCN update rule [28]) generate hidden representations $\overrightarrow{h}_v^{1, g}$ of nodes $v \in V$ . This approach enables us to incorporate optional node features $\overrightarrow{h}_v^{1, 0}$ (or alternatively use a one-hot encoding of nodes).
Since the message passing in a higher-order De Bruijn graph generates hidden features for higher-order nodes $V^{(k)}$ (i.e. sequences of $k$ nodes) rather than nodes $V$ in the original dynamic graph, we finally define a bipartite graph $G^b = (V^{(k)} \cup V, E^b \subseteq V^{(k)} \times V)$ that maps node features of higher-order nodes to the first-order node space. For a given node $v \in V$ , this bipartite layer adds the hidden representation $\overrightarrow{h}_u^{k, l}$ of each higher-order node $u = (u_0, \ldots, u_{k-1}) \in V^{(k)}$ with $u_{k-1} = v$ to the representation $\overrightarrow{h}_v^{1, g} \in \mathbb{R}^{F^g}$ generated by the last layer of the first-order message passing. Notice that the dimensions of representations in the last layers of the $k$ -th and first-order message passing must satisfy $F^g = H^l$ to enable the summation of the representations. We obtain representations $\{\overrightarrow{h}_u^{k, l} + \overrightarrow{h}_v^{1, g} : u \in V^{(k)} \text{ with } (u, v) \in E^b\}$ , i.e. the higher-order node representations augmented by the corresponding first-order representations. We then use a function $\mathcal{F}$ to aggregate the augmented higher-order representations at the level of first-order nodes. In our experiments, we learn first-order node representations $\overrightarrow{h}^{1, g}$ using GCN message passing with $g$ layers, allowing us to integrate information on both the static and the causal topology of a dynamic graph. Formally, we define the bipartite layer as
$$
{\overrightarrow{h}}_{v}^{b} = \sigma \left( {{\mathbf{W}}^{b}\mathcal{F}\left( \left\{ {{\overrightarrow{h}}_{u}^{k, l} + {\overrightarrow{h}}_{v}^{1, g} : \text{ for }u \in {V}^{\left( k\right) }\text{ with }\left( {u, v}\right) \in {E}^{b}}\right\} \right) }\right) , \tag{2}
$$
where $\overrightarrow{h}_v^b$ is the output of the bipartite layer for node $v \in V$ , and $\mathbf{W}^b \in \mathbb{R}^{F^g \times H^l}$ is a learnable weight matrix. The aggregation function $\mathcal{F}$ can be, e.g., SUM, MEAN, MAX, or MIN.
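The aggregation inside Eq. (2), before the weight matrix and non-linearity are applied, can be sketched as follows. This is our own illustration with scalar representations and $\mathcal{F} = \text{MEAN}$; the helper `bipartite_project` is a hypothetical name:

```python
# Bipartite projection (sketch of the inner part of Eq. (2)): each higher-order
# node u = (u_0, ..., u_{k-1}) maps to the first-order node u_{k-1}; its
# representation, augmented by the first-order representation, is aggregated
# per first-order node with F = MEAN. Scalars stand in for feature vectors.
def bipartite_project(h_k, h_1):
    """h_k: {higher-order node tuple: representation};
    h_1: {first-order node: representation}."""
    groups = {}
    for u, rep in h_k.items():
        v = u[-1]                               # last node of the causal walk
        groups.setdefault(v, []).append(rep + h_1[v])
    return {v: sum(vals) / len(vals) for v, vals in groups.items()}

h_k = {("a", "b"): 1.0, ("c", "b"): 3.0, ("b", "c"): 5.0}
h_1 = {"b": 10.0, "c": 0.0}
out = bipartite_project(h_k, h_1)
assert out["b"] == 12.0  # mean of (1 + 10) and (3 + 10)
assert out["c"] == 5.0   # single contribution (5 + 0)
```

Grouping by the last node of each walk is exactly the edge set $E^b$ of the bipartite graph defined above.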
Figure 2 gives an overview of the proposed neural network architecture for the dynamic graph (and associated second-order De Bruijn graph model) shown in Figure 1 (left). The higher-order message
Figure 2: Illustration of the DBGNN architecture with two message passing layers in the first-order (left, gray) and second-order De Bruijn graph (right, orange) corresponding to the dynamic graph in Figure 1 (left). Red edges indicate the bipartite mapping $G^b$ of higher-order node representations to first-order representations. An additional linear layer (not shown) is used for node classification.
passing layers on the right use the topology of the second-order De Bruijn graph in Figure 1 (left), while the first-order message passing layers (left) use the topology of the first-order graph. Note that the first-order and higher-order message passing can be performed in parallel, and that the number of message passing layers does not need to be the same in both branches. Red edges indicate the propagation of higher-order node representations to first-order nodes performed in the final bipartite layer. Due to space constraints, in Figure 2 we omit the final linear layer used for classification.
## 4 Experimental Evaluation
In the following, we experimentally evaluate our proposed causality-aware graph neural network architecture both in synthetic and empirical time series data on dynamic graphs. With our evaluation, we want to answer the following questions:
Q1 How does the performance of De Bruijn Graph Neural Networks compare to temporal and non-temporal graph learning techniques?
Q2 Can we use De Bruijn Graph Neural Networks to learn interpretable static latent space representations of nodes in dynamic graphs?
To address those questions, we use six time series data sets on dynamic graphs that provide meta-information on node classes. The overall statistics of the data sets can be found in Table 1. temp-clusters is a synthetically generated dynamic graph with three clusters in the causal topology, but no pattern in the static topology. To generate this data set, we first constructed a random graph and generated random sequences of time-stamped edges. We then selectively swapped the time stamps of edges such that causal walks of length two within three clusters of nodes are overrepresented, while causal walks between clusters are underrepresented. We include a more detailed description in the appendix (code and data will be provided in a companion repository). Apart from this synthetic data set, we use five empirical time series data sets: student-sms captures time-stamped SMS exchanged over four weeks between freshmen at the Technical University of Denmark [31]. We use the gender of participants as ground truth classes and use a maximum time difference of $\delta = {40}$. Since the time granularity of this data set is five minutes, this corresponds to a maximum time difference of 200 minutes. high-school-2011 and high-school-2012 capture time-stamped proximities between high-school students in two consecutive years [32] (4 days in 2011, 7 days in 2012). We use the gender of students as ground truth classes. workplace captures time-stamped proximity interactions between employees recorded in an office building for multiple days in different years [33]. We use the department of employees as ground truth classes. hospital captures time-stamped proximities between patients and healthcare workers in a hospital ward [34]. We use individuals' roles (patient, nurse, administrative, doctor) as ground truth node classes. All proximity data sets were collected with a resolution of 20 seconds.

To mitigate the computational complexity of the causal walk extraction in the (undirected) proximity data sets, we coarsen the resolution by aggregating interactions to a resolution of fifteen minutes and use $\delta = 4$, which corresponds to a maximum time difference of one hour. Based on the resulting statistics of causal walks, we use the method (and code) provided in [12] to select a higher-order De Bruijn graph model. In Table 1 we report the $p$-value of the resulting likelihood ratio test, which tests the hypothesis that a first-order graph model is sufficient to explain the observed causal walk statistics against the alternative hypothesis that a second-order De Bruijn graph model is needed. Since all $p$-values are numerically zero, we find strong evidence for patterns that justify a second-order De Bruijn graph model for all data sets.
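The causal walk statistics underlying this model selection can be illustrated with a short sketch: two time-stamped edges $(u, v; t_1)$ and $(v, w; t_2)$ contribute a causal walk $u \rightarrow v \rightarrow w$ of length two whenever $0 < t_2 - t_1 \leq \delta$. The toy function below is only illustrative; the experiments rely on the far more efficient counting methods of [12] and [30].

```python
from collections import Counter

def causal_walks_len2(edges, delta):
    """Count causal walks (u, v, w) of length two in a list of
    time-stamped edges (u, v, t): an edge (v, w; t2) continues an
    edge (u, v; t1) if it occurs within the maximum time
    difference, i.e. 0 < t2 - t1 <= delta."""
    out_edges = {}
    for u, v, t in edges:
        out_edges.setdefault(u, []).append((v, t))
    walks = Counter()
    for u, v, t1 in edges:
        for w, t2 in out_edges.get(v, []):
            if 0 < t2 - t1 <= delta:
                walks[(u, v, w)] += 1
    return walks
```

For the proximity data sets, time stamps are first aggregated to fifteen-minute bins, so `delta=4` corresponds to a maximum time difference of one hour.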
<table><tr><td>Data set</td><td>Ref</td><td>$\left| V\right|$</td><td>$\left| E\right|$</td><td>$\left| {E}^{\mathcal{T}}\right|$</td><td>$p\left( {k = 2}\right)$</td><td>$\left| {V}^{\left( 2\right) }\right|$</td><td>$\left| {E}^{\left( 2\right) }\right|$</td><td>$\delta$</td><td>Classes</td></tr><tr><td>temp-clusters</td><td>[blinded]</td><td>30</td><td>560</td><td>60000</td><td>0.0</td><td>560</td><td>6789</td><td>1</td><td>3</td></tr><tr><td>high-school-2011</td><td>[32]</td><td>126</td><td>3042</td><td>28561</td><td>0.0</td><td>3042</td><td>17141</td><td>4</td><td>2</td></tr><tr><td>high-school-2012</td><td>[32]</td><td>180</td><td>3965</td><td>45047</td><td>0.0</td><td>3965</td><td>20614</td><td>4</td><td>2</td></tr><tr><td>hospital</td><td>[34]</td><td>75</td><td>2028</td><td>32424</td><td>0.0</td><td>2028</td><td>15500</td><td>4</td><td>4</td></tr><tr><td>student-sms</td><td>[31]</td><td>429</td><td>733</td><td>46138</td><td>0.0</td><td>733</td><td>846</td><td>40</td><td>2</td></tr><tr><td>workplace</td><td>[33]</td><td>92</td><td>1431</td><td>9827</td><td>0.0</td><td>1431</td><td>7121</td><td>4</td><td>5</td></tr></table>
Table 1: Overview of time series data and ground truth node classes used in the experiments.
Using a second-order De Bruijn graph, we compare the node classification performance of the DBGNN architecture against the following five baselines. The first three are standard (static) graph learning techniques, namely Graph Convolutional Networks (GCN) [28], DeepWalk [35], and node2vec [36]. We further use two recently proposed temporal graph embedding techniques: Embedding Variable Orders (EVO) [14] is a node representation learning framework that captures non-Markovian characteristics in dynamic graphs. Similar to our approach, EVO uses a higher-order network to generate time-aware node representations that can be used for downstream node classification. HONEM [20] is a higher-order network embedding approach that captures non-Markovian dependencies in time series data on graphs. This framework applies truncated SVD to a higher-order neighborhood matrix that considers the temporal order of interactions.
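To illustrate the SVD step underlying HONEM-style embeddings, the sketch below factorizes an (arbitrary) higher-order neighborhood matrix and keeps the leading $d$ singular directions; the construction of HONEM's actual neighborhood matrix is more involved and is described in [20].

```python
import numpy as np

def svd_embedding(M, d=16):
    """Illustrative truncated-SVD embedding: factorize the
    neighborhood matrix M and use the rows of U * sqrt(S),
    restricted to the top-d singular values, as node vectors."""
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :d] * np.sqrt(S[:d])
```

The resulting rows can be fed to any downstream classifier, just like the representations learned by the other baselines.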
Addressing Q1, the results of our experiments on node classification are shown in Table 2. Since the classes of the empirical data sets are imbalanced, we use balanced accuracy and additionally report macro-averaged precision, recall, and F1-score for a 70-30 training-test split. We report the average performance across multiple splits. For DBGNN, GCN, DeepWalk, node2vec, and HONEM we performed 50 runs. Due to its higher computational complexity (and time constraints), we could only perform 10 runs for EVO. The standard deviations are included in the appendix. We trained node2vec, EVO, and DeepWalk with 80 walks of length 40 per node and a window size of 10. We obtained the embeddings using the word2vec implementation in [37]. For EVO, we use the average as an aggregator for the higher-order representations. To ensure the comparability of the results of GCN and DBGNN, we train both with the same number of convolutional layers, a learning rate of 0.001 for 5000 epochs, ELU [38] as activation function, and the Adam [39] optimiser. For DBGNN, we use SUM as aggregation function $\mathcal{F}$. Since the data sets have no node features, we use a one-hot encoding of nodes as a feature matrix (and a one-hot encoding of higher-order nodes in the initial layer of the DBGNN). For all methods, we fix the dimensionality of the learned representations to $d = {16}$, which is justified by the size of the graphs. We manually tuned the number of hidden dimensions of the first hidden layer for GCN and DBGNN, as well as the $p$ and $q$ parameters of EVO and node2vec. We report the results for the best combination of hyperparameters.
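Balanced accuracy, the headline metric in Table 2, is simply the mean of the per-class recalls, which prevents a majority-class predictor from scoring well on imbalanced data. A minimal reference implementation (equivalent to `sklearn.metrics.balanced_accuracy_score`):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: each class contributes equally,
    regardless of how many samples it has."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(classes)
```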
As expected, the results in Table 2 for the synthetic temp-clusters data set show that the three time-aware methods (EVO, HONEM, and DBGNN) perform considerably better than their static counterparts, which only "see" a random graph topology that does not allow node classes to be assigned meaningfully. Both EVO and our proposed DBGNN architecture are able to perfectly classify nodes in this data set. Interestingly, despite their good performance on the synthetic data set, the three time-aware methods show much higher variability on the empirical data sets. We find that DBGNN shows superior performance in terms of balanced accuracy, f1-macro, and recall-macro for all five empirical data sets, with a relative performance increase over the second-best method ranging from 1.55% to 28.16%. For precision-macro, DBGNN performs best on four of the five data sets. We attribute these results to the ability of our architecture to consider patterns both in the (static) graph topology and in the causal topology, as well as to the supervised, end-to-end training enabled by GCN-based message passing.
To address Q2, we study visualizations of the hidden representations of higher- and first-order nodes generated by the DBGNN architecture for the synthetic temp-clusters data set, which exhibits three clear clusters in the causal topology. We use the hidden representations ${\overrightarrow{h}}_{v}^{b}$ generated by the bipartite layer of our DBGNN architecture, as defined in Section 3. We compare this to the representation generated in the last message passing layer of a GCN. Figure 3 in the appendix confirms that the DBGNN architecture learns meaningful latent space representations of nodes that incorporate temporal patterns.
<table>
<tr><td>Data set</td><td>Method</td><td>Balanced Accuracy</td><td>F1-score-macro</td><td>Precision-macro</td><td>Recall-macro</td></tr>
<tr><td rowspan="6">temp-clusters</td><td>DeepWalk</td><td>32.47</td><td>30.39</td><td>32.25</td><td>32.47</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>35.48</td><td>33.02</td><td>34.92</td><td>35.48</td></tr>
<tr><td>GCN (8,32)</td><td>33.52</td><td>12.5</td><td>8.61</td><td>33.52</td></tr>
<tr><td>EVO p=1 q=1</td><td>100.0</td><td>100.0</td><td>100.0</td><td>100.0</td></tr>
<tr><td>HONEM</td><td>54.94</td><td>53.5</td><td>58.16</td><td>54.94</td></tr>
<tr><td>DBGNN (16,16)</td><td>100.0</td><td>100.0</td><td>100.0</td><td>100.0</td></tr>
<tr><td>gain</td><td/><td>0%</td><td>0%</td><td>0%</td><td>0%</td></tr>
<tr><td rowspan="6">high-school-2011</td><td>DeepWalk</td><td>55.25</td><td>54.02</td><td>60.45</td><td>55.25</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>56.89</td><td>56.29</td><td>60.05</td><td>56.89</td></tr>
<tr><td>GCN (32,4)</td><td>50.06</td><td>40.27</td><td>33.99</td><td>50.06</td></tr>
<tr><td>EVO p=1 q=4</td><td>57.21</td><td>56.28</td><td>62.09</td><td>57.21</td></tr>
<tr><td>HONEM</td><td>54.24</td><td>53.08</td><td>56.44</td><td>54.24</td></tr>
<tr><td>DBGNN (32,8)</td><td>64.4</td><td>63.7</td><td>65.14</td><td>64.4</td></tr>
<tr><td>gain</td><td/><td>12.57%</td><td>13.16%</td><td>4.91%</td><td>12.57%</td></tr>
<tr><td rowspan="6">high-school-2012</td><td>DeepWalk</td><td>59.46</td><td>59.6</td><td>71.71</td><td>59.46</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>60.75</td><td>61.23</td><td>72.44</td><td>60.75</td></tr>
<tr><td>GCN (8,32)</td><td>58.03</td><td>56.39</td><td>59.16</td><td>58.03</td></tr>
<tr><td>EVO p=4 q=1</td><td>57.98</td><td>57.5</td><td>69.42</td><td>57.98</td></tr>
<tr><td>HONEM</td><td>53.16</td><td>51.7</td><td>56.59</td><td>53.16</td></tr>
<tr><td>DBGNN (4,8)</td><td>65.8</td><td>65.89</td><td>67.27</td><td>65.8</td></tr>
<tr><td>gain</td><td/><td>8.31%</td><td>7.61%</td><td>-7.14%</td><td>8.31%</td></tr>
<tr><td rowspan="6">hospital</td><td>DeepWalk</td><td>47.18</td><td>44.18</td><td>43.91</td><td>47.18</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>50.6</td><td>47.14</td><td>45.81</td><td>50.6</td></tr>
<tr><td>GCN (32,32)</td><td>49.48</td><td>44.62</td><td>43.55</td><td>49.48</td></tr>
<tr><td>EVO p=1 q=4</td><td>36.34</td><td>36.44</td><td>42.1</td><td>36.34</td></tr>
<tr><td>HONEM</td><td>46.17</td><td>43.13</td><td>44.45</td><td>46.17</td></tr>
<tr><td>DBGNN (32,16)</td><td>59.04</td><td>55.26</td><td>58.71</td><td>57.71</td></tr>
<tr><td>gain</td><td/><td>16.68%</td><td>17.23%</td><td>28.16%</td><td>14.05%</td></tr>
<tr><td rowspan="6">student-sms</td><td>DeepWalk</td><td>53.22</td><td>50.57</td><td>60.57</td><td>53.22</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>53.22</td><td>50.97</td><td>58.56</td><td>53.22</td></tr>
<tr><td>GCN (4,32)</td><td>57.33</td><td>57.25</td><td>57.72</td><td>57.33</td></tr>
<tr><td>EVO p=4 q=1</td><td>52.93</td><td>50.66</td><td>57.14</td><td>52.93</td></tr>
<tr><td>HONEM</td><td>50.43</td><td>44.44</td><td>52.91</td><td>50.43</td></tr>
<tr><td>DBGNN (4,4)</td><td>60.6</td><td>60.89</td><td>62.55</td><td>60.6</td></tr>
<tr><td>gain</td><td/><td>5.7%</td><td>6.36%</td><td>3.27%</td><td>5.7%</td></tr>
<tr><td rowspan="6">workplace</td><td>DeepWalk</td><td>77.81</td><td>76.74</td><td>76.06</td><td>77.81</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>78.0</td><td>77.01</td><td>76.38</td><td>78.0</td></tr>
<tr><td>GCN (32,16)</td><td>81.86</td><td>78.72</td><td>78.58</td><td>79.93</td></tr>
<tr><td>EVO p=1 q=4</td><td>77.0</td><td>75.68</td><td>75.03</td><td>77.0</td></tr>
<tr><td>HONEM</td><td>73.26</td><td>72.82</td><td>73.73</td><td>73.26</td></tr>
<tr><td>DBGNN (32,8)</td><td>83.13</td><td>81.06</td><td>81.52</td><td>81.75</td></tr>
<tr><td>gain</td><td/><td>1.55%</td><td>2.97%</td><td>3.74%</td><td>2.28%</td></tr>
</table>
Table 2: Results of node classification in six dynamic graphs for static graph learning techniques (DeepWalk, node2vec, GCN) and time-aware methods (HONEM, EVO) as well as the DBGNN architecture proposed in this work.
## 5 Conclusion
In summary, we propose an approach to apply graph neural networks to high-resolution time series data that captures the temporal ordering of time-stamped edges in dynamic graphs. Our method is based on a novel combination of (i) a statistical approach to infer an optimal static higher-order De Bruijn graph model for the causal topology that is due to the temporal ordering of edges, (ii) gradient-based learning in a neural network architecture that performs neural message passing in the inferred higher-order De Bruijn graph, and (iii) an additional bipartite mapping layer that maps the learnt hidden representations of higher-order nodes to the original node space. Thanks to this approach, our architecture is able to generalize neural message passing to a static higher-order graph model that captures the causal topology of a dynamic graph, which can considerably deviate from what we would expect based on the mere (static) topology of edges. The results of our experiments demonstrate that the resulting architecture can considerably improve the performance of node classification in time series data, despite using message passing in a relatively simple static (augmented) graph. Bridging recent research on higher-order graph models in network science and deep learning in graphs [13, 15, 23, 24], our work contributes to the ongoing discussion about the need for augmented message passing schemes in data on graphs with complex characteristics [27].
## References

[1] Hamilton, W. L. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning 14, 1-159 (2020).

[2] Wu, Z. et al. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32, 4-24 (2021).

[3] Kempe, D., Kleinberg, J. & Kumar, A. Connectivity and inference problems for temporal networks. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, STOC '00, 504-513 (Association for Computing Machinery, New York, NY, USA, 2000). URL https://doi.org/10.1145/335305.335364.

[4] Holme, P. & Saramäki, J. Temporal networks. Phys. Rep. 519, 97-125 (2012). URL http://www.sciencedirect.com/science/article/pii/S0370157312000841.

[5] Badie-Modiri, A., Karsai, M. & Kivelä, M. Efficient limited-time reachability estimation in temporal networks. Phys. Rev. E 101, 052303 (2020). URL https://link.aps.org/doi/10.1103/PhysRevE.101.052303.

[6] Lentz, H. H. K., Selhorst, T. & Sokolov, I. M. Unfolding accessibility provides a macroscopic approach to temporal networks. Phys. Rev. Lett. 110, 118701 (2013). URL http://link.aps.org/doi/10.1103/PhysRevLett.110.118701.

[7] Badie-Modiri, A., Rizi, A. K., Karsai, M. & Kivelä, M. Directed percolation in temporal networks. Phys. Rev. Research 4, L022047 (2022). URL https://link.aps.org/doi/10.1103/PhysRevResearch.4.L022047.

[8] Pfitzner, R., Scholtes, I., Garas, A., Tessone, C. J. & Schweitzer, F. Betweenness preference: Quantifying correlations in the topological dynamics of temporal networks. Phys. Rev. Lett. 110, 198701 (2013). URL https://doi.org/10.1103/PhysRevLett.110.198701.

[9] Scholtes, I. et al. Causality-driven slow-down and speed-up of diffusion in non-Markovian temporal networks. Nature Communications 5, 5024 (2014). URL https://doi.org/10.1038/ncomms6024.

[10] Rosvall, M., Esquivel, A. V., Lancichinetti, A., West, J. D. & Lambiotte, R. Memory in network flows and its effects on spreading dynamics and community detection. Nature Communications 5 (2014).

[11] de Bruijn, N. G. A combinatorial problem. Nederl. Akad. Wetensch., Proc. 49, 461-467 (1946).

[12] Scholtes, I. When is a network a network? Multi-order graphical model selection in pathways and temporal networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17, 1037-1046 (ACM, New York, NY, USA, 2017). URL http://doi.acm.org/10.1145/3097983.3098145.

[13] Lambiotte, R., Rosvall, M. & Scholtes, I. From networks to optimal higher-order models of complex systems. Nature Physics 15, 313-320 (2019).

[14] Belth, C., Kamran, F., Tjandra, D. & Koutra, D. When to remember where you came from: Node representation learning in higher-order networks. In 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 222-225 (2019).

[15] Eliassi-Rad, T., Latora, V., Rosvall, M. & Scholtes, I. Higher-Order Graph Models: From Theoretical Foundations to Machine Learning (Dagstuhl Seminar 21352). Dagstuhl Reports 11, 139-178 (2021). URL https://drops.dagstuhl.de/opus/volltexte/2021/15592.

[16] Krieg, S. J., Burgis, W. C., Soga, P. M. & Chawla, N. V. Deep ensembles for graphs with higher-order dependencies. CoRR abs/2205.13988 (2022). URL https://doi.org/10.48550/arXiv.2205.13988.

[17] Fey, M. & Lenssen, J. E. Fast graph representation learning with PyTorch Geometric. arXiv preprint arXiv:1903.02428 (2019).

[18] Salnikov, V., Schaub, M. T. & Lambiotte, R. Using higher-order Markov models to reveal flow-based communities in networks. Scientific Reports 6, 1-13 (2016).

[19] Chung, F., Diaconis, P. & Graham, R. Universal cycles for combinatorial structures. Discrete Mathematics 110, 43-59 (1992). URL https://www.sciencedirect.com/science/article/pii/0012365X9290699G.

[20] Saebi, M., Ciampaglia, G. L., Kaplan, L. M. & Chawla, N. V. HONEM: Learning embedding for higher order networks. Big Data 8, 255-269 (2020). URL https://doi.org/10.1089/big.2019.0169.

[21] Xu, J., Wickramarathne, T. L. & Chawla, N. V. Representing higher-order dependencies in networks. Science Advances 2 (2016). URL http://advances.sciencemag.org/content/2/5/e1600028.

[22] Petrovic, L. V. & Scholtes, I. Learning the Markov order of paths in graphs. In Laforest, F. et al. (eds.) WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25-29, 2022, 1559-1569 (ACM, 2022). URL https://doi.org/10.1145/3485447.3512091.

[23] Torres, L., Blevins, A. S., Bassett, D. & Eliassi-Rad, T. The why, how, and when of representations for complex systems. SIAM Review 63, 435-485 (2021). URL https://doi.org/10.1137/20M1355896.

[24] Benson, A. R., Gleich, D. F. & Higham, D. J. Higher-order network analysis takes off, fueled by classical ideas and new data. arXiv preprint arXiv:2103.05031 (2021).

[25] Feng, Y., You, H., Zhang, Z., Ji, R. & Gao, Y. Hypergraph neural networks. CoRR abs/1809.09401 (2018). URL http://arxiv.org/abs/1809.09401.

[26] Huang, J. & Yang, J. UniGNN: A unified framework for graph and hypergraph neural networks. In Zhou, Z. (ed.) Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, 2563-2569 (ijcai.org, 2021). URL https://doi.org/10.24963/ijcai.2021/353.

[27] Veličković, P. Message passing all the way up. arXiv preprint arXiv:2202.11097 (2022).

[28] Kipf, T. N. & Welling, M. Semi-supervised classification with graph convolutional networks. CoRR abs/1609.02907 (2016). URL http://arxiv.org/abs/1609.02907.

[29] Badie-Modiri, A., Karsai, M. & Kivelä, M. Efficient limited-time reachability estimation in temporal networks. Physical Review E 101, 052303 (2020).

[30] Petrovic, L. V. & Scholtes, I. PaCo: Fast counting of causal paths in temporal network data. In Leskovec, J., Grobelnik, M., Najork, M., Tang, J. & Zia, L. (eds.) Companion of The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, 521-526 (ACM / IW3C2, 2021). URL https://doi.org/10.1145/3442442.3452050.

[31] Sapiezynski, P., Stopczynski, A., Lassen, D. D. & Lehmann, S. Interaction data from the Copenhagen Networks Study. Scientific Data 6, 1-10 (2019).

[32] Fournet, J. & Barrat, A. Contact patterns among high school students. PLoS ONE 9, e107878 (2014).

[33] Génois, M. et al. Data on face-to-face contacts in an office building suggest a low-cost vaccination strategy based on community linkers. Network Science 3, 326-347 (2015). URL http://www.sociopatterns.org/datasets/contacts-in-a-workplace/.

[34] Vanhems, P. et al. Estimating potential infection transmission routes in hospital wards using wearable proximity sensors. PLoS ONE 8, e73970 (2013). URL http://www.sociopatterns.org/datasets/hospital-ward-dynamic-contact-network/.

[35] Perozzi, B., Al-Rfou, R. & Skiena, S. DeepWalk: Online learning of social representations. In Macskassy, S. A., Perlich, C., Leskovec, J., Wang, W. & Ghani, R. (eds.) The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, 701-710 (ACM, 2014). URL https://doi.org/10.1145/2623330.2623732.

[36] Grover, A. & Leskovec, J. node2vec: Scalable feature learning for networks. In Krishnapuram, B. et al. (eds.) Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, 855-864 (ACM, 2016). URL https://doi.org/10.1145/2939672.2939754.

[37] Rehurek, R. & Sojka, P. Gensim: Python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic 3 (2011).

[38] Clevert, D.-A., Unterthiner, T. & Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint (2016).

[39] Kingma, D. & Ba, J. Adam: A method for stochastic optimization. International Conference on Learning Representations (2014).
## A Generation of Synthetic Data with Temporal Clusters
temp-clusters is a synthetically generated dynamic graph with a random static topology but a strong cluster structure in the causal topology. To generate the dynamic graph, we first generate a static directed random graph with $n$ vertices and $m$ edges. For our experiment we chose $n = {30}$ and $m = {560}$ . We randomly assign vertices to three equally-sized, non-overlapping clusters, where $C\left( v\right)$ denotes the cluster of vertex $v$ . We then generate $N$ sequences of two randomly chosen time-stamped edges $\left( {{v}_{0},{v}_{1};t}\right)$ and $\left( {{v}_{1},{v}_{2};t + 1}\right)$ that contribute to a causal walk of length two in the resulting dynamic graph. For each vertex ${v}_{1}$ of such a causal path of length two, we randomly pick:
- two time-stamped edges $\left( {u,{v}_{1};{t}_{1}}\right)$ and $\left( {{v}_{1}, w;{t}_{1} + 1}\right)$ such that $C\left( u\right) = C\left( {v}_{1}\right) \neq C\left( w\right)$
- two time-stamped edges $\left( {x,{v}_{1};{t}_{2}}\right)$ and $\left( {{v}_{1}, z;{t}_{2} + 1}\right)$ with $C\left( {v}_{1}\right) = C\left( z\right) \neq C\left( x\right)$
Finally, we swap the time stamps of the four time-stamped edges to $\left( {u,{v}_{1};{t}_{1}}\right)$ , $\left( {{v}_{1}, z;{t}_{1} + 1}\right)$ , $\left( {x,{v}_{1};{t}_{2}}\right)$ , and $\left( {{v}_{1}, w;{t}_{2} + 1}\right)$ . This swapping procedure is repeated for each vertex ${v}_{1}$ of a causal path of length two. This simple process changes the temporal ordering of time-stamped edges while affecting neither the topology nor the frequency of time-stamped edges. The model changes the time stamps of edges (and thus causal paths) such that vertices are preferentially connected, via causal paths of length two, to other vertices in the same cluster. This leads to a strong cluster structure in the causal topology of the dynamic graph, which (i) is neither present in the time-aggregated topology nor in the temporal activation patterns of edges, and (ii) can nevertheless be detected by higher-order methods. A random reshuffling of time stamps destroys the cluster pattern, which confirms that it is only due to the temporal order of time-stamped edges.
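The swap step can be sketched as follows; the tuples mirror the notation above, and the helper name is purely illustrative.

```python
def swap_timestamps(e1_in, e1_out, e2_in, e2_out):
    """Given two causal pairs through the same centre vertex v1,
    (u, v1; t1), (v1, w; t1+1) with C(u) = C(v1) != C(w), and
    (x, v1; t2), (v1, z; t2+1) with C(v1) = C(z) != C(x),
    exchange the time stamps of the two outgoing edges. The
    within-cluster continuation u -> v1 -> z becomes causal,
    while edge topology and edge frequencies stay unchanged."""
    (u, v1, t1), (_, w, _) = e1_in, e1_out
    (x, _, t2), (_, z, _) = e2_in, e2_out
    return [(u, v1, t1), (v1, z, t1 + 1), (x, v1, t2), (v1, w, t2 + 1)]
```

Applying this to every centre vertex of a sampled causal pair only permutes time stamps, so the static topology remains a random graph.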
## B Latent Space Embeddings of Synthetic Example
Figure 3 shows a latent representation of nodes in the synthetic data set temp-clusters generated by the DBGNN (a) and GCN (b) architectures. This synthetically generated dynamic graph contains no pattern whatsoever in the (static) graph topology, which corresponds to a random graph, i.e. the topology of edges is random and all nodes have similar degrees (cf. Figure 3(b)). However, correlations in the temporal ordering of edges lead to three strong clusters in the causal topology, i.e. there are three groups of nodes where, due to the arrow of time and the temporal ordering of edges, pairs of nodes within the same cluster can influence each other via causal walks more frequently than pairs of nodes in different clusters. We emphasize that the resulting pattern in the causal topology is exclusively due to the temporal ordering of edges. The latent space embedding in Figure 3(a) highlights the DBGNN architecture's ability to learn this pattern in the causal topology of the underlying dynamic graph, which is absent in Figure 3(b). As expected, the different node degrees of the static graph (visible as clusters in Figure 3(b)) are the only pattern captured in the hidden node representations of the GCN architecture, which is insensitive to the temporal ordering of edges. This synthetic example confirms that DBGNNs provide a simple, static, causality-aware approach for deep learning in dynamic graphs.

(a) Latent space representation of nodes generated by De Bruijn Graph Neural Network (DBGNN) using higher-order De Bruijn graph with order $k = 2$ .

(b) Latent space representation of nodes generated by Graph Convolutional Network (GCN).
Figure 3: Latent space representations of nodes in a synthetically generated dynamic graph (temp-clusters) with three clusters in the causal topology, where colours indicate cluster memberships. The hidden node representations learned by the DBGNN architecture capture the cluster structure in the causal topology, which is exclusively due to the temporal ordering of time-stamped edges, and not due to their topology or frequency.
## C Standard Deviation of Classification Results
In Table 3 we present the standard deviations of the classification results reported in Table 2 across all runs for all models.
<table>
<tr><td>Data set</td><td>Method</td><td>Balanced Accuracy</td><td>F1-score-macro</td><td>Precision-macro</td><td>Recall-macro</td></tr>
<tr><td rowspan="6">temp-clusters</td><td>DeepWalk</td><td>15.38</td><td>15.04</td><td>18.03</td><td>15.38</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>17.12</td><td>16.88</td><td>20.24</td><td>17.12</td></tr>
<tr><td>GCN (8,32)</td><td>7.3</td><td>7.69</td><td>8.04</td><td>7.3</td></tr>
<tr><td>EVO p=1 q=1</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr>
<tr><td>HONEM</td><td>16.27</td><td>16.71</td><td>19.61</td><td>16.27</td></tr>
<tr><td>DBGNN (16,16)</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr>
<tr><td rowspan="6">high-school-2011</td><td>DeepWalk</td><td>5.83</td><td>7.22</td><td>12.79</td><td>5.83</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>6.34</td><td>7.58</td><td>9.44</td><td>6.34</td></tr>
<tr><td>GCN (32,4)</td><td>0.89</td><td>3.1</td><td>4.83</td><td>0.89</td></tr>
<tr><td>EVO p=1 q=4</td><td>5.72</td><td>7.65</td><td>9.33</td><td>5.72</td></tr>
<tr><td>HONEM</td><td>5.72</td><td>6.93</td><td>10.07</td><td>5.72</td></tr>
<tr><td>DBGNN (32,8)</td><td>7.0</td><td>7.42</td><td>7.8</td><td>7.0</td></tr>
<tr><td rowspan="6">high-school-2012</td><td>DeepWalk</td><td>4.97</td><td>6.52</td><td>11.0</td><td>4.97</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>5.27</td><td>6.8</td><td>11.29</td><td>5.27</td></tr>
<tr><td>GCN (8,32)</td><td>6.87</td><td>9.49</td><td>13.58</td><td>6.87</td></tr>
<tr><td>EVO p=4 q=1</td><td>4.14</td><td>6.07</td><td>9.96</td><td>4.14</td></tr>
<tr><td>HONEM</td><td>4.59</td><td>5.89</td><td>9.12</td><td>4.59</td></tr>
<tr><td>DBGNN (4,8)</td><td>6.59</td><td>6.62</td><td>7.07</td><td>6.59</td></tr>
<tr><td rowspan="6">hospital</td><td>DeepWalk</td><td>7.64</td><td>6.9</td><td>7.51</td><td>7.64</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>6.79</td><td>6.46</td><td>6.95</td><td>6.79</td></tr>
<tr><td>GCN (32,32)</td><td>11.06</td><td>12.0</td><td>13.58</td><td>11.06</td></tr>
<tr><td>EVO p=1 q=4</td><td>9.31</td><td>11.34</td><td>16.31</td><td>9.31</td></tr>
<tr><td>HONEM</td><td>8.51</td><td>7.78</td><td>8.25</td><td>8.51</td></tr>
<tr><td>DBGNN (32,16)</td><td>13.09</td><td>12.54</td><td>15.02</td><td>12.65</td></tr>
<tr><td rowspan="6">student-sms</td><td>DeepWalk</td><td>2.72</td><td>4.45</td><td>10.05</td><td>2.72</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>3.29</td><td>4.93</td><td>9.13</td><td>3.29</td></tr>
<tr><td>GCN (4,32)</td><td>3.59</td><td>3.65</td><td>3.91</td><td>3.59</td></tr>
<tr><td>EVO p=4 q=1</td><td>3.38</td><td>5.14</td><td>7.89</td><td>3.38</td></tr>
<tr><td>HONEM</td><td>1.29</td><td>2.31</td><td>15.0</td><td>1.29</td></tr>
<tr><td>DBGNN (4,4)</td><td>4.28</td><td>4.47</td><td>4.56</td><td>4.28</td></tr>
<tr><td rowspan="6">workplace</td><td>DeepWalk</td><td>2.23</td><td>1.85</td><td>1.48</td><td>2.23</td></tr>
<tr><td>Node2Vec p=1 q=4</td><td>3.3</td><td>3.11</td><td>2.95</td><td>3.3</td></tr>
<tr><td>GCN (32,16)</td><td>8.67</td><td>8.6</td><td>9.61</td><td>8.26</td></tr>
<tr><td>EVO p=1 q=4</td><td>3.12</td><td>2.36</td><td>1.65</td><td>3.12</td></tr>
<tr><td>HONEM</td><td>6.27</td><td>5.17</td><td>4.34</td><td>6.27</td></tr>
<tr><td>DBGNN (32,8)</td><td>9.67</td><td>9.76</td><td>10.26</td><td>9.65</td></tr>
</table>
Table 3: Standard deviations of node classification in six dynamic graphs for static graph learning techniques (DeepWalk, node2vec, GCN) and time-aware methods (HONEM, EVO) as well as the DBGNN architecture proposed in this work.
papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,279 @@
§ DE BRUIJN GOES NEURAL: CAUSALITY-AWARE GRAPH NEURAL NETWORKS FOR TIME SERIES DATA ON DYNAMIC GRAPHS
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
We introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold in the causal topology of dynamic graphs, which is determined by causal walks, i.e. temporally ordered sequences of links by which nodes can influence each other over time. Our architecture builds on multiple layers of higher-order De Bruijn graphs, an iterative line graph construction where nodes in a De Bruijn graph of order $k$ represent walks of length $k - 1$, while edges represent walks of length $k$. We develop a graph neural network architecture that utilizes De Bruijn graphs to implement a message passing scheme that follows non-Markovian dynamics, which enables us to learn patterns in the causal topology of a dynamic graph. Addressing the issue that De Bruijn graphs with different orders $k$ can be used to model the same data set, we further apply statistical model selection to determine the optimal graph topology to be used for message passing. An evaluation on synthetic and empirical data sets suggests that DBGNNs can leverage temporal patterns in dynamic graphs, which substantially improves performance in a supervised node classification task.
§ 1 INTRODUCTION
Graph Neural Networks (GNNs) [1, 2] have become a cornerstone for the application of deep learning to data with a non-Euclidean, relational structure. Different flavors of GNNs have been shown to be highly efficient for tasks like node classification, representation learning, link prediction, cluster detection, or graph classification. The popularity of GNNs is largely due to the abundance of data that can be represented as graphs, i.e. as a set of nodes with pairwise connections represented as links. However, we increasingly have access to time-resolved data that not only capture which nodes are connected to each other, but also when and in which temporal order those connections occur. A number of works in computer science, network science, and interdisciplinary physics have highlighted how the temporal dimension of dynamic graphs, i.e. the timing and ordering of links, influences the causal topology of networked systems, i.e. which nodes can possibly influence each other over time [3-5]. In a nutshell, if an undirected link $(a, b)$ between two nodes $a$ and $b$ occurs before an undirected link $(b, c)$, node $a$ can causally influence node $c$ via node $b$. If the temporal ordering of those two links is reversed, node $a$ cannot influence node $c$ via $b$ due to the directionality of the arrow of time. This simple example shows that the arrow of time in dynamic graphs limits possible causal influences between nodes beyond what we would expect based on the mere topology of links.
Beyond such toy examples, a number of recent studies in network science, computer science, and interdisciplinary physics have shown that the temporal ordering of links in real time series data on graphs has non-trivial consequences for the properties of networked systems, e.g. for reachability and percolation [6, 7], diffusion and epidemic spreading [8, 9], as well as node rankings and community structures [10]. It has further been shown that this interesting aspect of dynamic graphs can be understood using a variant of De Bruijn graphs [11], i.e. static higher-order graphical models [9, 12, 13] of causal paths that capture both the temporal and the topological dimension of time series data on graphs.
While the generalization of network analysis techniques like node centrality measures and community detection [10, 12] or graph embedding [14] to such higher-order models has been successful, to the best of our knowledge no generalization of Graph Neural Networks to higher-order De Bruijn graphs has been proposed [15, 16]. Such a generalization bears several promises: First, it could enable us to apply well-known and efficient gradient-based learning techniques in a static neural network architecture that is able to learn patterns in the causal topology of dynamic graphs that are due to the temporal ordering of links. Second, by making the temporal ordering of links in time-stamped data a first-class citizen of graph neural networks, this generalization could be an interesting approach to incorporate a necessary condition for causality into state-of-the-art geometric deep learning techniques, which often lack meaningful ways to represent time. Finally, a combination of higher-order De Bruijn graph models with graph neural networks enables us to apply frequentist and Bayesian techniques to learn the "optimal" order of a De Bruijn graph model for a given time series, providing new ways to combine statistical learning and model selection with graph neural networks.
Addressing this gap, our work generalizes graph neural networks to higher-order De Bruijn graph models of causal paths in time-stamped data on dynamic graphs. We obtain a novel causality-aware graph neural network architecture for time series data that makes the following contributions:
* We develop a graph neural network architecture that generalizes message passing to multiple layers of higher-order De Bruijn graphs. The resulting De Bruijn Graph Neural Network (DBGNN) architecture leads to a non-Markovian message passing, whose dynamics matches correlations in the temporal ordering of links, thus enabling us to learn patterns that shape the causal topology of dynamic graphs.
* We evaluate our proposed architecture both in empirical and synthetically generated dynamic graphs and compare its performance to graph neural networks as well as (time-aware) graph representation learning techniques. We find that our method yields superior node classification performance.
* We combine this architecture with statistical model selection to infer the optimal higher order of a De Bruijn graph. This yields a two-step learning process, where (i) we first learn a parsimonious De Bruijn graph model that neither under- nor overfits patterns in a dynamic graph, and (ii) we apply message passing and gradient-based optimization to the inferred graph in order to address graph learning tasks like node classification or representation learning.
Our work builds on a, to the best of our knowledge, novel combination of (i) statistical model selection to infer optimal higher-order graphical models for causal paths in dynamic graphs, and (ii) gradient-based learning in a neural network architecture that uses the inferred higher-order graphical models as message passing layers. Thanks to this approach, our architecture performs message passing in an optimal graph model for the causal paths in a given dynamic graph. The results of our evaluation confirm that this explicit regularization of the message passing layers enables us to considerably improve performance in a node classification task. The remainder of this paper is structured as follows: In section 2 we introduce the background of our work and formally state the problem that we address, in section 3 we introduce the De Bruijn graph neural network architecture, in section 4 we experimentally validate our method in synthetic and empirical data on dynamic graphs, and in section 5 we summarize our contributions and highlight opportunities for future research. We have implemented our architecture based on the graph learning library PyTorch Geometric [17] and release the code of our experiments as an Open Source package${}^{1}$.
§ 2 BACKGROUND AND PROBLEM STATEMENT
Basic definitions We consider a dynamic graph $G^{\mathcal{T}} = (V, E^{\mathcal{T}})$ with a (static) set of nodes $V$ and time-stamped (directed) edges $(v, w; t) \in E^{\mathcal{T}} \subseteq V \times V \times \mathbb{N}$, where, without loss of generality, integer timestamps $t$ represent the instantaneous time at which a pair of nodes $v, w$ is connected [4]. While many real-world network data exhibit such timestamps, for the application of graph neural networks we often consider a time-aggregated projection $G(V, E)$ along the time axis, where a (static) edge $(v, w) \in E$ exists iff $\exists t \in \mathbb{N} : (v, w; t) \in E^{\mathcal{T}}$. We can further consider edge weights $w : E \rightarrow \mathbb{N}$ defined as $w(v, w) := |\{t \in \mathbb{N} : (v, w; t) \in E^{\mathcal{T}}\}|$, i.e. we use $w(v, w)$ to count the number of temporal activations of $(v, w)$.
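As a minimal illustration of the time-aggregated projection and the edge weights $w(v, w)$ defined above, the following sketch (function name and toy data hypothetical, not taken from the paper's code) counts the temporal activations of each node pair:

```python
from collections import Counter

def time_aggregate(temporal_edges):
    """Project time-stamped edges (v, w, t) onto a static weighted graph.

    The weight of (v, w) counts the number of temporal activations of the
    node pair, matching the definition of w(v, w) above.
    """
    weights = Counter((v, w) for v, w, _ in temporal_edges)
    nodes = {x for v, w, _ in temporal_edges for x in (v, w)}
    return nodes, dict(weights)

# (a, b) is active at t=1 and t=5, so its aggregated weight is 2
edges = [("a", "b", 1), ("b", "c", 2), ("a", "b", 5)]
nodes, w = time_aggregate(edges)
```

Note that this projection deliberately discards the timestamps; the rest of the section shows what is lost by doing so.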
A key motivation for the study of graphs as models for complex systems is that, apart from direct interactions captured by edges $(v, w)$, they facilitate the study of indirect interactions between nodes via paths or walks in a graph. Formally, we define a walk $v_0, v_1, \ldots, v_l$ of length $l$ in a graph $G = (V, E)$ as any sequence of nodes $v_i \in V$ such that $(v_{i-1}, v_i) \in E$ for $i = 1, \ldots, l$. The length $l$ of a walk captures the number of traversed edges, i.e. each node $v \in V$ is a walk of length zero, while each edge $(v, w)$ is a walk of length one. We further call a walk $v_0, v_1, \ldots, v_l$ a path of length $l$ from $v_0$ to $v_l$ iff $v_i \neq v_j$ for $i \neq j$, i.e. a path is a walk between a set of distinct nodes.
${}^{1}$ link blinded in review version
Causal walks and paths in dynamic graphs In a static graph $G = (V, E)$, the topology, i.e. which nodes can directly and indirectly influence each other via edges, walks, or paths, is completely determined by the edges $E$. This is different for dynamic graphs, which can be understood by extending the definition of walks and paths to causal concepts that respect the arrow of time:
Definition 1. For a dynamic graph $G^{\mathcal{T}} = (V, E^{\mathcal{T}})$, we call a node sequence $v_0, v_1, \ldots, v_l$ a causal walk iff the following two conditions hold: (i) $(v_{i-1}, v_i; t_i) \in E^{\mathcal{T}}$ for $i = 1, \ldots, l$ and (ii) $0 < t_j - t_i \leq \delta$ for $i < j$ and some $\delta > 0$.
The first condition ensures that nodes in a dynamic graph can only indirectly influence each other via a causal walk if a corresponding walk exists in the time-aggregated graph. Due to $0 < t_j - t_i$ for $i < j$, the second condition ensures that time-stamped edges in a causal walk occur in the correct chronological order, i.e. timestamps are monotonically increasing [3, 4]. As an example, two time-stamped edges $(a, b; 1), (b, c; 2)$ constitute a causal walk by which information from node $a$ starting at time $t_1 = 1$ can reach node $c$ at time $t_2 = 2$ via node $b$, while the same edges in reverse temporal order $(a, b; 2), (b, c; 1)$ do not constitute a causal walk. While this definition of a causal walk does not impose an upper bound on the time difference between consecutive time-stamped edges, it is often reasonable to define a time limit $\delta > 0$, i.e. a time difference beyond which consecutive edges are not considered to contribute to a causal walk. As an example, two time-stamped edges $(a, b; 1), (b, c; 100)$ constitute a causal walk by which information from node $a$ starting at time $t_1 = 1$ can reach node $c$ at time $t_2 = 100$ via node $b$ for $\delta = 150$, while they do not constitute a causal walk for $\delta = 5$. This time-limited notion of causal or time-respecting walks is characteristic of many real networked systems in which processes or agents have a finite time scale or "memory", which rules out infinitely long gaps between consecutive causal interactions [4, 5]. Analogous to the definition in a static graph, we finally define a causal path $v_0, v_1, \ldots, v_l$ of length $l$ from node $v_0$ to node $v_l$ as a causal walk with $v_i \neq v_j$ for $i \neq j$.
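The worked examples above can be checked mechanically. The sketch below (a hypothetical helper, assuming the input is already a chain of edges drawn from $E^{\mathcal{T}}$, so that condition (i) reduces to consecutive edges sharing a node) tests the conditions of Definition 1:

```python
def is_causal_walk(edge_seq, delta):
    """Check Definition 1 for a candidate causal walk, given as a list of
    time-stamped edges [(v0, v1, t1), (v1, v2, t2), ...]."""
    # consecutive edges must share a node (walk structure, condition (i))
    for (_, b, _), (c, _, _) in zip(edge_seq, edge_seq[1:]):
        if b != c:
            return False
    # condition (ii): strictly increasing timestamps, within time limit delta
    times = [t for _, _, t in edge_seq]
    if any(t1 >= t2 for t1, t2 in zip(times, times[1:])):
        return False
    return not times or times[-1] - times[0] <= delta

# The examples from the text:
assert is_causal_walk([("a", "b", 1), ("b", "c", 2)], delta=5)      # causal
assert not is_causal_walk([("a", "b", 2), ("b", "c", 1)], delta=5)  # reversed order
assert is_causal_walk([("a", "b", 1), ("b", "c", 100)], delta=150)
assert not is_causal_walk([("a", "b", 1), ("b", "c", 100)], delta=5)
```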
Non-Markovian characteristics of dynamic graphs The above definition of causal walks and paths in dynamic graphs has important consequences for our understanding of the topology of dynamic graphs, i.e. which nodes can directly and indirectly influence each other via walks or paths. Moreover, it has important consequences for graph learning and network analysis tasks such as node ranking, cluster detection, or embedding [9, 10, 12, 13, 18]. This additional complexity of dynamic graphs is due to the fact that the topology of a static graph $G = (V, E)$ can be fully understood based on the transitive hull of edges, i.e. the presence of two edges $(u, v) \in E$ and $(v, w) \in E$ implies that nodes $u$ and $w$ can indirectly influence each other via a walk or path, which we denote as $u \rightarrow^{*} w$. This not only enables us to use standard algorithms, e.g. to calculate (shortest) paths; it also implies that we can use matrix powers, eigenvalues, and eigenvectors to analyze topological properties of a graph. In contrast, in dynamic graphs the chronological order of time-stamped edges can break transitivity, i.e. $(u, v; t) \in E^{\mathcal{T}}$ and $(v, w; t') \in E^{\mathcal{T}}$ does not necessarily imply $u \rightarrow^{*} w$, which invalidates such graph analytic approaches [13].
To study the question how correlations in the temporal ordering of time-stamped edges influence the causal topology of a dynamic graph, we can take a statistical modelling perspective. We can, for instance, consider causal walks as sequences of random variables that can be modelled via a Markov chain of order $k$ over a discrete state space $V$ [12]. In other words, we model the sequence of nodes $v_0, \ldots, v_{l-1}$ on causal walks as $P(v_i \mid v_{i-k}, \ldots, v_{i-1})$, where $k - 1$ is the length of the "memory" of the Markov chain. For $k = 1$ we have a memoryless, first-order Markov chain model $P(v_i \mid v_{i-1})$, where the next node on the walk exclusively depends on the current node. From the perspective of dynamic graphs with time-stamped link sequences, this corresponds to a case where the causal walks of the dynamic graph are exclusively determined by the topology (and possibly frequency) of edges, i.e. there are no correlations in the temporal ordering of time-stamped edges and the causal topology of the dynamic graph matches the topology of the corresponding time-aggregated graph. If we need a Markov order $k > 1$, the sequence of nodes traversed by causal walks exhibits memory, i.e. the next node on a walk not only depends on the current one but also on the history of past interactions. The presence of such higher-order correlations in dynamic graphs is associated with more complex causal topologies that (i) cannot be reduced to the topology of the associated time-aggregated network, and (ii) have interesting implications for spreading and diffusion processes and spectral properties [9], node centralities [12], and community structures [10].
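To make the notion of higher-order memory concrete, the following sketch (hypothetical function and toy walks, not from the paper's code) estimates $k$-th order transition counts for $P(v_i \mid v_{i-k}, \ldots, v_{i-1})$ from a set of causal walks:

```python
from collections import defaultdict

def markov_counts(walks, k):
    """Count k-th order transitions from node sequences (causal walks):
    counts[(v_{i-k}, ..., v_{i-1})][v_i] tallies observed continuations."""
    counts = defaultdict(lambda: defaultdict(int))
    for walk in walks:
        for i in range(k, len(walk)):
            memory = tuple(walk[i - k:i])
            counts[memory][walk[i]] += 1
    return counts

walks = [["a", "b", "c"], ["d", "b", "e"], ["a", "b", "c"]]
c2 = markov_counts(walks, k=2)
# With k=2, the continuation after ("a", "b") is always "c" and after
# ("d", "b") always "e": a correlation that a first-order model, which
# only conditions on node "b", cannot capture.
```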
Higher-order De Bruijn graph models of causal topologies The use of higher-order Markov chain models for causal paths leads to an interesting novel view on the relationship between graph models and time series data on dynamic graphs. In this view, the common (weighted) time-aggregated graph representation of time-stamped edges corresponds to a first-order graphical model, where edge weights capture the statistics of edges, i.e. causal paths of length one. A normalization of edge weights in this graph yields a first-order Markov model of causal walks in a dynamic graph. Similarly, a graphical representation of a higher-order Markov chain model of causal walks can be used to capture non-Markovian patterns in the temporal sequence of time-stamped edges. However, different from higher-order Markov chain models of general categorical sequences, a higher-order model of causal paths in dynamic graphs must account for the fact that the set of possible causal paths is constrained by the topology of the corresponding static graph (i.e. condition (i) in Definition 1). To account for this, we define a higher-order De Bruijn graph model of causal walks [11]:
Definition 2 ($k$-th order De Bruijn graph model). For a dynamic graph $G^{\mathcal{T}} = (V, E^{\mathcal{T}})$ and $k \in \mathbb{N}$, a $k$-th order De Bruijn graph model of causal paths in $G^{\mathcal{T}}$ is a graph $G^{(k)} = (V^{(k)}, E^{(k)})$, with $u := (u_0, u_1, \ldots, u_{k-1}) \in V^{(k)}$ a causal walk of length $k - 1$ in $G^{\mathcal{T}}$ and $(u, v) \in E^{(k)}$ iff (i) $v = (v_1, \ldots, v_k)$ with $u_i = v_i$ for $i = 1, \ldots, k - 1$ and (ii) $u \oplus v = (u_0, \ldots, u_{k-1}, v_k)$ a causal walk of length $k$ in $G^{\mathcal{T}}$.
We note that any two adjacent nodes $u, v \in V^{(k)}$ in a $k$-th order De Bruijn graph $G^{(k)}$ represent two causal walks of length $k - 1$ that overlap in exactly $k - 1$ nodes, i.e. each edge $(u, v) \in E^{(k)}$ represents a causal walk of length $k$. We can further use edge weights $w : E^{(k)} \rightarrow \mathbb{N}$ to capture the frequencies of causal paths of length $k$. The (weighted) time-aggregated graph $G$ of a dynamic graph trivially corresponds to a first-order De Bruijn graph, where (i) nodes are causal walks of length zero and (ii) edges $E = E^{(1)}$ capture causal walks of length one (i.e. edges) in $G^{\mathcal{T}}$. To construct a second-order De Bruijn graph $G^{(2)}$, we can perform a line graph transformation of a static graph $G = G^{(1)}$, where each edge $((u_0, u_1), (u_1, u_2)) \in E^{(2)}$ captures a causally ordered sequence of two edges $(u_0, u_1; t)$ and $(u_1, u_2; t')$. A $k$-th order De Bruijn graph can be constructed by a repeated line graph transformation of a static graph $G$. Hence, De Bruijn graphs can be viewed as a generalization of common graph models to higher-order, static graphical models of causal walks, where walks of length $l$ in $G^{(k)}$ model causal walks of length $k + l - 1$ in $G^{\mathcal{T}}$ [9, 13].
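Under Definition 2, a weighted $k$-th order De Bruijn graph can be built directly from the statistics of observed causal walks. A minimal sketch (function name and toy data hypothetical):

```python
from collections import Counter

def de_bruijn_graph(causal_walks, k):
    """Build a weighted k-th order De Bruijn graph from causal walks.

    Nodes are sub-walks of k nodes (causal walks of length k-1); an edge
    connects two nodes that overlap in k-1 nodes, weighted by the frequency
    of the corresponding causal walk of length k.
    """
    edges = Counter()
    for walk in causal_walks:
        for i in range(len(walk) - k):
            u = tuple(walk[i:i + k])          # causal walk of length k-1
            v = tuple(walk[i + 1:i + k + 1])  # shifted by one node
            edges[(u, v)] += 1
    nodes = {x for u, v in edges for x in (u, v)}
    return nodes, dict(edges)

walks = [["a", "b", "c"], ["a", "b", "c"], ["d", "b", "e"]]
nodes2, edges2 = de_bruijn_graph(walks, k=2)
# edges2 contains (("a","b"), ("b","c")) with weight 2, but no edge from
# ("a","b") to ("b","e"): the second-order model separates causal walks
# that a first-order time-aggregated graph would conflate.
```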
De Bruijn graphs have interesting mathematical properties that connect them to trajectories of subshifts of finite type as well as to dynamical systems and ergodic theory [19]. For the purpose of our work, they provide the advantage that we can use $k$-th order De Bruijn graphs to model the causal topology of dynamic graphs. We illustrate this in fig. 1, which shows two dynamic graphs with four nodes and 33 time-stamped links. These dynamic graphs only differ in terms of the temporal ordering of edges, i.e. they have the same (first-order) weighted time-aggregated graph representation (center). Moreover, this first-order representation wrongly suggests that node $A$ can influence node $C$ by a path via node $B$ in both cases. While this is true in the dynamic graph on the right (see red causal paths), no corresponding causal path from $A$ via $B$ to $C$ exists in the dynamic graph on the left. A second-order De Bruijn graph model (bottom left and right) captures the fact that the causal path from $A$ via $B$ to $C$ is absent in the left example. This shows that, different from commonly used static graph representations, the edges of a $k$-th order De Bruijn graph with $k > 1$ are sensitive to the temporal ordering of time-stamped edges. Hence, static higher-order De Bruijn graphs can be used to model the causal topology of a dynamic graph. We can view a $k$-th order De Bruijn graph in analogy to a $k$-th order Markov model, where a directed link from node $(u_0, \ldots, u_{k-1})$ to node $(u_1, \ldots, u_k)$ captures a step to node $u_k$ in the underlying graph, conditional on a memory of the $k$ previously visited nodes $u_0, \ldots, u_{k-1}$. This approach has been used to analyze how the causal topology of dynamic graphs influences node rankings [10, 12], the modelling of random walks and diffusion [9], community detection [10, 18], and time-aware static graph embedding [14, 20]. Moreover, several works have proposed heuristic, frequentist, and Bayesian methods to infer the optimal order of higher-order graph models of causal paths given time series data on dynamic graphs [10, 12, 21, 22].
Problem Statement and Research Gap The works above provide the background for the generalization of graph neural networks to higher-order De Bruijn graph models of causal walks in dynamic graphs, which we propose in the following section. Following the terminology in the network science community, higher-order De Bruijn graph models can be seen as one particular type of higher-order network models [13, 23, 24], which capture (causally-ordered) sequences of interactions between more than two nodes, rather than dyadic edges. They complement other types of popular higher-order network models (like, e.g. hypergraphs, simplicial complexes, or motif-based adjacency matrices)
<graphics>
Figure 1: Simple example for two dynamic graphs with four nodes and 33 directed time-stamped edges (top left and right). The two graphs only differ in terms of the temporal ordering of edges. Frequency and topology of edges are identical, i.e. they have the same first-order time-aggregated weighted graph representation (center). Due to the arrow of time, causal walks and paths differ in the two dynamic graphs: Assuming $\delta = 1$ , in the left dynamic graph node $A$ cannot causally influence $C$ via $B$ , while such a causal path is possible in the right graph. A second-order De Bruijn graph representation of causal walks in the two graphs (bottom left and right) captures this difference in the causal topology. Building on such causality-aware graphical models, in our work we define a graph neural network architecture that is able to learn patterns in the causal topology of dynamic graphs.
that consider (unordered) non-dyadic interactions in static networks, and which have been used to generalize graph neural networks to non-dyadic interactions [25, 26].
To the best of our knowledge, De Bruijn graph models have not been combined with recent advances in graph neural networks. Closing this gap, we propose a causality-aware graph convolutional network architecture that uses an augmented message passing scheme [27] in higher-order De Bruijn graphs to capture patterns in the causal topology of dynamic graphs.
§ 3 DE BRUIJN GRAPH NEURAL NETWORK ARCHITECTURE
We now introduce the De Bruijn Graph Neural Network (DBGNN) architecture with an augmented message passing [27] scheme whose dynamics matches the non-Markovian characteristics of dynamic graphs, which is the key contribution of our work. While we build on the message passing proposed for Graph Convolutional Networks (GCN) [28], it is easy to generalize our architecture to other message passing schemes. Our approach is based on the following three steps, which yield an easy-to-implement and scalable class of graph neural networks for time series and sequential data on graphs: (i) We first use time series data on dynamic graphs to calculate statistics of causal walks of different lengths $k$, and use these statistics to select a higher-order De Bruijn graph model for the causal topology of the dynamic graph. This step is parameter-free, i.e. we can use statistical learning techniques to infer an optimal graph model for the causal topology directly from time series data, without the need for hyperparameter tuning or cross-validation. (ii) We then define a graph convolutional network that builds on neural message passing in the higher-order De Bruijn graphs inferred in step one. The hidden layers of the resulting graph convolutional network yield meaningful latent representations of patterns in the causal topology of a dynamic graph. (iii) Since the nodes in a $k$-th order De Bruijn graph model correspond to walks (i.e. sequences) of nodes of length $k - 1$, we implement an additional bipartite layer that maps the latent space representations of sequences to nodes in the original graph. In the following, we provide a detailed description of the three steps outlined above:
Inference of Optimal Higher-Order De Bruijn Graph Model The first step in the DBGNN architecture is the inference of a higher-order De Bruijn graph model for the causal topology in a given dynamic graph data set. For this, we use Definition 1 to calculate the statistics of causal walks of different lengths $k$ for a given maximum time difference $\delta$. We note that this can be achieved using efficient window-based algorithms [29, 30]. The statistics of causal walks in the dynamic graph allow us to apply the model selection technique proposed in [12], which yields the optimal order of a De Bruijn graph model given the statistics of causal walks (or paths). The resulting (static) higher-order De Bruijn graph model is the basis for our extension of the message passing scheme to dynamic graphs with non-Markovian characteristics.
Message passing in higher-order De Bruijn graphs Standard message passing algorithms in graph neural networks use the topology of a graph to propagate (and smooth) features across nodes, thus generating hidden features that incorporate patterns in the topology of a graph. To additionally incorporate patterns in the causal topology of a dynamic graph, we perform message passing in multiple layers of higher-order De Bruijn graphs. Assuming a $k$-th order De Bruijn graph model $G^{(k)} = (V^{(k)}, E^{(k)})$ as defined in Definition 2, the input to the first layer $l = 0$ is a set of $k$-th order node features $\mathbf{h}^{k,0} = \{\overrightarrow{h}_1^{k,0}, \overrightarrow{h}_2^{k,0}, \ldots, \overrightarrow{h}_N^{k,0}\}$, for $\overrightarrow{h}_i^{k,0} \in \mathbb{R}^{H^0}$, where $N = |V^{(k)}|$ and $H^0$ is the dimensionality of the initial node features. The De Bruijn graph message passing layer uses the causal topology to learn a new set of hidden representations for higher-order nodes $\mathbf{h}^{k,1} = \{\overrightarrow{h}_1^{k,1}, \overrightarrow{h}_2^{k,1}, \ldots, \overrightarrow{h}_N^{k,1}\}$, with $\overrightarrow{h}_i^{k,1} \in \mathbb{R}^{H^1}$ for each $k$-th order node $i$ (corresponding to a causal walk of length $k - 1$). For layer $l$, we define the update rule of the message passing as:
$$
\overrightarrow{h}_{v}^{k,l} = \sigma \left( \mathbf{W}^{k,l} \sum_{\{ u \in V^{(k)} : (u,v) \in E^{(k)} \} \cup \{ v \}} \frac{w(u,v) \cdot \overrightarrow{h}_{u}^{k,l-1}}{\sqrt{S(v) \cdot S(u)}} \right) , \tag{1}
$$
where $\overrightarrow{h}_{u}^{k,l-1}$ is the previous hidden representation of node $u \in V^{(k)}$, $w(u,v)$ is the weight of edge $(u,v) \in E^{(k)}$ (capturing the frequency of the corresponding causal walk as explained in section 2), $\mathbf{W}^{k,l} \in \mathbb{R}^{H^{l} \times H^{l-1}}$ are trainable weight matrices, $S(v) := \sum_{u \in V^{(k)}} w(u,v)$ is the sum of weights of incoming edges of node $v$, and $\sigma$ is a non-linear activation function. Since the message passing is performed on a higher-order De Bruijn graph, we obtain a non-Markovian (or rather higher-order Markovian) message passing dynamics, i.e. we perform a Laplacian smoothing that follows the non-Markovian patterns in the causal walks in the underlying dynamic graph. Different from standard, static graph neural networks that ignore the temporal dimension of dynamic graphs, this enables our architecture to incorporate temporal patterns that shape the causal topology, i.e. which nodes in a dynamic graph can influence each other directly and indirectly based on the temporal ordering of time-stamped edges (and the arrow of time).
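The update rule above is a weighted, GCN-style convolution on the De Bruijn graph. The following dense numpy sketch illustrates one such layer (names hypothetical, self-loops given unit weight; a practical implementation would use sparse message passing, e.g. in PyTorch Geometric):

```python
import numpy as np

def dbgnn_layer(H, W_adj, Theta):
    """One De Bruijn message passing layer in the spirit of eq. (1).

    H:     (N, H_in)     hidden features of the N k-th order nodes
    W_adj: (N, N)        weighted adjacency, W_adj[u, v] = w(u, v)
    Theta: (H_out, H_in) trainable weight matrix W^{k,l}
    """
    A = W_adj + np.eye(W_adj.shape[0])     # include v itself in the sum
    S = A.sum(axis=0)                      # in-strength S(v) of each node
    norm = A / np.sqrt(np.outer(S, S))     # w(u,v) / sqrt(S(u) * S(v))
    msg = norm.T @ H                       # aggregate over in-neighbours u
    return np.maximum(0.0, msg @ Theta.T)  # ReLU as the non-linearity sigma
```

With identity features and weights, a single edge $0 \to 1$ yields an output for node 1 that mixes its own (self-loop) feature with the normalized message from node 0.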
First-order message passing and bipartite projection layer While the (static) topology of edges constrains the possible causal walks and thus the edges in the $k$-th order De Bruijn graph, it is important to note that, since it operates on nodes $V^{(k)}$ in the higher-order graph, the message passing outlined above does not allow us to incorporate information on the first-order topology. To address this issue, we additionally include message passing in the (static) time-aggregated weighted graph $G$, which can be done in parallel to the message passing in the higher-order De Bruijn graph. The $g$ layers of this first-order message passing (whose formal definition we omit as it simply uses the GCN update rule [28]) generate hidden representations $\overrightarrow{h}_v^{1,g}$ of nodes $v \in V$. This approach enables us to incorporate optional node features $\overrightarrow{h}_v^{1,0}$ (or alternatively use a one-hot encoding of nodes).
Since the message passing in a higher-order De Bruijn graph generates hidden features for higher-order nodes ${V}^{\left( k\right) }$ (i.e. sequences of $k$ nodes) rather than for nodes $V$ in the original dynamic graph, we finally define a bipartite graph ${G}^{b} = \left( {{V}^{\left( k\right) } \cup V,{E}^{b} \subseteq {V}^{\left( k\right) } \times V}\right)$ that maps node features of higher-order nodes to the first-order node space. For a given node $v \in V$, this bipartite layer adds the hidden representation ${\overrightarrow{h}}_{u}^{k,l}$ of each higher-order node $u = \left( {{u}_{0},\ldots ,{u}_{k - 1}}\right) \in {V}^{\left( k\right) }$ with ${u}_{k - 1} = v$ to the representation ${\overrightarrow{h}}_{v}^{1,g} \in {\mathbb{R}}^{{F}^{g}}$ generated by the last layer of the first-order message passing. Note that the dimensions of the representations in the last layers of the $k$-th order and first-order message passing must satisfy ${F}^{g} = {H}^{l}$ to enable this summation. We obtain representations $\left\{ {{\overrightarrow{h}}_{u}^{k,l} + {\overrightarrow{h}}_{v}^{1,g} : \text{ for }u \in {V}^{\left( k\right) }\text{ with }\left( {u,v}\right) \in {E}^{b}}\right\}$, i.e. the higher-order node representations augmented by the corresponding first-order representations. We then use a function $\mathcal{F}$ to aggregate the augmented higher-order representations at the level of first-order nodes. In our experiments, we learn the first-order node representations ${\overrightarrow{h}}^{1,g}$ using GCN message passing with $g$ layers, which allows us to integrate information on both the static and the causal topology of a dynamic graph. Formally, we define the bipartite layer as
$$
{\overrightarrow{h}}_{v}^{b} = \sigma \left( {{\mathbf{W}}^{b}\mathcal{F}\left( \left\{ {{\overrightarrow{h}}_{u}^{k,l} + {\overrightarrow{h}}_{v}^{1,g} : \text{ for }u \in {V}^{\left( k\right) }\text{ with }\left( {u,v}\right) \in {E}^{b}}\right\} \right) }\right) , \tag{2}
$$
where ${\overrightarrow{h}}_{v}^{b}$ is the output of the bipartite layer for node $v \in V$, and ${\mathbf{W}}^{b} \in {\mathbb{R}}^{{F}^{g} \times {H}^{l}}$ is a learnable weight matrix. The aggregation function $\mathcal{F}$ can be any permutation-invariant function such as SUM, MEAN, MAX, or MIN.
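Assuming SUM is chosen as $\mathcal{F}$, the bipartite projection of Eq. (2) can be sketched as follows (a minimal illustration with hypothetical names, not the authors' code):

```python
import numpy as np

# Bipartite projection layer (Eq. 2) with F = SUM. ho_last[u] gives the
# last first-order node of higher-order node u; H_ho holds higher-order
# representations (H^l dims), H_fo first-order representations (F^g dims).

def bipartite_layer(H_ho, H_fo, ho_last, W_b, sigma=np.tanh):
    out = np.zeros_like(H_fo)
    for u, v in enumerate(ho_last):   # edge (u, v) in E^b
        out[v] += H_ho[u] + H_fo[v]   # augment, then SUM-aggregate per v
    return sigma(out @ W_b.T)

H_ho = np.ones((4, 3))                # 4 higher-order nodes, H^l = 3
H_fo = np.zeros((2, 3))               # 2 first-order nodes, F^g = 3
ho_last = [0, 0, 1, 1]                # last node of each higher-order node
W_b = np.eye(3)
H_b = bipartite_layer(H_ho, H_fo, ho_last, W_b)
```

Note that the requirement $F^{g} = H^{l}$ appears here as the shape compatibility of `H_ho[u] + H_fo[v]`.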
Figure 2 gives an overview of the proposed neural network architecture for the dynamic graph (and associated second-order De Bruijn graph model) shown in Figure 1 (left). The higher-order message passing layers on the right use the topology of the second-order De Bruijn graph in Figure 1 (left), while the first-order message passing layers (left) use the topology of the first-order graph. Note that the first-order and higher-order message passing can be performed in parallel, and that the numbers of message passing layers do not necessarily need to be the same. Red edges indicate the propagation of higher-order node representations to first-order nodes performed in the final bipartite layer. Due to space constraints, we omit the final linear layer used for classification from Figure 2.

Figure 2: Illustration of the DBGNN architecture with two message passing layers in the first-order (left, gray) and second-order De Bruijn graph (right, orange) corresponding to the dynamic graph in Figure 1 (left). Red edges indicate the bipartite mapping ${G}^{b}$ of higher-order node representations to first-order representations. An additional linear layer (not shown) is used for node classification.
§ 4 EXPERIMENTAL EVALUATION
In the following, we experimentally evaluate our proposed causality-aware graph neural network architecture both in synthetic and empirical time series data on dynamic graphs. With our evaluation, we want to answer the following questions:
Q1 How does the performance of De Bruijn Graph Neural Networks compare to temporal and non-temporal graph learning techniques?
Q2 Can we use De Bruijn Graph Neural Networks to learn interpretable static latent space representations of nodes in dynamic graphs?
To address these questions, we use six time series data sets on dynamic graphs that provide meta-information on node classes. The overall statistics of the data sets can be found in Table 1. temp-clusters is a synthetically generated dynamic graph with three clusters in the causal topology, but no pattern in the static topology. To generate this data set, we first constructed a random graph and generated random sequences of time-stamped edges. We then selectively swapped the time stamps of edges such that causal walks of length two within three clusters of nodes are overrepresented, while causal walks between clusters are underrepresented. We include a more detailed description in the appendix (code and data will be provided in a companion repository). Apart from this synthetic data set, we use five empirical time series data sets: student-sms captures time-stamped SMS messages exchanged over four weeks between freshmen at the Technical University of Denmark [31]. We use the gender of participants as ground truth classes and a maximum time difference of $\delta = 40$. Since the time granularity of this data set is five minutes, this corresponds to a maximum time difference of 200 minutes. high-school-2011 and high-school-2012 capture time-stamped proximities between high-school students in two consecutive years [32] (4 days in 2011, 7 days in 2012). We use the gender of students as ground truth classes. workplace captures time-stamped proximity interactions between employees recorded in an office building over multiple days in different years [33]. We use the department of employees as ground truth classes. hospital captures time-stamped proximities between patients and healthcare workers in a hospital ward [34]. We use individuals' roles (patient, nurse, administrative staff, doctor) as ground truth node classes. All proximity data sets were collected with a resolution of 20 seconds.
To mitigate the computational complexity of the causal walk extraction in the (undirected) proximity data sets, we coarsen the temporal resolution by aggregating interactions to fifteen-minute intervals and use $\delta = 4$, which corresponds to a maximum time difference of one hour. Based on the resulting statistics of causal walks, we use the method (and code) provided in [12] to select a higher-order De Bruijn graph model. In Table 1 we report the $p$-value of the resulting likelihood ratio test, which tests the null hypothesis that a first-order graph model is sufficient to explain the observed causal walk statistics against the alternative hypothesis that a second-order De Bruijn graph model is needed. Since all $p$-values are numerically zero, we find strong evidence for patterns that justify a second-order De Bruijn graph model in all data sets.
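The coarsening step above can be sketched in a few lines (our own illustration, assuming a simple `(u, v, t)` edge layout in seconds; the paper does not specify its data format):

```python
# Aggregate time-stamped edges to a fifteen-minute resolution, so that
# delta = 4 coarsened steps corresponds to a one-hour maximum time difference.

RESOLUTION = 15 * 60  # seconds

def coarsen(temporal_edges):
    """Map (u, v, t) edges to coarsened time steps, dropping duplicates."""
    return sorted({(u, v, t // RESOLUTION) for (u, v, t) in temporal_edges})

edges = [("a", "b", 10), ("a", "b", 500), ("b", "c", 2000)]
print(coarsen(edges))  # [('a', 'b', 0), ('b', 'c', 2)]
```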
| Data set | Ref | $\vert V\vert$ | $\vert E\vert$ | $\vert {E}^{\mathcal{T}}\vert$ | $p\left( {k = 2}\right)$ | $\vert {V}^{\left( 2\right) }\vert$ | $\vert {E}^{\left( 2\right) }\vert$ | $\delta$ | Classes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| temp-clusters | [blinded] | 30 | 560 | 60,000 | 0.0 | 560 | 6,789 | 1 | 3 |
| high-school-2011 | [32] | 126 | 3,042 | 28,561 | 0.0 | 3,042 | 17,141 | 4 | 2 |
| high-school-2012 | [32] | 180 | 3,965 | 45,047 | 0.0 | 3,965 | 20,614 | 4 | 2 |
| hospital | [34] | 75 | 2,028 | 32,424 | 0.0 | 2,028 | 15,500 | 4 | 4 |
| student-sms | [31] | 429 | 733 | 46,138 | 0.0 | 733 | 846 | 40 | 2 |
| workplace | [33] | 92 | 1,431 | 9,827 | 0.0 | 1,431 | 7,121 | 4 | 5 |

Table 1: Overview of time series data and ground truth node classes used in the experiments.
Using a second-order De Bruijn graph, we compare the node classification performance of the DBGNN architecture against five baselines. The first three are standard (static) graph learning techniques, namely Graph Convolutional Networks (GCN) [28], DeepWalk [35], and node2vec [36]. We further use two recently proposed temporal graph embedding techniques. Embedding Variable Orders (EVO) [14] is a node representation learning framework that captures non-Markovian characteristics in dynamic graphs. Similar to our approach, EVO uses a higher-order network to generate time-aware node representations that can be used for downstream node classification. HONEM [20] is a higher-order network embedding approach that captures non-Markovian dependencies in time series data on graphs. This framework applies truncated SVD to a higher-order neighborhood matrix that accounts for the temporal order of interactions.
Addressing Q1, the results of our experiments on node classification are shown in Table 2. Since the classes in the empirical data sets are imbalanced, we use balanced accuracy and additionally report macro-averaged precision, recall, and f1-score for a 70-30 training-test split. We report the average performance across multiple splits. For DBGNN, GCN, DeepWalk, node2vec, and HONEM we performed 50 runs. Due to its larger computational complexity (and time constraints), we could only perform 10 runs for EVO. The standard deviations are included in the appendix. We trained node2vec, EVO, and DeepWalk with 80 walks of length 40 per node and a window size of 10. We obtained the embeddings using the word2vec implementation in [37]. For EVO, we use the average as the aggregator for the higher-order representations. To ensure the comparability of the results of GCN and DBGNN, we train both with the same number of convolutional layers, a learning rate of 0.001 for 5000 epochs, ELU [38] as activation function, and the Adam [39] optimiser. For DBGNN, we use SUM as the aggregation function $\mathcal{F}$. Since the data sets have no node features, we use a one-hot encoding of nodes as the feature matrix (and a one-hot encoding of higher-order nodes in the initial layer of the DBGNN). For all methods, we fix the dimensionality of the learned representations to $d = 16$, which is justified by the size of the graphs. We manually tuned the number of hidden dimensions of the first hidden layers of GCN and DBGNN, as well as the $p$ and $q$ parameters of EVO and node2vec. We report the results for the best combination of hyperparameters.
As expected, the results in Table 2 for the synthetic temp-clusters data set show that the three time-aware methods (EVO, HONEM, and DBGNN) perform considerably better than their static counterparts, which only "see" a random graph topology that does not allow node classes to be meaningfully assigned. Both EVO and our proposed DBGNN architecture are able to perfectly classify nodes in this data set. Interestingly, despite their good performance on the synthetic data set, the three time-aware methods show much higher variability on the empirical data sets. We find that DBGNN shows superior performance in terms of balanced accuracy, f1-macro, and recall-macro for all five empirical data sets, with a relative performance increase over the second-best method ranging from 1.55% to 28.16%. For precision-macro, DBGNN performs best in four of the five data sets. We attribute these results to the ability of our architecture to consider patterns in both the (static) graph topology and the causal topology, as well as to the supervised end-to-end training enabled by the GCN-based message passing.
To address Q2, we study visualizations of the hidden representations of higher- and first-order nodes generated by the DBGNN architecture for the synthetic temp-clusters data set, which exhibits three clear clusters in the causal topology. We use the hidden representations ${\overrightarrow{h}}_{v}^{b}$ generated by the bipartite layer of our DBGNN architecture, as defined in Section 3. We compare this to the representation generated in the last message passing layer of a GCN. Figure 3 in the appendix confirms that the DBGNN architecture learns meaningful latent space representations of nodes that incorporate temporal patterns.
| Dataset | Method | Balanced Accuracy | F1-score-macro | Precision-macro | Recall-macro |
| --- | --- | --- | --- | --- | --- |
| temp-clusters | DeepWalk | 32.47 | 30.39 | 32.25 | 32.47 |
| | Node2Vec ($p=1$, $q=4$) | 35.48 | 33.02 | 34.92 | 35.48 |
| | GCN (8, 32) | 33.52 | 12.5 | 8.61 | 33.52 |
| | EVO ($p=1$, $q=1$) | 100.0 | 100.0 | 100.0 | 100.0 |
| | HONEM | 54.94 | 53.5 | 58.16 | 54.94 |
| | DBGNN (16, 16) | 100.0 | 100.0 | 100.0 | 100.0 |
| | gain | 0% | 0% | 0% | 0% |
| high-school-2011 | DeepWalk | 55.25 | 54.02 | 60.45 | 55.25 |
| | Node2Vec ($p=1$, $q=4$) | 56.89 | 56.29 | 60.05 | 56.89 |
| | GCN (32, 4) | 50.06 | 40.27 | 33.99 | 50.06 |
| | EVO ($p=1$, $q=4$) | 57.21 | 56.28 | 62.09 | 57.21 |
| | HONEM | 54.24 | 53.08 | 56.44 | 54.24 |
| | DBGNN (32, 8) | 64.4 | 63.7 | 65.14 | 64.4 |
| | gain | 12.57% | 13.16% | 4.91% | 12.57% |
| high-school-2012 | DeepWalk | 59.46 | 59.6 | 71.71 | 59.46 |
| | Node2Vec ($p=1$, $q=4$) | 60.75 | 61.23 | 72.44 | 60.75 |
| | GCN (8, 32) | 58.03 | 56.39 | 59.16 | 58.03 |
| | EVO ($p=4$, $q=1$) | 57.98 | 57.5 | 69.42 | 57.98 |
| | HONEM | 53.16 | 51.7 | 56.59 | 53.16 |
| | DBGNN (4, 8) | 65.8 | 65.89 | 67.27 | 65.8 |
| | gain | 8.31% | 7.61% | -7.14% | 8.31% |
| hospital | DeepWalk | 47.18 | 44.18 | 43.91 | 47.18 |
| | Node2Vec ($p=1$, $q=4$) | 50.6 | 47.14 | 45.81 | 50.6 |
| | GCN (32, 32) | 49.48 | 44.62 | 43.55 | 49.48 |
| | EVO ($p=1$, $q=4$) | 36.34 | 36.44 | 42.1 | 36.34 |
| | HONEM | 46.17 | 43.13 | 44.45 | 46.17 |
| | DBGNN (32, 16) | 59.04 | 55.26 | 58.71 | 57.71 |
| | gain | 16.68% | 17.23% | 28.16% | 14.05% |
| student-sms | DeepWalk | 53.22 | 50.57 | 60.57 | 53.22 |
| | Node2Vec ($p=1$, $q=4$) | 53.22 | 50.97 | 58.56 | 53.22 |
| | GCN (4, 32) | 57.33 | 57.25 | 57.72 | 57.33 |
| | EVO ($p=4$, $q=1$) | 52.93 | 50.66 | 57.14 | 52.93 |
| | HONEM | 50.43 | 44.44 | 52.91 | 50.43 |
| | DBGNN (4, 4) | 60.6 | 60.89 | 62.55 | 60.6 |
| | gain | 5.7% | 6.36% | 3.27% | 5.7% |
| workplace | DeepWalk | 77.81 | 76.74 | 76.06 | 77.81 |
| | Node2Vec ($p=1$, $q=4$) | 78.0 | 77.01 | 76.38 | 78.0 |
| | GCN (32, 16) | 81.86 | 78.72 | 78.58 | 79.93 |
| | EVO ($p=1$, $q=4$) | 77.0 | 75.68 | 75.03 | 77.0 |
| | HONEM | 73.26 | 72.82 | 73.73 | 73.26 |
| | DBGNN (32, 8) | 83.13 | 81.06 | 81.52 | 81.75 |
| | gain | 1.55% | 2.97% | 3.74% | 2.28% |

Table 2: Results of node classification in six dynamic graphs for static graph learning techniques (DeepWalk, node2vec, GCN) and time-aware methods (HONEM, EVO) as well as the DBGNN architecture proposed in this work.
§ 5 CONCLUSION
In summary, we propose an approach to apply graph neural networks to high-resolution time series data that captures the temporal ordering of time-stamped edges in dynamic graphs. Our method is based on a novel combination of (i) a statistical approach to infer an optimal static higher-order De Bruijn graph model for the causal topology that is due to the temporal ordering of edges, (ii) gradient-based learning in a neural network architecture that performs neural message passing in the inferred higher-order De Bruijn graph, and (iii) an additional bipartite mapping layer that maps the learnt hidden representation of higher-order nodes to the original node space. Thanks to this approach, our architecture is able to generalize neural message passing to a static higher-order graph model that captures the causal topology of a dynamic graph, which can considerably deviate from what we would expect based on the mere (static) topology of edges. The results of our experiments demonstrate that the resulting architecture can considerably improve the performance of node classification in time series data, despite using message passing in a relatively simple static (augmented) graph. Bridging recent research on higher-order graph models in network science and deep learning in graphs $\left\lbrack {{13},{15},{23},{24}}\right\rbrack$ , our work contributes to the ongoing discussion about the need for augmented message passing schemes in data on graphs with complex characteristics [27].
papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_md/Initial_manuscript.md
# Taxonomy of Benchmarks in Graph Representation Learning
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in the selection and development of adequate graph benchmarks and in better-informed evaluation of future GNN methods. Finally, our approach and its implementation in the GTaxoGym package ${}^{1}$ are extendable to multiple graph prediction task types and future datasets.
## 1 Introduction
Machine learning for graph representation learning (GRL) has seen rapid development in recent years [27]. Originally inspired by the success of convolutional neural networks in regular Euclidean domains, thanks to their ability to leverage data-intrinsic geometries, classical graph neural network (GNN) models $\left\lbrack {{15},{35},{56}}\right\rbrack$ extend these principles to irregular graph domains. Further advances in the field have led to a wide selection of complex and powerful GNN architectures. Some models are provably more expressive than others [63, 43], can leverage multi-resolution views of graphs [41], or can account for implicit symmetries in graph data [9]. Comprehensive surveys of graph neural networks can be found in Bronstein et al. [8], Wu et al. [61], Zhou et al. [67].
Most graph-structured data encode information in graph structures and node features. The structure of each graph represents relationships (i.e., edges) between different nodes, while the node features represent quantities of interest at each individual node. For example, in citation networks, nodes represent papers and edges represent citations between the papers. On such networks, node features often capture the presence or absence of certain keywords in each paper, encoded in binary feature vectors. In graphs modeling social networks, each node represents a user, and the corresponding node features often include user statistics like gender, age, or binary encodings of personal interests.
Intuitively, the power of GNNs lies in relating local node-feature information to global graph structure information, typically achieved by applying a cascade of feature aggregation and transformation steps. In aggregation steps, information is exchanged between neighboring nodes, while transformation steps apply a (multi-layer) perceptron to feature vectors of each node individually. Such architectures are commonly referred to as Message Passing Neural Networks (MPNN) [24].
Historically, GNN methods have been evaluated on a small collection of datasets [44], many of which originated from the development of graph kernels. The limited quantity, size and variety of these datasets have rendered them insufficient to serve as distinguishing benchmarks [17, 46]. Therefore, recent work has focused on compiling a set of large(r) benchmarking datasets across diverse graph domains $\left\lbrack {{17},{31}}\right\rbrack$. Despite these efforts and the introduction of new datasets, it is still not well understood what aspects of a dataset most influence the performance of GNNs. Which is more important, the geometric structure of the graph or node features? Are long-range interactions crucial, or are short-range interactions sufficient for most tasks? This lack of understanding of the dataset properties and of their similarities makes it difficult to select a benchmarking suite that would enable comprehensive evaluation of GNN models. Even when an array of seemingly different datasets is used, they may be probing similar aspects of graph representation learning.
---
${}^{1}$ https://github.com/G-Taxonomy-Workgroup/GTaxoGym
---

Figure 1: Overview of our pipeline to taxonomize graph learning datasets.
Leveraging symmetries and other geometric priors in graph data is crucial for generalizable learning [9]. While invariance or equivariance to some transformations is inherent, invariance to others may only be empirically or partially apparent. Motivated by this observation, we propose to use the lens of empirical transformation sensitivity to gauge how task-related information is encoded in graph datasets, and subsequently to taxonomize their use as benchmarks in graph representation learning. Our approach is illustrated in Figure 1. Our contributions in this study are as follows:
1. We develop a graph dataset taxonomization framework that is extendable to both new datasets and evaluation of additional graph/task properties,
2. Using this framework, we provide the first taxonomization of GNN (and GRL) benchmarking datasets, collected from TUDatasets [44], OGB [31] and other sources,
3. Through the resulting taxonomy, we provide insights about existing datasets and guide better dataset selection in future benchmarking of GNN models.
## 2 Methods
As a proxy for invariance or sensitivity to graph perturbations, we study the changes in GNN performance on perturbed versions of each dataset. These perturbations are designed to eliminate or emphasize particular types of information embedded in the graphs. We define an empirical sensitivity profile of a dataset as a vector where each element is the performance of a GNN after a given perturbation, reported as a percentage of the network's performance on the original dataset. In particular, we use a set of 13 perturbations, visualized in Figure 2. Of these perturbations, 6 are designed to perturb node features, while keeping the graph structure intact, whereas the remaining 7 keep the node attributes the same, but manipulate the graph structure.
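Once the perturbed-dataset scores are available, computing the sensitivity profile itself is a simple normalization. A minimal sketch (hypothetical helper, not the GTaxoGym API):

```python
# Each entry of the sensitivity profile is a model's performance on a
# perturbed dataset, expressed as a percentage of its performance on the
# original (unperturbed) dataset.

def sensitivity_profile(base_score, perturbed_scores):
    """perturbed_scores: dict mapping perturbation name -> test score."""
    return {name: 100.0 * score / base_score
            for name, score in perturbed_scores.items()}

profile = sensitivity_profile(
    base_score=0.5,
    perturbed_scores={"NoNodeFtrs": 0.25, "NodeDeg": 0.375, "RandFtrs": 0.5},
)
print(profile)  # {'NoNodeFtrs': 50.0, 'NodeDeg': 75.0, 'RandFtrs': 100.0}
```

A low percentage for a given perturbation marks the dataset as sensitive to the information that perturbation destroys.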
For the purpose of these perturbations, we consider all graphs to be undirected and unweighted, and assume they all have node features, but not edge features. These assumptions hold for most datasets we use in this study. However, if necessary, we preprocess the data by symmetrizing each graph's adjacency matrix and dropping any edge attributes. Formally, let $G = \left( {V, E,\mathbf{X}}\right)$ be an undirected, unweighted, attributed graph with node set $V$ of cardinality $\left| V\right| = n$ , edge set $E \subset V \times V$ , and a matrix of $d$ -dimensional node features $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ . We let $\mathbf{M} \in {\mathbb{R}}^{n \times n}$ denote the adjacency matrix of each graph, where $\mathbf{M}\left( {u, v}\right) = 1$ if $\left( {u, v}\right) \in E$ and zero otherwise.
Several of our perturbations are based on spectral graph theory, which represents graph signals in a spectral domain analogous to classical Fourier analysis. We define the graph Laplacian $\mathbf{L} \mathrel{\text{:=}} \mathbf{D} - \mathbf{M}$ and the symmetric normalized graph Laplacian $\mathbf{N} \mathrel{\text{:=}} {\mathbf{D}}^{-\frac{1}{2}}\mathbf{L}{\mathbf{D}}^{-\frac{1}{2}} = \mathbf{I} - {\mathbf{D}}^{-\frac{1}{2}}\mathbf{M}{\mathbf{D}}^{-\frac{1}{2}}$ , where $\mathbf{D}$ is the diagonal degree matrix. Both $\mathbf{L}$ and $\mathbf{N}$ are positive semi-definite and admit orthonormal eigendecompositions $\mathbf{L} = \mathbf{\Phi }\mathbf{\Lambda }{\mathbf{\Phi }}^{\top }$ and $\mathbf{N} = \widetilde{\mathbf{\Phi }}\widetilde{\mathbf{\Lambda }}{\widetilde{\mathbf{\Phi }}}^{\top }$ . By convention, we order the eigenvalues and corresponding eigenvectors ${\left\{ \left( {\lambda }_{i},{\phi }_{i}\right) \right\} }_{0 \leq i \leq n - 1}$ of $\mathbf{L}$ (and similarly for $\mathbf{N}$ ) in ascending order $0 = {\lambda }_{0} \leq {\lambda }_{1} \leq \cdots \leq {\lambda }_{n - 1}$ . The eigenvectors ${\left\{ {\phi }_{i}\right\} }_{0 \leq i \leq n - 1}$ constitute a basis of the space of graph signals and can be considered as generalized Fourier modes. The eigenvalues ${\left\{ {\lambda }_{i}\right\} }_{0 \leq i \leq n - 1}$ characterize the variation of these Fourier modes over the graph and can be interpreted as (squared) frequencies.
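As a small illustration of these definitions (our own sketch, not part of the paper's pipeline), the two Laplacians and their spectra can be computed for a three-node path graph:

```python
import numpy as np

# Laplacian L = D - M, normalized Laplacian N, and the eigendecomposition
# of L with eigenvalues in ascending order (generalized Fourier modes).

M = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # path graph on 3 nodes
D = np.diag(M.sum(axis=1))
L = D - M
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
N = np.eye(3) - D_inv_sqrt @ M @ D_inv_sqrt

lam, phi = np.linalg.eigh(L)            # eigenvalues in ascending order
# lambda_0 = 0 with a constant eigenvector, as for any connected graph
```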

Figure 2: Node feature and graph structure perturbations of the first graph in ENZYMES. The color coding of nodes illustrates their feature values, except (k-n) where the fragment assignment is shown.
### 2.1 Node Feature Perturbations
We first consider two perturbations that alter local node features, setting them either to a fixed constant (w.l.o.g., one) for all nodes, or to a one-hot encoding of the degree of the node. We refer to these perturbations as NoNodeFtrs (since constant node features carry no additional information) and NodeDeg, respectively. In addition, we consider a random node feature perturbation (RandFtrs) by sampling a one-dimensional feature for each node uniformly at random within $\left\lbrack {-1,1}\right\rbrack$ . Sensitivity to these perturbations, exhibited by a large decrease in predictive performance, may indicate that a dataset (or task) is dominated by highly informative node features.
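The three local perturbations above can be sketched as follows (function names are ours, not the GTaxoGym API):

```python
import numpy as np

# NoNodeFtrs, NodeDeg, and RandFtrs perturbations, applied to an adjacency
# matrix M and a node feature matrix X.

def no_node_ftrs(X):
    return np.ones((X.shape[0], 1))                  # constant feature

def node_deg(M, max_deg):
    deg = M.sum(axis=1).astype(int)
    return np.eye(max_deg + 1)[deg]                  # one-hot degree encoding

def rand_ftrs(X, rng):
    return rng.uniform(-1, 1, size=(X.shape[0], 1))  # uniform in [-1, 1]

M = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.arange(6.0).reshape(3, 2)
rng = np.random.default_rng(0)
print(node_deg(M, 2))  # node 0 has degree 2; nodes 1 and 2 have degree 1
```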
We also develop spectral node feature perturbations. As in Euclidean settings, the Fourier decomposition can be used to decompose graph signals into a set of canonical signals, called Fourier modes, which are organized according to increasing variation (or frequency). In Euclidean Fourier analysis, these modes are sinusoidal waves oscillating at different frequencies. A standard practice in audio signal processing is to remove noise from a signal by identifying and removing certain Fourier modes or frequency bands. We generalize this technique to graph datasets and systematically remove certain graph Fourier modes to probe the importance of the corresponding frequency bands.
In this perturbation, we use the frequencies derived from the symmetric normalized graph Laplacian $\mathbf{N}$ and split them into three roughly equal-sized frequency bands (low, mid, high), i.e., bins of consecutive eigenvalues. To assess the importance of each frequency band, we then apply hard band-pass filtering to the graph signals (node feature vectors), i.e., we project the signals onto the span of the selected Fourier modes. More specifically, for each band, we let ${\mathbf{I}}_{\text{band }}$ be a diagonal matrix with diagonal elements equal to one if the corresponding eigenvalue is in the band, and zero otherwise. Then, the hard band-pass filtered signal is computed as
$$
\mathbf{X}_{\text{band}} = \widetilde{\mathbf{\Phi}}\,\mathbf{I}_{\text{band}}\,\widetilde{\mathbf{\Phi}}^{\top}\mathbf{X}. \tag{1}
$$
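A direct implementation of this hard band-pass filtering might look as follows (a dense-matrix sketch assuming a symmetric adjacency matrix with no isolated nodes; `band_pass_features` is our illustrative name):

```python
import numpy as np

def band_pass_features(A, X, band):
    """Hard band-pass filtering (Eq. 1): project the node features X onto
    the span of the graph Fourier modes in the selected frequency band."""
    d = A.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    N = np.eye(len(A)) - D_isqrt @ A @ D_isqrt     # symmetric normalized Laplacian
    lam, Phi = np.linalg.eigh(N)                   # eigenvalues in ascending order
    bins = np.array_split(np.arange(len(lam)), 3)  # three roughly equal bands
    idx = {"low": bins[0], "mid": bins[1], "high": bins[2]}[band]
    mask = np.zeros(len(lam))
    mask[idx] = 1.0                                # diagonal of I_band
    return Phi @ (mask[:, None] * (Phi.T @ X))     # Phi I_band Phi^T X
```

Because the three bands partition the spectrum, the three filtered signals sum back to the original features.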
The above band-pass filtering perturbation enables a precise selection of the frequency bands. However, it requires a full eigendecomposition of the normalized graph Laplacian, which is impractical for large graphs. We therefore provide an alternative approach based on wavelet bank filtering [13]. This leverages the fact that polynomial filters $h$ of the normalized graph Laplacian directly transform the spectrum via $h\left( \mathbf{N}\right) = \widetilde{\mathbf{\Phi }}h\left( \widetilde{\mathbf{\Lambda }}\right) {\widetilde{\mathbf{\Phi }}}^{\top }$, yielding the frequency response $h\left( \lambda \right)$ for any eigenvalue $\lambda$ of $\mathbf{N}$. In practice, this is done by working with the symmetrized diffusion matrix
$$
\mathbf{T} = \frac{1}{2}\left(\mathbf{I} + \mathbf{D}^{-\frac{1}{2}}\mathbf{M}\mathbf{D}^{-\frac{1}{2}}\right) = \frac{1}{2}\left(2\mathbf{I} - \mathbf{N}\right). \tag{2}
$$
By construction, $\mathbf{T}$ admits the same eigenbasis as $\mathbf{N}$, but its eigenvalues are mapped from $\left\lbrack 0,2\right\rbrack$ to $\left\lbrack 0,1\right\rbrack$ via the frequency response $h\left( \lambda \right) = 1 - \lambda /2$. As a result, large eigenvalues are mapped to small values (and vice versa). Next, we construct diffusion wavelets [15] that consist of differences of dyadic powers ${2}^{k}, k \in {\mathbb{N}}_{0}$, of $\mathbf{T}$, i.e., ${\mathbf{\Psi }}_{k} = {\mathbf{T}}^{{2}^{k - 1}} - {\mathbf{T}}^{{2}^{k}}$, which act as band-pass filters on the signal. Intuitively, this operator "compares" two neighborhoods of different sizes (radius ${2}^{k - 1}$ and ${2}^{k}$) at each node. Diffusion wavelets are usually maintained in a wavelet bank ${\mathcal{W}}_{K} = {\left\{ {\mathbf{\Psi }}_{k},{\mathbf{\Phi }}_{K}\right\} }_{k = 0}^{K}$, which contains the additional high-pass ${\mathbf{\Psi }}_{0} = \mathbf{I} - \mathbf{T}$ and low-pass ${\mathbf{\Phi }}_{K} = {\mathbf{T}}^{{2}^{K}}$ filters. In our experiments, we choose $K = 1$, resulting in the following low-, mid-, and high-pass filtered node features:
$$
\mathbf{X}_{\text{high}} = \left(\mathbf{I} - \mathbf{T}\right)\mathbf{X}, \qquad \mathbf{X}_{\text{mid}} = \left(\mathbf{T} - \mathbf{T}^{2}\right)\mathbf{X}, \qquad \mathbf{X}_{\text{low}} = \mathbf{T}^{2}\mathbf{X}. \tag{3}
$$
These filters correspond to the frequency responses ${h}_{\text{high}}\left( \lambda \right) = \lambda /2$, ${h}_{\text{mid}}\left( \lambda \right) = \left( 1 - \lambda /2\right) - {\left( 1 - \lambda /2\right) }^{2}$, and ${h}_{\text{low}}\left( \lambda \right) = {\left( 1 - \lambda /2\right) }^{2}$. The low-pass filter therefore preserves low-frequency information while suppressing high-frequency information, whereas the high-pass filter does the opposite. The mid-pass filter attenuates all frequencies, but preserves far more middle-frequency information than high- or low-frequency information.
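The $K = 1$ wavelet bank of Eq. (3) can be sketched using only sparse matrix products (our illustrative implementation, not the authors' code):

```python
import numpy as np
from scipy import sparse

def wavelet_filtered(A, X):
    """Wavelet bank filtering with K = 1 (Eq. 3): returns the high-,
    mid-, and low-pass filtered node features using only sparse
    matrix multiplications, avoiding any eigendecomposition."""
    A = sparse.csr_matrix(A)
    d = np.asarray(A.sum(axis=1)).ravel()
    D_isqrt = sparse.diags(1.0 / np.sqrt(d))
    T = 0.5 * (sparse.eye(A.shape[0]) + D_isqrt @ A @ D_isqrt)  # Eq. 2
    TX = T @ X        # one diffusion step
    T2X = T @ TX      # two diffusion steps
    return X - TX, TX - T2X, T2X   # (I - T)X, (T - T^2)X, T^2 X
```

Since $h_{\text{high}} + h_{\text{mid}} + h_{\text{low}} = 1$, the three outputs telescope back to the original features.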
This filtering may therefore be interpreted as an approximation of the hard band-pass filtering discussed above. From the spatial message-passing perspective, low-pass filtering is equivalent to local averaging of the node features, which has profound implications for the homophilic and heterophilic characteristics of the datasets (Sec. 3.2). Finally, since the computations in (3) can be carried out via sparse matrix multiplications, they scale well to large graphs. We therefore utilize wavelet bank filtering for the datasets with larger graphs considered in Sec. 3.2, while for the smaller graphs considered in Sec. 3.1 we employ the direct band-pass filtering approach.
### 2.2 Graph Structure Perturbations
The following perturbations act on the graph structure by altering the adjacency matrix. By removing all edges (NoEdges) or making the graph fully connected (FullyConn), we eliminate the structural information completely and essentially turn the graph into a set; the two perturbations differ in whether the nodes are then processed independently or jointly. Due to computational limitations, FullyConn is only applied to the inductive datasets in Sec. 3.1. Furthermore, we consider a degree-preserving random edge rewiring perturbation (RandRewire): in each step, we randomly sample a pair of edges and randomly exchange their end nodes, repeating this process without replacement until 50% of the edges have been randomly rewired.
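The RandRewire step can be sketched as follows (a simplified version for undirected simple graphs; swaps that would create self-loops or duplicate edges are rejected and re-sampled, and the 50% bookkeeping is approximate):

```python
import random

def rand_rewire(edges, frac=0.5, seed=0):
    """Degree-preserving random rewiring: repeatedly pick two edges
    (u, v), (x, y) and exchange endpoints to (u, y), (x, v)."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    target = int(frac * len(edges))
    swapped, tries = 0, 0
    while swapped < target and tries < 100 * len(edges):
        tries += 1
        i, j = rng.sample(range(len(edges)), 2)
        (u, v), (x, y) = edges[i], edges[j]
        if len({u, v, x, y}) < 4:          # would create a self-loop
            continue
        if frozenset((u, y)) in present or frozenset((x, v)) in present:
            continue                        # would create a duplicate edge
        present -= {frozenset((u, v)), frozenset((x, y))}
        present |= {frozenset((u, y)), frozenset((x, v))}
        edges[i], edges[j] = (u, y), (x, v)
        swapped += 2                        # two edges rewired per swap
    return edges
```

Because each swap only exchanges endpoints, every node keeps its original degree.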
To inspect the importance of local vs. global graph structure, we designed the Frag- $k$ perturbations, which randomly partition the graph into connected components consisting of nodes whose distance to a seed node is less than $k$ . Specifically, we randomly draw one seed node at a time and extract its $k$ -hop neighborhood by eliminating all edges between this new fragment and the rest of the graph; we repeat this process on the remaining graph until the whole graph is processed. A smaller $k$ implies smaller components, and hence discards the global structure and long-range interactions.
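A minimal sketch of Frag-$k$ (adjacency given as neighbor lists; each fragment collects the still-unassigned nodes within distance less than $k$ of a randomly drawn seed):

```python
import random
from collections import deque

def frag_k(adj, k, seed=0):
    """Frag-k sketch: repeatedly draw a random unassigned seed node and
    assign every remaining node within distance < k of it to a new
    fragment; edges between fragments are implicitly removed."""
    rng = random.Random(seed)
    n = len(adj)
    fragment = [-1] * n          # fragment id per node
    remaining = set(range(n))
    fid = 0
    while remaining:
        s = rng.choice(sorted(remaining))
        dist = {s: 0}            # BFS restricted to unassigned nodes
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == k - 1:
                continue         # stop expanding at distance k-1
            for v in adj[u]:
                if v in remaining and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for u in dist:
            fragment[u] = fid
            remaining.discard(u)
        fid += 1
    return fragment
```

With $k = 1$ every node becomes its own fragment; for $k$ larger than the diameter, a connected graph stays in one piece.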
Graph fragmentations can also be constructed using spectral graph theory. In our taxonomization, we adopt one such method, which we refer to as Fiedler fragmentation (FiedlerFrag) (see [33] and the references therein). When the graph $G$ is connected, ${\phi }_{0}$, the eigenvector of the graph Laplacian $\mathbf{L}$ corresponding to ${\lambda }_{0} = 0$, is constant. The eigenvector ${\phi }_{1}$ corresponding to the next smallest eigenvalue, ${\lambda }_{1}$, is known as the Fiedler vector [21]. Since ${\phi }_{0}$ is constant, ${\phi }_{1}$ has zero average, which motivates partitioning the graph into two sets of vertices: one where ${\phi }_{1}$ is positive and one where it is negative. We refer to this process as binary Fiedler fragmentation. This heuristic is used to construct the ratio cut for a connected graph [26]. The ratio cut partitions a connected graph into two disjoint connected components $V = U \cup W$ such that the objective $\left| {E\left( {U, W}\right) }\right| /\left( {\left| U\right| \cdot \left| W\right| }\right)$ is minimized, where $E\left( {U, W}\right) \mathrel{\text{:=}} \{ \left( {u, w}\right) \in E : u \in U, w \in W\}$ is the set of edges removed when fragmenting $G$ accordingly. The objective thus combines the min-cut criterion (numerator) with a preference for balanced partitions (denominator).
FiedlerFrag is based on iteratively applying binary Fiedler fragmentation. In each step, we separate the graph into its connected components and apply binary Fiedler fragmentation to the largest component. We repeat this process until either we reach 200 iterations or the size of the largest connected component falls below 20. In contrast to the random fragmentation Frag-$k$, this perturbation preserves densely connected regions of the graph and eliminates the connections between them. Thus, FiedlerFrag tests the importance of inter-community message flow. Due to computational limits, we only apply FiedlerFrag to the inductive datasets in Sec. 3.1 for which this computation is feasible.
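A single binary Fiedler fragmentation step can be sketched as follows (dense Laplacian for clarity; FiedlerFrag would apply this repeatedly to the largest connected component):

```python
import numpy as np

def binary_fiedler_split(A, nodes):
    """Split a connected subgraph (given by `nodes`) into two vertex
    sets according to the sign of the Fiedler vector, i.e. the
    eigenvector of the second-smallest Laplacian eigenvalue."""
    nodes = np.asarray(nodes)
    sub = A[np.ix_(nodes, nodes)]
    L = np.diag(sub.sum(axis=1)) - sub   # graph Laplacian of the subgraph
    _, phi = np.linalg.eigh(L)           # eigenvectors, ascending eigenvalues
    fiedler = phi[:, 1]
    return nodes[fiedler >= 0], nodes[fiedler < 0]
```

On a graph made of two triangles joined by a single bridge edge, the sign split recovers the two triangles, i.e. it cuts the bottleneck.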
### 2.3 Data-driven Taxonomization by Hierarchical Clustering
To obtain a systematic classification of the graph datasets, we use Ward's method [58] for hierarchical clustering of their sensitivity profiles. The sensitivity profiles are established empirically by contrasting the performance of a GNN model on a perturbed dataset with its performance on the original dataset. To quantify this performance change, we use the ${\log }_{2}$-transformed ratio of test AUROC (area under the ROC curve). A sensitivity profile is thus a 1-D vector with as many elements as there are perturbation experiments. See Figure 1 and Appendix A for further details.

Figure 3: Visualization of (a) inductive and (b) transductive datasets based on PCA of their perturbation sensitivity profiles according to a GCN model. The datasets are labeled according to their taxonomization by hierarchical clustering, shown in Figures 4 and 6, which corroborates the clustering emerging in the PCA plots. The bottom part shows the loadings of the first two principal components and (in parentheses) the percentage of variance explained by each of them.
In order to generate sensitivity profiles, we must select suitable GNN models based on several practical considerations: (i) the model has to be expressive enough to efficiently leverage the aspects of the node features and graph structure that we perturb, otherwise our analysis cannot uncover reliance on these properties; (ii) the model needs to be general enough to be applicable to a wide variety of datasets, avoiding dataset-specific adjustments that would make the profiles incomparable between datasets. We therefore did not aim for specialized models that maximize performance, but rather for models that (a) achieve at least baseline performance comparable to published works over all datasets, (b) have manageable computational complexity to facilitate large-scale experimentation, and (c) use well-established and theoretically well-understood architectures.
With these criteria in mind, we focused on two popular MPNN models in our analysis: GCN [35] and GIN [63]. The original GCN serves as an ideal starting point as its abilities and limitations are well-understood. However, we also wanted to perform taxonomization through a provably more expressive and recent method, which motivated our selection of GIN as the second architecture. We emphasize that the main focus here is not to provide a benchmarking of GNN models per se, but rather to address the taxonomization of graph datasets (and accompanying tasks) used in such benchmarks. Nevertheless, we have also generated sensitivity profiles by additional models in order to comparatively demonstrate the robustness of our approach: 2-Layer GIN, ChebNet [15], GatedGCN [7] and GCN II [11]; see Figure 5.
## 3 Results
Each of the 48 datasets we consider is equipped with either a node classification or graph classification task. In the case of node classification, we further differentiate between the inductive setting, in which learning is done on a set of graphs and the generalization occurs from a training set of graphs to a test set, and the transductive setting, in which learning is done in one (large) graph and the generalization occurs between subsets of nodes in this graph. Graph classification tasks, by contrast, always appear in an inductive setting. The only major difference between graph classification and inductive node classification is that prior to final prediction, the hidden representations of all nodes are pooled into a single graph-level representation. In the following two subsections, we provide an analysis of the sensitivity profiles for datasets with inductive and transductive tasks.
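The pooling step that distinguishes graph classification from inductive node classification can be sketched as follows (`mean_pool` is our illustrative helper; real implementations typically also offer sum or max pooling):

```python
import numpy as np

def mean_pool(H, graph_id):
    """Pool node representations H (n_nodes x d) into one vector per
    graph; graph_id[i] gives the graph that node i belongs to."""
    graph_id = np.asarray(graph_id)
    n_graphs = int(graph_id.max()) + 1
    sums = np.zeros((n_graphs, H.shape[1]))
    np.add.at(sums, graph_id, H)                       # scatter-add per graph
    counts = np.bincount(graph_id, minlength=n_graphs)
    return sums / counts[:, None]
```

The pooled graph-level representations are then fed to the final classifier, exactly as node-level representations would be in the node classification setting.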

Figure 4: Taxonomy of inductive graph learning datasets via graph perturbations. For each dataset and perturbation combination, we show the GCN model performance relative to its performance on the unmodified dataset.
### 3.1 Taxonomy of Inductive Benchmarks
Datasets. We examine a total of 23 datasets, 20 of which are equipped with a graph-classification task (inductive by nature) and the other three are equipped with an inductive node-classification task. Of these datasets, 17 are derived from real-world data, while the other six are synthetically generated.
For real-world data, we consider several domains. Biochemistry tasks are the most ubiquitous, including compound classification based on effects on cancer or HIV inhibition (NCI1 & NCI109 [57], ogbg-molhiv [31]), protein-protein interaction (PPI) [68, 28], multilabel compound classification based on toxicity on biological targets (ogbg-moltox21 [31]), and multiclass classification of enzymes (ENZYMES [31]). We also consider superpixel-based graph classification as an extension of image classification (MNIST & CIFAR10 [17]), collaboration datasets (IMDB-BINARY & COLLAB [64]), and social graphs (REDDIT-BINARY & REDDIT-MULTI-5K [64]).
For synthetic data, we have a concrete understanding of their graph domain properties and how these properties relate to the prediction task. This allows us to derive a deeper understanding of their sensitivity profiles. The six synthetic datasets in our study make use of a varied set of graph generation algorithms. Small-world [65] is based on graph generation with the Watts-Strogatz (WS) model; the task is to classify graphs based on average path length. Scale-free [65] retains the same task definition, but the graph generation algorithm is an extension of the Barabási-Albert (BA) model proposed by Holme and Kim [30]. PATTERN and CLUSTER are node-level classification tasks generated with stochastic block models (SBM) [29]. Synthie [42] graphs are derived by first sampling graphs from the well-known Erdős-Rényi (ER) model, then deriving each class of graphs by a specific graph surgery and sampling node features from a distinct distribution for each class. Similarly, SYNTHETICnew [18] graphs are generated from a random graph, where different classes are formed by specific modifications to the original graph structure and node features. Further details of dataset definitions and synthetic graph generation algorithms are provided in Appendix C.
Insights. Here we itemize the main insights into inductive datasets. Our full taxonomy is shown in Figures 4 and 3a, with a detailed analysis of individual clusters given in Appendix B.1.
- Three distinct groups of datasets. We identify a categorization into three dataset clusters I-$\{1,2,3\}$ that emerges from both the hierarchical clustering and PCA. The datasets in I-$\{1,2\}$ exhibit stronger node feature dependency and do not encode crucial information in the graph structure. The main differentiating factor between I-1 and I-2 is their relative sensitivity to node feature perturbations, in particular how well NodeDeg can substitute for the original node features. On the other hand, datasets in I-3 rely considerably more on graph structure for correct task prediction. This is also reflected by the first two principal components (Figure 3a), where PC1 approximately corresponds to structural perturbations and PC2 to node feature perturbations.
- No clear clustering by dataset domain. While datasets that are derived in a similar fashion cluster together (e.g., REDDIT-* datasets), in general, each of the three clusters contains datasets from a variety of application domains. Not all molecular datasets behave alike; e.g., ogbg-mol* datasets in I-2 considerably differ from NCI* datasets in I-3.
- Synthetic datasets do not fully represent real-world scenarios. CLUSTER, SYNTHETICnew, and PATTERN lie at the periphery of the PCA embeddings, suggesting that existing synthetic datasets do not resemble the type of complexity encountered in real-world data. Hence, one should use synthetic datasets in conjunction with real-world datasets to comprehensively evaluate GNN performance rather than solely relying on synthetic ones.
- Representative set. One can now select a representative subset of all datasets to cover the observed heterogeneity among the datasets. Our recommendation: CIFAR10 from I-1; D&D, ogbg-molhiv from I-2; NCI1, COLLAB, REDDIT-MULTI-5K, CLUSTER from I-3.
- Robustness w.r.t. GNN choice. In addition to GCN, we have performed our perturbation analysis w.r.t. GIN [63], 2-layer GIN, ChebNet [15], GatedGCN [7] and GCN II [11]. These models were selected to cover a variety of inductive model biases: GIN is provably 1-WL expressive, ChebNet uses a higher-order approximation of the Laplacian, GatedGCN employs gating akin to attention, and GCN II leverages skip connections and identity mapping to alleviate oversmoothing. We also tested a 2-layer GIN to probe robustness to the number of message-passing layers. The taxonomies w.r.t. the other models (Figure B.1) are congruent with that of GCN. Given the differing inductive biases and representational capacities, some differences in the sensitivity profiles are not only expected but desirable for validating their roles in benchmarking. The resulting profiles can be used for a detailed comparative analysis of these models, but the overall conclusions remain consistent. This consistency is further validated by our correlation analysis amongst these models, shown in Figure 5. The Pearson correlation coefficients of all pairs are above 90%, implying that our taxonomy is sufficiently robust w.r.t. the choice of GNN and the number of layers.
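The correlation analysis in Figure 5 amounts to computing the Pearson correlation between the profile vectors produced by two models; with hypothetical profile values:

```python
import numpy as np

# Hypothetical sensitivity profiles (log2 AUROC ratios) produced by two
# different GNN backbones over the same set of perturbation experiments.
profile_gcn = np.array([-0.60, -0.10, 0.00, -0.40, -0.02])
profile_gin = np.array([-0.55, -0.12, 0.01, -0.35, -0.05])

# Pearson correlation coefficient between the two profiles.
r = np.corrcoef(profile_gcn, profile_gin)[0, 1]
```

A coefficient close to 1 indicates that the two models agree on which perturbations matter for the dataset.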

Figure 5: Pearson correlation between profiles derived by six GNN models.
### 3.2 Taxonomy of Transductive Benchmarks
Datasets. We selected a wide variety of 25 transductive datasets with node classification tasks, including citation networks, social networks, and other networks derived from web pages (see Appendix C). In citation networks, such as CitationFull (CF) [5], nodes correspond to papers and edges to citations between them. In networks derived from web pages, like WikiNet [48], Actor [48], and WikiCS [40], edges correspond to hyperlinks between pages. In social networks, like Deezer (DzEu) [50], LastFM (LFMA) [50], Twitch [49], Facebook (FBPP) [49], Github [49], and Coau [52], nodes and edges are based on a type of relationship, such as mutual friendship or co-authorship. Flickr [66] and Amazon [52] are constructed from other notions of similarity between entities, such as co-purchasing and image property similarities. WebKB [48] contains networks of university web pages connected via hyperlinks. It is an example of a heterophilic dataset [45], since immediate neighbors do not necessarily share the same label (which corresponds to a user's role, such as faculty or graduate student). By contrast, Cora, CiteSeer, and PubMed are known to be homophilic datasets, where nodes within a neighborhood are likely to share the same label. In fact, no less than 60% of nodes in these networks have neighborhoods that share the same node label as the central node [40].
Insights. Below we list the main insights into transductive graph datasets and their taxonomy (Figures 6 and 3b). We refer the reader to Appendix B.2 for the analysis of individual clusters.

Figure 6: Taxonomization of transductive datasets based on sensitivity profiles w.r.t. a GCN model.
- Transductive datasets are uniformly insensitive to structural perturbations. Sensitivity profiles of all transductive datasets show high robustness to all graph structure perturbations. This is in stark contrast with the inductive datasets, where the largest cluster I-3 is defined by high sensitivity to structural perturbations. The graph connectivity may not be vital to every dataset/task; e.g., in WikiCS, word embeddings of Wikipedia pages may be sufficient for categorization without hyperlinks. While the observation that no dataset significantly depends on structural information is startling, it is corroborated by reports that MLPs or similar models augmented with label propagation outperform GNNs on several of these transductive datasets [23, 32].
- Three distinct groups of datasets. The transductive datasets are also categorized into three clusters, T-$\{1,2,3\}$. T-1 consists of heterophilic datasets, such as WebKB and Actor [45, 39]. These are well separated from the others, as seen in the right half of the PCA plot (Figure 3b), primarily via PC1, and are characterized by a performance drop due to removal of the original node features (NoNodeFtrs, RandFtrs) and their replacement by node degrees (NodeDeg). T-3 is indifferent to both node feature and structure removal, implying redundancies between node features and graph structure for their tasks. T-2 datasets, on the other hand, experience significant performance degradation under NoNodeFtrs and RandFtrs, yet these drops are recovered under NodeDeg. This indicates that T-2 datasets have tasks for which structural summary information is sufficient, perhaps due to homophily.
- Representative set. Many datasets have very close sensitivity profiles; thus, also factoring in graph size and original AUROC (to avoid saturated datasets), we make the following recommendation: WebKB-Wis, Actor from T-1; WikiNet-cham, WikiCS, Flickr from T-2; WikiNet-squir, Twitch-EN, GitHub from T-3.
## 4 Discussion
Our results quantify the extent to which graph features or structures are more important for the downstream tasks, an important question brought up in classical works on graph kernels [37, 51]. We observed that more than half of the datasets contain rich node features. On average, excluding these features reduces GNN prediction performance more than excluding the entire graph structures, especially for transductive node-level tasks. Furthermore, low-frequency information in node features appears to be essential in most datasets that rely on node features. Historically, most graph data aimed to capture closeness among entities, which has prompted development of local aggregation approaches, such as label propagation, personalized page rank, and diffusion kernels [36, 14], all of which share a common principle of low pass filtering. High-frequency information, on the other hand, may be important in recently emerging application areas, such as combinatorial optimization, logical reasoning or biochemical property prediction, which require complex non-local representations.
Further, despite the recent interest in developing new methods that can leverage long-range dependencies and heterophily, adequate benchmarking datasets remain scarce or not readily accessible. Meanwhile, some recent efforts, such as GraphWorld [46], aim to comprehensively profile a GNN's performance using a collection of synthetic datasets that cover an entire parametric space. Notably, our analysis demonstrates that synthetic tasks do not fully resemble the complexity of real-world applications. Hence, benchmarking based purely on synthetic datasets should be interpreted with caution, as the behavior might not be representative of real-world scenarios.
As a comprehensive benchmarking framework, our work provides several potential use cases beyond the taxonomy analysis presented here. One such usage is understanding the characteristics of any new datasets and how they are related to existing ones. For example, DeezerEurope (DzEu) is a relatively new dataset [50] that is less commonly benchmarked and studied than the other datasets we consider. The inclusion of DzEu in T-1 suggested its heterophilic nature, which indeed has been recently demonstrated [38]. On the other hand, since the sensitivity profiles naturally suggest the invariances that are important for different datasets from a practical standpoint, they could provide valuable guidance to the development of self-supervised learning and data augmentations for GNNs [62].
Finally, we observed that the overall patterns in sensitivity profiles remain similar regardless of whether we used GCN, GIN, or the other four models to derive them. Subtle differences in sensitivity profiles w.r.t. different GNN models are not only expected but also desired when comparing models that have distinct levels of expressivity. While we expect overall patterns to be similar, more expressive models should provide enhanced resolution. One could then contrast taxonomization w.r.t. first-order GNNs (such as those we used) with more expressive higher-order GNNs, Transformer-based models with global attention, and others. We hope our work will also inspire future work to empirically validate the expressivity of new graph learning methods in this vein, beyond classical benchmarking.
Limitations and Future Work. Our perturbation-based approach is fundamentally limited in that we cannot test the significance of a property that we cannot perturb or that the reference GNN model cannot capture. Therefore, designing more sophisticated perturbation strategies to gauge specific relations could bring further insight into the datasets and GNN models alike. New perturbations may gauge the usefulness of geometric substructures such as cycles [3] or the effects of graph bottlenecks, e.g., by rewiring graphs to modify their "curvatures" [55]. Other perturbations could include graph sparsification (edge removal) [53] and graph coarsening (edge contraction) [10, 4].
A number of OGB node-level datasets are not included in this study due to the memory cost of typical MPNNs. Conducting an analysis based on recent scalable GNN models [20] would be an interesting avenue for future research. Further, we only considered classification tasks, omitting regression tasks, as their evaluation metrics are not easily comparable. One way to circumvent this issue would be to quantize regression tasks into classification tasks by binning their continuous targets. Additionally, we disregarded edge features in the two OGB molecular datasets we used. In future work, edge features could be leveraged by an edge-feature-aware generalization of MPNNs; the importance of edge features could then be analyzed by introducing new edge-feature perturbations. We also limited our analysis to node-level and graph-level tasks, but this framework could be further extended to link-prediction or edge-level tasks. While our perturbations could be used in this new scenario as well, new perturbations, such as the above-mentioned graph sparsification, would need to be considered. Similarly, hallmark models for link and relation prediction beyond MPNNs should be considered.
## 5 Conclusion
We provide a systematic data-driven approach for taxonomizing a large collection of graph datasets - the first study of its kind. The core principle of our approach is to gauge the essential characteristics of a given dataset with respect to its accompanying prediction task by inspecting the downstream effects of perturbing its graph data. The resulting sensitivities to a diverse set of perturbations serve as "fingerprints" that allow us to identify datasets with similar characteristics. We derive several insights into the common benchmarks currently used in the field of graph representation learning, and make recommendations on the selection of representative benchmarking suites. Our analysis also lays a foundation for evaluating new benchmarking datasets that will likely emerge in the field.
## References
[1] A.K. Debnath, R.L. Lopez de Compadre, G. Debnath, A.J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786-797, 1991. 21, 22
[2] U. Alon and E. Yahav. On the bottleneck of graph neural networks and its practical implications, 2021. 19
[3] B. Bevilacqua, F. Frasca, D. Lim, B. Srinivasan, C. Cai, G. Balamurugan, M.M. Bronstein, and H. Maron. Equivariant subgraph aggregation networks, 2021. 9
[4] C. Bodnar, C. Cangea, and P. Liò. Deep graph mapper: Seeing graphs through the neural lens. Frontiers in Big Data, 4, June 2021. 9
[5] A. Bojchevski and S. Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. In Proc. of ICLR, 2018. 7, 22, 23
[6] K.M. Borgwardt, C.S. Ong, S. Schönauer, SVN Vishwanathan, A.J. Smola, and H. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21:i47-i56, 2005. 20, 22
[7] X. Bresson and T. Laurent. Residual gated graph convnets. ICLR, 2018. 5, 7
[8] M.M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, Jul 2017. ISSN 1558-0792. 1
[9] M.M. Bronstein, J. Bruna, T. Cohen, and P. Veličković. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, 2021. 1, 2
[10] N. Brugnone, A. Gonopolskiy, M.W. Moyle, M. Kuchroo, D. Dijk, K.R. Moon, D. Colon-Ramos, G. Wolf, M.J. Hirn, and S. Krishnaswamy. Coarse graining of data via inhomogeneous diffusion condensation. IEEE Big Data, Dec 2019. 9
[11] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks. In Proceedings of the 37th International Conference on Machine Learning, 2020. 5, 7
[12] W. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C. Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In Proc. of the 25th SIGKDD, 2019. 19
[13] R.R. Coifman and M. Maggioni. Diffusion wavelets. Applied and computational harmonic analysis, 21(1):53-94, 2006. 3
[14] L. Cowen, T. Ideker, B.J. Raphael, and R. Sharan. Network propagation: a universal amplifier of genetic associations. Nat. Rev. Gene., 18(9):551-562, 2017. 8
[15] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in NeurIPS, volume 29, pages 3844-3852, 2016. 1, 3, 5, 7
[16] P.D. Dobson and A.J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. J. of Mol. Bio., 330(4):771-783, 2003. 20, 22
[17] V.P. Dwivedi, C.K. Joshi, T. Laurent, Y. Bengio, and X. Bresson. Benchmarking Graph Neural Networks. arXiv:2003.00982, 2020. 1, 6, 20, 22
[18] A. Feragen, N. Kasenburg, J. Petersen, M. de Bruijne, and K. Borgwardt. Scalable kernels for graphs with continuous attributes. In Adv. in NeurIPS, volume 26, 2013. 6, 21, 22
[19] M. Fey and J.E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Repr. Learning on Graphs and Manifolds, 2019. 14
[20] M. Fey, J. E. Lenssen, F. Weichert, and J. Leskovec. GNNAutoScale: Scalable and expressive graph neural networks via historical embeddings, 2021. 9, 19
[21] M. Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czechoslovak mathematical journal, 25(4):619-633, 1975. 4
[22] S. Freitas, Y. Dong, J. Neil, and D.H. Chau. A large-scale database for graph representation learning. In Adv. in NeurIPS, 2021. 21, 22
[23] J. Gasteiger, A. Bojchevski, and S. Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In International Conference on Learning Representations, 2018. 8
|
| 216 |
+
|
| 217 |
+
[24] J. Gilmer, S.S. Schoenholz, P.F. Riley, O. Vinyals, and G.E. Dahl. Neural message passing for quantum chemistry, 2017. 1
|
| 218 |
+
|
| 219 |
+
[25] C.S. Greene, A. Krishnan, A.K. Wong, E. Ricciotti, R.A. Zelaya, D.S. Himmelstein, R. Zhang, B.M. Hartmann, E. Zaslavsky, S.C. Sealfon, et al. Understanding multicellular function and disease with human tissue-specific networks. Nature genetics, 47(6):569-576, 2015. 21
|
| 220 |
+
|
| 221 |
+
[26] L. Hagen and A.B. Kahng. New spectral methods for ratio cut partitioning and clustering. IEEE transactions on computer-aided design of integrated circuits and systems, 11(9):1074-1085, 1992.4
|
| 222 |
+
|
| 223 |
+
[27] W.L. Hamilton. Graph Representation Learning. Morgan & Claypool, 2020. 1
|
| 224 |
+
|
| 225 |
+
[28] W.L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1025-1035, 2017. 6, 21
|
| 226 |
+
|
| 227 |
+
[29] P.W. Holland, K.B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983. ISSN 0378-8733. 6, 20
|
| 228 |
+
|
| 229 |
+
[30] P. Holme and B.J. Kim. Growing scale-free networks with tunable clustering. Physical Review $E,{65}\left( 2\right)$ , Jan 2002. ISSN 1095-3787. doi: 10.1103/physreve.65.026107. 6,21
|
| 230 |
+
|
| 231 |
+
[31] W. Hu, M.s Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. Adv. in NeurIPS 33, 2020. 1, 2, 6, 21, 22
|
| 232 |
+
|
| 233 |
+
[32] Q. Huang, H. He, A. Singh, S. Lim, and A. Benson. Combining label propagation and simple models out-performs graph neural networks. In International Conference on Learning Representations, 2020. 8
|
| 234 |
+
|
| 235 |
+
[33] J. Irion and N. Saito. Efficient approximation and denoising of graph signals using the multiscale basis dictionaries. IEEE Transactions on Signal and Information Processing over Networks, 3 (3):607-616, 2016. 4
|
| 236 |
+
|
| 237 |
+
[34] D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. 14
|
| 238 |
+
|
| 239 |
+
[35] T.N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proc. of ICLR, 2017. 1, 5
|
| 240 |
+
|
| 241 |
+
[36] S. Köhler, S. Bauer, D. Horn, and P.N. Robinson. Walking the interactome for prioritization of candidate disease genes. The American Journal of Human Genetics, 82(4):949-958, April 2008.8
|
| 242 |
+
|
| 243 |
+
[37] N.M. Kriege, F.D. Johansson, and C. Morris. A survey on graph kernels. Applied Network Science, 5(1):1-42, 2020. 8
|
| 244 |
+
|
| 245 |
+
[38] D. Lim, F. Hohne, X. Li, S. Linda H., V. Gupta, O. Bhalerao, and S. Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods, 2021. 9
|
| 246 |
+
|
| 247 |
+
[39] Y. Ma, X. Liu, N. Shah, and J. Tang. Is homophily a necessity for graph neural networks?, 2021. 8, 19
|
| 248 |
+
|
| 249 |
+
[40] P. Mernyei and C. Cangea. Wiki-cs: A wikipedia-based benchmark for graph neural networks, 2020.7,22,23
|
| 250 |
+
|
| 251 |
+
[41] Y. Min, F. Wenkel, and G. Wolf. Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks. In Adv. in NeurIPS 33, pages 14498-14508, 2020. 1
|
| 252 |
+
|
| 253 |
+
[42] C. Morris, N.M. Kriege, K. Kersting, and P. Mutzel. Faster kernels for graphs with continuous attributes via hashing. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pages 1095-1100, 2016. 6, 21, 22
|
| 254 |
+
|
| 255 |
+
[43] C. Morris, M. Ritzert, M. Fey, W.L. Hamilton, J.E. Lenssen, G. Rattan, and M. Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on AI, volume 33, pages 4602-4609, 2019. 1
|
| 256 |
+
|
| 257 |
+
[44] C. Morris, N.M. Kriege, F. Bause, K. Kersting, P. Mutzel, and M. Neumann. TUDataset: A collection of benchmark datasets for learning with graphs. In ICML 2020 GRL+ Workshop, 2020.1,2
|
| 258 |
+
|
| 259 |
+
[45] H. Mostafa, M. Nassar, and S. Majumdar. On local aggregation in heterophilic graphs. arXiv:2106.03213, 2021. 7, 8, 19
|
| 260 |
+
|
| 261 |
+
[46] J. Palowitch, A. Tsitsulin, B. Mayer, and B. Perozzi. GraphWorld: Fake graphs bring real insights for GNNs. ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022.1,9
|
| 262 |
+
|
| 263 |
+
[47] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035, 2019. 14
|
| 264 |
+
|
| 265 |
+
[48] H. Pei, B. Wei, K.C. Chang, Y. Lei, and B. Yang. Geom-GCN: Geometric graph convolutional networks. In Proc. of ICLR, 2020. 7, 22, 23
|
| 266 |
+
|
| 267 |
+
[49] B. Rozemberczki and R. Sarkar. Characteristic functions on graphs: Birds of a feather, from statistical descriptors to parametric models. In Proc. of 29th ACM Int'l Conf. on Information & Knowledge Management, pages 1325-1334, 2020. 7, 23
|
| 268 |
+
|
| 269 |
+
[50] B. Rozemberczki, C. Allen, and R. Sarkar. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2):cnab014, 2021. 7, 9, 22, 23
|
| 270 |
+
|
| 271 |
+
[51] T. Schulz and P. Welke. On the necessity of graph kernel baselines. In ECML-PKDD, GEM workshop, volume 1, page 6, 2019. 8
|
| 272 |
+
|
| 273 |
+
[52] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network evaluation. NeurIPS 2018 R2L workshop, 2018. 7, 22, 23
|
| 274 |
+
|
| 275 |
+
[53] D.A. Spielman and S. Teng. Spectral sparsification of graphs, 2010. 9
|
| 276 |
+
|
| 277 |
+
[54] D. Szklarczyk, A. Franceschini, S. Wyder, K. Forslund, D. Heller, J. Huerta-Cepas, M. Si-monovic, A. Roth, A. Santos, K.P. Tsafou, et al. STRING v10: protein-protein interaction networks, integrated over the tree of life. Nucleic acids research, 43(D1):D447-D452, 2015. 21
|
| 278 |
+
|
| 279 |
+
[55] J. Topping, F.D. Giovanni, B.P. Chamberlain, X. Dong, and M.M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature, 2021. 9, 19
|
| 280 |
+
|
| 281 |
+
[56] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. In The 6th ICLR, 2018. 1
|
| 282 |
+
|
| 283 |
+
[57] N. Wale, I.A. Watson, and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347-375, 2008. 6, 21, 22
|
| 284 |
+
|
| 285 |
+
[58] J.H. Ward Jr. Hierarchical grouping to optimize an objective function. Journal of the American statistical association, 58(301):236-244, 1963. 4, 14
|
| 286 |
+
|
| 287 |
+
[59] D.J. Watts and S.H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393 (6684):440-442, 1998. 21
|
| 288 |
+
|
| 289 |
+
[60] Z. Wu, B. Ramsundar, E.N. Feinberg, J. Gomes, C. Geniesse, A.S. Pappu, K. Leswing, and V. Pande. MoleculeNet: a benchmark for molecular machine learning. Chemical science, 9(2): 513-530, 2018. 21
|
| 290 |
+
|
| 291 |
+
[61] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4-24, 2020. 1
|
| 292 |
+
|
| 293 |
+
[62] Y. Xie, Z. Xu, J. Zhang, Z. Wang, and S. Ji. Self-supervised learning of graph neural networks: A unified review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 9
|
| 294 |
+
|
| 295 |
+
[63] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In Proc. of ICLR, 2019. 1, 5, 7
|
| 296 |
+
|
| 297 |
+
[64] P. Yanardag and S.V.N. Vishwanathan. Deep graph kernels. In Proc. of 21th SIGKDD, pages 1365-1374, 2015. 6, 20, 21, 22
|
| 298 |
+
|
| 299 |
+
[65] J. You, R. Ying, and J. Leskovec. Design space for graph neural networks. In NeurIPS, 2020. 6, 14, 21, 22
|
| 300 |
+
|
| 301 |
+
[66] H. Zeng, H. Zhou, A. Srivastava, R. Kannan, and V. Prasanna. GraphSAINT: Graph sampling based inductive learning method. In Proc. of ICLR, 2020. 7, 19, 22, 23
|
| 302 |
+
|
| 303 |
+
[67] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57-81, 2020. 1
|
| 304 |
+
|
| 305 |
+
[68] M. Zitnik and J. Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190-i198, 2017. 6, 21, 22
|
| 306 |
+
|
| 307 |
+
## A Extended Methods

Figure A.1: MPNN model blueprint used for all datasets.

### A.1 Taxonomization by Hierarchical Clustering

To study a systematic classification of the graph datasets, we use Ward's method [58] for hierarchical clustering analysis of their sensitivity profiles. Specifically, we first construct a perturbation sensitivity matrix in which each row represents a dataset and each column represents a perturbation. An entry of this matrix is the ratio between the test score achieved on the perturbed dataset and the test score achieved on the original dataset. As our performance metric we use the area under the receiver operating characteristic curve (AUROC), averaged over 10 random seed runs or 10 cross-validation folds, depending on whether a dataset has predefined data splits. Row-wise hierarchical clustering then provides a data-driven taxonomy of the datasets.

Using AUROC as our metric, the values of the perturbation sensitivity matrix range from 0.5 to 1 when a perturbation causes a loss in predictive performance, and from 1 to 2 when it improves it. We therefore $\log_2$-transform the matrix element-wise to balance the two ranges and map the values onto $[-1, 1]$ before hierarchical clustering. For a more intuitive presentation, however, we show the original ratio values as percentages throughout this paper.

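The two steps above (ratio matrix, log2 transform, Ward clustering of the rows) can be sketched as follows with SciPy; the datasets and AUROC ratios below are illustrative placeholders, not our actual results.

```python
# Sketch of the taxonomization step; the ratio values are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: datasets, columns: perturbations; entries are AUROC ratios
# (perturbed / original), averaged over seeds or CV folds.
ratios = np.array([
    [0.62, 1.01, 0.98],   # hypothetical dataset A
    [0.99, 0.55, 1.10],   # hypothetical dataset B
    [1.00, 0.97, 1.02],   # hypothetical dataset C
])

# log2 maps losses (0.5..1) and gains (1..2) symmetrically onto [-1, 1].
sensitivity = np.log2(ratios)

# Ward's method on the rows yields the dataset taxonomy.
Z = linkage(sensitivity, method="ward")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(clusters)
```

Cutting the dendrogram at a different number of clusters (the `t` argument) corresponds to reading the taxonomy at a coarser or finer level.
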
### A.2 MPNN Hyperparameter Selection

We keep the model hyperparameters, illustrated in Figure A.1, identical for every dataset and perturbation combination. We use a linear node embedding layer; 5 graph convolutional layers with residual connections and batch normalization (the latter only for inductive datasets); global mean pooling (for graph-level prediction tasks); and finally a 2-layer MLP classifier. For training we use the Adam optimizer [34], reducing the learning rate by a factor of 0.5 upon reaching a validation loss plateau. Early stopping is based on validation split performance.

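As a rough illustration of this blueprint, the following plain-PyTorch sketch uses a dense mean-aggregation layer as a stand-in for the actual graph convolutions; the layer sizes, toy graph, and optimizer settings are assumptions for demonstration, not our exact configuration.

```python
# Minimal sketch of the Fig. A.1 blueprint in plain PyTorch (illustrative).
import torch
import torch.nn as nn

class MPNNBlueprint(nn.Module):
    def __init__(self, in_dim, hidden, n_classes, n_layers=5):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)  # linear node embedding layer
        self.convs = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(n_layers))
        self.norms = nn.ModuleList(nn.BatchNorm1d(hidden) for _ in range(n_layers))
        self.head = nn.Sequential(  # 2-layer MLP classifier
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))

    def forward(self, x, adj):
        h = self.embed(x)
        for conv, norm in zip(self.convs, self.norms):
            # mean aggregation over neighbors, batch norm, residual connection
            agg = adj @ conv(h) / adj.sum(1, keepdim=True).clamp(min=1)
            h = torch.relu(norm(agg)) + h
        return self.head(h.mean(dim=0, keepdim=True))  # global mean pooling

model = MPNNBlueprint(in_dim=8, hidden=16, n_classes=3)
x, adj = torch.randn(6, 8), (torch.rand(6, 6) > 0.5).float()  # toy graph
logits = model(x, adj)
# Adam with learning-rate reduction by 0.5 on a validation-loss plateau
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="min", factor=0.5)
print(logits.shape)  # torch.Size([1, 3])
```
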
Implementation. Our pipeline is built using PyTorch [47] and PyG [19] with GraphGym [65] (provided under the MIT License). Its modular and scalable design facilitated one of the most extensive experimental evaluations of graph datasets to date.

Computing environment and used resources. All experiments were run in a shared computing cluster environment with varying CPU and GPU architectures, involving a mix of NVIDIA V100 (32GB), RTX8000 (48GB), and A100 (40GB) GPUs. The resource budget for each experiment was 1 GPU, 4 CPUs, and up to 32GB of system RAM.

## B Extended Results

### B.1 Taxonomy of Inductive Benchmarks

I-1: Node-feature reliance. The top-most cluster I-1, while indifferent to structural perturbations, is highly sensitive to the node feature perturbations that comprise the left-hand-side columns in Figure 4. The presence of the image-based datasets MNIST and CIFAR10 in this cluster is not surprising: for superpixel graphs, the structure loosely follows a grid layout for all classes, so determining the class solely from structure is difficult. Additionally, the coordinate information of superpixels is also encoded in the node features, together with average pixel intensities. A model with a sufficiently powerful classifier component can therefore achieve high accuracy from these node features alone. Furthermore, the sensitivity of these datasets to MidPass and HighPass indicates that the overall shape of the signals encoded by low frequencies is more informative for classifying the image content than the sharp superpixel transitions encoded by high frequencies. The presence of ENZYMES in I-1 is likely due to the fact that some of the node features are precomputed using graph kernels and are therefore sufficient to distinguish the enzyme classes in the dataset when structural information is removed.

I-2: Node features contain the majority of necessary structural information. For datasets in I-2, the graph structural information is again not necessary for achieving the baseline performance if the original node features are present, while the performance deteriorates noticeably when NoNodeFtrs is applied. However, unlike I-1, these datasets are much less affected overall by perturbations of the node features. Many of the node features in these datasets are themselves derived from the graph's geometry, and MPNNs appear able to use either the graph structure or the node features to compensate for the absence of the other when encountering perturbed graphs. The low/mid/high-pass filterings in particular appear to retain a significant amount of geometric information.

The synthetic graphs of Scale-Free and Small-world (both I-2 datasets) are generated by different algorithms (BA and WS, respectively), but the node features and tasks are equivalent: the features are the local clustering coefficient and PageRank score of each node, and the task is to classify graphs based on average path length. Since the encoded features are derived from the graph structure itself, MPNNs are still able to exploit them when the original graph structure is perturbed. When the MPNNs are instead forced to rely on graph structure, they still attain AUROCs above random despite some decrease.

For many of the I-2 datasets, NodeDeg allows one to replace the geometric information of the original node features with new geometric information, the degree of each vertex, to large success: for some of them, the original AUROC scores are recovered and even surpassed, possibly because NodeDeg reinforces the existing structural signal. This trend is not as pronounced when the GIN-based model is used, since GIN achieves comparatively high performance even in the face of NoNodeFtrs, likely due to the higher expressiveness of GIN compared to GCN in distinguishing structural patterns.

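The NodeDeg perturbation discussed here, replacing the original node features with one-hot encoded degrees, can be sketched as follows; the toy adjacency matrix and the degree cap are illustrative assumptions.

```python
# Sketch of a NodeDeg-style perturbation: features become one-hot degrees.
import numpy as np

def node_deg_features(adj, max_deg=None):
    """Return one-hot encoded node degrees, capping at max_deg (assumed)."""
    deg = adj.sum(axis=1).astype(int)
    max_deg = int(deg.max()) if max_deg is None else max_deg
    onehot = np.zeros((len(deg), max_deg + 1))
    onehot[np.arange(len(deg)), np.minimum(deg, max_deg)] = 1.0
    return onehot

# Toy 3-node graph: node 0 connected to nodes 1 and 2.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
x_new = node_deg_features(adj)
print(x_new)
```
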
On the other hand, this cluster also contains datasets of biochemical origin whose node features encode chemical and physical attributes, such as atom or amino acid type. Except for MUTAG, there appears to be some information encoded in these node features that cannot be replaced by graph structure or node degree information.

I-3: Graph-structure reliance. The I-3 cluster is characterized by strong structural dependencies, and can be further divided into two subgroups based on their sensitivities to node feature perturbations.

The first subgroup, which consists of PATTERN, COLLAB, IMDB-BINARY and REDDIT, is not affected by node feature perturbations. These datasets do not have any original informative node features, and their tasks appear to be purely structure-based. Indeed, in the case of PATTERN the task is to detect structural patterns in graphs, rendering node features irrelevant. On the other hand, structural perturbations such as NoEdges and FullyConn cause drastic performance drops in this group, since most of its task signal is sourced from graph structure. This group also exhibits limited to no sensitivity to the Frag-k2 and Frag-k3 perturbations, which test for degrees of reliance on longer-range interactions by limiting information propagation to 2 or 3 hops. We still see prominent sensitivity to Frag-k1, though, implying reliance on information from immediate neighbors. We can attribute the insensitivity for $k > 1$ to inherent graph properties of some of these datasets: for dense networks like PATTERN, or ego-nets such as IMDB-BINARY and COLLAB, just 1 or 2 hops recover the original graph; for these graphs, the notion of long-range information does not exist.

The second I-3 subgroup, formed by the NCI datasets and Synthie, contains the datasets that are notably affected by all perturbations. For Synthie, this sensitivity stems from its construction: the four synthetic classes in Synthie are formed by combinations of two distributions of graph structures and two distributions of node features, so eliminating either leads to a partial collapse in the distinguishability of two classes. The NCI classification tasks, similarly to the related bioinformatics datasets in I-2, show a degree of reliance on the high-dimensional node features; additionally, they also depend on non-local structure, being among the datasets most adversely affected by Frag-k2 and Frag-k3.

The synthetic datasets CLUSTER and SYNTHETICnew are also adversely affected by both structural and node feature perturbations. However, they stand out due to the magnitude of this effect: many of the perturbations lead to a major decrease in AUROC and close-to-random performance. A closer inspection provides an explanation. The task of CLUSTER is semi-supervised clustering of unlabeled nodes into six clusters, where the true cluster labels are given as node features in only a single node per cluster. NoEdges and FullyConn remove the cluster structure altogether, while NoNodeFtrs and NodeDeg remove the given cluster labels, rendering the task unsolvable in either case. In SYNTHETICnew, the two classes are derived from a "base" graph by class-specific edge rewiring and node feature permutation, hence either graph structure or node features should differentiate the classes. Despite this expectation, we observe that the original node features alone are not sufficient, as structure perturbations have a detrimental impact on the prediction performance. On the other hand, GIN and GCN with NodeDeg can learn to distinguish the two classes even without the original node features. Thus, the original node features appear to be unnecessary, and after band-pass filtering they even provide a misleading signal.


(c) Sensitivity profiles by 2-Layer GIN model; annotated by cluster assignment w.r.t. GCN model.

Figure B.1: Taxonomy of inductive graph learning datasets via graph perturbations. The categorization into 3 dataset clusters is stable across the following models with only minor deviations: (a) GCN, (b) GIN, (c) 2-Layer GIN, (d) ChebNet, (e) GatedGCN, (f) GCNII. The left and right parts of panel (a) are as shown in Figures 3a and 4, respectively, and are repeated here for ease of comparison. Missing performance ratios (due to out-of-memory errors) are shown in gray.

### B.2 Taxonomy of Transductive Benchmarks

All transductive datasets are relatively insensitive to structural perturbations. Unlike many of the inductive datasets that show significant reliance on the graph structure (I-3), the lowest performance achieved on a transductive dataset due to graph structure removal is still as high as 92% (Flickr), suggesting a weak dependence on the full graph structure. Furthermore, on average, considering only the neighborhoods of up to 3 hops (Frag-k3) nearly retains the full potential of the model (99% ± 1.6%), revealing the lack of long-range dependencies in these node-level datasets. Such disregard for the full graph structure might be attributed to limitations of GCN expressivity and issues such as oversquashing [55]. While these limitations are real, our observation of long-range dependencies on some graph-level tasks like NCI, coupled with our architecture being 5 layers deep with residual connections, indicates that our GCN model is capable of capturing non-local information within 3-hop neighborhoods. Furthermore, the long-range independence we observe in transductive node-level datasets is consistent with the promising results of recently developed scalable GNNs that operate on subgraphs [12, 66, 20], breaking or limiting long-range connections.

T-3: Indifference to node and structure removal. The datasets in T-3 are relatively insensitive to perturbations of the graph structure and also to the removal of node features (NoNodeFtrs and NodeDeg). For example, the Amazon datasets (Am-Phot and Am-Comp) achieve near-perfect classification performance regardless of the perturbation applied, suggesting redundancy between node features and graph structure for the corresponding tasks. For these datasets, in particular GitHub, Am, and Twitch, more sophisticated perturbations, or combinations of perturbations, might be needed to gauge their essential characteristics.

T-2: Rich node features, but substitutable by structural (summary) information. T-2 contains a broad spectrum of datasets, ranging from citation networks (CF) and social networks (Coau, FBPP, LFMA) to web pages (WikiNet, WikiCS). The considerable performance decrease due to node feature removal suggests the relevance of the node features for their tasks. For example, it is not surprising that the binary bag-of-words features of the CF datasets provide relevant information for classifying papers into different fields of research, as one might expect some keywords to appear more often in one field than in another. Furthermore, using one-hot encoded node degrees (NodeDeg) always results in better performance than NoNodeFtrs. In many cases, such as Facebook (FBPP), NodeDeg nearly retains the baseline performance, suggesting the relevance of node degree information, as a form of structural summary, for the respective tasks.

WebKB-Tex, although clustered into T-2, is more of an outlier that does not clearly fit into any of the existing clusters. As we discuss further under T-1, WebKB-Tex benefits considerably from HighPass, while LowPass and MidPass severely decrease its performance.

T-1: Heterophilic datasets. Three of the four datasets in T-1 (Actor, WebKB-Cor, and WebKB-Wis) are commonly referred to as heterophilic datasets [45, 39]. While WebKB-Tex (T-2) is also known to be heterophilic, it is isolated from T-1 mainly due to its insensitivity to node feature removal, suggesting the structure alone is sufficient for its prediction task.

Our results show that in heterophilic datasets such as those in T-1 and WebKB-Tex, LowPass node feature filtering, realized by local aggregation (Eq. 3), significantly degrades the performance, unlike in homophilic datasets. By contrast, HighPass results in better performance than LowPass. In the case of WebKB-Tex, HighPass significantly improves the performance over the baseline. This observation relates to recent findings [39] that, in the case of extreme heterophily, local information, this time in the form of neighborhood patterns, may suffice to infer the correct node labels.

Finally, despite heterophilic datasets [39, 2, 55, 45] attracting much recent attention, this type of dataset (T-1 and WebKB-Tex) is lacking in availability compared to the others (T-{2,3}), which exhibit homophily but with different levels of reliance on node features. Thus, there is a need to collect and generate more real-world heterophilic datasets.

### B.3 Correlations of Perturbations


Figure B.2: Pearson correlation coefficients of the log2 performance fold change between different perturbations (w.r.t. a GCN model).

We compute the Pearson correlation between all pairs of perturbations based on the log2 performance fold change. The results in Figure B.2 indicate that many perturbations correlate with each other to some extent. For both transductive and inductive benchmarks, the perturbations roughly cluster into two groups, separating node feature perturbations (see Section 2.1) from graph structure perturbations (see Section 2.2). In particular, perturbations that replace the original node features with other, less informative features, including RandFtrs, NoNodeFtrs, and NodeDeg, correlate highly with one another (Pearson $r \geq 0.6$). Similarly, perturbations that severely break the graphs apart, including NoEdges, Frag-k1, and FiedlerFrag, are highly correlated (Pearson $r \geq 0.8$).

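This analysis amounts to correlating the columns of the dataset-by-perturbation log2 fold-change matrix; a minimal sketch with an illustrative random matrix (not our measured fold changes):

```python
# Sketch of the perturbation-correlation analysis behind Fig. B.2.
import numpy as np

rng = np.random.default_rng(0)
# Rows: datasets, columns: perturbations (log2 performance fold changes).
fold_changes = rng.normal(size=(20, 5))
# Make perturbation 1 nearly a copy of perturbation 0 to mimic a correlated pair.
fold_changes[:, 1] = fold_changes[:, 0] + 0.1 * rng.normal(size=20)

# Perturbation-by-perturbation Pearson correlation matrix.
corr = np.corrcoef(fold_changes, rowvar=False)
print(corr.shape)
```
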
## C Graph Learning Benchmarks

### C.1 Inductive Datasets

MNIST and CIFAR10 [17] are derived from the well-known image classification datasets. The images are converted to graphs by SLIC superpixelization; node features are the average pixel coordinates and intensities, and edges are constructed based on a kNN criterion.

PATTERN and CLUSTER [17] are node-level inductive datasets generated from SBMs [29]. In PATTERN, the task is to identify nodes of a structurally specific subgraph; CLUSTER has a semi-supervised clustering task of predicting the true cluster assignment of nodes while observing only one labelled node per cluster.

IMDB-BINARY [64] is a dataset of ego-networks, where nodes represent actors/actresses and an edge between two nodes means that the two artists played in a movie together. The task is to determine which genre (action or romance) each ego-network belongs to.

D&D [16] is a protein dataset where each protein is represented by a graph with a rich node feature set. The task is to classify proteins as enzymes or non-enzymes.

ENZYMES [6] is a dataset of tertiary structures from six enzymatic classes (determined by Enzyme Commission numbers). Each node represents a secondary structure element (SSE) and has edges to its three spatially closest nodes. Node features are the SSE type together with physical and chemical information.

PROTEINS [6] is a modification of D&D [16]; the task is the same, but the protein graphs are generated as in ENZYMES.

NCI1 and NCI109 [57] consist of graph representations of chemical compounds; each graph represents a molecule in which nodes represent atoms and edges represent atomic bonds. Atom types are one-hot encoded as node features. The tasks are to determine whether a given compound is active or inactive in inhibiting non-small cell lung cancer (NCI1) or ovarian cancer (NCI109).

COLLAB [64] is an ego-network dataset of researchers in three different fields of physics. Each graph is a researcher's ego-network, where nodes are researchers and an edge between two nodes means the two researchers have collaborated on a paper. The task is to determine which field a given researcher ego-network belongs to.

REDDIT-BINARY and REDDIT-MULTI-5K [64] graphs are derived from Reddit communities (subreddits), which are either Q&A-based or discussion-based. Each graph represents a set of interactions between users through posts and comments; nodes represent users, and an edge implies an interaction between two users. The task for REDDIT-BINARY is to determine whether a given interaction graph belongs to a Q&A or discussion subreddit. In REDDIT-MULTI-5K, the graphs are drawn from 5 specific subreddits, and the task is to predict the subreddit a graph belongs to.

MUTAG [1] is a dataset of nitroaromatic compounds. Each compound is represented by a graph in which nodes represent atoms, with their types one-hot encoded as node features, and edges represent atomic bonds. The task is to determine whether a given compound has a mutagenic effect on the Salmonella typhimurium bacterium.

MalNet-Tiny [22] is a smaller version of the MalNet dataset, consisting of function call graphs of various malware on Android systems, with Local Degree Profiles as node features. In MalNet-Tiny, the task is constrained to classification into 5 different types of malware.

ogbg-molhiv, ogbg-molpcba, and ogbg-moltox21 [31], adopted from MoleculeNet [60], are composed of molecular graphs, where nodes represent atoms and edges represent the atomic bonds in between. Node features include the atom type and physical/chemical information such as chirality and charge. The task is to classify molecules by whether they inhibit HIV replication (ogbg-molhiv), or by their toxicity on 12 different targets such as receptors and stress response pathways in a multi-label classification setting (ogbg-moltox21). In ogbg-molpcba, the task is 128-way multi-task binary classification derived from 128 bioassays from PubChem BioAssay.

The PPI [68, 28] dataset contains a collection of 24 tissue-specific protein-protein interaction networks derived from the STRING database [54] using tissue-specific gold standards from [25]. 20 of the networks are used for training, 2 for validation, and 2 for testing. In each network, each protein (node) is associated with 50 different gene signatures as node features. The multi-label node classification task is to classify each gene (node) in a graph based on its gene ontology terms.

SYNTHETICnew [18] is a dataset where each graph is based on a random graph $G$ with scalar node features drawn from the normal distribution. Two classes of graphs are generated from $G$ by randomly rewiring edges and permuting node attributes; the number of rewirings and permuted attributes are distinct for the two classes. Noise is added to the node features to make the task more difficult. The task is to determine which class a given graph belongs to.

The Synthie [42] dataset is generated from two Erdős-Rényi graphs $G_{1,2}$: two sets of graphs $S_{1,2}$ are generated by randomly adding and removing edges from $G_{1,2}$. Then, 10 graphs are sampled from these sets and connected by randomly adding edges, resulting in a single graph. Two classes of these graphs, $C_{1,2}$, are generated by using distinct sampling probabilities for the two sets. The two classes are then in turn split into two by generating two sets of vectors $A$ and $B$: for one class, nodes of a given graph are assigned a vector from $A$ as node features if they were sampled from $S_1$, and from $B$ if sampled from $S_2$; and vice versa for the other class. The task is to classify which of these four classes a given graph belongs to.

The Small-world and Scale-free [65] datasets are generated by tweaking the generation parameters of small-world [59] and scale-free [30] graph models. Graphs are generated over a range of Average Clustering Coefficient and Average Path Length parameters. In our experiments, clustering coefficients and PageRank scores constitute the node features, while the task is to classify graphs based on average path length; the continuous path length variable is made discrete by 10-way binning.

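The setup described above can be sketched with NetworkX as follows: WS and BA generators, clustering coefficient and PageRank as the two node features, and a binned average path length as the label. The generator parameters and the binning scheme are illustrative assumptions, not the benchmark's exact sweep.

```python
# Sketch of the Small-world / Scale-free dataset construction (illustrative).
import networkx as nx
import numpy as np

def featurize(G, n_bins=10, max_len=10.0):
    cc = nx.clustering(G)                      # local clustering coefficient
    pr = nx.pagerank(G)                        # PageRank score
    x = np.array([[cc[v], pr[v]] for v in G])  # 2 features per node
    apl = nx.average_shortest_path_length(G)
    y = min(int(apl / max_len * n_bins), n_bins - 1)  # 10-way binning (assumed)
    return x, y

ws = nx.connected_watts_strogatz_graph(64, 6, 0.1, seed=0)  # small-world
ba = nx.barabasi_albert_graph(64, 3, seed=0)                # scale-free
for G in (ws, ba):
    x, y = featurize(G)
    print(x.shape, y)
```
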
Table C.1: Inductive benchmarks. All datasets are equipped with graph-level classification tasks, except PATTERN and CLUSTER that are equipped with inductive node-level classification tasks.

<table><tr><td>$\mathbf{{Dataset}}$</td><td>#Graphs</td><td>Avg # Nodes</td><td>Avg # Edges</td><td>#Features</td><td>#Classes</td><td>Predef. split</td><td>$\mathbf{{Ref}.}$</td></tr><tr><td>MNIST</td><td>70,000</td><td>70.57</td><td>564.53</td><td>3</td><td>10</td><td>Yes</td><td>[17]</td></tr><tr><td>CIFAR10</td><td>60,000</td><td>117.63</td><td>941.07</td><td>5</td><td>10</td><td>Yes</td><td>[17]</td></tr><tr><td>PATTERN</td><td>14,000</td><td>118.89</td><td>6,078.57</td><td>3</td><td>2</td><td>Yes</td><td>[17]</td></tr><tr><td>CLUSTER</td><td>12,000</td><td>117.20</td><td>4,301.72</td><td>7</td><td>6</td><td>Yes</td><td>[17]</td></tr><tr><td>IMDB-BINARY</td><td>1,000</td><td>19.77</td><td>96.53</td><td>-</td><td>2</td><td>No</td><td>[64]</td></tr><tr><td>D&D</td><td>1,178</td><td>284.32</td><td>715.66</td><td>89</td><td>2</td><td>No</td><td>[16]</td></tr><tr><td>ENZYMES</td><td>600</td><td>32.63</td><td>62.14</td><td>21</td><td>6</td><td>No</td><td>[6]</td></tr><tr><td>PROTEINS</td><td>1,113</td><td>39.06</td><td>72.82</td><td>4</td><td>2</td><td>No</td><td>[6]</td></tr><tr><td>NCI1</td><td>4,110</td><td>29.87</td><td>32.3</td><td>37</td><td>2</td><td>No</td><td>[57]</td></tr><tr><td>NCI109</td><td>4,127</td><td>29.68</td><td>32.13</td><td>38</td><td>2</td><td>No</td><td>[57]</td></tr><tr><td>COLLAB</td><td>5,000</td><td>74.49</td><td>2,457.78</td><td>-</td><td>3</td><td>No</td><td>[64]</td></tr><tr><td>REDDIT-BINARY</td><td>2,000</td><td>429.63</td><td>497.75</td><td>-</td><td>2</td><td>No</td><td>[64]</td></tr><tr><td>REDDIT-MULTI-5K</td><td>4,999</td><td>508.52</td><td>594.87</td><td>-</td><td>5</td><td>No</td><td>[64]</td></tr><tr><td>MUTAG</td><td>188</td><td>17.93</td><td>19.79</td><td>7</td><td>2</td><td>No</td><td>[1]</td></tr><tr><td>MalNet-Tiny</td><td>5,000</td><td>1,410.3</td><td>2,859.94</td><td>5</td><td>5</td><td>No</td><td>[22]</td></tr><tr><td>ogbg-molhiv</td><td>41,127</td><td>25.5</td><td>27.5</td><td>9 sets</td><td>2</td><td>Yes</td><td>[31]</td></tr><tr><td>ogbg-molpcba</td><td>437,929</td><td>26.0</td><td>28.1</td><td>9 sets</td><td>128x binary</td><td>Yes</td><td>[31]</td></tr><tr><td>ogbg-moltox21</td><td>7,831</td><td>18.6</td><td>19.3</td><td>9 sets</td><td>12x binary</td><td>Yes</td><td>[31]</td></tr><tr><td>PPI</td><td>24</td><td>2,372.67</td><td>66,136</td><td>50</td><td>121</td><td>Yes</td><td>[68]</td></tr><tr><td>SYNTHETICnew</td><td>300</td><td>100</td><td>196</td><td>1</td><td>2</td><td>No</td><td>[18]</td></tr><tr><td>Synthie</td><td>400</td><td>95</td><td>196.25</td><td>15</td><td>4</td><td>No</td><td>[42]</td></tr><tr><td>Small-world</td><td>256</td><td>64</td><td>694</td><td>2</td><td>10</td><td>No</td><td>[65]</td></tr><tr><td>Scale-free</td><td>256</td><td>64</td><td>501.56</td><td>2</td><td>10</td><td>No</td><td>[65]</td></tr></table>
|
| 418 |
+
|
| 419 |
+
### C.2 Transductive Node-level Datasets
|
| 420 |
+
|
| 421 |
+
WikiNet [48] contains two networks of Wikipedia pages, where edges indicate mutual links between pages, and node features are bag-of-words (BOW) vectors of informative nouns. The task is to classify the web pages into bins of average monthly traffic.
|
| 422 |
+
|
| 423 |
+
WebKB [48] contains networks of web pages from different universities, where a (directed) edge is a hyperlink between two web pages, with BOW node features. The task is to classify the web pages into five categories: student, project, course, staff, and faculty.
|
| 424 |
+
|
| 425 |
+
Actor [48] is a network of actors, where an edge indicates co-occurrence of two actors on the same Wikipedia page, with node features given by keywords about the actor on Wikipedia. The task is to classify each actor into one of five categories.
|
| 426 |
+
|
| 427 |
+
WikiCS [40] is a network of Wikipedia articles related to Computer Science, where edges represent hyperlinks between them, with node features given by 300-dimensional word embeddings of the articles. The task is to classify the articles into one of ten branches of the field.
|
| 428 |
+
|
| 429 |
+
Flickr [66] is a network of images, where edges represent common properties between images, such as shared location, gallery membership, or comments by the same users. The node features are BOW vectors of image descriptions, and the task is to predict one of 7 tags for an image.
|
| 430 |
+
|
| 431 |
+
CF (CitationFull) [5] contains citation networks where nodes are papers, edges represent citations, and node features are BOW vectors of the papers. The task is to classify the papers by topic.
|
| 432 |
+
|
| 433 |
+
DzEu (DeezerEurope) [50] is a network of Deezer users from European countries where nodes are the users and edges are mutual follower relationships. The task is to predict the gender of users.
|
| 434 |
+
|
| 435 |
+
LFMA (LastFMAsia) [50] is a network of LastFM users from Asian countries where edges are mutual follower relationships between them. The task is to predict the location of users.
|
| 436 |
+
|
| 437 |
+
Amazon [52] contains Amazon Computers and Amazon Photo. They are segments of the Amazon co-purchase graph, where nodes represent goods, edges indicate that two goods are frequently bought together, node features are bag-of-words encoded product reviews, and class labels are given by the product category.
|
| 438 |
+
|
| 439 |
+
Table C.2: Transductive benchmarks with node-level classification tasks.
|
| 440 |
+
|
| 441 |
+
<table><tr><td>$\mathbf{{Dataset}}$</td><td>#Nodes</td><td>#Edges</td><td>#Node feat.</td><td>#Pred. classes</td><td>Predef. split</td><td>$\mathbf{{Ref}.}$</td></tr><tr><td>WikiNet-cham</td><td>2,277</td><td>72,202</td><td>128</td><td>5</td><td>Yes</td><td>[48]</td></tr><tr><td>WikiNet-squir</td><td>5,201</td><td>434,146</td><td>128</td><td>5</td><td>Yes</td><td>[48]</td></tr><tr><td>WebKB-Cor</td><td>183</td><td>298</td><td>1,703</td><td>10</td><td>Yes</td><td>[48]</td></tr><tr><td>WebKB-Wis</td><td>251</td><td>515</td><td>1,703</td><td>10</td><td>Yes</td><td>[48]</td></tr><tr><td>WebKB-Tex</td><td>183</td><td>325</td><td>1,703</td><td>10</td><td>Yes</td><td>[48]</td></tr><tr><td>Actor</td><td>7,600</td><td>30,019</td><td>932</td><td>10</td><td>Yes</td><td>[48]</td></tr><tr><td>WikiCS</td><td>11,701</td><td>297,110</td><td>300</td><td>10</td><td>Yes</td><td>[40]</td></tr><tr><td>Flickr</td><td>89,250</td><td>899,756</td><td>500</td><td>7</td><td>Yes</td><td>[66]</td></tr><tr><td>CF-Cora</td><td>19,793</td><td>126,842</td><td>8,710</td><td>70</td><td>No</td><td>[5]</td></tr><tr><td>CF-CoraML</td><td>2,995</td><td>16,316</td><td>2,879</td><td>7</td><td>No</td><td>[5]</td></tr><tr><td>CF-CiteSeer</td><td>4,230</td><td>10,674</td><td>602</td><td>6</td><td>No</td><td>[5]</td></tr><tr><td>CF-DBLP</td><td>17,716</td><td>105,734</td><td>1,639</td><td>4</td><td>No</td><td>[5]</td></tr><tr><td>CF-PubMed</td><td>19,717</td><td>88,648</td><td>500</td><td>3</td><td>No</td><td>[5]</td></tr><tr><td>DzEu</td><td>28,281</td><td>185,504</td><td>128</td><td>2</td><td>No</td><td>[50]</td></tr><tr><td>LFMA</td><td>7,624</td><td>55,612</td><td>128</td><td>18</td><td>No</td><td>[50]</td></tr><tr><td>Am-Comp</td><td>13,752</td><td>491,722</td><td>767</td><td>10</td><td>No</td><td>[52]</td></tr><tr><td>Am-Phot</td><td>7,650</td><td>238,162</td><td>745</td><td>8</td><td>No</td><td>[52]</td></tr><tr><td>Coau-CS</td><td>18,333</td><td>163,788</td><td>6,805</td><td>15</td><td>No</td><td>[52]</td></tr><tr><td>Coau-Phy</td><td>34,493</td><td>495,924</td><td>8,415</td><td>5</td><td>No</td><td>[52]</td></tr><tr><td>Twitch-EN</td><td>7,126</td><td>77,774</td><td>128</td><td>2</td><td>No</td><td>[49]</td></tr><tr><td>Twitch-ES</td><td>4,648</td><td>123,412</td><td>128</td><td>2</td><td>No</td><td>[49]</td></tr><tr><td>Twitch-DE</td><td>9,498</td><td>315,774</td><td>128</td><td>2</td><td>No</td><td>[49]</td></tr><tr><td>Twitch-PT</td><td>1,912</td><td>64,510</td><td>128</td><td>2</td><td>No</td><td>[49]</td></tr><tr><td>Github</td><td>37,700</td><td>578,006</td><td>128</td><td>2</td><td>No</td><td>[49]</td></tr><tr><td>FBPP</td><td>22,470</td><td>342,004</td><td>128</td><td>4</td><td>No</td><td>[49]</td></tr></table>
|
| 442 |
+
|
| 443 |
+
Coau (Coauthor) [52] contains Coauthor CS and Coauthor Physics. They are co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. Nodes are authors, and are connected by an edge if they co-authored a paper; node features represent the paper keywords of each author's papers, and class labels indicate the most active field of study for each author.
|
| 444 |
+
|
| 445 |
+
Twitch [49] contains Twitch user-user networks of gamers who stream in a certain language, where nodes are the users themselves and edges are mutual friendships between them. The task is to predict whether a streamer uses explicit language. Due to low baseline performance even after a thorough hyperparameter search, we excluded Twitch-RU and Twitch-FR from our main analysis.
|
| 446 |
+
|
| 447 |
+
Github [49] is a network of GitHub developers where nodes are developers who have starred at least 10 repositories and edges are mutual follower relationships between them. The task is to predict whether the user is a web or a machine learning developer.
|
| 448 |
+
|
| 449 |
+
FBPP (FacebookPagePage) [49] is a network of verified Facebook pages that liked each other: nodes correspond to official Facebook pages and edges to mutual likes between pages. The task is multi-class classification of the page category.
|
papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,163 @@
| 1 |
+
§ TAXONOMY OF BENCHMARKS IN GRAPH REPRESENTATION LEARNING
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Anonymous Affiliation
|
| 6 |
+
|
| 7 |
+
Anonymous Email
|
| 8 |
+
|
| 9 |
+
§ ABSTRACT
|
| 10 |
+
|
| 11 |
+
Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in the selection and development of adequate graph benchmarks, and in better-informed evaluation of future GNN methods. Finally, our approach and implementation in the GTaxoGym package ${}^{1}$ are extendable to multiple graph prediction task types and future datasets.
|
| 12 |
+
|
| 13 |
+
§ 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Machine learning for graph representation learning (GRL) has seen rapid development in recent years [27]. Originally inspired by the success of convolutional neural networks in regular Euclidean domains, thanks to their ability to leverage data-intrinsic geometries, classical graph neural network (GNN) models $\left\lbrack {{15},{35},{56}}\right\rbrack$ extend those principles to the irregular graph domain. Further advances in the field have led to a wide selection of complex and powerful GNN architectures. Some models are provably more expressive than others [63, 43], can leverage multi-resolution views of graphs [41], or can account for implicit symmetries in graph data [9]. Comprehensive surveys of graph neural networks can be found in Bronstein et al. [8], Wu et al. [61], Zhou et al. [67].
|
| 16 |
+
|
| 17 |
+
Most graph-structured data encode information in graph structures and node features. The structure of each graph represents relationships (i.e., edges) between different nodes, while the node features represent quantities of interest at each individual node. For example, in citation networks, nodes represent papers and edges represent citations between the papers. On such networks, node features often capture the presence or absence of certain keywords in each paper, encoded in binary feature vectors. In graphs modeling social networks, each node represents a user, and the corresponding node features often include user statistics like gender, age, or binary encodings of personal interests.
|
| 18 |
+
|
| 19 |
+
Intuitively, the power of GNNs lies in relating local node-feature information to global graph structure information, typically achieved by applying a cascade of feature aggregation and transformation steps. In aggregation steps, information is exchanged between neighboring nodes, while transformation steps apply a (multi-layer) perceptron to feature vectors of each node individually. Such architectures are commonly referred to as Message Passing Neural Networks (MPNN) [24].
|
| 20 |
+
|
| 21 |
+
Historically, GNN methods have been evaluated on a small collection of datasets [44], many of which originated from the development of graph kernels. The limited quantity, size and variety of these datasets have rendered them insufficient to serve as distinguishing benchmarks [17, 46]. Therefore, recent work has focused on compiling a set of large(r) benchmarking datasets across diverse graph domains $\left\lbrack {{17},{31}}\right\rbrack$ . Despite these efforts and the introduction of new datasets, it is still not well understood what aspects of a dataset most influence the performance of GNNs. Which is more important, the geometric structure of the graph or the node features? Are long-range interactions crucial, or are short-range interactions sufficient for most tasks? This lack of understanding of dataset properties and of their similarities makes it difficult to select a benchmarking suite that enables comprehensive evaluation of GNN models. Even when an array of seemingly different datasets is used, they may be probing similar aspects of graph representation learning.
|
| 22 |
+
|
| 23 |
+
${}^{1}$ https://github.com/G-Taxonomy-Workgroup/GTaxoGym
|
| 24 |
+
|
| 25 |
+
|
| 26 |
+
|
| 27 |
+
Figure 1: Overview of our pipeline to taxonomize graph learning datasets.
|
| 28 |
+
|
| 29 |
+
Leveraging symmetries and other geometric priors in graph data is crucial for generalizable learning [9]. While invariance or equivariance to some transformations is inherent, invariance to others may only be empirically or partially apparent. Motivated by this observation, we propose to use the lens of empirical transformation sensitivity to gauge how task-related information is encoded in graph datasets and, subsequently, to taxonomize their use as benchmarks in graph representation learning. Our approach is illustrated in Figure 1. Our contributions in this study are:
|
| 30 |
+
|
| 31 |
+
1. We develop a graph dataset taxonomization framework that is extendable to both new datasets and evaluation of additional graph/task properties,
|
| 32 |
+
|
| 33 |
+
2. Using this framework, we provide the first taxonomization of GNN (and GRL) benchmarking datasets, collected from TUDatasets [44], OGB [31] and other sources,
|
| 34 |
+
|
| 35 |
+
3. Through the resulting taxonomy, we provide insights about existing datasets and guide better dataset selection in future benchmarking of GNN models.
|
| 36 |
+
|
| 37 |
+
§ 2 METHODS
|
| 38 |
+
|
| 39 |
+
As a proxy for invariance or sensitivity to graph perturbations, we study the changes in GNN performance on perturbed versions of each dataset. These perturbations are designed to eliminate or emphasize particular types of information embedded in the graphs. We define an empirical sensitivity profile of a dataset as a vector where each element is the performance of a GNN after a given perturbation, reported as a percentage of the network's performance on the original dataset. In particular, we use a set of 13 perturbations, visualized in Figure 2. Of these perturbations, 6 are designed to perturb node features while keeping the graph structure intact, whereas the remaining 7 keep the node attributes the same but manipulate the graph structure.
|
| 40 |
+
|
| 41 |
+
For the purpose of these perturbations, we consider all graphs to be undirected and unweighted, and assume they all have node features, but not edge features. These assumptions hold for most datasets we use in this study. However, if necessary, we preprocess the data by symmetrizing each graph's adjacency matrix and dropping any edge attributes. Formally, let $G = \left( {V,E,\mathbf{X}}\right)$ be an undirected, unweighted, attributed graph with node set $V$ of cardinality $\left| V\right| = n$ , edge set $E \subset V \times V$ , and a matrix of $d$ -dimensional node features $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ . We let $\mathbf{M} \in {\mathbb{R}}^{n \times n}$ denote the adjacency matrix of each graph, where $\mathbf{M}\left( {u,v}\right) = 1$ if $\left( {u,v}\right) \in E$ and zero otherwise.
|
| 42 |
+
|
| 43 |
+
Several of our perturbations are based on spectral graph theory, which represents graph signals in a spectral domain analogous to classical Fourier analysis. We define the graph Laplacian $\mathbf{L} \mathrel{\text{ := }} \mathbf{D} - \mathbf{M}$ and the symmetric normalized graph Laplacian $\mathbf{N} \mathrel{\text{ := }} {\mathbf{D}}^{-\frac{1}{2}}\mathbf{L}{\mathbf{D}}^{-\frac{1}{2}} = \mathbf{I} - {\mathbf{D}}^{-\frac{1}{2}}\mathbf{M}{\mathbf{D}}^{-\frac{1}{2}}$ , where $\mathbf{D}$ is the diagonal degree matrix. Both $\mathbf{L}$ and $\mathbf{N}$ are positive semi-definite and admit orthonormal eigendecompositions $\mathbf{L} = \mathbf{\Phi }\mathbf{\Lambda }{\mathbf{\Phi }}^{\top }$ and $\mathbf{N} = \widetilde{\mathbf{\Phi }}\widetilde{\mathbf{\Lambda }}{\widetilde{\mathbf{\Phi }}}^{\top }$ . By convention, we order the eigenvalues and corresponding eigenvectors ${\left\{ \left( {\lambda }_{i},{\phi }_{i}\right) \right\} }_{0 \leq i \leq n - 1}$ of $\mathbf{L}$ (and similarly for $\mathbf{N}$ ) in ascending order $0 = {\lambda }_{0} \leq {\lambda }_{1} \leq \cdots \leq {\lambda }_{n - 1}$ . The eigenvectors ${\left\{ {\phi }_{i}\right\} }_{0 \leq i \leq n - 1}$ constitute a basis of the space of graph signals and can be considered as generalized Fourier modes. The eigenvalues ${\left\{ {\lambda }_{i}\right\} }_{0 \leq i \leq n - 1}$ characterize the variation of these Fourier modes over the graph and can be interpreted as (squared) frequencies.
|
| 44 |
+
|
| 45 |
+
|
| 46 |
+
|
| 47 |
+
Figure 2: Node feature and graph structure perturbations of the first graph in ENZYMES. The color coding of nodes illustrates their feature values, except (k-n) where the fragment assignment is shown.
|
| 48 |
+
|
| 49 |
+
§ 2.1 NODE FEATURE PERTURBATIONS
|
| 50 |
+
|
| 51 |
+
We first consider two perturbations that alter local node features, setting them either to a fixed constant (w.l.o.g., one) for all nodes, or to a one-hot encoding of the degree of the node. We refer to these perturbations as NoNodeFtrs (since constant node features carry no additional information) and NodeDeg, respectively. In addition, we consider a random node feature perturbation (RandFtrs) by sampling a one-dimensional feature for each node uniformly at random within $\left\lbrack {-1,1}\right\rbrack$ . Sensitivity to these perturbations, exhibited by a large decrease in predictive performance, may indicate that a dataset (or task) is dominated by highly informative node features.
|
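The three local node-feature perturbations above can be sketched in a few lines of NumPy. This is an illustrative sketch under our own naming, not the GTaxoGym implementation:

```python
import numpy as np

def no_node_ftrs(n):
    """NoNodeFtrs: constant feature (one) for every node."""
    return np.ones((n, 1))

def node_deg(adj):
    """NodeDeg: one-hot encoding of each node's degree."""
    deg = adj.sum(axis=1).astype(int)
    onehot = np.zeros((len(deg), deg.max() + 1))
    onehot[np.arange(len(deg)), deg] = 1.0
    return onehot

def rand_ftrs(n, seed=None):
    """RandFtrs: one scalar feature per node, uniform in [-1, 1]."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=(n, 1))
```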
| 52 |
+
|
| 53 |
+
We also develop spectral node feature perturbations. As in Euclidean settings, the Fourier decomposition can be used to decompose graph signals into a set of canonical signals, called Fourier modes, which are organized according to increasing variation (or frequency). In Euclidean Fourier analysis, these modes are sinusoidal waves oscillating at different frequencies. A standard practice in audio signal processing is to remove noise from a signal by identifying and removing certain Fourier modes or frequency bands. We generalize this technique to graph datasets and systematically remove certain graph Fourier modes to probe the importance of the corresponding frequency bands.
|
| 54 |
+
|
| 55 |
+
In this perturbation, we use the frequencies derived from the symmetric normalized graph Laplacian $\mathbf{N}$ and split them into three roughly equal-sized frequency bands (low, mid, high), i.e., bins of subsequent eigenvalues. To assess the importance of each of the frequency bands, we then apply hard band-pass filtering to the graph signals (node feature vectors), i.e., we project the signals on the span of the selected Fourier modes. More specifically, for each band, we let ${\mathbf{I}}_{\text{ band }}$ be a diagonal matrix with diagonal elements equal to one if the corresponding eigenvalue is in the band, and zero otherwise. Then, the hard band-pass filtered signal is computed as
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
{\mathbf{X}}_{\text{ band }} = \widetilde{\mathbf{\Phi }}{\mathbf{I}}_{\text{ band }}{\widetilde{\mathbf{\Phi }}}^{\top }\mathbf{X}. \tag{1}
|
| 59 |
+
$$
|
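Hard band-pass filtering as in (1) can be sketched via a full eigendecomposition of the normalized Laplacian; the band splitting into three roughly equal-sized index bins follows the text (function name is ours):

```python
import numpy as np

def hard_bandpass(adj, X, band):
    """Project node features X onto one third of the spectrum of the
    symmetric normalized Laplacian N = I - D^{-1/2} M D^{-1/2} (Eq. 1)."""
    deg = adj.sum(axis=1)
    with np.errstate(divide="ignore"):
        d = np.where(deg > 0, deg ** -0.5, 0.0)  # D^{-1/2}, isolated nodes -> 0
    N = np.eye(len(deg)) - d[:, None] * adj * d[None, :]
    lam, Phi = np.linalg.eigh(N)                 # eigenvalues in ascending order
    # Split eigenvalue indices into three roughly equal-sized bands.
    bands = np.array_split(np.arange(len(lam)), 3)
    sel = bands[{"low": 0, "mid": 1, "high": 2}[band]]
    I_band = np.zeros(len(lam))
    I_band[sel] = 1.0
    return Phi @ (I_band[:, None] * (Phi.T @ X))
```

Since the three bands partition the spectrum and the eigenbasis is orthonormal, the low-, mid-, and high-pass outputs sum back to the original features.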
| 60 |
+
|
| 61 |
+
The above band-pass filtering perturbation enables a precise selection of the frequency bands. However, it requires a full eigendecomposition of the normalized graph Laplacian, which is impractical for large graphs. We therefore provide an alternative approach based on wavelet bank filtering [13]. This leverages the fact that polynomial filters $h$ of the normalized graph Laplacian directly transform the spectrum via $h\left( \mathbf{N}\right) = \widetilde{\mathbf{\Phi }}h\left( \widetilde{\mathbf{\Lambda }}\right) {\widetilde{\mathbf{\Phi }}}^{\top }$ , yielding the frequency response $h\left( \lambda \right)$ for any eigenvalue $\lambda$ of N. This is usually done by taking the symmetrized diffusion matrix
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
\mathbf{T} = \frac{1}{2}\left( {\mathbf{I} + {\mathbf{D}}^{-\frac{1}{2}}{\mathbf{{MD}}}^{-\frac{1}{2}}}\right) = \frac{1}{2}\left( {2\mathbf{I} - \mathbf{N}}\right) . \tag{2}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
By construction, $\mathbf{T}$ admits the same eigenbasis as $\mathbf{N}$ but its eigenvalues are mapped from $\left\lbrack {0,2}\right\rbrack$ to $\left\lbrack {0,1}\right\rbrack$ via the frequency response $h\left( \lambda \right) = 1 - \lambda /2$ . As a result, large eigenvalues are mapped to small values (and vice versa). Next, we construct diffusion wavelets [15] that consist of differences of dyadic powers ${2}^{k},k \in {\mathbb{N}}_{0}$ of $\mathbf{T}$ , i.e., ${\mathbf{\Psi }}_{k} = {\mathbf{T}}^{{2}^{k - 1}} - {\mathbf{T}}^{{2}^{k}}$ , which act as band-pass filters on the signal. Intuitively, this operator "compares" two neighborhoods of different sizes (radius ${2}^{k - 1}$ and ${2}^{k}$ ) at each node. Diffusion wavelets are usually maintained in a wavelet bank ${\mathcal{W}}_{K} = {\left\{ {\mathbf{\Psi }}_{k},{\mathbf{\Phi }}_{K}\right\} }_{k = 0}^{K}$ , which contains an additional high-pass filter ${\mathbf{\Psi }}_{0} = \mathbf{I} - \mathbf{T}$ and low-pass filter ${\mathbf{\Phi }}_{K} = {\mathbf{T}}^{{2}^{K}}$ . In our experiments, we choose $K = 1$ , resulting in the following low-, mid-, and high-pass filtered node features:
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
{\mathbf{X}}_{\text{ high }} = \left( {\mathbf{I} - \mathbf{T}}\right) \mathbf{X},\;{\mathbf{X}}_{\text{ mid }} = \left( {\mathbf{T} - {\mathbf{T}}^{2}}\right) \mathbf{X},\;{\mathbf{X}}_{\text{ low }} = {\mathbf{T}}^{2}\mathbf{X}. \tag{3}
|
| 71 |
+
$$
|
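The wavelet bank filtering of (2)-(3) avoids the eigendecomposition entirely; a minimal dense-matrix sketch (in practice sparse matrix products would be used, as the text notes):

```python
import numpy as np

def wavelet_filters(adj, X):
    """Low-, mid-, and high-pass features via the symmetrized diffusion
    matrix T = (I + D^{-1/2} M D^{-1/2}) / 2 (Eqs. 2-3)."""
    deg = adj.sum(axis=1)
    with np.errstate(divide="ignore"):
        d = np.where(deg > 0, deg ** -0.5, 0.0)
    T = 0.5 * (np.eye(len(deg)) + d[:, None] * adj * d[None, :])
    TX = T @ X          # one diffusion step
    T2X = T @ TX        # two diffusion steps
    return {"high": X - TX, "mid": TX - T2X, "low": T2X}
```

By construction the three components sum exactly to the input features, mirroring the telescoping structure of the wavelet bank.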
| 72 |
+
|
| 73 |
+
These filters correspond to frequency responses ${h}_{\text{ high }}\left( \lambda \right) = \lambda /2,{h}_{\text{ mid }}\left( \lambda \right) = \left( {1 - \lambda /2}\right) - {\left( 1 - \lambda /2\right) }^{2}$ and ${h}_{\text{ low }}\left( \lambda \right) = {\left( 1 - \lambda /2\right) }^{2}$ . Therefore, the low-pass filter preserves low-frequency information while suppressing high-frequency information, whereas the high-pass filter does the opposite. The mid-pass filter attenuates all frequencies, but it preserves substantially more middle-frequency information than high- or low-frequency information.
|
| 74 |
+
|
| 75 |
+
This filtering may therefore be interpreted as an approximation of the hard band-pass filtering discussed above. From the spatial message-passing perspective, low-pass filtering is equivalent to local averaging of the node features, which has profound implications for the homophilic and heterophilic characteristics of the datasets (Sec. 3.2). Finally, since the computations in (3) can be carried out via sparse matrix multiplications, they scale well to large graphs. We therefore use wavelet bank filtering for the datasets with larger graphs considered in Sec. 3.2, while for the smaller graphs considered in Sec. 3.1 we employ the direct band-pass filtering approach.
|
| 76 |
+
|
| 77 |
+
§ 2.2 GRAPH STRUCTURE PERTURBATIONS
|
| 78 |
+
|
| 79 |
+
The following perturbations act on the graph structure by altering the adjacency matrix. By removing all edges (NoEdges) or making the graph fully connected (FullyConn), we eliminate the structural information completely and essentially turn the graph into a set; the two perturbations differ in whether all nodes are processed independently or jointly. Due to computational limitations, FullyConn is only applied to the inductive datasets in Sec. 3.1. Furthermore, we consider a degree-preserving random edge rewiring perturbation (RandRewire): in each step, we randomly sample a pair of edges and randomly exchange their end nodes, repeating this process without replacement until ${50}\%$ of the edges have been rewired.
|
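A sketch of degree-preserving rewiring in the spirit of RandRewire. Note this simplified variant resamples edge pairs and skips swaps that would create self-loops or duplicate edges, rather than sampling strictly without replacement as in the paper:

```python
import random

def rand_rewire(edges, frac=0.5, seed=0):
    """Swap endpoints of random edge pairs (u,v),(x,y) -> (u,y),(x,v)
    until roughly `frac` of the edges have been rewired.
    Degrees are preserved by construction."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    target = int(frac * len(edges))
    rewired, attempts = 0, 0
    while rewired < target and attempts < 100 * len(edges):
        attempts += 1
        i, j = rng.sample(range(len(edges)), 2)
        (u, v), (x, y) = edges[i], edges[j]
        if len({u, v, x, y}) < 4:
            continue  # swap would create a self-loop
        if frozenset((u, y)) in edge_set or frozenset((x, v)) in edge_set:
            continue  # swap would create a duplicate edge
        edge_set -= {frozenset((u, v)), frozenset((x, y))}
        edge_set |= {frozenset((u, y)), frozenset((x, v))}
        edges[i], edges[j] = (u, y), (x, v)
        rewired += 2
    return edges
```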
| 80 |
+
|
| 81 |
+
To inspect the importance of local vs. global graph structure, we designed the Frag- $k$ perturbations, which randomly partition the graph into connected components consisting of nodes whose distance to a seed node is less than $k$ . Specifically, we randomly draw one seed node at a time and extract its $k$ -hop neighborhood by eliminating all edges between this new fragment and the rest of the graph; we repeat this process on the remaining graph until the whole graph is processed. A smaller $k$ implies smaller components, and hence discards the global structure and long-range interactions.
|
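The Frag-$k$ procedure above can be sketched with breadth-first search, taking the $k$-hop neighborhood of each random seed as the nodes within distance $k$ that are not yet assigned to a fragment (an interpretation of the text; the exact GTaxoGym implementation may differ):

```python
import random
from collections import deque

def frag_k(adj_list, k, seed=0):
    """Frag-k: repeatedly draw a random unassigned seed node and cut out
    its k-hop neighborhood (restricted to unassigned nodes) as a fragment.
    Returns a fragment id per node."""
    rng = random.Random(seed)
    n = len(adj_list)
    frag = [-1] * n
    remaining = list(range(n))
    fid = 0
    while remaining:
        seed_node = rng.choice(remaining)
        dist = {seed_node: 0}
        q = deque([seed_node])
        while q:                      # BFS up to depth k
            u = q.popleft()
            if dist[u] == k:
                continue
            for v in adj_list[u]:
                if frag[v] == -1 and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for u in dist:
            frag[u] = fid
        fid += 1
        remaining = [u for u in remaining if frag[u] == -1]
    return frag
```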
| 82 |
+
|
| 83 |
+
Graph fragmentations can also be constructed using spectral graph theory. In our taxonomization, we adopt one such method, which we refer to as Fiedler fragmentation (FiedlerFrag) (see [33] and the references therein). In the case when the graph $G$ is connected, ${\phi }_{0}$ , the eigenvector of the graph Laplacian $\mathbf{L}$ corresponding to ${\lambda }_{0} = 0$ , is constant. The eigenvector ${\phi }_{1}$ corresponding to the next smallest eigenvalue, ${\lambda }_{1}$ , is known as the Fiedler vector [21]. Since ${\phi }_{0}$ is constant, it follows that ${\phi }_{1}$ has zero average. This motivates partitioning the graph into two sets of vertices, one where ${\phi }_{1}$ is positive and the other where ${\phi }_{1}$ is negative. We refer to this process as binary Fiedler fragmentation. This heuristic is used to construct the ratio cut for a connected graph [26]. The ratio cut partitions a connected graph into two disjoint connected components $V = U \cup W$ , such that the objective $\left| {E\left( {U,W}\right) }\right| /\left( {\left| U\right| \cdot \left| W\right| }\right)$ is minimized, where $E\left( {U,W}\right) \mathrel{\text{ := }} \{ \left( {u,w}\right) \in E : u \in U,w \in W\}$ is the set of removed edges when fragmenting $G$ accordingly. This can be seen as a combination of the min cut objective (numerator), while encouraging a balanced partition (denominator).
|
| 84 |
+
|
| 85 |
+
FiedlerFrag is based on iteratively applying binary Fiedler fragmentation. In each step, we separate the graph into its connected components and apply binary Fiedler fragmentation to the largest component. We repeat this process until either we reach 200 iterations or the size of the largest connected component falls below 20. In contrast to the random fragmentation Frag- $k$ , this perturbation preserves densely connected regions of the graph and eliminates connections between them. Thus, FiedlerFrag tests the importance of inter-community message flow. Due to computational limits, we only apply FiedlerFrag to the inductive datasets in Sec. 3.1, for which this computation is feasible.
|
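A single binary Fiedler fragmentation step can be sketched by partitioning on the sign of the Fiedler vector (function name is ours; the paper's iterative procedure would apply this repeatedly to the largest component):

```python
import numpy as np

def binary_fiedler_split(adj):
    """Split a connected graph by the sign of the Fiedler vector, i.e. the
    eigenvector of L = D - M for the second-smallest eigenvalue."""
    L = np.diag(adj.sum(axis=1)) - adj
    lam, phi = np.linalg.eigh(L)   # eigenvalues sorted ascending
    fiedler = phi[:, 1]            # phi[:, 0] is constant for connected graphs
    U = np.where(fiedler >= 0)[0]
    W = np.where(fiedler < 0)[0]
    return U, W
```

On two triangles joined by a single edge, this recovers the two triangles as the two sides of the cut, as the ratio-cut heuristic suggests.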
| 86 |
+
|
| 87 |
+
§ 2.3 DATA-DRIVEN TAXONOMIZATION BY HIERARCHICAL CLUSTERING
|
| 88 |
+
|
| 89 |
+
To systematically classify the graph datasets, we use Ward's method [58] for hierarchical clustering analysis of their sensitivity profiles. The sensitivity profiles are established empirically by contrasting the performance of a GNN model on a perturbed dataset and on the original dataset. To quantify this performance change, we use the ${\log }_{2}$ -transformed ratio of test AUROC (area under the ROC curve). A sensitivity profile is thus a 1-D vector with as many elements as perturbation experiments. See Figure 1 and Appendix A for further details.
|
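The profile computation itself is a one-liner per perturbation. A minimal sketch (the AUROC values below are made-up placeholders, not results from the paper):

```python
import numpy as np

def sensitivity_profile(original_auroc, perturbed_aurocs):
    """Each element is log2(perturbed AUROC / original AUROC):
    0 means the perturbation had no effect, -1 means the AUROC halved."""
    names = list(perturbed_aurocs)
    vals = np.array([perturbed_aurocs[k] for k in names], dtype=float)
    return dict(zip(names, np.log2(vals / original_auroc)))
```

Stacking these vectors over all datasets yields the matrix fed to Ward's hierarchical clustering.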
| 90 |
+
|
| 91 |
+
|
| 92 |
+
|
| 93 |
+
Figure 3: Visualization of (a) inductive and (b) transductive datasets based on PCA of their perturbation sensitivity profiles according to a GCN model. The datasets are labeled according to their taxonomization by hierarchical clustering, shown in Figures 4 and 6, which corroborates the clustering that emerges in the PCA plots. The bottom panels show the loadings of the first two principal components and (in parentheses) the percentage of variance explained by each.
|
| 94 |
+
|
| 95 |
+
In order to generate sensitivity profiles, we must select suitable GNN models based on several practical considerations. First, the model has to be expressive enough to efficiently leverage the aspects of the node features and graph structure that we perturb; otherwise, our analysis cannot uncover reliance on these properties. Second, the model needs to be general enough to be applicable to a wide variety of datasets, avoiding dataset-specific adjustments that may lead to profiles that are not comparable between datasets. We therefore did not aim for specialized models that maximize performance, but rather for models that (i) achieve at least baseline performance comparable to published works over all datasets, (ii) have manageable computational complexity to facilitate large-scale experimentation, and (iii) use well-established and theoretically well-understood architectures.
|
| 96 |
+
|
| 97 |
+
With these criteria in mind, we focused on two popular MPNN models in our analysis: GCN [35] and GIN [63]. The original GCN serves as an ideal starting point, as its abilities and limitations are well understood. However, we also wanted to perform the taxonomization with a provably more expressive, more recent method, which motivated our selection of GIN as the second architecture. We emphasize that the main focus here is not to benchmark GNN models per se, but rather to taxonomize the graph datasets (and accompanying tasks) used in such benchmarks. Nevertheless, we have also generated sensitivity profiles with additional models to comparatively demonstrate the robustness of our approach: 2-layer GIN, ChebNet [15], GatedGCN [7], and GCNII [11]; see Figure 5.
§ 3 RESULTS
Each of the 48 datasets we consider is equipped with either a node classification or graph classification task. In the case of node classification, we further differentiate between the inductive setting, in which learning is done on a set of graphs and the generalization occurs from a training set of graphs to a test set, and the transductive setting, in which learning is done in one (large) graph and the generalization occurs between subsets of nodes in this graph. Graph classification tasks, by contrast, always appear in an inductive setting. The only major difference between graph classification and inductive node classification is that prior to final prediction, the hidden representations of all nodes are pooled into a single graph-level representation. In the following two subsections, we provide an analysis of the sensitivity profiles for datasets with inductive and transductive tasks.
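The analysis of sensitivity profiles described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the profile matrix is randomly generated here, and the cluster count and library choices (scikit-learn, SciPy) are our assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sensitivity profiles: rows = datasets, columns = perturbations.
# Each entry is model performance on the perturbed dataset relative to the
# unmodified dataset (1.0 = no change, < 1.0 = performance drop).
rng = np.random.default_rng(0)
profiles = rng.uniform(0.5, 1.0, size=(8, 5))  # 8 datasets, 5 perturbations

# Embed the profiles in 2D with PCA (as in the Figure 3 plots).
pca = PCA(n_components=2)
coords = pca.fit_transform(profiles)
print("variance explained:", pca.explained_variance_ratio_)

# Taxonomize the datasets by hierarchical clustering on the same profiles.
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")  # e.g., three clusters
print("cluster labels:", labels)
```

Each row of `coords` positions one dataset in the PCA plots, while `labels` assigns it to a cluster; agreement between the two views is what the taxonomy figures visualize.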
Figure 4: Taxonomy of inductive graph learning datasets via graph perturbations. For each dataset and perturbation combination, we show the GCN model performance relative to its performance on the unmodified dataset.
§ 3.1 TAXONOMY OF INDUCTIVE BENCHMARKS
Datasets. We examine a total of 23 datasets, 20 of which are equipped with a graph-classification task (inductive by nature) and the other three are equipped with an inductive node-classification task. Of these datasets, 17 are derived from real-world data, while the other six are synthetically generated.
For real-world data, we consider several domains. Biochemistry tasks are the most ubiquitous, including compound classification based on effects on cancer or HIV inhibition (NCI1 & NCI109 [57], ogbg-molhiv [31]), protein-protein interaction prediction (PPI [68, 28]), multilabel compound classification based on toxicity on biological targets (ogbg-moltox21 [31]), and multiclass classification of enzymes (ENZYMES [31]). We also consider superpixel-based graph classification as an extension of image classification (MNIST & CIFAR10 [17]), collaboration datasets (IMDB-BINARY & COLLAB [64]), and social graphs (REDDIT-BINARY & REDDIT-MULTI-5K [64]).
For synthetic data, we have a concrete understanding of their graph domain properties and how these properties relate to their prediction task. This allows us to derive a deeper understanding of their sensitivity profiles. The six synthetic datasets in our study make use of a varied set of graph generation algorithms. Small-world [65] is based on graph generation with the Watts-Strogatz (WS) model; the task is to classify graphs based on average path length. Scale-free [65] retains the same task definition, but the graph generation algorithm is an extension of the Barabási-Albert (BA) model proposed by Holme and Kim [30]. PATTERN and CLUSTER are node-level classification tasks generated with stochastic block models (SBM) [29]. Synthie [42] graphs are derived by first sampling graphs from the well-known Erdős-Rényi (ER) model, then deriving each class of graphs by a specific graph surgery and sampling node features from a distinct distribution for each class. Similarly, SYNTHETICnew [18] graphs are generated from a random graph, where different classes are formed by specific modifications to the original graph structure and node features. Further details of dataset definitions and synthetic graph generation algorithms are provided in Appendix C.
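All four families of random-graph generators named above are available off the shelf; the sketch below shows how comparable toy graphs could be drawn with NetworkX (the Holme-Kim extension of BA corresponds to `powerlaw_cluster_graph`). The sizes and probabilities are illustrative placeholders, not the parameters used by the datasets.

```python
import networkx as nx

n = 100  # nodes per graph; illustrative value, not the datasets' setting

# Watts-Strogatz small-world graph (as in the Small-world dataset)
ws = nx.watts_strogatz_graph(n, k=4, p=0.1, seed=0)

# Holme-Kim extension of Barabasi-Albert (as in the Scale-free dataset)
hk = nx.powerlaw_cluster_graph(n, m=2, p=0.3, seed=0)

# Stochastic block model (as in PATTERN / CLUSTER)
sbm = nx.stochastic_block_model(
    sizes=[50, 50],
    p=[[0.2, 0.02], [0.02, 0.2]],  # dense within blocks, sparse across
    seed=0,
)

# Erdos-Renyi random graph (the starting point of Synthie / SYNTHETICnew)
er = nx.erdos_renyi_graph(n, p=0.05, seed=0)

for name, g in [("WS", ws), ("Holme-Kim", hk), ("SBM", sbm), ("ER", er)]:
    print(name, g.number_of_nodes(), g.number_of_edges())
```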
Insights. Here we itemize the main insights into inductive datasets. Our full taxonomy is shown in Figures 4 and 3a, with a detailed analysis of individual clusters given in Appendix B.1.
* Three distinct groups of datasets. We identify a categorization into three dataset clusters I-1, I-2, and I-3 that emerges from both the hierarchical clustering and the PCA. The datasets in I-1 and I-2 exhibit stronger node feature dependency and do not encode crucial information in the graph structure. The main differentiating factor between I-1 and I-2 is their relative sensitivity to node feature perturbations, in particular, how well NodeDeg can substitute the original node features. On the other hand, datasets in I-3 rely considerably more on graph structure for correct task prediction. This is also reflected by the first two principal components (Figure 3a), where PC1 approximately corresponds to structural perturbations and PC2 to node feature perturbations.
* No clear clustering by dataset domain. While datasets that are derived in a similar fashion cluster together (e.g., REDDIT-* datasets), in general, each of the three clusters contains datasets from a variety of application domains. Not all molecular datasets behave alike; e.g., ogbg-mol* datasets in I-2 considerably differ from NCI* datasets in I-3.
* Synthetic datasets do not fully represent real-world scenarios. CLUSTER, SYNTHETICnew, and PATTERN lie at the periphery of the PCA embeddings, suggesting that existing synthetic datasets do not resemble the type of complexity encountered in real-world data. Hence, one should use synthetic datasets in conjunction with real-world datasets to comprehensively evaluate GNN performance rather than solely relying on synthetic ones.
* Representative set. One can now select a representative subset of all datasets to cover the observed heterogeneity among the datasets. Our recommendation: CIFAR10 from I-1; D&D, ogbg-molhiv from I-2; NCI1, COLLAB, REDDIT-MULTI-5K, CLUSTER from I-3.
* Robustness w.r.t. GNN choice. In addition to GCN, we have performed our perturbation analysis w.r.t. GIN [63], 2-layer GIN, ChebNet [15], GatedGCN [7], and GCN II [11]. These models were selected to cover a variety of inductive model biases: GIN is provably 1-WL expressive, ChebNet uses a higher-order approximation of the Laplacian, GatedGCN employs gating akin to attention, and GCN II leverages skip connections and identity mapping to alleviate oversmoothing. We have also tested a 2-layer GIN to probe robustness to the number of message-passing layers. The taxonomies w.r.t. the other models (Figure B.1) are congruent with that of GCN. Given the differing inductive biases and representational capacities, some differences in the sensitivity profiles are not only expected but desired, as they validate the models' distinct roles in benchmarking. The resulting profiles can be used for a detailed comparative analysis of these models, but the overall conclusions remain consistent. This consistency is further validated by our correlation analysis among these models, shown in Figure 5. The Pearson correlation coefficients of all pairs are above 90%, implying that our taxonomy is sufficiently robust w.r.t. different GNNs and the number of layers.
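This kind of correlation check can be sketched as follows, treating each model's sensitivity profile as a datasets-by-perturbations matrix and correlating the flattened matrices; the profiles here are synthetic placeholders, with the second model's profile constructed as a slightly perturbed copy of the first to mimic two near-congruent GNNs.

```python
import numpy as np

# Hypothetical sensitivity profiles from two GNN models over the same
# datasets and perturbations (rows = datasets, columns = perturbations).
rng = np.random.default_rng(0)
profile_gcn = rng.uniform(0.5, 1.0, size=(8, 5))
profile_gin = profile_gcn + rng.normal(scale=0.02, size=(8, 5))  # near-identical

# Flatten each model's profile matrix and correlate the two vectors.
r = np.corrcoef(profile_gcn.ravel(), profile_gin.ravel())[0, 1]
print(f"Pearson r = {r:.3f}")
```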
Figure 5: Pearson correlation between profiles derived by six GNN models.
§ 3.2 TAXONOMY OF TRANSDUCTIVE BENCHMARKS
Datasets. We selected a wide variety of 25 transductive datasets with node classification tasks, including citation networks, social networks, and other web-page-derived networks (see Appendix C). In citation networks, such as CitationFull (CF) [5], nodes correspond to papers and edges to citations between them. In web-page-derived networks, like WikiNet [48], Actor [48], and WikiCS [40], edges correspond to hyperlinks between pages. In social networks, like Deezer (DzEu) [50], LastFM (LFMA) [50], Twitch [49], Facebook (FBPP) [49], Github [49], and Coau [52], nodes and edges are based on a type of relationship, such as mutual friendship or co-authorship. Flickr [66] and Amazon [52] are constructed based on other notions of similarity between entities, such as co-purchasing and image property similarities. WebKB [48] contains networks of university web pages connected via hyperlinks. It is an example of a heterophilic dataset [45], since immediate neighbor nodes do not necessarily share the same labels (which correspond to a user's role, such as faculty or graduate student). By contrast, Cora, CiteSeer, and PubMed are known to be homophilic datasets, where nodes within a neighborhood are likely to share the same label. In fact, no less than 60% of the nodes in these networks have neighborhoods that share the same node label as the central node [40].
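The homophily/heterophily distinction drawn above is often quantified as the fraction of edges whose endpoints share a label. The sketch below computes this edge homophily ratio on a toy labeled graph; the graph, labels, and function name are ours, chosen purely for illustration.

```python
import networkx as nx

def edge_homophily(g: nx.Graph, labels: dict) -> float:
    """Fraction of edges whose endpoints share the same label."""
    same = sum(1 for u, v in g.edges() if labels[u] == labels[v])
    return same / g.number_of_edges()

# Toy homophilic graph: two label-aligned triangles joined by one edge.
g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (0, 2),   # class A triangle
                  (3, 4), (4, 5), (3, 5),   # class B triangle
                  (2, 3)])                  # single cross-class edge
labels = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(edge_homophily(g, labels))  # 6 of 7 edges are within-class
```

A ratio near 1 indicates a homophilic network (e.g., Cora-like), while values well below 0.5 indicate heterophily (e.g., WebKB-like).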
Insights. Below we list the main insights into transductive graph datasets and their taxonomy (Figures 6 and 3b). We refer the reader to Appendix B.2 for the analysis of individual clusters.
Figure 6: Taxonomization of transductive datasets based on sensitivity profiles w.r.t. a GCN model.
* Transductive datasets are uniformly insensitive to structural perturbations. Sensitivity profiles of all transductive datasets show high robustness to all graph structure perturbations. This is in stark contrast with the inductive datasets, where the largest cluster I-3 is defined by high sensitivity to structural perturbations. The graph connectivity may not be vital to every dataset/task; e.g., in WikiCS, word embeddings of Wikipedia pages may be sufficient for categorization without hyperlinks. While the observation that no dataset significantly depends on structural information is startling, it corroborates reports that MLPs or similar models augmented with label propagation outperform GNNs on several of these transductive datasets [23, 32].
* Three distinct groups of datasets. The transductive datasets are also categorized into three clusters, T-1, T-2, and T-3. T-1 consists of heterophilic datasets, such as WebKB and Actor [45, 39]. These are well separated from the others, as seen in the right half of the PCA plot (Figure 3b), primarily via PC1, and are characterized by a performance drop due to removal of the original node features (NoNodeFtrs, RandFtrs) and their replacement by node degrees (NodeDeg). T-3 is indifferent to both node feature and structure removal, implying redundancies between node features and graph structure for their tasks. T-2 datasets, on the other hand, experience significant performance degradation under NoNodeFtrs and RandFtrs, yet these drops are recovered under NodeDeg. This indicates that T-2 datasets have tasks for which structural summary information is sufficient, perhaps due to homophily.
* Representative set. Many datasets have very similar sensitivity profiles; thus, also factoring in graph size and original AUROC (avoiding saturated datasets), we make the following recommendation: WebKB-Wis, Actor from T-1; WikiNet-cham, WikiCS, Flickr from T-2; WikiNet-squir, Twitch-EN, GitHub from T-3.
§ 4 DISCUSSION
Our results quantify the extent to which graph features or structures are more important for downstream tasks, an important question brought up in classical works on graph kernels [37, 51]. We observed that more than half of the datasets contain rich node features. On average, excluding these features reduces GNN prediction performance more than excluding the entire graph structure, especially for transductive node-level tasks. Furthermore, low-frequency information in node features appears to be essential in most datasets that rely on node features. Historically, most graph data aimed to capture closeness among entities, which has prompted the development of local aggregation approaches, such as label propagation, personalized PageRank, and diffusion kernels [36, 14], all of which share the common principle of low-pass filtering. High-frequency information, on the other hand, may be important in recently emerging application areas, such as combinatorial optimization, logical reasoning, or biochemical property prediction, which require complex non-local representations.
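The low-/high-frequency distinction above refers to the graph Fourier basis given by the Laplacian eigenvectors. The toy sketch below makes this concrete on a small path graph; the graph, signal, and number of retained components are illustrative choices, not tied to any dataset in the study.

```python
import numpy as np

# Toy illustration of low- vs. high-frequency components of a node signal
# with respect to the graph Laplacian (path graph on 6 nodes).
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # combinatorial Laplacian

# Laplacian eigenvectors ordered by eigenvalue form the graph Fourier
# basis; small eigenvalues correspond to smooth (low-frequency) signals.
eigvals, eigvecs = np.linalg.eigh(L)

x = np.arange(n, dtype=float)           # a node feature signal
coeffs = eigvecs.T @ x                  # graph Fourier transform
low_pass = eigvecs[:, :2] @ coeffs[:2]  # keep the two smoothest components
print(np.round(low_pass, 3))
```

Label propagation, personalized PageRank, and diffusion kernels all attenuate the high-eigenvalue components in a manner analogous to the truncation in the last line.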
Further, despite the recent interest in the development of new methods that can leverage long-range dependencies and heterophily, adequate benchmarking datasets remain lacking or not readily accessible. Meanwhile, some recent efforts such as GraphWorld [46] aim to comprehensively profile a GNN's performance using a collection of synthetic datasets that cover an entire parametric space. Notably, our analysis demonstrates that synthetic tasks do not fully resemble the complexity of real-world applications. Hence, benchmarking based purely on synthetic datasets should be treated with caution, as the observed behavior might not be representative of real-world scenarios.
As a comprehensive benchmarking framework, our work provides several potential use cases beyond the taxonomy analysis presented here. One such usage is understanding the characteristics of any new datasets and how they are related to existing ones. For example, DeezerEurope (DzEu) is a relatively new dataset [50] that is less commonly benchmarked and studied than the other datasets we consider. The inclusion of DzEu in T-1 suggested its heterophilic nature, which indeed has been recently demonstrated [38]. On the other hand, since the sensitivity profiles naturally suggest the invariances that are important for different datasets from a practical standpoint, they could provide valuable guidance to the development of self-supervised learning and data augmentations for GNNs [62].
Finally, we observed that the overall patterns in sensitivity profiles remain similar regardless of whether we used GCN, GIN, or the other four models to derive them. Subtle differences in sensitivity profiles w.r.t. different GNN models are not only expected but also desired when comparing models that have distinct levels of expressivity. While we expect the overall patterns to be similar, more expressive models should provide enhanced resolution. One could then contrast taxonomization w.r.t. first-order GNNs (such as those we used) with more expressive higher-order GNNs, Transformer-based models with global attention, and others. We hope our work will also inspire future work to empirically validate the expressivity of new graph learning methods in this vein, beyond classical benchmarking.
Limitations and Future Work. Our perturbation-based approach is fundamentally limited in that we cannot test the significance of a property that we cannot perturb or that the reference GNN model cannot capture. Therefore, designing more sophisticated perturbation strategies to gauge specific relations could bring further insight into the datasets and GNN models alike. New perturbations may gauge the usefulness of geometric substructures such as cycles [3] or the effects of graph bottlenecks, e.g., by rewiring graphs to modify their "curvatures" [55]. Other perturbations could include graph sparsification (edge removal) [53] and graph coarsening (edge contraction) [10, 4].
A number of OGB node-level datasets are not included in this study due to the memory cost of typical MPNNs. Conducting an analysis based on recent scalable GNN models [20] would be an interesting avenue of future research. Further, we only considered classification tasks, omitting regression tasks, as their evaluation metrics are not easily comparable. One way to circumvent this issue would be to quantize regression tasks into classification tasks by binning their continuous targets. Additionally, we disregarded edge features in the two OGB molecular datasets we used. In future work, edge features could be leveraged by an edge-feature-aware generalization of MPNNs. The importance of edge features could then be analyzed by introducing new edge-feature perturbations. We also limited our analysis to node-level and graph-level tasks, but this framework could be further extended to link-prediction or edge-level tasks. While our perturbations could be used in this new scenario as well, new perturbations, such as the above-mentioned graph sparsification, would need to be considered. Similarly, hallmark models for link and relation prediction, outside MPNNs, should be considered.
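The proposed quantization of regression targets into classes can be done with a single binning step; the sketch below shows one such way using NumPy, with made-up targets and bin edges.

```python
import numpy as np

# Hypothetical continuous regression targets quantized into classes,
# one way to make regression tasks comparable to classification tasks.
targets = np.array([0.1, 0.4, 1.2, 2.8, 3.3, 0.9])
bin_edges = np.array([1.0, 2.0, 3.0])      # three edges -> four classes
classes = np.digitize(targets, bin_edges)  # class index per target
print(classes)  # [0 0 1 2 3 0]
```

In practice, the bin edges could be chosen as quantiles of the training targets so that the resulting classes are balanced.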
§ 5 CONCLUSION
We provide a systematic data-driven approach for taxonomizing a large collection of graph datasets - the first study of its kind. The core principle of our approach is to gauge the essential characteristics of a given dataset, with respect to its accompanying prediction task, by inspecting the downstream effects caused by perturbing its graph data. The resulting sensitivities to a diverse set of perturbations serve as "fingerprints" that allow us to identify datasets with similar characteristics. We derive several insights into the common benchmarks currently used in the field of graph representation learning, and make recommendations on the selection of representative benchmarking suites. Our analysis also puts forward a foundation for evaluating new benchmarking datasets that will likely emerge in the field.
papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_md/Initial_manuscript.md
# Neighborhood-aware Scalable Temporal Network Representation Learning
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
## Abstract
Temporal networks have been widely used to model real-world complex systems such as financial systems and e-commerce systems. In a temporal network, the joint neighborhood of a set of nodes often provides crucial structural information useful for predicting whether they may interact at a certain time. However, recent representation learning methods for temporal networks often fail to extract such information or depend on online construction of structural features, which is time-consuming. To address this issue, this work proposes the Neighborhood-Aware Temporal network model (NAT). For each node in the network, NAT abandons the commonly used single-vector representation and instead adopts a novel dictionary-type neighborhood representation. Such a dictionary representation records a down-sampled set of the neighboring nodes as keys, and allows fast construction of structural features for the joint neighborhood of multiple nodes. We also design a dedicated data structure termed N-cache to support parallel access and update of those dictionary representations on GPUs. We evaluate NAT over seven real-world large-scale temporal networks. NAT not only outperforms all cutting-edge baselines by 5.9% and 6.0% on average in transductive and inductive link prediction accuracy, respectively, but also remains scalable, achieving a 4.1-76.7× speed-up against the baselines that adopt joint structural features and a 1.6-4.0× speed-up against the baselines that cannot adopt those features. The link to the code: https://anonymous.4open.science/r/NAT-617D.
## 1 Introduction
Temporal networks are widely used as abstractions of real-world complex systems [1]. They model interacting elements as nodes, interactions as links, and the times when those interactions happen as timestamps on those links. Temporal networks often evolve by following certain patterns. Ranging from triadic closure [2] to higher-order motif closure [3-6], the interacting behaviors between multiple nodes have been shown to strongly depend on the network structure of their joint neighborhood. Researchers have leveraged this observation and built many practical systems to monitor and make predictions on temporal networks, such as anomaly detection in financial networks [7-9], friend recommendation in social networks [10], and collaborative filtering techniques in e-commerce systems [11].
Recently, graph neural networks (GNNs) have been widely used to encode network-structured data [12] and have achieved state-of-the-art (SOTA) performance in many tasks such as node/graph classification [13-15]. However, to predict how nodes interact with each other in temporal networks, a direct generalization of GNNs may not work well. Traditional GNNs often learn a vector representation for each node, and predict whether two nodes may interact (i.e., form a link) based on a combination (e.g., the inner product) of the two vector representations. This link prediction strategy often fails to capture the structural features of the joint neighborhood of the two nodes [16-19]. Consider a toy example with a temporal network in Fig. 1: node $w$ and node $v$ share the same local structure before ${t}_{3}$, so GNNs, including their variants on temporal networks (e.g., TGN [20]), will associate $w$ and $v$ with the same vector representation. Hence, GNNs will fail to correctly predict whether $u$ will interact with $w$ or $v$ at ${t}_{3}$. Here, GNNs cannot capture the important joint structural feature that $u$ and $v$ have a common neighbor $a$ before ${t}_{3}$. This issue makes almost all previous works that generalize GNNs to temporal networks provide only subpar performance [20-29]. Some recent works have been proposed to address this issue on static networks [18, 19, 30]. Their key idea is to construct node structural features to learn two-node joint neighborhood representations. Specifically, for two nodes of interest, they either label one linked node and construct its distance to the other node [31, 32], or label all nodes in the neighborhood with their distances to these two linked nodes [18, 33]. Traditional GNNs can afterwards encode such a feature-augmented neighborhood to achieve better inference. Although these ideas are theoretically powerful [18, 19] and provide good empirical performance on small networks, the induced models do not scale to large networks. This is because constructing such structural features is time-consuming and has to be done separately for each link to be predicted. This issue becomes even more severe over temporal networks, because two nodes may interact many times and thus the number of links to be predicted is often much larger than the corresponding number in static networks.
Figure 1: A toy example of predicting how a temporal network evolves. Given the historical temporal network shown on the left, the task is to predict whether $u$ prefers to interact with $v$ or $w$ at timestamp ${t}_{3}$. If this is a social network, $(u, v)$ is likely to happen because $u, v$ share a common neighbor $a$ and follow the principle of triadic closure [2]. However, traditional GNNs, even their generalizations to temporal networks, fail here, as they learn the same representations for node $v$ and node $w$ due to their identical structural contexts, as shown in the middle. On the right, we show a high-level abstraction of joint neighborhood features based on the N-caches of $u$ and $v$: in the N-caches for the 1-hop neighborhoods of both node $u$ and node $v$, $a$ appears among the keys. Joining these keys provides a structural feature that encodes this common-neighbor information for prediction.
In this work, we propose the Neighborhood-Aware Temporal network model (NAT), which addresses the aforementioned modeling issue while keeping the model scalable. The key novelty of NAT is to adopt dictionary-type neighborhood representations in place of a single-vector node representation, together with a computation-friendly neighborhood cache (N-cache) to maintain such dictionary-type representations. Specifically, the N-cache of a node stores several size-constrained dictionaries on GPUs. Each dictionary has a sampled collection of historical neighbors of the center node as keys, and aggregates the timestamps and features of the links connected to these neighbors as values (vector representations). With N-caches, NAT can construct, in parallel, the joint neighborhood structural features for a batch of node pairs to achieve fast link prediction. NAT can also efficiently update the N-caches with newly interacted neighbors by adopting hash-based search functions that support GPU parallel computation.
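The core idea can be illustrated with a drastically simplified, CPU-only sketch: a size-constrained dictionary per node that down-samples neighbors by random replacement, and a key join that recovers common neighbors. The class, method names, and replacement policy here are our own illustrative choices, not NAT's actual GPU hash-based implementation.

```python
import random

class NeighborCache:
    """Simplified sketch of a per-node N-cache: a size-constrained
    dictionary from down-sampled neighbor IDs to the latest interaction
    state (here just the timestamp)."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.cache = {}  # neighbor id -> last interaction timestamp
        self.rng = random.Random(seed)

    def update(self, neighbor: int, t: float) -> None:
        if neighbor in self.cache or len(self.cache) < self.capacity:
            self.cache[neighbor] = t
        elif self.rng.random() < 0.5:  # down-sample by random replacement
            victim = self.rng.choice(list(self.cache))
            del self.cache[victim]
            self.cache[neighbor] = t

def common_neighbors(a: "NeighborCache", b: "NeighborCache") -> set:
    """Joint structural feature: join the two caches on their keys."""
    return a.cache.keys() & b.cache.keys()

# The Figure 1 scenario: u and v both interacted with a (ID 0); u and w did not.
u, v, w = NeighborCache(4), NeighborCache(4), NeighborCache(4)
u.update(0, 1.0)
v.update(0, 2.0)
w.update(5, 2.0)
print(common_neighbors(u, v))  # {0}
print(common_neighbors(u, w))  # set()
```

The key-join in `common_neighbors` is what distinguishes the $(u, v)$ pair from the $(u, w)$ pair in Fig. 1, even though single-vector representations of $v$ and $w$ would be identical.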
NAT provides a novel solution for scalable temporal network representation learning. We evaluate NAT over 7 real-world temporal networks, one of which contains 1M+ nodes and almost 10M temporal links, to evaluate the scalability of NAT. NAT outperforms cutting-edge baselines by 5.9% and 6.0% on average in transductive and inductive link prediction accuracy, respectively. NAT achieves a 4.1-76.7× speed-up compared to the baseline CAWN [34], which constructs joint neighborhood features based on random walk sampling. NAT also achieves a 1.6-4.0× speed-up over the fastest baselines that do not construct joint neighborhood features (and thus suffer from the issue in Fig. 1) on large networks.
## 2 Related works
Neighborhood structure often governs how temporal networks evolve over time. Early temporal network prediction models count motifs [35, 36] or subgraphs [37] in the historical neighborhood of two interacting objects as features to predict their future interactions. These models cannot use network attributes and often suffer from scalability issues, because counting combinatorial structures is complicated and hard to parallelize. Network-embedding approaches for temporal networks [38-42] suffer from a similar problem, because the optimization problem used to compute node embeddings is often too complex to be solved again and again as the network evolves.
Recent works based on neural networks often provide more accurate and faster models, which benefit from parallel computation hardware and scalable system support [43, 44] for deep learning. Some of these works simply aggregate the sequence of links into network snapshots and treat temporal networks as a sequence of static network snapshots [21-26]. These methods may offer low prediction accuracy, as they cannot model interactions that lie at different levels of time granularity.
More advanced methods deal with link streams directly [20, 27-29, 45-47]. They generalize GNNs to encode temporal networks by associating each node with a vector representation and updating it based on the nodes that it interacts with. Some works use the representation of the node that one is currently interacting with [27, 28, 45]. Other works use those of the nodes that one has interacted with in the past [20, 29, 46, 47]. Either way, these methods suffer from the limited power of GNNs to capture structural features from the joint neighborhood of multiple nodes [17, 19]. Recently, CAWN [34] and HIT [4], inspired by the theory on static networks [18, 19], have proposed to construct such structural features to improve representation learning on temporal networks, CAWN for link prediction and HIT for higher-order interaction prediction. However, their computational complexity is high: for every queried link, they need to sample a large group of random walks and construct the structural features on CPUs, which limits the level of parallelism. NAT addresses these problems via neighborhood representations and N-caches.
## 3 Preliminaries: Notations and Problem Formulation
In this section, we introduce notations and the problem formulation. We consider a temporal network as a sequence of timestamped interactions between pairs of nodes.
Definition 3.1 (Temporal network) A temporal network $\mathcal{E}$ can be represented as $\mathcal{E} = \left\{ \left( u_1, v_1, t_1\right), \left( u_2, v_2, t_2\right), \cdots \right\}$ with $t_1 < t_2 < \cdots$, where $u_i, v_i$ denote the interacting node IDs of the $i$-th link and $t_i$ denotes its timestamp. Each temporal link $(u, v, t)$ may have a link feature $e_{u, v}^{t}$. We also denote the entire node set as $\mathcal{V}$. Without loss of generality, we use integers as node IDs, i.e., $\mathcal{V} = \{ 1, 2, \ldots \}$.
A good representation learning of temporal networks is able to efficiently and accurately predict how temporal networks evolve over time. Hence, we formulate our problem as follows.
Definition 3.2 (Problem formulation) Our problem is to learn a model that may use the historical information before $t$, i.e., $\left\{ \left( u^{\prime}, v^{\prime}, t^{\prime}\right) \in \mathcal{E} \mid t^{\prime} < t\right\}$, to accurately and efficiently predict whether there will be a temporal link between two nodes at time $t$, i.e., $(u, v, t)$.
Next, we define neighborhood in temporal networks.
Definition 3.3 ($k$-hop neighborhood in a temporal network) Given a timestamp $t$, denote by ${\mathcal{G}}_{t}$ the static network constructed from all the temporal links before $t$, with all timestamps removed. Given a node $v$, define the $k$-hop neighborhood of $v$ before time $t$, denoted ${\mathcal{N}}_{v}^{t, k}$, as the set of all nodes $u$ such that there exists at least one walk of length $k$ from $u$ to $v$ over ${\mathcal{G}}_{t}$. For two nodes $u, v$, their joint neighborhood up to $K$ hops refers to $\cup_{k=1}^{K}\left( {\mathcal{N}}_{v}^{t, k} \cup {\mathcal{N}}_{u}^{t, k} \right)$.
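As a concrete illustration of Definition 3.3, the $k$-hop neighborhood can be computed by repeated frontier expansion over the static graph ${\mathcal{G}}_{t}$. The sketch below is illustrative only: the node IDs and timestamps are hypothetical, and NAT itself never materializes these sets explicitly (it down-samples them via N-caches).

```python
from collections import defaultdict

def k_hop_neighborhood(links, v, t, k):
    """Nodes u with at least one walk of length k from u to v over G_t,
    the static graph of all temporal links with timestamp before t."""
    adj = defaultdict(set)
    for a, b, ts in links:
        if ts < t:              # keep only links strictly before t
            adj[a].add(b)
            adj[b].add(a)       # treat interactions as undirected
    frontier = {v}
    for _ in range(k):          # expand walks one step at a time
        frontier = {w for x in frontier for w in adj[x]}
    return frontier

links = [(1, 2, 1.0), (2, 3, 2.0), (3, 4, 2.5)]   # toy temporal network
print(k_hop_neighborhood(links, v=1, t=3.0, k=2))  # {1, 3}
```

Note that a node can appear in its own 2-hop neighborhood, since a length-2 walk may return to its origin; this matches the walk-based definition above.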
## 4 Methodology
In this section, we introduce NAT. NAT consists of two major components: (1) neighborhood representations maintained in N-caches, and (2) the construction of joint neighborhood features with neural-network-based encoding.
### 4.1 Neighborhood Representations and N-caches
In NAT, a node's representation is tracked over time by a fixed-size memory module, an N-cache, as the temporal network evolves; Fig. 2 Left gives an illustration. In contrast to all previous methods, which adopt a single vector representation for each node $u$, NAT adopts neighborhood representations $\left( {Z}_{u}^{(0)}(t), {Z}_{u}^{(1)}(t), \ldots, {Z}_{u}^{(K)}(t) \right)$, where ${Z}_{u}^{(k)}(t)$ denotes the $k$-hop neighborhood representation, for $k = 0, 1, \ldots, K$. These representations may evolve over time; for notational simplicity, we omit the timestamps when they can be inferred from the context. The main goal of tracking these neighborhood representations is to enable efficient construction of structural features, detailed in Sec. 4.2. Below, we first explain the neighborhood representations from the modeling perspective and how they evolve over time, and then introduce the scalable implementation of N-caches.
Modeling. For a node $u$, the 0-hop representation, also termed the self-representation ${Z}_{u}^{(0)}$, simply works as the standard node representation for $u$. It gets updated via an RNN, ${Z}_{u}^{(0)} \leftarrow \mathbf{RNN}\left( {Z}_{u}^{(0)}, \left[ {Z}_{v}^{(0)}, {t}_{3}, {e}_{u,v} \right] \right)$, when node $u$ interacts with another node $v$, as shown in Fig. 2 Left. The remaining neighborhood representations are more involved. To give some intuition, we first introduce the 1-hop representation ${Z}_{u}^{(1)}$. ${Z}_{u}^{(1)}$ is a dictionary whose keys, denoted by $\operatorname{key}\left( {Z}_{u}^{(1)} \right)$, correspond to a down-sampled set of the (IDs of) nodes in the 1-hop neighborhood of $u$. For a node $a$ in $\operatorname{key}\left( {Z}_{u}^{(1)} \right)$, the dictionary value, denoted by ${Z}_{u,a}^{(1)}$, is a vector representation summarizing the previous interactions between $u$ and $a$. ${Z}_{u}^{(1)}$ is updated as the temporal network evolves. For example, in Fig. 1, as $v$ interacts with $u$ at time ${t}_{3}$ with link feature ${e}_{u,v}$, the entry in ${Z}_{u}^{(1)}$ that corresponds to $v$, namely ${Z}_{u,v}^{(1)}$, gets updated via an RNN: ${Z}_{u,v}^{(1)} \leftarrow \mathbf{RNN}\left( {Z}_{u,v}^{(1)}, \left[ {Z}_{v}^{(0)}, {t}_{3}, {e}_{u,v} \right] \right)$. If ${Z}_{u,v}^{(1)}$ does not yet exist in ${Z}_{u}^{(1)}$ (e.g., at the first $v, u$ interaction), a default initialization of ${Z}_{u,v}^{(1)}$ is used. Once updated, the new value ${Z}_{u,v}^{(1)}$ paired with the key (node ID) $v$ is inserted into ${Z}_{u}^{(1)}$.
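The update rule above can be sketched with a vanilla RNN cell. This is only an illustration: NAT's actual RNN cell, time encoding, and dimensions may differ, and the weight matrices here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
F = 4                                        # representation dimension (illustrative)
Wh = rng.normal(scale=0.1, size=(F, F))      # recurrent weights (random placeholders)
Wx = rng.normal(scale=0.1, size=(F, F + 2))  # input weights for [Z_v^(0), t, e]

def rnn_update(z_old, z_v, t, e):
    """One step of Z <- tanh(Wh Z + Wx [Z_v^(0), t, e])."""
    x = np.concatenate([z_v, [t], [e]])
    return np.tanh(Wh @ z_old + Wx @ x)

# u (ID 1) interacts with v (ID 2) at t3 = 3.0 with link feature e = 1.0:
Z_u0 = rnn_update(np.zeros(F), rng.normal(size=F), t=3.0, e=1.0)  # self-representation
Z_u1 = {}                                    # 1-hop dictionary of u, keyed by node ID
Z_u1[2] = rnn_update(Z_u1.get(2, np.zeros(F)), rng.normal(size=F), t=3.0, e=1.0)
print(Z_u0.shape, sorted(Z_u1))              # (4,) [2]
```

The `Z_u1.get(2, np.zeros(F))` call mirrors the default initialization used at a first $v, u$ interaction.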
<table><tr><td>No.</td><td>Notations</td><td>Definitions</td></tr><tr><td>1.</td><td>${Z}_{u}^{\left( k\right) }$</td><td>A dictionary (with values ${Z}_{u, a}^{\left( k\right) }$, of size ${M}_{k}$) denoting the $k$-hop neighborhood representation for node $u$.</td></tr><tr><td>2.</td><td>${Z}_{u, a}^{\left( k\right) }$</td><td>A vector (of length $F$ for $k \geq 1$) among the values of ${Z}_{u}^{\left( k\right) }$ representing node $a$ as a $k$-hop neighbor of $u$.</td></tr><tr><td>3.</td><td>${s}_{u}^{\left( k\right) }$</td><td>An auxiliary array recording the node IDs currently stored as the keys of ${Z}_{u}^{\left( k\right) }$.</td></tr><tr><td>4.</td><td>${\mathrm{{DE}}}_{u}^{t}\left( a\right)$</td><td>The distance encoding of node $a$ based on the keys of the N-caches of node $u$ at time $t$ (Eq. (1)).</td></tr><tr><td>5.</td><td>hash(a)</td><td>The hash function mapping a node ID $a$ to the position of ${Z}_{u, a}^{\left( k\right) }$ in the $k$-hop N-cache of any node $u$.</td></tr></table>

Figure 2: Neighborhood representations and joining neighborhood features & representations to make predictions. Left: Neighborhood representations of a node. Node $u$ interacts with $v$ at ${t}_{3}$ in the example of Fig. 1. The 0-hop (self) representation and 1-hop representations are updated based on ${Z}_{v}^{(0)}$; the 2-hop representations are updated by inserting ${Z}_{v}^{(1)}$. The ${Z}_{u}^{(k)}$'s are maintained in N-caches. Right: In the example of Fig. 1, to predict the link $\left( u, v, {t}_{3} \right)$, the neighborhood representations of node $u$ and node $v$ are joined: the structural feature DE is constructed according to Eq. (1); the representations are sum-pooled according to Eq. (2). Then, an attention layer (Eq. (3)) makes the final prediction.
One remark is that for the input timestamps ${t}_{i}$, we encode them with Fourier features before feeding them into the RNNs, i.e., with learnable parameters ${\omega}_{i}$, $1 \leq i \leq d$, $\text{T-encoding}(t) = \left[ \cos(\omega_1 t), \sin(\omega_1 t), \ldots, \cos(\omega_d t), \sin(\omega_d t) \right]$, which has proven useful for temporal network representation learning [4, 20, 29, 34, 48, 49].
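A minimal sketch of this time encoding, with fixed frequencies standing in for the learnable $\omega_i$'s (the specific values are hypothetical):

```python
import numpy as np

def t_encoding(t, omega):
    """T-encoding(t) = [cos(w1 t), sin(w1 t), ..., cos(wd t), sin(wd t)]."""
    angles = np.asarray(omega) * t
    # stack cos/sin per frequency, then flatten to the interleaved layout
    return np.stack([np.cos(angles), np.sin(angles)], axis=-1).reshape(-1)

omega = np.array([1.0, 0.1, 0.01])   # d = 3 fixed frequencies (illustrative)
enc = t_encoding(2.5, omega)
print(enc.shape)                      # (6,)
```

In NAT the frequencies would be trained jointly with the rest of the model; spanning several orders of magnitude lets the encoding distinguish both short and long time gaps.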
The larger-hop ($k > 1$) neighborhood representation ${Z}_{u}^{(k)}$ is also a dictionary. Similarly, the keys of ${Z}_{u}^{(k)}$ correspond to the nodes that lie in the $k$-hop neighborhood of $u$. ${Z}_{u}^{(k)}$ is updated as follows: if $u$ interacts with $v$, then $v$'s $(k-1)$-hop neighborhood by definition becomes part of the $k$-hop neighborhood of $u$ after the interaction. Given this observation, ${Z}_{u}^{(k)}$ can be updated using ${Z}_{v}^{(k-1)}$. However, we avoid using an RNN for the larger-hop update to reduce complexity. Instead, we directly insert ${Z}_{v}^{(k-1)}$ into ${Z}_{u}^{(k)}$, i.e., setting ${Z}_{u,a}^{(k)} \leftarrow {Z}_{v,a}^{(k-1)}$ for all $a \in \operatorname{key}\left( {Z}_{v}^{(k-1)} \right)$. If ${Z}_{u,a}^{(k)}$ already exists before the insertion, we simply replace it.
Next, we introduce the implementation of the above representations via N-caches. Readers who only care about the learning models may skip this part and go directly to Sec. 4.2. The maintenance of N-caches (i.e., the neighborhood representations) as the network evolves is summarized in Alg. 1.
Scalable Implementation. Neighborhood representations cannot be implemented directly via Python dictionaries if their maintenance is to scale. Instead, we adopt the following three design techniques: (a) setting a size limit; (b) parallelizing hash-maps; (c) addressing collisions.
Algorithm 1: N-caches construction and update $\left( {\mathcal{V},\mathcal{E},\alpha }\right)$
---

for $k$ from 0 to 2 (consider only two hops) do

  for $u$ in $\mathcal{V}$, in parallel, do

   Initialize fixed-size dictionaries ${Z}_{u}^{(k)}$ in GPU with key spaces ${s}_{u}^{(k)}$ and value spaces;

for $(u, v, t, e)$ in each mini-batch of $\mathcal{E}$, in parallel, do

  ${Z}_{u}^{(0)} \leftarrow \mathbf{RNN}\left( {Z}_{u}^{(0)}, \left[ {Z}_{v}^{(0)}, t, e \right] \right)$ // update 0-hop self-representation

  ${Z}_{\text{prev}} \leftarrow {Z}_{u,v}^{(1)}$ if ${s}_{u}^{(1)}[\operatorname{hash}(v)]$ equals $v$, else 0 // check whether ${Z}_{u,v}^{(1)}$ is recorded in ${Z}_{u}^{(1)}$;

  if ${s}_{u}^{(1)}[\operatorname{hash}(v)]$ equals ($v$ or EMPTY) or rand$(0,1) < \alpha$ then

   ${s}_{u}^{(1)}[\operatorname{hash}(v)] \leftarrow v$, ${Z}_{u,v}^{(1)} \leftarrow \mathbf{RNN}\left( {Z}_{\text{prev}}, \left[ {Z}_{v}^{(0)}, t, e \right] \right)$; // update 1-hop nbr. representation

  for $w$ in ${s}_{v}^{(1)}$, in parallel, do

   if ${s}_{u}^{(2)}[\operatorname{hash}(w)]$ equals ($w$ or EMPTY) or rand$(0,1) < \alpha$ then

    ${s}_{u}^{(2)}[\operatorname{hash}(w)] \leftarrow w$, ${Z}_{u,w}^{(2)} \leftarrow {Z}_{v,w}^{(1)}$; // update 2-hop nbr. representations

repeat lines 5-11 with $(v, u, t, e)$

---
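The cache writes in Alg. 1 hinge on the hash-based N-cache described in Sec. 4.1 (b)-(c). The sketch below mimics that mechanism in pure Python; the cache size, prime, and $\alpha$ are hypothetical placeholders, and the real implementation lives in pre-allocated GPU tensors.

```python
import random

M = 5                      # slots per N-cache (an M_k; illustrative)
Q = 2147483647             # a fixed large prime for hashing (illustrative)
ALPHA = 0.5                # overwrite probability on collision
EMPTY = -1

def nhash(a):
    """hash(a) = (q * a) mod M_k: row index of node a in any M-slot cache."""
    return (Q * a) % M

def ncache_write(keys, values, a, new_value, rng=random):
    """Write node a's representation; on a hash collision the occupant is
    overwritten only with probability ALPHA (random down-sampling)."""
    p = nhash(a)
    if keys[p] in (a, EMPTY) or rng.random() < ALPHA:
        keys[p], values[p] = a, new_value
        return True
    return False           # collision: old occupant kept

keys, values = [EMPTY] * M, [None] * M
ncache_write(keys, values, a=7, new_value="Z_7")
print(keys[nhash(7)], values[nhash(7)])   # 7 Z_7
```

The key array plays the role of ${s}_{u}^{(k)}$: it both detects collisions and lets a later lookup confirm that the slot really holds node $a$'s representation.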
(a) Limiting size: In a real-world network, the neighborhood size of a node typically follows a long-tailed distribution [50, 51], so recording the entire neighborhood is irregular and memory-inefficient. Instead, we set an upper limit ${M}_{k}$ on the size of each per-hop representation ${Z}_{u}^{(k)}$, which means ${Z}_{u}^{(k)}$ may record only a subset of the nodes in the $k$-hop neighborhood of node $u$. This idea is inspired by previous works showing that structural features constructed from a down-sampled neighborhood are sufficient for good performance [34, 52]. To further decrease the memory overhead, we set each representation ${Z}_{u,a}^{(k)}, k \geq 1$, to a vector of small dimension $F$. Overall, the memory overhead of the N-caches per node is $O\left( \sum_{k=1}^{K} {M}_{k} \times F \right)$. In our experiments, we consider at most $K = 2$ hops and set the numbers of tracked neighbors ${M}_{1}, {M}_{2} \in [2, 40]$ and the size of each representation $F \in [2, 8]$, which already gives very good performance. With this design, the overall memory overhead is only a few hundred values per node, comparable to the commonly-used cost of tracking a single large vector representation per node.
(b) The hash-map: As NAT frequently accesses N-caches, a fast way to search them by node ID in parallel is needed. To enable parallel search, we design GPU dictionaries to implement N-caches. Specifically, for every node $u$, we pre-allocate $O\left( {M}_{k} \times F \right)$ space in GPU RAM to record the values in ${Z}_{u}^{(k)}$. A hash function is adopted to access the values in ${Z}_{u}^{(k)}$: for a node $a$, we compute $\operatorname{hash}(a) \equiv (q * a) \pmod{{M}_{k}}$ for a fixed large prime $q$ to decide the row index in ${Z}_{u}^{(k)}$ that records ${Z}_{u,a}^{(k)}$. Such simple hashing allows NAT to access multiple neighborhood representations in N-caches in parallel.
However, as the size ${M}_{k}$ of each N-cache is small, in particular smaller than the corresponding neighborhood, the hash-map may encounter collisions. To detect them, we also pre-allocate $O\left( {M}_{k} \right)$ space in each N-cache ${Z}_{u}^{(k)}$ for an array ${s}_{u}^{(k)}$ recording the IDs of the nodes most recently stored in ${Z}_{u}^{(k)}$. Specifically, we use ${s}_{u}^{(k)}[\operatorname{hash}(a)]$ to check whether node $a$ is a key of ${Z}_{u}^{(k)}$: if ${s}_{u}^{(k)}[\operatorname{hash}(a)]$ is $a$, then ${Z}_{u,a}^{(k)}$ is recorded at position $\operatorname{hash}(a)$ of ${Z}_{u}^{(k)}$; if ${s}_{u}^{(k)}[\operatorname{hash}(a)]$ is neither $a$ nor EMPTY, that position records the representation of another node.
(c) Addressing collisions: If a collision is encountered as NAT processes an evolving network, NAT resolves it randomly. Specifically, suppose we are to write ${Z}_{u,a}^{(k)}$ into ${Z}_{u}^{(k)}$. If another node $b$ satisfies $\operatorname{hash}(a) = \operatorname{hash}(b) = p$ and ${Z}_{u,b}^{(k)}$ already occupies position $p$ of ${Z}_{u}^{(k)}$, then we replace ${Z}_{u,b}^{(k)}$ with ${Z}_{u,a}^{(k)}$ (and simultaneously set ${s}_{u}^{(k)}[\operatorname{hash}(a)] \leftarrow a$) with probability $\alpha$, where $\alpha \in (0, 1]$ is a hyperparameter. Although this random replacement strategy may sound heuristic, it is essentially equivalent to randomly sampling nodes from the neighborhood without replacement (random dropping $\leftrightarrow$ random sampling). Note that randomly sampling neighbors is a common strategy to scale up GNNs for static networks [53-55]; here we apply an idea of similar spirit to temporal networks. We find that a small size ${M}_{k}$ ($\leq 40$) gives good empirical performance while keeping the model scalable, and that NAT is relatively robust to a wide range of $\alpha$.
### 4.2 Joint Neighborhood Structural Features and Neural-network-based Encoding
As illustrated by the toy example in Fig. 1, structural features from the joint neighborhood are critical to reveal how temporal networks evolve. Previous methods for static networks adopt distance encoding (DE) (or, more broadly, labeling tricks) to formulate these features [18, 19]. Recently, this idea has been generalized to temporal networks [34]. However, the model CAWN in [34] relies on online random-walk sampling, which cannot be parallelized on GPUs and is thus extremely slow. Our design of N-caches addresses this problem. Fig. 2 Right illustrates the procedure.
NAT generates joint neighborhood structural features as follows. Suppose a prediction is made for a temporal link $(u, v, t)$. For every node $a$ in the joint neighborhood of $u$ and $v$ determined by their N-caches at timestamp $t$, i.e., $a \in \left[ \cup_{k=0}^{K} \operatorname{key}\left( {Z}_{u}^{(k)} \right) \right] \cup \left[ \cup_{k'=0}^{K} \operatorname{key}\left( {Z}_{v}^{(k')} \right) \right]$, we associate it with a DE
${\mathrm{{DE}}}_{uv}^{t}\left( a\right) = {\mathrm{{DE}}}_{u}^{t}\left( a\right) \oplus {\mathrm{{DE}}}_{v}^{t}\left( a\right)$ , where ${\mathrm{{DE}}}_{w}^{t}\left( a\right) = \left\lbrack {\chi \left\lbrack {a \in {Z}_{w}^{\left( 0\right) }}\right\rbrack ,\ldots ,\chi \left\lbrack {a \in {Z}_{w}^{\left( K\right) }}\right\rbrack }\right\rbrack , w \in \{ u, v\}$(1)
Here, $\chi \left\lbrack {a \in {Z}_{w}^{\left( i\right) }}\right\rbrack$ equals 1 if $a$ is among the keys of the N-cache ${Z}_{w}^{(i)}$ and 0 otherwise; $\oplus$ denotes vector concatenation. For the example of predicting $\left( u, v, {t}_{3} \right)$ in Fig. 1, the DEs of the four nodes $u, a, v, b$ are shown in Fig. 2 Right. Note that ${\mathrm{DE}}_{uv}^{{t}_{3}}(a) = [0,1,0] \oplus [0,1,0]$ because $a$ appears in the keys of both ${Z}_{u}^{(1)}$ and ${Z}_{v}^{(1)}$, which further implies that $a$ is a common neighbor of $u$ and $v$.
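Because the N-cache keys are available directly, Eq. (1) reduces to a set of membership tests. A sketch with hypothetical key sets mirroring the Fig. 1 example (the node IDs here are made up):

```python
def de(u_keys, v_keys, a):
    """DE_uv(a) = DE_u(a) concatenated with DE_v(a), each a vector of
    key-membership indicator bits over the 0..K hop N-caches."""
    return [int(a in ks) for ks in u_keys] + [int(a in ks) for ks in v_keys]

# Hypothetical 0/1/2-hop N-cache keys for u and v; node 3 plays the role of a:
u_keys = [{1}, {2, 3}, {4}]     # u = 1
v_keys = [{2}, {1, 3}, set()]   # v = 2
print(de(u_keys, v_keys, 3))    # [0, 1, 0, 0, 1, 0] -> common 1-hop neighbor
```

The resulting bit pattern is exactly the `[0,1,0] ⊕ [0,1,0]` encoding discussed above.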
Simultaneously, NAT aggregates the neighborhood representations of every node $a$ in the joint neighborhood of $u$ and $v$. Specifically, for node $a$, we aggregate the representations via a sum pool
$$
{Q}_{uv}^{t}\left( a\right) = \mathop{\sum }\limits_{{k = 0}}^{K}\mathop{\sum }\limits_{{w \in \{ u, v\} }}{Z}_{w, a}^{\left( k\right) } \times \chi \left\lbrack {a \in {Z}_{w}^{\left( k\right) }}\right\rbrack . \tag{2}
$$
Here, if $a$ is not in the neighborhood ${Z}_{w}^{(k)}$, then $\chi \left\lbrack a \in {Z}_{w}^{(k)} \right\rbrack = 0$ and ${Z}_{w,a}^{(k)}$ does not participate in the aggregation. Both DE (Eq. (1)) and representation aggregation (Eq. (2)) can be computed for multiple node pairs in parallel on GPUs; we detail the parallel steps in Appendix A. After joining DE and the neighborhood representations, for each link $(u, v, t)$ to be predicted, NAT has a collection of representations ${\Omega }_{u, v}^{t} = \left\{ {\mathrm{DE}}_{uv}^{t}(a) \oplus {Q}_{uv}^{t}(a) \mid a \in {\mathcal{N}}_{u, v}^{t} \right\}$.
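Eq. (2) amounts to summing node $a$'s cached vectors over every hop and both endpoints where $a$ is present. A small sketch (the cache contents and dimensions are made up):

```python
import numpy as np

def q_pool(Z_u, Z_v, a, F=4):
    """Q_uv(a): sum of Z_{w,a}^{(k)} over w in {u, v} and k = 0..K
    where a is currently cached (chi = 1)."""
    q = np.zeros(F)
    for caches in (Z_u, Z_v):
        for hop in caches:          # hop is a {node_id: vector} dictionary
            if a in hop:            # chi[a in Z_w^(k)] = 1
                q += hop[a]
    return q

Z_u = [{1: np.ones(4)}, {3: 2 * np.ones(4)}, {}]   # u's 0/1/2-hop caches
Z_v = [{2: np.ones(4)}, {3: 3 * np.ones(4)}, {}]   # v's 0/1/2-hop caches
print(q_pool(Z_u, Z_v, 3))   # [5. 5. 5. 5.]
```

On GPU, the same computation is a masked batched sum over the pre-allocated cache tensors rather than a dictionary loop.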
Ultimately, we use attention to aggregate the collected representations in ${\Omega }_{u, v}^{t}$ and make the final prediction for the link $(u, v, t)$. Letting MLP denote a multi-layer perceptron, we adopt
$$
\text{logit} = \operatorname{MLP}\left( {\mathop{\sum }\limits_{{h \in {\Omega }_{u, v}^{t}}}{\alpha }_{h}\operatorname{MLP}\left( h\right) }\right) \text{, where }\left\{ {\alpha }_{h}\right\} = \operatorname{softmax}\left( \left\{ {{w}^{T}\operatorname{MLP}\left( h\right) \mid h \in {\Omega }_{u, v}^{t}}\right\} \right) , \tag{3}
$$
where $w$ is a learnable vector parameter; the logit can be plugged into the cross-entropy loss for training or compared against a threshold to make the final prediction.
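The pooling in Eq. (3) is ordinary softmax attention with a learnable query $w$. A sketch with the inner MLPs omitted for brevity (all weights here are random placeholders):

```python
import numpy as np

def attention_pool(H, w):
    """sum_h alpha_h * h with {alpha_h} = softmax({w^T h}); the MLPs of
    Eq. (3) are dropped, so only the attention pooling is shown."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ H

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))        # 6 joint neighbors, 8-dim DE ⊕ Q vectors
pooled = attention_pool(H, rng.normal(size=8))
print(pooled.shape)                # (8,); a final MLP would map this to the logit
```

Because the weights sum to one, the pooled vector is a convex combination of the per-neighbor rows, which keeps the output scale independent of the neighborhood size.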
## 5 Experiments
In this section, we evaluate the performance and scalability of NAT against a variety of baselines on real-world temporal networks. We further conduct an ablation study of the relevant modules and a hyperparameter analysis. Unless specified otherwise for comparison, the hyperparameters of NAT (such as ${M}_{1}, {M}_{2}, F, \alpha$) are detailed in Appendix C and Table 7 (in the Appendix).
### 5.1 Experimental setup
Datasets. We use seven publicly available real-world datasets, whose statistics are listed in Table 1; further details can be found in Appendix B. We preprocess all datasets following previous literature. We transform the node and edge features of Wikipedia and Reddit into 172-dimensional feature vectors. For the other datasets, these features are set to zero since the datasets are non-attributed. We split each dataset into training, validation and testing data with a 70/15/15 ratio. For the inductive test, we sample the unique nodes in the validation and testing data with probability 0.1 and remove them, together with their associated edges, from the networks during model training. We detail the inductive evaluation procedure for NAT in Appendix C.1.
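The chronological 70/15/15 split can be sketched as follows (the toy edge list is hypothetical, and the inductive node-removal step is omitted):

```python
def chronological_split(links, train=0.70, val=0.15):
    """Split a time-sorted edge list into train/validation/test by ratio."""
    n = len(links)
    i, j = int(n * train), int(n * (train + val))
    return links[:i], links[i:j], links[j:]

links = [(0, 1, float(t)) for t in range(100)]   # toy time-sorted network
tr, va, te = chronological_split(links)
print(len(tr), len(va), len(te))                 # 70 15 15
```

Splitting by time rather than at random is essential here: it guarantees that test links are strictly in the future of all training links.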
Baselines. We run experiments against 6 strong baselines that represent the SOTA approaches for modeling temporal networks. Of the 6 baselines, CAWN [34], TGAT [29] and TGN [20] need to sample neighbors from the historical events, while JODIE [28] and DyRep [27] keep track of dynamic node representations to avoid sampling. CAWN is the only baseline that constructs neighborhood structural features. As we are interested in both prediction performance and model scalability, we also include an efficient implementation of TGN from PyTorch Geometric (TGN-pg), a library built upon PyTorch that includes different variants of GNNs [56]. TGN is slower than TGN-pg because the TGN of [20] does not process a batch fully in parallel while TGN-pg does. Additional details about the baselines can be found in Appendix C.
<table><tr><td>Measurement</td><td>Wikipedia</td><td>Reddit</td><td>Social E. 1m.</td><td>Social E.</td><td>Enron</td><td>UCI</td><td>Ubuntu</td><td>Wiki-talk</td></tr><tr><td>nodes</td><td>9,227</td><td>10,985</td><td>71</td><td>74</td><td>184</td><td>1,899</td><td>159,316</td><td>1,140,149</td></tr><tr><td>temporal links</td><td>157,474</td><td>672,447</td><td>176,090</td><td>2,099,519</td><td>125,235</td><td>59,835</td><td>964,437</td><td>7,833,140</td></tr><tr><td>static links</td><td>18,257</td><td>78,516</td><td>2,457</td><td>4,486</td><td>3,125</td><td>20,296</td><td>596,933</td><td>3,309,592</td></tr><tr><td>node & link attributes</td><td>172 & 172</td><td>172 & 172</td><td>0 & 0</td><td>0 & 0</td><td>0 & 0</td><td>0 & 0</td><td>0 & 0</td><td>0 & 0</td></tr><tr><td>bipartite</td><td>true</td><td>true</td><td>false</td><td>false</td><td>false</td><td>true</td><td>false</td><td>false</td></tr></table>
Table 1: Summary of dataset statistics.
<table><tr><td>Task</td><td>Method</td><td>Wikipedia</td><td>Reddit</td><td>Social E. 1m.</td><td>Social E.</td><td>Enron</td><td>UCI</td><td>Ubuntu</td><td>Wiki-talk</td></tr><tr><td rowspan="7">Inductive</td><td>CAWN</td><td>${98.52} \pm {0.04}$</td><td>${98.19} \pm {0.03}$</td><td>${80.09} \pm {1.89}$</td><td>${50.00} \pm {0.00}^{*}$</td><td>${93.28} \pm {0.01}$</td><td>${80.37} \pm {0.65}$</td><td>${50.00} \pm {0.00}^{*}$</td><td>${50.00} \pm {0.00}^{*}$</td></tr><tr><td>JODIE</td><td>${95.58} \pm {0.37}$</td><td>${95.96} \pm {0.29}$</td><td>${80.61} \pm {1.55}$</td><td>${81.13} \pm {0.52}$</td><td>${81.69} \pm {2.21}$</td><td>${86.13} \pm {0.34}$</td><td>${56.68} \pm {0.49}$</td><td>${65.89} \pm {4.72}$</td></tr><tr><td>DyRep</td><td>${94.72} \pm {0.14}$</td><td>${97.04} \pm {0.29}$</td><td>${81.54} \pm {1.81}$</td><td>${52.68} \pm {0.11}$</td><td>${77.44} \pm {2.28}$</td><td>${68.38} \pm {1.30}$</td><td>${53.25} \pm {0.03}$</td><td>${51.87} \pm {0.93}$</td></tr><tr><td>TGN</td><td>${98.01} \pm {0.06}$</td><td>${97.76} \pm {0.05}$</td><td>${86.00} \pm {0.70}$</td><td>${67.01} \pm {10.3}$</td><td>${75.72} \pm {2.55}$</td><td>${83.21} \pm {1.16}$</td><td>${62.14} \pm {3.17}$</td><td>${56.73} \pm {2.88}$</td></tr><tr><td>TGN-pg</td><td>${94.91} \pm {0.35}$</td><td>${94.34} \pm {3.22}$</td><td>${63.44} \pm {3.54}$</td><td>${88.10} \pm {4.81}$</td><td>${69.55} \pm {1.62}$</td><td>${86.36} \pm {3.60}$</td><td>${79.44} \pm {0.85}$</td><td>${85.35} \pm {2.96}$</td></tr><tr><td>TGAT</td><td>${97.25} \pm {0.18}$</td><td>${96.69} \pm {0.11}$</td><td>${54.66} \pm {0.66}$</td><td>${50.00} \pm {0.00}$</td><td>${57.09} \pm {0.89}$</td><td>${70.47} \pm {0.59}$</td><td>${54.73} \pm {4.94}$</td><td>${71.04} \pm {3.59}$</td></tr><tr><td>NAT</td><td>$\mathbf{{98.55} \pm {0.09}}$</td><td>$\mathbf{{98.56} \pm {0.21}}$</td><td>$\mathbf{{91.82} \pm {1.91}}$</td><td>$\mathbf{{95.16} \pm {0.66}}$</td><td>${94.94} \pm {1.15}$</td><td>$\mathbf{{92.46} \pm {0.93}}$</td><td>$\mathbf{{90.35} \pm {0.20}}$</td><td>$\mathbf{{93.81} \pm {1.16}}$</td></tr><tr><td rowspan="7">Transductive</td><td>CAWN</td><td>${98.62} \pm {0.05}$</td><td>${98.66} \pm {0.09}$</td><td>${79.59} \pm {0.21}$</td><td>${50.00} \pm {0.00}^{*}$</td><td>${91.46} \pm {0.35}$</td><td>${82.84} \pm {0.16}$</td><td>${50.00} \pm {0.00}^{*}$</td><td>${50.00} \pm {0.00}^{*}$</td></tr><tr><td>JODIE</td><td>${96.15} \pm {0.36}$</td><td>${97.29} \pm {0.05}$</td><td>${77.02} \pm {1.11}$</td><td>${69.30} \pm {0.21}$</td><td>${83.42} \pm {2.63}$</td><td>${91.09} \pm {0.69}$</td><td>${60.29} \pm {2.66}$</td><td>${75.00} \pm {4.90}$</td></tr><tr><td>DyRep</td><td>${95.81} \pm {0.15}$</td><td>${98.00} \pm {0.19}$</td><td>${76.96} \pm {4.05}$</td><td>${51.14} \pm {0.24}$</td><td>${78.04} \pm {2.08}$</td><td>${72.25} \pm {1.81}$</td><td>${52.22} \pm {0.02}$</td><td>${62.07} \pm {0.06}$</td></tr><tr><td>TGN</td><td>${98.57} \pm {0.05}$</td><td>${98.70} \pm {0.03}$</td><td>${88.72} \pm {0.65}$</td><td>${69.39} \pm {10.50}$</td><td>${80.87} \pm {4.37}$</td><td>${89.53} \pm {1.49}$</td><td>${53.80} \pm {2.23}$</td><td>${66.01} \pm {4.79}$</td></tr><tr><td>TGN-pg</td><td>${97.26} \pm {0.10}$</td><td>${98.62} \pm {0.07}$</td><td>${66.39} \pm {6.90}$</td><td>${64.03} \pm {8.97}$</td><td>${80.85} \pm {2.70}$</td><td>${91.47} \pm {0.29}$</td><td>${90.56} \pm {0.44}$</td><td>${94.16} \pm {0.09}$</td></tr><tr><td>TGAT</td><td>${96.65} \pm {0.06}$</td><td>${98.19} \pm {0.08}$</td><td>${58.10} \pm {0.47}$</td><td>${50.00} \pm {0.00}$</td><td>${61.25} \pm {0.99}$</td><td>${77.88} \pm {0.31}$</td><td>${55.46} \pm {5.47}$</td><td>${78.43} \pm {2.15}$</td></tr><tr><td>NAT</td><td>$\mathbf{{98.68} \pm {0.04}}$</td><td>$\mathbf{{99.10} \pm {0.09}}$</td><td>$\mathbf{{90.20} \pm {0.20}}$</td><td>${94.43} \pm {1.67}$</td><td>$\mathbf{{92.42} \pm {0.09}}$</td><td>$\mathbf{{93.92} \pm {0.15}}$</td><td>$\mathbf{{93.50} \pm {0.34}}$</td><td>${95.82} \pm {0.31}$</td></tr></table>
Table 2: Performance in average precision (AP) (mean in percentage $\pm {95}\%$ confidence level). Bold font and underline highlight the best performance and the second best performance on average. *The under-performance of CAWN on Social E., Ubuntu and Wiki-talk may be caused by a recent code change due to a bug [57].
Regarding hyperparameters, if a dataset has been tested by a baseline, we use the hyperparameters provided in the corresponding paper. Otherwise, we tune the parameters so that similar components have sizes on the same scale, e.g., matching the number of sampled neighbors and the embedding sizes. We also fix the training and inference batch sizes so that the comparison of training and inference time is fair across models. For training, since CAWN uses 32 by default while the others use 200, we use 100, which lies between the two. For validation and testing, we use batch size 32 for all baselines. We apply an early-stopping strategy for all models and record the number of epochs and the total running time to convergence. We also set a time limit of 10 hours for training; once that limit is reached, we use the best epoch so far for evaluation. More detailed hyperparameters are provided in Appendix C.
Hardware. We run all experiments on the same machine, equipped with an Intel Core i7-4770HQ CPU @ 2.20GHz (eight logical cores), 15.5 GiB RAM and one GPU (GeForce GTX 1080 Ti).
Evaluation Metrics. For prediction performance, we evaluate all models with Average Precision (AP) and Area Under the ROC Curve (AUC). In the main text, the prediction performance in all tables is evaluated in AP; the AUC results are given in the appendix. All results are summarized over 5 independent runs. For computational performance, the metrics include (a) the average training and inference time (in seconds) per epoch, denoted Train and Test respectively; (b) the average total time (in seconds) of a model run, including training over all epochs and testing, denoted Total; (c) the average number of epochs to convergence, denoted Epoch; (d) the maximum GPU memory and RAM occupancy percentages monitored throughout the entire process, denoted GPU and RAM, respectively. We ensure that no other applications are running during our evaluations.
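For reference, AUC can be computed from raw link scores via the rank-sum identity; in practice one would use `sklearn.metrics.roc_auc_score` (and `average_precision_score` for AP). A dependency-free sketch (ties between scores are not handled):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney rank-sum identity (no tie handling)."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks by score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y, s = [1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]   # one positive outranked by a negative
print(auc(y, s))                             # 0.75
```

Here 0.75 reflects that 3 of the 4 positive-negative score pairs are ordered correctly.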
### 5.2 Results and Discussion
Overall, our method achieves SOTA performance on all seven datasets. The modeling capacity of NAT exceeds that of all baselines, and its training and inference time complexities are lower than or comparable to those of the fastest baselines. We provide a detailed analysis next.
Prediction Performance. We give the result of AP in Table 2 and AUC in Appendix Table 6.
<table><tr><td/><td>Method</td><td>Train</td><td>Test</td><td>Total</td><td>RAM</td><td>GPU</td><td>Epoch</td></tr><tr><td rowspan="7">Wikipedia</td><td>CAWN</td><td>1,006</td><td>174</td><td>11,845</td><td>30.2</td><td>58.0</td><td>6.7</td></tr><tr><td>JODIE</td><td>28.8</td><td>30.6</td><td>1,482</td><td>28.3</td><td>17.9</td><td>19.1</td></tr><tr><td>DyRep</td><td>32.4</td><td>32.5</td><td>1,681</td><td>28.3</td><td>17.8</td><td>21.5</td></tr><tr><td>TGN</td><td>37.1</td><td>33.0</td><td>2,047</td><td>28.3</td><td>19.3</td><td>23.1</td></tr><tr><td>TGN-pg</td><td>24.2</td><td>6.04</td><td>624.8</td><td>30.8</td><td>18.1</td><td>15.6</td></tr><tr><td>TGAT</td><td>225</td><td>63.0</td><td>3,657</td><td>28.5</td><td>24.6</td><td>12.0</td></tr><tr><td>NAT</td><td>21.0</td><td>6.94</td><td>154.4</td><td>29.1</td><td>12.1</td><td>2.6</td></tr><tr><td rowspan="7">Reddit</td><td>CAWN</td><td>2,983</td><td>812</td><td>17,056</td><td>38.8</td><td>41.2</td><td>16.3</td></tr><tr><td>JODIE</td><td>234.4</td><td>176</td><td>8,082</td><td>36.4</td><td>23.7</td><td>15.3</td></tr><tr><td>DyRep</td><td>252.9</td><td>184</td><td>7,716</td><td>33.3</td><td>24.3</td><td>12.7</td></tr><tr><td>TGN</td><td>271.7</td><td>189</td><td>8,487</td><td>33.7</td><td>25.4</td><td>15.3</td></tr><tr><td>TGN-pg</td><td>155.1</td><td>27.1</td><td>2,142</td><td>39.2</td><td>23.6</td><td>6.6</td></tr><tr><td>TGAT</td><td>1,203</td><td>291</td><td>16,462</td><td>37.2</td><td>31.0</td><td>8.4</td></tr><tr><td>NAT</td><td>90.6</td><td>28.5</td><td>771.3</td><td>37.7</td><td>18.5</td><td>3.0</td></tr></table>
<table><tr><td/><td>Method</td><td>Train</td><td>Test</td><td>Total</td><td>RAM</td><td>GPU</td><td>Epoch</td></tr><tr><td rowspan="7">Ubuntu</td><td>CAWN</td><td>1,066</td><td>222</td><td>5,385</td><td>38.9</td><td>17.4</td><td>1.0</td></tr><tr><td>JODIE</td><td>66.70</td><td>2,860</td><td>76,220</td><td>35.3</td><td>18.7</td><td>5.5</td></tr><tr><td>DyRep</td><td>2,195</td><td>2,857</td><td>39,148</td><td>38.5</td><td>16.6</td><td>1.0</td></tr><tr><td>TGN</td><td>5,975</td><td>2,391</td><td>73,633</td><td>39</td><td>19.6</td><td>5.5</td></tr><tr><td>TGN-pg</td><td>188.7</td><td>36.5</td><td>3,682</td><td>37.0</td><td>32.1</td><td>11.4</td></tr><tr><td>TGAT</td><td>887</td><td>330</td><td>18,431</td><td>47.3</td><td>17.0</td><td>2.5</td></tr><tr><td>NAT</td><td>125.8</td><td>41.2</td><td>1,321</td><td>28.9</td><td>10.1</td><td>5.4</td></tr><tr><td rowspan="7">Wiki-talk</td><td>CAWN</td><td>13,685</td><td>2,419</td><td>34,368</td><td>99.1</td><td>19.4</td><td>1.0</td></tr><tr><td>JODIE</td><td>284,789</td><td>145,909</td><td>566,607</td><td>58.2</td><td>20.9</td><td>1.0</td></tr><tr><td>DyRep</td><td>280,659</td><td>135,491</td><td>514,621</td><td>84.4</td><td>49.6</td><td>1.0</td></tr><tr><td>TGN</td><td>281,267</td><td>136,780</td><td>534,827</td><td>77.9</td><td>24.1</td><td>1.0</td></tr><tr><td>TGN-pg</td><td>1,236</td><td>311.5</td><td>12,761</td><td>60.9</td><td>59.0</td><td>5.1</td></tr><tr><td>TGAT</td><td>6,164</td><td>2,451</td><td>186,513</td><td>65.0</td><td>17.6</td><td>16.0</td></tr><tr><td>$\mathbf{{NAT}}$</td><td>833.1</td><td>280.1</td><td>7,802</td><td>37.1</td><td>22.3</td><td>2.7</td></tr></table>

Table 3: Scalability evaluation on Wikipedia, Reddit, Ubuntu and Wiki-talk.

Figure 3: Convergence vs. wall-clock time on Reddit (left) and Wiki-talk (right). Each dot on the curves is collected per epoch.

Figure 4: Sensitivity (mean) of the overwriting probability $\alpha$ for hash-map collisions on Ubuntu (left) & Reddit (right).

On Wikipedia and Reddit, many baselines achieve high performance because of the informative attributes. However, NAT still gains marginal improvements. On Wikipedia, Reddit and Enron, CAWN outperforms all baselines on inductive prediction and most baselines on transductive prediction. We believe the reason is that it captures neighborhood structural information via its temporal random walk sampling. However, we are not able to reproduce comparable scores on Social Evolve, Ubuntu and Wiki-talk, even after tuning the training batch size down to 32. We notice a recent code change intended to fix a bug in the CAWN implementation [57], which might be the cause of its under-performance.

TGN and its efficient implementation TGN-pg are strong baselines that do not construct structural features. On both large-scale datasets, Ubuntu and Wiki-talk, TGN-pg gives impressive results on transductive learning; however, NAT still outperforms it consistently. Furthermore, TGN-pg performs poorly on inductive tasks on both datasets, while NAT gains an 8-11% lift on these tasks.

On Social Evolve, NAT significantly outperforms all baselines, by at least 25% on transductive and 7% on inductive predictions. From Table 1, we can see that Social Evolve has a small number of nodes but many interactions, which highlights one of the advantages of NAT on dense temporal graphs. NAT keeps the neighborhood representation of each individual neighbor of a node separately, so older interactions are not squashed together with more recent ones into a single representation. Paired with N-caches, NAT can effectively denoise the dense history and extract neighborhood features.

Scalability. Table 3 shows that NAT always trains much faster than all baselines. NAT's inference is also significantly faster than that of CAWN, the only other baseline that constructs neighborhood structural features: NAT achieves a 25-29 times inference speedup on the attributed networks. NAT also achieves at least four times faster inference than TGN, JODIE and DyRep. Compared to TGN-pg, NAT achieves comparable inference time in most cases and about a 10% speedup on the largest dataset, Wiki-talk. This is because when the network is large, the online sampling of TGN-pg may dominate the time cost; we may thus expect NAT to show even better scalability on larger networks. Moreover, on the two large networks Ubuntu and Wiki-talk, NAT requires much less GPU memory. Note that albeit with just comparable or slightly better scalability, NAT significantly outperforms TGN-pg in prediction performance over all datasets.

Across all datasets, NAT does not need larger model sizes than the baselines to achieve better performance. More impressively, we observe that NAT uniformly requires fewer epochs to converge than all baselines, especially on the larger datasets, which can be attributed to the inductive power of the joint structural features. Because of this, the total runtime of the model is much shorter than that of the baselines on all datasets. Specifically, on the large datasets Ubuntu and Wiki-talk, NAT is more than three times as fast as TGN-pg. We also plot the model convergence vs. CPU/GPU wall-clock time on Reddit and Wiki-talk for comparison in Fig. 3.
<table><tr><td>Ablation</td><td>Dataset</td><td>Inductive</td><td>Transductive</td><td>Train</td><td>Test</td><td>GPU</td></tr><tr><td rowspan="3">original method</td><td>Social E.</td><td>${95.16} \pm {0.66}$</td><td>${91.75} \pm {0.37}$</td><td>281.0</td><td>89.0</td><td>8.88</td></tr><tr><td>Ubuntu</td><td>${90.35} \pm {0.20}$</td><td>${93.50} \pm {0.34}$</td><td>125.8</td><td>41.2</td><td>10.1</td></tr><tr><td>Wiki-talk*</td><td>${93.81} \pm {1.16}$</td><td>${95.00} \pm {0.31}$</td><td>833.1</td><td>280.1</td><td>22.3</td></tr><tr><td rowspan="2">remove 2-hop N-cache</td><td>Social E.</td><td>${94.30} \pm {0.90}$</td><td>${90.77} \pm {0.26}$</td><td>253.1</td><td>75.9</td><td>8.87</td></tr><tr><td>Ubuntu</td><td>${89.45} \pm {1.04}$</td><td>${93.48} \pm {0.34}$</td><td>111.3</td><td>35.7</td><td>9.95</td></tr><tr><td>remove</td><td>Social E.</td><td>${55.10} \pm {11.54}$</td><td>${62.12} \pm {3.53}$</td><td>212.9</td><td>64.0</td><td>8.46</td></tr><tr><td>1-&-2-hop</td><td>Ubuntu</td><td>${85.11} \pm {0.23}$</td><td>${91.89} \pm {0.09}$</td><td>98.1</td><td>29.5</td><td>9.07</td></tr><tr><td>N-cache</td><td>Wiki-talk</td><td>${86.54} \pm {3.87}$</td><td>${94.89} \pm {1.83}$</td><td>409.5</td><td>125.4</td><td>16.2</td></tr></table>

Table 4: Ablation study on N-caches. *Original method for Wiki-talk does not use the second-hop N-cache.
<table><tr><td>Param</td><td>Size</td><td>Inductive</td><td>Transductive</td><td>Train</td><td>Test</td><td>GPU</td></tr><tr><td rowspan="5">${M}_{1}$</td><td>4</td><td>${92.95} \pm {2.95}$</td><td>${95.26} \pm {0.49}$</td><td>834.9</td><td>281.4</td><td>18.4</td></tr><tr><td>8</td><td>$\mathbf{{93.96} \pm {0.91}}$</td><td>${95.39} \pm {0.28}$</td><td>806.3</td><td>274.9</td><td>19.9</td></tr><tr><td>12</td><td>${92.67} \pm {0.82}$</td><td>${95.05} \pm {0.58}$</td><td>818.2</td><td>277.6</td><td>21.0</td></tr><tr><td>16</td><td>${93.81} \pm {1.16}$</td><td>${95.82} \pm {0.31}$</td><td>833.1</td><td>280.1</td><td>22.3</td></tr><tr><td>20</td><td>${93.40} \pm {0.50}$</td><td>${95.83} \pm {0.44}$</td><td>841.3</td><td>284.8</td><td>23.8</td></tr><tr><td rowspan="4">${M}_{2}$</td><td>0</td><td>${93.81} \pm {1.16}$</td><td>${95.82} \pm {0.31}$</td><td>833.1</td><td>280.1</td><td>22.3</td></tr><tr><td>2</td><td>${92.91} \pm {1.01}$</td><td>${96.08} \pm {0.34}$</td><td>960.5</td><td>330.9</td><td>22.7</td></tr><tr><td>4</td><td>${94.26} \pm {0.89}$</td><td>$\mathbf{{96.29} \pm {0.09}}$</td><td>935.3</td><td>322.9</td><td>23.8</td></tr><tr><td>8</td><td>${94.53} \pm {0.51}$</td><td>${95.90} \pm {0.07}$</td><td>943.3</td><td>325.3</td><td>26.0</td></tr><tr><td rowspan="3">F</td><td>2</td><td>${90.86} \pm {2.52}$</td><td>${95.74} \pm {0.27}$</td><td>843.6</td><td>284.0</td><td>18.5</td></tr><tr><td>4</td><td>$\mathbf{{93.81} \pm {1.16}}$</td><td>$\mathbf{{95.82} \pm {0.31}}$</td><td>833.1</td><td>280.1</td><td>22.3</td></tr><tr><td>8</td><td>${93.55} \pm {0.93}$</td><td>${95.63} \pm {0.30}$</td><td>828.7</td><td>281.1</td><td>26.2</td></tr></table>

Table 5: Sensitivity of N-cache sizes on Wiki-talk.
### 5.3 Further Analysis

Ablation study. We conduct ablation studies on the effectiveness of the N-caches. Table 4 shows the results of removing the second-hop N-caches $Z_u^{(2)}$ and of removing both the first-hop and second-hop N-caches $Z_u^{(1)}, Z_u^{(2)}$. As expected, dropping the N-caches reduces the training time, the inference time and the GPU cost. However, it also degrades prediction performance. Just removing $Z_u^{(2)}$ can hurt performance by up to 1%. Removing both $Z_u^{(1)}$ and $Z_u^{(2)}$ while keeping only the self representation drops the performance significantly, especially in inductive settings. Keeping only the self representation is analogous to baselines such as TGN, which keeps a memory state. However, since we use a smaller dimension, usually between 32 and 72, the self representation alone cannot generalize well on these datasets. Ablation studies on other components, including the joint neighborhood structural features, T-encoding, RNNs and DE, are detailed in Table 8 (in the appendix).

Sensitivity of the N-cache sizes. Since N-caches induce the major share of the GPU memory consumption, we study how the memory size correlates with model performance on Wiki-talk. We compare performance across different values of $M_1$, $M_2$ and $F$ of the N-caches. The baseline has $M_1 = 16$, $M_2 = 0$ and $F = 4$, and we study each parameter by fixing the other two. Table 5 details the changes in model performance. We conduct the same study on the Ubuntu dataset in Appendix Table 9.

We can see that the GPU memory cost scales close to linearly with each parameter. However, increasing the model size does not necessarily improve performance. Changing $M_1$ to either a smaller or a larger value may decrease both the transductive and the inductive performance. Increasing $M_2$ boosts the transductive performance but hurts the inductive performance. In general, the model is less sensitive to $M_2$ than to $M_1$. Lastly, a larger $F$ could overfit the model, as we see a slight drop in inductive prediction with the largest $F$. Overall, training and inference time remain stable because of the parallelization of NAT. Interestingly, with larger $M_1$ and $M_2$, we sometimes even see a decrease in running time. We hypothesize that larger caches incur fewer hash collisions and thus short-circuit the N-cache overwriting steps.
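As a quick sanity check on why this scaling is near linear: the N-caches store $M_1 + M_2$ slots of $F$-dimensional representations per node, so the per-node float count is simply $(M_1 + M_2) \cdot F$, matching the $(M_1 + M_2) * F$ row of Table 7. A minimal sketch (the helper name is ours):

```python
def ncache_floats_per_node(m1, m2, f):
    """Representation floats the N-caches store per node:
    (M1 + M2) cache slots, each an F-dimensional vector."""
    return (m1 + m2) * f

# Default hyperparameters from Table 7:
print(ncache_floats_per_node(32, 16, 4))  # Wikipedia: 192
print(ncache_floats_per_node(16, 2, 4))   # Ubuntu:    72
print(ncache_floats_per_node(16, 0, 4))   # Wiki-talk: 64
```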

Sensitivity of the overwriting probability $\alpha$. We also experiment on $\alpha$ to study whether the N-cache refresh frequency is related to prediction quality. Here, we use a large dataset, Ubuntu, and a medium dataset, Reddit. Results can be found in Fig. 4. For Ubuntu, we shrink the caches from the original sizes to $M_1 = 4$, $M_2 = 1$, $F = 4$, and for Reddit, we change to $M_1 = 16$, $M_2 = 2$, $F = 8$, to increase the number of potential collisions so that the effect of $\alpha$ can be better observed. On both datasets, the overall trend is that a larger $\alpha$ gives better transductive performance. However, setting $\alpha = 1$, i.e., always replacing old neighbors, is slightly worse than the optimal $\alpha$. This pattern shows that the neighborhood information has to be kept up to date to gain better performance, while some randomness is useful because it preserves interactions over more diverse time ranges. The inductive performance is relatively more sensitive to the selection of $\alpha$. We do not find a case where using two different probabilities for replacing $Z_u^{(1)}$ and $Z_u^{(2)}$ significantly benefits model performance, so we use a single $\alpha$ for the N-caches of different hops to keep it simple.
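The collision rule itself is simple. The following toy sketch is our own single-slot simplification, not the paper's GPU-parallel tensor implementation; the function name and data are illustrative only. It shows how $\alpha$ trades freshness against diversity when a new neighbor hashes to an occupied slot:

```python
import random

def insert_neighbor(cache, key, rep, alpha, rng):
    """Insert neighbor `key` with representation `rep` into a fixed-size
    N-cache slot table. On a hash collision with a *different* occupant,
    overwrite it only with probability `alpha`; otherwise keep the older
    neighbor (preserving a more diverse time range of interactions)."""
    pos = hash(key) % len(cache)
    occupant = cache[pos]
    if occupant is None or occupant[0] == key or rng.random() < alpha:
        cache[pos] = (key, rep)  # fill empty slot, update same key, or overwrite
    return cache

rng = random.Random(0)
cache = [None] * 4
insert_neighbor(cache, 1, "rep_a", alpha=0.5, rng=rng)  # empty slot: always stored
insert_neighbor(cache, 5, "rep_b", alpha=1.0, rng=rng)  # collides with key 1: replaced
```

With $\alpha = 1$ the cache always keeps the most recent colliding neighbor, while $\alpha < 1$ lets some older neighbors survive, matching the trend observed in Fig. 4.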

## 6 Conclusion and Future Work

In this work, we proposed NAT, the first method that adopts dictionary-type node representations to track the neighborhoods of nodes in temporal networks. Such representations support the efficient construction of neighborhood structural features that are crucial for predicting how a temporal network evolves. NAT also introduces N-caches to maintain these representations in parallel. Our extensive experiments demonstrate the effectiveness of NAT in both prediction performance and scalability. In the future, we plan to extend NAT to networks so large that GPU memory cannot hold them entirely.

## References

[1] Petter Holme and Jari Saramäki. Temporal networks. Physics Reports, 519(3), 2012. 1

[2] Georg Simmel. The Sociology of Georg Simmel, volume 92892. Simon and Schuster, 1950. 1, 2

[3] Austin R Benson, Rediet Abebe, Michael T Schaub, Ali Jadbabaie, and Jon Kleinberg. Simplicial closure and higher-order link prediction. Proceedings of the National Academy of Sciences, 115(48):E11221-E11230, 2018. 1

[4] Yunyu Liu, Jianzhu Ma, and Pan Li. Neural predicting higher-order patterns in temporal networks. In WWW, 2022. 3, 4

[5] Ryan A Rossi, Anup Rao, Sungchul Kim, Eunyee Koh, Nesreen K Ahmed, and Gang Wu. Higher-order ranking and link prediction: From closing triangles to closing higher-order motifs. In WWW, 2020.

[6] Lauri Kovanen, Márton Karsai, Kimmo Kaski, János Kertész, and Jari Saramäki. Temporal motifs in time-dependent networks. Journal of Statistical Mechanics: Theory and Experiment, 2011. 1

[7] Stephen Ranshous, Shitian Shen, Danai Koutra, Steve Harenberg, Christos Faloutsos, and Nagiza F Samatova. Anomaly detection in dynamic networks: a survey. Wiley Interdisciplinary Reviews: Computational Statistics, 7(3):223-247, 2015. 1

[8] Andrew Z Wang, Rex Ying, Pan Li, Nikhil Rao, Karthik Subbian, and Jure Leskovec. Bipartite dynamic representations for abuse detection. In KDD, pages 3638-3648, 2021.

[9] Pan Li, Yen-Yu Chang, Rok Sosic, MH Afifi, Marco Schweighauser, and Jure Leskovec. F-fade: Frequency factorization for anomaly detection in edge streams. In WSDM, 2021. 1

[10] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7), 2007. 1

[11] Yehuda Koren. Collaborative filtering with temporal dynamics. In KDD, pages 447-456, 2009. 1

[12] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1), 2008. 1

[13] Victor Fung, Jiaxin Zhang, Eric Juarez, and Bobby G Sumpter. Benchmarking graph neural networks for materials chemistry. npj Computational Materials, 7(1):1-8, 2021. 1

[14] Xiangyang Ju, Steven Farrell, Paolo Calafiura, Daniel Murnane, Lindsey Gray, Thomas Klijnsma, Kevin Pedro, Giuseppe Cerati, Jim Kowalkowski, Gabriel Perdue, et al. Graph neural networks for particle reconstruction in high energy physics detectors. In NeurIPS, 2019.

[15] Tianchun Li, Shikun Liu, Yongbin Feng, Nhan Tran, Miaoyuan Liu, and Pan Li. Semi-supervised graph neural network for particle-level noise removal. In NeurIPS 2021 AI for Science Workshop, 2021. 1

[16] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In ICML, 2019. 1

[17] Balasubramaniam Srinivasan and Bruno Ribeiro. On the equivalence between positional node embeddings and structural graph representations. In ICLR, 2020. 3

[18] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. In NeurIPS, 2020. 2, 3, 6

[19] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. In NeurIPS, 2021. 1, 2, 3, 6

[20] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. In ICML 2020 Workshop on GRL, 2020. 1, 3, 4, 6, 7, 15, 16

[21] Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. Variational graph recurrent neural networks. In NeurIPS, 2019. 3

[22] Palash Goyal, Sujit Rokka Chhetri, and Arquimedes Canedo. dyngraph2vec: Capturing network dynamics using dynamic graph representation learning. Knowledge-Based Systems, 187, 2020.

[23] Franco Manessi, Alessandro Rozza, and Mario Manzo. Dynamic graph convolutional networks. Pattern Recognition, 97, 2020.

[24] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao B Schardl, and Charles E Leiserson. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In AAAI, 2020.

[25] Jiaxuan You, Tianyu Du, and Jure Leskovec. Roland: Graph learning framework for dynamic graphs. In KDD, pages 2358-2366, 2022.

[26] Aravind Sankar, Yanhong Wu, Liang Gou, Wei Zhang, and Hao Yang. DySAT: Deep neural representation learning on dynamic graphs via self-attention networks. In WSDM, 2020. 3

[27] Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. DyRep: Learning representations over dynamic graphs. In ICLR, 2019. 3, 7, 15

[28] Srijan Kumar, Xikun Zhang, and Jure Leskovec. Predicting dynamic embedding trajectory in temporal interaction networks. In KDD, 2019. 3, 7, 15, 16

[29] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graphs. In ICLR, 2020. 1, 3, 4, 6, 15, 16

[30] Liming Pan, Cheng Shi, and Ivan Dokmanić. Neural link prediction with walk pooling. In ICLR, 2022. 2

[31] Jiaxuan You, Jonathan Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In AAAI, 2021. 2

[32] Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. Neural Bellman-Ford networks: A general graph neural network framework for link prediction. In NeurIPS, 2021. 2

[33] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In NeurIPS, 2018. 2

[34] Yanbang Wang, Yen-Yu Chang, Yunyu Liu, Jure Leskovec, and Pan Li. Inductive representation learning in temporal graphs via causal anonymous walk. In ICLR, 2021. 2, 3, 4, 5, 6, 14

[35] Purnamrita Sarkar, Deepayan Chakrabarti, and Michael I Jordan. Nonparametric link prediction in dynamic networks. In ICML, 2012. 2

[36] Ghadeer AbuOda, Gianmarco De Francisci Morales, and Ashraf Aboulnaga. Link prediction via higher-order motif features. In ECML PKDD, pages 412-429. Springer, 2019. 2

[37] Krzysztof Juszczyszyn, Katarzyna Musial, and Marcin Budka. Link prediction based on subgraph evolution in dynamic social networks. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, pages 27-34. IEEE, 2011. 2

[38] Le-kui Zhou, Yang Yang, Xiang Ren, Fei Wu, and Yueting Zhuang. Dynamic network embedding by modeling triadic closure process. In AAAI, 2018. 2

[39] Lun Du, Yun Wang, Guojie Song, Zhicong Lu, and Junshan Wang. Dynamic network embedding: An extended approach for skip-gram based network embedding. In IJCAI, 2018.

[40] Sedigheh Mahdavi, Shima Khoshraftar, and Aijun An. dynnode2vec: Scalable dynamic network embedding. In International Conference on Big Data (Big Data). IEEE, 2018.

[41] Uriel Singer, Ido Guy, and Kira Radinsky. Node embedding over temporal graphs. In IJCAI, 2019.

[42] Giang Hoang Nguyen, John Boaz Lee, Ryan A Rossi, Nesreen K Ahmed, Eunyee Koh, and Sungchul Kim. Continuous-time dynamic network embeddings. In WWW, 2018. 2

[43] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, pages 265-283, 2016. 2

[44] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, volume 32, 2019. 2

[45] Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-Evolve: deep temporal reasoning for dynamic knowledge graphs. In ICML, 2017. 3

[46] Xuhong Wang, Ding Lyu, Mengjian Li, Yang Xia, Qi Yang, Xinwen Wang, Xinguang Wang, Ping Cui, Yupu Yang, and Bowen Sun. APAN: Asynchronous propagation attention network for real-time temporal graph embedding. In Proceedings of the 2021 International Conference on Management of Data, pages 2628-2638, 2021. 3

[47] Hongkuan Zhou, Da Zheng, Israt Nisa, Vasileios Ioannidis, Xiang Song, and George Karypis. TGL: A general framework for temporal GNN training on billion-scale graphs. Proceedings of the VLDB Endowment, 2022. 3, 16

[48] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Self-attention with functional time representation learning. In NeurIPS, 2019. 4

[49] Seyed Mehran Kazemi, Rishab Goel, Sepehr Eghbali, Janahan Ramanan, Jaspreet Sahota, Sanjay Thakur, Stella Wu, Cathal Smyth, Pascal Poupart, and Marcus Brubaker. Time2vec: Learning a vector representation of time. arXiv preprint arXiv:1907.05321, 2019. 4

[50] Mark EJ Newman. Clustering and preferential attachment in growing networks. Physical Review E, 64(2):025102, 2001. 5

[51] Hawoong Jeong, Zoltan Néda, and Albert-László Barabási. Measuring preferential attachment in evolving networks. EPL (Europhysics Letters), 61(4):567, 2003. 5

[52] Haoteng Yin, Muhan Zhang, Yanbang Wang, Jianguo Wang, and Pan Li. Algorithm and system co-design for efficient subgraph-based graph representation learning. Proceedings of the VLDB Endowment, 15, 2022. 5

[53] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, 2017. 5

[54] Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. GraphSAINT: Graph sampling based inductive learning method. In ICLR, 2020.

[55] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In KDD, 2019. 5

[56] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 7

[57] The git commit that attempts to fix an attention bug in CAWN but causes under-performance in multiple datasets. 7, 8, 14

[58] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018. 15

Figure 5: The procedure to find unique node IDs and the indices for pooling, which are used for parallel construction of DEs and joint representations.

Algorithm 2: Construct Joint Neighborhood Features ($Z_u^{(k)}, Z_v^{(k)}$ for $k \in \{0,1,2\}$)

---

1. $\mathrm{KEY}_{uv} \leftarrow \mathrm{concat}(s_u^{(k)} \text{ for } k \in \{0,1,2\},\ s_v^{(k)} \text{ for } k \in \{0,1,2\})$;
2. $\mathrm{VALUE}_{uv} \leftarrow \mathrm{concat}(\mathrm{value}(Z_u^{(k)}) \text{ for } k \in \{0,1,2\},\ \mathrm{value}(Z_v^{(k)}) \text{ for } k \in \{0,1,2\})$;
3. $s_{uv} \leftarrow$ remove EMPTY entries from $\mathrm{KEY}_{uv}$, and remove the corresponding EMPTY entries from $\mathrm{VALUE}_{uv}$;
4. $\mathcal{N}_{uv} \leftarrow \mathrm{unique}(s_{uv})$; $\phi_{uv} \leftarrow$ the index in $\mathcal{N}_{uv}$ of each entry of $s_{uv}$;
5. Initialize $Q_{uv}$ with $\mathrm{length}(\mathcal{N}_{uv})$ vectors as in Eq. (2); // to aggregate neighbor representations
6. Scatter-add $\mathrm{VALUE}_{uv}$ into $Q_{uv}$ according to the indices $\phi_{uv}$;
7. Initialize $\mathrm{DE}_u, \mathrm{DE}_v$ with $\mathrm{length}(\mathcal{N}_{uv})$ vectors;
8. **for** $i$ from 0 to $\mathrm{length}(\mathcal{N}_{uv})$, in parallel (implemented with scatter-add using the indices $\phi_{uv}$), **do**
9. &nbsp;&nbsp;&nbsp;&nbsp;**for** $w \in \{u, v\}$ **do**
10. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\mathrm{DE}_w[i] \leftarrow [\,\text{if } \mathcal{N}_{uv}[i] \text{ is one of } s_w^{(k)} \text{ then } 1 \text{ else } 0,\ \text{for } k \in \{0,1,2\}\,]$;
11. **Return** $\mathrm{concat}(\mathrm{DE}_u, \mathrm{DE}_v, Q_{uv})$ along the last dimension;

---
## A Efficient Joint Neighborhood Features Implementation
Here, we detail the efficient implementation that generates joint neighborhood structural features based on N-Caches as introduced in Sec. 4.2. This implementation is summarized in Alg. 2.

Both DE (Eq (1)) and representation aggregation (Eq (2)) can be computed for multiple nodes in parallel on GPUs using PyTorch built-in functions. Specifically, for a mini-batch of temporal links $B = \{\ldots, (u, v, t), \ldots\}$, NAT first collects the union of the current neighborhoods of the end-nodes, $s_u = \oplus_{k=1}^{K} s_u^{(k)}$ and $s_v = \oplus_{k=1}^{K} s_v^{(k)}$, for all $(u, v, t) \in B$. Then, NAT follows the steps of Fig. 5: (1) Remove the empty entries in the joint neighborhood $s_u \oplus s_v$ with the PyTorch function nonzero; denote the result as $s_{uv}$. (2) Find the unique nodes $\mathcal{N}_{uv}$ in the joint neighborhood $s_{uv}$. (3) Generate the array $\phi_{uv}$, which stores the index in $\mathcal{N}_{uv}$ of each node in $s_{uv}$; the last two steps can be computed using the PyTorch function unique with the parameter return_inverse set to true. (4) Compute the DE features and the aggregated neighborhood features via the scatter_add operation with the indices recorded in $\phi_{uv}$. All these operations support GPU-parallel computation.
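These steps can be mimicked in a few lines. The toy sketch below uses NumPy stand-ins for the PyTorch calls (`np.unique(..., return_inverse=True)` for `unique`, `np.add.at` for `scatter_add`), with made-up cache contents; it is an illustration of the indexing logic, not the paper's implementation:

```python
import numpy as np

# Toy joint neighborhood for one link (u, v): node IDs per N-cache slot,
# with 0 marking an EMPTY slot, and one representation vector per slot.
key_uv = np.array([3, 0, 5, 3, 7, 0, 5])
value_uv = np.array([[1., 0.], [0., 0.], [0., 1.],
                     [2., 0.], [1., 1.], [0., 0.], [3., 0.]])

# (1) Drop EMPTY entries (PyTorch: nonzero).
mask = key_uv != 0
s_uv, vals = key_uv[mask], value_uv[mask]

# (2) + (3) Unique neighbors and the index of each entry in the unique list
# (PyTorch: unique with return_inverse=True).
n_uv, phi_uv = np.unique(s_uv, return_inverse=True)

# (4) Scatter-add the slot representations into one aggregated vector per
# unique neighbor (PyTorch: scatter_add).
q_uv = np.zeros((len(n_uv), vals.shape[1]))
np.add.at(q_uv, phi_uv, vals)

print(n_uv.tolist())   # [3, 5, 7]
print(q_uv.tolist())   # [[3.0, 0.0], [3.0, 1.0], [1.0, 1.0]]
```

Note how the two slots holding neighbor 3 (and likewise neighbor 5) are summed into a single row of `q_uv`, exactly the pooling that Fig. 5 prepares indices for.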
## B Dataset Description
The following are the detailed descriptions of the seven datasets we tested.
<table><tr><td>Task</td><td>Method</td><td>Wikipedia</td><td>Reddit</td><td>Social E. $1\mathrm{\;m}$ .</td><td>Social E.</td><td>Enron</td><td>UCI</td><td>Ubuntu</td><td>Wiki-talk</td></tr><tr><td rowspan="7">Inductive</td><td>CAWN</td><td>${98.16} \pm {0.06}$</td><td>${97.97} \pm {0.01}$</td><td>${78.36} \pm {2.94}$</td><td>${50.00} \pm {0.00}$</td><td>${94.29} \pm {0.15}$</td><td>${79.35} \pm {0.48}$</td><td>${50.00} \pm {0.00}$</td><td>${50.00} \pm {0.00}$</td></tr><tr><td>JODIE</td><td>${95.16} \pm {0.42}$</td><td>${96.31} \pm {0.16}$</td><td>${85.16} \pm {1.24}$</td><td>${86.14} \pm {0.67}$</td><td>${82.56} \pm {1.88}$</td><td>${85.02} \pm {0.38}$</td><td>${52.41} \pm {5.80}$</td><td>${65.94} \pm {4.26}$</td></tr><tr><td>DyRep</td><td>${93.97} \pm {0.18}$</td><td>${96.86} \pm {0.29}$</td><td>${84.38} \pm {1.69}$</td><td>${49.84} \pm {0.35}$</td><td>${76.69} \pm {2.64}$</td><td>${67.36} \pm {1.47}$</td><td>${53.22} \pm {0.03}$</td><td>${50.37} \pm {0.42}$</td></tr><tr><td>TGN</td><td>${97.84} \pm {0.06}$</td><td>${97.63} \pm {0.09}$</td><td>${88.43} \pm {0.38}$</td><td>${70.86} \pm {10.30}$</td><td>${75.28} \pm {1.81}$</td><td>${81.65} \pm {1.44}$</td><td>${62.98} \pm {3.36}$</td><td>${59.24} \pm {2.34}$</td></tr><tr><td>TGN-pg</td><td>${94.96} \pm {0.33}$</td><td>${94.53} \pm {3.04}$</td><td>${63.17} \pm {4.69}$</td><td>${90.24} \pm {3.72}$</td><td>${67.99} \pm {1.78}$</td><td>${86.02} \pm {3.34}$</td><td>${74.85} \pm {1.44}$</td><td>${83.25} \pm {2.96}$</td></tr><tr><td>TGAT</td><td>${97.25} \pm {0.18}$</td><td>${96.37} \pm {0.10}$</td><td>${51.23} \pm {0.69}$</td><td>${50.0} \pm {0.00}$</td><td>${55.86} \pm {1.01}$</td><td>${70.83} \pm {0.58}$</td><td>${55.73} \pm {6.47}$</td><td>${74.50} \pm {3.71}$</td></tr><tr><td>NAT</td><td>$\mathbf{{98.27} \pm {0.12}}$</td><td>$\mathbf{{98.56} \pm {0.21}}$</td><td>$\mathbf{{92.62} \pm {1.66}}$</td><td>$\mathbf{{96.13} \pm {0.46}}$</td><td>${95.25} \pm {1.37}$</td><td>$\mathbf{{90.18} \pm 
{1.30}}$</td><td>$\mathbf{{87.72} \pm {0.28}}$</td><td>$\mathbf{{92.73} \pm {1.35}}$</td></tr><tr><td rowspan="7">Transductive</td><td>CAWN</td><td>${98.39} \pm {0.08}$</td><td>${98.64} \pm {0.04}$</td><td>${79.59} \pm {0.32}$</td><td>${50.00} \pm {0.00}$</td><td>${92.32} \pm {0.26}$</td><td>${81.76} \pm {0.18}$</td><td>${50.00} \pm {0.00}$</td><td>${50.00} \pm {0.00}$</td></tr><tr><td>JODIE</td><td>${96.05} \pm {0.39}$</td><td>${97.63} \pm {0.05}$</td><td>${82.36} \pm {0.87}$</td><td>${76.87} \pm {0.32}$</td><td>${85.28} \pm {2.25}$</td><td>${91.69} \pm {0.40}$</td><td>${52.61} \pm {2.50}$</td><td>${73.32} \pm {4.37}$</td></tr><tr><td>DyRep</td><td>${95.34} \pm {0.18}$</td><td>${97.93} \pm {0.20}$</td><td>${80.58} \pm {3.55}$</td><td>${50.05} \pm {3.64}$</td><td>${79.28} \pm {1.84}$</td><td>${72.62} \pm {2.01}$</td><td>${52.38} \pm {0.02}$</td><td>${69.89} \pm {2.67}$</td></tr><tr><td>TGN</td><td>${98.42} \pm {0.05}$</td><td>${98.65} \pm {0.03}$</td><td>${90.37} \pm {0.40}$</td><td>${73.08} \pm {9.74}$</td><td>${82.08} \pm {4.36}$</td><td>${89.54} \pm {1.58}$</td><td>${54.13} \pm {2.52}$</td><td>${76.07} \pm {5.28}$</td></tr><tr><td>TGN-pg</td><td>${97.06} \pm {0.09}$</td><td>${98.58} \pm {0.08}$</td><td>${66.89} \pm {7.90}$</td><td>${66.14} \pm {10.7}$</td><td>${81.23} \pm {2.80}$</td><td>${91.16} \pm {0.30}$</td><td>${89.59} \pm {0.42}$</td><td>${93.69} \pm {0.06}$</td></tr><tr><td>TGAT</td><td>${96.65} \pm {0.06}$</td><td>${98.07} \pm {0.08}$</td><td>${56.98} \pm {0.53}$</td><td>${50.00} \pm {0.00}$</td><td>${62.08} \pm {1.08}$</td><td>${79.85} \pm {0.24}$</td><td>${57.23} \pm {6.55}$</td><td>${81.82} \pm {1.87}$</td></tr><tr><td>NAT</td><td>${98.51} \pm {0.05}$</td><td>${99.01} \pm {0.11}$</td><td>$\mathbf{{91.77} \pm {0.19}}$</td><td>$\mathbf{{93.63} \pm {0.36}}$</td><td>$\mathbf{{93.08} \pm {0.18}}$</td><td>$\mathbf{{92.08} \pm {0.18}}$</td><td>$\mathbf{{92.62} \pm {0.10}}$</td><td>$\mathbf{{95.33} \pm {0.26}}$</td></tr></table>

Table 6: Performance in AUC (mean in percentage $\pm$ 95% confidence level). Bold font and underline highlight the best and the second-best average performance, respectively. Timeout means training for one epoch takes more than one hour.
<table><tr><td>Params</td><td>Wikipedia</td><td>Reddit</td><td>Social E. $1\mathrm{\;m}$ .</td><td>Social E.</td><td>Enron</td><td>UCI</td><td>Ubuntu</td><td>Wiki-talk</td></tr><tr><td>${M}_{1}$</td><td>32</td><td>32</td><td>40</td><td>40</td><td>32</td><td>32</td><td>16</td><td>16</td></tr><tr><td>${M}_{2}$</td><td>16</td><td>16</td><td>20</td><td>20</td><td>16</td><td>16</td><td>2</td><td>0</td></tr><tr><td>$F$</td><td>4</td><td>4</td><td>2</td><td>2</td><td>2</td><td>2</td><td>4</td><td>4</td></tr><tr><td>$\left( {{M}_{1} + {M}_{2}}\right) * F$</td><td>192</td><td>192</td><td>120</td><td>120</td><td>96</td><td>96</td><td>72</td><td>64</td></tr><tr><td>Self Rep. Dim.</td><td>72</td><td>72</td><td>32</td><td>72</td><td>72</td><td>32</td><td>50</td><td>72</td></tr></table>
Table 7: Hyperparameters of NAT.
- Wikipedia ${}^{1}$ logs the edit events on wiki pages. One set of nodes represents the editors and the other represents the wiki pages, forming a bipartite graph with timestamped links between the two sets. It has both node and edge features; the edge features are extracted from the contents of the wiki pages.
- Reddit ${}^{2}$ is a dataset of the post events by users on subreddits. It is also an attributed bipartite graph between users and subreddits.
- Social Evolution ${}^{3}$ records physical proximity between students living in a dormitory over time. The original dataset spans one year, but CAWN [34] fails on large datasets, likely because of a bug introduced by a recent code change [57]. To compare performance, we split out one month of data, termed Social Evolve $1\mathrm{\;m}$ ., and evaluate all baselines on it.
- Enron ${}^{4}$ is a network of email communications between employees of a corporation.
- ${\mathrm{{UCI}}}^{5}$ is a graph recording posts to an online forum. The nodes are university students and the edges are forum messages. It is non-attributed.
- Ubuntu ${}^{6}$ , or Ask Ubuntu, is a dataset recording the interactions on the Stack Exchange website Ask Ubuntu ${}^{7}$ . Nodes are users, and there are three types of edges: (1) user $u$ answering user $v$ 's question, (2) user $u$ commenting on user $v$ 's question, and (3) user $w$ commenting on user $u$ 's answer. It is a relatively large dataset with more than ${100}\mathrm{\;K}$ nodes.
- Wiki-talk ${}^{8}$ is a dataset that represents the edit events on Wikipedia user talk pages. The dataset spans approximately 5 years, so it accumulates a large number of nodes and edges. It is the largest dataset, with more than $1\mathrm{M}$ nodes.
## C Baselines and the experiment setup
CAWN [34], with source code provided here, is a very recent work that samples temporal random walks and anonymizes node identities to obtain motif information. It backtracks historical events to extract neighboring nodes. It achieves high prediction performance but is both time-consuming and memory-intensive. We pull the most recent commit from their repository. When measuring the CPU usage, we also notice a garbage-collection bug: it causes the CPU memory consumption to keep increasing after every batch and every epoch without any decrease. We fix the bug so that CPU memory remains constant; our metrics in Table 3 are recorded based on this bug fix. We tune the walk length to either 1 or 2. For Wikipedia, Reddit, and Social Evolution we use a walk length of two, and for the others only first-hop neighbors. We tune the sampling size of the first hop between 20 and 64, and that of the second hop between 1 and 32.
---
${}^{1}$ http://snap.stanford.edu/jodie/wikipedia.csv
${}^{2}$ http://snap.stanford.edu/jodie/reddit.csv
${}^{3}$ http://realitycommons.media.mit.edu/socialevolution.html
${}^{4}$ https://www.cs.cmu.edu/~./enron/
${}^{5}$ http://konect.cc/networks/opsahl-ucforum/
${}^{6}$ https://snap.stanford.edu/data/sx-askubuntu.html
${}^{7}$ http://askubuntu.com/
${}^{8}$ https://snap.stanford.edu/data/wiki-talk-temporal.html
---
<table><tr><td>No.</td><td>Ablation</td><td>Task</td><td>Social E.</td><td>Ubuntu</td></tr><tr><td rowspan="2">1.</td><td>remove</td><td>inductive</td><td>$- {0.74} \pm {1.01}$</td><td>$- {1.54} \pm {0.10}$</td></tr><tr><td>T-encoding</td><td>transductive</td><td>$- {1.10} \pm {0.31}$</td><td>$- {1.25} \pm {0.54}$</td></tr><tr><td rowspan="2">2.</td><td rowspan="2">remove RNN</td><td>inductive</td><td>$- {1.18} \pm {0.87}$</td><td>$- {1.19} \pm {0.86}$</td></tr><tr><td>transductive</td><td>$- {1.26} \pm {0.50}$</td><td>$- {5.68} \pm {4.45}$</td></tr><tr><td rowspan="2">3.</td><td rowspan="2">remove attention</td><td>inductive</td><td>$- {0.77} \pm {1.14}$</td><td>$- {0.28} \pm {0.16}$</td></tr><tr><td>transductive</td><td>$- {0.39} \pm {0.43}$</td><td>$- {0.01} \pm {0.20}$</td></tr><tr><td rowspan="2">4.</td><td rowspan="2">remove $\mathrm{{DE}}$</td><td>inductive</td><td>$- {3.78} \pm {2.14}$</td><td>$- {5.67} \pm {2.87}$</td></tr><tr><td>transductive</td><td>$- {3.43} \pm {1.64}$</td><td>$- {1.55} \pm {0.16}$</td></tr></table>
Table 8: Ablation study with other modules of NAT (changes recorded w.r.t. Table 2).
<table><tr><td>Param</td><td>Size</td><td>Inductive</td><td>Transductive</td><td>Train</td><td>Test</td><td>GPU</td></tr><tr><td rowspan="3">${M}_{1}$</td><td>8</td><td>${89.50} \pm {0.37}$</td><td>${93.56} \pm {0.30}$</td><td>124.4</td><td>41.1</td><td>9.85</td></tr><tr><td>16</td><td>${90.35} \pm {0.20}$</td><td>${93.50} \pm {0.34}$</td><td>125.8</td><td>41.2</td><td>10.1</td></tr><tr><td>24</td><td>${88.39} \pm {0.46}$</td><td>${93.37} \pm {0.46}$</td><td>123.5</td><td>41.1</td><td>11.0</td></tr><tr><td rowspan="3">${M}_{2}$</td><td>2</td><td>${90.35} \pm {0.20}$</td><td>${93.50} \pm {0.34}$</td><td>125.8</td><td>41.2</td><td>10.1</td></tr><tr><td>4</td><td>${89.86} \pm {0.46}$</td><td>${93.46} \pm {0.27}$</td><td>125.7</td><td>41.5</td><td>10.2</td></tr><tr><td>8</td><td>${89.33} \pm {0.40}$</td><td>$\mathbf{{93.50} \pm {0.27}}$</td><td>124.7</td><td>40.9</td><td>10.5</td></tr><tr><td rowspan="3">F</td><td>2</td><td>${88.82} \pm {1.64}$</td><td>${93.51} \pm {0.17}$</td><td>124.6</td><td>41.3</td><td>9.69</td></tr><tr><td>4</td><td>${90.35} \pm {0.20}$</td><td>${93.50} \pm {0.34}$</td><td>125.8</td><td>41.2</td><td>10.1</td></tr><tr><td>8</td><td>${90.29} \pm {0.33}$</td><td>${93.42} \pm {0.18}$</td><td>125.2</td><td>41.2</td><td>11.0</td></tr></table>
Table 9: Sensitivity of N-cache sizes on Ubuntu.
JODIE [28], with source code provided here, is a method that learns the embeddings of evolving trajectories based on past interactions; its backbone is an RNN. It was proposed for bipartite networks, so we adapt the model to non-bipartite temporal networks using the TGN framework, with a time-embedding module and a vanilla RNN as the memory-update module. We use 100 dimensions for its dynamic embedding, which is around the same scale as the other models and provides a fair comparison on both performance and scalability.
DyRep [27], with source code provided here, proposes a two-time-scale deep temporal point process model that learns the dynamics of graphs both structurally and temporally. We use a gradient-clipping value of 100, and set both the hidden size and the embedding size to 100 for a fair comparison on both performance and scalability.
TGN [20], with source code provided here, is a very recent work as well. It does not perform as well as CAWN on certain datasets, but it runs much more efficiently. It keeps track of a memory state for each node and updates it with new interactions. We train TGN with 300 dimensions in total across the memory module, time feature, and node embedding, and we only sample first-hop neighbors, because training with second-hop neighbors takes much longer without significant performance improvement.
TGN-pg, with source code provided in the PyTorch Geometric library ${}^{9}$ here (this link gives an example use of the library code), has the same model design as TGN. However, it is much more efficient than TGN because it is more parallelized. Like TGN, we use 300 dimensions in total for all datasets except the largest, Wiki-talk: given the limited GPU memory (11 GB), we reduce the total to 75 dimensions so that the model fits in GPU memory.
TGAT, with source code provided here, is the analogue of GAT [58] for static graphs, which leverages the attention mechanism for graph message passing; TGAT additionally incorporates temporal encoding into the pipeline. Similar to CAWN, TGAT also has to sample neighbors from the history. We use 2 attention heads and 100 hidden dimensions, and tune with either 1 or 2 graph attention layers and sampling sizes between 20 and 64.
NAT Since our model provides a trade-off between performance and scalability, we tune it under an upper bound on the GPU memory we consider acceptable. The major parameters we tune are thus related to the N-cache sizes: ${M}_{1},{M}_{2}$ and $F$ . During tuning, we try to keep $\left( {{M}_{1} + {M}_{2}}\right) * F$ the same, and we make sure that NAT's GPU consumption stays at the same level as the baselines for all datasets. For example, for the large-scale dataset Wiki-talk, the GPU upper bound is estimated from the consumption of the other baselines as presented in Table 3. The resulting hyperparameter values are given in Table 7. We tune the number of attention heads in the final output layer from 1 to 8, and the overwriting probability for hashing collisions $\alpha$ from 0 to 1; we eventually keep $\alpha = {0.9}$ as it gives good results for all datasets. Regarding the choice of RNN, we test both GRU and LSTM; GRU performs better and runs faster.
### C.1 Inductive evaluation of NAT
Our evaluation pipeline for inductive learning differs from others by one added step. For sampling-based methods such as TGN [20] and TGAT [29], the entire training and evaluation data, including events masked for the inductive test, is accessible during inductive evaluation: they sample neighbors of test nodes from their historical interactions to obtain neighborhood information. NAT, however, does not depend on sampling; it relies on N-caches for quick access to neighborhood information, and therefore cannot build up N-caches for the masked nodes during training on inductive tasks. By the end of training, even though all historical events become accessible, NAT cannot leverage them unless they have been aggregated into the N-caches. Therefore, to ensure a fair comparison, after training, NAT processes the full train and validation data with all nodes unmasked and then processes the test data. Note that in this last pass over the full train and validation data, we no longer perform any training.
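This protocol can be sketched as follows; `reset_caches`, `update_caches`, and `predict` are hypothetical method names standing in for NAT's actual interfaces, so treat this as an illustration of the evaluation order rather than the real implementation.

```python
def inductive_evaluate(model, train_events, val_events, test_events):
    """Sketch of the inductive evaluation protocol: after training, replay
    the full (unmasked) train + validation stream only to rebuild the
    N-caches -- no parameter updates -- then score the test events."""
    model.reset_caches()
    for u, v, t in train_events + val_events:
        model.update_caches(u, v, t)  # aggregation only, no gradient steps
    return [model.predict(u, v, t) for (u, v, t) in test_events]
```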
---
${}^{9}$ https://github.com/pyg-team/pytorch_geometric
---
<table><tr><td>Method</td><td>Wikipedia</td><td>Reddit</td><td>Social E. $1\mathrm{\;m}$ .</td><td>Social E.</td><td>Enron</td><td>UCI</td><td>Ubuntu</td><td>Wiki-talk</td></tr><tr><td>TGN-TGL</td><td>${99.18} \pm {0.26}$</td><td>${99.67} \pm {0.05}$</td><td>${83.51} \pm {1.20}$</td><td>${86.14} \pm {1.45}$</td><td>${70.96} \pm {2.98}$</td><td>${86.99} \pm {2.69}$</td><td>${81.15} \pm {0.55}$</td><td>${86.60} \pm {0.32}$</td></tr><tr><td>NAT-2-hop</td><td>${98.68} \pm {0.04}$</td><td>${99.10} \pm {0.09}$</td><td>$\mathbf{{90.20} \pm {0.20}}$</td><td>$\mathbf{{91.75} \pm {0.37}}$</td><td>$\mathbf{{92.42} \pm {0.09}}$</td><td>$\mathbf{{93.92} \pm {0.15}}$</td><td>$\mathbf{{93.50} \pm {0.34}}$</td><td>-</td></tr><tr><td>NAT-1-hop</td><td>${98.60} \pm {0.04}$</td><td>${98.94} \pm {0.08}$</td><td>${88.07} \pm {0.13}$</td><td>${90.77} \pm {0.26}$</td><td>${90.67} \pm {0.13}$</td><td>${93.28} \pm {0.17}$</td><td>${93.48} \pm {0.34}$</td><td>${95.82} \pm {0.31}$</td></tr></table>
Table 10: Comparison of transductive average precision between TGN with TGL and NAT.
<table><tr><td/><td>Method</td><td>Train</td><td>Test</td><td>Total</td><td>RAM</td><td>GPU</td><td>Epoch</td></tr><tr><td rowspan="3">Ubuntu</td><td>TGN-TGL</td><td>100.5</td><td>38.3</td><td>1,506</td><td>40.8</td><td>19.0</td><td>7.0</td></tr><tr><td>NAT-2-hop</td><td>125.8</td><td>41.2</td><td>1,321</td><td>28.9</td><td>10.1</td><td>5.4</td></tr><tr><td>NAT-1-hop</td><td>111.3</td><td>35.7</td><td>927</td><td>21.9</td><td>9.95</td><td>3.0</td></tr><tr><td rowspan="2">Wiki-talk</td><td>TGN-TGL</td><td>809.7</td><td>310.0</td><td>9,157</td><td>43.8</td><td>26.5</td><td>3.7</td></tr><tr><td>NAT-1-hop</td><td>833.1</td><td>280.1</td><td>7,802</td><td>37.1</td><td>22.3</td><td>2.7</td></tr></table>
Table 11: Scalability evaluation on Ubuntu and Wiki-talk between TGN with TGL and NAT.
## D Additional Experiments
Further ablation study. We further conduct ablation experiments on other components related to modeling capability, as shown in Table 8. In Ablations 1, 2, 3, and 4, we remove the temporal encodings, replace the RNN with a linear layer, replace the final attention layer with mean aggregation, and remove distance encoding, respectively. All ablations yield worse results. For both datasets, removing distance encoding has a significant impact, as the model then fails to learn from joint neighborhood structures. Removing the RNN generally hurts more than removing the temporal encoding; we think this is because the RNN is critical for encoding temporal dependencies and can implicitly encode temporal information given a series of edges. Overall, we conclude that these modules are all helpful to some extent for achieving high performance.
More on sensitivity of N-cache sizes. We further test the sensitivity of the N-cache sizes on the Ubuntu dataset, as shown in Table 9. Similar to the study on Wiki-talk, the GPU memory cost scales almost linearly while the model running time fluctuates. This provides more evidence that a larger model size does not guarantee better prediction performance. As with Wiki-talk, Ubuntu only needs a tiny $F$ for the model to succeed.
## E One Concurrent Work
TGL [47] is a concurrent work that was published very recently. TGL proposes a general framework for large-scale temporal graph neural network training. It aims to maintain the same level of prediction accuracy as the baseline models while providing speedups in training and evaluation. Its major contribution is support for parallelization over multiple GPUs, which enables training on billion-scale data. The models this framework supports include TGN [20], JODIE [28], TGAT [29], etc. However, it neither supports joint neighborhood features nor is it extendable to our dictionary-type representations. We conduct some experiments to compare TGL with our model.
We pull the TGL framework from their repository. We compare NAT with TGN implemented in the framework, as it is the best-performing model they provide. Similar to TGN, we use embedding dimension 100 and follow the same setup as described in Sec. 5.1. We tune the sampling neighbor size between roughly 10 and 40; if different sizes give similar accuracy, we use the smaller size for the scalability comparison. We run TGN-TGL on a single GPU for a fair comparison with our model. Since TGL does not support inductive learning, we only evaluate the transductive tasks. Finally, we compare TGN-TGL not only with our baseline model but also with NAT using only the 1-hop N-cache. We document the prediction performance in Table 10 and the scalability metrics in Table 11. Although TGN-TGL gives marginally better scores on Wikipedia and Reddit, NAT performs much better on all other datasets $\left( {{5.6} - {21.5}\% }\right)$ . We think the reason is that, since both Wikipedia and Reddit have node and edge features, the ambiguity issue in the toy example of Fig. 1 is reduced; for the other datasets, however, TGN-TGL still suffers from failing to capture the structural features in the joint neighborhood.
In terms of scalability, TGN-TGL trains faster than NAT per epoch on both Ubuntu and Wiki-talk, though it uses more epochs and therefore longer total time. On Ubuntu, when the 2-hop N-cache is involved, NAT has a longer inference time than TGN-TGL; however, when only the 1-hop N-cache is used, TGN-TGL takes 7% and 11% longer than NAT on Ubuntu and Wiki-talk, respectively. TGN-TGL performs almost all training procedures on the GPU and leverages the multi-core CPU to parallelize the sampling of temporal neighbors. However, because it still has to sample neighbors, TGN-TGL is slower than NAT on large networks during testing.
papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_tex/Initial_manuscript.tex
§ NEIGHBORHOOD-AWARE SCALABLE TEMPORAL NETWORK REPRESENTATION LEARNING
Anonymous Author(s)
Anonymous Affiliation
Anonymous Email
§ ABSTRACT
Temporal networks have been widely used to model real-world complex systems such as financial systems and e-commerce systems. In a temporal network, the joint neighborhood of a set of nodes often provides crucial structural information useful for predicting whether they may interact at a certain time. However, recent representation learning methods for temporal networks often fail to extract such information or depend on online construction of structural features, which is time-consuming. To address this issue, this work proposes the Neighborhood-Aware Temporal network model (NAT). For each node in the network, NAT abandons the commonly used single-vector-based representation and adopts a novel dictionary-type neighborhood representation. Such a dictionary representation records a down-sampled set of the neighboring nodes as keys and allows fast construction of structural features for the joint neighborhood of multiple nodes. We also design a dedicated data structure, termed $N$ -cache, to support parallel access and update of these dictionary representations on GPUs. NAT is evaluated on seven real-world large-scale temporal networks. NAT not only outperforms all cutting-edge baselines by averages of ${5.9}\% \uparrow$ and ${6.0}\% \uparrow$ in transductive and inductive link prediction accuracy, respectively, but also remains scalable, achieving a speed-up of ${4.1} - {76.7} \times$ over the baselines that adopt joint structural features and a speed-up of ${1.6} - {4.0} \times$ over the baselines that cannot adopt those features. The link to the code: https://anonymous.4open.science/r/NAT-617D.
§ 1 INTRODUCTION
Temporal networks are widely used as abstractions of real-world complex systems [1]. They model interacting elements as nodes, interactions as links, and the times at which those interactions happen as timestamps on those links. Temporal networks often evolve by following certain patterns. Ranging from triadic closure [2] to higher-order motif closure [3-6], the interacting behaviors between multiple nodes have been shown to strongly depend on the network structure of their joint neighborhood. Researchers have leveraged this observation and built many practical systems to monitor and make predictions on temporal networks, such as anomaly detection in financial networks [7-9], friend recommendation in social networks [10], and collaborative filtering techniques in e-commerce systems [11].
Recently, graph neural networks (GNNs) have been widely used to encode network-structured data [12] and have achieved state-of-the-art (SOTA) performance in many tasks such as node/graph classification [13-15]. However, to predict how nodes interact with each other in temporal networks, a direct generalization of GNNs may not work well. Traditional GNNs often learn a vector representation for each node, and predict whether two nodes may interact (i.e., form a link) based on a combination (e.g., the inner product) of the two vector representations. This link prediction strategy often fails to capture the structural features of the joint neighborhood of the two nodes [16-19]. Consider the toy example of a temporal network in Fig. 1: node $w$ and node $v$ share the same local structure before ${t}_{3}$ , so GNNs, including their variants for temporal networks (e.g., TGN [20]), will associate $w$ and $v$ with the same vector representation. Hence, GNNs fail to correctly predict whether $u$ will interact with $w$ or $v$ at ${t}_{3}$ : they cannot capture the important joint structural feature that $u$ and $v$ have a common neighbor $a$ before ${t}_{3}$ . This issue makes almost all previous works that generalize GNNs to temporal networks provide only subpar performance [20-29]. Some recent works have been proposed to address this issue on static networks [18, 19, 30]. Their key idea is to construct node structural features to learn two-node joint neighborhood representations. Specifically, for two nodes of interest, they either label one linked node and construct its distance to the other node $\left\lbrack {{31},{32}}\right\rbrack$ , or label all nodes in the neighborhood with their distances to these two linked nodes $\left\lbrack {{18},{33}}\right\rbrack$ . Traditional GNNs can afterwards encode such a feature-augmented neighborhood to achieve better inference.
Although these ideas are theoretically powerful [18, 19] and provide good empirical performance on small networks, the induced models do not scale to large networks. This is because constructing such structural features is time-consuming and has to be done separately for each link to be predicted. This issue becomes even more severe on temporal networks, because two nodes may interact many times, so the number of links to be predicted is often much larger than the corresponding number in static networks.
Figure 1: A toy example of predicting how a temporal network evolves. Given the historical temporal network shown on the left, the task is to predict whether $u$ prefers to interact with $v$ or $w$ at timestamp ${t}_{3}$ . If this is a social network, (u, v) is likely to happen because $u,v$ share a common neighbor $a$ , following the principle of triadic closure [2]. However, traditional GNNs, even their generalizations to temporal networks, fail here, as they learn the same representations for node $v$ and node $w$ due to their identical structural contexts, as shown in the middle. On the right, we show a high-level abstraction of joint neighborhood features based on the N-caches of $\mathbf{u}$ and $\mathbf{v}$ : in the N-caches for the 1-hop neighborhoods of both node $u$ and node $v$ , $a$ appears among the keys. Joining these keys provides a structural feature that encodes such common-neighbor information, at least for prediction.
In this work, we propose the Neighborhood-Aware Temporal network model (NAT), which addresses the aforementioned modeling issue while keeping the model scalable. The key novelty of NAT is to replace the single-vector node representation with dictionary-type neighborhood representations, and to introduce a computation-friendly neighborhood cache (N-cache) to maintain these dictionary-type representations. Specifically, the N-cache of a node stores several size-constrained dictionaries on GPUs. Each dictionary has a sampled collection of historical neighbors of the center node as keys, and aggregates the timestamps and the features of the links connected to these neighbors as values (vector representations). With N-caches, NAT can construct the joint neighborhood structural features for a batch of node pairs in parallel to achieve fast link prediction. NAT can also efficiently update the N-caches with newly interacted neighbors by adopting hash-based search functions that support GPU-parallel computation.
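To make the N-cache idea concrete, here is a minimal sequential sketch (not the actual GPU implementation): each node owns a fixed number of slots, a neighbor ID is hashed to a slot, and on a collision the occupant is overwritten with probability $\alpha$ as described in the appendix. The class name `NCache` and the modulo "hash" are illustrative assumptions.

```python
import random

class NCache:
    """Toy sketch of a per-node neighbor cache: fixed slots per node,
    hash-addressed keys, probabilistic overwrite on collision."""

    def __init__(self, num_nodes, num_slots, alpha=0.9, seed=0):
        self.num_slots = num_slots
        self.alpha = alpha              # overwrite probability on collision
        self.rng = random.Random(seed)
        # keys[u][s] holds the neighbor ID cached in slot s of node u (None = empty)
        self.keys = [[None] * num_slots for _ in range(num_nodes)]

    def _slot(self, neighbor):
        return neighbor % self.num_slots  # stand-in for a real hash function

    def insert(self, u, neighbor):
        s = self._slot(neighbor)
        occupant = self.keys[u][s]
        if occupant is None or occupant == neighbor or self.rng.random() < self.alpha:
            self.keys[u][s] = neighbor

    def common_neighbors(self, u, v):
        """Join the key sets of two nodes -- the structural signal of Fig. 1."""
        a = {k for k in self.keys[u] if k is not None}
        b = {k for k in self.keys[v] if k is not None}
        return a & b
```

In the real model every slot also carries a vector value updated by an RNN, and lookups and updates run batched on the GPU; this sketch only shows the key-joining mechanism.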
NAT provides a novel solution for scalable temporal network representation learning. We evaluate NAT on 7 real-world temporal networks, one of which contains $1\mathrm{M} +$ nodes and almost ${10}\mathrm{M}$ temporal links, to evaluate the scalability of NAT. NAT outperforms cutting-edge baselines by averages of ${5.9}\% \uparrow$ and ${6.0}\% \uparrow$ in transductive and inductive link prediction accuracy, respectively. NAT achieves a ${4.1} - {76.7} \times$ speed-up compared to the baseline CAWN [34], which constructs joint neighborhood features based on random-walk sampling. NAT also achieves a ${1.6} - {4.0} \times$ speed-up over the fastest baselines that do not construct joint neighborhood features (and thus suffer from the issue in Fig. 1) on large networks.
§ 2 RELATED WORKS
Neighborhood structure often governs how temporal networks evolve over time. Early temporal network prediction models count motifs $\left\lbrack {{35},{36}}\right\rbrack$ or subgraphs $\left\lbrack {37}\right\rbrack$ in the historical neighborhood of two interacting objects as features to predict their future interactions. These models cannot use network attributes and often suffer from scalability issues, because counting combinatorial structures is complicated and hard to execute in parallel. Network-embedding approaches for temporal networks [38-42] suffer from a similar problem, because the optimization problem used to compute node embeddings is often too complex to be solved again and again as the network evolves.
Recent works based on neural networks often provide more accurate and faster models, benefiting from parallel computation hardware and scalable system support $\left\lbrack {{43},{44}}\right\rbrack$ for deep learning. Some of these works simply aggregate the sequence of links into network snapshots and treat temporal networks as a sequence of static network snapshots [21-26]. These methods may offer low prediction accuracy, as they cannot model interactions at different levels of time granularity.
More advanced methods deal with link streams directly [20, 27-29, 45-47]. They generalize GNNs to encode temporal networks by associating each node with a vector representation and updating it based on the nodes that it interacts with. Some works use the representation of the node that one is currently interacting with $\left\lbrack {{27},{28},{45}}\right\rbrack$ ; other works use those of the nodes that one has interacted with in the history $\left\lbrack {{20},{29},{46},{47}}\right\rbrack$ . Either way, these methods suffer from the limited power of GNNs to capture the structural features of the joint neighborhood of multiple nodes [17, 19]. Recently, CAWN [34] and HIT [4], inspired by the theory on static networks [18, 19], have proposed to construct such structural features to improve representation learning on temporal networks, CAWN for link prediction and HIT for higher-order interaction prediction. However, their computational complexity is high: for every queried link, they need to sample a large group of random walks and construct the structural features on CPUs, which limits the level of parallelism. In contrast, NAT addresses these problems via neighborhood representations and N-caches.
§ 3 PRELIMINARIES: NOTATIONS AND PROBLEM FORMULATION
In this section, we introduce notations and the problem formulation. We consider a temporal network as a sequence of timestamped interactions between pairs of nodes.
Definition 3.1 (Temporal network) A temporal network $\mathcal{E}$ can be represented as $\mathcal{E} = \left\{ {\left( {{u}_{1},{v}_{1},{t}_{1}}\right) ,\left( {{u}_{2},{v}_{2},{t}_{2}}\right) ,\cdots }\right\} ,{t}_{1} < {t}_{2} < \cdots$ , where ${u}_{i},{v}_{i}$ denote the interacting node IDs of the $i$ th link and ${t}_{i}$ denotes its timestamp. Each temporal link $(u, v, t)$ may have a link feature ${e}_{u,v}^{t}$ . We also denote the entire node set as $\mathcal{V}$ . Without loss of generality, we use integers as node IDs, i.e., $\mathcal{V} = \{ 1,2,\ldots \}$ .
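Definition 3.1 amounts to a very light data contract: a list of `(u, v, t)` triples sorted by strictly increasing time. A minimal validity check (an illustrative helper, not part of the paper) makes this explicit:

```python
def is_valid_temporal_network(events):
    """Check Definition 3.1: events are (u, v, t) triples with
    strictly increasing timestamps t1 < t2 < ..."""
    return all(t1 < t2
               for (_, _, t1), (_, _, t2) in zip(events, events[1:]))
```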
A good representation learning method for temporal networks should efficiently and accurately predict how the network evolves over time. Hence, we formulate our problem as follows.
Definition 3.2 (Problem formulation) Our problem is to learn a model that may use the historical information before $t$ , i.e., $\left\{ {\left( {{u}^{\prime },{v}^{\prime },{t}^{\prime }}\right) \in \mathcal{E} \mid {t}^{\prime } < t}\right\}$ , to accurately and efficiently predict whether there will be a temporal link $(u, v, t)$ between two nodes at time $t$ .
Next, we define neighborhood in temporal networks.
Definition 3.3 ( $k$ -hop neighborhood in a temporal network) Given a timestamp $t$ , denote a static network constructed by all the temporal links before $t$ as ${\mathcal{G}}_{t}$ . Remove all timestamps in ${\mathcal{G}}_{t}$ . Given a node $v$ , define $k$ -hop neighborhood of $v$ before time $t$ , denoted by ${\mathcal{N}}_{v}^{t,k}$ , as the set of all nodes $u$ such that there exists at least one walk of length $k$ from $u$ to $v$ over ${\mathcal{G}}_{t}$ . For two nodes $u,v$ , their joint neighborhood up-to $K$ hops refers to ${ \cup }_{k = 1}^{K}\left( {{\mathcal{N}}_{v}^{t,k} \cup {\mathcal{N}}_{u}^{t,k}}\right)$ .
§ 4 METHODOLOGY

In this section, we introduce NAT. NAT consists of two major components: (1) neighborhood representations maintained in N-caches, and (2) the construction of joint neighborhood features with neural-network-based encoding.

§ 4.1 NEIGHBORHOOD REPRESENTATIONS AND N-CACHES

In NAT, a node's representation is tracked over time by a fixed-size memory module, the N-cache, as the temporal network evolves. Fig. 2 Left gives an illustration. In contrast to all previous methods that adopt a single vector representation for each node $u$, NAT adopts neighborhood representations $\left( Z_u^{(0)}(t), Z_u^{(1)}(t), \ldots, Z_u^{(K)}(t) \right)$, where $Z_u^{(k)}(t)$ denotes the $k$-hop neighborhood representation, for $k = 0, 1, \ldots, K$. Note that these representations may evolve over time. For notational simplicity, the timestamps in these notations are omitted when they can be inferred from the context. The main goal of tracking these neighborhood representations is to enable efficient construction of structural features, which is detailed in Sec. 4.2. Next, we first explain these neighborhood representations from the modeling perspective and describe how they evolve over time. Then, we introduce the scalable implementation of N-caches.
Modeling. For a node $u$, the 0-hop representation, also termed the self-representation $Z_u^{(0)}$, simply works as the standard node representation of $u$. It gets updated via an RNN, $Z_u^{(0)} \leftarrow \mathbf{RNN}\left( Z_u^{(0)}, \left[ Z_v^{(0)}, t_3, e_{u,v} \right] \right)$, when node $u$ interacts with another node $v$, as shown in Fig. 2 Left. The remaining neighborhood representations are more involved. To give some intuition, we first introduce the 1-hop representation $Z_u^{(1)}$. $Z_u^{(1)}$ is a dictionary whose keys, denoted by $\operatorname{key}\left( Z_u^{(1)} \right)$, correspond to a down-sampled set of the (IDs of) nodes in the 1-hop neighborhood of $u$. For a node $a$ in $\operatorname{key}\left( Z_u^{(1)} \right)$, the dictionary value, denoted by $Z_{u,a}^{(1)}$, is a vector representation summarizing the previous interactions between $u$ and $a$. $Z_u^{(1)}$ gets updated as the temporal network evolves. For example, in Fig. 1, as $v$ interacts with $u$ at time $t_3$ with the link feature $e_{u,v}$, the entry in $Z_u^{(1)}$ that corresponds to $v$, i.e., $Z_{u,v}^{(1)}$, gets updated via an RNN, $Z_{u,v}^{(1)} \leftarrow \mathbf{RNN}\left( Z_{u,v}^{(1)}, \left[ Z_v^{(0)}, t_3, e_{u,v} \right] \right)$. If $Z_{u,v}^{(1)}$ does not yet exist in $Z_u^{(1)}$ (e.g., at the first $v, u$ interaction), a default initialization of $Z_{u,v}^{(1)}$ is used. Once updated, the new value $Z_{u,v}^{(1)}$, paired with the key (node ID) $v$, is inserted into $Z_u^{(1)}$.
| No. | Notation | Definition |
|---|---|---|
| 1 | $Z_u^{(k)}$ | A dictionary (with values $Z_{u,a}^{(k)}$, of size $M_k$) denoting the $k$-hop neighborhood representation of node $u$. |
| 2 | $Z_{u,a}^{(k)}$ | A vector (of length $F$ for $k \geq 1$) among the values of $Z_u^{(k)}$, representing node $a$ as a $k$-hop neighbor of $u$. |
| 3 | $s_u^{(k)}$ | An auxiliary array recording the node IDs currently stored as the keys of $Z_u^{(k)}$. |
| 4 | $\mathrm{DE}_u^t(a)$ | The distance encoding of node $a$ based on the keys of the N-caches of node $u$ at time $t$ (Eq. (1)). |
| 5 | $\operatorname{hash}(a)$ | The hash function mapping a node ID $a$ to the position of $Z_{u,a}^{(k)}$ in the $k$-hop N-cache of any node $u$. |
Figure 2: Neighborhood representations and joining neighborhood features & representations to make predictions. Left: Neighborhood representations of a node. Node $u$ interacts with $v$ at $t_3$ in the example in Fig. 1. The 0-hop (self) representation and 1-hop representations are updated based on $Z_v^{(0)}$. The 2-hop representations are updated by inserting $Z_v^{(1)}$. The $Z_u^{(k)}$'s are maintained in N-caches. Right: In the example of Fig. 1, to predict the link $\left( u, v, t_3 \right)$, the neighborhood representations of node $u$ and node $v$ are joined: the structural feature DE is constructed according to Eq. (1); the representations are sum-pooled according to Eq. (2). Then, an attention layer (Eq. (3)) is adopted to make the final prediction.
One remark is that for the input timestamps $t_i$, we adopt Fourier features to encode them before feeding them into the RNNs, i.e., with learnable parameters $\omega_i$, $1 \leq i \leq d$, $\text{T-encoding}(t) = \left[ \cos(\omega_1 t), \sin(\omega_1 t), \ldots, \cos(\omega_d t), \sin(\omega_d t) \right]$, which has proven useful for temporal network representation learning [4, 20, 29, 34, 48, 49].
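As a concrete sketch of the T-encoding (with fixed example frequencies standing in for the learnable $\omega_i$'s), the encoding maps a scalar timestamp to a $2d$-dimensional vector:

```python
import numpy as np

def t_encoding(t, omega):
    """Fourier time features: [cos(w1 t), sin(w1 t), ..., cos(wd t), sin(wd t)]."""
    # Pair cos/sin per frequency, then flatten to a single 2d-dim vector.
    return np.stack([np.cos(omega * t), np.sin(omega * t)], axis=-1).reshape(-1)

omega = np.array([1.0, 0.5, 0.1])   # stand-ins for learnable frequencies (d = 3)
enc = t_encoding(2.0, omega)
enc.shape                           # (6,) = 2 * d
```

In NAT and related models these frequencies are trained end-to-end; here they are hard-coded only to make the mapping concrete.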
A larger-hop ($k > 1$) neighborhood representation $Z_u^{(k)}$ is also a dictionary. Similarly, the keys of $Z_u^{(k)}$ correspond to the nodes that lie in the $k$-hop neighborhood of $u$. The update of $Z_u^{(k)}$ is as follows: if $u$ interacts with $v$, then $v$'s $(k-1)$-hop neighborhood by definition becomes part of the $k$-hop neighborhood of $u$ after the interaction. Given this observation, $Z_u^{(k)}$ can also be updated by using $Z_v^{(k-1)}$. However, we avoid using an RNN for the larger-hop update to reduce complexity. Instead, we directly insert $Z_v^{(k-1)}$ into $Z_u^{(k)}$, i.e., setting $Z_{u,a}^{(k)} \leftarrow Z_{v,a}^{(k-1)}$ for all $a \in \operatorname{key}\left( Z_v^{(k-1)} \right)$. If $Z_{u,a}^{(k)}$ already exists before the insertion, we simply replace it.
Next, we introduce the implementation of the above representations via N-caches. Readers who only care about the learning models can skip this part and go directly to Sec. 4.2. The maintenance of N-caches (i.e., the neighborhood representations) as the network evolves is summarized in Alg. 1.
Scalable Implementation. Neighborhood representations cannot be directly implemented via Python dictionaries if scalable maintenance is desired. Instead, we adopt the following three design techniques: (a) limiting the cache size; (b) parallelizing hash-maps; (c) addressing collisions.
Algorithm 1: N-cache construction and update $\left( \mathcal{V}, \mathcal{E}, \alpha \right)$

1: for $k$ from 0 to 2 (consider only two hops) do
2:   for $u$ in $\mathcal{V}$, in parallel, do
3:     Initialize fixed-size dictionaries $Z_u^{(k)}$ in GPU with key spaces $s_u^{(k)}$ and value spaces;
4: for $(u, v, t, e)$ in each mini-batch of $\mathcal{E}$, in parallel, do
5:   $Z_u^{(0)} \leftarrow \mathbf{RNN}\left( Z_u^{(0)}, \left[ Z_v^{(0)}, t, e \right] \right)$ // update 0-hop self-representation
6:   $Z_{\text{prev}} \leftarrow Z_{u,v}^{(1)}$ if $s_u^{(1)}\left[ \operatorname{hash}(v) \right]$ equals $v$, else $0$ // check whether $Z_{u,v}^{(1)}$ is recorded in $Z_u^{(1)}$
7:   if $s_u^{(1)}\left[ \operatorname{hash}(v) \right]$ equals $v$ or EMPTY, or $\operatorname{rand}(0,1) < \alpha$ then
8:     $s_u^{(1)}\left[ \operatorname{hash}(v) \right] \leftarrow v$, $Z_{u,v}^{(1)} \leftarrow \mathbf{RNN}\left( Z_{\text{prev}}, \left[ Z_v^{(0)}, t, e \right] \right)$ // update 1-hop nbr. representation
9:   for $w$ in $s_v^{(1)}$, in parallel, do
10:     if $s_u^{(2)}\left[ \operatorname{hash}(w) \right]$ equals $w$ or EMPTY, or $\operatorname{rand}(0,1) < \alpha$ then
11:       $s_u^{(2)}\left[ \operatorname{hash}(w) \right] \leftarrow w$, $Z_{u,w}^{(2)} \leftarrow Z_{v,w}^{(1)}$ // update 2-hop nbr. representations
12: repeat lines 5-11 with $(v, u, t, e)$
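The cache mechanics behind Alg. 1 (hashing, collision detection via $s_u^{(k)}$, and the $\alpha$-probability overwrite described below) can be sketched on the CPU as follows. The class and all names are ours, plain arrays stand in for GPU tensors, and the prime $q$ is an arbitrary choice:

```python
import numpy as np

Q = 1_000_003  # a fixed large prime for hashing (arbitrary example value)

class NCache:
    """Fixed-size keyed cache: s holds node IDs (keys), Z holds F-dim values."""
    EMPTY = -1

    def __init__(self, M, F, rng):
        self.M, self.rng = M, rng
        self.s = np.full(M, self.EMPTY, dtype=int)  # key array s_u^(k)
        self.Z = np.zeros((M, F))                   # value array Z_u^(k)

    def pos(self, a):
        return (Q * a) % self.M                     # hash(a) = (q * a) mod M_k

    def get(self, a):
        p = self.pos(a)
        return self.Z[p] if self.s[p] == a else None

    def put(self, a, value, alpha):
        p = self.pos(a)
        # Write if the slot holds a itself or is empty; on a collision with
        # another node, overwrite only with probability alpha.
        if self.s[p] in (a, self.EMPTY) or self.rng.random() < alpha:
            self.s[p] = a
            self.Z[p] = value

rng = np.random.default_rng(0)
cache = NCache(M=8, F=4, rng=rng)
cache.put(5, np.ones(4), alpha=0.5)
cache.get(5)    # → array([1., 1., 1., 1.])
```

With `alpha=1.0` a colliding key always evicts the occupant, which makes the behavior deterministic and easy to check; the real model keeps $\alpha \in (0,1]$ so that older neighbors survive with some probability.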
(a) Limiting size: In a real-world network, the size of a node's neighborhood typically follows a long-tailed distribution [50, 51], so it is irregular and memory-inefficient to record the entire neighborhood. Instead, we set an upper limit $M_k$ on the size of each $k$-hop representation $Z_u^{(k)}$, which means $Z_u^{(k)}$ may record only a subset of the nodes in the $k$-hop neighborhood of node $u$. This idea is inspired by previous works showing that structural features constructed from a down-sampled neighborhood are sufficient to provide good performance [34, 52]. To further decrease the memory overhead, we set each representation $Z_{u,a}^{(k)}$, $k \geq 1$, to be a vector of small dimension $F$. Overall, the memory overhead of the N-cache per node is $O\left( \sum_{k=1}^{K} M_k \times F \right)$. In our experiments, we consider at most $K = 2$ hops, and set the numbers of tracked neighbors $M_1, M_2 \in \left[ 2, 40 \right]$ and the size of each representation $F \in \left[ 2, 8 \right]$, which already gives very good performance. Based on the above design, the overall memory overhead is only a few hundred values per node, comparable to the commonly used cost of tracking a single large vector representation per node.
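For instance, at the upper end of the reported ranges ($M_1 = M_2 = 40$, $F = 8$), a back-of-the-envelope per-node footprint (this count is ours, assuming 4-byte entries) is:

```python
M1, M2, F = 40, 40, 8                  # cache sizes and representation width
values = (M1 + M2) * F                 # floats stored as cache values: 640
keys = M1 + M2                         # key-array entries s_u^(k): 80
bytes_per_node = (values + keys) * 4   # assuming 4 bytes per float32/int32 entry
bytes_per_node                         # → 2880, i.e., a few KB per node
```

This matches the paper's claim that the overhead is on the order of hundreds of values per node.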
(b) The hash-map: As NAT needs to frequently access N-caches, a fast implementation for looking up node IDs within N-caches in parallel is needed. To enable parallel search, we design GPU dictionaries to implement N-caches. Specifically, for every node $u$, we pre-allocate $O\left( M_k \times F \right)$ space in GPU RAM to record the values in $Z_u^{(k)}$. A hash function is adopted to access the values in $Z_u^{(k)}$: for a node $a$, we compute $\operatorname{hash}(a) \equiv (q * a) \pmod{M_k}$ for a fixed large prime number $q$ to decide the row index in $Z_u^{(k)}$ that records $Z_{u,a}^{(k)}$. Such simple hashing allows NAT to access multiple neighborhood representations in N-caches in parallel.
However, as the size $M_k$ of each N-cache is small, in particular smaller than the corresponding neighborhood, the hash-map may encounter collisions. To detect such collisions, we also pre-allocate $O\left( M_k \right)$ space in each N-cache $Z_u^{(k)}$ for an array $s_u^{(k)}$ recording the IDs of the nodes most recently stored in $Z_u^{(k)}$. Specifically, we use $s_u^{(k)}\left[ \operatorname{hash}(a) \right]$ to check whether node $a$ is a key of $Z_u^{(k)}$. If $s_u^{(k)}\left[ \operatorname{hash}(a) \right]$ equals $a$, then $Z_{u,a}^{(k)}$ is recorded at position $\operatorname{hash}(a)$ of $Z_u^{(k)}$. If $s_u^{(k)}\left[ \operatorname{hash}(a) \right]$ is neither $a$ nor EMPTY, position $\operatorname{hash}(a)$ of $Z_u^{(k)}$ records the representation of another node.
(c) Addressing collisions: When NAT encounters a collision while processing an evolving network, it resolves the collision in a random manner. Specifically, suppose we are to write $Z_{u,a}^{(k)}$ into $Z_u^{(k)}$. If another node $b$ satisfies $\operatorname{hash}(a) = \operatorname{hash}(b) = p$ and $Z_{u,b}^{(k)}$ has occupied position $p$ of $Z_u^{(k)}$, then we replace $Z_{u,b}^{(k)}$ by $Z_{u,a}^{(k)}$ (and simultaneously set $s_u^{(k)}\left[ \operatorname{hash}(a) \right] \leftarrow a$) with probability $\alpha$. Here, $\alpha \in (0, 1]$ is a hyperparameter. Although this random replacement strategy may sound heuristic, it is essentially equivalent to randomly sampling nodes from the neighborhood without replacement (random dropping $\leftrightarrow$ random sampling). Note that randomly sampling neighbors is a common strategy to scale up GNNs for static networks [53-55], so we essentially apply an idea of similar spirit to temporal networks. We find that a small size $M_k \left( \leq 40 \right)$ gives good empirical performance while keeping the model scalable, and NAT is relatively robust to a wide range of $\alpha$.
§ 4.2 JOINT NEIGHBORHOOD STRUCTURAL FEATURES AND NEURAL-NETWORK-BASED ENCODING

As illustrated by the toy example in Fig. 1, structural features from the joint neighborhood are critical to reveal how temporal networks evolve. Previous methods on static networks adopt distance encoding (DE) (or, more broadly, labeling tricks) to formulate these features [18, 19]. Recently, this idea has been generalized to temporal networks [34]. However, the model CAWN in [34] uses online random-walk sampling, which cannot be parallelized on GPUs and is thus extremely slow. Our design of N-caches addresses this problem. Fig. 2 Right illustrates the procedure.
NAT generates joint neighborhood structural features as follows. Suppose our prediction is made for a temporal link $(u, v, t)$. For every node $a$ in the joint neighborhood of $u$ and $v$ decided by their N-caches at timestamp $t$, i.e., $a \in \left[ \cup_{k=0}^{K} \operatorname{key}\left( Z_u^{(k)} \right) \right] \cup \left[ \cup_{k'=0}^{K} \operatorname{key}\left( Z_v^{(k')} \right) \right]$, we associate it with a DE

$$
\mathrm{DE}_{uv}^{t}(a) = \mathrm{DE}_{u}^{t}(a) \oplus \mathrm{DE}_{v}^{t}(a), \quad \text{where } \mathrm{DE}_{w}^{t}(a) = \left[ \chi\left[ a \in Z_{w}^{(0)} \right], \ldots, \chi\left[ a \in Z_{w}^{(K)} \right] \right], \; w \in \{ u, v \}. \tag{1}
$$
Here, $\chi\left[ a \in Z_{w}^{(i)} \right]$ is 1 if $a$ is among the keys of the N-cache $Z_w^{(i)}$ and 0 otherwise, and $\oplus$ denotes vector concatenation. For the example of predicting $\left( u, v, t_3 \right)$ in Fig. 1, the DEs of the four nodes $u, a, v, b$ are as shown in Fig. 2 Right. Note that $\mathrm{DE}_{uv}^{t_3}(a) = \left[ 0, 1, 0 \right] \oplus \left[ 0, 1, 0 \right]$ because $a$ appears in the keys of both $Z_u^{(1)}$ and $Z_v^{(1)}$, which further implies that $a$ is a common neighbor of $u$ and $v$.
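Eq. (1) thus reduces to set-membership lookups over the cache keys. A minimal sketch, with plain Python sets standing in for the N-cache key arrays (names ours):

```python
def de(a, key_sets_u, key_sets_v):
    """DE^t_uv(a): concatenated membership indicators over hops 0..K (Eq. 1)."""
    de_u = [int(a in keys) for keys in key_sets_u]  # chi[a in Z_u^(k)], k = 0..K
    de_v = [int(a in keys) for keys in key_sets_v]
    return de_u + de_v                              # vector concatenation (the ⊕)

# Example mirroring Fig. 1/2: a is a 1-hop neighbor of both u and v (K = 2).
keys_u = [{"u"}, {"a"}, set()]
keys_v = [{"v"}, {"a", "b"}, set()]
de("a", keys_u, keys_v)   # → [0, 1, 0, 0, 1, 0]
```

The output reproduces the $[0, 1, 0] \oplus [0, 1, 0]$ encoding for the common neighbor $a$ discussed above.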
Simultaneously, NAT also aggregates neighborhood representations for every node $a$ in the joint neighborhood of $u$ and $v$. Specifically, for node $a$, we aggregate the representations via a sum pool
$$
{Q}_{uv}^{t}\left( a\right) = \mathop{\sum }\limits_{{k = 0}}^{K}\mathop{\sum }\limits_{{w \in \{ u,v\} }}{Z}_{w,a}^{\left( k\right) } \times \chi \left\lbrack {a \in {Z}_{w}^{\left( k\right) }}\right\rbrack . \tag{2}
$$
Here, if $a$ is not among the keys of $Z_w^{(k)}$, then $\chi\left[ a \in Z_w^{(k)} \right] = 0$ and $Z_{w,a}^{(k)}$ does not participate in the aggregation. Both the DE (Eq. (1)) and the representation aggregation (Eq. (2)) can be computed for multiple node pairs in parallel on GPUs. We detail the parallel steps in Appendix A. After joining DE and neighborhood representations, for each link $(u, v, t)$ to be predicted, NAT has a collection of representations ${\Omega }_{u,v}^{t} = \left\{ \mathrm{DE}_{uv}^{t}(a) \oplus Q_{uv}^{t}(a) \mid a \in \mathcal{N}_{u,v}^{t} \right\}$.
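Eq. (2) is a masked sum over the cached vectors. A small sketch with Python dictionaries standing in for the N-caches (names and toy vectors ours):

```python
import numpy as np

def q_pool(a, caches_u, caches_v, dim=2):
    """Q^t_uv(a): sum of Z_{w,a}^(k) over w in {u, v} and hops k with a cached (Eq. 2)."""
    total = np.zeros(dim)
    for caches in (caches_u, caches_v):
        for hop in caches:            # hop: dict mapping node ID -> cached vector
            if a in hop:              # the chi[a in Z_w^(k)] mask
                total += hop[a]
    return total

caches_u = [{"u": np.array([1.0, 0.0])}, {"a": np.array([0.5, 0.5])}]
caches_v = [{"v": np.array([0.0, 1.0])}, {"a": np.array([1.0, 1.0])}]
q_pool("a", caches_u, caches_v)       # → array([1.5, 1.5])
```

Node `"a"` contributes its 1-hop entry from both sides, while nodes cached on only one side contribute a single term.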
Finally, we use attention to aggregate the collected representations in $\Omega_{u,v}^{t}$ to make the final prediction for the link $(u, v, t)$. Let MLP denote a multi-layer perceptron; we adopt

$$
\text{ logit } = \operatorname{MLP}\left( {\mathop{\sum }\limits_{{h \in {\Omega }_{u,v}^{t}}}{\alpha }_{h}\operatorname{MLP}\left( h\right) }\right) \text{ , where }\left\{ {\alpha }_{h}\right\} = \operatorname{softmax}\left( \left\{ {{w}^{T}\operatorname{MLP}\left( h\right) \mid h \in {\Omega }_{u,v}^{t}}\right\} \right) \text{ , } \tag{3}
$$
where $w$ is a learnable vector parameter. The logit can be plugged into the cross-entropy loss for training or compared with a threshold to make the final prediction.
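With the MLPs stripped down to identity maps, Eq. (3) is just attention-weighted pooling over the set $\Omega$; a numpy sketch (this simplification is ours, not the full model):

```python
import numpy as np

def attention_pool(H, w):
    """softmax(w^T h) weights over the rows of H, then the weighted sum (Eq. 3,
    with all MLPs replaced by identity maps for illustration)."""
    scores = H @ w                          # w^T h for each joined representation h
    alphas = np.exp(scores - scores.max())  # numerically stable softmax
    alphas /= alphas.sum()
    return alphas @ H                       # sum_h alpha_h * h

H = np.array([[1.0, 0.0],                   # two joined representations in Omega
              [0.0, 1.0]])
w = np.array([10.0, 0.0])                   # toy values for the learnable scoring vector
attention_pool(H, w)                        # ≈ the first row, which scores far higher
```

In the full model the pooled vector would pass through a final MLP to produce the logit.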
§ 5 EXPERIMENTS

In this section, we evaluate the performance and the scalability of NAT against a variety of baselines on real-world temporal networks. We further conduct an ablation study on the relevant modules and a hyperparameter analysis. Unless specified for comparison, the hyperparameters of NAT (such as $M_1, M_2, F, \alpha$) are detailed in Appendix C and Table 7 (in the Appendix).
§ 5.1 EXPERIMENTAL SETUP
Datasets. We use seven publicly available real-world datasets, whose statistics are listed in Table 1. Further details of these datasets can be found in Appendix B. We preprocess all datasets following prior literature. We transform the node and edge features of Wikipedia and Reddit into 172-dimensional feature vectors. For the other datasets, these features are set to zero since the datasets are non-attributed. We split each dataset into training, validation and testing data with a 70/15/15 ratio. For the inductive test, we sample the unique nodes in the validation and testing data with probability 0.1 and remove them and their associated edges from the networks during model training. We detail the procedure of inductive evaluation for NAT in Appendix C.1.
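The chronological 70/15/15 split can be sketched as follows (a simplification; the inductive node-removal step described above is omitted, and the function name is ours):

```python
def chrono_split(events, train=0.70, val=0.15):
    """Split time-ordered (u, v, t) events into train/val/test by ratio."""
    events = sorted(events, key=lambda e: e[2])   # ensure chronological order
    n = len(events)
    a, b = int(n * train), int(n * (train + val))
    return events[:a], events[a:b], events[b:]

events = [(0, 1, t) for t in range(20)]
tr, va, te = chrono_split(events)
len(tr), len(va), len(te)                         # → (14, 3, 3)
```

Splitting by time rather than at random is what makes the task a genuine forecasting problem: every test link is strictly later than all training links.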
Baselines. We run experiments against 6 strong baselines that represent the SOTA approaches for modeling temporal networks. Of the 6 baselines, CAWN [34], TGAT [29] and TGN [20] need to sample neighbors from the historical events, while JODIE [28] and DyRep [27] keep track of dynamic node representations to avoid sampling. CAWN is the only baseline that constructs neighborhood structural features. As we are interested in both prediction performance and model scalability, we also include an efficient implementation of TGN sourced from PyTorch Geometric (TGN-pg), a library built upon PyTorch that includes different variants of GNNs [56]. TGN is slower than TGN-pg because TGN in [20] does not process a batch fully in parallel while TGN-pg does. Additional details about the baselines can be found in Appendix C.
| Measurement | Wikipedia | Reddit | Social E. 1 mo. | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
|---|---|---|---|---|---|---|---|---|
| nodes | 9,227 | 10,985 | 71 | 74 | 184 | 1,899 | 159,316 | 1,140,149 |
| temporal links | 157,474 | 672,447 | 176,090 | 2,099,519 | 125,235 | 59,835 | 964,437 | 7,833,140 |
| static links | 18,257 | 78,516 | 2,457 | 4,486 | 3,125 | 20,296 | 596,933 | 3,309,592 |
| node & link attributes | 172 & 172 | 172 & 172 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 |
| bipartite | true | true | false | false | false | true | false | false |

Table 1: Summary of dataset statistics.
| Task | Method | Wikipedia | Reddit | Social E. 1 mo. | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
|---|---|---|---|---|---|---|---|---|---|
| Inductive | CAWN | 98.52 ± 0.04 | 98.19 ± 0.03 | 80.09 ± 1.89 | 50.00 ± 0.00* | 93.28 ± 0.01 | 80.37 ± 0.65 | 50.00 ± 0.00* | 50.00 ± 0.00* |
| | JODIE | 95.58 ± 0.37 | 95.96 ± 0.29 | 80.61 ± 1.55 | 81.13 ± 0.52 | 81.69 ± 2.21 | 86.13 ± 0.34 | 56.68 ± 0.49 | 65.89 ± 4.72 |
| | DyRep | 94.72 ± 0.14 | 97.04 ± 0.29 | 81.54 ± 1.81 | 52.68 ± 0.11 | 77.44 ± 2.28 | 68.38 ± 1.30 | 53.25 ± 0.03 | 51.87 ± 0.93 |
| | TGN | 98.01 ± 0.06 | 97.76 ± 0.05 | 86.00 ± 0.70 | 67.01 ± 10.3 | 75.72 ± 2.55 | 83.21 ± 1.16 | 62.14 ± 3.17 | 56.73 ± 2.88 |
| | TGN-pg | 94.91 ± 0.35 | 94.34 ± 3.22 | 63.44 ± 3.54 | 88.10 ± 4.81 | 69.55 ± 1.62 | 86.36 ± 3.60 | 79.44 ± 0.85 | 85.35 ± 2.96 |
| | TGAT | 97.25 ± 0.18 | 96.69 ± 0.11 | 54.66 ± 0.66 | 50.00 ± 0.00 | 57.09 ± 0.89 | 70.47 ± 0.59 | 54.73 ± 4.94 | 71.04 ± 3.59 |
| | NAT | **98.55 ± 0.09** | **98.56 ± 0.21** | **91.82 ± 1.91** | **95.16 ± 0.66** | 94.94 ± 1.15 | **92.46 ± 0.93** | **90.35 ± 0.20** | **93.81 ± 1.16** |
| Transductive | CAWN | 98.62 ± 0.05 | 98.66 ± 0.09 | 79.59 ± 0.21 | 50.00 ± 0.00* | 91.46 ± 0.35 | 82.84 ± 0.16 | 50.00 ± 0.00* | 50.00 ± 0.00* |
| | JODIE | 96.15 ± 0.36 | 97.29 ± 0.05 | 77.02 ± 1.11 | 69.30 ± 0.21 | 83.42 ± 2.63 | 91.09 ± 0.69 | 60.29 ± 2.66 | 75.00 ± 4.90 |
| | DyRep | 95.81 ± 0.15 | 98.00 ± 0.19 | 76.96 ± 4.05 | 51.14 ± 0.24 | 78.04 ± 2.08 | 72.25 ± 1.81 | 52.22 ± 0.02 | 62.07 ± 0.06 |
| | TGN | 98.57 ± 0.05 | 98.70 ± 0.03 | 88.72 ± 0.65 | 69.39 ± 10.50 | 80.87 ± 4.37 | 89.53 ± 1.49 | 53.80 ± 2.23 | 66.01 ± 4.79 |
| | TGN-pg | 97.26 ± 0.10 | 98.62 ± 0.07 | 66.39 ± 6.90 | 64.03 ± 8.97 | 80.85 ± 2.70 | 91.47 ± 0.29 | 90.56 ± 0.44 | 94.16 ± 0.09 |
| | TGAT | 96.65 ± 0.06 | 98.19 ± 0.08 | 58.10 ± 0.47 | 50.00 ± 0.00 | 61.25 ± 0.99 | 77.88 ± 0.31 | 55.46 ± 5.47 | 78.43 ± 2.15 |
| | NAT | **98.68 ± 0.04** | **99.10 ± 0.09** | **90.20 ± 0.20** | 94.43 ± 1.67 | **92.42 ± 0.09** | **93.92 ± 0.15** | **93.50 ± 0.34** | 95.82 ± 0.31 |

Table 2: Performance in average precision (AP) (mean in percentage ± 95% confidence level). Bold font and underline highlight the best and the second-best performance on average. *The under-performance of CAWN on Social E., Ubuntu and Wiki-talk may be caused by a recent code change due to a bug [57].
Regarding hyperparameters, if a dataset has been tested by a baseline, we use the set of hyperparameters provided in the corresponding paper. Otherwise, we tune the parameters such that similar components have sizes on the same scale, for example, matching the number of neighbors sampled and the embedding sizes. We also fix the training and inference batch sizes so that the comparison of training and inference time between different models is fair. For training, since CAWN uses 32 as the default batch size while the others use 200, we use 100, which lies between the two. For validation and testing, we use batch size 32 for all baselines. We also apply an early stopping strategy for all models and record the number of epochs and the total running time to converge. We further set a time limit of 10 hours for training; once that limit is reached, we use the best epoch so far for evaluation. More detailed hyperparameters are provided in Appendix C.
Hardware. We run all experiments on the same device, equipped with eight Intel Core i7-4770HQ CPUs @ 2.20GHz, 15.5 GiB RAM, and one GeForce GTX 1080 Ti GPU.
Evaluation Metrics. For prediction performance, we evaluate all models with Average Precision (AP) and Area Under the ROC Curve (AUC). In the main text, the prediction performance in all tables is evaluated in AP; the AUC results are given in the appendix. All results are summarized over 5 independent runs. For computational performance, the metrics include (a) average training and inference time (in seconds) per epoch, denoted as Train and Test respectively, (b) average total time (in seconds) of a model run, including training over all epochs and testing, denoted as Total, (c) the average number of epochs to convergence, denoted as Epoch, and (d) the maximum GPU memory and RAM occupancy percentages monitored throughout the entire process, denoted as GPU and RAM, respectively. We ensure that no other applications run during our evaluations.
§ 5.2 RESULTS AND DISCUSSION
Overall, our method achieves SOTA performance on all 7 datasets. The modeling capacity of NAT exceeds that of all baselines, and its training and inference time complexities are lower than or comparable to the fastest baselines. We provide a detailed analysis next.
Prediction Performance. We give the result of AP in Table 2 and AUC in Appendix Table 6.
| Dataset | Method | Train | Test | Total | RAM | GPU | Epoch |
|---|---|---|---|---|---|---|---|
| Wikipedia | CAWN | 1,006 | 174 | 11,845 | 30.2 | 58.0 | 6.7 |
| | JODIE | 28.8 | 30.6 | 1,482 | 28.3 | 17.9 | 19.1 |
| | DyRep | 32.4 | 32.5 | 1,681 | 28.3 | 17.8 | 21.5 |
| | TGN | 37.1 | 33.0 | 2,047 | 28.3 | 19.3 | 23.1 |
| | TGN-pg | 24.2 | 6.04 | 624.8 | 30.8 | 18.1 | 15.6 |
| | TGAT | 225 | 63.0 | 3,657 | 28.5 | 24.6 | 12.0 |
| | NAT | 21.0 | 6.94 | 154.4 | 29.1 | 12.1 | 2.6 |
| Reddit | CAWN | 2,983 | 812 | 17,056 | 38.8 | 41.2 | 16.3 |
| | JODIE | 234.4 | 176 | 8,082 | 36.4 | 23.7 | 15.3 |
| | DyRep | 252.9 | 184 | 7,716 | 33.3 | 24.3 | 12.7 |
| | TGN | 271.7 | 189 | 8,487 | 33.7 | 25.4 | 15.3 |
| | TGN-pg | 155.1 | 27.1 | 2,142 | 39.2 | 23.6 | 6.6 |
| | TGAT | 1,203 | 291 | 16,462 | 37.2 | 31.0 | 8.4 |
| | NAT | 90.6 | 28.5 | 771.3 | 37.7 | 18.5 | 3.0 |
| Ubuntu | CAWN | 1,066 | 222 | 5,385 | 38.9 | 17.4 | 1.0 |
| | JODIE | 66.70 | 2,860 | 76,220 | 35.3 | 18.7 | 5.5 |
| | DyRep | 2,195 | 2,857 | 39,148 | 38.5 | 16.6 | 1.0 |
| | TGN | 5,975 | 2,391 | 73,633 | 39.0 | 19.6 | 5.5 |
| | TGN-pg | 188.7 | 36.5 | 3,682 | 37.0 | 32.1 | 11.4 |
| | TGAT | 887 | 330 | 18,431 | 47.3 | 17.0 | 2.5 |
| | NAT | 125.8 | 41.2 | 1,321 | 28.9 | 10.1 | 5.4 |
| Wiki-talk | CAWN | 13,685 | 2,419 | 34,368 | 99.1 | 19.4 | 1.0 |
| | JODIE | 284,789 | 145,909 | 566,607 | 58.2 | 20.9 | 1.0 |
| | DyRep | 280,659 | 135,491 | 514,621 | 84.4 | 49.6 | 1.0 |
| | TGN | 281,267 | 136,780 | 534,827 | 77.9 | 24.1 | 1.0 |
| | TGN-pg | 1,236 | 311.5 | 12,761 | 60.9 | 59.0 | 5.1 |
| | TGAT | 6,164 | 2,451 | 186,513 | 65.0 | 17.6 | 16.0 |
| | NAT | 833.1 | 280.1 | 7,802 | 37.1 | 22.3 | 2.7 |

Table 3: Scalability evaluation on Wikipedia, Reddit, Ubuntu and Wiki-talk.
Figure 3: Convergence vs. wall-clock time on Reddit (left) and Wiki-talk (right). Each dot on the curves is collected per epoch.

Figure 4: Sensitivity (mean) of the overwriting probability $\alpha$ for hash-map collisions on Ubuntu (left) & Reddit (right).
On Wikipedia and Reddit, many baselines achieve high performance thanks to the informative attributes, yet NAT still gains marginal improvements. On Wikipedia, Reddit and Enron, CAWN outperforms all baselines in the inductive setting and most baselines in the transductive setting. We believe the reason is that it captures neighborhood structural information via its temporal random-walk sampling. However, we are not able to reproduce comparable scores on Social Evolve, Ubuntu and Wiki-talk, even when tuning the training batch size down to 32. We notice a recent code change to debug the CAWN implementation [57], which might be the cause of its under-performance.
TGN and its efficient implementation TGN-pg are strong baselines that do not construct structural features. On both large-scale datasets, Ubuntu and Wiki-talk, TGN-pg gives impressive results on transductive learning. However, NAT still outperforms it consistently. Furthermore, TGN-pg performs poorly on the inductive tasks on both datasets, while NAT gains an 8-11% lift on these tasks.
On Social Evolve, NAT significantly outperforms all baselines, by at least 25% on transductive and 7% on inductive predictions. From Table 1, we can see that Social Evolve has a small number of nodes but many interactions. This highlights one of the advantages of NAT on dense temporal graphs: NAT keeps a separate neighborhood representation for each of a node's neighbors, so older interactions are not squashed together with more recent ones into a single representation. Combined with N-caches, NAT can effectively denoise the dense history and extract neighborhood features.
Scalability. Table 3 shows that NAT trains much faster than all baselines. NAT's inference is also significantly faster than that of CAWN, the other method that constructs neighborhood structural features, achieving a 25-29× inference speedup on the attributed networks. NAT likewise achieves at least 4× faster inference than TGN, JODIE and DyRep. Compared to TGN-pg, NAT achieves comparable inference time in most cases, and about a 10% speedup on the largest dataset, Wiki-talk. This is because when the network is large, online sampling may dominate TGN-pg's time cost, so we expect NAT to show even better scalability on larger networks. Moreover, on the two large networks Ubuntu and Wiki-talk, NAT requires much less GPU memory. Note that albeit with just comparable or slightly better scalability, NAT significantly outperforms TGN-pg in prediction performance over all datasets.
Across all datasets, NAT does not need larger model sizes than the baselines to achieve better performance. More impressively, we observe that NAT uniformly requires fewer epochs to converge than all baselines, especially on larger datasets, which we attribute to the inductive power of the joint structural features. Because of this, the total runtime of the model is much shorter than that of the baselines on all datasets. Specifically, on the large datasets Ubuntu and Wiki-talk, NAT is more than three times as fast as TGN-pg. We also plot the model convergence vs. CPU/GPU wall-clock time on Reddit and Wiki-talk in Fig. 3.
| Ablation | Dataset | Inductive | Transductive | Train | Test | GPU |
|---|---|---|---|---|---|---|
| original method | Social E. | $95.16 \pm 0.66$ | $91.75 \pm 0.37$ | 281.0 | 89.0 | 8.88 |
| original method | Ubuntu | $90.35 \pm 0.20$ | $93.50 \pm 0.34$ | 125.8 | 41.2 | 10.1 |
| original method | Wiki-talk* | $93.81 \pm 1.16$ | $95.00 \pm 0.31$ | 833.1 | 280.1 | 22.3 |
| remove 2-hop N-cache | Social E. | $94.30 \pm 0.90$ | $90.77 \pm 0.26$ | 253.1 | 75.9 | 8.87 |
| remove 2-hop N-cache | Ubuntu | $89.45 \pm 1.04$ | $93.48 \pm 0.34$ | 111.3 | 35.7 | 9.95 |
| remove 1-&-2-hop N-cache | Social E. | $55.10 \pm 11.54$ | $62.12 \pm 3.53$ | 212.9 | 64.0 | 8.46 |
| remove 1-&-2-hop N-cache | Ubuntu | $85.11 \pm 0.23$ | $91.89 \pm 0.09$ | 98.1 | 29.5 | 9.07 |
| remove 1-&-2-hop N-cache | Wiki-talk | $86.54 \pm 3.87$ | $94.89 \pm 1.83$ | 409.5 | 125.4 | 16.2 |

Table 4: Ablation study on N-caches. *Original method for Wiki-talk does not use the second-hop N-cache.
| Param | Size | Inductive | Transductive | Train | Test | GPU |
|---|---|---|---|---|---|---|
| ${M}_{1}$ | 4 | $92.95 \pm 2.95$ | $95.26 \pm 0.49$ | 834.9 | 281.4 | 18.4 |
| ${M}_{1}$ | 8 | $\mathbf{93.96 \pm 0.91}$ | $95.39 \pm 0.28$ | 806.3 | 274.9 | 19.9 |
| ${M}_{1}$ | 12 | $92.67 \pm 0.82$ | $95.05 \pm 0.58$ | 818.2 | 277.6 | 21.0 |
| ${M}_{1}$ | 16 | $93.81 \pm 1.16$ | $95.82 \pm 0.31$ | 833.1 | 280.1 | 22.3 |
| ${M}_{1}$ | 20 | $93.40 \pm 0.50$ | $95.83 \pm 0.44$ | 841.3 | 284.8 | 23.8 |
| ${M}_{2}$ | 0 | $93.81 \pm 1.16$ | $95.82 \pm 0.31$ | 833.1 | 280.1 | 22.3 |
| ${M}_{2}$ | 2 | $92.91 \pm 1.01$ | $96.08 \pm 0.34$ | 960.5 | 330.9 | 22.7 |
| ${M}_{2}$ | 4 | $94.26 \pm 0.89$ | $\mathbf{96.29 \pm 0.09}$ | 935.3 | 322.9 | 23.8 |
| ${M}_{2}$ | 8 | $94.53 \pm 0.51$ | $95.90 \pm 0.07$ | 943.3 | 325.3 | 26.0 |
| $F$ | 2 | $90.86 \pm 2.52$ | $95.74 \pm 0.27$ | 843.6 | 284.0 | 18.5 |
| $F$ | 4 | $\mathbf{93.81 \pm 1.16}$ | $\mathbf{95.82 \pm 0.31}$ | 833.1 | 280.1 | 22.3 |
| $F$ | 8 | $93.55 \pm 0.93$ | $95.63 \pm 0.30$ | 828.7 | 281.1 | 26.2 |

Table 5: Sensitivity of N-cache sizes on Wiki-talk.
§ 5.3 FURTHER ANALYSIS
Ablation study. We conduct ablation studies on the effectiveness of the N-caches. Table 4 shows the results of removing the second-hop N-caches ${Z}_{u}^{\left( 2\right) }$ and removing both the first-hop and second-hop N-caches ${Z}_{u}^{\left( 1\right) },{Z}_{u}^{\left( 2\right) }$. As expected, dropping the N-caches reduces training time, inference time and GPU cost, but it also degrades prediction performance. Just removing ${Z}_{u}^{\left( 2\right) }$ can hurt performance by up to $1\%$. Removing both ${Z}_{u}^{\left( 1\right) }$ and ${Z}_{u}^{\left( 2\right) }$ while keeping only the self representation drops performance significantly, especially in inductive settings. Keeping only the self representation is analogous to baselines such as TGN that keep a memory state; however, since we use a smaller dimension, usually between 32 and 72, the self representation alone does not generalize well on these datasets. Ablation studies on other components, including joint neighborhood structural features, T-encoding, RNNs, and DE, are detailed in Table 8 (in the appendix).
Sensitivity of the sizes of N-caches. Since N-caches account for the major share of GPU memory consumption, we study how memory size correlates with model performance on Wiki-talk. We compare performance across different values of ${M}_{1}$, ${M}_{2}$ and $F$ of the N-caches. The baseline has ${M}_{1} = 16$, ${M}_{2} = 0$ and $F = 4$, and we study each parameter by fixing the other two. Table 5 details the changes in model performance. We run the same study for the Ubuntu dataset in Appendix Table 9.
We can see that the GPU memory cost scales roughly linearly with each parameter change. However, increasing the model size does not necessarily improve performance. Changing ${M}_{1}$ to either a smaller or a larger value may decrease both the transductive and the inductive performance. Increasing ${M}_{2}$ boosts the transductive performance but hurts the inductive performance; in general, the model is less sensitive to ${M}_{2}$ than to ${M}_{1}$. Lastly, a larger $F$ could overfit the model, as we see a slight drop in inductive prediction with the largest $F$. Overall, training and inference time remain stable thanks to the parallelization of NAT. Interestingly, with larger ${M}_{1}$ and ${M}_{2}$, we sometimes even see a decrease in running time. We hypothesize this is because larger caches avoid hash collisions and short-circuit N-cache overwriting steps.
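As a quick sanity check on the near-linear memory trend, one can fit least-squares slopes to the GPU column of Table 5 (the data points are copied from the table; the per-unit increments are our rough estimates, not figures reported in the paper):

```python
# GPU memory (as reported in Table 5) vs. each N-cache size parameter,
# with the other two parameters fixed at the Wiki-talk baseline.
m1 = {4: 18.4, 8: 19.9, 12: 21.0, 16: 22.3, 20: 23.8}
m2 = {0: 22.3, 2: 22.7, 4: 23.8, 8: 26.0}
f = {2: 18.5, 4: 22.3, 8: 26.2}

def slope(points):
    """Least-squares slope: memory growth per unit of the parameter."""
    xs, ys = zip(*sorted(points.items()))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in points.items())
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(round(slope(m1), 2))  # ~0.33 GPU units per extra first-hop slot
print(round(slope(m2), 2))  # ~0.48 per extra second-hop slot
print(round(slope(f), 2))   # ~1.24 per extra feature dimension
```

The roughly constant increments within each row group are consistent with the claim that memory grows close to linearly in each cache-size parameter.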
Sensitivity of the overwriting probability $\alpha$. We also vary $\alpha$ to study whether the N-cache refresh frequency affects prediction quality. Here, we use a large dataset, Ubuntu, and a medium dataset, Reddit; results are shown in Fig. 4. For Ubuntu, we change the original sizes to ${M}_{1} = 4$, ${M}_{2} = 1$, $F = 4$, and for Reddit to ${M}_{1} = 16$, ${M}_{2} = 2$, $F = 8$, to increase the number of potential collisions so that the effect of $\alpha$ can be better observed. On both datasets, we see an overall trend that a larger $\alpha$ gives better transductive performance. However, always replacing old neighbors ($\alpha = 1$) is slightly worse than the optimal $\alpha$. This pattern shows that the neighborhood information has to be kept up to date to achieve good performance, while some randomness is useful because it preserves interactions over more diverse time ranges. The inductive performance is relatively more sensitive to the choice of $\alpha$. We did not find a case where using two different probabilities for replacing ${Z}_{u}^{\left( 1\right) }$ and ${Z}_{u}^{\left( 2\right) }$ significantly benefits model performance, so for simplicity we use a single $\alpha$ for N-caches of different hops.
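The role of $\alpha$ can be illustrated with a minimal single-probe sketch of a hash-map cache (a hypothetical simplification of the N-cache update rule; the function name and hashing scheme are ours, not the paper's exact mechanism):

```python
import random

def ncache_insert(cache, num_slots, neighbor, rep, alpha, rng=random):
    """Alpha-overwrite rule for a fixed-size hash-map cache (sketch).

    The slot is chosen by hashing the neighbor id. On a collision with
    a different (typically older) neighbor, the occupant is replaced
    with probability `alpha`; otherwise the new interaction is dropped.
    """
    slot = hash(neighbor) % num_slots
    occupant = cache.get(slot)
    if occupant is None or occupant[0] == neighbor or rng.random() < alpha:
        cache[slot] = (neighbor, rep)
    return cache

# alpha = 1 always refreshes colliding slots; alpha = 0 never does.
cache = ncache_insert({}, 4, 5, [0.2], alpha=0.5)
```

Under this rule, $\alpha$ trades freshness for diversity: a small $\alpha$ keeps older neighbors around longer, while $\alpha = 1$ always favors the newest interaction, matching the trend observed in Fig. 4.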
§ 6 CONCLUSION AND FUTURE WORKS
In this work, we proposed NAT, the first method that adopts dictionary-type node representations to track node neighborhoods in temporal networks. Such representations support efficient construction of the neighborhood structural features that are crucial for predicting how a temporal network evolves. NAT also introduces N-caches to manage these representations in a parallel manner. Our extensive experiments demonstrate the effectiveness of NAT in both prediction performance and scalability. In the future, we plan to extend NAT to process even larger networks, so large that GPU memory cannot hold the entire network.