diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HL9lo3_Ft-9/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HL9lo3_Ft-9/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b979e6ea88b6343e5fe6aec03314e62b37f394d
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HL9lo3_Ft-9/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,251 @@
+# FeatureOnto: A Schema on Textual Features for Social Data Analysis
+
+Sumit Dalal ${}^{1 * }$ , Sarika Jain ${}^{1}$ and Mayank Dave ${}^{2}$
+
+${}^{1}$ Department of Computer Applications, National Institute of Technology, Kurukshetra, India ${}^{2}$ Computer Engineering, National Institute of Technology, Kurukshetra, India
+
+sumitdala19050@gmail.com
+
+Abstract. Social media is a valuable source of information that presents a wealth of data to researchers. This information is mainly analyzed with machine learning and deep learning methods, whose outputs lack semantics and interpretability and which devote much attention to feature engineering. We present a taxonomy of the different feature categories. The categories relate to the features learned during training for analyzing textual information, specifically the information available on social platforms. The ontological view of the data represents knowledge in a more understandable form and supports the interpretation of machine learning results for various tasks related to social data analysis. We chose depression as the use case. The ontology is designed using the Web Ontology Language (OWL) and the Resource Description Framework (RDF) in Protégé, and it is validated with a set of designed competency questions.
+
+Keywords: Deep Learning, Depression, Knowledge Graph, Machine learning, Ontology, Social Data, Twitter.
+
+## 1 Introduction
+
+Mental health is an essential aspect of living a productive and energetic life. People tend to ignore their mental health for various reasons, such as inaccessible health services or limited time for themselves. Nevertheless, technological advancements give researchers an opportunity for pervasive monitoring that includes users' social data in their mental health assessment without interfering with their daily life. People share their feelings, emotions, and daily activities related to work and family on social media platforms (Facebook, Twitter, Reddit). These posts can be used to extract features, or to look for particular words and phrases, that help assess whether a user has depression.
+
+Various machine learning and deep learning methods have been devised and applied for mental health assessment from users' social data. These techniques mainly consider correlational or structural information of the text for classification and miss the contextual information of the domain. Analyzing social data with traditional statistical and machine learning approaches has limitations such as poor big data handling capacity and the lack of semantics and contextual/background knowledge. Deep learning approaches have recently become widespread, but interpretability remains a significant issue. Therefore, a hybrid approach that handles both semantics and big data should be considered for better results.
+
+Contextual information can be represented by a logic-based model [McCarthy, J. 1993], key-value pairs [Schilit, B., 1994], an object-oriented model [Schmidt, A., 1999], a UML diagram [Sheng, Q. Z., & Benatallah, B. 2005], or a markup schema. Nevertheless, these models have limited capacity for representing real-world situations. We propose to develop an ontology to represent the domain information. An ontology is a formalization of a domain's knowledge [Gruber, T. R. 1995]; its main principle is to share and reuse domain knowledge between agents (users or software) in a language understandable to them. Ontologies have been developed and used in different application domains. [Konjengbam A. 2018, Wang D. 2018] design ontologies for analyzing user reviews on social media. [Malik S. & Jain S. 2021, Allahyari M. 2014] employ ontologies for text document classification, while [Taghva, K., 2003] uses an ontology for email classification. [Dutta B. & DeBellis M. 2020, Patel A. 2021] develop ontologies for collecting and analyzing COVID-19 data. [Magumba, M. A., & Nabende, P. 2016] develop an ontology for disease event detection from Twitter. [Chowdhury, S., & Zhu, J. 2019] use topic modeling to extract essential topics from transportation planning documents for constructing an intelligent transportation infrastructure planning ontology. However, ontology-based techniques for depression classification and monitoring from social data have been insufficiently studied.
+
+Machine learning and statistical approaches consider limited contextual information, and their results are not easy to interpret; deep learning models, in particular, are considered complete black boxes. Personalization of the system is another issue that needs attention. For implementation purposes, we chose depression as the domain. We aim to develop an underlying ontology for a personalized, disease-specific knowledge graph to monitor a depressive user through their publicly available textual social data. The ontology is designed using the Web Ontology Language (OWL) and the Resource Description Framework (RDF) in Protégé and is validated with a set of designed competency questions.
+
+Our contributions in this paper are as follows:
+
+1. We develop the FeatureOnto ontology for analyzing social media posts. The features of social posts manipulated by machine learning and deep learning techniques are arranged in a taxonomy. In this way, the structured data helps in interpreting the produced output.
+
+2. We write competency questions to describe the scope of FeatureOnto to detect and monitor depression through social media posts.
+
+The remainder of the paper is organized as follows. Section 2 discusses the related literature. The FeatureOnto development approach and its scope are discussed in Section 3. Section 4 discusses the conceptual design of FeatureOnto and its evaluation. Conclusions and future work are discussed in the last section.
+
+## 2 Literature
+
+This section discusses previous research on developing depression ontologies from various sources or on employing available ontologies for depression detection or monitoring.
+
+## a. Ontology Based Sentiment Analysis.
+
+Sentiment analysis is a crucial aspect of detecting depression from social posts, but it is also useful in applications other than mental health assessment. Sentiment extraction from user posts/reviews is a popular application that considers the affective features of the posts. The authors of [Sykora, M., 2013] consider eight emotion categories to develop an emotion ontology for sentiment classification. [Saif, H., 2012] employ entity extraction tools for extracting entities and mapping semantic concepts from user reviews; they use the extracted semantic features together with unigrams for Twitter sentiment analysis. [Kardinata E. A. 2021] apply an ontology-based approach for sentiment analysis.
+
+## b. Ontology in Healthcare Domain.
+
+In the healthcare domain, ontologies have been employed for quite a long time. [Batbaatar, E., & Ryu, K. H. 2019] employ the Unified Medical Language System (UMLS) ontology to extract health-related named entities from user tweets. [Krishnamurthy, M. 2016] use the DBpedia, Freebase, and YAGO2 ontologies to determine the behavioral addiction category of social media users. In [Kim, J., & Chung, K. Y. 2014], the authors develop an ontology as a bridge between device- and space-specific ontologies for a ubiquitous and personalized healthcare service environment. [Lokala, U., 2020] build an ontology as a catalog of drug abuse, use, and addiction concepts for social data investigation. [On, J., 2019] extract concepts and their relations from clinical practice guidelines, literature, and social posts to build an ontology for social media sentiment analysis on childhood vaccination. [Alamsyah, A., 2018] build an ontology with personality traits and their facets as classes and subclasses, respectively, for personality measurement from Twitter posts. [Ali, F., 2021] design a monitoring framework for diabetes and blood pressure patients that considers various available ontologies in the medical domain together with the patient's medical records, wearable sensor data, and social data.
+
+## c. Depression monitoring & Ontology.
+
+Authors employ ontologies for depression diagnosis and monitoring; they either build an ontology or use available ones. [Martín-Rodilla, Patricia 2020] propose adding a temporal dimension to an ontology for analyzing a depressed user's linguistic patterns and the ontology's evolution over time in their social data. [Benfares, C. 2018] represent explicitly defined patient data, self-questionnaires, and diagnosis results in a semantic network for preventing and detecting depression among cancer patients. [Birjali, M. 2017] construct a vocabulary of suicide-related themes and divide them into subclasses according to the degree of threat; WordNet is further used for semantic analysis of machine learning predictions of suicide sentiments on Twitter. Some works that build an ontology for depression diagnosis are discussed below. We assign each ontology a unique ID such as O1, O2, etc.; these IDs are used in Table 2 to refer to the particular ontology.
+
+O1. [Petry, M. M. 2020] provide a ubiquitous framework based on an ontology to assist the treatment of people suffering from depression. The ontology consists of concepts related to the user's depression, person, activity, and depression symptoms. Activity has subclasses related to social network, email, and geographical activities. Person has a subclass PersonType, which further classifies a person as User, Medical, or Auxiliary. It is not clear whether patient history is considered.
+
+O2. [Kim, H. H., 2018] extract concepts and their relationships from posts on DailyStrength to develop the OntoDepression ontology for depression detection. They use tweets of family caregivers of Alzheimer's disease patients. OntoDepression has four main classes: Symptoms, Treatments, Feelings, and Life. Symptoms are categorized into general, medical, physical, and mental. Feelings represent positive and negative aspects. The Life class captures what the family caregivers are talking about, and Treatments represents concepts of medical treatment.
+
+O3. [Jung, H., 2016/2017] develop an ontology from clinical practice guidelines and related literature to detect depression in adolescents from their social data. The ontology consists of five main classes: measurement, diagnostic result, management care, risk factors, and signs & symptoms.
+
+O4. [Chang, Y. S., 2015] build an ontology for depression diagnosis using Bayesian networks. The ontology consists of three main classes: Patient, Disease, and Depression_Symptom. Depression symptoms are categorized into 36 symptoms.
+
+O5. [Hu, B., 2010] develop an ontology based on Cognitive Behavioral Therapy (CBT) to diagnose depression among online users; their focus is on lowering the access threshold of online CBT. The ontology consists of patient, doctor, patient record, and treatment diary concepts.
+
+Work in [Cao, L., et al. 2020] creates an ontology for social media users to detect suicidal ideation from their personal knowledge graph. Their work is similar to ours, but they consider a limited feature taxonomy; moreover, we focus on depression detection from a personalized knowledge graph.
+
+Table 2 compares the distinct ontologies built in different research papers to detect and monitor depression on four parameters (Main Classes, Dimensions Covered, Entities Source Considered, and Availability & Re-usability). We extracted seven dimensions (Activity, Clinical Record, Patient Profile, Physician Profile, Sensor Data, Social Posts, and Social Profile) from the related literature. A description of each dimension, along with its dimension ID, is given in Table 1. We could not find the ontologies built by other authors online and are not sure whether they are available for reuse, so their Availability & Re-usability entries are left blank. The O1 ontology has scope over almost all the dimensions we have considered.
+
+Table1. Description of different dimensions considered
+
+
+| Dimension | Dimension ID | Description |
+| --- | --- | --- |
+| Activity | D1 | This facet covers physical movements, social platform use, daily life activities, etc. |
+| Clinical Record | D2 | Related to the patient profile; provides historical context and covers clinical tests, physician observations, treatment diary, schedules, etc. |
+| Patient Profile | D3 | This dimension covers disease symptoms, education, work condition, economic status, relationship status, family background, etc. |
+| Physician Profile | D4 | This aspect describes a physician in terms of expertise, experience, etc. |
+| Sensor Data | D5 | This element is related to smartphone, body, and background sensors. |
+| Social Posts | D6 | It is affiliated with the content of posts by a user on SNSs. |
+| Social Profile | D7 | The social media profile provides an essential aspect of user personality. |
+
+Table2. Comparison of the FeatureOnto and depression ontologies used in literature
+
+| Ontology | Main Classes | Dimensions Covered | Entities Source Considered | Availability & Re-usability |
+| --- | --- | --- | --- | --- |
+| O1 | Depression, Symptom, Activity | D1, D2, D3, D4, D5, D6 | Literature | ... |
+| O2 | Symptoms, Treatments, Life, Feelings | D1, D6 | SNSs | ... |
+| O3 | Diagnostics, Subtypes, Risk Factors, Sign & Symptoms, Intervention | D6 | CPG, Literature, SNSs, FAQs | ... |
+| O4 | Patient, Disease, Symptom | D2, D3 | Literature | ... |
+| O5 | Patient, Doctor, Activity, Diagnosis, Treatment Diary | D1, D2, D3, D4 | General Scenario | ... |
+| Our Approach | Patient, Symptom, Posts, User Profile, Feature | D2, D3, D6, D7 | Literature | Yes |
+
+## 3 Designing FeatureOnto Ontology
+
+The focus of the ontology development is to analyze social textual data and to interpret the results produced by machine learning or deep learning models. Authors mainly focus on n-gram features of social media posts, but FeatureOnto also considers other features. We follow the 'Ontology Development 101' methodology for FeatureOnto development [Noy, N. F., & McGuinness, D. L. 2001], and an iterative process is followed throughout the ontology lifecycle.
+
+## Step 1. Determining Domain and Scope of the Ontology
+
+We create a list of competency questions to determine the ontology's domain and scope [Grüninger, M., & Fox, M. S. 1995]. The FeatureOnto ontology should be able to answer these questions, e.g., What are the textual features of social media posts? The ontology will be evaluated against these questions. Tables 3a and 3b provide a sample of the competency questions: Table 3a is derived to check the ontology schema, i.e., the ontology without any instances, whereas the questions in Table 3b are derived with the use case of depression monitoring of a social user in mind. The queries of Table 3b are out of scope for this paper, as here we only present the schema.
+
+Table3a. Schema Based Competency Questions.
+
+Competency Questions
+
+1. Retrieve the labels for every subclass of the class Content?
+
+2. "Topics" is the subclass of?
+
+3. What type of feature is "Anger"?
+
+Table3b. Knowledge Graph Based Competency Questions.
+
+| Competency Questions |
+| --- |
+| 1. What is the sleeping pattern of a user/patient (the user can be normal or a patient)? |
+| 2. In which hours does the user message frequently? |
+| 3. How many posts have low valence in a week? |
+| 4. What is the emotional behavior pattern, considering a week as a unit? |
+| 5. What is the daily/weekly average frequency of negative emotions? |
+| 6. Compare the daily/weekly/overall average number of first-person pronouns and second-/third-person pronouns. |
+| 7. What are the topics of interest for a depressed user? |
+| 8. Are anger-related words used frequently or not? |
+| 9. Find the pattern of psycholinguistic features. |
+
+## Step 2. Re-using the Existing Ontologies
+
+We searched for available conceptual frameworks and ontologies on social data analysis in BioPortal [Musen, M. A. 2012], the OBO Foundry, and the LOD cloud. Ontologies representing sentiment analysis, depression classification, or other social media analysis tasks were also searched for on the web (Google Scholar, PubMed) and in the literature for the required concepts and relationships. We performed a comprehensive search but could not find a suitable ontology that could be fully re-used. We did, however, find some ontologies from which one or more classes could be inherited; most of the inherited classes are given attributes as per our requirements. Table 4 shows our efforts toward implementing the re-usability principle of the semantic web, and Figure 2, presented in the next section, gives a diagrammatic representation of the inherited entities. In that figure, different colors identify each schema, and the solid and dotted lines show immediate and remote child-parent relations between classes. Most inherited entities belong to Schema, MFOEM, and HORD, while APAONTO and Obo are the least inherited ontologies. We did not find suitable classes for UniGrams, BiGrams, Emoticon, and POSTags, so we use our own schema to represent these classes (a small sketch of how such inheritance can be declared is given after Table 4).
+
+Table4. Entities and Namespaces considered in the FeatureOnto.
+
+| Entity | Sub Entities | Schema Selected | Available Schemas |
+| --- | --- | --- | --- |
+| Content | UniGrams, BiGrams, POSTags | ... | ... |
+| Emoticon | ... | ... | ... |
+| Emotion | Arousal, Positive, Negative | MFOEM | MFOEM, SIO, VEO |
+| Emotion | Dominance | APAONTO | APAONTO, FB-CV |
+| GenderType | ... | Schema | Schema, GND |
+| Person | Patient | Schema | FOAF, Schema, Wikidata, DUL |
+| Person | User | HORD | NCIT, SIO, HORD |
+| Post | ... | HORD | HORD |
+| Psycholinguistic | Anger, Anxiety, Sad | MFOEM | MFOEM, SIO, VEO, NCIT |
+| Psycholinguistic | Pronoun | ... | ... |
+| Symptoms | ... | Obo | NCIT, SYMP, RADLEX, Obo |
+| Topic | ... | ... | EDAM, ITO |
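+
+To illustrate how this kind of re-use can be declared in practice, the sketch below builds a tiny fragment of the schema with the Python library rdflib. It is only a minimal illustration under assumptions: the `sf:` namespace stands in for the (intentionally unspecified) FeatureOnto namespace, and the external MFOEM and Schema.org class IRIs are placeholders rather than the exact classes selected for FeatureOnto.
+
+```python
+from rdflib import Graph, Literal, Namespace
+from rdflib.namespace import OWL, RDF, RDFS
+
+# Placeholder namespaces: 'sf' stands in for the unspecified FeatureOnto
+# namespace; the external IRIs below are illustrative only.
+SF = Namespace("http://example.org/featureonto#")
+MFOEM = Namespace("http://purl.obolibrary.org/obo/MFOEM_")
+SCHEMA = Namespace("https://schema.org/")
+
+g = Graph()
+g.bind("sf", SF)
+
+# Declare a few FeatureOnto classes and their internal hierarchy.
+for name in ("Feature", "Content", "UniGrams", "BiGrams", "POSTags",
+             "Psycholinguistic", "Anger", "Patient"):
+    g.add((SF[name], RDF.type, OWL.Class))
+    g.add((SF[name], RDFS.label, Literal(name)))
+for name in ("UniGrams", "BiGrams", "POSTags"):  # own schema, no external class found
+    g.add((SF[name], RDFS.subClassOf, SF.Content))
+g.add((SF.Psycholinguistic, RDFS.subClassOf, SF.Feature))
+g.add((SF.Anger, RDFS.subClassOf, SF.Psycholinguistic))
+
+# Re-use: link own concepts to external ontology classes via rdfs:subClassOf.
+g.add((SF.Anger, RDFS.subClassOf, MFOEM["000009"]))   # hypothetical MFOEM emotion class
+g.add((SF.Patient, RDFS.subClassOf, SCHEMA.Patient))  # schema.org Patient class
+
+print(g.serialize(format="turtle"))
+```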
+
+## Step 3. Extracting Terms and Concepts
+
+Keeping our use case in mind, we read the literature on depression and mental disorder detection from social data using machine learning or lexicon-based approaches and extracted the terms related to the features considered for classification. We found that different textual features are extracted and learned in the machine learning or deep learning training phase [Dalal, S., 2019, Dalal, S., & Jain, S. 2021], e.g., bigrams, unigrams, and positive or negative sentiment words. Table 4 shows the different entities and sub-entities present in the FeatureOnto ontology; it also provides information about the various schemas available for an entity and the schema used for inheritance. We also searched social networking data to extract additional terms. The extracted terms are used for describing the class concepts.
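+
+As a rough illustration of the kind of features these extracted terms describe, the sketch below computes a handful of them (unigram and bigram counts, first-person pronoun frequency, and anger-related word hits) for a single post. The mini-lexicons are made up for the example; they are not the resources used in the cited works.
+
+```python
+import re
+from collections import Counter
+
+# Tiny illustrative lexicons; real pipelines would rely on resources such as
+# LIWC-style psycholinguistic categories or an emotion lexicon.
+FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
+ANGER_WORDS = {"angry", "hate", "furious", "rage"}
+
+def extract_features(post: str) -> dict:
+    """Compute a few of the textual features organized in FeatureOnto."""
+    tokens = re.findall(r"[a-z']+", post.lower())
+    unigrams = Counter(tokens)
+    bigrams = Counter(zip(tokens, tokens[1:]))
+    return {
+        "UniGrams": unigrams,
+        "BiGrams": bigrams,
+        "Pronoun_first_person": sum(unigrams[w] for w in FIRST_PERSON),
+        "Psycholinguistic_anger": sum(unigrams[w] for w in ANGER_WORDS),
+    }
+
+print(extract_features("I hate how tired I feel, I barely slept last night"))
+```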
+
+## Step 4. Developing the Ontology and Terminology
+
+We defined the classes and the class hierarchy using a top-down approach. The ontology is developed using Protégé [Musen, M. A. 2015] and has been uploaded to BioPortal.
+
+## Step 5. Evaluating the Scope of the Ontology
+
+A set of competency questions is given in Tables 3a and 3b. For the scope evaluation of FeatureOnto, the answers to the SPARQL queries built from the questions in Table 3a are considered. The results of these queries are discussed in the next section.
+
+## 4 FeatureOnto Ontology Model
+
+Following the steps discussed in the previous section, we design the FeatureOnto ontology. A high-level view of the FeatureOnto ontology is presented in Figure 1. The complete FeatureOnto structure (at the current stage) has five dimensions (Patient, Symptom, Posts, User Profile, and Feature) covered by the various classes in the figure; most of the entities in our ontology belong to the Social Posts dimension. In Figure 1, which gives the conceptual schema of the proposed model, the solid and dotted lines represent the property and the subclass relationship between two entities. FeatureOnto reuses existing ontologies to pursue this basic principle of ontology implementation. Figure 2 shows the terms inherited by FeatureOnto from the available schemas: different colors identify each schema, and the solid and dotted lines show immediate and remote child-parent relations between classes. Most inherited entities belong to Schema and MFOEM, while FOAF and APAONTO are the least inherited ontologies.
+
+## Scope Evaluation of the FeatureOnto.
+
+Tables 3a and 3b present the competency questions related to the schema and the instances, respectively. This work is concerned with building the schema only, and hence we executed queries on the schema only. Below, the queries are built from the questions in Table 3a.
+
+Question1. Retrieve the labels for every subclass of sf:Content?
+
+Query. PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX sf: <…>
+
+SELECT ?subClass ?label WHERE \{ ?subClass rdfs:subClassOf sf:Content . ?subClass rdfs:label ?label . \}
+
+Results. POSTags, UniGrams, BiGrams
+
+Question2. "Topics" is the subclass of (find immediate parent)?
+
+Query. PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX ns: <…>
+
+SELECT ?superClass WHERE \{ ns:Topics rdfs:subClassOf ?superClass . \}
+
+Results. Feature.
+
+Question3. What type of feature is "Anger" (find all parents)?
+
+Query. PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX ns: <…>
+
+SELECT ?superClass WHERE \{ ns:Anger rdfs:subClassOf* ?superClass . \}
+
+Results. Psycholinguistic, Feature.
+
+The ontology is still under construction, but a prototype is available at https://github.com/sumitnitkkr. For generality, we have not mentioned any namespace here for our own entities.
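+
+A minimal sketch of how these competency-question queries can be executed programmatically with rdflib is given below. It assumes the schema has been exported from Protégé as a Turtle file; the file name and the sf: namespace IRI are placeholders, since the namespace is intentionally left unspecified above.
+
+```python
+from rdflib import Graph
+
+g = Graph()
+g.parse("FeatureOnto.ttl", format="turtle")  # hypothetical Turtle export of the schema
+
+# Competency question 1: labels of every subclass of sf:Content.
+QUERY = """
+PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
+PREFIX sf:   <http://example.org/featureonto#>   # placeholder namespace
+
+SELECT ?subClass ?label WHERE {
+  ?subClass rdfs:subClassOf sf:Content .
+  ?subClass rdfs:label ?label .
+}
+"""
+
+for row in g.query(QUERY):
+    print(row.subClass, row.label)
+```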
+
+
+
+Figure2. Classes Inherited from the available Ontologies.
+
+
+
+## 5 Conclusion
+
+We developed the FeatureOnto ontology to provide a taxonomy of the features of social media posts, with mental health assessment (depression classification/monitoring) as the use case. Posts carry a large amount of information regarding many aspects, and this information can be placed into different feature categories. These features are widely used in sentiment analysis, mental health assessment, event detection, user profiling, document classification, and other natural language and image processing tasks. The ontology will be used to create a personalized depression knowledge graph in the future; for this reason, it does not focus on concepts from clinical practice guidelines and the depression literature at the current stage. We will also extend the ontology to include other depression-related concepts in the future.
+
+## References
+
+1. Alamsyah, A., Putra, M. R. D., Fadhilah, D. D., Nurwianti, F., & Ningsih, E. (2018, May). Ontology modelling approach for personality measurement based on social media activity. In 2018 6th International Conference on Information and Communication Technology (ICoICT) (pp. 507-513). IEEE.
+
+2. Ali, F., El-Sappagh, S., Islam, S. R., Ali, A., Attique, M., Imran, M., & Kwak, K. S. (2021). An intelligent healthcare monitoring framework using wearable sensors and social networking data. Future Generation Computer Systems, 114, 23-43.
+
+3. Allahyari, M., Kochut, K. J., & Janik, M. (2014, June). Ontology-based text classification into dynamically defined topics. In 2014 IEEE international conference on semantic computing (pp. 273-278). IEEE.
+
+4. Batbaatar, E., & Ryu, K. H. (2019). Ontology-based healthcare named entity recognition from twitter messages using a recurrent neural network approach. International journal of environmental research and public health, 16(19), 3628.
+
+5. Benfares, C., Idrissi, Y. E. B. E., & Hamid, K. (2018, July). Personalized healthcare system based on ontologies. In International Conference on Advanced Intelligent Systems for Sustainable Development (pp. 185-196). Springer, Cham.
+
+6. Birjali, M., Beni-Hssane, A., & Erritali, M. (2017). Machine learning and semantic sentiment analysis based algorithms for suicide sentiment prediction in social networks. Procedia Computer Science, 113, 65-72.
+
+7. Cao, L., Zhang, H., & Feng, L. (2020). Building and using personal knowledge graph to improve suicidal ideation detection on social media. IEEE Transactions on Multimedia.
+
+8. Ceusters, W., & Smith, B. (2010). Foundations for a realist ontology of mental disease. Journal of biomedical semantics, 1(1), 1-23.
+
+9. Chang, Y. S., Fan, C. T., Lo, W. T., Hung, W. C., & Yuan, S. M. (2015). Mobile cloud-based depression diagnosis using an ontology and a Bayesian network. Future Generation Computer Systems, 43, 87-98.
+
+10. Chowdhury, S., & Zhu, J. (2019). Towards the ontology development for smart transportation infrastructure planning via topic modeling. In ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction (Vol. 36, pp. 507-514). IAARC Publications.
+
+11. Dalal, S., Jain, S., & Dave, M. (2019, December). A systematic review of smart mental healthcare. In Proceedings of the 5th International Conference on Cyber Security & Privacy in Communication Networks (ICCS).
+
+12. Dalal, S., & Jain, S. (2021). Smart mental healthcare systems. In Web Semantics (pp. 153- 163). Academic Press.
+
+13. Dutta, B., & DeBellis, M. (2020). CODO: an ontology for collection and analysis of COVID-19 data. arXiv preprint arXiv:2009.01210.
+
+14. Gruber, T. R. (1995). Toward principles for the design of ontologies used for knowledge sharing?. International journal of human-computer studies, 43(5-6), 907-928.
+
+15. Grüninger, M., & Fox, M. S. (1995). Methodology for the design and evaluation of ontologies.
+
+16. Gyrard, A., Gaur, M., Shekarpour, S., Thirunarayan, K., & Sheth, A. (2018). Personalized health knowledge graph.
+
+17. Hadzic, M., Chen, M., & Dillon, T. S. (2008, November). Towards the mental health ontology. In 2008 IEEE International Conference on Bioinformatics and Biomedicine (pp. 284-288). IEEE.
+
+18. Huang, Z., Yang, J., Harmelen, F. V., & Hu, Q. (2017, October). Constructing knowledge graphs of depression. In International conference on health information science (pp. 149- 161). Springer, Cham.
+
+19. Hu, B., Hu, B., Wan, J., Dennis, M., Chen, H. H., Li, L., & Zhou, Q. (2010). Ontology-based ubiquitous monitoring and treatment against depression. Wireless communications and mobile computing, 10(10), 1303-1319.
+
+20. Jung, H., Park, H., & Song, T. M. (2016). Development and evaluation of an adolescents' depression ontology for analyzing social data. In Nursing Informatics 2016 (pp. 442-446). IOS Press.
+
+21. Jung, H., Park, H. A., & Song, T. M. (2017). Ontology-based approach to social data sentiment analysis: detection of adolescent depression signals. Journal of medical internet research, 19(7), e7452.
+
+22. Kardinata, E. A., Rakhmawati, N. A., & Zuhroh, N. A. (2021, April). Ontology-Based Sentiment Analysis on News Title. In 2021 3rd East Indonesia Conference on Computer and Information Technology (EIConCIT) (pp. 360-364). IEEE.
+
+23. Kim, J., & Chung, K. Y. (2014). Ontology-based healthcare context information model to implement ubiquitous environment. Multimedia Tools and Applications, 71(2), 873-888.
+
+24. Kim, H. H., Jeong, S., Kim, A., & Shin, D. (2018). Analyzing Twitter Data of Family Caregivers of Alzheimer's Disease Patients Based on the Depression Ontology. In Advances in Computer Science and Ubiquitous Computing (pp. 30-35). Springer, Singapore.
+
+25. Krishnamurthy, M., Mahmood, K., & Marcinek, P. (2016, August). A hybrid statistical and semantic model for identification of mental health and behavioral disorders using social network analysis. In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 1019-1026). IEEE.
+
+26. Konjengbam, A., Dewangan, N., Kumar, N., & Singh, M. (2018). Aspect ontology based review exploration. Electronic Commerce Research and Applications, 30, 62-71.
+
+27. Lokala, U., Daniulaityte, R., Lamy, F., Gaur, M., Thirunarayan, K., Kursuncu, U., & Sheth, A. P. (2020). Dao: An ontology for substance use epidemiology on social media and dark web. JMIR Public Health and Surveillance.
+
+28. Lytvyn, V., Vysotska, V., Veres, O., Rishnyak, I., & Rishnyak, H. (2017). Classification methods of text documents using ontology based approach. In Advances in Intelligent Systems and Computing (pp. 229-240). Springer, Cham.
+
+29. Magumba, M. A., & Nabende, P. (2016). Ontology Driven Disease Incidence Detection on Twitter. arXiv preprint arXiv:1611.06671.
+
+30. Malik, S., & Jain, S. (2021, February). Semantic ontology-based approach to enhance text classification. In International Semantic Intelligence Conference, Delhi, India. 25-27 Feb 2021. CEUR Workshop Proceedings (Vol. 2786, pp. 85-98).
+
+31. Martín-Rodilla, P. (2020). Adding temporal dimension to ontology learning models for depression signs detection from social media texts. In ENASE (pp. 323-330).
+
+32. McCarthy, J. (1993). Notes on formalizing context.
+
+33. Musen, M. A. (2015). The protégé project: a look back and a look forward. AI matters, 1(4), 4-12.
+
+34. Musen, M. A., Noy, N. F., Shah, N. H., Whetzel, P. L., Chute, C. G., Story, M. A., ... & NCBO team. (2012). The national center for biomedical ontology. Journal of the American Medical Informatics Association, 19(2), 190-195.
+
+35. Noy, N. F., & McGuinness, D. L. (2001). Ontology development 101: A guide to creating your first ontology.
+
+36. On, J., Park, H. A., & Song, T. M. (2019). Sentiment analysis of social media on childhood vaccination: development of an ontology. Journal of medical Internet research, 21(6), e13456.
+
+37. Patel, A., Debnath, N. C., Mishra, A. K., & Jain, S. (2021). Covid19-IBO: a Covid-19 impact on Indian banking ontology along with an efficient schema matching approach. New Generation Computing, 39(3), 647-676.
+
+38. Petry, M. M., Barbosa, J. L. V., Rigo, S. J., Dias, L. P. S., & Büttenbender, P. C. (2020). Toward a ubiquitous model to assist the treatment of people with depression. Universal Access in the Information Society, 19(4), 841-854.
+
+39. Rastogi, N., & Zaki, M. J. (2020). Personal Health Knowledge Graphs for Patients. arXiv preprint arXiv:2004.00071.
+
+40. Saif, H., He, Y., & Alani, H. (2012, November). Semantic sentiment analysis of twitter. In International semantic web conference (pp. 508-524). Springer, Berlin, Heidelberg.
+
+41. Schilit, B., Adams, N., & Want, R. (1994, December). Context-aware computing applications. In 1994 first workshop on mobile computing systems and applications (pp. 85-90). IEEE.
+
+42. Schmidt, A., Beigl, M., & Gellersen, H. W. (1999). There is more to context than location. Computers & Graphics, 23(6), 893-901.
+
+43. Sheng, Q. Z., & Benatallah, B. (2005, July). ContextUML: a UML-based modeling language for model-driven development of context-aware web services. In International Conference on Mobile Business (ICMB'05) (pp. 206-212). IEEE.
+
+44. Singla, S. (2020). Role of Ontology in Health Care. Ontology-Based Information Retrieval for Healthcare Systems, 1-18.
+
+45. Sykora, M., Jackson, T., O'Brien, A., & Elayan, S. (2013). Emotive ontology: Extracting fine-grained emotions from terse, informal messages.
+
+46. Taghva, K., Borsack, J., Coombs, J., Condit, A., Lumos, S., & Nartker, T. (2003, April). Ontology-based classification of email. In Proceedings ITCC 2003. International Conference on Information Technology: Coding and Computing (pp. 194-198). IEEE.
+
+47. Wang, D., Xu, L., & Younas, A. (2018, July). Social Media Sentiment Analysis Based on Domain Ontology and Semantic Mining. In International Conference on Machine Learning and Data Mining in Pattern Recognition (pp. 28-39). Springer, Cham.
+
+48. Wei, D. H., Kang, T., Pincus, H. A., & Weng, C. (2019). Construction of disease similarity networks using concept embedding and ontology. Studies in health technology and informatics, 264, 442.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..6eec58e05cfdd89bf7ccdcbbf3dfe2b019abeadb
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,163 @@
+# Devising Mapping Interoperability with Mapping Translation
+
+Ana Iglesias-Molina ${}^{1}$ , Andrea Cimmino ${}^{1}$ and Oscar Corcho ${}^{1}$
+
+${}^{1}$ Ontology Engineering Group, Universidad Politécnica de Madrid
+
+## Abstract
+
+Nowadays, Knowledge Graphs are extensively created using very different techniques, mapping languages among them. The wide variety of use cases, data peculiarities, and potential uses has had a substantial impact on how these languages have been created, extended, and applied, and this situation is closely related to the global adoption of these languages and their associated tools. The large number of languages and compliant tools, and often the lack of information about how the two combine, leads users to resort to other techniques to construct Knowledge Graphs. Often, users choose to create their own ad hoc programming scripts that suit their needs. This choice is normally less reproducible and maintainable, which ultimately affects the quality of the generated RDF data, particularly in long-term scenarios. With mapping translation, we devise an enhancement to the interoperability of existing mapping languages. This position paper analyses the possible language translation approaches, presents the scenarios in which it is being applied, and discusses how it can be implemented.
+
+## Keywords
+
+Mapping languages, Ontology Description, Mapping Translation
+
+## 1. Introduction
+
+Knowledge Graphs (KGs) are increasingly used in academia and industry to represent and manage the growing amount of data on the Web [1]. A large number of techniques to create KGs have been proposed. These techniques generally follow one of two approaches: RDF materialization, which consists of translating data from one or more heterogeneous sources into RDF; or virtualization (Ontology-Based Data Access) [2], which consists of translating a SPARQL query into one or more equivalent queries that are distributed and executed on the original data source(s), with the results transformed back into the SPARQL results format [3]. Both approaches rely on an essential element, a mapping document, which is the key enabler for performing these translations.
+
+Mapping languages represent the relationships between the structure or model of heterogeneous data and an RDF version following an ontology, i.e., the rules on how to translate non-RDF data into RDF. This data can be originally expressed in a variety of formats, such as tabular, JSON, or XML. Due to the heterogeneous nature of data, the wide corpus of techniques, and the specific requirements that some scenarios may impose, an increasing number of mapping languages have been proposed [4, 5]. The differences among them are usually based on three aspects: (a) the focus on one or more particular data formats, e.g., the W3C Recommendation R2RML focuses on SQL tabular data [6]; (b) a specific feature they address, e.g., SPARQL-Generate [7] allows the definition of functions in the mapping for cleaning or linking the generated RDF data; or (c) whether they are designed for a particular technique or scenario with special requirements, e.g., the WoT-mappings [8], which were designed as an extension of the WoT standard [9].
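+
+As a simple illustration of the kind of rules such languages express, the following is a minimal R2RML mapping that turns rows of a relational PERSON table into instances of ex:Person (a sketch; the table, columns, and the ex: vocabulary are invented for this example):
+
+```turtle
+@prefix rr: <http://www.w3.org/ns/r2rml#> .
+@prefix ex: <http://example.com/ns#> .
+
+# Map each row of the PERSON table to an ex:Person resource.
+<#PersonMap> a rr:TriplesMap ;
+    rr:logicalTable [ rr:tableName "PERSON" ] ;
+    rr:subjectMap [ rr:template "http://example.com/person/{ID}" ; rr:class ex:Person ] ;
+    rr:predicateObjectMap [
+        rr:predicate ex:name ;
+        rr:objectMap [ rr:column "NAME" ]
+    ] .
+```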
+
+---
+
+Third International Workshop On Knowledge Graph Construction, Co-located with the ESWC 2022, Crete - 30th May 2022
+
+ana.iglesiasm@upm.es (A. Iglesias-Molina); andreajesus.cimmino@upm.es (A. Cimmino); oscar.corcho@upm.es (O. Corcho)
+
+ORCID: 0000-0001-5375-8024 (A. Iglesias-Molina), 0000-0002-1823-4484 (A. Cimmino), 0000-0002-9260-0753 (O. Corcho). © Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
+
+---
+
+As a result, the diversity of mapping languages allows the construction of KGs from heterogeneous data sources in many different scenarios. Current mapping languages may be categorized by their schema: RDF-based (e.g., R2RML [6] and its extensions, CSVW [10]), SPARQL-based (e.g., SPARQL-Generate [7], SPARQL-Anything [11]), or based on other schemas (e.g., ShExML [12], Helio mappings ${}^{1}$ ). Nevertheless, the existing techniques usually implement just one mapping language, and sometimes not even the whole language specification [13]. Deciding which language and technique should be used in each scenario becomes a costly task, since the choice of one language may not cover all the required features [14]. Some scenarios require a combination of mapping languages because of their differential features, which entails using different techniques. In many cases, this diversity leads to ad hoc solutions that reduce reproducibility, maintainability, and reusability [15].
+
+The increasing and heterogeneous emergence of new use cases still motivates the community to keep developing solutions that are, more commonly than desired, not compatible with existing ones. This position paper develops the concept of mapping translation, proposed by Corcho et al. [16], which can enhance the interoperability among existing mapping languages and thus improve the user experience of these technologies by allowing communication and understanding among them. This paper presents some approaches for language translation, shows the current situations in which mapping translation is being applied and their benefits, and proposes different techniques to extend it to more languages.
+
+The remainder of this article is structured as follows: Section 2 provides some insights about language translation and the situations in which it is being applied. Section 3 proposes three different techniques to address mapping translation at a larger scale. Finally, Section 4 draws some conclusions about the concepts presented in the paper.
+
+## 2. Mapping translation: Context
+
+In this section, we introduce mapping translation by describing some approaches to language translation, and present a set of scenarios in which mapping translation has been applied. We assume the reader is familiar with current mapping languages and their general characteristics.
+
+---
+
+${}^{1}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv--xml-and-json-files-using-rml
+
+---
+
+
+
+Figure 1: Types of language translations (Adapted from [18]).
+
+### 2.1. Approaches to language translation
+
+In the context of language translation, there are several approaches that carry out translations among a set of languages. Depending on the situation at hand, an approach can be advantageous with respect to the other ones. We highlight the following [17]:
+
+Peer-to-peer translation (Fig. 1a) supports ad hoc translation solutions between pairs of languages. This may seem the most straightforward approach, requiring the development of only the translator services needed for the situation at hand, with the possibility of adjusting each of them ad hoc. However, it becomes increasingly infeasible as the number of required translations grows.
+
+Common interchange language (Fig. 1b) uses a language that serves as an intermediary among several languages. This approach reduces the number of translator services that need to be developed and is the most feasible of the three to scale. It involves creating (or, with luck, already having) a language able to represent the expressiveness of all languages, to avoid information loss. Additionally, it implies that there are common patterns shared by the languages independently of their representation, and that an abstract manner of gathering them is possible, which may not be the case for highly heterogeneous languages.
+
+Family of languages (Fig. 1c) considers sets of languages and translations between the representatives of each set. This approach stands out in situations where there are clear subgroups of languages that are similar among themselves but differ from the languages of other groups.
+
+### 2.2. Mapping translation scenarios
+
+Regarding mapping languages, there are currently some implementations that unidirectionally translate between pairs of mapping languages. ShExML and YARRRML, in their respective online editors ${}^{2,3}$ , enable translation to RML. Another case is when tools implement RML/R2RML mapping translation into the language they are designed to parse; such is the case of Helio ${}^{4}$ and SPARQL-Generate ${}^{5}$ , which translate from RML to their respective languages, and Ontop [19], which translates R2RML into its proprietary language, OBDA mappings [20]. These translations make it possible to extend the outreach of a tool, since they enable using it without the need to learn its specific language, relying instead on one that is widely used and extended, such as R2RML or RML.
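+
+As a rough impression of what the RML produced by such translators looks like, the following sketch expresses a simple CSV-based mapping (the file, columns, and the ex: vocabulary are invented; RML generalizes R2RML's logical table to a logical source over arbitrary formats):
+
+```turtle
+@prefix rr:  <http://www.w3.org/ns/r2rml#> .
+@prefix rml: <http://semweb.mmlab.be/ns/rml#> .
+@prefix ql:  <http://semweb.mmlab.be/ns/ql#> .
+@prefix ex:  <http://example.com/ns#> .
+
+# Map each row of people.csv to an ex:Person resource.
+<#PersonMap> a rr:TriplesMap ;
+    rml:logicalSource [ rml:source "people.csv" ; rml:referenceFormulation ql:CSV ] ;
+    rr:subjectMap [ rr:template "http://example.com/person/{id}" ; rr:class ex:Person ] ;
+    rr:predicateObjectMap [
+        rr:predicate ex:name ;
+        rr:objectMap [ rml:reference "name" ]
+    ] .
+```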
+
+---
+
+${}^{2}$ http://shexml.herminiogarcia.com/editor/
+
+${}^{3}$ https://rml.io/yarrrml/matey/#
+
+---
+
+Another case we want to present is Mapeathor [21], a tool that takes mapping rules specified in spreadsheets and transforms them into a mapping in either R2RML, RML or YARRRML. It aims to lower the learning curve of those languages for new users and to ease the mapping writing process. Finally, we highlight tools that provide a set of optimizations on the construction of RDF graphs by exploiting the translation of mapping rules; this is the case of Morph-CSV [22] and FunMap [23]. Morph-CSV first performs a transformation over the tabular data with RML+FnO mappings and CSVW annotations, and outputs a database and R2RML mappings ready to be processed by an R2RML-compliant tool. FunMap takes an RML+FnO mapping, performs the transformation functions indicated, outputs the parsed data, and generates a function-free RML mapping.
+
+The approaches presented are, mainly, examples of peer-to-peer translation for specific uses. The exception is Mapeathor, which abstracts the rules of R2RML, RML and YARRRML into a spreadsheet-based representation, which aligns with the approach of a common interchange language. Even though most of these translation examples involve R2RML or RML, there is no holistic approach in the form of a general translation framework.
+
+## 3. Mapping translation: Techniques
+
+This section presents three proposals to implement a mapping translator service general enough to enable translation among several languages. These proposals are, namely, (1) software-based, (2) construct query-based, and (3) executable mapping-based. These implementations can be applied to any of the language translation approaches presented in Section 2.1.
+
+Software-based translation. It consists of an ad hoc software implementation for each pair of languages to perform bidirectional translations between them. As with any ad hoc solution, it benefits from adjusting specifically to any situation with the (almost) unlimited possibilities that programming languages provide. This is the approach that all the scenarios presented in Section 2.2 have applied, although with unidirectional translations.
+
+Construct query-based translation. This approach takes advantage of the SPARQL query language with CONSTRUCT queries, which return an RDF graph. These queries extract the data by matching the graph patterns of the query (in the WHERE clause) and build the output graph based on a template (in the CONSTRUCT clause). Since many languages are RDF-based, that is, they follow the schema of an ontology and are usually written in the Turtle syntax (e.g., R2RML and its extensions), this approach is applicable to them. It benefits from relying on a well-established standard, as SPARQL is nowadays, and on its compliant engines. However, it would leave out languages with other schemas, such as ShExML, and SPARQL-based languages, unless it also relies on software-based solutions.
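+
+As a minimal sketch of this technique, the following CONSTRUCT query rewrites RML's rml:reference terms into R2RML's rr:column, one of the simple one-to-one correspondences between the two languages; a complete translator would of course need many more such queries:
+
+```sparql
+PREFIX rr:  <http://www.w3.org/ns/r2rml#>
+PREFIX rml: <http://semweb.mmlab.be/ns/rml#>
+
+# Rewrite object maps that use RML's rml:reference so that they use
+# R2RML's rr:column instead; the rest of the mapping is left untouched.
+CONSTRUCT {
+  ?objectMap rr:column ?reference .
+}
+WHERE {
+  ?objectMap rml:reference ?reference .
+}
+```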
+
+---
+
+${}^{4}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv-xml-and-json-files-using-rml
+
+${}^{5}$ https://github.com/sparql-generate/rml-to-sparql-generate
+
+---
+
+Executable mapping-based translation. This last approach makes use of executable mappings, automatically generated from ontology alignments, to perform data translation between the two ontologies [24]. Similarly to the previous approach, this one also makes use of SPARQL CONSTRUCT queries in the executable mappings. While the previous one relies on manual effort to build the queries, this one takes advantage of the ontologies that define RDF-based mapping languages. In addition to the benefits and setbacks of the previous approach, this one may be hindered by the constructs the languages use to build mappings. That is to say, single one-to-one correspondences between ontology entities may not be enough to capture and translate their expressiveness and capabilities, especially for considerably different languages.
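+
+To illustrate, an alignment stating that a class of some mapping-language ontology corresponds to rr:TriplesMap could be compiled into an executable mapping such as the following sketch, where exlang: denotes a purely hypothetical mapping-language ontology:
+
+```sparql
+PREFIX rr:     <http://www.w3.org/ns/r2rml#>
+PREFIX exlang: <http://example.com/other-mapping-language#>
+
+# Executable mapping generated from the correspondence
+#   exlang:MappingRule <-> rr:TriplesMap
+# Richer constructs would require more than one-to-one correspondences.
+CONSTRUCT {
+  ?rule a rr:TriplesMap .
+}
+WHERE {
+  ?rule a exlang:MappingRule .
+}
+```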
+
+The techniques proposed are presented in decreasing order of the manual effort required. The first one is completely ad hoc, and even though it could reuse some modules of the solutions presented in Section 2.2, many more would be needed to provide a complete set of bidirectional translations covering a good number of languages. The second one requires considerable effort to build the queries for RDF-based languages, assuming no extra help from software implementations is needed. The third one could ideally be fully automated, from the creation of the ontology alignments to the generation of the executable mappings. However, the success rate of this approach without manual intervention is not expected to be high, especially for the ontology alignment part when the input ontologies differ considerably from one another or present different constructs (with a different number of elements or differently structured).
+
+## 4. Conclusions
+
+This paper develops the concept of mapping translation, proposed by Corcho et al. [16]. It analyses the possible language translation approaches, reviews the scenarios in which it is being applied, and proposes some implementation techniques to perform it.
+
+There are several possibilities for fully developing a complete mapping translation solution that ensures information preservation, as described in the previous sections. It not only requires choosing the technical implementation according to the available effort and resources but, more importantly, deciding wisely which language translation approach best suits this particular case of mapping languages. As presented previously, we categorize current mapping languages by their schema: RDF-based, SPARQL-based and based on other schemas. All of them have been designed for a basic purpose: describing non-RDF data to allow either materialization or virtualization. Intuitively, we can assume that the rules that the different mappings create can be represented in an abstract, language-independent manner. However, the sometimes large differences among these languages may call this assumption into question. Within their categories, some languages are similar to each other, R2RML and its extensions, for instance. Languages from different groups can be related, such as ShExML and RML, despite some inevitable differences in their features. Others are more unique, such as CSVW. Lastly, the SPARQL-based group is more isolated from the others due to the great possibilities that relying on SPARQL provides. This scenario poses challenges for every language translation approach. Peer-to-peer translation would require a substantial amount of effort for divergent languages. Using families of languages would improve on the previous one, but it would still face several challenges regarding language representation and the number of translator services required. Meanwhile, using a common interchange language would reduce the effort the most, but there is no absolute certainty that such a language could represent them all. Still, some steps have been taken to draft this language ${}^{6}$ , with the base idea that the mapping rules can be abstracted and represented in an ontology-based language.
+
+Even though it is not an easy task, mapping translation is a concept that can only benefit the current landscape of heterogeneous mapping languages, after years of KG construction in which the increasing and heterogeneous emergence of new use cases has motivated the community to keep developing solutions, sometimes ad hoc, sometimes as extensions of standards or widely used languages. Mapping translation has the potential to build bridges between past (but still used) and new solutions to improve interoperability.
+
+## Acknowledgments
+
+The work presented in this paper is partially funded by the Knowledge Spaces project (Grant PID2020-118274RB-I00 funded by MCIN/AEI/10.13039/501100011033) and partially funded by the European Union's Horizon 2020 Research and Innovation Programme through the AURORAL project, Grant Agreement No. 101016854.
+
+## References
+
+[1] A. Hogan, E. Blomqvist, M. Cochez, C. d'Amato, G. D. Melo, C. Gutierrez, S. Kirrane, J. E. L. Gayo, R. Navigli, S. Neumaier, et al., Knowledge graphs, ACM Computing Surveys (CSUR) 54 (2021) 1-37.
+
+[2] A. Poggi, D. Lembo, D. Calvanese, G. De Giacomo, M. Lenzerini, R. Rosati, Linking data to ontologies, Journal on data semantics X (2008) 133-173.
+
+[3] A. Chebotko, S. Lu, F. Fotouhi, Semantics preserving sparql-to-sql translation, Data & Knowledge Engineering 68 (2009) 973-1000.
+
+[4] A. Dimou, M. V. Sande, P. Colpaert, R. Verborgh, E. Mannens, R. Van De Walle, RML: A generic language for integrated RDF mappings of heterogeneous data, in: LDOW, 2014.
+
+[5] ShExML: improving the usability of heterogeneous data mapping languages for first-time users, PeerJ Computer Science 6 (2020) e318. URL: https://peerj.com/articles/cs-318.
+
+[6] S. Das, S. Sundara, R. Cyganiak, R2RML: RDB to RDF Mapping Language, W3C Recommendation 27 September 2012, www.w3.org/TR/r2rml (2012).
+
+[7] M. Lefrançois, A. Zimmermann, N. Bakerally, A SPARQL extension for generating RDF from heterogeneous formats, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10249 LNCS (2017) 35-50.
+
+---
+
+${}^{6}$ https://oeg-upm.github.io/Conceptual-Mapping/index.html
+
+---
+
+[8] A. Cimmino, M. Poveda-Villalón, R. García-Castro, ewot: A semantic interoperability approach for heterogeneous iot ecosystems based on the web of things, Sensors 20 (2020) 822.
+
+[9] M. Kovatsch, R. Matsukura, M. Lagally, T. Kawaguchi, K. Kajimoto, Web of Things (WoT) Architecture, W3C Recommendation 9 April 2020, https://www.w3.org/TR/wot-architecture/ (2020).
+
+[10] J. Tennison, G. Kellogg, I. Herman, Model for tabular data and metadata on the web, W3C Recommendation (2015).
+
+[11] E. Daga, L. Asprino, P. Mulholland, A. Gangemi, Facade-x: an opinionated approach to sparql anything, arXiv preprint arXiv:2106.02361 (2021).
+
+[12] H. García-González, A shexml perspective on mapping challenges: already solved ones, language modifications and future required actions, in: Proceedings of the 2nd International Workshop on Knowledge Graph Construction, 2021.
+
+[13] D. Chaves-Fraga, F. Priyatna, A. Cimmino, J. Toledo, E. Ruckhaus, O. Corcho, Gtfs-madrid-bench: A benchmark for virtual knowledge graph access in the transport domain, Journal of Web Semantics 65 (2020) 100596.
+
+[14] B. De Meester, W. Maroy, A. Dimou, R. Verborgh, E. Mannens, Declarative data transformations for linked data generation: The case of DBpedia, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10250 LNCS (2017) 33-48.
+
+[15] A. Iglesias-Molina, D. Chaves-Fraga, F. Priyatna, O. Corcho, Enhancing the maintainability of the bio2rdf project using declarative mappings., in: SWAT4HCLS, 2019.
+
+[16] O. Corcho, F. Priyatna, D. Chaves-Fraga, Towards a new generation of ontology based data access, Semantic Web 11 (2020) 153-160.
+
+[17] J. Euzenat, H. Stuckenschmidt, The 'family of languages' approach to semantic interoperability, Knowledge transformation for the semantic web 95 (2003) 49.
+
+[18] O. Corcho, A. Gómez-Pérez, A layered approach to ontology translation with knowledge representation, Ph.D. thesis, UPM, 2004.
+
+[19] D. Calvanese, B. Cogrel, S. Komla-Ebri, R. Kontchakov, D. Lanti, M. Rezk, M. Rodriguez-Muro, G. Xiao, Ontop: Answering sparql queries over relational databases, Semantic Web 8 (2017) 471-487.
+
+[20] M. Rodriguez-Muro, M. Rezk, Efficient sparql-to-sql with r2rml mappings, Journal of Web Semantics 33 (2015) 141-169.
+
+[21] A. Iglesias-Molina, L. Pozo-Gilo, D. Doña, E. Ruckhaus, D. Chaves-Fraga, O. Corcho, Mapeathor: Simplifying the specification of declarative rules for knowledge graph construction, in: ISWC (Demos/Industry), 2020.
+
+[22] D. Chaves-Fraga, E. Ruckhaus, F. Priyatna, M.-E. Vidal, O. Corcho, Enhancing virtual ontology based access over tabular data with morph-csv, Semantic Web (2021) 1-34.
+
+[23] S. Jozashoori, D. Chaves-Fraga, E. Iglesias, M.-E. Vidal, O. Corcho, Funmap: Efficient execution of functional mappings for knowledge graph creation, in: International Semantic Web Conference, Springer, 2020, pp. 276-293.
+
+[24] C. R. Rivero, I. Hernández, D. Ruiz, R. Corchuelo, Generating sparql executable mappings to integrate ontologies, in: International Conference on Conceptual Modeling, Springer, 2011, pp. 118-131.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_tex/Initial_manuscript.tex b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7c91b20481204484140db59a423685ae8c9caa28
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HSex2XJK9Zc/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,91 @@
+§ DEVISING MAPPING INTEROPERABILITY WITH MAPPING TRANSLATION
+
+Ana Iglesias-Molina ${}^{1}$ , Andrea Cimmino ${}^{1}$ and Oscar Corcho ${}^{1}$
+
+${}^{1}$ Ontology Engineering Group, Universidad Politécnica de Madrid
+
+§ ABSTRACT
+
+Nowadays, Knowledge Graphs are extensively created using very different techniques, mapping languages among them. The wide variety of use cases, data peculiarities, and potential uses has had a substantial impact on how these languages have been created, extended, and applied. This situation is closely related to the global adoption of these languages and their associated tools. The large number of languages and compliant tools, and the frequent lack of information about how the two combine, lead users to resort to other techniques to construct Knowledge Graphs. Often, users choose to create their own ad hoc programming scripts that suit their needs. This choice is normally less reproducible and maintainable, which ultimately affects the quality of the generated RDF data, particularly in long-term scenarios. We propose mapping translation as an enhancement to the interoperability of existing mapping languages. This position paper analyses the possible language translation approaches, presents the scenarios in which mapping translation is being applied, and discusses how it can be implemented.
+
+§ KEYWORDS
+
+Mapping languages, Ontology Description, Mapping Translation
+
+§ 1. INTRODUCTION
+
+Knowledge Graphs (KG) are increasingly used in academia and industry to represent and manage the increasing amount of data on the Web [1]. A large number of techniques to create KGs have been proposed. These techniques typically follow one of two approaches: RDF materialization, which consists of translating data from one or more heterogeneous sources into RDF; or virtualization (Ontology-Based Data Access) [2], which consists in translating a SPARQL query into one or more equivalent queries that are distributed and executed on the original data source(s) and whose results are transformed back into the SPARQL results format [3]. Both approaches rely on an essential element, a mapping document, which is the key enabler for performing the translations.
+
+Mapping languages represent the relationships between the structure or model of heterogeneous data and an RDF version following an ontology, i.e., the rules on how to translate non-RDF data into RDF. This data can be originally expressed in a variety of formats, such as tabular, JSON, or XML. Due to the heterogeneous nature of data, the wide corpus of techniques, and the specific requirements that some scenarios may impose, an increasing number of mapping languages have been proposed [4, 5]. The differences among them are usually based on three aspects: (a) the focus on one or more particular data formats, e.g., the W3C Recommendation R2RML focuses on SQL tabular data [6]; (b) a specific feature they address, e.g., SPARQL-Generate [7] allows the definition of functions in the mapping for cleaning or linking the generated RDF data; or (c) whether they are designed for a particular technique or scenario with special requirements, e.g., the WoT-mappings [8], which were designed as an extension of the WoT standard [9].
+
+Third International Workshop On Knowledge Graph Construction, Co-located with the ESWC 2022, Crete - 30th May 2022
+
+ana.iglesiasm@upm.es (A. Iglesias-Molina); andreajesus.cimmino@upm.es (A. Cimmino); oscar.corcho@upm.es (O. Corcho)
+
+ORCID: 0000-0001-5375-8024 (A. Iglesias-Molina), 0000-0002-1823-4484 (A. Cimmino), 0000-0002-9260-0753 (O. Corcho). © Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
+
+As a result, the diversity of mapping languages allows the construction of KGs from heterogeneous data sources in many different scenarios. Current mapping languages may be categorized by their schema: RDF-based (e.g., R2RML [6] and its extensions, CSVW [10]), SPARQL-based (e.g., SPARQL-Generate [7], SPARQL-Anything [11]), or based on other schemas (e.g., ShExML [12], Helio mappings ${}^{1}$ ). Nevertheless, the existing techniques usually implement just one mapping language, and sometimes not even the whole language specification [13]. Deciding which language and technique should be used in each scenario becomes a costly task, since the choice of one language may not cover all the required features [14]. Some scenarios require a combination of mapping languages because of their differential features, which entails using different techniques. In many cases, this diversity leads to ad hoc solutions that reduce reproducibility, maintainability, and reusability [15].
+
+The increasing and heterogeneous emergence of new use cases still motivates the community to keep developing solutions that are, more commonly than desired, not compatible with existing ones. This position paper develops the concept of mapping translation, proposed by Corcho et al. [16], which can enhance the interoperability among existing mapping languages and thus improve the user experience of these technologies by allowing communication and understanding among them. This paper presents some approaches for language translation, shows the current situations in which mapping translation is being applied and their benefits, and proposes different techniques to extend it to more languages.
+
+The remainder of this article is structured as follows: Section 2 provides some insights about language translation and the situations in which it is being applied. Section 3 proposes three different techniques to address mapping translation at a larger scale. Finally, Section 4 draws some conclusions about the concepts presented in the paper.
+
+§ 2. MAPPING TRANSLATION: CONTEXT
+
+In this section, we introduce mapping translation by describing some approaches to language translation, and present a set of scenarios in which mapping translation has been applied. We assume the reader is familiar with current mapping languages and their general characteristics.
+
+${}^{1}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv–xml-and-json-files-using-rml
+
+
+Figure 1: Types of language translations (Adapted from [18]).
+
+§ 2.1. APPROACHES TO LANGUAGE TRANSLATION
+
+In the context of language translation, there are several approaches that carry out translations among a set of languages. Depending on the situation at hand, an approach can be advantageous with respect to the other ones. We highlight the following [17]:
+
+Peer-to-peer translation (Fig. 1a) supports ad hoc translation solutions between pairs of languages. This may seem the most straightforward approach, requiring the development of only the translator services needed for the situation at hand, with the possibility of adjusting each of them ad hoc. However, it becomes increasingly infeasible as the number of required translations grows.
+
+Common interchange language (Fig. 1b) uses a language that serves as an intermediary among several languages. This approach reduces the number of translator services that need to be developed and is the most feasible of the three to scale. It involves creating (or, with luck, already having) a language able to represent the expressiveness of all languages, to avoid information loss. Additionally, it implies that there are common patterns shared by the languages independently of their representation, and that an abstract manner of gathering them is possible, which may not be the case for highly heterogeneous languages.
+
+Family of languages (Fig. 1c) considers sets of languages and translations between the representatives of each set. This approach stands out in situations where there are clear subgroups of languages that are similar among themselves but differ from the languages of other groups.
+
+§ 2.2. MAPPING TRANSLATION SCENARIOS
+
+Regarding mapping languages, there are currently some implementations that unidirectionally translate between pairs of mapping languages. ShExML and YARRRML, in their respective online editors ${}^{2,3}$ , enable translation to RML. Another case is when tools implement RML/R2RML mapping translation into the language they are designed to parse; such is the case of Helio ${}^{4}$ and SPARQL-Generate ${}^{5}$ , which translate from RML to their respective languages, and Ontop [19], which translates R2RML into its proprietary language, OBDA mappings [20]. These translations make it possible to extend the outreach of a tool, since they enable using it without the need to learn its specific language, relying instead on one that is widely used and extended, such as R2RML or RML.
+
+${}^{2}$ http://shexml.herminiogarcia.com/editor/
+
+${}^{3}$ https://rml.io/yarrrml/matey/#
+
+Another case we want to present is Mapeathor [21], a tool that takes mapping rules specified in spreadsheets and transforms them into a mapping in either R2RML, RML or YARRRML. It aims to lower the learning curve of those languages for new users and to ease the mapping writing process. Finally, we highlight tools that provide a set of optimizations on the construction of RDF graphs by exploiting the translation of mapping rules; this is the case of Morph-CSV [22] and FunMap [23]. Morph-CSV first performs a transformation over the tabular data with RML+FnO mappings and CSVW annotations, and outputs a database and R2RML mappings ready to be processed by an R2RML-compliant tool. FunMap takes an RML+FnO mapping, performs the transformation functions indicated, outputs the parsed data, and generates a function-free RML mapping.
+
+The approaches presented are, mainly, examples of peer-to-peer translation for specific uses. The exception is Mapeathor, which abstracts the rules of R2RML, RML and YARRRML into a spreadsheet-based representation, which aligns with the approach of a common interchange language. Even though most of these translation examples involve R2RML or RML, there is no holistic approach in the form of a general translation framework.
+
+§ 3. MAPPING TRANSLATION: TECHNIQUES
+
+This section presents three proposals to implement a mapping translator service general enough to enable translation among several languages. These proposals are, namely, (1) software-based, (2) construct query-based, and (3) executable mapping-based. These implementations can be applied to any of the language translation approaches presented in Section 2.1.
+
+Software-based translation. It consists of an ad hoc software implementation for each pair of languages to perform bidirectional translations between them. As with any ad hoc solution, it benefits from adjusting specifically to any situation with the (almost) unlimited possibilities that programming languages provide. This is the approach that all the scenarios presented in Section 2.2 have applied, although with unidirectional translations.
+
+Construct query-based translation. This approach takes advantage of the SPARQL query language with CONSTRUCT queries, which return an RDF graph. These queries extract the data by matching the graph patterns of the query (in the WHERE clause) and build the output graph based on a template (in the CONSTRUCT clause). Since many languages are RDF-based, that is, they follow the schema of an ontology and are usually written in the Turtle syntax (e.g., R2RML and its extensions), this approach is applicable to them. It benefits from relying on a well-established standard, as SPARQL is nowadays, and on its compliant engines. However, it would leave out languages with other schemas, such as ShExML, and SPARQL-based languages, unless it also relies on software-based solutions.
+
+${}^{4}$ https://github.com/oeg-upm/helio/wiki/Streamlined-use-cases#materialising-rdf-from-csv-xml-and-json-files-using-rml
+
+${}^{5}$ https://github.com/sparql-generate/rml-to-sparql-generate
+
+Executable mapping-based translation. This last approach makes use of executable mappings, automatically generated from ontology alignments, to perform data translation between the two ontologies [24]. Similarly to the previous approach, this one also makes use of SPARQL CONSTRUCT queries in the executable mappings. While the previous one relies on manual effort to build the queries, this one takes advantage of the ontologies that define RDF-based mapping languages. In addition to the benefits and setbacks of the previous approach, this one may be hindered by the constructs the languages use to build mappings. That is to say, single one-to-one correspondences between ontology entities may not be enough to capture and translate their expressiveness and capabilities, especially for considerably different languages.
+
+The techniques proposed are presented in decreasing order of the manual effort required. The first one is completely ad hoc, and even though it could reuse some modules of the solutions presented in Section 2.2, many more would be needed to provide a complete set of bidirectional translations covering a good number of languages. The second one requires considerable effort to build the queries for RDF-based languages, assuming no extra help from software implementations is needed. The third one could ideally be fully automated, from the creation of the ontology alignments to the generation of the executable mappings. However, the success rate of this approach without manual intervention is not expected to be high, especially for the ontology alignment part when the input ontologies differ considerably from one another or present different constructs (with a different number of elements or differently structured).
+
+§ 4. CONCLUSIONS
+
+This paper develops the concept of mapping translation, proposed by Corcho et al. [16]. It analyses the possible language translation approaches, reviews the scenarios in which it is being applied, and proposes some implementation techniques to perform it.
+
+There are several possibilities for fully developing a complete mapping translation solution that ensures information preservation, as described in the previous sections. It not only requires choosing the technical implementation according to the available effort and resources but, more importantly, deciding wisely which language translation approach best suits this particular case of mapping languages. As presented previously, we categorize current mapping languages by their schema: RDF-based, SPARQL-based and based on other schemas. All of them have been designed for a basic purpose: describing non-RDF data to allow either materialization or virtualization. Intuitively, we can assume that the rules that the different mappings create can be represented in an abstract, language-independent manner. However, the sometimes large differences among these languages may call this assumption into question. Within their categories, some languages are similar to each other, R2RML and its extensions, for instance. Languages from different groups can be related, such as ShExML and RML, despite some inevitable differences in their features. Others are more unique, such as CSVW. Lastly, the SPARQL-based group is more isolated from the others due to the great possibilities that relying on SPARQL provides. This scenario poses challenges for every language translation approach. Peer-to-peer translation would require a substantial amount of effort for divergent languages. Using families of languages would improve on the previous one, but it would still face several challenges regarding language representation and the number of translator services required. Meanwhile, using a common interchange language would reduce the effort the most, but there is no absolute certainty that such a language could represent them all. Still, some steps have been taken to draft this language ${}^{6}$ , with the base idea that the mapping rules can be abstracted and represented in an ontology-based language.
+
+Even though it is not an easy task, mapping translation is a concept that can only benefit the current landscape of heterogeneous mapping languages, after years of KG construction in which the increasing and heterogeneous emergence of new use cases has motivated the community to keep developing solutions, sometimes ad hoc, sometimes as extensions of standards or widely used languages. Mapping translation has the potential to build bridges between past (but still used) and new solutions to improve interoperability.
+
+§ ACKNOWLEDGMENTS
+
+The work presented in this paper is partially funded by the Knowledge Spaces project (Grant PID2020-118274RB-I00 funded by MCIN/AEI/10.13039/501100011033) and partially funded by the European Union's Horizon 2020 Research and Innovation Programme through the AURORAL project, Grant Agreement No. 101016854.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7d5e752dde451a4d9aa40f7977cd11e87fafd96d
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,259 @@
+# Implementation-independent Knowledge Graph Construction Workflows using FnO Composition
+
+Gertjan De Mulder and Ben De Meester
+
+IDLab, Department of Electronics and Information Systems,
+
+Ghent University - imec, Technologiepark-Zwijnaarde 122, 9052 Ghent, Belgium
+
+{firstname.lastname}@ugent.be
+
+Abstract. Knowledge Graph construction is typically a task within larger workflows, with a tight coupling between the abstract workflow and its execution. Mapping languages increase the interoperability and reproducibility of the mapping process; however, this should be extended to the entire Knowledge Graph construction workflow. In this paper, we introduce an interoperable and reproducible solution for defining Knowledge Graph construction workflows leveraging Semantic Web technologies. We describe how a data flow workflow can be described interoperably (i.e., independently from the underlying technology stack) and reproducibly (i.e., with detailed provenance) by composing semantic abstract function descriptions, and how such a semantic workflow can be automatically executed across technology stacks. We demonstrate that composing functions using the Function Ontology allows for functional descriptions of entire workflows, automatically executable using a Function Ontology Handler implementation. The semantic descriptions allow for interoperable workflows, the alignment with P-PLAN and PROV-O allows for reproducibility, and the mapping to concrete implementations allows for automatic execution.
+
+## 1 Introduction
+
+Knowledge Graph (KG) construction - i.e., RDF graph construction - involves computational tasks on data, and is typically a task within larger (business or scientific) workflows. The construction of a KG itself can also be considered an overarching and more complex task that is composed of smaller tasks, e.g., extracting data from a database, mapping it to RDF, and publishing it using a web API (i.e., Extract-Transform-Load or ETL). Such a process - i.e., a set of tasks that can be automated - can be facilitated using a workflow system.
+
+When a tight coupling between the abstract workflow and its execution exists, interoperability diminishes, and composing tasks into a workflow introduces challenges in connecting the tools that implement each task. Similar issues arise when integrating a KG construction task into a larger workflow, for example, when connecting a mapping tool implemented in Java with a web API tool implemented in JavaScript.
+
+Mapping languages increase the interoperability and reproducibility of the mapping process; however, this should be extended to the entire KG construction workflow. The lack of interoperability inhibits the use of different tools for a task, making it harder to adapt to changing requirements and constraints. For example, Tool A might initially suffice for the RDF-generation task given the size of the source data. Later on, the data size might become unmanageable for Tool A. Tool B is available and can handle larger data sets; however, the lack of interoperability prevents flexibly switching from one tool to the other.
+
+In this paper, we represent tasks within a workflow through the composition of implementation-independent semantic function descriptions. By providing interoperability between tasks and the tools that execute them, users can focus on the overarching task for which the workflow was created, for example, managing the KG construction life cycle using different mapping processors that generate RDF, and different endpoints on which the RDF is published.
+
+Section 2 presents related work. In Section 3, we show how interoperability between tasks and tools within a workflow can be achieved through the composition of declarative function descriptions. We showcase this in Section 4 by leveraging the Function Ontology (FnO) [7] to obtain a data flow workflow that is decoupled from the tools that are used, thereby illustrating the flexibility in choosing the technology to be used for each task. In Section 5, we demonstrate the resulting workflow composition in FnO. We conclude in Section 6 and give additional pointers for future work.
+
+## 2 Related work
+
+In this section, we discuss existing RDF graph construction workflows, and workflow systems' interoperability and reproducibility characteristics.
+
+Compared to scripting, using a mapping language improves the interoperability of the KG construction process [6]. Mapping languages can provide features to cover many steps within the KG construction process, i.e., not only specify how to map to RDF, but also how to extract data from different data sources [8], and how to publish using various methods [16]. Even when mapping languages provide enough features to be deemed end-to-end, the execution of a KG construction exists within a wider context, e.g., being part of a Knowledge Graph Lifecycle [4], or as a collection of subtasks to allow for optimization [13]. As such, even though KG construction rules can be described interoperably using, e.g., a mapping language, its position within the wider and narrower tasks makes it interpretable as being (a part of) a workflow.
+
+Flexible workflows are needed, as requirements and constraints are subject to change. Thus, interoperability is essential for tasks designed in one system to be used by another [14]. The state of the art puts forward the following characteristics for interoperability: 1) a declarative paradigm, 2) separation of description and implementation, and 3) a standardized language.
+
+Statements within an imperative paradigm are exact instructions of what needs to be done and inherently define the control flow: the exact order in which a program must be executed. An imperative paradigm is suitable for processes that are unlikely to change, however, a declarative approach is recommended when workflows resemble processes with changing requirements and constraints that require them to be executed in different ways. Declarative paradigms can be used to represent data flow, i.e., the data dependencies between tasks, and are more robust to change as they describe what needs to be done, instead of how [1].
+
+Interoperability diminishes when there is a tight coupling between tasks and implementations [12], e.g., when using ad hoc approaches. Thus, the separation of description and implementation is crucial to interoperability [15].
+
+The use of standards is essential to achieve interoperability in heterogeneous environments. Several workflow specifications exist, and they can be divided into two groups. On the one hand, there are executable specifications, such as the Common Workflow Language (CWL); on the other hand, there are descriptive specifications, such as P-PLAN and the Open Provenance Model for Workflows (OPMW). CWL allows for describing a computational workflow and the command-line tools used for executing its tasks [3], with a tight coupling between tasks and implementations. P-PLAN extends the W3C standard PROV. It allows for describing workflow steps and linking them to execution traces, and was applied in projects that focus on interoperability [10] and reproducibility [11]. OPMW is an extension of P-PLAN [10]: a simple interchange format for representing workflows at different levels of granularity (i.e., abstract model, instances, executions). These specifications are either focused on being executable or on being descriptive. To the best of our knowledge, however, no specification exists that supports both.
+
+The Function Ontology (FnO) [7] presents a similar approach towards interoperable data transformations using Semantic Web technologies. An implementation-independent function description allows for a decoupled architecture that separates the definition from its execution, and the inputs and outputs of a function are explicitly described. Furthermore, a recent update to FnO includes composition: composing a new function from other functions.
+
+Reproducibility is another key characteristic within workflows, as it requires the tasks to be described in sufficient detail so that they can be reproduced in different environments [11]. In order to be reproducible by other scientists, provenance information including the execution details is required [2].
+
+## 3 Method and Implementation
+
+In this paper we put forward our approach towards interoperable and reproducible workflows through implementation-independent and declarative descriptions, allowing the flexibility of tasks being implemented by different tools. We discussed several existing description languages for defining workflows. The complexity of a language increases with the constructs that are supported. However, it appears that simplicity often pays greater dividends when considering interoperability. In that regard, we decided to look for lightweight, yet flexible and interoperable, solutions.
+
+The previous section shows that to have interoperable and reproducible workflows, we need a declarative paradigm that separates description from implementation in a standardized language, and allows for generating provenance information for individual tasks. In this section we elaborate on the decisions that were made to accommodate these characteristics.
+
+We represent a workflow as a composition of tasks, and a task as a function which can have zero or more inputs and zero or more outputs. Being uniquely identifiable and unambiguously defined increases the reusability of tasks across workflows, as they are universally discoverable and linkable [7].
+
+We make the simplification that tasks can only be executed sequentially and currently do not consider control flow constructs other than a sequence. The data flow between tasks within a composition is represented by input and output mappings between functions. Such a composition mapping describes how an input or output of one function is linked to the input or output of another function. For example, within a KG construction workflow this is needed to connect the output of an RDF generation task to the input of the subsequent publishing task.
+
+We consider the Function Ontology (FnO) as a model to describe functions and function compositions to represent tasks and workflows. Its simple model aligns with our goal without preventing us from adding additional complexity, such as mapping to concrete implementations and composition of functions. Both additions are part of the Function Ontology specification ${}^{1}$ .
+
+The addition of composition to the FnO specification allows us to align function compositions with workflows as defined in P-PLAN [9], complementary to the existing alignment between FnO and PROV-O [5]. Several related works used or extended P-PLAN and led to the creation of several applications. Consequently, by aligning with P-PLAN we benefit from existing work that provides interoperability with several prominent workflow systems [10]. We use FnO because it allows for linking functions to actual implementations, hence, providing sufficient detail to be directly executed.
+
+Therefore, by mapping workflows defined as function compositions to workflow descriptions in P-PLAN, we can benefit from those applications, such as the workflow mining, browsing, and provenance visualization solutions discussed in [10].
+
+The following shows how FnO and P-PLAN align, and Listing 1.1 shows how to construct P-PLAN descriptions from FnO compositions:
+
+- fno:Execution is-a p-plan:Step
+
+- fnoc:Composition is-a p-plan:Plan
+
+- fno:Parameter is-a p-plan:Variable
+
+- fno:Output is-a p-plan:Variable
+
+- fno:expects is-a p-plan:isInputVarOf
+
+- fno:returns is-a p-plan:isOutputVarOf
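+
+For readers who prefer triples, this alignment could be written down as RDFS axioms roughly as follows (a sketch; the fnoc namespace IRI is assumed from the FnO specification):
+
+```turtle
+@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix p-plan: <http://purl.org/net/p-plan#> .
+@prefix fno:    <https://w3id.org/function/ontology#> .
+@prefix fnoc:   <https://w3id.org/function/vocabulary/composition#> .
+
+# One possible encoding of the is-a relations listed above.
+fno:Execution    rdfs:subClassOf    p-plan:Step .
+fnoc:Composition rdfs:subClassOf    p-plan:Plan .
+fno:Parameter    rdfs:subClassOf    p-plan:Variable .
+fno:Output       rdfs:subClassOf    p-plan:Variable .
+fno:expects      rdfs:subPropertyOf p-plan:isInputVarOf .
+fno:returns      rdfs:subPropertyOf p-plan:isOutputVarOf .
+```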
+
+---
+
+${}^{1}$ https://w3id.org/function/spec/
+
+---
+
+```sparql
+# Prefix IRIs restored for readability; the fno and fnoc IRIs follow the FnO
+# specification referenced in footnote 1.
+PREFIX p-plan: <http://purl.org/net/p-plan#>
+PREFIX fnoc:   <https://w3id.org/function/vocabulary/composition#>
+PREFIX fno:    <https://w3id.org/function/ontology#>
+PREFIX rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+
+CONSTRUCT {
+  ?s a p-plan:Plan .
+  ?exX a p-plan:Step ; p-plan:isStepOfPlan ?s .
+  ?exY a p-plan:Step ; p-plan:isStepOfPlan ?s ; p-plan:isPrecededBy ?exX .
+}
+WHERE {
+  ?s rdf:type fnoc:Composition ;
+     fnoc:composedOf [ fnoc:mapFrom [ fnoc:constituentFunction ?fx ;
+                                      fnoc:functionOutput      ?fxOut ] ;
+                       fnoc:mapTo   [ fnoc:constituentFunction ?fy ;
+                                      fnoc:functionParameter   ?fyParameter ] ] .
+  ?exX fno:executes ?fx .
+  ?exY fno:executes ?fy .
+}
+```
+
+Listing 1.1. Pseudo-SPARQL query for constructing the precedence relations in P-PLAN from the CompositionMappings in FnO.
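+
+Applied to a composition such as the ETL example of Section 5 (Listing 1.3), and assuming two hypothetical execution resources fns:exGenerateRDF and fns:exPublish that fno:executes the two constituent functions, the query would construct P-PLAN triples along these lines (the fns: IRI is a placeholder):
+
+```turtle
+@prefix p-plan: <http://purl.org/net/p-plan#> .
+@prefix fns:    <http://example.com/functions#> .
+
+# Sketch of the constructed plan; the execution IRIs are invented.
+fns:ETLComposition a p-plan:Plan .
+fns:exGenerateRDF  a p-plan:Step ; p-plan:isStepOfPlan fns:ETLComposition .
+fns:exPublish      a p-plan:Step ; p-plan:isStepOfPlan fns:ETLComposition ;
+                   p-plan:isPrecededBy fns:exGenerateRDF .
+```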
+
+## 4 Use case
+
+In this section we discuss POSH (Predictive Optimized Supply Chain): a motivating use case showcasing the need for an interoperable KG construction workflow.
+
+POSH is an imec.icon research project in which methods and software solutions are researched that leverage data to optimize integrated procurement and inventory management strategies. A data integration and quality framework is deemed necessary to increase the accuracy and reliability of supply chain data that has been collected from heterogeneous data sources (suppliers, customers, service providers, etc.). Within POSH, we developed a semantically-enhanced knowledge integration framework that uses various data repositories and external (meta)data to provide a clear overview of the current state of the supply chain and the necessary inputs for the prediction, optimization and decision support methods.
+
+To this end, a KG is generated from the heterogeneous supply chain data and subsequently exposed through a triple store endpoint. This enables our partners to take advantage of running queries against a uniform data model without being burdened by the heterogeneous sources from which it is constituted, and to focus on designing the algorithms for optimizing the supply chain. However, not all data was made available from the start; it was rather added progressively, and the requirements, together with the mapping rules that satisfy them, changed in parallel. Hence, the KG generation tasks need to be executed iteratively to incorporate the changes, which can become time-consuming when done manually. To iteratively accommodate changing requirements and constraints, an implementation-independent workflow system was needed. Within POSH, we applied our method to provide a workflow system flexible enough to adapt to different technology stacks.
+
+## 5 Demonstration
+
+In this section we demonstrate a working example of an ETL workflow comprising two tasks: i) generating RDF; and ii) publishing the generated RDF. Due to space restrictions only excerpts of the descriptions are shown.
+
+First, we define the task of generating RDF as a function that takes the URI of a mapping and the URI to which the result should be written. We make use of the RML mapping language to have an interoperable RDF generation step. Secondly, we define the publishing task as a function which takes the URI of the generated RDF data as input parameter and outputs the URI of the endpoint through which it is published. These descriptions are shown in Listing 1.2.
+
+```turtle
+# Prefix IRIs restored; fns: is a placeholder example namespace.
+@prefix fno: <https://w3id.org/function/ontology#> .
+@prefix fns: <http://example.com/functions#> .
+
+# RDF generation: expects the URI of a mapping and the URI to write the
+# result to, and returns the URI of the generated RDF.
+fns:generateRDF a fno:Function ;
+    fno:expects ( fns:fpathMappingParameter fns:fpathOutputParameter ) ;
+    fno:returns ( fns:returnOutput ) .
+
+# Publishing: expects the URI of the generated RDF and returns the URI of
+# the endpoint through which it is published.
+fns:publish a fno:Function ;
+    fno:expects ( fns:inputRDFParameter ) ;
+    fno:returns ( fns:returnOutput ) .
+```
+
+Listing 1.2. Task descriptions in FnO
+
+We describe an overarching ETL task as the composition of these two functions, illustrated in Listing 1.3. We define how the data flows between the composed functions using fnoc:CompositionMapping. fnoc:Composition links the output of the first task to the second task by means of a fnoc:CompositionMapping. Note that, using composition, we are able to describe the workflow at multiple levels of abstraction. In analogy with an ETL workflow, for example, the highest level of abstraction represents the three Extract, Transform, and Load tasks. The second level can contain more specific, yet abstract, tasks that are required to fulfill each of the three Extract, Transform, and Load tasks. Depending on the complexity of each task, it can be described further in a lower level of abstraction.
+
+```turtle
+@prefix fno:  <https://w3id.org/function/ontology#> .
+@prefix fnoc: <https://w3id.org/function/vocabulary/composition#> .
+@prefix fns:  <http://example.com/functions#> .
+
+# The overarching ETL task: expects the mapping URI and the output URI,
+# and returns the URI of the published endpoint.
+fns:ETL a fno:Function ;
+    fno:expects ( fns:fpathMappingParameter fns:fpathOutputParameter ) ;
+    fno:returns ( fns:returnOutput ) .
+
+fns:ETLComposition a fnoc:Composition ;
+    fnoc:composedOf
+        # Pass the ETL mapping parameter on to generateRDF.
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
+                         fnoc:functionParameter fns:fpathMappingParameter ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:generateRDF ;
+                         fnoc:functionParameter fns:fpathMappingParameter ] ] ,
+        # Pass the ETL output-path parameter on to generateRDF.
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
+                         fnoc:functionParameter fns:fpathOutputParameter ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:generateRDF ;
+                         fnoc:functionParameter fns:fpathOutputParameter ] ] ,
+        # Feed the generated RDF into the publish task.
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:generateRDF ;
+                         fnoc:functionOutput fns:returnOutput ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:publish ;
+                         fnoc:functionParameter fns:inputRDFParameter ] ] ,
+        # The publish output is the output of the overall ETL function.
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:publish ;
+                         fnoc:functionOutput fns:returnOutput ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:ETL ;
+                         fnoc:functionOutput fns:returnOutput ] ] .
+```
+
+Listing 1.3. ETL Workflow description using FnO composition
+
+We created a proof-of-concept Function Handler that automatically executes these descriptions using different implementations, available at https://github.com/FnOio/function-handler-js/tree/kgc-etl. Furthermore, we provide tests ${}^{2}$ in which we verify the execution sequence of a function composition, and demonstrate the interoperability through function compositions that resemble a KG construction workflow in which the RDF-generation task can be implemented by different tools.
+
+## 6 Conclusion
+
+Declarative function descriptions, and compositions thereof, allow us to define workflows that are decoupled from the execution environment. The explicit semantics allow for the unambiguous definition of inputs, outputs and implementations, and hence for automatically determining the functions that can be used to execute a task. The alignment with PROV allows for a reproducible workflow, as both tasks and execution details are provided, which makes it possible to determine exactly which functions were applied throughout the execution of the workflow.
+
+Defining a workflow through compositions allows for different levels of abstraction. When rapid prototyping is required, only high-level tasks need to be described. As requirements become more concrete, a high-level task can be described in greater detail as a composition of more fine-grained tasks.
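+
+For example, the RDF-generation task of Listing 1.2 could itself be refined into a composition of two finer-grained steps. The sketch below reuses the composition constructs of Listing 1.3; fns:parseMapping, fns:executeMapping, and fns:parsedMappingParameter are invented placeholders:
+
+```turtle
+@prefix fno:  <https://w3id.org/function/ontology#> .
+@prefix fnoc: <https://w3id.org/function/vocabulary/composition#> .
+@prefix fns:  <http://example.com/functions#> .
+
+# Hypothetical refinement of fns:generateRDF into parse and execute steps.
+fns:generateRDFComposition a fnoc:Composition ;
+    fnoc:composedOf
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:generateRDF ;
+                         fnoc:functionParameter fns:fpathMappingParameter ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:parseMapping ;
+                         fnoc:functionParameter fns:fpathMappingParameter ] ] ,
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:parseMapping ;
+                         fnoc:functionOutput fns:returnOutput ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:executeMapping ;
+                         fnoc:functionParameter fns:parsedMappingParameter ] ] .
+```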
+
+These various levels of abstraction also allow for various levels of provenance information and thus various levels of reproducibility. For example, at one end of the spectrum, a function can be implemented by a command-line tool: no provenance information is available about the transformations that have been applied to produce the output. At the other end of the spectrum, a task can be described as a (nested) composition of fine-grained functions: provenance information is available down to the level of atomic functions.
+
+For future work, we see a mapping language as a way to describe compositions of transformation tasks. By representing, e.g., a Triples Map in RML as a composition of data and schema transformation tasks, we can provide insights into what a mapping does, and in what order. These insights could help to provide optimization strategies to such engines.
+
+## References
+
+1. van der Aalst, W.M.P., Pesic, M., Schonenberg, H.: Declarative workflows: Balancing between flexibility and support. Computer Science - Research and Development (2), 99-113 (2009)
+
+---
+
+${}^{2}$ https://github.com/FnOio/function-handler-js/blob/kgc-etl/src/FunctionHandler.test.ts
+
+---
+
+2. Barker, A., van Hemert, J.: Scientific workflow: A survey and research directions. In: Parallel Processing and Applied Mathematics. pp. 746-753 (2008)
+
+3. Crusoe, M., Abeln, S., Iosup, A., Amstutz, P., Chilton, J., Tijanić, N., Ménager, H., Soiland-Reyes, S., Goble, C.: Methods included: Standardizing computational reuse and portability with the common workflow language. arXiv.org pp. 1-11 (2021)
+
+4. Şimşek, U., Angele, K., Kärle, E., Opdenplatz, J., Sommer, D., Umbrich, J., Fensel, D.: Knowledge Graph Lifecycle: Building and Maintaining Knowledge graphs. In: Proceedings of the ${2}^{\text{nd }}$ International Workshop on Knowledge Graph Construction co-located with ${18}^{\text{th }}$ Extended Semantic Web Conference (ESWC 2021) (2021)
+
+5. De Meester, B., Dimou, A., Verborgh, R., Mannens, E.: Detailed Provenance Capture of Data Processing. In: Proceedings of the First Workshop on Enabling Open Semantic Science (SemSci). pp. 31-38 (2017)
+
+6. De Meester, B., Heyvaert, P., Verborgh, R., Dimou, A.: Mapping language analysis of comparative characteristics. In: Joint Proceedings of the ${1}^{\text{st }}$ International Workshop on Knowledge Graph Building and ${1}^{\text{st }}$ International Workshop on Large Scale RDF Analytics co-located with ${16}^{\text{th }}$ Extended Semantic Web Conference (ESWC). pp. 37-45 (2019)
+
+7. De Meester, B., Seymoens, T., Dimou, A., Verborgh, R.: Implementation-independent Function Reuse. Future Generation Computer Systems pp. 946-959 (2020)
+
+8. Dimou, A., Verborgh, R., Sande, M.V., Mannens, E., de Walle, R.V.: Machine-interpretable dataset and service descriptions for heterogeneous data access and retrieval. In: Proceedings of the ${11}^{\text{th }}$ International Conference on Semantic Systems - SEMANTICS '15 (2015)
+
+9. Garijo, D., Gil, Y.: The P-PLAN Ontology. Tech. rep., Ontology Engineering Group (2014), http://purl.org/net/p-plan#
+
+10. Garijo, D., Gil, Y., Corcho, O.: Towards workflow ecosystems through semantic and standard representations. In: ${2014}{9}^{\text{th }}$ Workshop on Workflows in Support of Large-Scale Science. pp. 94-104 (2014)
+
+11. Gil, Y., Garijo, D., Knoblock, M., Deng, A., Adusumilli, R., Ratnakar, V., Mallick, P.: Improving Publication and Reproducibility of Computational Experiments through Workflow Abstractions. In: K-CAP Workshops (2017)
+
+12. Goble, C., Cohen-Boulakia, S., Soiland-Reyes, S., Garijo, D., Gil, Y., Crusoe, M.R., Peters, K., Schober, D.: FAIR computational workflows. Data Intelligence (1-2), 108-121 (2020)
+
+13. Jozashoori, S., Vidal, M.E.: MapSDI: A scaled-up semantic data integration framework for knowledge graph creation. In: On the Move to Meaningful Internet Systems: OTM 2019 Conferences. pp. 58-75 (2019)
+
+14. Plankensteiner, K., Montagnat, J., Prodan, R.: Iwir: A language enabling portability across grid workflow systems. In: Proceedings of the ${6}^{\text{th }}$ workshop on Workflows in support of large-scale science - WORKS '11. pp. 97-106 (2011)
+
+15. Ferreira da Silva, et al.: Workflows Community Summit: Bringing the Scientific Workflows Community Together. Tech. rep. (2021)
+
+16. Van Assche, D., Haesendonck, G., De Mulder, G., Delva, T., Heyvaert, P., De Meester, B., Dimou, A.: Leveraging Web of Things W3C Recommendations for Knowledge Graphs Generation. In: Web Engineering. pp. 337-352. Lecture Notes in Computer Science, Springer (2021)
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_tex/Initial_manuscript.tex b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..e60ad350586346d253f9aa848563bebfd07e2f73
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HYWx0sLUYW9/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,203 @@
+§ IMPLEMENTATION-INDEPENDENT KNOWLEDGE GRAPH CONSTRUCTION WORKFLOWS USING FNO COMPOSITION
+
+Gertjan De Mulder and Ben De Meester
+
+IDLab, Department of Electronics and Information Systems,
+
+Ghent University - imec, Technologiepark-Zwijnaarde 122, 9052 Ghent, Belgium
+
+{firstname.lastname}@ugent.be
+
+Abstract. Knowledge Graph construction is typically a task within larger workflows, with a tight coupling between the abstract workflow and its execution. Mapping languages increase the interoperability and reproducibility of the mapping process; however, this should be extended to the entire Knowledge Graph construction workflow. In this paper, we introduce an interoperable and reproducible solution for defining Knowledge Graph construction workflows leveraging Semantic Web technologies. We describe how a data flow workflow can be described interoperably (i.e., independently of the underlying technology stack) and reproducibly (i.e., with detailed provenance) by composing abstract semantic function descriptions, and how such a semantic workflow can be automatically executed across technology stacks. We demonstrate that composing functions using the Function Ontology allows for functional descriptions of entire workflows, automatically executable using a Function Ontology Handler implementation. The semantic descriptions allow for interoperable workflows, the alignment with P-PLAN and PROV-O allows for reproducibility, and the mapping to concrete implementations allows for automatic execution.
+
+§ 1 INTRODUCTION
+
+Knowledge Graph (KG) construction - i.e., RDF graph construction - involves computational tasks on data, and is typically a task within larger (business or scientific) workflows. The construction of a KG itself can also be considered an overarching and more complex task that is composed of smaller tasks, e.g., extracting data from a database, mapping it to RDF, and publishing it using a web API (i.e., Extract-Transform-Load or ETL). Such a process - i.e., a set of tasks that can be automated - can be facilitated using a workflow system.
+
+When a tight coupling between the abstract workflow and its execution exists, interoperability diminishes and composing tasks into a workflow introduces challenges in connecting the tools that implement a task. Similar issues arise when integrating a KG construction task into a larger workflow, for example, connecting a mapping tool implemented in Java to a web API tool implemented in JavaScript.
+
+Mapping languages increase interoperability and reproducibility of the mapping process; however, this should be extended to the entire KG construction workflow. The lack of interoperability inhibits the use of different tools for a task, making it harder to adapt to changing requirements and constraints. For example, Tool A might initially suffice for the RDF-generation task given the size of the source data. Later on, the data size might become unmanageable for Tool A. A Tool B may be available that can handle larger data sets; however, the lack of interoperability prevents flexibly switching from one tool to the other.
+
+In this paper, we represent tasks within a workflow through the composition of implementation-independent semantic function descriptions. By providing interoperability between tasks and the tools that execute them, users can focus on the overarching task for which the workflow was created, for example, managing the KG construction life cycle using different mapping processors that generate RDF, and different endpoints on which the RDF is published.
+
+Section 2 presents related work. In Section 3, we show how interoperability between tasks and tools within a workflow can be achieved through the composition of declarative function descriptions. We showcase this in Section 4 by leveraging the Function Ontology (FnO) [7] to obtain a data flow workflow that is decoupled from the tools that are used, thereby illustrating the flexibility in choosing the technology to be used for each task. In Section 5, we demonstrate the resulting workflow composition in FnO. We conclude in Section 6 and give additional pointers for future work.
+
+§ 2 RELATED WORK
+
+In this section, we discuss existing RDF graph construction workflows, and workflow systems' interoperability and reproducibility characteristics.
+
+Compared to scripting, using a mapping language improves interoperability of the KG construction process [6]. Mapping languages can provide features to cover many steps within the KG construction process, i.e., not only specify how to map to RDF, but also how to extract data from different data sources [8], and how to publish using various methods [16]. Even when mapping languages provide enough features to be deemed end-to-end, executing a KG construction happens within a wider context, e.g., being part of a Knowledge Graph Lifecycle [4], or as a collection of subtasks to allow for optimization [13]. As such, even though KG construction rules can be described interoperably using, e.g., a mapping language, its position within the wider and narrower tasks makes it interpretable as being (a part of) a workflow.
+
+Flexible workflows are needed, as requirements and constraints are subject to change. Thus, interoperability is essential for tasks designed in one system to be used by another [14]. The state of the art puts forward the following characteristics for interoperability: 1) declarative paradigm, 2) separation of description and implementation, and 3) standardized language.
+
+Statements within an imperative paradigm are exact instructions of what needs to be done and inherently define the control flow: the exact order in which a program must be executed. An imperative paradigm is suitable for processes that are unlikely to change, however, a declarative approach is recommended when workflows resemble processes with changing requirements and constraints that require them to be executed in different ways. Declarative paradigms can be used to represent data flow, i.e., the data dependencies between tasks, and are more robust to change as they describe what needs to be done, instead of how [1].
+
+Interoperability diminishes when there is a tight coupling between tasks and implementations [12], e.g., when using ad hoc approaches. Thus, the separation of description and implementation is crucial to interoperability [15].
+
+The use of standards is essential to achieve interoperability in heterogeneous environments. Several workflow specifications exist, and can be divided into two groups. On the one hand, there are executable specifications, such as the Common Workflow Language (CWL), and on the other hand, descriptive specifications, such as P-PLAN and the Open Provenance Model for Workflows (OPMW). CWL allows for describing a computational workflow and the command-line tools used for executing its tasks [3], with a tight coupling between tasks and implementations. P-PLAN extends the W3C standard PROV. It allows for describing workflow steps and linking them to execution traces, and was applied in projects that focus on interoperability [10] and reproducibility [11]. OPMW is an extension of P-PLAN [10]: a simple interchange format for representing workflows at different levels of granularity (i.e., abstract model, instances, executions). These specifications are either focused on being executable or descriptive. To the best of our knowledge, however, no specification exists that supports both.
+
+The Function Ontology (FnO) [7] presents a similar approach towards interoperable data transformations using Semantic Web technologies. An implementation-independent function description allows for a decoupled architecture that separates the definition from its execution, and the inputs and outputs of a function are explicitly described. Furthermore, a recent update to FnO includes composition: composing a new function from other functions.
+
+Reproducibility is another key characteristic within workflows, as it requires the tasks to be described in sufficient detail so that they can be reproduced in different environments [11]. In order to be reproducible by other scientists, provenance information including the execution details is required [2].
+
+§ 3 METHOD AND IMPLEMENTATION
+
+In this paper we put forward our approach towards interoperable and reproducible workflows through implementation-independent and declarative descriptions, allowing tasks to be flexibly implemented by different tools. We discussed several existing description languages for defining workflows. The complexity of the language increases with the constructs that are supported. However, it appears that simplicity often pays greater dividends when considering interoperability. In that regard, we decided to look for lightweight - yet flexible and interoperable - solutions.
+
+The previous section shows that to obtain an interoperable and reproducible workflow, we need a declarative paradigm that separates description from implementation in a standardized language, and allows for generating provenance information for individual tasks. In this section we elaborate on the decisions that were made to accommodate these characteristics.
+
+We represent a workflow as a composition of tasks, and a task as a function which can have zero or more inputs and zero or more outputs. Being uniquely identifiable and unambiguously defined increases the reusability of tasks across workflows, as they are universally discoverable and linkable [7].
+
+We make the simplification that tasks can only be executed sequentially and currently do not consider control flow constructs other than a sequence. The data flow between tasks within a composition is represented by input and output mappings between functions. Such a composition mapping describes how an input or output of one function is linked to the input or output of another function. For example, within a KG construction workflow this is needed to connect the output of an RDF generation task to the input of the subsequent publishing task.
+
+We consider the Function Ontology (FnO) as a model to describe functions and function compositions to represent tasks and workflows. Its simple model aligns with our goal without preventing us from adding complexity such as mappings to concrete implementations and compositions of functions. Both additions are part of the Function Ontology specification ${}^{1}$ .
+
+The addition of composition to the FnO specification allows us to align function compositions with workflows as defined in P-PLAN [9], complementary to the existing alignment between FnO and PROV-O [5]. Several related works used or extended P-PLAN and led to the creation of several applications. Consequently, by aligning with P-PLAN we benefit from existing work that provides interoperability with several prominent workflow systems [10]. We use FnO because it allows for linking functions to actual implementations, hence, providing sufficient detail to be directly executed.
+
+Therefore, by mapping the workflows defined as function compositions to workflow descriptions in P-PLAN, we can benefit from those applications, such as the workflow mining, browsing, and provenance visualization solutions discussed in [10].
+
+The following shows how FnO and P-PLAN align, and Listing 1.1 shows how to construct P-PLAN descriptions from FnO compositions:
+
+ * fno:Execution is-a p-plan:Step
+
+ * fnoc:Composition is-a p-plan:Plan
+
+ * fno:Parameter is-a p-plan:Variable
+
+ * fno:Output is-a p-plan:Variable
+
+ * fno:expects is-a p-plan:isInputVarOf
+
+ * fno:returns is-a p-plan:isOutputVarOf
+
+${}^{1}$ https://w3id.org/function/spec/
+
+PREFIX p-plan: <http://purl.org/net/p-plan#>
+PREFIX fnoc: <https://w3id.org/function/vocabulary/composition#>
+PREFIX fno: <https://w3id.org/function/ontology#>
+PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
+
+CONSTRUCT {
+  ?s a p-plan:Plan .
+  ?exX a p-plan:Step ; p-plan:isStepOfPlan ?s .
+  ?exY a p-plan:Step ; p-plan:isStepOfPlan ?s ; p-plan:isPrecededBy ?exX .
+}
+WHERE {
+  ?s rdf:type fnoc:Composition ;
+     fnoc:composedOf [ fnoc:mapFrom [ fnoc:constituentFunction ?fx ;
+                                      fnoc:functionOutput ?fxOut ] ;
+                       fnoc:mapTo   [ fnoc:constituentFunction ?fy ;
+                                      fnoc:functionParameter ?fyParameter ] ] .
+  ?exX fno:executes ?fx .
+  ?exY fno:executes ?fy .
+}
+
+Listing 1.1. Pseudo-SPARQL query for constructing the precedence relations in P-PLAN from the CompositionMappings in FnO.
+
+§ 4 USE CASE
+
+In this section we discuss POSH (Predictive Optimized Supply Chain): a motivating use case showcasing the need for an interoperable KG construction workflow.
+
+POSH is an imec.icon research project in which methods and software solutions are researched that leverage data to optimize integrated procurement and inventory management strategies. A data integration and quality framework is deemed necessary to increase the accuracy and reliability of supply chain data that has been collected from heterogeneous data sources (suppliers, customers, service providers, etc.). Within POSH, we developed a semantically-enhanced knowledge integration framework that uses various data repositories and external (meta)data to provide a clear overview of the current state of the supply chain and the necessary inputs for the prediction, optimization and decision support methods.
+
+To this end, a KG is generated from the heterogeneous supply chain data and consequently exposed through a triple store endpoint. This enables our partners to take advantage of running queries against a uniform data model without being burdened with the heterogeneous sources from which it is constructed, and to focus on designing algorithms for optimizing the supply chain. However, not all data was made available from the start but rather added progressively, and the requirements together with the mapping rules that satisfy them changed in parallel. Hence, the KG generation tasks need to be executed iteratively to incorporate the changes, which can become time-consuming when done manually. To iteratively accommodate changing requirements and constraints, an implementation-independent workflow system was needed. Within POSH, we applied our method to provide a workflow system flexible enough to adapt to different technology stacks.
+
+§ 5 DEMONSTRATION
+
+In this section we demonstrate a working example of an ETL workflow comprising two tasks: i) generating RDF; and ii) publishing the generated RDF. Due to space restrictions only excerpts of the descriptions are shown.
+
+First, we define the task of generating RDF as a function that takes the URI to a mapping, and the URI to which the result should be written. We make use of the RML mapping language to have an interoperable RDF generation step. Secondly, we define the publishing task as a function which takes the URI to the generated RDF data as input parameter and outputs a URI to the endpoint through which it is published. These descriptions are shown in Listing 1.2.
+
+@prefix fno: <https://w3id.org/function/ontology#> .
+@prefix fns: <http://example.org/functions#> .   # fns IRI not preserved in this excerpt
+
+fns:generateRDF a fno:Function ;
+    fno:expects ( fns:fpathMappingParameter fns:fpathOutputParameter ) ;
+    fno:returns ( fns:returnOutput ) .
+
+fns:publish a fno:Function ;
+    fno:expects ( fns:inputRDFParameter ) ;
+    fno:returns ( fns:returnOutput ) .
+
+ Listing 1.2. Task descriptions in FnO
+
+We describe an overarching ETL task as the composition of these two functions, illustrated in Listing 1.3. We define how the data flows between the composed functions using fnoc:CompositionMapping. fnoc:Composition links the output of the first task to the second task by means of a fnoc:CompositionMapping. Note that, using composition, we are able to describe the workflow at multiple levels of abstraction. In analogy with an ETL workflow, for example, the highest level of abstraction represents the three Extract, Transform, and Load tasks. The second level can contain more specific, yet abstract, tasks that are required to fulfill each of the three Extract, Transform, and Load tasks. Depending on the complexity of each task, it can be described further in a lower level of abstraction.
+
+@prefix fno: <https://w3id.org/function/ontology#> .
+@prefix fnoc: <https://w3id.org/function/vocabulary/composition#> .
+@prefix fns: <http://example.org/functions#> .   # fns IRI not preserved in this excerpt
+
+fns:ETL a fno:Function ;
+    fno:expects ( fns:fpathMappingParameter fns:fpathOutputParameter ) ;
+    fno:returns ( fns:returnOutput ) .
+
+fns:ETLComposition a fnoc:Composition ;
+    fnoc:composedOf
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
+                         fnoc:functionParameter fns:fpathMappingParameter ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:generateRDF ;
+                         fnoc:functionParameter fns:fpathMappingParameter ] ] ,
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:ETL ;
+                         fnoc:functionParameter fns:fpathOutputParameter ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:generateRDF ;
+                         fnoc:functionParameter fns:fpathOutputParameter ] ] ,
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:generateRDF ;
+                         fnoc:functionOutput fns:returnOutput ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:publish ;
+                         fnoc:functionParameter fns:inputRDFParameter ] ] ,
+        [ fnoc:mapFrom [ fnoc:constituentFunction fns:publish ;
+                         fnoc:functionOutput fns:returnOutput ] ;
+          fnoc:mapTo   [ fnoc:constituentFunction fns:ETL ;
+                         fnoc:functionOutput fns:returnOutput ] ] .
+
+Listing 1.3. ETL Workflow description using FnO composition
+
+We created a proof-of-concept Function Handler that automatically executes these descriptions using different implementations, available at https://github.com/FnOio/function-handler-js/tree/kgc-etl. Furthermore, we provide tests ${}^{2}$ in which we verify the execution sequence of a function composition, and demonstrate the interoperability through function compositions that resemble a KG construction workflow in which the RDF-generation task can be implemented by different tools.
+
+§ 6 CONCLUSION
+
+Declarative function descriptions, and compositions thereof, allow us to define workflows that are decoupled from the execution environment. The explicit semantics allow for the unambiguous definition of inputs, outputs and implementations. Hence, they allow us to automatically determine which functions can be used to execute a task. Alignment with PROV allows for a reproducible workflow as both tasks and execution details are provided, which makes it possible to determine exactly which functions were applied throughout the execution of the workflow.
+
+Defining a workflow through compositions allows for different levels of abstractions. When rapid prototyping is required, only high-level tasks can be described. As requirements become more concrete, a high-level task can be described in greater detail as a composition of more fine-grained tasks.
+
+These various levels of abstraction also allow for various levels of provenance information and thus various levels of reproducibility. For example, at one end of the spectrum, a function can be implemented by a command-line tool: no provenance information is available about the transformations that have been applied to produce the output. At the other end of the spectrum, a task can be described as a (nested) composition of fine-grained functions: provenance information is available down to the level of atomic functions.
+
+For future work, we see a mapping language as a way to describe compositions of transformation tasks. By representing, e.g., a Triples Map in RML as a composition of data and schema transformation tasks, we can provide insights into what a mapping does, and in what order. These insights could help provide optimization strategies for such engines.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8d124387f25e48b8f62e839ce047676b8a6fe098
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,205 @@
+# A Human-in-the-Loop Approach for Personal Knowledge Graph Construction from File Names
+
+Markus Schröder, Christian Jilek, and Andreas Dengel
+
+${}^{1}$ Smart Data & Knowledge Services Dept., DFKI GmbH, Kaiserslautern, Germany
+
+${}^{2}$ Computer Science Dept., TU Kaiserslautern, Germany
+
+\{markus.schroeder, christian.jilek, andreas.dengel\}@dfki.de
+
+Abstract. Knowledge workers' personal and work-related concepts (e.g. persons, projects, topics) are usually not sufficiently covered by knowledge graphs. Yet handmade classification schemes, most prominently folder structures, already mention several of these concepts naturally in file names. Thus, such data could be a promising source for constructing personal knowledge graphs. However, this idea poses several challenges: file names are usually noisy non-grammatical text snippets, while folder structures do not clearly define how concepts relate to each other. To cope with this semantic gap, we include knowledge workers as humans-in-the-loop to guide the building process with their feedback. Our semi-automatic personal knowledge graph construction approach consists of four major stages: domain term extraction, ontology population, taxonomic and non-taxonomic relation learning. We conduct a case study with four expert interviews from different domains in an industrial scenario. Results indicate that file systems are promising sources and, combined with our approach, already yield useful personal knowledge graphs with moderate effort spent.
+
+Keywords: Knowledge Graph Construction - Personal Knowledge Graph - Human-in-the-Loop - File System
+
+## 1 Introduction
+
+Knowledge graphs (KGs) have become a popular technology to support knowledge workers in various applications (for a survey see [8]). Since such KGs are constructed from domain-specific document corpora, personal concepts of knowledge workers in these domains are usually not sufficiently covered. To fill this gap, there is the emerging concept of Personal Knowledge Graphs (PKGs) which focus on resources users are personally related to (also in their professional life). The population and maintenance of such graphs is still an open research question [1], especially, when knowledge is not modeled yet (cold start problem). Various sources in a user's personal information sphere may be worth considering to kick-start a population [12].
+
+
+
+Fig. 1: A file system (left) with file names containing relevant words (green) and irrelevant words (red). They form a personal knowledge graph (right) with non-taxonomic and taxonomic relations. For readability, some edges are omitted.
+
+When users self-organize diverse documents in daily business, they often manage them in a form of classification schema, prominently in file systems [4]. Here, documents are hierarchically arranged and freely named according to aspects such as projects, organizations, persons, topics and task-related concepts. In file and folder names such concepts are typically mentioned in order to let users guess their contents. Because file systems allow naming them mostly freely ${}^{3}$ , users tend to label them with their own vocabulary, which can contain technical terms, made-up words or even puns [2]. Thus, we hypothesize that file names could be a promising source for constructing PKGs.
+
+This idea poses several challenges due to the nature of the data source. Literature already showed that users have a large variety of file naming strategies [5,3]. File names are usually short, ungrammatical (sometimes noisy) text snippets and contain differently ordered and concatenated keywords. These circumstances make it difficult to discover and extract relevant named entities from them. Besides labeling, users can also assemble files in hierarchically structured folders [14]. Yet, this "folder contains file" structure typically does not explicitly define how named entities relate to each other.
+
+To give a visual example, Figure 1 depicts a small file system (left) and a possible personal knowledge graph (right). Because some keywords in the file names are too general (images) or have a technical meaning (Thumbs), they may be irrelevant for the user (underlined in red). Relevant keywords (green) become resources in the PKG, while a foaf:topic property keeps track of the file resource in which each of them is mentioned (only one such link is shown for readability). Named individuals (Zenphase, Parker, Mercurtainment) are assigned to their classes (Project, Person, Organization) and are connected meaningfully (:hasProject, :worksFor). The remaining ones are rather abstract ideas and thus become skos:Concepts according to the Simple Knowledge Organization System (SKOS). A taxonomy tree is formed (top-right side) by adding broader concepts (:DocumentType, :DocumentState). Since WIP is an abbreviation, its skos:prefLabel contains the long form. Synonyms and other spellings are captured in skos:hiddenLabels: for the user the term Drawing is a synonym of treeDiagram and docs in file names indicates the concept Document. Due to lack of space, labels and some other properties are not visualized.
+
+---
+
+${}^{3}$ Restricted only by illegal characters and maximum file name length.
+
+---
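+
+As a rough illustration of the Figure 1 example (assuming rdflib and a made-up pkg namespace; the resource and property names follow the figure, everything else is illustrative), such a personal knowledge graph could be assembled as follows:
+
+```python
+# Illustrative sketch (not the authors' code): the Figure 1 example as an RDF graph.
+from rdflib import Graph, Literal, Namespace
+from rdflib.namespace import FOAF, RDF, SKOS
+
+PKG = Namespace("http://example.org/pkg#")   # hypothetical namespace
+g = Graph()
+g.bind("pkg", PKG)
+g.bind("skos", SKOS)
+g.bind("foaf", FOAF)
+
+# Named individuals extracted from file names, typed with ontology classes.
+g.add((PKG.Zenphase, RDF.type, PKG.Project))
+g.add((PKG.Parker, RDF.type, PKG.Person))
+g.add((PKG.Mercurtainment, RDF.type, PKG.Organization))
+g.add((PKG.Parker, PKG.hasProject, PKG.Zenphase))
+g.add((PKG.Parker, PKG.worksFor, PKG.Mercurtainment))
+
+# Abstract ideas become SKOS concepts arranged in a small taxonomy.
+g.add((PKG.WIP, RDF.type, SKOS.Concept))
+g.add((PKG.WIP, SKOS.prefLabel, Literal("Work in Progress")))   # long form of the abbreviation
+g.add((PKG.WIP, SKOS.broader, PKG.DocumentState))
+g.add((PKG.Drawing, RDF.type, SKOS.Concept))
+g.add((PKG.Drawing, SKOS.hiddenLabel, Literal("treeDiagram")))  # user-specific synonym
+g.add((PKG.Drawing, SKOS.broader, PKG.DocumentType))
+
+# foaf:topic links a file resource to the concepts mentioned in its name.
+g.add((PKG["file-WIP_for2007-treeDiagram"], FOAF.topic, PKG.WIP))
+
+print(g.serialize(format="turtle"))
+```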
+
+In this paper, we present a semi-automatic personal knowledge graph construction approach which is able to build such a graph from a classification schema, in this case a file system, together with expert feedback. A graphical user interface (GUI) assists a knowledge engineer (KE) in performing several tasks during construction: the discovery of concepts in file names, ontology population of concepts and learning of taxonomic as well as non-taxonomic relations. In an interview setting an expert can describe his or her personal view on their files to the KE, who translates the explanations into suitable knowledge graph statements using the GUI. To reduce the manual effort for the KE, we make use of machine learning models which learn from feedback and predict new statements during usage. The proposed method raises several research questions (RQs), for which first answers are reported in this work.
+
+- RQ1: Are file systems promising sources for knowledge graph construction?
+
+- RQ2: Can our system suggest helpful statements during usage?
+
+- RQ3: How efficient is the construction in our approach?
+
+The rest of this paper is structured as follows: related approaches are covered in the next section (Sec. 2). This is followed by the presentation of our approach in Section 3 and a prototypical implementation in Section 3.6. The above research questions are then addressed in a case study with expert interviews in Section 4. Section 5 closes the paper with a conclusion and future work.
+
+## 2 Related Work
+
+To personally assist knowledge workers in their tasks, knowledge services benefit from personal information models about users [12]. For building such a model, personal concepts can be acquired from various texts in a user's personal information sphere [13]. Thus, folder structures could be useful for this purpose, which is also investigated by other related work.
+
+Magnini et al. [10] also consider hierarchical classifications and analyze the implicit knowledge hidden in the labeled nodes. They use logic formulas expressed in description logic and word senses discovered and disambiguated in labels to make knowledge explicit. Contextual interpretations such as implicit disjunctions and negations are performed by exploiting the hierarchy. In contrast to our work, their goal is the definition of an ontology with classes and properties (TBox) by relying on external language repositories containing word senses. For us the usage of such resources is limited, since word senses of personal concepts (like projects) are usually not contained in them. Moreover, they present a fully automatic approach without integrating domain experts in cases where labels do not match any dictionary entry.
+
+More closely related is the work about knowledge extraction from classification schemes by Lamparter et al. [9]. Following the same motivation, the authors would like to acquire explicit semantic descriptions from legacy information such as local folder structures. To achieve this, their processing pipeline includes the identification of concept candidates, word sense disambiguation, taxonomy construction and identification of non-taxonomic relations. They distinguish ontology and instance layer by checking with dictionaries if terms are rather general (concepts) or specific (instances). In our approach, we only consider instances, but classify general ideas as skos:Concepts (e.g. Diagram). They also build a taxonomy by utilizing hyponym and hypernym information. In case of non-taxonomic relations, the work reuses domain-specific ontologies, while the classification hierarchy as well as its labels are consulted to guess appropriate relations. Our procedure is similar, but additionally considers user feedback to train machine learning models in order to predict such relations.
+
+In conclusion, to the best of our knowledge, there is no approach like ours that constructs personal knowledge graphs from folder structures and at the same time includes experts with their feedback.
+
+## 3 Approach
+
+
+
+Fig. 2: Components of our approach from left to right.
+
+Our approach enables knowledge engineers (KEs) to construct personal knowledge graphs from a classification schema, for example, a folder structure as shown in Figure 1. In this process, we support them in four tasks which are depicted in Figure 2 and explained in individual sections: Domain Terminology Extraction (Section 3.2), Management of Named Individuals (Section 3.3), Taxonomy Creation (Section 3.4) and Non-Taxonomic Relation Learning (Section 3.5). During modeling using a dedicated GUI (Section 3.6) the KE is assisted by an artificial intelligence (AI) system which proactively makes statements on its own. For ontology population and non-taxonomic relations, machine learning models predict statements. To correctly store and distinguish these assertions, we first designed an appropriate data model.
+
+### 3.1 Knowledge Graph Model
+
+Our knowledge graph model is an RDF graph consisting of statements in the form of subject-predicate-object triples. However, in our scenario, we have to store additional feedback information for each statement. We consider exactly two agents in our system who are able to give feedback about statements: a knowledge engineer (KE) and an artificial intelligence (AI). Both contribute to the same personal knowledge graph with assertions which can be true, but also false (negative statement). To keep track of the provenance, we store the following meta data for each statement: (a) which agent stated it, (b) the date and time it was stated, (c) how the statement is rated (true, false or undecided) and (d) how confident the agent is (a real value between 0 and 1). Additionally, we use foaf:topic-statements to state that a classification schema node (subject) mentions a certain knowledge graph resource (object) (see an example in Figure 1). Regarding the rating, since natural intelligence is usually more reliable than an artificial one, the KE always outvotes suggestions from the AI. Yet, assertions of the AI are assumed to be true as long as the KE does not disagree.
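+
+A minimal sketch of such a data model, with hypothetical names and simplified types (the prototype may store this differently, e.g., directly as RDF with additional provenance triples):
+
+```python
+# Sketch of statements carrying the per-statement feedback metadata described above.
+from dataclasses import dataclass
+from datetime import datetime
+from enum import Enum
+
+class Rating(Enum):
+    TRUE = "true"
+    FALSE = "false"
+    UNDECIDED = "undecided"
+
+@dataclass
+class Statement:
+    subject: str        # e.g. "pkg:Parker"
+    predicate: str      # e.g. "pkg:worksFor"
+    obj: str            # e.g. "pkg:Mercurtainment"
+    agent: str          # "KE" or "AI"
+    stated_at: datetime
+    rating: Rating
+    confidence: float   # 0.0 .. 1.0
+
+def effective_rating(assertions: list[Statement]) -> Rating:
+    """The KE always outvotes the AI; otherwise AI assertions count as stated."""
+    ke = [a for a in assertions if a.agent == "KE"]
+    chosen = max(ke or assertions, key=lambda a: a.stated_at, default=None)
+    return chosen.rating if chosen else Rating.UNDECIDED
+
+ai = Statement("pkg:Parker", "pkg:worksFor", "pkg:Mercurtainment",
+               "AI", datetime(2022, 3, 1), Rating.TRUE, 0.8)
+ke = Statement("pkg:Parker", "pkg:worksFor", "pkg:Mercurtainment",
+               "KE", datetime(2022, 3, 2), Rating.FALSE, 1.0)
+assert effective_rating([ai]) is Rating.TRUE       # AI assumed true until the KE disagrees
+assert effective_rating([ai, ke]) is Rating.FALSE  # the KE outvotes the AI
+```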
+
+### 3.2 Domain Terminology Extraction
+
+Our extraction method uses heuristics to make a first guess for relevant terms in the user's domain. Since word boundaries are often not evident in rather messy file names, we tokenize their basenames (without considering file extensions) by character type and camel case. In addition, the acquired tokens are rated based on some simple rules: stop words and tokens containing a single letter or only symbols are negatively rated. This also applies to tokens which only contain digits, except when they look like years (e.g. $n \in [1980, 2030]$). Applying these rules, the example below is tokenized (indicated by a pipe symbol '|') and rated (indicated by color) as follows: WIP|_____|for|2007|-|tree|Diagram|!|(|28|)|A|.jpg. Thus, the rules let us assume that the tokens WIP, 2007, tree and Diagram are relevant. In case of multi-word terms, the KE is able to merge separated tokens into a single term again, like for the latter two (i.e. Tree Diagram).
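+
+The following Python sketch imitates these tokenization and rating heuristics; the stop word list and the exact rules are simplified assumptions, not the prototype's actual rule set:
+
+```python
+# Rough sketch of the tokenization by character type / camel case and the rating rules.
+import re
+
+STOP_WORDS = {"for", "the", "and", "of"}   # illustrative stop word list
+
+def tokenize(basename: str) -> list[str]:
+    """Split a file's basename by character type and camel case."""
+    return re.findall(r"[A-Z][a-z]+|[a-z]+|[A-Z]+(?![a-z])|\d+|[^\w\s]+|_+", basename)
+
+def rate(token: str) -> bool:
+    """True = probably relevant, False = probably irrelevant."""
+    if token.lower() in STOP_WORDS:
+        return False
+    if len(token) == 1 or not any(c.isalnum() for c in token):
+        return False                            # single letters, pure symbols
+    if token.isdigit():
+        return 1980 <= int(token) <= 2030       # digits only count if they look like a year
+    return True
+
+tokens = tokenize("WIP_for2007-treeDiagram!(28)A")   # file extension already stripped
+print([(t, rate(t)) for t in tokens])
+# WIP, 2007, tree, Diagram -> relevant; for, 28, A, and the symbols -> irrelevant
+```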
+
+After adjusting the rating according to feedback from a domain expert, other occurrences of accepted terms are automatically searched using a regular expression, since they may occur in a classification scheme more than once. If the term contains multiple words, we also search for all possible word concatenations using the separators "-" (minus), "_" (underscore), " " (space) and also no separator at all. To give an example, for the term treeDiagram our system also checks the variations tree-Diagram, tree_Diagram and tree Diagram. Finally, the collected term variations are associated with a named individual (i.e. owl:NamedIndividual according to OWL).
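+
+A small sketch of how such variations could be generated and searched (simplified; the prototype's regular expressions may differ):
+
+```python
+# Sketch of generating spelling variations for a multi-word term and finding occurrences.
+import re
+
+SEPARATORS = ["-", "_", " ", ""]
+
+def variations(words: list[str]) -> list[str]:
+    """All concatenations of the words with the supported separators."""
+    return [sep.join(words) for sep in SEPARATORS]
+
+def occurrences(words: list[str], names: list[str]) -> list[str]:
+    pattern = re.compile("|".join(re.escape(v) for v in variations(words)), re.IGNORECASE)
+    return [n for n in names if pattern.search(n)]
+
+print(variations(["tree", "Diagram"]))
+# ['tree-Diagram', 'tree_Diagram', 'tree Diagram', 'treeDiagram']
+print(occurrences(["tree", "Diagram"], ["WIP_for2007-treeDiagram!(28)A.jpg", "notes.txt"]))
+# ['WIP_for2007-treeDiagram!(28)A.jpg']
+```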
+
+### 3.3 Management of Named Individuals
+
+After retrieving all found term variations $T$ , we have to decide if they (a) resemble an already existing named individual or (b) define a new one. Regarding the first case, each newly discovered term may be a variation that refers to an already created named individual. Thus, we calculate the Jaccard similarity coefficient [7] between the terms $T$ and the candidates’ labels $L$ . A named individual is picked which has the highest overlap between its labels and the given terms. If we cannot find such a resource above a sufficient similarity threshold, a new one is created. The longest term is used to give the resource a preferred label (skos:prefLabel) after some conversions are performed: German umlaut spellings are corrected (e.g. "ae" $\rightarrow$ "ä"), underscores are replaced with spaces, if available a lemma version is used (diagrams $\rightarrow$ diagram) and proper case is applied (Tree Diagram). The remaining terms form the named individual's synonym and differently spelled labels (skos:hiddenLabel). In both cases, we keep track of the file resources in which the named individuals are mentioned by using a foaf:topic-relation.
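+
+Sketched in Python, the matching step could look as follows; the similarity threshold and the example labels are illustrative:
+
+```python
+# Sketch of matching new term variations against existing named individuals via Jaccard.
+def jaccard(a: set[str], b: set[str]) -> float:
+    return len(a & b) / len(a | b) if a | b else 0.0
+
+def match_or_create(terms: set[str], individuals: dict[str, set[str]], threshold: float = 0.5):
+    """Return the best matching individual, or None if a new one should be created."""
+    best, best_sim = None, 0.0
+    for uri, labels in individuals.items():
+        sim = jaccard({t.lower() for t in terms}, {l.lower() for l in labels})
+        if sim > best_sim:
+            best, best_sim = uri, sim
+    return best if best_sim >= threshold else None
+
+existing = {"pkg:TreeDiagram": {"Tree Diagram", "treeDiagram", "tree-Diagram"}}
+print(match_or_create({"treeDiagram", "tree Diagram"}, existing))   # -> pkg:TreeDiagram
+print(match_or_create({"WIP"}, existing))                           # -> None, create a new one
+```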
+
+Unification. If two or more named individuals have the same meaning, we can unify them into one resource. This is done by correctly substituting URIs and at the same time removing the source triples. The AI automatically detects potential individuals with the same meaning by looking at their labels and applying some rules: it checks whether hidden labels overlap or whether there is a prefix or postfix dependency, while preferred labels are compared with the Levenshtein distance and token-based equality. For example, for the following label pairs our procedure would suggest that their individuals are equal: ("Peter Parker", "Parker Peter"); ("Tree Diagram", "Diagram") and ("diagram", "diagramm").
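+
+A simplified sketch of the label-based duplicate detection, covering only token-based equality and the Levenshtein distance (not the prefix/postfix rules on hidden labels):
+
+```python
+# Sketch: token-equal preferred labels or a small Levenshtein distance suggest unification.
+def levenshtein(a: str, b: str) -> int:
+    prev = list(range(len(b) + 1))
+    for i, ca in enumerate(a, 1):
+        cur = [i]
+        for j, cb in enumerate(b, 1):
+            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
+        prev = cur
+    return prev[-1]
+
+def same_meaning(pref_a: str, pref_b: str, max_distance: int = 2) -> bool:
+    token_equal = sorted(pref_a.lower().split()) == sorted(pref_b.lower().split())
+    return token_equal or levenshtein(pref_a.lower(), pref_b.lower()) <= max_distance
+
+print(same_meaning("Peter Parker", "Parker Peter"))  # True (token-based equality)
+print(same_meaning("diagram", "diagramm"))           # True (Levenshtein distance 1)
+print(same_meaning("Zenphase", "Mercurtainment"))    # False
+```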
+
+Ontology Population. The KE manually creates ontology classes and types named individuals with them. To support the KE in this assignment, a random forest model [6] is trained with positive examples from feedback to be able to predict classes for individuals without a type. In order to acquire training features, we follow a gazetteer-based embedding technique by looking up words from several gazetteer lists in preferred labels of named individuals. Remaining characters are counted per character class such as spaces, quotes and digits. The coverage proportions of words and characters in the label serve as the final feature vector. To give some examples, "Tree Diagram 27" receives the vector $v_1 = (\text{English Noun} = 0.73, \text{Space} = 0.13, \text{Digit} = 0.13)$, while "WIP" has $v_2 = (\text{Uppercase Letter} = 1.0)$. Having such feature vectors, the random forest model is able to learn decision trees which predict the same type for named individuals having preferred labels very similar in content. For instance, since the individual Tree Diagram 27 is assigned to skos:Concept and another individual Diagram 3 has a similar feature vector, our model predicts the same class for it.
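+
+The following sketch mimics the gazetteer-based features and the type prediction with scikit-learn's random forest; the gazetteer list and training examples are made up for illustration:
+
+```python
+# Sketch of gazetteer-based feature vectors and type prediction with a random forest.
+from sklearn.ensemble import RandomForestClassifier
+
+GAZETTEERS = {"english_noun": {"tree", "diagram", "report", "invoice"}}
+
+def features(pref_label: str) -> list[float]:
+    """Coverage proportions of gazetteer words and character classes in the label."""
+    n = len(pref_label)
+    in_gazetteer = sum(len(w) for w in pref_label.lower().split()
+                       if w in GAZETTEERS["english_noun"])
+    digits = sum(c.isdigit() for c in pref_label)
+    spaces = sum(c.isspace() for c in pref_label)
+    upper = sum(c.isupper() for c in pref_label)
+    return [in_gazetteer / n, digits / n, spaces / n, upper / n]
+
+# Positive examples collected from KE feedback: (preferred label, class).
+train = [("Tree Diagram 27", "skos:Concept"), ("WIP", "skos:Concept"),
+         ("Peter Parker", "pkg:Person"), ("Mercurtainment AG", "pkg:Organization")]
+X = [features(label) for label, _ in train]
+y = [cls for _, cls in train]
+
+model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
+print(model.predict([features("Diagram 3")]))   # similar vector -> likely 'skos:Concept'
+```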
+
+### 3.4 Taxonomy Creation
+
+Our intended taxonomy uses broader and narrower relations to structure concepts (skos:Concept) found in file names according to the Simple Knowledge Organization System (SKOS). Since we see these concepts as leaves in a taxonomy tree, our motivation is to find broader concepts for them. For this, our approach utilizes a language resource of synsets and hypernym relations. The concepts in the PKG are mapped via their labels to synsets of the lexical-semantic net. By traversing hypernym relations for all found synsets, two or more of them may share the same ancestor along their hypernym paths. If the average distance from the synsets to the ancestor is below a configurable threshold, it is suggested as a broader concept for them. This constraint avoids the recommendation of too general concepts (e.g. near the root node). To give an example, given the hypernym paths diagram $\rightarrow$ depiction and timetable $\rightarrow$ overview $\rightarrow$ depiction, our procedure would suggest the broader concept depiction for both leaves. Of course the KE may at any time create concepts manually and link them accordingly. Besides such taxonomic relations, our system also considers non-taxonomic ones between instances.
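+
+A small sketch of the broader-concept suggestion; the hypernym paths are hard-coded here instead of coming from a lexical-semantic resource such as GermaNet:
+
+```python
+# Sketch: an ancestor shared by several concepts is proposed as a broader concept
+# if its average distance along the hypernym paths stays below a threshold.
+from itertools import combinations
+
+# Hypernym paths (leaf -> ... -> root), hard-coded for illustration.
+HYPERNYM_PATHS = {
+    "diagram": ["diagram", "depiction", "representation", "entity"],
+    "timetable": ["timetable", "overview", "depiction", "representation", "entity"],
+}
+
+def suggest_broader(max_avg_distance: float = 2.0):
+    suggestions = []
+    for a, b in combinations(HYPERNYM_PATHS, 2):
+        shared = [h for h in HYPERNYM_PATHS[a] if h in HYPERNYM_PATHS[b]]
+        if not shared:
+            continue
+        ancestor = shared[0]                 # closest common ancestor
+        avg = (HYPERNYM_PATHS[a].index(ancestor) + HYPERNYM_PATHS[b].index(ancestor)) / 2
+        if avg <= max_avg_distance:          # avoid too general concepts near the root
+            suggestions.append((ancestor, a, b))
+    return suggestions
+
+print(suggest_broader())   # [('depiction', 'diagram', 'timetable')]
+```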
+
+### 3.5 Non-Taxonomic Relation Learning
+
+To predict non-taxonomic relations, we perform link prediction by training a model on positive examples from feedback and by exploiting the structure of the classification schema (CS). Our idea is that the same non-taxonomic predicate could be suggested between other resources (subjects and objects) which have a similar neighborhood in the CS. For this, we only consider class instances which are named individuals that have been assigned to an ontology class. Since instances are annotated on files via a foaf:topic-relation, we know in which places of the CS they are mentioned. This annotated CS needs to be transformed into an undirected graph of connected instances to perform link prediction on it. We make an edge from an instance $i$ mentioned in a given node to another instance $j$ , whenever $j$ is mentioned in (a) the node itself, (b) the node’s parent, (c) one of the node's children or (d) one of the node's siblings (i.e. children of parent). In other words, instances are connected in the graph if they are closely mentioned in the CS. With the given graph, we are able to calculate local similarity measures for links (for a survey see [11, Table 1]). Values of the calculated measures form feature vectors in a training set. The test set is acquired by iterating over all possible combinations of instances and properties by using their domain and range information as a filter. A promising triple in the test set is expected when we calculate a small Euclidean distance (below a given threshold) between its test vector and a training vector.
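+
+Sketched with a toy graph (instance names and the distance threshold are illustrative), the link prediction step could look like this:
+
+```python
+# Sketch: instances closely mentioned in the classification schema are connected,
+# local similarity measures form feature vectors, and candidate triples close to a
+# positive training vector are suggested.
+from math import dist
+
+EDGES = [("Parker", "Zenphase"), ("Parker", "ProjectPlan"), ("Zenphase", "ProjectPlan"),
+         ("Smith", "Apollo"), ("Smith", "Report"), ("Apollo", "Report")]
+
+def neighbours(node):
+    return {b if a == node else a for a, b in EDGES if node in (a, b)}
+
+def link_features(u, v) -> list[float]:
+    """Local similarity measures for a candidate link (common neighbours, Jaccard)."""
+    nu, nv = neighbours(u), neighbours(v)
+    common = len(nu & nv)
+    jacc = common / len(nu | nv) if nu | nv else 0.0
+    return [common, jacc]
+
+# One positive example from KE feedback: Parker :hasProject Zenphase.
+positive = link_features("Parker", "Zenphase")
+
+# Candidate triples, filtered by the property's domain/range (here Person x Project).
+for subj, obj in [("Smith", "Apollo"), ("Smith", "Zenphase")]:
+    if dist(link_features(subj, obj), positive) < 0.5:
+        print(f"suggest: {subj} :hasProject {obj}")   # only Smith/Apollo is suggested
+```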
+
+### 3.6 Prototypical Implementation
+
+To test our approach in a case study, we implemented a prototype. A demo video ${}^{4}$ and its source code ${}^{5}$ are publicly available. To assist the KE in entering feedback and constructing the PKG, a graphical user interface (GUI) in form of a web application is provided (see Figure 3). Throughout the interface, we make heavy use of thumbs-up and thumbs-down buttons as well as green and red colored elements to visualize positive and negative feedback (true and false assertions). The three-column layout presents tabs for individual components which give dedicated views for the tasks we have discussed.
+
+A typical Explorer view (top left) lists the files contained in the currently browsed folder (/User/Downloads). The view presents for each file (from top to bottom) its file name, rated terms from the file name and annotated named individuals. To distinguish individuals from terms the well-known hashtag symbol is added to their preferred labels. In a separate Named Individuals view in the top middle, we itemize them together with their type. Two side-by-side views enable a Drag&Drop mechanism on individuals to let the KE define triples with a selected predicate (drop-down list in the middle). On the top right, classes and properties can be manually created, renamed and rated in an Ontology view. For each property, domain and range classes can be defined too. In separate tabs (bottom left) our GUI also presents suggestions for Unification, Typing, Taxonomic and Non-Taxonomic Relations (the screenshot shows an opened Typing tab). A list of proposals from the AI can be reviewed by the KE, who can accept or reject them individually or in bulk. Decisions are shown below and can always be undone in either way. In a detail view (bottom middle), the KE is able to change a selected individual's preferred label, type, hidden labels and file attachment. A Status view (bottom right) visualizes the current PKG construction state in four sections: the progress in tagging, typing, taxonomy tree and non-taxonomic graph as well as an overall assessment score. These estimations give hints to the KE where more feedback from the expert is necessary.
+
+---
+
+${}^{4}$ https://www.dfki.uni-kl.de/~mschroeder/demo/kecs
+
+${}^{5}$ https://github.com/mschroeder-github/kecs
+
+---
+
+
+
+Fig. 3: Our graphical user interface in a three-column layout with many feedback possibilities and components (top). Dedicated components are provided to perform certain tasks (bottom).
+
+## 4 Case Study: Expert Interviews
+
+A case study was conducted with expert interviews in which personal knowledge graphs (PKGs) were built with their feedback. The setup for these interviews is covered in Section 4.1. This is followed by a detailed description of all collected results (Section 4.2) which are then discussed with regard to our stated research questions (Section 4.3).
+
+Table 1: Four datasets with their meta data which are used in interviews with four experts.
+
+| Dataset | Expert | Branches | Leaves | Max. Depth | Avg. Depth | Avg. Name Length |
+|---------|--------|----------|--------|------------|------------|------------------|
+| SS1 | E1 | 103 | 198 | 3 | ${2.98} \pm {0.16}$ | ${8.84} \pm {9.86}$ |
+| FS1 | E2 | 25,988 | 95,760 | 17 | ${9.49} \pm {1.93}$ | ${23.30} \pm {16.88}$ |
+| FS2 | E3 | 8,939 | 64,571 | 17 | ${9.18} \pm {1.68}$ | ${32.43} \pm {16.77}$ |
+| FS3 | E4 | 54,933 | 325,476 | 22 | ${10.08} \pm {2.22}$ | ${24.24} \pm {14.57}$ |
+
+### 4.1 Expert Interview Setup
+
+Since our institute has industry projects with several departments of a large power supply company, we had the great opportunity to get in contact with four individual experts from four departments (guideline management, property management, license management and accounting). Three of them work separately on individual shared drive file systems (FS), while one primarily manages spreadsheet (SS) data. Before the interviews, we received dumps of their data which are listed in Table 1. For each dataset an expert (E) is assigned and meta data about the asset is presented.
+
+Since spreadsheets may also contain work related concepts, but are not a form of classification schema, we had to convert the SS1 dataset to a tree structure in the following way. Table names become root folders, while column names are added as their subfolders. In the subfolders, we add files with distinct names from the column's rather short cell values. This way, potential work related concepts could be contained in this generated classification schema.
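+
+A minimal sketch of this conversion; the table and column names are invented for illustration:
+
+```python
+# Sketch of converting a spreadsheet into a folder-like classification schema:
+# table -> root folder, column -> subfolder, distinct cell values -> file names.
+def spreadsheet_to_tree(tables: dict[str, dict[str, list[str]]]) -> dict:
+    tree = {}
+    for table, columns in tables.items():
+        tree[table] = {col: sorted(set(values)) for col, values in columns.items()}
+    return tree
+
+ss1 = {"Licenses": {"Holder": ["Parker", "Smith", "Parker"],
+                    "Status": ["active", "expired"]}}
+print(spreadsheet_to_tree(ss1))
+# {'Licenses': {'Holder': ['Parker', 'Smith'], 'Status': ['active', 'expired']}}
+```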
+
+Our system automatically captures several data points during usage. To reproduce the construction process, we keep a history of all stated assertions with their meta data as described in Section 3.1. By observing GUI inputs including mouse clicks, Drag&Drop operations and certain keystrokes, we quantify the KE's effort with the system. In a fixed interval (every 10 inputs) snapshots of the construction metrics (Status view) are saved to record the PKG's evolution over time. Additionally, memory consumption and time performance of certain system modules are monitored.
+
+Each one-hour long interview between the knowledge engineer (KE) and an expert had the same setting. One fixed author of this paper took over the role of KE and met the expert in a virtual telephone conference. The KE shared the screen and presented the GUI of our system (see Section 3.6) where the expert's data was already loaded. After a brief introduction, the KE started to ask questions about files and folders by traversing through the file system. The explanations of the participant enabled the KE to model the expert's personal knowledge as discussed in our approach (Section 3). Whenever the AI made predictions, the expert was asked whether they were correct and feedback was entered accordingly. Every 10 minutes the KE reviewed the current construction state by opening the Status view and shifted the focus to parts that needed more attention. After about 50 minutes the session ended and the remaining time was used to let the expert complete a questionnaire about the data source and the modeled knowledge graph. In the next section, we present the questionnaire and the results in detail as well as the data which was logged by our prototype during the interviews.
+
+Table 2: The seven questions from the questionnaire with the answers of the four experts and their average values.
+
+| Question | E1 | E2 | E3 | E4 | Avg. & SD |
+|----------|----|----|----|----|-----------|
+| Q1: How many years have you been working with the data? | 13 | 7 | 4 | 0 | $6 \pm {5.48}$ |
+| Q2: How much do words in the file names reflect your language use (vocabulary) at work (scale: 1-10)? | 9 | 8 | 9 | 9 | ${8.75} \pm {0.50}$ |
+| Q3: Estimate how much your language use (vocabulary) at work is represented by the established tags (percentage). | 50 | 15 | 10 | 10 | ${21.25} \pm {19.3}$ |
+| Q4: The established tags meaningfully reflect the language use (vocabulary) at your work (scale: 1-7). | 7 | 6 | 4 | 6 | ${5.75} \pm {1.26}$ |
+| Q5: The established tags are assigned to meaningful classes (scale: 1-7). | 6 | 7 | 6 | 7 | ${6.50} \pm {0.58}$ |
+| Q6: The established tags are meaningfully structured in a taxonomy (scale: 1-7). | 7 | 6 | 5 | 4 | ${5.50} \pm {1.29}$ |
+| Q7: The established tags meaningfully relate to each other (scale: 1-7). | 5 | 7 | 6 | 7 | ${6.25} \pm {0.96}$ |
+
+### 4.2 Interview Results
+
+The questionnaire at the interview's end consists of seven questions (Q) which are presented in Table 2 together with the experts' answers (E), their average value and standard deviation (Avg. & SD). We stated the first question (Q1) to check how familiar the participants are with the data. The second question (Q2) was asked to figure out if the experts think that the given data actually contains work-related words. While Q3 tries to give a rough estimate of the PKG's recall as a percentage, Q4 gives an approximate measure of its precision with regard to created named individuals ${}^{6}$ in the PKG. From the third question on, we are interested in the experts' opinions about the final result that was modeled during the interview. A seven-point Likert scale is used for our opinion-based questions ranging from 1 ("fully disagree") to 7 ("fully agree"). The remaining questions aim at the estimation of meaningfulness in the populated ontology (Q5) and taxonomic (Q6) as well as non-taxonomic relations (Q7).
+
+---
+
+${}^{6}$ The questions refer to "established tags", since we presented tags in the GUI for the named individuals in the personal knowledge graph (PKG).
+
+---
+
+Besides qualitative data, we also captured quantitative data points during the interview which are presented in Table 3. Measurements are listed per row, while dataset-expert pairs are ordered in columns. After the number of resources in the PKG (#Resources) and the counts regarding the knowledge engineer's (KE) effort in the GUI, we list the number of true and false assertions ${}^{7}$ made by KE and AI in individual construction phases. Furthermore, we calculate the AI's accuracy by counting how often the expert agrees (true positive and true negative) with reviewed predictions. The section about Management of Named Individuals is further split into Unification and Ontology Population. While the management includes assertions about types, preferred/hidden labels and foaf:topic-relations, the latter two only consider owl:sameAs and ontology-related assertions. Due to a software error in the taxonomy module during the first two interviews, unfortunately, no broader concepts could be predicted. On the table's bottom all assertions by the KE (whether true or false) and all inputs (clicks, enter keys, drag&drop operations) are aggregated to calculate an assertions-per-inputs ratio. The Management of Named Individuals does not have an accuracy value (N/A), since each term automatically turns into a named individual and no suggestions for preferred and hidden label are made.
+
+Since we continuously recorded measurements, we are able to examine the evolution of the PKG with respect to the inputs performed in the GUI. The development of the taxonomic and non-taxonomic part of the PKG is presented through several plots in Figure 4. We consider named individuals of type skos: Concept as taxonomy concepts (Figure 4a) and the remaining typed ones as non-taxonomic instances (Figure 4d). By looking at the number of graph components (Fig. 4b and 4e), one gets an idea of the connectedness over time. In addition, Figure 4c plots the number of concepts which are connected to at least one broader concept. Similarly, Figure 4f shows the average diameter (the greatest distance between any pair of instances) of non-taxonomic components to visualize the closeness among them.
+
+The next section will discuss the results with regard to our research questions.
+
+### 4.3 Discussion
+
+Since file names are rather unusual sources to build PKGs from, we asked the following question at the beginning of the paper (RQ1): Are file systems promising sources for knowledge graph construction? Our experts agree that words they saw in the file names reflect their language use at work with an average value of 8.75 out of 10 (Q2 in Table 2). Having a higher-level management background, expert E4 did not come into contact with file system FS3 in daily work (see Q1 in Table 2), but was still able to recognize and explain the terms. Answers to questions Q4 to Q7 in our questionnaire (Table 2) indicate that we modeled all individual PKGs in a meaningful way for the experts. For these reasons, we conclude that file systems are promising sources for building PKGs.
+
+---
+
+${}^{7}$ False assertions by AI mean that it later rejected initially true ones because of human feedback.
+
+---
+
+Table 3: Quantity of true and false assertions stated by the knowledge engineer (KE) and the AI for individual construction tasks. Additionally, the KE's GUI effort and the AI's accuracy is given.
+
+| Measurement | SS1 (E1) | FS1 (E2) | FS2 (E3) | FS3 (E4) |
+|-------------|----------|----------|----------|----------|
+| #Resources | 88 | 50 | 39 | 32 |
+| KE Clicks | 599 | 602 | 359 | 356 |
+| KE Enter-Key | 60 | 56 | 30 | 47 |
+| KE Drag&Drop | 26 | 34 | 21 | 18 |
+| Domain Terminology Extraction (Section 3.2) | | | | |
+| KE True | 82 | 50 | 33 | 26 |
+| KE False | 48 | 44 | 14 | 72 |
+| AI True | 400 | 270168 | 242149 | 948405 |
+| AI False | 286 | 220285 | 106573 | 617366 |
+| AI Accuracy | ${0.67} = {45}/{67}$ | ${0.72} = {59}/{82}$ | ${0.83} = {35}/{42}$ | ${0.31} = {25}/{80}$ |
+| Management of Named Individuals* (Section 3.3) | | | | |
+| KE True | 102 | 68 | 39 | 58 |
+| KE False | 30 | 24 | 15 | 25 |
+| AI True | 462 | 32161 | 8223 | 37159 |
+| AI False | 4 | 1 | 23 | 155 |
+| AI Accuracy | N/A | N/A | N/A | N/A |
+| Unification* (Section 3.3) | | | | |
+| KE True | 10 | 2 | 2 | 0 |
+| KE False | 6 | 18 | 12 | 4 |
+| AI True | 8 | 10 | 7 | 2 |
+| AI False | 0 | 0 | 0 | 2 |
+| AI Accuracy | ${0.57} = 4/7$ | ${0.10} = 1/{10}$ | ${0.14} = 1/7$ | ${0.00} = 0/2$ |
+| Ontology Population* (Section 3.3) | | | | |
+| KE True | 105 | 78 | 61 | 55 |
+| KE False | 73 | 29 | 22 | 19 |
+| AI True | 134 | 102 | 92 | 85 |
+| AI False | 1 | 8 | 6 | 2 |
+| AI Accuracy | ${0.23} = {18}/{78}$ | ${0.65} = {30}/{46}$ | ${0.66} = {23}/{35}$ | ${0.48} = {12}/{25}$ |
+| Taxonomy Creation (Section 3.4) | | | | |
+| KE True | 21 | 19 | 14 | 12 |
+| KE False | 0 | 0 | 4 | 8 |
+| AI True | N/A | N/A | 9 | 10 |
+| AI False | N/A | N/A | 0 | 0 |
+| AI Accuracy | N/A | N/A | ${0.56} = 5/9$ | ${0.20} = 2/{10}$ |
+| Non-Taxonomic Relation Learning (Section 3.5) | | | | |
+| KE True | 5 | 23 | 33 | 7 |
+| KE False | 0 | 42 | 20 | 0 |
+| AI True | 0 | 52 | 42 | 0 |
+| AI False | 4 | 11 | 5 | 0 |
+| AI Accuracy | 0/0 | ${0.19} = {10}/{52}$ | ${0.52} = {22}/{42}$ | 0/0 |
+| Aggregated | | | | |
+| All KE Assertions | 482 | 397 | 269 | 286 |
+| All KE Inputs | 685 | 692 | 410 | 421 |
+| KE Assertions/Inputs | 0.70 | 0.57 | 0.66 | 0.68 |
+
+
+
+Fig. 4: Plots of the taxonomic and non-taxonomic parts of the PKG with respect to the number of inputs made in the GUI. Each dataset is assigned a symbol: SS1 ($\square$), FS1 ($\circ$), FS2 ($\times$) and FS3 ($\bigtriangleup$).
+
+Because a completely manual construction can be time-consuming and AI could help in this process, we asked the next question (RQ2): Can our system suggest helpful statements during usage? In our approach, we apply AI to several tasks: (a) the initial selection of domain-relevant terms, (b) unification suggestions, (c) recommendation of class memberships, (d) suggestion of broader concepts and (e) prediction of non-taxonomic relations. Their performance can be read from Table 3 in the form of accuracy values, which capture how often an expert agreed with suggestions stated by the AI. (a) Since we do not consider multi-word terms in the extraction of domain-relevant words, such terms had to be corrected frequently, which leads to a drop in performance. (b) Our unification rules tend to suggest many false positives, leading to low accuracy scores, since they are designed with high recall in mind. (c) The prediction of class assignments shows mediocre results, since only preferred labels in combination with gazetteer lists are used to extract features. (d) For the taxonomy creation, our language resource GermaNet tended to suggest too general concepts, which is why they were often considered unsuitable by our experts. (e) Regarding non-taxonomic relation learning, far too few examples were provided in the case of SS1 and FS3 to be able to predict similar relations. All in all, there is a tendency that in certain cases helpful statements can be automatically suggested, but more research has to be done to further improve the AI.
+
+Concerned about the approach's practicability, we stated the third question (RQ3): How efficient is the construction in our approach? Effort measurements in Table 3 indicate that one input operation results in 0.6 to 0.7 assertions, thus already two inputs lead to a true or false statement. We assume that a value below 1.0 comes from non-negligible GUI navigation and search efforts. Still, the many clickable (bulk) feedback buttons combined with suggestions from the AI seem to yield this positive outcome. Especially the Drag&Drop feature turns out to be a simple and fast way to relate resources to each other. Figure 4 visualizes how taxonomies and graphs evolve over the entered inputs${}^{8}$. In comparison, the maintenance of taxonomies seems to require less effort than that of the non-taxonomic graphs, probably because only skos:Concepts and the skos:broader relation need to be considered. The high diameter values of non-taxonomic graphs further indicate that resources in subgraphs are rather loosely connected. In summary, with moderate effort our KE was able to create, accept and also reject many assertions that eventually formed a meaningful personal knowledge graph. Still, efficiency could be further improved by better supporting the construction of the graph's non-taxonomic part.
+
+## 5 Conclusion and Outlook
+
+In this paper, we investigated the construction of personal knowledge graphs from file names with a human-in-the-loop approach. A case study with four independent expert interviews showed that the file system is a promising source, while suggestions by AI help to build such graphs with moderate effort.
+
+Since we could not examine all aspects in detail, future work may further investigate these challenges. For instance, there is potential for improvement in the machine learning models, especially for the prediction of non-taxonomic relations. More sophisticated solutions could be applied in the extraction of domain terminology, including disambiguation and the discovery of multi-word terms.
+
+Acknowledgements This work was funded by the BMBF project SensAI (grant no. 01IW20007).
+
+## References
+
+1. Balog, K., Kenter, T.: Personal knowledge graphs: A research agenda. In: Proc. of the 2019 ACM SIGIR International Conf. on Theory of Information Retrieval, ICTIR 2019, Santa Clara, CA, USA, October 2-5, 2019. pp. 217-220. ACM (2019)
+
+2. Carroll, J.M.: Creative names for personal files in an interactive computing environment. International Journal of Man-Machine Studies 16(4), 405-438 (1982)
+
+${}^{8}$ It has to be noted that the clearly visible outlier SS1 (e.g. Figure 4d) comes from a bulk-import of several resources (categories) found in a spreadsheet column.
+
+3. Crowder, J.W., Marion, J.S., Reilly, M.: File naming in digital media research: Examples from the humanities and social sciences. Journal of Librarianship and Scholarly Communication 3(3) (2015)
+
+4. Dinneen, J.D., Julien, C.: The ubiquitous digital file: A review of file management research. J. Assoc. Inf. Sci. Technol. 71(1), E1-E32 (2020)
+
+5. Hicks, B.J., Dong, A., Palmer, R., McAlpine, H.C.: Organizing and managing personal electronic files: A mechanical engineer's perspective. ACM Trans. Inf. Syst. 26(4), 23:1-23:40 (2008). https://doi.org/10.1145/1402256.1402262
+
+6. Ho, T.K.: Random decision forests. In: Proceedings of 3rd international conference on document analysis and recognition. vol. 1, pp. 278-282. IEEE (1995)
+
+7. Jaccard, P.: Lois de distribution florale dans la zone alpine. Bull Soc Vaudoise Sci Nat 38, 69-130 (1902)
+
+8. Ji, S., Pan, S., Cambria, E., Marttinen, P., Yu, P.S.: A survey on knowledge graphs: Representation, acquisition and applications. CoRR abs/2002.00388 (2020)
+
+9. Lamparter, S., Ehrig, M., Tempich, C.: Knowledge extraction from classification schemas. In: On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE, OTM Conf. Int'l. Conf., Agia Napa, Cyprus, October 25-29, 2004, Proceedings, Part I. LNCS, vol. 3290, pp. 618-636. Springer (2004)
+
+10. Magnini, B., Serafini, L., Speranza, M.: Making explicit the hidden semantics of hierarchical classifications. In: AI*IA 2003: Advances in AI, 8th Congress of the Italian Association for Artificial Intelligence, Pisa, Italy, September 23-26, 2003. Lecture Notes in Computer Science, vol. 2829, pp. 436-448. Springer (2003)
+
+11. Samad, A., Qadir, M., Nawaz, I., Islam, M.A., Aleem, M.: A comprehensive survey of link prediction techniques for social network. EAI Endorsed Trans. Ind. Networks Intell. Syst. 7(23), e3 (2020). https://doi.org/10.4108/eai.13-7-2018.163988
+
+12. Sauermann, L., Dengel, A., Van Elst, L., Lauer, A., Maus, H., Schwarz, S.: Personalization in the epos project. In: Proceedings of the Semantic Web Personalization Workshop at the ESWC Conference (2006)
+
+13. Schröder, M., Jilek, C., Dengel, A.: Interactive concept mining on personal data - bootstrapping semantic services. CoRR abs/1903.05872 (2019)
+
+14. Whitham, R., Cruickshank, L.: The function and future of the folder. Interact. Comput. 29(5), 629-647 (2017). https://doi.org/10.1093/iwc/iww042
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_tex/Initial_manuscript.tex b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..450e0e9a2245b4b4cf18a1498ff93dab4cf98167
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HgbGN3MHLZc/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,334 @@
+§ A HUMAN-IN-THE-LOOP APPROACH FOR PERSONAL KNOWLEDGE GRAPH CONSTRUCTION FROM FILE NAMES
+
+Markus Schröder, Christian Jilek, and Andreas Dengel
+
+${}^{1}$ Smart Data & Knowledge Services Dept., DFKI GmbH, Kaiserslautern, Germany
+
+${}^{2}$ Computer Science Dept., TU Kaiserslautern, Germany
+
+{markus.schroeder, christian.jilek, andreas.dengel}@dfki.de
+
+Abstract. Knowledge workers' personal and work related concepts (e.g. persons, projects, topics) are usually not sufficiently covered by knowledge graphs. Yet, already handmade classification schemes, prominently folder structures, naturally mention several of their concepts in file names. Thus, such data could be a promising source for constructing personal knowledge graphs. However, this idea poses several challenges: file names are usually noisy non-grammatical text snippets, while folder structures do not clearly define how concepts relate to each other. To cope with this semantic gap, we include knowledge workers as humans-in-the-loop to guide the building process with their feedback. Our semi-automatic personal knowledge graph construction approach consists of four major stages: domain term extraction, ontology population, taxonomic and non-taxonomic relation learning. We conduct a case study with four expert interviews from different domains in an industrial scenario. Results indicate that file systems are promising sources and, combined with our approach, already yield useful personal knowledge graphs with moderate effort spent.
+
+Keywords: Knowledge Graph Construction - Personal Knowledge Graph - Human-in-the-Loop - File System
+
+§ 1 INTRODUCTION
+
+Knowledge graphs (KGs) have become a popular technology to support knowledge workers in various applications (for a survey see [8]). Since such KGs are constructed from domain-specific document corpora, personal concepts of knowledge workers in these domains are usually not sufficiently covered. To fill this gap, there is the emerging concept of Personal Knowledge Graphs (PKGs) which focus on resources users are personally related to (also in their professional life). The population and maintenance of such graphs is still an open research question [1], especially, when knowledge is not modeled yet (cold start problem). Various sources in a user's personal information sphere may be worth considering to kick-start a population [12].
+
+
+Fig. 1: A file system (left) with file names containing relevant words (green) and irrelevant words (red). They form a personal knowledge graph (right) with nontaxonomic and taxonomic relations. Due to readability, some edges are omitted.
+
+When users self-organize diverse documents in daily business, they often manage them in a form of classification schema, prominently in file systems [4]. Here, documents are hierarchically arranged and freely named according to aspects such as projects, organizations, persons, topics and task-related concepts. In file and folder names such concepts are typically mentioned in order to let users guess their contents. Because file systems allow naming them mostly freely${}^{3}$, users tend to label them with their own vocabulary, which can contain technical terms, made-up words or even puns [2]. Thus, we hypothesize that file names could be a promising source for constructing PKGs.
+
+This idea poses several challenges due to the nature of the data source. The literature already showed that users have a large variety of file naming strategies [5,3]. File names are usually short, ungrammatical (sometimes noisy) text snippets and contain differently ordered and concatenated keywords. These circumstances make it difficult to discover and extract relevant named entities from them. Besides labeling, users can also assemble files in hierarchically structured folders [14]. Yet, this "folder contains file" structure typically does not explicitly define how named entities relate to each other.
+
+To give a visual example, Figure 1 depicts a small file system (left) and a possible personal knowledge graph (right). Because some keywords in the file names are too general (images) or have a technical meaning (Thumbs), they may be irrelevant for the user (underlined in red). Relevant keywords (green) become resources in the PKG, while a foaf:topic property keeps track of the file resource in which they are mentioned (only one is shown due to readability). Named individuals (Zenphase, Parker, Mercurtainment) are assigned to their classes (Project, Person, Organization) and are connected meaningfully (:hasProject, :worksFor). The remaining ones are rather abstract ideas and thus become skos:Concepts according to the Simple Knowledge Organization System (SKOS). A taxonomy tree is formed (top-right side) by adding broader concepts (:DocumentType, :DocumentState). Since WIP is an abbreviation, its skos:prefLabel contains the long form. Synonyms and other spellings are captured in skos:hiddenLabels: for the user the term Drawing is a synonym of treeDiagram, and docs in file names indicates the concept Document. Due to the lack of space, labels and some other properties are not visualized.
+
+${}^{3}$ Restricted only by illegal characters and maximum file name length.
+
+In this paper, we present a semi-automatic personal knowledge graph construction approach which is able to build such a graph from a classification schema, in this case, a file system and expert feedback. A graphical user interface (GUI) assists a knowledge engineer (KE) in performing several tasks during construction: the discovery of concepts in file names, ontology population of concepts and learning of taxonomic as well as non-taxonomic relations. In an interview setting, an expert can describe his or her personal view on their files to the KE, who translates the explanations into suitable knowledge graph statements using the GUI. To reduce the manual effort for the KE, we make use of machine learning models which learn from feedback and predict new statements during usage. This proposed method yields several research questions (RQs), for which first answers are reported in this work.
+
+ * RQ1: Are file systems promising sources for knowledge graph construction?
+
+ * RQ2: Can our system suggest helpful statements during usage?
+
+ * RQ3: How efficient is the construction in our approach?
+
+The rest of this paper is structured as follows: related approaches are covered in the next section (Sec. 2). This is followed by the presentation of our approach in Section 3 and a prototypical implementation in Section 3.6. The above research questions are then addressed in a case study with expert interviews in Section 4. Section 5 closes the paper with a conclusion and future work.
+
+§ 2 RELATED WORK
+
+To personally assist knowledge workers in their tasks, knowledge services benefit from personal information models about users [12]. For building such a model, personal concepts can be acquired from various texts in a user's personal information sphere [13]. Thus, folder structures could be useful for this purpose which is also investigated by other related works.
+
+Magnini et al. [10] likewise consider hierarchical classifications and analyze the implicit knowledge hidden in the labeled nodes. They use logic formulas expressed in description logic and word senses discovered and disambiguated in labels to make knowledge explicit. Contextual interpretations such as implicit disjunctions and negations are performed by exploiting the hierarchy. In contrast to our work, their goal is the definition of an ontology with classes and properties (TBox) by relying on external language repositories containing word senses. For us, the usage of such resources is of limited value, since word senses of personal concepts (like projects) are usually not contained in them. Moreover, they present a fully automatic approach without integrating domain experts in cases where labels do not match any entry in dictionaries.
+
+More closely related is the work about knowledge extraction from classification schemes by Lamparter et al. [9]. Following the same motivation, the authors would like to acquire explicit semantic descriptions from legacy information such as local folder structures. To achieve this, their processing pipeline includes the identification of concept candidates, word sense disambiguation, taxonomy construction and the identification of non-taxonomic relations. They distinguish ontology and instance layer by checking with dictionaries whether terms are rather general (concepts) or specific (instances). In our approach, we only consider instances, but classify general ideas as skos:Concepts (e.g. Diagram). They also build a taxonomy by utilizing hyponym and hyperonym information. In the case of non-taxonomic relations, the work reuses domain-specific ontologies, while the classification hierarchy as well as its labels are consulted to guess appropriate relations. Our procedure is similar, but additionally considers user feedback to train machine learning models in order to predict such relations.
+
+In conclusion, to the best of our knowledge, there is no approach like ours that constructs personal knowledge graphs from folder structures and at the same time includes experts with their feedback.
+
+§ 3 APPROACH
+
+
+Fig. 2: Components of our approach from left to right.
+
+Our approach enables knowledge engineers (KEs) to construct personal knowledge graphs from a classification schema, for example, a folder structure as shown in Figure 1. In this process, we support them in four tasks which are depicted in Figure 2 and explained in individual sections: Domain Terminology Extraction (Section 3.2), Management of Named Individuals (Section 3.3), Taxonomy Creation (Section 3.4) and Non-Taxonomic Relation Learning (Section 3.5). During modeling using a dedicated GUI (Section 3.6) the KE is assisted by an artificial intelligence (AI) system which proactively makes statements on its own. For ontology population and non-taxonomic relations, machine learning models predict statements. To correctly store and distinguish these assertions, we first designed an appropriate data model.
+
+§ 3.1 KNOWLEDGE GRAPH MODEL
+
+Our knowledge graph model is an RDF graph consisting of statements in the form of subject-predicate-object triples. However, in our scenario, we have to store additional feedback information for each statement. We consider exactly two agents in our system who are able to give feedback about statements: a knowledge engineer (KE) and an artificial intelligence (AI). Both contribute to the same personal knowledge graph with assertions which can be true, but also false (negative statements). To keep track of the provenance, we store the following meta data for each statement: (a) which agent stated it, (b) the date and time it was stated, (c) how the statement is rated (true, false or undecided) and (d) how confident the agent is (a real value between 0 and 1). Additionally, we use foaf:topic statements to state that a classification schema node (subject) mentions a certain knowledge graph resource (object) (see an example in Figure 1). Regarding the rating, since natural intelligence is usually more reliable than an artificial one, the KE always outvotes suggestions from the AI. Yet, assertions of the AI are assumed to be true as long as the KE does not disagree.
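+
+To make this feedback model concrete, the following minimal sketch (our own illustration; class and field names are hypothetical, not taken from the implementation) shows how such rated statements could be represented.
+
+```python
+from dataclasses import dataclass
+from datetime import datetime
+
+@dataclass
+class RatedStatement:
+    subject: str         # resource URI
+    predicate: str       # property URI, e.g. "foaf:topic"
+    obj: str             # resource URI or literal
+    agent: str           # (a) "KE" or "AI"
+    stated_at: datetime  # (b) date and time the statement was made
+    rating: str          # (c) "true", "false" or "undecided"
+    confidence: float    # (d) real value between 0 and 1
+
+def effective_rating(history):
+    """The KE always outvotes the AI; AI assertions count as long as the KE stays silent."""
+    ke_votes = [s for s in history if s.agent == "KE"]
+    return (ke_votes[-1] if ke_votes else history[-1]).rating
+```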
+
+§ 3.2 DOMAIN TERMINOLOGY EXTRACTION
+
+Our extraction method uses heuristics to make a first guess for relevant terms in the user's domain. Since word boundaries are often not evident in rather messy file names, we tokenize their basenames (without considering file extensions) by character type and camel case. In addition, the acquired tokens are rated based on some simple rules: stop words and tokens containing a single letter or only symbols are negatively rated. This also applies to tokens which only contain digits, unless they look like years (e.g. $n \in [1980, 2030]$). Applying these rules, the following example is tokenized (indicated by a pipe symbol '|') and rated (indicated by color) in the following way: WIP|_____|for|2007|-|tree|Diagram|!|(|28|)|A|.jpg. Thus, the rules let us assume that the tokens WIP, 2007, tree and Diagram are relevant. In the case of multi-word terms, the KE is able to merge separated tokens into a single term again, like for the latter two (i.e. Tree Diagram).
+
+After adjusting the rating according to feedback from a domain expert, other occurrences of accepted terms are automatically searched using a regular expression, since they may occur in a classification scheme more than once. If the term contains multiple words, we also search for all possible word concatenations using the separators "-" (minus), "_" (underscore), " " (space) and also no separator at all. To give an example, for the term treeDiagram our system also checks the variations tree-Diagram, tree_Diagram and tree Diagram. Finally, the collected term variations are associated with a named individual (i.e. owl:NamedIndividual according to OWL).
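+
+As an illustration of these heuristics, a minimal tokenizer, rating rule and variation generator could look as follows (our own sketch, not the original code; the stop-word list is only exemplary).
+
+```python
+import re
+
+STOP_WORDS = {"for", "the", "and", "of"}  # exemplary, not the original list
+TOKEN_PATTERN = r"[A-Z][a-z]+|[a-z]+|[A-Z]+(?![a-z])|\d+|[^\sA-Za-z\d]+"
+
+def tokenize(basename: str):
+    """Split a file basename by character type and camel case."""
+    return re.findall(TOKEN_PATTERN, basename)
+
+def is_relevant(token: str) -> bool:
+    """Negatively rate stop words, single characters, pure symbols and
+    numbers that do not look like years."""
+    if token.lower() in STOP_WORDS or len(token) == 1:
+        return False
+    if not any(c.isalnum() for c in token):
+        return False
+    if token.isdigit():
+        return 1980 <= int(token) <= 2030
+    return True
+
+def variations(term: str):
+    """All concatenations of a multi-word term with the considered separators."""
+    words = re.findall(r"[A-Z][a-z]+|[a-z]+|[A-Z]+(?![a-z])|\d+", term)
+    return {sep.join(words) for sep in ("-", "_", " ", "")}
+
+tokens = tokenize("WIP_____for2007-treeDiagram!(28)A")
+print([t for t in tokens if is_relevant(t)])  # ['WIP', '2007', 'tree', 'Diagram']
+print(variations("treeDiagram"))
+```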
+
+§ 3.3 MANAGEMENT OF NAMED INDIVIDUALS
+
+After retrieving all found term variations $T$, we have to decide if they (a) resemble an already existing named individual or (b) define a new one. Regarding the first case, each newly discovered term may be a variation that refers to an already created named individual. Thus, we calculate the Jaccard similarity coefficient [7] between the terms $T$ and the candidates' labels $L$. The named individual with the highest overlap between its labels and the given terms is picked. If we cannot find such a resource above a sufficient similarity threshold, a new one is created. The longest term is used to give the resource a preferred label (skos:prefLabel) after some conversions are performed: German umlaut spellings are corrected (e.g. "ae" $\rightarrow$ "ä"), underscores are replaced with spaces, if available a lemma version is used (diagrams $\rightarrow$ diagram) and proper case is applied (Tree Diagram). The remaining terms form the named individual's synonym and differently spelled labels (skos:hiddenLabel). In both cases, we keep track of the file resources in which the named individuals are mentioned by using a foaf:topic relation.
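+
+A minimal version of this matching step could look as follows (our own sketch; the similarity threshold of 0.5 is an assumption).
+
+```python
+def jaccard(a: set, b: set) -> float:
+    return len(a & b) / len(a | b) if a | b else 0.0
+
+def match_individual(terms: set, individuals: dict, threshold: float = 0.5):
+    """Pick the named individual whose labels (prefLabel plus hiddenLabels) overlap
+    most with the found term variations; None means a new individual is created."""
+    best_uri, best_score = None, 0.0
+    for uri, labels in individuals.items():
+        score = jaccard(terms, labels)
+        if score > best_score:
+            best_uri, best_score = uri, score
+    return best_uri if best_score >= threshold else None
+
+existing = {":TreeDiagram": {"Tree Diagram", "treeDiagram", "tree-Diagram"}}
+print(match_individual({"treeDiagram", "tree-Diagram"}, existing))  # :TreeDiagram
+```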
+
+Unification. If two or more named individuals have the same meaning, we can unify them into one resource. This is done by correctly substituting URIs and at the same time removing the source triples. The AI automatically detects potential individuals with the same meaning by looking at their labels and applying some rules: it checks whether hidden labels overlap or have a prefix or postfix dependency, while preferred labels are compared with the Levenshtein distance and token-based equality. For example, for the following label pairs our procedure would suggest that their individuals are equal: ("Peter Parker", "Parker Peter"); ("Tree Diagram", "Diagram") and ("diagram", "diagramm").
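+
+The label-based checks could be sketched roughly as follows (our own illustration; the edit-distance threshold is an assumption).
+
+```python
+def token_equal(a: str, b: str) -> bool:
+    """Compare preferred labels on token level, ignoring order ("Peter Parker" vs. "Parker Peter")."""
+    return sorted(a.lower().split()) == sorted(b.lower().split())
+
+def levenshtein(a: str, b: str) -> int:
+    prev = list(range(len(b) + 1))
+    for i, ca in enumerate(a, 1):
+        curr = [i]
+        for j, cb in enumerate(b, 1):
+            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
+        prev = curr
+    return prev[-1]
+
+def maybe_same(pref_a: str, pref_b: str, max_dist: int = 2) -> bool:
+    """Suggest unification if preferred labels are token-equal or within a small edit distance."""
+    return token_equal(pref_a, pref_b) or levenshtein(pref_a.lower(), pref_b.lower()) <= max_dist
+
+print(maybe_same("Peter Parker", "Parker Peter"))  # True
+print(maybe_same("diagram", "diagramm"))           # True
+```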
+
+Ontology Population. The KE manually creates ontology classes and types named individuals with them. To support the KE in this assignment, a random forest model [6] is trained with positive examples from feedback to be able to predict classes for individuals without a type. In order to acquire training features, we follow a gazetteer-based embedding technique by looking up words from several gazetteer lists in preferred labels of named individuals. Remaining characters are counted per character class such as spaces, quotes and digits. The coverage proportions of words and characters in the label serve as the final feature vector. To give some examples, "Tree Diagram 27" receives the vector ${v}_{1} = (\text{English Noun} = 0.73, \text{Space} = 0.13, \text{Digit} = 0.13)$, while "WIP" has ${v}_{2} = (\text{Uppercase Letter} = 1.0)$. Having such feature vectors, the random forest model is able to learn decision trees which predict the same type for named individuals having preferred labels very similar in content. For instance, since the individual Tree Diagram 27 is assigned to skos:Concept and another individual Diagram 3 has a similar feature vector, our model predicts the same class for it.
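+
+A simplified version of this feature extraction and typing model might look as follows (a sketch under our own assumptions: the gazetteer list is only exemplary and scikit-learn stands in for the original implementation).
+
+```python
+from sklearn.ensemble import RandomForestClassifier
+
+GAZETTEERS = {"english_noun": {"tree", "diagram", "report"}}  # exemplary lists
+
+def features(pref_label: str):
+    """Coverage proportions of gazetteer words and character classes in the label."""
+    n = len(pref_label)
+    words = pref_label.lower().split()
+    noun_chars = sum(len(w) for w in words if w in GAZETTEERS["english_noun"])
+    digits = sum(c.isdigit() for c in pref_label)
+    spaces = sum(c.isspace() for c in pref_label)
+    upper = sum(c.isupper() for c in pref_label)
+    return [noun_chars / n, digits / n, spaces / n, upper / n]
+
+X = [features("Tree Diagram 27"), features("WIP"), features("Peter Parker")]
+y = ["skos:Concept", "skos:Concept", ":Person"]  # positive examples from KE feedback
+model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
+print(model.predict([features("Diagram 3")]))    # expected: skos:Concept
+```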
+
+§ 3.4 TAXONOMY CREATION
+
+Our intended taxonomy uses broader and narrower relations to structure concepts (skos:Concept) found in file names according to the Simple Knowledge Organization System (SKOS). Since we see these concepts as leaves in a taxonomy tree, our motivation is to find broader concepts for them. For this, our approach utilizes a language resource of synsets and hypernym relations. The concepts in the PKG are mapped via their labels to synsets of the lexical-semantic net. By traversing hypernym relations for all found synsets, two or more of them may share the same ancestor along their hypernym paths. If the average distance from the synsets to the ancestor is below a configurable threshold, it is suggested as a broader concept for them. This constraint avoids the recommendation of too general concepts (e.g. near the root node). To give an example, given the hypernym paths diagram $\rightarrow$ depiction and timetable $\rightarrow$ overview $\rightarrow$ depiction, our procedure would suggest the broader concept depiction for both leaves. Of course the KE may at any time create concepts manually and link them accordingly. Besides such taxonomic relations, our system also considers non-taxonomic ones between instances.
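+
+The core idea can be illustrated with a toy hypernym map standing in for GermaNet (our own sketch; the distance threshold is an assumption).
+
+```python
+# Toy hypernym relation: synset -> list of direct hypernyms (stand-in for the lexical-semantic net)
+HYPERNYMS = {
+    "diagram": ["depiction"],
+    "timetable": ["overview"],
+    "overview": ["depiction"],
+    "depiction": [],
+}
+
+def ancestors_with_distance(synset, hypernyms):
+    """All ancestors reachable via hypernym relations with their path distance."""
+    result, frontier = {}, [(synset, 0)]
+    while frontier:
+        node, dist = frontier.pop()
+        for parent in hypernyms.get(node, []):
+            if parent not in result or dist + 1 < result[parent]:
+                result[parent] = dist + 1
+                frontier.append((parent, dist + 1))
+    return result
+
+def suggest_broader(concepts, hypernyms, max_avg_distance=2.0):
+    """Suggest a shared ancestor as broader concept if the average distance is small enough."""
+    per_concept = [ancestors_with_distance(c, hypernyms) for c in concepts]
+    shared = set.intersection(*(set(a) for a in per_concept))
+    suggestions = []
+    for anc in shared:
+        avg = sum(a[anc] for a in per_concept) / len(per_concept)
+        if avg <= max_avg_distance:
+            suggestions.append((anc, avg))
+    return suggestions
+
+print(suggest_broader(["diagram", "timetable"], HYPERNYMS))  # [('depiction', 1.5)]
+```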
+
+§ 3.5 NON-TAXONOMIC RELATION LEARNING
+
+To predict non-taxonomic relations, we perform link prediction by training a model on positive examples from feedback and by exploiting the structure of the classification schema (CS). Our idea is that the same non-taxonomic predicate could be suggested between other resources (subjects and objects) which have a similar neighborhood in the CS. For this, we only consider class instances, which are named individuals that have been assigned to an ontology class. Since instances are annotated on files via a foaf:topic relation, we know in which places of the CS they are mentioned. This annotated CS needs to be transformed into an undirected graph of connected instances to perform link prediction on it. We make an edge from an instance $i$ mentioned in a given node to another instance $j$ whenever $j$ is mentioned in (a) the node itself, (b) the node's parent, (c) one of the node's children or (d) one of the node's siblings (i.e. children of the parent). In other words, instances are connected in the graph if they are mentioned close to each other in the CS. With the given graph, we are able to calculate local similarity measures for links (for a survey see [11, Table 1]). Values of the calculated measures form feature vectors in a training set. The test set is acquired by iterating over all possible combinations of instances and properties, using their domain and range information as a filter. A promising triple in the test set is expected when we calculate a small Euclidean distance (below a given threshold) between its test vector and a training vector.
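+
+The construction of the instance graph and the link features could be sketched as follows (our own illustration using networkx; for brevity, only the same-node and parent cases are connected and the Jaccard coefficient stands in for the set of local similarity measures).
+
+```python
+import itertools
+import networkx as nx
+
+def instance_graph(mentions):
+    """mentions: folder/file node -> set of instances mentioned there (via foaf:topic).
+    Instances mentioned in the same node or in a node and its parent are connected."""
+    g = nx.Graph()
+    for node, instances in mentions.items():
+        g.add_nodes_from(instances)
+        g.add_edges_from(itertools.combinations(instances, 2))            # same node
+        parent = "/".join(node.split("/")[:-1])
+        for other in mentions.get(parent, set()):
+            g.add_edges_from((i, other) for i in instances if i != other)  # parent node
+    return g
+
+def link_features(g, u, v):
+    """Local similarity measures used as a feature vector for link prediction."""
+    return [
+        len(list(nx.common_neighbors(g, u, v))),
+        next(nx.jaccard_coefficient(g, [(u, v)]))[2],
+    ]
+
+mentions = {
+    "Projects/Zenphase": {"Zenphase", "Parker"},
+    "Projects/Zenphase/Report": {"Parker", "Mercurtainment"},
+}
+g = instance_graph(mentions)
+print(link_features(g, "Zenphase", "Mercurtainment"))
+```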
+
+§ 3.6 PROTOTYPICAL IMPLEMENTATION
+
+To test our approach in a case study, we implemented a prototype. A demo video ${}^{4}$ and its source code ${}^{5}$ are publicly available. To assist the KE in entering feedback and constructing the PKG, a graphical user interface (GUI) in the form of a web application is provided (see Figure 3). Throughout the interface, we make heavy use of thumbs-up and thumbs-down buttons as well as green and red colored elements to visualize positive and negative feedback (true and false assertions). The three-column layout presents tabs for individual components which give dedicated views for the tasks we have discussed.
+
+A typical Explorer view (top left) lists the files contained in the currently browsed folder (/User/Downloads). The view presents for each file (from top to bottom) its file name, rated terms from the file name and annotated named individuals. To distinguish individuals from terms, the well-known hashtag symbol is added to their preferred labels. In a separate Named Individuals view in the top middle, we itemize them together with their type. Two side-by-side views enable a Drag&Drop mechanism on individuals to let the KE define triples with a selected predicate (drop-down list in the middle). On the top right, classes and properties can be manually created, renamed and rated in an Ontology view. For each property, domain and range classes can be defined too. In separate tabs (bottom left) our GUI also presents suggestions for Unification, Typing, Taxonomic and Non-Taxonomic Relations (the screenshot shows an opened Typing tab). A list of proposals from the AI can be reviewed by the KE, who can accept or reject them individually or in bulk. Decisions are shown below and can always be undone either way. In a detail view (bottom middle), the KE is able to change a selected individual's preferred label, type, hidden labels and file attachment. A Status view (bottom right) visualizes the current PKG construction state in four sections: the progress in tagging, typing, taxonomy tree and non-taxonomic graph as well as an overall assessment score. These estimations give hints to the KE where more feedback from the expert is necessary.
+
+${}^{4}$ https://www.dfki.uni-kl.de/~mschroeder/demo/kecs
+
+${}^{5}$ https://github.com/mschroeder-github/kecs
+
+
+Fig. 3: Our graphical user interface in a three-column layout with many feedback possibilities and components (top). Dedicated components are provided to perform certain tasks (bottom).
+
+§ 4 CASE STUDY: EXPERT INTERVIEWS
+
+A case study was conducted with expert interviews in which personal knowledge graphs (PKGs) were built with their feedback. The setup for these interviews is covered in Section 4.1. This is followed by a detailed description of all collected results (Section 4.2) which are then discussed with regard to our stated research questions (Section 4.3).
+
+Table 1: Four datasets with their meta data which are used in interviews with four experts.
+
+| Dataset | Expert | Branches | Leaves | Max. Depth | Avg. Depth | Avg. Name Length |
+|---|---|---|---|---|---|---|
+| SS1 | E1 | 103 | 198 | 3 | $2.98 \pm 0.16$ | $8.84 \pm 9.86$ |
+| FS1 | E2 | 25,988 | 95,760 | 17 | $9.49 \pm 1.93$ | $23.30 \pm 16.88$ |
+| FS2 | E3 | 8,939 | 64,571 | 17 | $9.18 \pm 1.68$ | $32.43 \pm 16.77$ |
+| FS3 | E4 | 54,933 | 325,476 | 22 | $10.08 \pm 2.22$ | $24.24 \pm 14.57$ |
+
+§ 4.1 EXPERT INTERVIEW SETUP
+
+Since our institute has industry projects with several departments of a large power supply company, we had the great opportunity to get in contact with four individual experts from four departments (guideline management, property management, license management and accounting). Three of them work separately on individual shared drive file systems (FS), while one primarily manages spreadsheet (SS) data. Before the interviews, we received dumps of their data which are listed in Table 1. For each dataset an expert (E) is assigned and meta data about the asset is presented.
+
+Since spreadsheets may also contain work related concepts, but are not a form of classification schema, we had to convert the SS1 dataset to a tree structure in the following way. Table names become root folders, while column names are added as their subfolders. In the subfolders, we add files with distinct names from the column's rather short cell values. This way, potential work related concepts could be contained in this generated classification schema.
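+
+The conversion could look roughly as follows (our own minimal sketch; the table and column names are hypothetical).
+
+```python
+from collections import defaultdict
+
+def spreadsheet_to_tree(tables):
+    """tables: table name -> {column name -> list of cell values}.
+    Table names become root folders, columns become subfolders and
+    distinct cell values become file names."""
+    tree = defaultdict(dict)
+    for table, columns in tables.items():
+        for column, cells in columns.items():
+            tree[table][column] = sorted({str(c) for c in cells if c})
+    return dict(tree)
+
+example = {"Contracts": {"Status": ["open", "closed", "open"], "Partner": ["Mercurtainment"]}}
+print(spreadsheet_to_tree(example))
+# {'Contracts': {'Status': ['closed', 'open'], 'Partner': ['Mercurtainment']}}
+```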
+
+Our system automatically captures several data points during usage. To reproduce the construction process, we keep a history of all stated assertions with their meta data as described in Section 3.1. By observing GUI inputs including mouse clicks, Drag&Drop operations and certain keystrokes, we quantify the KE's effort with the system. In a fixed interval (every 10 inputs), snapshots of the construction metrics (Status view) are saved to record the PKG's evolution over time. Additionally, memory consumption and time performance of certain system modules are monitored.
+
+Each one-hour long interview between the knowledge engineer (KE) and an expert had the same setting. One fixed author of this paper took over the role of the KE and met the expert in a virtual telephone conference. The KE shared the screen and presented the GUI of our system (see Section 3.6), where the expert's data was already loaded. After a brief introduction, the KE started to ask questions about files and folders by traversing through the file system. The explanations of the participant enabled the KE to model the expert's personal knowledge as discussed in our approach (Section 3). Whenever the AI made predictions, the expert was asked whether they were correct and feedback was entered accordingly. Every 10 minutes, the KE reviewed the current construction state by opening the Status view and shifted the focus to parts which needed more attention. After about 50 minutes the session ended and the remaining time was used to let the expert complete a questionnaire about the data source and the modeled knowledge graph. In the next section, we present the questionnaire and the results in detail as well as the data which was logged by our prototype during the interviews.
+
+Table 2: The seven questions from the questionnaire with the answers of the four experts and their average values.
+
+| Question | E1 | E2 | E3 | E4 | Avg. & SD |
+|---|---|---|---|---|---|
+| Q1: How many years have you been working with the data? | 13 | 7 | 4 | 0 | $6 \pm 5.48$ |
+| Q2: How much do words in the file names reflect your language use (vocabulary) at work (scale: 1-10)? | 9 | 8 | 9 | 9 | $8.75 \pm 0.50$ |
+| Q3: Estimate how much your language use (vocabulary) at work is represented by the established tags (percentage). | 50 | 15 | 10 | 10 | $21.25 \pm 19.3$ |
+| Q4: The established tags meaningfully reflect the language use (vocabulary) at your work (scale: 1-7). | 7 | 6 | 4 | 6 | $5.75 \pm 1.26$ |
+| Q5: The established tags are assigned to meaningful classes (scale: 1-7). | 6 | 7 | 6 | 7 | $6.50 \pm 0.58$ |
+| Q6: The established tags are meaningfully structured in a taxonomy (scale: 1-7). | 7 | 6 | 5 | 4 | $5.50 \pm 1.29$ |
+| Q7: The established tags meaningfully relate to each other (scale: 1-7). | 5 | 7 | 6 | 7 | $6.25 \pm 0.96$ |
+
+§ 4.2 INTERVIEW RESULTS
+
+The questionnaire at the interview's end consists of seven questions (Q), which are presented in Table 2 together with the experts' answers (E), their average value and standard deviation (Avg. & SD). We stated the first question (Q1) to check how familiar the participants are with the data. The second question (Q2) was asked to figure out if the experts think that the given data actually contains work related words. While Q3 tries to give a rough estimate of the PKG's recall in percent, Q4 gives an approximate measure of its precision with regard to the created named individuals ${}^{6}$ in the PKG. From the third question on, we are interested in the experts' opinions about the final result that was modeled during the interview. A seven-point Likert scale is used for our opinion-based questions, ranging from 1 ("fully disagree") to 7 ("fully agree"). The remaining questions aim at the estimation of meaningfulness in the populated ontology (Q5) and taxonomic (Q6) as well as non-taxonomic relations (Q7).
+
+${}^{6}$ The questions refer to "established tags", since we presented tags in the GUI for the named individuals in the personal knowledge graph (PKG).
+
+Besides qualitative data, we also captured quantitative data points during the interview, which are presented in Table 3. Measurements are listed per row, while dataset-expert pairs are ordered in columns. After the number of resources in the PKG (#Resources) and the counts regarding the knowledge engineer's (KE) effort in the GUI, we list the number of true and false assertions ${}^{7}$ made by KE and AI in individual construction phases. Furthermore, we calculate the AI's accuracy by counting how often the expert agrees (true positive and true negative) with reviewed predictions. The section about Management of Named Individuals is further split into Unification and Ontology Population. While the management includes assertions about types, preferred/hidden labels and foaf:topic relations, the latter two only consider owl:sameAs and ontology related assertions. Due to a software error in the taxonomy module during the first two interviews, unfortunately, no broader concepts could be predicted. At the table's bottom, all assertions by the KE (whether true or false) and all inputs (clicks, enter keys, drag&drop operations) are aggregated to calculate an assertions-per-inputs ratio. The Management of Named Individuals does not have an accuracy value (N/A), since each term automatically turns into a named individual and no suggestions for preferred and hidden labels are made.
+
+Since we continuously recorded measurements, we are able to examine the evolution of the PKG with respect to the inputs performed in the GUI. The development of the taxonomic and non-taxonomic part of the PKG is presented through several plots in Figure 4. We consider named individuals of type skos: Concept as taxonomy concepts (Figure 4a) and the remaining typed ones as non-taxonomic instances (Figure 4d). By looking at the number of graph components (Fig. 4b and 4e), one gets an idea of the connectedness over time. In addition, Figure 4c plots the number of concepts which are connected to at least one broader concept. Similarly, Figure 4f shows the average diameter (the greatest distance between any pair of instances) of non-taxonomic components to visualize the closeness among them.
+
+The next section will discuss the results with regard to our research questions.
+
+§ 4.3 DISCUSSION
+
+Since file names are rather unusual sources to build PKGs from, we asked the following question at the beginning of the paper (RQ1): Are file systems promising sources for knowledge graph construction? Our experts agree that the words they saw in the file names reflect their language use at work, with an average rating of 8.75 out of 10 (Q2 in Table 2). Having a higher-level management background, expert E4 did not come into contact with file system FS3 in daily work (see Q1 in Table 2), but was still able to recognize and explain the terms. Answers to questions Q4 to Q7 in our questionnaire (Table 2) indicate that we modeled all individual PKGs in a way that was meaningful to the experts. For these reasons, we conclude that file systems are promising sources for building PKGs.
+
+${}^{7}$ False assertions by AI mean that it later rejected initially true ones because of human feedback.
+
+Table 3: Quantity of true and false assertions stated by the knowledge engineer (KE) and the AI for individual construction tasks. Additionally, the KE's GUI effort and the AI's accuracy are given.
+
+| Measurement | SS1 (E1) | FS1 (E2) | FS2 (E3) | FS3 (E4) |
+|---|---|---|---|---|
+| #Resources | 88 | 50 | 39 | 32 |
+| KE Clicks | 599 | 602 | 359 | 356 |
+| KE Enter-Key | 60 | 56 | 30 | 47 |
+| KE Drag&Drop | 26 | 34 | 21 | 18 |
+| **Domain Terminology Extraction (Section 3.2)** | | | | |
+| KE True | 82 | 50 | 33 | 26 |
+| KE False | 48 | 44 | 14 | 72 |
+| AI True | 400 | 270168 | 242149 | 948405 |
+| AI False | 286 | 220285 | 106573 | 617366 |
+| AI Accuracy | $0.67 = 45/67$ | $0.72 = 59/82$ | $0.83 = 35/42$ | $0.31 = 25/80$ |
+| **Management of Named Individuals\* (Section 3.3)** | | | | |
+| KE True | 102 | 68 | 39 | 58 |
+| KE False | 30 | 24 | 15 | 25 |
+| AI True | 462 | 32161 | 8223 | 37159 |
+| AI False | 4 | 1 | 23 | 155 |
+| AI Accuracy | N/A | N/A | N/A | N/A |
+| **Unification\* (Section 3.3)** | | | | |
+| KE True | 10 | 2 | 2 | 0 |
+| KE False | 6 | 18 | 12 | 4 |
+| AI True | 8 | 10 | 7 | 2 |
+| AI False | 0 | 0 | 0 | 2 |
+| AI Accuracy | $0.57 = 4/7$ | $0.10 = 1/10$ | $0.14 = 1/7$ | $0.00 = 0/2$ |
+| **Ontology Population\* (Section 3.3)** | | | | |
+| KE True | 105 | 78 | 61 | 55 |
+| KE False | 73 | 29 | 22 | 19 |
+| AI True | 134 | 102 | 92 | 85 |
+| AI False | 1 | 8 | 6 | 2 |
+| AI Accuracy | $0.23 = 18/78$ | $0.65 = 30/46$ | $0.66 = 23/35$ | $0.48 = 12/25$ |
+| **Taxonomy Creation (Section 3.4)** | | | | |
+| KE True | 21 | 19 | 14 | 12 |
+| KE False | 0 | 0 | 4 | 8 |
+| AI True | N/A | N/A | 9 | 10 |
+| AI False | N/A | N/A | 0 | 0 |
+| AI Accuracy | N/A | N/A | $0.56 = 5/9$ | $0.20 = 2/10$ |
+| **Non-Taxonomic Relation Learning (Section 3.5)** | | | | |
+| KE True | 5 | 23 | 33 | 7 |
+| KE False | 0 | 42 | 20 | 0 |
+| AI True | 0 | 52 | 42 | 0 |
+| AI False | 4 | 11 | 5 | 0 |
+| AI Accuracy | 0/0 | $0.19 = 10/52$ | $0.52 = 22/42$ | 0/0 |
+| **Aggregated** | | | | |
+| All KE Assertions | 482 | 397 | 269 | 286 |
+| All KE Inputs | 685 | 692 | 410 | 421 |
+| KE Assertions/Inputs | 0.70 | 0.57 | 0.66 | 0.68 |
+
+
+Fig. 4: Plots of the taxonomic and non-taxonomic parts of the PKG with respect to the number of inputs made in the GUI. Each dataset is assigned a symbol: SS1 ($\square$), FS1 ($\circ$), FS2 ($\times$) and FS3 ($\bigtriangleup$).
+
+Because a completely manual construction can be time-consuming and AI could help in this process, we asked the next question (RQ2): Can our system suggest helpful statements during usage? In our approach, we apply AI to several tasks: (a) the initial selection of domain-relevant terms, (b) unification suggestions, (c) recommendation of class memberships, (d) suggestion of broader concepts and (e) prediction of non-taxonomic relations. Their performance can be read from Table 3 in the form of accuracy values, which capture how often an expert agreed with suggestions stated by the AI. (a) Since we do not consider multi-word terms in the extraction of domain-relevant words, such terms had to be corrected frequently, which leads to a drop in performance. (b) Our unification rules tend to suggest many false positives, leading to low accuracy scores, since they are designed with high recall in mind. (c) The prediction of class assignments shows mediocre results, since only preferred labels in combination with gazetteer lists are used to extract features. (d) For the taxonomy creation, our language resource GermaNet tended to suggest too general concepts, which is why they were often considered unsuitable by our experts. (e) Regarding non-taxonomic relation learning, far too few examples were provided in the case of SS1 and FS3 to be able to predict similar relations. All in all, there is a tendency that in certain cases helpful statements can be automatically suggested, but more research has to be done to further improve the AI.
+
+Concerned about the approach's practicability, we stated the third question (RQ3): How efficient is the construction in our approach? Effort measurements in Table 3 indicate that one input operation results in 0.6 to 0.7 assertions, thus already two inputs lead to a true or false statement. We assume that a value below 1.0 comes from non-negligible GUI navigation and search efforts. Still, the many clickable (bulk) feedback buttons combined with suggestions from the AI seem to yield this positive outcome. Especially the Drag&Drop feature turns out to be a simple and fast way to relate resources to each other. Figure 4 visualizes how taxonomies and graphs evolve over the entered inputs${}^{8}$. In comparison, the maintenance of taxonomies seems to require less effort than that of the non-taxonomic graphs, probably because only skos:Concepts and the skos:broader relation need to be considered. The high diameter values of non-taxonomic graphs further indicate that resources in subgraphs are rather loosely connected. In summary, with moderate effort our KE was able to create, accept and also reject many assertions that eventually formed a meaningful personal knowledge graph. Still, efficiency could be further improved by better supporting the construction of the graph's non-taxonomic part.
+
+§ 5 CONCLUSION AND OUTLOOK
+
+In this paper, we investigated the construction of personal knowledge graphs from file names with a human-in-the-loop approach. A case study with four independent expert interviews showed that the file system is a promising source, while suggestions by AI help to build such graphs with moderate effort.
+
+Since we could not examine all aspects in detail, future work may further investigate these challenges. For instance, there is potential for improvement in the machine learning models, especially for the prediction of non-taxonomic relations. More sophisticated solutions could be applied in the extraction of domain terminology, including disambiguation and the discovery of multi-word terms.
+
+Acknowledgements This work was funded by the BMBF project SensAI (grant no. 01IW20007).
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..86be3f583cbc5e12f56fa1aa8efa5bd02a7f1fca
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,195 @@
+# Declarative Description of Knowledge Graphs Construction Automation: Status & Challenges
+
+David Chaves-Fraga ${}^{1,2}$ , Anastasia Dimou ${}^{1}$
+
+${}^{1}$ KU Leuven, Department of Computer Science, Sint-Katelijne-Waver, Belgium
+
+${}^{2}$ Universidad Politécnica de Madrid, Campus de Montegancedo, Boadilla del Monte, Spain
+
+## Abstract
+
+Nowadays, Knowledge Graphs (KG) are among the most powerful mechanisms to represent knowledge and integrate data from multiple domains. However, most of the available data sources are still described in heterogeneous data structures, schemes, and formats. The conversion of these sources into the desirable KG requires manual and time-consuming tasks, such as programming translation scripts, defining declarative mapping rules, etc. In this vision paper, we analyze the trends regarding the automation of KG construction but also the use of mapping languages for the same process, and align the two by analyzing their tasks and a few exemplary tools. Our aim is not to have a complete study but to investigate if there is potential in this direction and, if so, to discuss what challenges we need to address to guarantee the maintainability, explainability, and reproducibility of the KG construction.
+
+## Keywords
+
+Knowledge Graphs, Automation, Explainable AI, Declarative Rules
+
+## 1. Introduction
+
+A lot of works on knowledge graph (KG) construction are focused on defining mapping languages to declaratively describe the transformation process, and on optimizing the execution of such declarative rules. The mapping languages rely on either dedicated syntaxes, such as the family of languages around the W3C recommended R2RML ${}^{1}$ (e.g., RML [1] or R2RML-F [2]), or on re-purposing existing specifications, such as query languages like the W3C recommended SPARQL ${}^{2}$ (e.g., SPARQL-Generate [3] or SPARQL-Anything [4]), or constraints languages like ShEx ${}^{3}$ (e.g., ShExML [5,6]).
+
+Despite the plethora of mapping languages and the increasing number of optimizations for the execution of the declarative rules, these rules are still defined through a manual and time-consuming process, negatively affecting their adoption. Different solutions were proposed to automate the definition of mapping rules that describe how a KG should be constructed. On the one hand, MIRROR [7], D2RQ [8] and Ontop [9] follow a similar approach, extracting from the RDB schema a target ontology and the mapping correspondences. On the other hand, AutoMap4OBDA [10] and BootOX [11] consider an input ontology and generate actual R2RML mappings from the RDB. However, these approaches provide declarative solutions only for relational databases, while more recent work investigates non-declarative automation of KG construction.
+
+---
+
+KGCW'22: International Workshop on Knowledge Graph Construction, May 30, 2022, Crete, GRE
+
+david.chaves@upm.es (D. Chaves-Fraga); anastasia.dimou@kuleuven.be (A. Dimou)
+
+ORCID: 0000-0003-3236-2789 (D. Chaves-Fraga); 0000-0003-2138-7972 (A. Dimou)
+
+© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
+
+CEUR Workshop Proceedings (CEUR-WS.org)
+
+${}^{1}$ http://www.w3.org/TR/r2rml/
+
+${}^{2}$ https://www.w3.org/TR/sparql11-overview/
+
+${}^{3}$ https://shex.io/
+
+---
+
+Beyond relational databases, the recent SemTab challenge ${}^{4}$ presents a set of tabular datasets [12] with the aim of matching them automatically to external KGs, such as DBpedia and Wikidata. The proposed solutions [13,14,15] address the problem using different techniques, such as heuristic rules, fuzzy search over the KGs, or knowledge graph embeddings. Although their final objective is the same (obtaining high precision and recall) and they perform similar procedures, each solution implements its own workflow and addresses each task proposed by SemTab in a different way. Hence, making a fair and fine-grained comparison among the different solutions to understand how they obtain their results is not an easy task.
+
+In this vision paper, we align the tasks followed by solutions for the automation of the semantic table annotation with concepts of existing declarative solutions. We indicatively select and analyze a few tools for the automation of KG construction and identify common steps. We discuss if they can be declaratively described relying on existing mapping languages, and what the challenges are to proceed in this direction. We consider the RDF Mapping Language (RML) [1] as a high-level and general representation to describe the schema transformations and its extension, the Function Ontology (FnO) [16] to describe the data transformations.
+
+Our objective is not to present a complete study, but to investigate if there is potential in this direction. By describing the steps followed by different solutions in a more fine-grained and standard manner, we make the steps comparable, and we can better discuss what challenges we need to address to guarantee the maintainability, explainability and reproducibility of the KG construction, as well as to ensure the provenance of each performed task.
+
+## 2. Task alignment with mapping languages
+
+We analyze the different steps of the SemTab challenge, inspect the relation between the SemTab challenge tasks and align them with concepts from the declarative construction of RDF graphs (Figure 1). To achieve this, we include the relationship between each of the tasks and their potential declarations within a mapping language. We considered the RML mapping language because it is commonly used and it is the one the authors are most familiar with, but we are confident that other mapping languages could express the same concepts. Before we proceed with the alignment, we give a short introduction to the SemTab challenge and RML:
+
+**SemTab challenge.** The SemTab challenge consists of three tasks: (i) cell to KG entity matching (CEA), which matches cells to individuals; (ii) column to KG class matching (CTA), which matches columns to classes; and (iii) column pair to KG property matching (CPA), which captures the relationships between pairs of columns.
+
+**RML.** The RDF Mapping Language (RML), a superset of the W3C recommended R2RML, expresses schema transformations from heterogeneous data to RDF. An RML mapping contains one or more Triples Maps which in turn contain a Subject Map to generate the subjects of the RDF triples, and zero or more Predicate Object Maps with pairs of Predicate and Object Maps to generate the predicates and the objects respectively for each incoming data record. RML was aligned with the Function Ontology (FnO) [16] to describe the data transformations which are required to construct the desired RDF graph, ensuring that the functions are independent of any implementation.
+
+---
+
+${}^{4}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/
+
+---
+
+
+
+Figure 1: Automation tasks alignment within declarative mapping language. Example extracted from SemTab 2021 challenge, where the CEA, CTA and CPA tasks are aligned with a declarative construction of a knowledge graph using the RML mapping language (YARRRML serialisation).
+
+We analyze how the different tasks of the challenge contribute in constructing a part of an RDF triple, and we align these tasks with the corresponding concepts of the RML mapping language that construct the same part of an RDF triple.
+
+Cell-Entity Annotation (CEA): This task identifies the URI of an entity from a cell. In the target RDF graph, this is the subject or the object of the RDF triple. In Fig. 1, the col0 values are used to obtain the subjects of the triples while the col3 values generate the objects (both green colored in the RDF extract of Fig. 1). If a declarative approach is considered to generate these triples, for example in RML, the rr:subjectMap property is used (line 5 of the RML doc in Fig. 1), which declares how the subjects of the triples are generated, and the rr:objectMap property (line 8 of the RML doc in Fig. 1) when the expected objects are in the form of URIs.
+
+Column-Type Annotation (CTA): This task predicts the common class of a set of items given a column from the table. SemTab assumes that a table only generates one kind of entity (i.e. the first column is used for CTA). In Figure 1, we can observe that the URIs retrieved using col0 are considered for obtaining the corresponding shared concept (i.e., restaurant) (red colored in the RDF extract of Fig. 1). Declaring the class in RML can be done through the shortcut rr:class property within the rr:subjectMap (line 7 of the RML doc in Fig. 1) or using an rr:predicateObjectMap with a fixed rdf:type predicate.
+
+Columns-Property Annotation (CPA): This task aims to predict the property that relates the CTA column (subjects) to the rest of the columns. Fig. 1 shows a CPA task that relates col0 with col3 through the property architectural style (wdt:P149, yellow colored in the RDF extract). In RML, the predicates of the triples are declared using the rr:predicateMap property (line 8 of the RML doc in Fig. 1), and unlike typical mapping rules, where it is usually assumed that predicates are constants (as they are declared in the input ontology), here the predicates depend on the data, hence they are dynamically defined.
+
+Based on the aforementioned analysis, we conclude that the tasks performed to automate the KG construction can be aligned with concepts from declarative mapping languages. The CEA task is aligned with the RDF term construction for the subject or the object of the RDF triple, the CTA task assigns the class and the CPA task aligns with the Predicate and Object Map.
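+
+To make this alignment tangible, the following sketch (our own illustration, not an excerpt from Figure 1; the IRIs and the source file are hypothetical and the structure is only YARRRML-like) shows where the output of each task would end up in a declarative mapping.
+
+```python
+import yaml  # PyYAML
+
+# Hypothetical outputs of the three SemTab tasks for the example table
+cea_subject = "http://example.org/resource/$(col0)"  # CEA: cells of col0 become subjects
+cta_class = "ex:Restaurant"                          # CTA: shared class of col0
+cpa_predicate = "wdt:P149"                           # CPA: property between col0 and col3
+
+mapping = {"mappings": {"restaurants": {
+    "sources": [["table.csv~csv"]],
+    "s": cea_subject,                 # subject map (CEA)
+    "po": [
+        ["a", cta_class],             # class assignment (CTA)
+        [cpa_predicate, "$(col3)"],   # predicate (CPA) and object (CEA)
+    ],
+}}}
+print(yaml.safe_dump(mapping, sort_keys=False))
+```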
+
+## 3. Comparing semantic tabular matching systems
+
+In this section, we analyze in detail the steps performed by some of the tools proposed for solving the SemTab challenge. The comparative analysis among the three selected engines (summarized in Table 1) is not meant to be exhaustive. We aim to identify if there are common steps and functions that the engines perform to accomplish the challenge's tasks and ultimately if it is possible and desirable to declaratively describe them with mapping languages.
+
+### 3.1. Selected Systems
+
+We indicatively selected the systems that: (i) obtained good results in the SemTab 2021 challenge ${}^{5}$; and (ii) have their source code openly available. Therefore, we included in this comparison JenTab [14], MTab [13] and MantisTable V [17]. The use of different terminologies for describing similar tasks (e.g., majority vote in Mantis V is referred to as frequency) and the complexity of the proposed workflows, where the results of one task influence the others in an iterative way, make it difficult to compare the approaches and reproduce their results.
+
+JenTab ${}^{6}$ participated in SemTab 2020 and 2021, and it was positioned among the top five solutions in most rounds. It follows a heuristic-based procedure, proposing the CFS (Create, Filter, Select) approach for all tasks with different configurations and workflows.
+
+MTab ${}^{7}$ participated in all SemTab editions, winning the first prize in 2019 and 2020. Apart from the support of multilingual datasets, MTab implements several approaches for performing the entity search (i.e., CEA): keyword search, fuzzy search, and aggregation search ${}^{8}$.
+
+MantisTable V ${}^{9}$ is an extended and improved version of MantisTable [18]. Similarly to JenTab, MantisTable has also participated in the SemTab 2020 and 2021 editions. It implements a set of heuristic rules (similar to JenTab) and complex string similarity functions for the entity recognition task (like MTab). Additionally, it provides a general and efficient tool (LamAPI) to fetch the necessary data for all SemTab tasks, independently of the target KG.
+
+---
+
+${}^{5}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/2021
+
+${}^{6}$ https://github.com/fusion-jena/JenTab
+
+${}^{7}$ https://github.com/phucty/mtab_tool
+
+${}^{8}$ https://mtab.app/mtabes/docs
+
+${}^{9}$ https://bitbucket.org/disco_unimib/mantistable-v/
+
+---
+
+### 3.2. Observations
+
+The systems we inspected follow the same steps: they perform a preprocessing step, and set up lookup and datatype prediction services. Then the CEA task is performed, followed by the CTA and CPA tasks, which depend on the CEA task. Given that the systems follow the same steps, we could map the three main tasks (CEA, CPA, CTA) to the Create-Filter-Select (CFS) procedure proposed by JenTab (see Table 1).
+
+We observe similarities in most tasks among the engines. The subtasks performed in the preprocessing step are very similar in the three engines. The preprocessing tasks include several functions, such as fixing encoding issues, removing HTML tags or special characters, and detecting missing white spaces (see Table 1), and the engines usually delegate them to third-party libraries (e.g., ftfy ${}^{10}$ ). We observe that similar tasks are performed when declarative solutions are used for cleaning and preparing the data. These preprocessing tasks are described with FnO in the case of RML and executed either together with the schema transformations or as a separate preprocessing step.
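+
+As a rough sketch of how such a cleaning step could be declared, the object map below delegates the value to a function following the RML-FnO alignment; the function IRI ex:removeHtmlTags and the parameter predicate ex:inputString are hypothetical, as FnO itself does not prescribe which functions exist.
+
+```turtle
+@prefix rr:   <http://www.w3.org/ns/r2rml#> .
+@prefix rml:  <http://semweb.mmlab.be/ns/rml#> .
+@prefix fnml: <http://semweb.mmlab.be/ns/fnml#> .
+@prefix fno:  <https://w3id.org/function/ontology#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix ex:   <http://example.com/functions#> .
+
+<#CleanedLabelTM> rr:predicateObjectMap [
+  rr:predicate rdfs:label ;
+  rr:objectMap [
+    # the object is the output of a cleaning function applied to a cell value
+    fnml:functionValue [
+      rr:predicateObjectMap [
+        rr:predicate fno:executes ;
+        rr:objectMap [ rr:constant ex:removeHtmlTags ]   # hypothetical function IRI
+      ] ;
+      rr:predicateObjectMap [
+        rr:predicate ex:inputString ;                     # hypothetical parameter predicate
+        rr:objectMap [ rml:reference "col1" ]
+      ]
+    ]
+  ]
+] .
+```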
+
+The same occurs for the datatype prediction, where regular expressions are often used to detect whether cell values are entities or literals, and what type of literals (string, date, or number). In the case of declarative solutions, this datatype inspection is performed manually. However, adjusting the datatype is possible by relying on functions for data transformations.
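+
+For completeness, this is how a manually determined datatype is typically fixed in a mapping; the predicate, column name, and datatype below are assumptions for illustration.
+
+```turtle
+@prefix rr:  <http://www.w3.org/ns/r2rml#> .
+@prefix rml: <http://semweb.mmlab.be/ns/rml#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix ex:  <http://example.com/ns#> .
+
+<#DatatypedTM> rr:predicateObjectMap [
+  rr:predicate ex:inception ;
+  rr:objectMap [ rml:reference "col2" ; rr:datatype xsd:date ]
+] .
+```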
+
+Most of them also incorporate a lookup step to retrieve the necessary data from the KGs (e.g., using SPARQL queries), including similarity functions or fuzzy search. The search engine for the KG lookups in JenTab and Mantis V is ElasticSearch, although the former implements the Jaro-Winkler distance [19] while the latter embeds it in a more efficient engine and exploits its query capabilities. Lookups were also incorporated in the case of declarative solutions [20], where lookup services retrieve a URI to identify an entity instead of assigning a new one.
+
+As far as the actual tasks are concerned, each engine follows its own approach for the CEA, CTA, and CPA tasks, although we also find some similarities. The most important ones implemented in all three engines are: (i) the Levenshtein distance [21] for filtering candidates, and (ii) the majority vote (called frequency in Mantis V) for selecting the final annotations. We believe that the use of declarative approaches, such as the Function Ontology [16], for describing common functions (e.g., Levenshtein) could make the solutions more comparable. It would also make it clearer whether they perform the same function, and more explainable, as current solutions for the automation of KG construction act like black boxes: neither are their implementations open sourced nor are declarative descriptions of what they execute available. Providing at least declarative descriptions of the performed tasks would enhance the transparency of these solutions.
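+
+As an indication of what such a shared description could look like, the following is a minimal FnO sketch of a Levenshtein function; the IRIs and parameter names are ours and not an agreed vocabulary.
+
+```turtle
+@prefix fno: <https://w3id.org/function/ontology#> .
+@prefix ex:  <http://example.com/functions#> .
+
+ex:levenshtein a fno:Function ;
+    fno:name    "Levenshtein distance" ;
+    fno:expects ( ex:sourceParam ex:targetParam ) ;
+    fno:returns ( ex:distanceOutput ) .
+
+ex:sourceParam    a fno:Parameter ; fno:predicate ex:source .
+ex:targetParam    a fno:Parameter ; fno:predicate ex:target .
+ex:distanceOutput a fno:Output    ; fno:predicate ex:distance .
+```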
+
+---
+
+${}^{10}$ https://pypi.org/project/ftfy/
+
+---
+
+Table 1
+
+Tasks comparison among different SemTab solutions
+
+| Task | Subtask | JenTab | MTab | Mantis V |
+| --- | --- | --- | --- | --- |
+| KG Lookup | | ElasticSearch on top of KG, SPARQL queries | WikiGraph generation, ad-hoc API | LamAPI (ElasticSearch, Mongo and Python) |
+| Preprocessing | Fix encoding | Y | Y | N |
+| Preprocessing | Special characters | Y | N | Y |
+| Preprocessing | Restore missing spaces | Y | N | Y |
+| Preprocessing | Remove HTML tags | N | Y | N |
+| Preprocessing | Remove non-cell-values | N | Y | N |
+| Datatype | | REGEX, type-based cleaning | Cell value identification (literal, entity), SpaCy models for potential types, majority vote to define the column type | REGEX for datatypes exceeding a threshold, entity columns for those that do not exceed the threshold |
+| CEA | CREATE | Different query rewriting techniques | Keyword search (BM25), fuzzy search (Levenshtein distance) | LamAPI lookup with IB similarity |
+| CEA | FILTER | Levenshtein distance (among others) | Filtering and hashing (Symmetric Delete), context similarities by row | Levenshtein confidence score for entities, Literals XXX |
+| CEA | SELECT | Levenshtein distance | Highest context similarity | xxxx |
+| CTA | CREATE | Types from CEA | Types from CEA | Types from CEA |
+| CTA | FILTER | Remove the less popular types | - | - |
+| CTA | SELECT | Majority vote | Majority vote | Majority vote |
+| CPA | CREATE | Cell annotations (CEA) and fuzzy match for data properties | Aggregate all properties from CEA by row | Properties from CEA lookups |
+| CPA | FILTER | - | - | - |
+| CPA | SELECT | Majority vote | Majority vote | Majority vote |
+
+## 4. Challenges for a declarative automation of KG Construction
+
+We identify a set of challenges that need to be addressed to declaratively describe solutions for automatic KG construction. These challenges can be divided into two main categories: technical challenges and conceptual challenges.
+
+On the technical side, there is a major difference between the solutions for the automation of KG construction and the execution of declarative KG construction solutions: the solutions for automatic KG construction rely on iterative processes that continuously refine and improve a task, while the different tasks influence each other. On the contrary, declarative KG construction is a linear process that is executed only once. Admittedly, not all declarative rules are executed linearly; solutions that restructure [6] or parallelize [22, 23] them are increasingly encountered. Thus, if the solutions for automatic KG construction are declaratively described, their iterative execution needs to be described as well. How do we do that with the mapping languages?
+
+Besides the overall execution process, the iteration patterns are different. The solutions for automatic KG construction are applied in all directions, both per column and per row, and even combined. On the contrary, the declarative solutions are applied only per row, and the mapping languages are designed under this assumption. Should the mapping languages be extended to support more iteration patterns? If so, would the rml:iterator of RML and the relevant constructs in the other mapping languages be sufficient, or would more adjustments be required?
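+
+For reference, the per-record iteration is currently declared on the logical source as sketched below (the JSON source and iterator are assumed for illustration); column-wise or combined iteration patterns have no equivalent construct.
+
+```turtle
+@prefix rml: <http://semweb.mmlab.be/ns/rml#> .
+@prefix ql:  <http://semweb.mmlab.be/ns/ql#> .
+
+<#RowWiseTM> rml:logicalSource [
+    rml:source "table.json" ;
+    rml:referenceFormulation ql:JSONPath ;
+    rml:iterator "$.rows[*]"    # one mapping pass per row
+] .
+```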
+
+The solutions for automatic KG construction rely on interrelated tasks which may produce intermediate representations, and their results impact the rest of the tasks. Thus, declarative KG construction solutions would need to deal with dynamic and recursive steps (e.g., intermediate representations of the input data sources and mapping rules, multiple function executions, etc.) that can negatively impact the generation process. Hence, declaratively describing such solutions is a challenge. Should the mapping languages be further extended then?
+
+On the conceptual side, there are two main differences with respect to the training data and the target KG. In most real projects that declarative solutions tackle, only the input data and sometimes the target ontology are provided; there is neither similar data to train the solutions nor an existing KG that can be used to find entities or to predict the relationships. While relying on ontology matching techniques between existing KGs (e.g., DBpedia, Wikidata) and the target ontology, or exploiting NLP approaches between the ontology and the input sources' documentation, could be a solution for the latter, would it be realistic given that most ontologies are not aligned and not all of them provide documentation?
+
+## 5. Conclusions and Future Work
+
+In this paper, we analyze KG construction solutions and compare the automatic ones with the declarative ones. While the tasks can be aligned with respect to what they achieve, their execution is fundamentally different and a direct alignment is not feasible.
+
+Automatic solutions for KG construction are required to facilitate the adoption of KGs, but there are also merits in declaratively describing the automation tasks, with respect to maintainability, sustainability, and reproducibility. However, directly aligning the automatic solutions with the declarative solutions might be technically and conceptually challenging considering their different execution and iteration patterns. Extending the existing mapping languages would be a solution, but it would also require addressing the identified challenges, among others. Would such extensions be feasible and desired, or would they push the languages beyond their purpose? Mapping languages are, however, not the only approach to declarative descriptions; declarative descriptions of workflows emerge as well. Would that be a more viable solution? If so, would the automatic and declarative solutions keep on growing in different directions? These are questions that we would like to reflect on and discuss during the workshop.
+
+## References
+
+[1] A. Dimou, M. Vander Sande, P. Colpaert, R. Verborgh, E. Mannens, R. Van de Walle, RML: a generic language for integrated RDF mappings of heterogeneous data, in: Ldow, 2014.
+
+[2] C. Debruyne, D. O'Sullivan, R2RML-F: towards sharing and executing domain logic in R2RML mappings, in: LDOW@ WWW, 2016.
+
+[3] M. Lefrançois, A. Zimmermann, N. Bakerally, A SPARQL extension for generating RDF from heterogeneous formats, in: European Semantic Web Conference, Springer, 2017, pp. 35-50.
+
+[4] E. Daga, L. Asprino, P. Mulholland, A. Gangemi, Facade-X: an opinionated approach to SPARQL anything, Studies on the Semantic Web 53 (2021) 58-73.
+
+[5] E. Iglesias, S. Jozashoori, D. Chaves-Fraga, D. Collarana, M.-E. Vidal, SDM-RDFizer: An RML Interpreter for the Efficient Creation of RDF Knowledge Graphs, in: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 3039-3046.
+
+[6] S. Jozashoori, D. Chaves-Fraga, E. Iglesias, M.-E. Vidal, O. Corcho, FunMap: Efficient execution of functional mappings for knowledge graph creation, in: International Semantic Web Conference, Springer, 2020, pp. 276-293.
+
+[7] L. F. d. Medeiros, F. Priyatna, O. Corcho, MIRROR: Automatic R2RML mapping generation from relational databases, in: International Conference on Web Engineering, Springer, 2015, pp. 326-343.
+
+[8] C. Bizer, A. Seaborne, D2RQ-treating non-RDF databases as virtual RDF graphs, in: Proceedings of the 3rd international semantic web conference (ISWC2004), volume 2004, Springer Hiroshima, 2004.
+
+[9] D. Calvanese, B. Cogrel, S. Komla-Ebri, R. Kontchakov, D. Lanti, M. Rezk, M. Rodriguez-Muro, G. Xiao, Ontop: Answering SPARQL queries over relational databases, Semantic Web 8 (2017) 471-487.
+
+[10] Á. Sicilia, G. Nemirovski, AutoMap4OBDA: Automated generation of R2RML mappings for OBDA, in: European Knowledge Acquisition Workshop, Springer, 2016, pp. 577-592.
+
+[11] E. Jiménez-Ruiz, E. Kharlamov, D. Zheleznyakov, I. Horrocks, C. Pinkel, M. G. Skjæveland, E. Thorstensen, J. Mora, Bootox: Bootstrapping OWL 2 ontologies and R2RML mappings from relational databases, in: International Semantic Web Conference (P&D), 2015.
+
+[12] E. Jiménez-Ruiz, O. Hassanzadeh, V. Efthymiou, J. Chen, K. Srinivas, Semtab 2019: Resources to benchmark tabular data to knowledge graph matching systems, in: European Semantic Web Conference, Springer, 2020, pp. 514-530.
+
+[13] P. Nguyen, I. Yamada, N. Kertkeidkachorn, R. Ichise, H. Takeda, SemTab 2021: Tabular Data Annotation with MTab Tool, SemTab@ ISWC (2021) 92-101.
+
+[14] N. Abdelmageed, S. Schindler, JenTab Meets SemTab 2021's New Challenges, in: SemTab@ ISWC, 2021, pp. 42-53.
+
+[15] V.-P. Huynh, J. Liu, Y. Chabot, F. Deuzé, T. Labbé, P. Monnin, R. Troncy, DAGOBAH: Table and Graph Contexts For Efficient Semantic Annotation Of Tabular Data, in: SemTab@ ISWC, 2021, pp. 19-31.
+
+[16] B. De Meester, T. Seymoens, A. Dimou, R. Verborgh, Implementation-independent function reuse, Future Generation Computer Systems 110 (2020) 946-959.
+
+[17] R. Avogadro, M. Cremaschi, MantisTable V: a novel and efficient approach to Semantic Table Interpretation, SemTab@ ISWC (2021) 79-91.
+
+[18] M. Cremaschi, F. De Paoli, A. Rula, B. Spahiu, A fully automated approach to a complete semantic table interpretation, Future Generation Computer Systems 112 (2020) 478-500.
+
+[19] W. E. Winkler, String comparator metrics and enhanced decision rules in the Fellegi-Sunter model of record linkage (1990).
+
+[20] S. Jozashoori, A. Sakor, E. Iglesias, M.-E. Vidal, EABlock: A Declarative Entity Alignment Block for Knowledge Graph Creation Pipelines, in: Proceedings of the 37th ACM/SIGAPP Symposium On Applied Computing, 2022.
+
+[21] V. I. Levenshtein, et al., Binary codes capable of correcting deletions, insertions, and reversals, in: Soviet physics doklady, volume 10, Soviet Union, 1966, pp. 707-710.
+
+[22] G. Haesendonck, W. Maroy, P. Heyvaert, R. Verborgh, A. Dimou, Parallel RDF generation from heterogeneous big data, in: Proceedings of the International Workshop on Semantic Big Data, 2019, pp. 1-6.
+
+[23] J. Arenas-Guerrero, D. Chaves-Fraga, J. Toledo, M. S. Pérez, O. Corcho, Morph-kgc: Scalable knowledge graph materialization with mapping partitions, Semantic Web (2022).
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_tex/Initial_manuscript.tex b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bf423c70fc9cb083984d53cc2902d9bec52e3bf8
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/HlbgMu-HqZq/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,181 @@
+§ DECLARATIVE DESCRIPTION OF KNOWLEDGE GRAPHS CONSTRUCTION AUTOMATION: STATUS & CHALLENGES
+
+David Chaves-Fraga ${}^{1,2}$ , Anastasia Dimou ${}^{1}$
+
+${}^{1}$ KU Leuven, Department of Computer Science, Sint-Katelijne-Waver, Belgium
+
+${}^{2}$ Universidad Politécnica de Madrid, Campus de Montegancedo, Boadilla del Monte, Spain
+
+§ ABSTRACT
+
+Nowadays, Knowledge Graphs (KG) are among the most powerful mechanisms to represent knowledge and integrate data from multiple domains. However, most of the available data sources are still described in heterogeneous data structures, schemes, and formats. The conversion of these sources into the desirable KG requires manual and time-consuming tasks, such as programming translation scripts, defining declarative mapping rules, etc. In this vision paper, we analyze the trends regarding the automation of KG construction but also the use of mapping languages for the same process, and align the two by analyzing their tasks and a few exemplary tools. Our aim is not to have a complete study but to investigate if there is potential in this direction and, if so, to discuss what challenges we need to address to guarantee the maintainability, explainability, and reproducibility of the KG construction.
+
+§ KEYWORDS
+
+Knowledge Graphs, Automation, Explainable AI, Declarative Rules
+
+§ 1. INTRODUCTION
+
+A lot of work on knowledge graph (KG) construction focuses on defining mapping languages to declaratively describe the transformation process, and on optimizing the execution of such declarative rules. The mapping languages rely either on dedicated syntaxes, such as the family of languages around the W3C recommended R2RML ${}^{1}$ (e.g., RML [1] or R2RML-F [2]), or on re-purposing existing specifications, such as query languages like the W3C recommended SPARQL ${}^{2}$ (e.g., SPARQL-Generate [3] or SPARQL-Anything [4]), or constraint languages like ShEx ${}^{3}$ (e.g., ShExML [5,6]).
+
+Despite the plethora of mapping languages and the increasing number of optimizations for the execution of the declarative rules, these rules are still defined through a manual and time-consuming process, affecting negatively their adoption. Different solutions were proposed to automate the definition of mapping rules that describe how a KG should be constructed. On the one hand, MIRROR [7], D2RQ [8] and Ontop [9] follow a similar approach, extracting from the RDB schema a target ontology and the mapping correspondences. On the other hand, AutoMap4OBDA [10] and BootOX [11] consider an input ontology and generate actual R2RML mappings from the RDB. However, these solutions are focused on declarative solutions only for relational databases, while recent solutions investigate non-declarative automation of KG construction.
+
+KGCW'22: International Workshop on Knowledge Graph Construction, May 30, 2022, Crete, Greece
+
+david.chaves@upm.es (D. Chaves-Fraga); anastasia.dimou@kuleuven.be (A. Dimou)
+
+ORCID: 0000-0003-3236-2789 (D. Chaves-Fraga); 0000-0003-2138-7972 (A. Dimou)
+
+© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
+
+CEUR Workshop Proceedings (CEUR-WS.org)
+
+${}^{1}$ http://www.w3.org/TR/r2rml/
+
+${}^{2}$ https://www.w3.org/TR/sparql11-overview/
+
+${}^{3}$ https://shex.io/
+
+Beyond relational databases, the recent SemTab challenge ${}^{4}$ presents a set of tabular datasets [12] with the aim of matching them automatically to external KGs, such as DBpedia and Wikidata. The proposed solutions [13, 14, 15] address the problem using different techniques, such as heuristic rules, fuzzy searching over the KGs, or knowledge graph embeddings. Although their final objective is the same (obtaining high precision and recall results) and they perform similar procedures, each solution implements its own workflow and addresses each task proposed by SemTab in a different way. Hence, making a fair and fine-grained comparison among the different solutions to understand how they obtain their results is not an easy task.
+
+In this vision paper, we align the tasks followed by solutions for the automation of the semantic table annotation with concepts of existing declarative solutions. We indicatively select and analyze a few tools for the automation of KG construction and identify common steps. We discuss if they can be declaratively described relying on existing mapping languages, and what the challenges are to proceed in this direction. We consider the RDF Mapping Language (RML) [1] as a high-level and general representation to describe the schema transformations and its extension, the Function Ontology (FnO) [16] to describe the data transformations.
+
+Our objective is not to present a complete study, but to investigate if there is potential in this direction. By describing the steps followed by different solutions in a more fine-grained and standard manner, we make the steps comparable, and we can better discuss what challenges we need to address to guarantee the maintainability, explainability and reproducibility of the KG construction, as well as to ensure the provenance of each performed task.
+
+§ 2. TASK ALIGNMENT WITH MAPPING LANGUAGES
+
+We analyze the different steps of the SemTab challenge, inspect the relation between the SemTab challenge tasks, and align them with concepts from the declarative construction of RDF graphs (Figure 1). To achieve this, we include the relationship between each of the tasks and their potential declarations within a mapping language. We considered the RML mapping language because it is commonly used and the authors are most familiar with it, but we are confident that the other mapping languages could express the same concepts. Before we proceed with the alignment, we give a short introduction to the SemTab challenge and RML:
+
+SemTab challenge The SemTab challenge consists of three tasks: (i) cell to KG entity matching (CEA), which matches cells to individuals; (ii) column to KG class matching (CTA), which matches cells to classes; and (iii) column pair to KG property matching (CPA), which captures the relationships between pairs of columns.
+
+RML The RDF Mapping Language (RML), a superset of the W3C recommended R2RML, expresses schema transformations from heterogeneous data to RDF. An RML mapping contains one or more Triples Maps, which in turn contain a Subject Map to generate the subjects of the RDF triples, and zero or more Predicate Object Maps with pairs of Predicate and Object Maps to generate the predicates and the objects, respectively, for each incoming data record. RML was aligned with the Function Ontology (FnO) [16] to describe the data transformations which are required to construct the desired RDF graph, ensuring that the functions are independent from any implementation.
+
+${}^{4}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/
+
+
+Figure 1: Automation tasks alignment within declarative mapping language. Example extracted from SemTab 2021 challenge, where the CEA, CTA and CPA tasks are aligned with a declarative construction of a knowledge graph using the RML mapping language (YARRRML serialisation).
+
+We analyze how the different tasks of the challenge contribute in constructing a part of an RDF triple, and we align these tasks with the corresponding concepts of the RML mapping language that construct the same part of an RDF triple.
+
+Cell-Entity Annotation (CEA): This task identifies the URI of an entity from a cell. In the target RDF graph, this is the subject or the object of the RDF triple. In Fig. 1, the Co10 values are used to obtain the subjects of the triples while the Co13 values generate the objects (both green colored in the RDF extract of Fig. 1). If a declarative approach is considered to generate these triples, for example in RML, the rr:subjectMap property is used (line 5 of the RML document in Fig. 1), which declares how the subjects of the triples are generated, and the rr:objectMap property (line 8 of the RML document in Fig. 1) when the expected objects are in the form of URIs.
+
+Column-Type Annotation (CTA): This task predicts the common class of a set of items given a column from the table. SemTab assumes that a table only generates one kind of entity (i.e., the first column is used for CTA). In Figure 1, we can observe that the URIs retrieved using Co10 are considered for obtaining the corresponding shared concept (i.e., restaurant; red colored in the RDF extract of Fig. 1). Declaring the class in RML can be done through the shortcut rr:class property within the rr:subjectMap (line 7 of the RML document in Fig. 1) or using an rr:predicateObjectMap with a fixed rdf:type predicate.
+
+Columns-Property Annotation (CPA): This task aims to predict the property that relates the CTA column (subjects) to the rest of the columns. Fig. 1 shows a CPA task that relates Co10 with Co13 through the property architectural style (wdt:P149, yellow colored in the RDF extract). In RML, the predicates of the triples are declared using the rr:predicateMap property (line 8 of the RML document in Fig. 1). Unlike typical mapping rules, where it is usually assumed that predicates are constants (as they are declared in the input ontology), here the predicates depend on the data and are hence dynamically defined.
+
+Based on the aforementioned analysis, we conclude that the tasks performed to automate KG construction can be aligned with concepts from declarative mapping languages. The CEA task is aligned with the RDF term construction for the subject or the object of the RDF triple, the CTA task with the class assignment, and the CPA task with the Predicate and Object Map.
+
+§ 3. COMPARING SEMANTIC TABULAR MATCHING SYSTEMS
+
+In this section, we analyze in detail the steps performed by some of the tools proposed for solving the SemTab challenge. The comparative analysis among the three selected engines (summarized in Table 1) is not meant to be exhaustive. We aim to identify whether there are common steps and functions that the engines perform to accomplish the challenge's tasks and, ultimately, whether it is possible and desirable to describe them declaratively with mapping languages.
+
+§ 3.1. SELECTED SYSTEMS
+
+We indicatively selected the systems that: (i) obtained good results in the SemTab 2021 challenge ${}^{5}$ ; and (ii) have their source code openly available. Therefore, we included in this comparison JenTab [14], MTab [13] and MantisTable V [17]. The use of different terminologies for describing similar tasks (e.g., the majority vote in Mantis V is referred to as frequency) and the complexity of the proposed workflows, where the results of one task influence the others in an iterative way, make it difficult to compare the approaches and reproduce their results.
+
+JenTab ${}^{6}$ participated in SemTab 2020 and 2021, and was positioned among the top five solutions in most rounds. It follows a heuristic-based approach, proposing the CFS (Create, Filter, Select) procedure for all tasks, with different configurations and workflows.
+
+${\mathbf{{MTab}}}^{7}$ participated in all SemTab editions, winning the first prize in 2019 and 2020. Apart from the support of multilingual datasets, MTab implements several approaches for performing the entity search (i.e., CEA): keyword search, fuzzy search, and aggregation search ${}^{8}$ .
+
+MantisTable ${\mathrm{V}}^{9}$ is an extended and improved version of MantisTable [18]. Similarly to JenTab, MantisTable has also participated in the SemTab 2020 and 2021 editions. It implements a set of heuristic rules (similar to JenTab) and complex string similarity functions for the entity recognition task (like MTab). Additionally, it provides a general and efficient tool (LamAPI) to fetch the necessary data for all SemTab tasks, independently of the target KG.
+
+${}^{5}$ https://www.cs.ox.ac.uk/isg/challenges/sem-tab/2021
+
+${}^{6}$ https://github.com/fusion-jena/JenTab
+
+${}^{7}$ https://github.com/phucty/mtab_tool
+
+${}^{8}$ https://mtab.app/mtabes/docs
+
+${}^{9}$ https://bitbucket.org/disco_unimib/mantistable-v/
+
+§ 3.2. OBSERVATIONS
+
+The systems we inspected follow the same steps: they perform a preprocessing step, and set up lookup and datatype prediction services. Then the CEA task is performed, followed by the CTA and CPA tasks, which depend on the CEA task. Given that the systems follow the same steps, we could map the three main tasks (CEA, CPA, CTA) to the Create-Filter-Select (CFS) procedure proposed by JenTab (see Table 1).
+
+We observe similarities in most tasks among the engines. The subtasks performed in the preprocessing step are very similar in the three engines. The preprocessing tasks include several functions, such as fixing encoding issues, removing HTML tags or special characters, and detecting missing white spaces (see Table 1), and the engines usually delegate them to third-party libraries (e.g., ftfy ${}^{10}$ ). We observe that similar tasks are performed when declarative solutions are used for cleaning and preparing the data. These preprocessing tasks are described with FnO in the case of RML and executed either together with the schema transformations or as a separate preprocessing step.
+
+The same occurs for the datatype prediction, where regular expressions are often used to detect whether cell values are entities or literals, and what type of literals (string, date, or number). In the case of declarative solutions, this datatype inspection is performed manually. However, adjusting the datatype is possible by relying on functions for data transformations.
+
+Most of them also incorporate a lookup step to retrieve the necessary data from the KGs (e.g., using SPARQL queries), including similarity functions or fuzzy search. The search engine for the KG lookups in JenTab and Mantis V is ElasticSearch, although the former implements the Jaro-Winkler distance [19] while the latter embeds it in a more efficient engine and exploits its query capabilities. Lookups were also incorporated in the case of declarative solutions [20], where lookup services retrieve a URI to identify an entity instead of assigning a new one.
+
+As far as the actual tasks are concerned, each engine follows its own approach for the CEA, CTA, and CPA tasks, although we also find some similarities. The most important ones implemented in all three engines are: (i) the Levenshtein distance [21] for filtering candidates, and (ii) the majority vote (called frequency in Mantis V) for selecting the final annotations. We believe that the use of declarative approaches, such as the Function Ontology [16], for describing common functions (e.g., Levenshtein) could make the solutions more comparable. It would also make it clearer whether they perform the same function, and more explainable, as current solutions for the automation of KG construction act like black boxes: neither are their implementations open sourced nor are declarative descriptions of what they execute available. Providing at least declarative descriptions of the performed tasks would enhance the transparency of these solutions.
+
+${}^{10}$ https://pypi.org/project/ftfy/
+
+Table 1
+
+Tasks comparison among different SemTab solutions
+
+| Task | Subtask | JenTab | MTab | Mantis V |
+| --- | --- | --- | --- | --- |
+| KG Lookup | | ElasticSearch on top of KG, SPARQL queries | WikiGraph generation, ad-hoc API | LamAPI (ElasticSearch, Mongo and Python) |
+| Preprocessing | Fix encoding | Y | Y | N |
+| Preprocessing | Special characters | Y | N | Y |
+| Preprocessing | Restore missing spaces | Y | N | Y |
+| Preprocessing | Remove HTML tags | N | Y | N |
+| Preprocessing | Remove non-cell-values | N | Y | N |
+| Datatype | | REGEX, type-based cleaning | Cell value identification (literal, entity), SpaCy models for potential types, majority vote to define the column type | REGEX for datatypes exceeding a threshold, entity columns for those that do not exceed the threshold |
+| CEA | CREATE | Different query rewriting techniques | Keyword search (BM25), fuzzy search (Levenshtein distance) | LamAPI lookup with IB similarity |
+| CEA | FILTER | Levenshtein distance (among others) | Filtering and hashing (Symmetric Delete), context similarities by row | Levenshtein confidence score for entities, Literals XXX |
+| CEA | SELECT | Levenshtein distance | Highest context similarity | xxxx |
+| CTA | CREATE | Types from CEA | Types from CEA | Types from CEA |
+| CTA | FILTER | Remove the less popular types | - | - |
+| CTA | SELECT | Majority vote | Majority vote | Majority vote |
+| CPA | CREATE | Cell annotations (CEA) and fuzzy match for data properties | Aggregate all properties from CEA by row | Properties from CEA lookups |
+| CPA | FILTER | - | - | - |
+| CPA | SELECT | Majority vote | Majority vote | Majority vote |
+
+§ 4. CHALLENGES FOR A DECLARATIVE AUTOMATION OF KG CONSTRUCTION
+
+We identify a set of challenges that need to be addressed to declaratively describe solutions for automatic KG construction. These challenges can be divided into two main categories: technical challenges and conceptual challenges.
+
+On the technical side, there is a major difference between the solutions for the automation of KG construction and the execution of declarative KG construction solutions: the solutions for automatic KG construction rely on iterative processes that continuously refine and improve a task, while the different tasks influence each other. On the contrary, declarative KG construction is a linear process that is executed only once. Admittedly, not all declarative rules are executed linearly; solutions that restructure [6] or parallelize [22, 23] them are increasingly encountered. Thus, if the solutions for automatic KG construction are declaratively described, their iterative execution needs to be described as well. How do we do that with the mapping languages?
+
+Besides the overall execution process, the iteration patterns are different. The solutions for automatic KG construction are applied in all directions, both per column and per row, and even combined. On the contrary, the declarative solutions are applied only per row, and the mapping languages are designed under this assumption. Should the mapping languages be extended to support more iteration patterns? If so, would the rml:iterator of RML and the relevant constructs in the other mapping languages be sufficient, or would more adjustments be required?
+
+The solutions for automatic KG construction rely on interrelated tasks which may produce intermediate representations, and their results impact the rest of the tasks. Thus, declarative KG construction solutions would need to deal with dynamic and recursive steps (e.g., intermediate representations of the input data sources and mapping rules, multiple function executions, etc.) that can negatively impact the generation process. Hence, declaratively describing such solutions is a challenge. Should the mapping languages be further extended then?
+
+On the conceptual side, there are two main differences with respect to the training data and the target KG. In most real projects that declarative solutions tackle, only the input data and sometimes the target ontology are provided; there is neither similar data to train the solutions nor an existing KG that can be used to find entities or to predict the relationships. While relying on ontology matching techniques between existing KGs (e.g., DBpedia, Wikidata) and the target ontology, or exploiting NLP approaches between the ontology and the input sources' documentation, could be a solution for the latter, would it be realistic given that most ontologies are not aligned and not all of them provide documentation?
+
+§ 5. CONCLUSIONS AND FUTURE WORK
+
+In this paper, we analyze KG construction solutions and compare the automatic ones with the declarative ones. While the tasks can be aligned with respect to what they achieve, their execution is fundamentally different and a direct alignment is not feasible.
+
+Automatic solutions for KG construction are required to facilitate the adoption of KGs, but there are also merits in declaratively describing the automation tasks, with respect to maintainability, sustainability, and reproducibility. However, directly aligning the automatic solutions with the declarative solutions might be technically and conceptually challenging considering their different execution and iteration patterns. Extending the existing mapping languages would be a solution, but it would also require addressing the identified challenges, among others. Would such extensions be feasible and desired, or would they push the languages beyond their purpose? Mapping languages are, however, not the only approach to declarative descriptions; declarative descriptions of workflows emerge as well. Would that be a more viable solution? If so, would the automatic and declarative solutions keep on growing in different directions? These are questions that we would like to reflect on and discuss during the workshop.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/Hzx73hzWzq/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/Hzx73hzWzq/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..70610d729a0e0fbd4ce1ba4ee66d9fb7557df0d7
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/Hzx73hzWzq/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,203 @@
+# Supporting Relational Database Joins for Generating Literals in R2RML
+
+Christophe Debruyne ${}^{1}$
+
+${}^{1}$ University of Liege - Montefiore Institute,4000 Liège, Belgium
+
+## Abstract
+
+Since its publication, R2RML has provided us with a powerful tool for generating RDF from relational data, not necessarily manifested as relational databases. R2RML has its limitations, which are being recognized by W3C's Knowledge Graph Construction Community Group. That same group is currently developing a specification that supersedes R2RML in terms of its functionalities and the types of resources it can transform into RDF-primarily hierarchical documents. The community has a good understanding of problems of relational data and documents, even if they might need to be approached differently because of their different formalisms. In this paper, we present a challenge that has not been addressed yet for relational databases-generating literals based on (outer-)joins. We propose a simple extension of the R2RML vocabulary and extend the reference algorithm to support the generation of literals based on (outer-)joins. Furthermore, we implemented a proof-of-concept and demonstrated it using a dataset built for benchmarking joins. While it is not (yet) an extension of RML, this contribution informs us how to include such support and how it allows us to create self-contained mappings rather than relying on less elegant solutions.
+
+## Keywords
+
+R2RML, Knowledge Graph Generation, Outer-joins, Joins
+
+## 1. Introduction
+
+R2RML [1] is a powerful technique for transforming relational data into RDF and was published almost a decade ago. R2RML was conceived for relational databases, but can be applied to relational data. Since then, it inspired many initiatives to generalize this approach for other types of data such as RML [2] and xR2RML [3]. Others looked at extending aspects of (R2)RML not pertaining to the sources being transformed, but to tackle unaddressed challenges and requirements such as RDF Collections [3, 4] and functions [5, 6].
+
+The R2RML Recommendation specified a reference algorithm in which relational joins (natural joins or equi-joins, to be specific) can be used to relate resources. The implementation can be broken into two parts: (1) the generation of triples based on a triples map $t{m}_{1}$ related to a logical source, and (2) the generation of triples relating subjects from $t{m}_{1}$ with those of another triples map $t{m}_{2}$ . While (2) does not use an outer-join, the combination of both (1) and (2) ensures that the data being transformed "behaves" as the result of an outer-join. The problem, however, is that support for such outer-joins is only limited to resources; there is no convenient way to do something similar for literals.
+
+---
+
+Third International Workshop On Knowledge Graph Construction Co-located with the ESWC 2022, 30th May 2022, Crete, Greece
+
+c.debruyne@uliege.be (C. Debruyne)
+
+ORCID: 0000-0003-4734-3847 (C. Debruyne)
+
+© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
+
+CEUR Workshop Proceedings (CEUR-WS.org)
+
+---
+
+
+Figure 1: The tables title and aka_title of the database. A title may be related to one or more aka_titles, and an aka_title may be related to one title.
+
+This paper proposes a simple extension of R2RML for supporting joins for the generation of literals. It furthermore proposes how the reference algorithm should be extended. We demonstrate this extension using a fairly big relational database developed for benchmarking joins [7]. This benchmark also provides us with a realistic case, motivating the need for such an extension. This paper furthermore positions this contribution with respect to other initiatives developed by the Knowledge Graph Generation community, with the aim of opening a discussion.
+
+## 2. The Problem
+
+We framed the problem in the previous section. In this section, we will rephrase the problem and discuss several approaches to achieve the desired result that one can observe in practice. To this end, we will be using a running example based on the database developed by [7].
+
+To benchmark the performance of joins, [7] developed a database based on the Internet Movie Database ${}^{1}$ (IMDb). In short, their motivation was that existing synthetic benchmarks may be biased, and real and "messy" data provides better grounds for comparison. While that aspect is not important for this paper, the database they developed contains two big tables: title, containing information about movies and their titles, and aka_title, containing variations in titles (either an alternative title, or titles in different languages). Figure 1 depicts the relation between the two tables and their attributes. ${}^{2}$
+
+There are two approaches to solving this problem with R2RML:
+
+Sol1 The first is the creation of two triples maps with one dedicated to the generation of triples for the outer-join. The problem with this approach is that the mapping is not self-contained and that there are two distinct triples maps which need to be maintained. One also needs to document that this construct was necessary to facilitate this outer-join. The advantage is that there are two distinct processes for querying the underlying database and thus less overhead.
+
+---
+
+${}^{1}$ https://www.imdb.com/
+
+${}^{2}$ The files were loaded into a MySQL database, but required some minor pre-processing: there were a handful of encoding issues in the files, and NULL values in aka_title were represented with the number 0. We also introduced a foreign key constraint that was not present in the SQL schema provided by [7], because the foreign key constraint optimizes joins on these two tables. The tables contain 2528312 and 361472 records, respectively. There are 93 records in aka_title not referring to a record in title, and 2322682 records in title have no alternative titles.
+
+---
+
+Sol2 The second, more naïve, approach is the use of one triples map with an (outer-)join in its logical table. While this makes the triples map self-contained, unlike the approach above, it may require the processor to process many logical rows that generate the same triples.
+
+We may observe, in the wild, cases of the first approach also being used for referencing object maps, especially when the processor follows the reference algorithm. The problems with respect to the self-containedness of triples maps still hold. An R2RML processor may internally "rewrite" referencing object maps as triples maps to optimize the process.
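+
+For comparison, the mechanism that R2RML already offers for resources is the regular referencing object map sketched below; the predicate ex:hasAlternativeTitle and the subject template of the parent triples map are assumptions for illustration only.
+
+```turtle
+@prefix rr: <http://www.w3.org/ns/r2rml#> .
+@prefix ex: <http://data.example.com/ns#> .
+
+<#title_tm>
+    rr:logicalTable [ rr:tableName "title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ] ;
+    rr:predicateObjectMap [
+        rr:predicate ex:hasAlternativeTitle ;        # hypothetical predicate
+        rr:objectMap [
+            rr:parentTriplesMap <#aka_title_tm> ;    # generates the parent's subject resources
+            rr:joinCondition [ rr:child "id" ; rr:parent "movie_id" ]
+        ]
+    ] .
+
+<#aka_title_tm>
+    rr:logicalTable [ rr:tableName "aka_title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/aka_title/{id}" ] .
+```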
+
+In the next section, we propose a small extension of R2RML to provide support for joins on literal values.
+
+## 3. Proposed solution
+
+In Listing 1, we demonstrate the extension. It introduces the predicate rrf:parentLogicalTable. ${}^{3}$ The domain of that predicate is rr:RefObjectMap and the range is rr:LogicalTable. Our extension requires that an rr:RefObjectMap must have either a rrf:parentLogicalTable or a rr:parentTriplesMap. A referencing object map may now also generate literals. Where necessary, we will refer to object maps with a parent triples map as "regular" referencing object maps.
+
+```turtle
+<#title>
+    rr:logicalTable [ rr:tableName "title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ] ;
+    rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ] ;
+    rr:predicateObjectMap [
+        rr:predicate ex:title ;
+        rr:objectMap [
+            rr:column "title" ;
+            rrf:parentLogicalTable [ rr:tableName "aka_title" ] ;
+            rr:joinCondition [ rr:child "id" ; rr:parent "movie_id" ]
+        ]
+    ] .
+```
+
+Listing 1: Using parent-logical tables for managing joins
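+
+In RDFS terms, the domain and range constraints described above boil down to the following declaration; this is a sketch, and the rrf namespace IRI shown here is only a placeholder for the namespace of [6].
+
+```turtle
+@prefix rr:   <http://www.w3.org/ns/r2rml#> .
+@prefix rrf:  <http://example.org/ns/rrf#> .   # placeholder IRI
+@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+
+rrf:parentLogicalTable a rdf:Property ;
+    rdfs:domain rr:RefObjectMap ;
+    rdfs:range  rr:LogicalTable .
+```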
+
+The reference algorithm ${}^{4}$ is extended as follows: step 6 will now iterate over all referencing object maps with an rr:parentTriplesMap, and we add a 7th step for each referencing object map that uses a parent logical table. The steps for generating the triples are mostly the same. The two differences are: 1) it may generate any term type, and 2) the column names referred to by the object map are those of the parent. In other words, if both logical tables share a column $\mathrm{X}$ , then a reference to $\mathrm{X}$ refers to that of the parent. This behavior is consistent with that of regular referencing object maps. An implementation of this algorithm is made available. ${}^{5}$
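+
+To illustrate the extended step with a made-up row: if the title record with id 42 has the title "Brazil" and one matching aka_title record has the title "Brasil", the additional 7th step adds the second object alongside the one produced in step 1.
+
+```turtle
+@prefix ex: <http://data.example.com/ns#> .
+
+<http://data.example.com/movie/42> ex:title "Brazil" , "Brasil" .
+```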
+
+---
+
+${}^{3}$ The namespace rrf refers to the namespace used in [6].
+
+${}^{4}$ https://www.w3.org/TR/r2rml/#generated-rdf
+
+---
+
+## 4. Demonstration
+
+We now present a limited experiment comparing the performance of Sol1, Sol2, and our proposal using the relational database introduced in Section 2. The mappings for Sol1 and Sol2 are in Appendix A. In this experiment, we join using the tables as a whole. As R2RML requires result sets to have unique names for each column, we created a third table aka_title2 where each column received the suffix '2'. We also created a foreign key from aka_title2 to title. We wanted to avoid using subqueries to rename the columns, as these may become materialized and thus have an unfairly negative impact on the outcome.
+
+The experiment was run on a MacBook Pro with a 2.3 GHz Dual-Core Intel Core i5 processor and 16 GB 2133 MHz LPDDR3 RAM. The database was stored in a MySQL 8.0 database in a Docker container. The code for the experiment was written in Java and ran the result of each mapping 11 times, of which the first run was removed to avoid bias from a cold start. The code calls upon the extension of R2RML-F and registered timestamps before and after executing the mapping. We have not registered the time for writing the graph onto the hard disk.
+
+From Figure 2, which shows the average run times in seconds, it is clear that the approach of using two different triples maps (Sol1) is much faster than the two other approaches, which comes as no surprise. The problem, however, is that we have two distinct triples maps and their relationship is not explicit. Placing the outer-join in the logical table (Sol2) has the worst performance. The outer join yields a result set with 155749 more records than the referred table and contains twice the number of attributes. The overhead can be significantly reduced by only selecting the columns of interest, but the three mappings refer to the logical tables as a whole. Unsurprisingly, our solution is less efficient than Sol1 but considerably more efficient than Sol2.
+
+From these initial results, we may conclude that the proposed solution is not only viable but also ensures that the mappings remain self-contained. While performance is crucial in knowledge graph generation, we argue that the vocabulary itself is already a contribution and that an R2RML processor can rewrite referencing object maps (of both types) into distinct triples maps.
+
+## 5. Discussion
+
+In this paper, we extended the concept of rr: RefObjectMap to support joins for literal values. The reference algorithm for R2RML processes these in a separate loop for the generation of relations between subjects of two triples maps. Our approach added a similar step to the generation of literals based on a join. One may ask whether this approach may be adopted for term maps in general. The generation of subjects, predicates, and graphs for relational databases is based on a logical row. Generalizing this approach for such term maps may require a join per row, which is not efficient and is thus best done in the logical table of a triples map.
+
+---
+
+${}^{5}$ https://github.com/chrdebru/r2rml/tree/r2rml-join
+
+---
+
+
+Figure 2: Time taken to process the three mappings: Sol1 - two triples maps for the outer-join, Sol2 - one triples map with the outer-join in the logical table, and our proposed solution.
+
+As we can generate resources with our approach, one can question whether the notion of parent triples maps is still necessary. The reference algorithm uses both logical tables, even though a processor could select only the columns used by the subject maps. The question arises: do we refer to (data in) sources, or do we refer to triples maps?
+
+Related to this work is the approach proposed by [8], which introduced "fields" to manipulate and even combine the sources prior to generating RDF. Their work, demonstrated with hierarchical data, aimed to address the problem that references may yield multiple results and that sources may contain data of mixed formats. They also introduced an abstraction allowing one to retrieve information via a reference that does not depend on the underlying reference formulation. To the best of my knowledge, support for relational databases and the addition of fields from different tables has not yet been published. However, as fields are declared on the logical source, such an approach may boil down to a situation similar to Sol2 mentioned in Section 2.
+
+## 6. Conclusions
+
+We addressed the problem of generating literals from an outer-join, which R2RML does not support. While interesting initiatives are proposed for mostly hierarchical documents, we wanted to address this problem for relational databases by extending R2RML. We proposed a small extension with few implications regarding the R2RML vocabulary. We also extended the reference algorithm and provided an implementation that we have analyzed in an experiment.
+
+From this paper, we can conclude that, for relational databases, our approach is a viable solution. While not as efficient as using disjoint triples maps, it may be worth considering nonetheless. It is essential not to consider this vocabulary extension as mere syntactic sugar, as that would imply it is shorthand for something semantically equivalent. In our approach, the mappings are self-contained, and the relationship between the two logical tables is thus explicit.
+
+We have addressed this problem for relational databases and R2RML. We could envisage that such an approach could be part of RML, which has the ambition to supersede R2RML. How this approach would work for non-relational data is to be studied.
+
+## A. Mappings Used in the Experiment
+
+```turtle
+### Mapping used for Sol1 in the experiment
+<#title_tm>
+    rr:logicalTable [ rr:tableName "title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ] ;
+    rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ] .
+
+<#aka_title_tm>
+    rr:logicalTable [ rr:tableName "aka_title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{movie_id}" ; rr:class ex:Movie ] ;
+    rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ] .
+
+### Mapping used for Sol2 in the experiment
+<#title_tm>
+    rr:logicalTable [
+        rr:sqlQuery "SELECT * FROM title t LEFT OUTER JOIN aka_title2 a ON t.id = a.movie_ID2" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ] ;
+    rr:predicateObjectMap [
+        rr:predicate ex:title ;
+        rr:objectMap [ rr:column "title" ] ;
+        rr:objectMap [ rr:column "title2" ]
+    ] .
+```
+
+## References
+
+[1] S. Das, R. Cyganiak, S. Sundara, R2RML: RDB to RDF Mapping Language, 2012. URL: https://www.w3.org/TR/r2rml/.
+
+[2] A. Dimou, M. V. Sande, P. Colpaert, R. Verborgh, E. Mannens, R. V. de Walle, RML: A Generic Language for Integrated RDF Mappings of Heterogeneous Data, in: C. Bizer, T. Heath, S. Auer, T. Berners-Lee (Eds.), Proceedings of the Workshop on Linked Data on the Web co-located with the 23rd International World Wide Web Conference (WWW 2014), Seoul, Korea, April 8, 2014., volume 1184 of CEUR Workshop Proceedings, CEUR-WS.org, 2014. URL: http://ceur-ws.org/Vol-1184/ldow2014_paper_01.pdf.
+
+[3] F. Michel, L. Djimenou, C. Faron-Zucker, J. Montagnat, Translation of relational and non-relational databases into RDF with xR2RML, in: V. Monfort, K. Krempels, T. A. Majchrzak, Z. Turk (Eds.), WEBIST 2015 - Proceedings of the 11th International Conference on Web Information Systems and Technologies, Lisbon, Portugal, 20-22 May, 2015, SciTePress, 2015, pp. 443-454. URL: https://doi.org/10.5220/0005448304430454. doi:10.5220/0005448304430454.
+
+[4] C. Debruyne, L. McKenna, D. O'Sullivan, Extending R2RML with support for rdf collections and containers to generate MADS-RDF datasets, volume 10450 LNCS, 2017. doi:10.1007/978-3-319-67008-9_42.
+
+[5] B. D. Meester, W. Maroy, A. Dimou, R. Verborgh, E. Mannens, Declarative data transformations for linked data generation: The case of DBpedia, in: E. Blomqvist, D. Maynard, A. Gangemi, R. Hoekstra, P. Hitzler, O. Hartig (Eds.), The Semantic Web - 14th International Conference, ESWC 2017, Portoroz, Slovenia, May 28 - June 1, 2017, Proceedings, Part II, volume 10250 of Lecture Notes in Computer Science, 2017, pp. 33-48. URL: https://doi.org/10.1007/978-3-319-58451-5_3. doi:10.1007/978-3-319-58451-5_3.
+
+[6] C. Debruyne, D. O'Sullivan, R2RML-F: towards sharing and executing domain logic in R2RML mappings, in: S. Auer, T. Berners-Lee, C. Bizer, T. Heath (Eds.), Proceedings of the Workshop on Linked Data on the Web, LDOW 2016, co-located with 25th International World Wide Web Conference (WWW 2016), volume 1593 of CEUR Workshop Proceedings, CEUR-WS.org, 2016. URL: http://ceur-ws.org/Vol-1593/article-13.pdf.
+
+[7] V. Leis, A. Gubichev, A. Mirchev, P. A. Boncz, A. Kemper, T. Neumann, How good are query optimizers, really?, Proc. VLDB Endow. 9 (2015) 204-215. URL: http://www.vldb.org/pvldb/vol9/p204-leis.pdf.doi:10.14778/2850583.2850594.
+
+[8] T. Delva, D. V. Assche, P. Heyvaert, B. D. Meester, A. Dimou, Integrating nested data into knowledge graphs with RML fields, in: D. Chaves-Fraga, A. Dimou, P. Heyvaert, F. Priyatna, J. F. Sequeda (Eds.), Proceedings of the 2nd International Workshop on Knowledge Graph Construction co-located with 18th Extended Semantic Web Conference (ESWC 2021), Online, June 6, 2021, volume 2873 of CEUR Workshop Proceedings, CEUR-WS.org, 2021. URL: http://ceur-ws.org/Vol-2873/paper9.pdf.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/Hzx73hzWzq/Initial_manuscript_tex/Initial_manuscript.tex b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/Hzx73hzWzq/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..225eb43a329b4ec145ccec47eda35c23ee2dbbc8
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/Hzx73hzWzq/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,161 @@
+§ SUPPORTING RELATIONAL DATABASE JOINS FOR GENERATING LITERALS IN R2RML
+
+Christophe Debruyne ${}^{1}$
+
+${}^{1}$ University of Liege - Montefiore Institute,4000 Liège, Belgium
+
+§ ABSTRACT
+
+Since its publication, R2RML has provided us with a powerful tool for generating RDF from relational data, not necessarily manifested as relational databases. R2RML has its limitations, which are being recognized by W3C's Knowledge Graph Construction Community Group. That same group is currently developing a specification that supersedes R2RML in terms of its functionalities and the types of resources it can transform into RDF-primarily hierarchical documents. The community has a good understanding of problems of relational data and documents, even if they might need to be approached differently because of their different formalisms. In this paper, we present a challenge that has not been addressed yet for relational databases-generating literals based on (outer-)joins. We propose a simple extension of the R2RML vocabulary and extend the reference algorithm to support the generation of literals based on (outer-)joins. Furthermore, we implemented a proof-of-concept and demonstrated it using a dataset built for benchmarking joins. While it is not (yet) an extension of RML, this contribution informs us how to include such support and how it allows us to create self-contained mappings rather than relying on less elegant solutions.
+
+§ KEYWORDS
+
+R2RML, Knowledge Graph Generation, Outer-joins, Joins
+
+§ 1. INTRODUCTION
+
+R2RML [1] is a powerful technique for transforming relational data into RDF and was published almost a decade ago. R2RML was conceived for relational databases, but can be applied to relational data. Since then, it inspired many initiatives to generalize this approach for other types of data such as RML [2] and xR2RML [3]. Others looked at extending aspects of (R2)RML not pertaining to the sources being transformed, but to tackle unaddressed challenges and requirements such as RDF Collections [3, 4] and functions [5, 6].
+
+The R2RML Recommendation specified a reference algorithm in which relational joins (natural joins or equi-joins, to be specific) can be used to relate resources. The implementation can be broken into two parts: (1) the generation of triples based on a triples map $t{m}_{1}$ related to a logical source, and (2) the generation of triples relating subjects from $t{m}_{1}$ with those of another triples map $t{m}_{2}$ . While (2) does not use an outer-join, the combination of both (1) and (2) ensures that the data being transformed "behaves" as the result of an outer-join. The problem, however, is that support for such outer-joins is only limited to resources; there is no convenient way to do something similar for literals.
+
+Third International Workshop On Knowledge Graph Construction Co-located with the ESWC 2022, 30th May 2022, Crete, Greece
+
+c.debruyne@uliege.be (C. Debruyne)
+
+ORCID: 0000-0003-4734-3847 (C. Debruyne)
+
+© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
+
+CEUR Workshop Proceedings (CEUR-WS.org)
+
+
+Figure 1: The tables title and aka_title of the database. A title may be related to one or more aka_titles, and an aka_title may be related to one title.
+
+This paper proposes a simple extension of R2RML for supporting joins for the generation of literals. It furthermore proposes how the reference algorithm should be extended. We demonstrate this extension using a fairly big relational database developed for benchmarking joins [7]. This benchmark also provides us with a realistic case, motivating the need for such an extension. This paper furthermore positions this contribution with respect to other initiatives developed by the knowledge graph generation community, with the aim of opening a discussion.
+
+§ 2. THE PROBLEM
+
+We framed the problem in the previous section. In this section, we rephrase the problem and discuss several approaches, observable in practice, to achieve the desired result. To this end, we use a running example based on the database developed by [7].
+
+To benchmark the performance of joins, [7] developed a database based on the Internet Movie Database ${}^{1}$ (IMDb). In short, their motivation was that existing synthetic benchmarks may be biased and that real, "messy" data provides better grounds for comparison. While that aspect is not important for this paper, the database they developed contains two big tables: title, containing information about movies and their titles, and aka_title, containing variations in titles (either alternative titles or titles in different languages). Figure 1 depicts the relation between the two tables and their attributes. ${}^{2}$
+
+Suppose we want to generate, for each movie, ex:title literals for both its title and its alternative titles. There are two approaches to solving this problem with R2RML:
+
+Sol1 The first is the creation of two triples maps, one of which is dedicated to the generation of triples for the outer-join (see Appendix A). The problem with this approach is that the mapping is not self-contained and that there are two distinct triples maps that need to be maintained. One also needs to document that this construct was necessary to facilitate the outer-join. The advantage is that there are two distinct processes for querying the underlying database and thus less overhead.
+
+${}^{1}$ https://www.imdb.com/
+
+${}^{2}$ The files were loaded into a MySQL database but required some minor pre-processing: a handful of encoding issues in the files were fixed, and NULL values in aka_title were represented with the number 0. We also introduced a foreign key constraint that was not present in the SQL schema provided by [7], as this constraint optimizes joins on these two tables. The tables contain 2528312 and 361472 records, respectively. There are 93 records in aka_title that do not refer to a record in title, and 2322682 records in title have no alternative titles.
+
+Sol2 The second, more naïve, approach is the use of one triples map with an (outer-)join in its logical table. While this makes the triples map self-contained, unlike the approach above, it may require the processor to process many logical rows that generate the same triples.
+
+In the wild, we may also observe the first approach being used for referencing object maps, especially when the processor uses the reference algorithm. The problems with respect to the self-containedness of triples maps still hold. An R2RML processor may internally "rewrite" referencing object maps as triples maps to optimize the process.
+
+In the next section, we propose a small extension of R2RML to provide support for joins on literal values.
+
+§ 3. PROPOSED SOLUTION
+
+In Listing 1, we demonstrate the extension. It introduces the predicate rrf:parentLogicalTable. ${}^{3}$ The domain of that predicate is rr:RefObjectMap and the range is rr:LogicalTable. Our extension requires that a rr:RefObjectMap must have either a rrf:parentLogicalTable or a rr:parentTriplesMap. A referencing object map may now also generate literals. Where necessary, we will refer to object maps with a parent triples map as "regular" referencing object maps.
+
+<#title>
+    rr:logicalTable [ rr:tableName "title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
+    rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] ;
+    rr:predicateObjectMap [
+        rr:predicate ex:title ;
+        rr:objectMap [
+            rr:column "title" ;
+            rrf:parentLogicalTable [ rr:tableName "aka_title" ] ;
+            rr:joinCondition [ rr:child "id" ; rr:parent "movie_id" ] ;
+        ] ;
+    ] .
+
+Listing 1: Using parent-logical tables for managing joins
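+
+The extension itself can be captured in a few RDFS statements reflecting the domain and range given above (a non-normative sketch):
+
+rrf:parentLogicalTable
+    a rdf:Property ;
+    rdfs:domain rr:RefObjectMap ;
+    rdfs:range rr:LogicalTable ;
+    rdfs:comment "Relates a referencing object map to a parent logical table, enabling join-based generation of literals." .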
+
+The reference algorithm ${}^{4}$ is extended as follows: step 6 now iterates over all referencing object maps with a rr:parentTriplesMap, and we add a 7th step for each referencing object map that uses a parent logical table. The steps for generating triples are mostly the same. The two differences are: 1) it may generate any term type, and 2) the column names referred to by the object map are those of the parent. In other words, if both logical tables share a column X, then a reference to X would be to that of the parent. This behavior is consistent with that of regular referencing object maps. An implementation of this algorithm is made available. ${}^{5}$
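+
+To illustrate the extended step with hypothetical data, assume title contains a row (id = 1, title = "Foo") and aka_title contains a matching row (movie_id = 1, title = "Foo (2001)"). The mapping in Listing 1 would then generate:
+
+<http://data.example.com/movie/1> a ex:Movie ;
+    ex:title "Foo" ;           # from the regular object map on title.title
+    ex:title "Foo (2001)" .    # from the join, reading aka_title.title
+
+A title row without matching aka_title rows would only receive the first ex:title triple, mirroring the outer-join behavior for literals.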
+
+${}^{3}$ The namespace rrf refers to the namespace used in [6].
+
+${}^{4}$ https://www.w3.org/TR/r2rml/#generated-rdf
+
+§ 4. DEMONSTRATION
+
+We now present a limited experiment comparing the performance of Sol1, Sol2, and our proposal using the relational database introduced in Section 2. The mappings for Sol1 and Sol2 are in Appendix A. In this experiment, we join using the tables as a whole. As R2RML requires result sets to have unique names for each column, we created a third table aka_title2 in which each column received the suffix '2'. We also created a foreign key from aka_title2 to title. We wanted to avoid using subqueries to rename the columns, as these may become materialized and thus have an unfairly negative impact on the outcome.
+
+The experiment was run on a MacBook Pro with a 2.3 GHz Dual-Core Intel Core i5 processor and 16 GB 2133 MHz LPDDR3 RAM. The data was stored in a MySQL 8.0 database in a Docker container. The code for the experiment was written in Java and ran each mapping 11 times, of which the first run was removed to avoid bias from a cold start. The code calls upon the extension of R2RML-F and registers timestamps before and after executing the mapping. We did not register the time for writing the graph onto the hard disk.
+
+From Figure 2, which shows the average run times in seconds, it is clear that the approach of using two different triples maps (Sol1) is much faster than the two other approaches, which comes as no surprise. The problem, however, is that there are two distinct triples maps whose relationship is not explicit. Placing the outer-join in the logical table (Sol2) has the worst performance: the outer-join yields a result set with 155749 more records than the referred table and contains twice the number of attributes. The overhead can be significantly reduced by only selecting the columns of interest, but the three mappings refer to the logical tables as a whole. Unsurprisingly, our solution is less efficient than Sol1 but considerably more efficient than Sol2.
+
+We may conclude from these initial results that the proposed solution is not only viable but also ensures that the mappings remain self-contained. While performance is crucial in knowledge graph generation, we argue that the vocabulary itself is a contribution and that an R2RML processor can rewrite referencing object maps (of both types) into distinct triples maps.
+
+§ 5. DISCUSSION
+
+In this paper, we extended the concept of rr:RefObjectMap to support joins for literal values. The reference algorithm for R2RML processes these in a separate loop for the generation of relations between subjects of two triples maps. Our approach added a similar step for the generation of literals based on a join. One may ask whether this approach may be adopted for term maps in general. The generation of subjects, predicates, and graphs for relational databases is based on a logical row. Generalizing this approach to such term maps may require a join per row, which is not efficient and is thus best done in the logical table of a triples map.
+
+${}^{5}$ https://github.com/chrdebru/r2rml/tree/r2rml-join
+
+
+Figure 2: Time taken to process the three mappings: Sol1 (two triples maps for the outer-join), Sol2 (one triples map with the outer-join in the logical table), and our proposed solution.
+
+As we can generate resources with our approach, one can question whether the notion of parent triples maps is still necessary. The reference algorithm uses both logical tables, even though a processor could select only the attributes used by the subject maps. The question arises: do we refer to (data in) sources, or do we refer to triples maps?
+
+Related to this work is the approach proposed by [8], where "fields" are introduced to manipulate and even combine sources prior to generating RDF. Their work, demonstrated with hierarchical data, aimed to address the problems that references may yield multiple results and that sources may contain data of mixed formats. They also introduced an abstraction allowing one to retrieve information via a reference that does not depend on the underlying reference formulation. To the best of my knowledge, support for relational databases and the addition of fields from different tables has not yet been published. However, as they declare fields on the logical source, such an approach may boil down to a situation similar to Sol2 mentioned in Section 2.
+
+§ 6. CONCLUSIONS
+
+We addressed the problem of generating literals from an outer-join, which R2RML does not support. While interesting initiatives have been proposed for mostly hierarchical documents, we wanted to address this problem for relational databases by extending R2RML. We proposed a small extension with few implications for the R2RML vocabulary. We also extended the reference algorithm and provided an implementation that we analyzed in an experiment.
+
+From this paper, we can conclude that, for relational databases, our approach is a viable solution. While not as efficient as disjoint triples maps, it may nonetheless be worth considering. It is essential not to consider this vocabulary extension as syntactic sugar, as that would imply it is shorthand for something semantically equivalent. In our approach, the mappings are self-contained, and the relationship between the two logical tables is thus explicit.
+
+We have addressed this problem for relational databases and R2RML. We could envisage that such an approach could be part of RML, which has the ambition to supersede R2RML. How this approach would work for non-relational data is to be studied.
+
+§ A. MAPPINGS USED IN THE EXPERIMENT
+
+### MAPPING USED FOR SOL1 IN THE EXPERIMENT
+
+<#title_tm>
+    rr:logicalTable [ rr:tableName "title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
+    rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] .
+
+<#aka_title_tm>
+    rr:logicalTable [ rr:tableName "aka_title" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{movie_id}" ; rr:class ex:Movie ; ] ;
+    rr:predicateObjectMap [ rr:predicate ex:title ; rr:objectMap [ rr:column "title" ] ; ] .
+
+### MAPPING USED FOR SOL2 IN THE EXPERIMENT
+
+<#title_tm>
+    rr:logicalTable [
+        rr:sqlQuery "SELECT * FROM title t LEFT OUTER JOIN aka_title2 a ON t.id = a.movie_ID2" ] ;
+    rr:subjectMap [ rr:template "http://data.example.com/movie/{id}" ; rr:class ex:Movie ; ] ;
+    rr:predicateObjectMap [
+        rr:predicate ex:title ;
+        rr:objectMap [ rr:column "title" ] ;
+        rr:objectMap [ rr:column "title2" ] ;
+    ] .
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..30b1d94683c7f875c05291d4a8c9f40986065d60
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,301 @@
+# What is needed in a Knowledge Graph Management Platform? A survey and a proposal
+
+Samira Babalou*${}^{1,2}$, Franziska Zander${}^{1,2}$, Erik Kleinsteuber${}^{1}$, Badr El Haouni${}^{1}$, David Schellenberger Costa${}^{2}$, Jens Kattge${}^{2,4}$, Birgitta König-Ries${}^{1,2,3}$
+
+${}^{1}$ Heinz-Nixdorf Chair for Distributed Information Systems
+
+Institute for Computer Science, Friedrich Schiller University Jena, Germany
+
+${}^{2}$ German Center for Integrative Biodiversity Research (iDiv), Halle-Jena-Leipzig, Germany
+
+${}^{3}$ Michael-Stifel-Center for Data-Driven and Simulation Science, Jena, Germany
+
+${}^{4}$ Institute of Biology/Geobotany and Botanical Garden, Martin Luther University, Halle, Germany
+
+corresponding author: samira.babalou@uni.jena.de
+
+Abstract. Knowledge Graphs (KGs) play a significant and growing role for semantics-based support of a wide variety of applications. Until recently, creating and maintaining such knowledge graphs was done in a one-off manner requiring significant manual effort and expertise. Over the last few years, the first KG management platforms supporting the lifecycle of KGs from their creation to their maintenance and use have appeared. In this paper, we first survey these platforms. We then take a step further and identify common functionalities across such platforms. We discuss nineteen such functionalities categorized into four groups: creating, extending, using, and maintaining KGs. Based on the findings of this analysis, we present our proposed KG management platform for the biodiversity domain, iKNOW. We focus on the architecture and the KG creation workflow, but also touch on other aspects.
+
+Keywords: Semantic Web · Knowledge Graph · Knowledge Graph Platform · Data Services and Functionality
+
+## 1 Introduction
+
+Increasingly, Knowledge Graphs (KGs) form the semantic data management backbone for a wide variety of applications. A KG [1] consists of nodes connected by edges. It is built from a set of data sources via different techniques. Besides the instances, KGs can also contain schema information, which can be refined or augmented, e.g., by using a reasoner. Assigning unique identifiers to a KG's entities can accelerate the interlinking with other resources on the web. The underlying structure of KGs opens the door for further functionalities such as visualization, keyword search, and complex queries via a SPARQL endpoint.
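+
+As a minimal illustration of these notions (the names are ours, not taken from any particular KG), consider a graph with one instance-level statement, one schema-level statement, and a statement a reasoner can derive from them:
+
+```turtle
+@prefix ex:   <http://example.org/> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+
+# Instance level: a node connected to a class by an edge
+ex:FagusSylvatica a ex:TreeSpecies .
+
+# Schema level: information a reasoner can exploit
+ex:TreeSpecies rdfs:subClassOf ex:Species .
+
+# An RDFS reasoner would augment the KG with:
+#   ex:FagusSylvatica a ex:Species .
+```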
+
+Although KGs have widely gained attention in industry and academia, developing and managing their lifecycle requires a huge effort, expertise, and different functionalities. While, in the beginning, KGs were typically one-off manual efforts, there is a growing awareness that to exploit the capabilities of Knowledge Graph technologies to the maximal extent, support for their creation, access, update, and maintenance is needed. Many of these functionalities are not specific to any given KG but can be provided rather generically. KG platforms aim to do just that.
+
+As our contribution, in this paper, we survey existing KG management platforms and compare them in a general way. We then take a step further and analyze nineteen functionalities in four categories: creating, extending, using, and maintaining KGs. To the best of our knowledge, this is the first survey about KG platforms. Based on the findings of this survey and the needs in our domain, biodiversity research, we have designed our own KG platform. We present this platform, iKNOW, in the second part of the paper.
+
+The rest of the paper is organized as follows. Section 2 surveys existing KG management platforms. The common functionalities of platforms are discussed in Section 3. Our proposal for a KG management platform focussed on biodiversity, iKNOW, is presented in Section 4. The paper is concluded in Section 5.
+
+## 2 Literature Review
+
+In this paper, we define a Knowledge Graph Platform as a web-based platform for creating, managing, and making use of KGs. Such platforms mostly cover the whole lifecycle of KG application and include relevant services or functionalities for interaction and management of KGs.
+
+We contrast these with efforts to build an individual, specific KG. There have been many such efforts in different domains: e.g., Ozymandias [2] in the biodiversity domain, BCKG [3] in the biomedical domain, and I40KG [4] in the industrial domain. These KGs were built once, and their associated websites now provide access to and usage of the KG. Such approaches are out of the scope of this paper. Rather, we focus on KG management platforms, which offer a set of operations on the KG, such as generation and updates.
+
+In the following subsections, we first present the survey methodology used in this paper, then we briefly summarize the existing KG management platforms and compare them in a general way.
+
+### 2.1 Survey Methodology
+
+In this subsection, we describe our systematic approach to finding publications on KG platforms: we queried for the keyword "Knowledge Graph Platform" in the Google Scholar search engine ${}^{1}$. At the time of querying, this resulted in 162 papers (including citations and patents). We used the Publish or Perish 8 tool ${}^{2}$ to save the result of the query. The result is available in our GitHub repository ${}^{3}$. Among the list of papers, we selected the relevant papers manually. We aimed to select papers that focus on KG management platforms. Some papers appeared in the Google Scholar results only because our keyword exists in their texts (e.g., in the literature review section) but do not propose a new KG platform. We did not include such cases. Moreover, we did not consider survey papers and papers written in a language other than English. In our repository, we specified which papers have been selected, and for non-selected ones, we clarified the reason. As a result, we came up with 11 KG platforms, briefly detailed in the following subsection.
+
+---
+
+${}^{1}$ https://scholar.google.com/ accessed on 09.02.2022
+
+${}^{2}$ https://harzing.com/blog/2021/10/publish-or-perish-version-8
+
+---
+
+### 2.2 Existing KG Management Platforms
+
+In this section, we give a brief overview of existing platforms:
+
+- BBN (Blue Brain Nexus) [5] is an open-source platform. The KG in this platform can be built from datasets generated from heterogeneous sources and formats. BBN has three main components: i) Nexus Delta, a set of services targeting developers for managing data and the knowledge graph lifecycle; ii) Nexus Fusion, a web-based user interface enabling users to store, view, query, access, and share (meta)data and manage knowledge graphs; and iii) Nexus Forge, a Python user interface enabling data and knowledge engineers to build knowledge graphs from various data sources and formats using data mappings, transformations, and validations.
+
+- CPS (Corpus Processing Service) [6] is a cloud platform to create and serve Knowledge Graphs over a set of corpora. It uses state-of-the-art natural language understanding models to extract entities and relationships from documents.
+
+- HAPE (Heaven Ape) [7] is a programmable KG platform. The architecture of HAPE is designed in three parts: the client-side, which provides various kinds of services to the users; the server-side, which provides different knowledge management and processing services; and the third part, the KG's knowledge base. The applicability of the platform has been shown over DBpedia data. Moreover, the quality of the created KG has been evaluated via metrics introduced in [8]. Although the authors claimed in their published paper that the platform is open to the public, to the best of our knowledge, there is no link to the platform's source code or an online web portal.
+
+- Metaphactory [9] is an enterprise platform for building Knowledge Graph management applications. This platform supports different categories of users (end-users, expert users, and application developers), has a customizable UI, and enables the rapid building of use case-specific applications. Metaphactory allows configuring and managing connections to many data repositories. In this platform, data sources are virtually integrated with an ontology-based data access engine, i.e., on-the-fly integration of diverse data sources. The platform is assessed via assessment parameters introduced in [10].
+
+---
+
+${}^{3}$ https://github.com/fusion-jena/iKNOW
+
+---
+
+- Meng et al. [11] proposed a power marketing KG platform. The authors used a Machine Learning (ML) method to extract knowledge from unstructured text. The knowledge instances are stored in a relational database, while the relationships between them are stored in a graph database.
+
+- MONOLITH [12] is a KG platform combined with Ontology-based Data Management (OBDM) capabilities over relational and non-relational databases to result in one (virtual) data source. The functionalities provided by MONOLITH can be split into two groups: one dedicated to managing OWL ontologies and providing OBDM services, exploiting the mappings between ontology and database; the other to managing KGs and providing services over them. These two groups are linked together, allowing one to build KGs through semantic data access from the results of the ontology queries.
+
+- News Hunter [13] is geared towards supporting journalism by aggregating and semantically integrating news from a variety of sources. It is based on a microservices architecture and consists of a number of independent services: first, an extensible set of harvesters aggregates information from individual sources or existing news. Harvested news items and relevant metadata are deduplicated and stored in a source database. A translator converts items into a canonical language; this allows for cross-language news linking and the application of the broad range of existing NLP (Natural Language Processing) tools. The next step, called Lifting in the paper, runs the extracted news items through an NLP pipeline which performs named-entity recognition as well as sentiment and topic analysis. Results of this step are stored in a graph database. ML-based classifiers are used to assign labels to news items, thereby annotating them with terms from a common ontology. Via an enricher, the KG can be augmented with information from external sources, e.g., DBpedia Spotlight.
+
+- TCMKG [14] is a KG platform for Traditional Chinese Medicine (TCM) based on the deep learning method. First, an ontology layer represents the knowledge-based diagnosis and treatment process. It includes core entities of the domain with their associated relations. Then, with the help of a named entity recognition (NER) model, TCM entities from unstructured data are extracted.
+
+- UWKGM [15] is a modular web-based platform for KG management. It enables users to integrate different functionalities as RESTful API services into the platform to help different user roles customize the platform as needed. The platform consists of three main components: the backend (API), the frontend (UI), and the system manager (for installation, upgrading, and deployment). The embedded entity suggestion module enables automatic triple extraction and maintains human involvement for quality control.
+
+- YABKO [16] is the successor of HAPE and aims to support lifecycle research on KGs. Researchers can upload their KGs and tools to the YABKO platform, where they are free to use in other researchers' experiments. For any requested experiment, YABKO assigns the necessary resources (space, time, KGs, tools) to it. After finishing, short-term experiments are dissolved, while long-term ones can continue to exist on the condition of publishing their results. The core motivation for building YABKO is to help visitors use open-source techniques and resources to perform experiments on KGs and share experiences with other researchers.
+
+- Yang et al. [17] proposed a cloud computing cultural knowledge platform over multiple data sources such as Chinese wikis, lexical databases, and cultural websites. The platform restricts the knowledge to the field of Chinese public cultural services instead of common-sense knowledge. The platform has a set of services for building, updating, and maintaining the KG. It uses rule-based reasoning methods to analyze the existing KG relations to predict possible new relations.
+
+### 2.3 Comparing Existing KG Management Platforms
+
+In Table 1, we summarize general information about the introduced KG platforms with respect to: their Name, the Year of release (based on the published paper), the Source Data Type used to build KGs, their target applications in industry or academia, their Open-Source accessibility, the availability of an Online Demo, a test with a Use Case Study, and, finally, the KG Construction Method supported by the platform. Looking at the table, one can observe that:
+
+- most platforms have been introduced in the last three years. This shows that the field is still young and most likely still evolving. This observation is confirmed by our analysis of provided functionality (see below).
+
+- the platforms are very heterogeneous with respect to the number and type of data sources they support.
+
+- for KG construction, basically all platforms follow an ETL (Extract, Transform, Load) process along with Machine Learning (ML) approaches. They differ in how adaptable this process is and, partially depending on the type of supported data sources, in the concrete steps involved in this process.
+
+- a (to us) surprisingly high percentage of platforms are designed for use within industry (as opposed to academia). This may be one of the reasons why many of these platforms are not open source.
+
+- all platforms had a use case study to show the capabilities of the platform by describing a specific KG's usage in a selected application domain.
+
+## 3 Common Functionalities in KG Management Platforms
+
+In this section, we take a closer look at the KG platforms, extract what functionalities they offer, and compare them with respect to these functionalities. We consider a functionality for a platform if the functionality is mentioned in the respective papers. Platforms may possess other functionalities not mentioned in the papers, so a missing entry does not necessarily mean a platform does not offer a certain functionality. Overall, many of the papers were surprisingly vague about what functionality the platforms offer, so that a clear decision was not always possible. From our analysis, we identified nineteen different functionalities, which can be grouped into four categories as follows:
+
+Table 1: Comparing existing KG management platforms concerning their names, the year of release, the type of source data used to build KGs, targeting academia or not, being open-source, availability of an online demo, testing in a use case study, and the KG construction method. ✓* means currently not available and - means not mentioned.
+
+| no. | Platform | Year | Source Data Type | Academia | Open-Source | Online Demo | Use Case Study | KG Construction Method |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | BBN [5] | 2021 | different types | ✓ | ✓ | ✓ | ✓ | customized ETL process |
+| 2 | CPS [6] | 2020 | text | ✘ | ✘ | ✘ | ✓ | Machine Learning |
+| 3 | HAPE [7] | 2020 | different types | ✓ | ✘ | ✘ | ✓ | - |
+| 4 | Metaphactory [9] | 2019 | different types | ✘ | ✘ | ✓ | ✓ | customized ETL process |
+| 5 | Meng et al. [11] | 2021 | unstructured text | ✘ | ✘ | ✘ | ✓ | Machine Learning |
+| 6 | MONOLITH [12] | 2019 | - | ✘ | ✘ | ✘ | ✓ | customized ETL process |
+| 7 | News Hunter [13] | 2020 | text | - | ✘ | ✘ | ✓ | Machine Learning |
+| 8 | TCMKG [14] | 2020 | different types | - | ✘ | ✘ | ✓ | Machine Learning |
+| 9 | UWKGM [15] | 2020 | unstructured text | - | ✓ | ✓ | ✓ | customized ETL process |
+| 10 | YABKO [16] | 2021 | different types | ✓ | ✘ | ✘ | ✓ | - |
+| 11 | Yang et al. [17] | 2017 | different types | - | ✘ | ✘ | ✓ | Machine Learning |
+
+- Functionalities for creating a KG: The platform can support different functionalities to build the KG with the desired quality:
+
+- Data preprocessing [5,7,14,17]: Before information from a data source can be used in a KG, several preprocessing steps may be needed. These include data cleaning and data transformation into a format suitable for ingestion.
+
+- Entity and relation extraction [6,7,9,13-15,17]: In particular, when creating KGs out of unstructured information like documents, entity and relation extraction can require complex processing. But even for structured data, this step is often necessary.
+
+- Schema generation [7,9,12-14,17]: If a KG is supposed to contain not just a set of instances, but also type information about them, a schema needs to be created.
+
+- KG validation [5, 7, 9, 12, 16, 17]: When a KG combines data from different sources, the initial data cleaning step, which happens at the level of an individual source, may not be sufficient to ensure that the integrated KG is consistent. Thus, the platform may take a further step for quality checking and validation of the KG.
+
+- Functionalities for extending and augmenting KGs: This group of functionalities allows for extending KGs with additional information from other sources or from within the KG itself. While cross-linking extends a KG with information provided elsewhere, a variety of techniques are used to extend KGs "from within". They include reasoning to infer hidden knowledge, KG refinement, and the computation of KG embeddings as a basis for link prediction and similarity determination.
+
+- Cross-linking [5,9,13,17]: This functionality enables the cross-linking of KG entities to other resources or KGs like Wikidata or DBpedia. According to the linked open data (LOD) principles [18], each knowledge resource on the web receives a stable, unique, and resolvable identifier.
+
+- KG embedding [7,9,14-17]: This is a popular method in particular for link prediction and similarity detection and can help to uncover hidden information in a KG.
+
+- KG refinement [5, 15-17]: In some cases, after checking the quality of the generated KG, a refinement process (e.g., validating the KG to identify errors and correcting the inconsistent statements) can take place.
+
+- Reasoning [7, 12, 13, 16, 17]: Reasoning can infer additional knowledge in a KG, mainly with the help of a reasoner. We consider this as KG augmentation, too.
+
+- Functionalities for using KGs: Depending mostly on the targeted user group, platforms can support one or several ways to interact with the created KG:
+
+- GUI (Graphical User Interface) [5-7,9,11-17]: A GUI eases user interaction with the platform.
+
+- Visualization [5,7,9,11,14,15,17]: The platform can provide different types of visualization of the KG to foster understanding. CPS [6] provides a visualization for building queries only.
+
+- Keyword search [5, 7, 9, 11, 12, 15-17]: This functionality enables searching for a keyword over the KG developed in the platform.
+
+- Query endpoint [5-7, 9, 11-14, 16, 17]: A query endpoint allows the information in the KG to be queried, mostly via SPARQL or using graph queries.
+
+- Query catalog [9, 12]: A query catalog in the KG management platform enables the use of pre-determined (customized) queries or storing queries for future reuse.
+
+- Functionalities for maintaining and updating KGs: Once a KG has been built, it may be desirable to manage access, keep track of provenance, update the KG with new or additional sources, and curate it.
+
+- Provenance tracking [5, 6, 9, 13]: The platform can track the provenance of KG entities. Such functionality eases maintaining and updating KGs.
+
+- Update KG [5, 9, 12, 14, 15]: A KG management platform can have the functionality to update and edit a previously generated KG. After this process, KG validation might be required.
+
+- KG curation [5, 9, 15, 17]: The platform can have KG curation functionality that mostly relies on human curation.
+
+- Different user roles [5, 7, 9, 11, 12, 15-17]: The platform can consider different user roles, such as end-users or expert users, supporting different user groups with different access to the platform's other functionalities.
+
+- User management and security [5-7, 9, 11, 12, 15-17]: This functionality manages user access based on roles and checks access levels and security over the KG in the platform.
+
+- Workflow management [5]: The platform can allow storing the creation workflow so that it can be replayed and re-executed.
+
+Table 2 shows the distribution of the functionalities across the KG management platforms. The functionalities are ordered from top to bottom based on how frequently they are available in the existing platforms. The last row shows the total number of supported functionalities of each platform. From this table, our lessons learned are:
+
+- the functionalities in the "KG creation" category are a necessity; thus, they are covered by most platforms. However, one needs to keep in mind, that the platforms differ significantly in what exactly they offer here. Partly, this depends on the supported source data types (e.g., platforms geared towards building KGs from text typically provide NLP-based entity extraction).
+
+- comparatively little effort has gone into functionalities in the KG maintenance category.
+
+- the graphical user interface is the functionality supported by the most platforms.
+
+- workflow management is the functionality supported by the fewest platforms.
+
+Overall, the table quite clearly shows that this is a still young and immature field in which, so far, no clear set of commonly offered functionality has evolved. We believe that this will happen over time. Meanwhile, potential users of a platform need to carefully check what their requirements are and whether a given platform meets them.
+
+## 4 Our Proposal: a KG Management Platform in the Biodiversity Domain
+
+Our work is motivated by a strong need for KGs in the biodiversity domain, identified, e.g., by Page [2] and OpenBiodiv [19]. So far, in biodiversity as in many other domains, the few existing KGs have been created largely manually in one-off efforts. If the potential of KGs is to be leveraged for this important domain, it is our conviction that a KG management platform is needed that provides both generic and discipline-specific (e.g., dealing with species) functionality and that allows Low-Code (or even No-Code) development, maintenance, and usage of KGs. Such technologies will reduce the barriers for non-semantic-web experts to use and finally benefit from KGs to explore exciting new findings.
+
+The iKNOW project [20] aims to create such a platform, built around a semantic toolbox. The project is a joint effort by computer scientists and domain experts from the German Centre for Integrative Biodiversity Research (iDiv) ${}^{4}$.
+
+Table 2: Distribution of functionalities with respect to existing KG management platforms. The functionalities are ordered from top to bottom based on how frequently they are available in the existing platforms. The last row shows the number of supported functionalities of each platform.
+
+
+
+The work benefits from the wealth of well-curated data sources and expert knowledge on their creation, cleaning, and harmonization available at iDiv. Thus, for now, iKNOW focuses on the (semi-)automatic, reproducible transformation of tabular biodiversity data into RDF statements. It also includes provenance tracking to ensure reproducibility and updatability. Further, options for visualization, search, and querying are planned. Once established, this platform will be open-source and available to the biodiversity community. Thus, it can significantly contribute to making biodiversity data widely available, easily discoverable, and integrable.
+
+### 4.1 Workflow in the KG Creation Scenario
+
+After the quite abstract high-level description of iKNOW above, let us now take a closer look at one key functionality, the creation of a new KG. In this paper, we view Knowledge Graph generation as a construction process from scratch, i.e., using a set of operations on one or more data sources to create a Knowledge Graph.
+
+---
+
+${}^{4}$ https://www.idiv.de/en/index.html
+
+---
+
+
+
+Fig. 1: Workflow in the KG Creation Scenario at iKNOW.
+
+Figure 1 shows the planned iKNOW workflow for the KG creation scenario. It is a generalization of the workflows found in the existing platforms. The workflow shows the data flow between the steps towards KG generation. Not all steps are mandatory; some optional processes in each step can add further value to the KG based on the user's needs.
+
+For every uploaded dataset, we build a sub-KG, which will be a subgraph of the main KG in iKNOW. In the first step, users go through the authentication process. Verified users can upload their datasets. If required, a data cleaning process takes place. We offer different tools for this step, which users can select and adjust based on their needs. As we observed, most data uploaded to iKNOW is well-curated, so not all datasets might require this step. For this reason, we consider it an optional step.
+
+In the Entity Extraction step, we map the entities of the dataset to the corresponding concepts in the real world (which form the instances of the sub-KG). This mapping is the basis for interlinking entities with external KGs like Wikidata or domain-specific ones. Each mapped entity is a node in the KG. For this process, we have embedded different tools in iKNOW, from which users can select the desired tool along with the desired external KGs.
+
+In the Relation Extraction step, the relations between the KG's nodes are extracted via the user-selected tool. Note that in the entity and relation extraction steps, the tools return the extracted entities and relations to the user, who can edit them through our GUI (Data Authoring step).
+
+Each column of the relational dataset refers to a category in the world. We consider the types of the columns as classes in the KG. Together with the relations extracted in the previous step, the schema of the sub-KG is created in the Schema Generation step.
+
+In the Triple Generation step, (subject, predicate, object) triples are created based on the information extracted in the previous steps. Note that nodes in the KG are subjects and objects, and relationships are predicates. Triples are generated for both classes and instances in the sub-KG, as sketched below.
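+
+As an illustration of this step (the column names, values, and namespace are hypothetical), a row of an uploaded trait table with columns species and growth_form could yield:
+
+```turtle
+@prefix ex:   <http://example.org/iknow/> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+
+# Class triple derived from a column type (Schema Generation)
+ex:Species a rdfs:Class .
+
+# Instance triples derived from one row (Triple Generation)
+ex:Fagus_sylvatica a ex:Species ;
+    ex:growthForm "tree" .
+```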
+
+After these processes, the generated sub-KG can be used directly. However, one can take further steps such as Triple Augmentation (generating new triples and extra relations to ease KG completion), Schema Refinement (refining the schema, e.g., via logical reasoning, for KG completeness and correctness), Quality Checking (checking the quality of the generated sub-KG), and Query Building (creating customized SPARQL queries for the generated sub-KG).
+
+In the Pushing step of our platform, the generated KGs are first saved in a temporary repository (shown as "non-curated repository" in Figure 1). After manual data curation by domain experts in the Curation step, the KG is published in the main repository of our platform. With this step, we aim to increase the trust in and correctness of the information in the KG.
+
+All information regarding the user-selected tools, with their parameters and settings, along with the initial dataset and intermediate results, is saved in every step of our platform. With the help of this, users can redo previous steps (shown by arrows in both directions in Figure 1). Moreover, this enables us to track the provenance of the created sub-KG. For each step mentioned above, we plan to have a tool-recommendation service to help the user select the right tool for every process. For that, we will consider different parameters, such as the characteristics of the dataset and tools.
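+
+One conceivable way to record this information is along the lines of the W3C PROV-O vocabulary; the following sketch (all identifiers are hypothetical) links a sub-KG to the dataset and the tool run that produced it:
+
+```turtle
+@prefix prov: <http://www.w3.org/ns/prov#> .
+@prefix ex:   <http://example.org/iknow/> .
+
+ex:subKG42 a prov:Entity ;
+    prov:wasGeneratedBy ex:extractionRun7 ;
+    prov:wasDerivedFrom ex:dataset13 .
+
+ex:extractionRun7 a prov:Activity ;
+    prov:used ex:dataset13 ;
+    prov:used ex:entityExtractionToolA .   # the user-selected tool; its settings can be attached analogously
+```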
+
+### 4.2 iKNOW Architecture
+
+Figure 2 shows the planned architecture of iKNOW in five layers:
+
+- In the User Administration layer, access levels and security are controlled. Authorized users can generate or update the KG. All end-users can search and visualize the KG. The platform's admin can add new tools or functionalities and approve user registrations. The KG curator curates recent changes to the KG (newly added sub-KGs or updates to information already in the KG).
+
+- The Web-based UI layer provides the different scenarios for KG management: building a KG, updating the KG, visualizing the KG's triples, and keyword and SPARQL search.
+
+- The Platform Services layer provides the set of services required for the KG management functionalities.
+
+- The Data Access Infrastructure layer manages the communication between services and data storage.
+
+- At the bottom level of the iKNOW platform, the Data Storage layer contains the graph database repository (triple management), provenance information, and user information management.
+
+
+
+Fig. 2: Architecture of iKNOW in five layers.
+
+### 4.3 Implementation
+
+The iKNOW platform is currently under development (https://planthub.idiv.de/iknow). The Python web framework Django ${}^{5}$ is used for the backend with a PostgreSQL ${}^{6}$ database to maintain users, services, tools, datasets, and the KG generation parameters in the iKNOW platform (used in provenance tracking). We use the compiler Svelte ${}^{7}$ with SvelteKit as a framework for building web applications to create a user-friendly web interface. For security, maintenance, and provenance reasons, all tools from external providers used within the workflow will be executed in a sandbox using Docker ${}^{8}$. For managing the triplestore, we are using the graph database Blazegraph ${}^{9}$. Any sub-KG created by an end-user will first be placed in the non-curated triplestore. After curation by domain experts, the new sub-KG will be added to the curated triplestore. The curated triplestore also serves as the base for SPARQL queries and the keyword search via the search engine Elasticsearch ${}^{10}$.
+
+---
+
+${}^{5}$ https://www.djangoproject.com
+
+${}^{6}$ https://www.postgresql.org/
+
+${}^{7}$ https://svelte.dev/
+
+${}^{8}$ https://www.docker.com/
+
+${}^{9}$ https://blazegraph.com/
+
+${}^{10}$ https://www.elastic.co/elasticsearch/
+
+---
+
+iKNOW is a modular platform, which increases its flexibility and allows adding new tools. Our ultimate goal is to provide a large set of tool choices for the end-user. Although only a few tools are embedded so far, we plan to add more tools for each functionality in the platform. Users then have a variety of choices with respect to different needs and use cases. The open-source code and modular design of our platform make both the frontend and backend easily extendable. We encourage users (new developers) to use or extend our reusable UI components to speed up their development.
+
+## 5 Outlook
+
+In this paper, we surveyed eleven KG management platforms and provided a general view of their differences regarding the data sources used, KG construction approaches, and availability. Taking a closer look, we identified nineteen functionalities offered by one, several, or all of these platforms and categorized them into four groups along the lifecycle of a KG. We observed that none of the surveyed platforms supports all of the functionalities. The only category that all platforms strongly support is the creation of KGs. Beyond that, so far, there seems to be no agreement on a core set of functionalities. Even within the "creation" category, approaches vary a lot. Partly, this can be attributed to the data source types or user groups targeted by a platform. This, together with the fact that many of the platforms are not open source and/or not available so far, limits the choice potential platform users have. They need to check very carefully whether a specific platform matches their needs.
+
+We did this analysis for our domain, biodiversity research. As a result, we presented our proposed platform, iKNOW.
+
+We conclude that further domain-specific platforms (or domain-specific extensions of general platforms) are needed to fully leverage the power of KGs across domains. We also recommend that platform developers strive to support KGs along their whole lifecycle, beyond just the creation stage. We believe that both developments will occur as the field matures.
+
+## Acknowledgements
+
+The work described in this paper is conducted in the iKNOW Flexpool project of iDiv, the German Centre for Integrative Biodiversity Research, funded by DFG (project number 202548816). It is supported by iBID, iDiv's Biodiversity Data and Code Support unit. We thank our colleague Sven Thiel for comments on the manuscript.
+
+## References
+
+1. M. Nickel, K. Murphy, V. Tresp, and E. Gabrilovich, "A review of relational machine learning for knowledge graphs," Proceedings of the IEEE, vol. 104, no. 1, pp. 11-33, 2015.
+
+2. R. D. Page, "Ozymandias: a biodiversity knowledge graph," PeerJ, vol. 7, p. e6739, 2019.
+
+3. M. Manica, C. Auer, V. Weber, F. Zipoli, M. Dolfi, P. Staar, T. Laino, C. Bekas, A. Fujita, H. Toda, et al., "An information extraction and knowledge graph platform for accelerating biochemical discoveries," arXiv preprint arXiv:1907.08400, 2019.
+
+4. S. R. Bader, I. Grangel-Gonzalez, P. Nanjappa, M.-E. Vidal, and M. Maleshkova, "A knowledge graph for industry 4.0," in European Semantic Web Conference, pp. 465-480, Springer, 2020.
+
+5. M. F. Sy, B. Roman, S. Kerrien, D. M. Mendez, H. Genet, W. Wajerowicz, M. Dupont, I. Lavriushev, J. Machon, K. Pirman, et al., "Blue brain nexus: An open, secure, scalable system for knowledge graph management and data-driven science,"
+
+6. P. W. Staar, M. Dolfi, and C. Auer, "Corpus processing service: A knowledge graph platform to perform deep data exploration on corpora," Applied AI Letters, vol. 1, no. 2, p. e20, 2020.
+
+7. L. Ruqian, F. Chaoqun, W. Chuanqing, G. Shunfeng, Q. Han, S. Zhang, and C. Cungen, "Hape: A programmable big knowledge graph platform," Information Sciences, vol. 509, pp. 87-103, 2020.
+
+8. R. Lu, X. Jin, S. Zhang, M. Qiu, and X. Wu, "A study on big knowledge and its engineering issues," IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 9, pp. 1630-1644, 2018.
+
+9. P. Haase, D. M. Herzig, A. Kozlov, A. Nikolov, and J. Trame, "metaphactory: A platform for knowledge graph management," Semantic Web, vol. 10, no. 6, pp. 1109-1125, 2019.
+
+10. M. Galkin, S. Auer, M.-E. Vidal, and S. Scerri, "Enterprise knowledge graphs: A semantic approach for knowledge management in the next generation of enterprise information systems.," in ICEIS (2), pp. 88-98, 2017.
+
+11. W. Meng, D. Zhang, T. Guo, Z. Zong, Y. Liu, Y. Wang, J. Li, and W. Zhu, "Design and implementation of knowledge graph platform of power marketing," in 2021 International Conference on Computer Engineering and Application (ICCEA), pp. 295-298, IEEE, 2021.
+
+12. L. Lepore, M. Namici, G. Ronconi, M. Ruzzi, V. Santarelli, and D. F. Savo, "Monolith: an OBDM and knowledge graph management platform," in ISWC 2019 Satellites: Satellite Tracks (Posters & Demonstrations, Industry, and Outrageous Ideas) co-located with the 18th International Semantic Web Conference (ISWC 2019), Auckland, New Zealand, 26-30 October 2019, vol. 2456, pp. 173-176, CEUR-WS, 2019.
+
+13. A. Berven, O. A. Christensen, S. Moldeklev, A. L. Opdahl, and K. J. Villanger, "A knowledge-graph platform for newsrooms," Computers in Industry, vol. 123, p. 103321, 2020.
+
+14. Z. Zheng, Y. Liu, Y. Zhang, and C. Wen, "Tcmkg: A deep learning based traditional chinese medicine knowledge graph platform," in 2020 IEEE International Conference on Knowledge Graph (ICKG), pp. 560-564, IEEE, 2020.
+
+15. N. Kertkeidkachorn, R. Nararatwong, and R. Ichise, "Uwkgm: A modular platform for knowledge graph management," in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 3421-3424, 2020.
+
+16. R. Lu, C. Fei, C. Wang, Y. Huang, and S. Zhang, "Yabko-yet another big knowledge organization," in 2021 IEEE International Conference on Big Knowledge (ICBK), pp. 245-252, IEEE, 2021.
+
+17. Y. Yang, G. Zhang, J. Wang, S. Ye, and J. Hu, "Public cultural knowledge graph platform," in 2017 IEEE 11th International Conference on Semantic Computing (ICSC), pp. 322-327, IEEE, 2017.
+
+18. C. Bizer, "The emerging web of linked data," IEEE intelligent systems, vol. 24, no. 5, pp. 87-92, 2009.
+
+19. L. Penev, M. Dimitrova, V. Senderov, G. Zhelezov, T. Georgiev, P. Stoev, and K. Simov, "Openbiodiv: a knowledge graph for literature-extracted linked open data in biodiversity science," Publications, vol. 7, no. 2, p. 38, 2019.
+
+20. S. Babalou, D. Schellenberger Costa, J. Kattge, C. Römermann, and B. König-Ries, "Towards a semantic toolbox for reproducible knowledge graph generation in the biodiversity domain-how to make the most out of biodiversity data," INFORMATIK 2021, 2021.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_tex/Initial_manuscript.tex b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5d0287a94719cdbb702c03e88631cdc20ab23e8f
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SFIx1eHodWc/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,280 @@
+§ WHAT IS NEEDED IN A KNOWLEDGE GRAPH MANAGEMENT PLATFORM? A SURVEY AND A PROPOSAL
+
+Samira Babalou*${}^{1,2}$, Franziska Zander${}^{1,2}$, Erik Kleinsteuber${}^{1}$, Badr El Haouni${}^{1}$, David Schellenberger Costa${}^{2}$, Jens Kattge${}^{2,4}$, Birgitta König-Ries${}^{1,2,3}$
+
+${}^{1}$ Heinz-Nixdorf Chair for Distributed Information Systems
+
+Institute for Computer Science, Friedrich Schiller University Jena, Germany
+
+${}^{2}$ German Center for Integrative Biodiversity Research (iDiv), Halle-Jena-Leipzig, Germany
+
+${}^{3}$ Michael-Stifel-Center for Data-Driven and Simulation Science, Jena, Germany
+
+${}^{4}$ Institute of Biology/Geobotany and Botanical Garden, Martin Luther University, Halle, Germany
+
+corresponding author: samira.babalou@uni.jena.de
+
+Abstract. Knowledge Graphs (KGs) play a significant and growing role for semantics-based support of a wide variety of applications. Until recently, creating and maintaining such knowledge graphs was done in a one-off manner requiring significant manual effort and expertise. Over the last few years, the first KG management platforms supporting the lifecycle of KGs from their creation to their maintenance and use have appeared. In this paper, we first survey these platforms. We then take a step further and identify common functionalities across such platforms. We discuss nineteen such functionalities categorized into four groups: creating, extending, using, and maintaining KGs. Based on the findings of this analysis, we present our proposed KG management platform for the biodiversity domain, iKNOW. We focus on the architecture and the KG creation workflow, but also touch on other aspects.
+
+Keywords: Semantic Web · Knowledge Graph · Knowledge Graph Platform · Data Services and Functionality
+
+§ 1 INTRODUCTION
+
+Increasingly, Knowledge Graphs (KGs) form the semantic data management backbone for a wide variety of applications. A KG [1] consists of nodes connected by edges. It is built from a set of data sources via different techniques. Besides the instances, KGs can also contain schema information, which can be refined or augmented, e.g., by using a reasoner. Assigning unique identifiers to a KG's entities can accelerate the interlinking with other resources on the web. The underlying structure of KGs opens the door for further functionalities such as visualization, keyword search, and complex queries via a SPARQL endpoint.
+
+Although KGs have widely gained attention in industry and academia, developing and managing their lifecycle requires a huge effort, expertise, and different functionalities. While, in the beginning, KGs were typically one-off manual efforts, there is a growing awareness that to exploit the capabilities of Knowledge Graph technologies to the maximal extent, support for their creation, access, update, and maintenance is needed. Many of these functionalities are not specific to any given KG but can be provided rather generically. KG platforms aim to do just that.
+
+As our contribution, in this paper, we survey existing KG management platforms and compare them in a general way. We then take a step further and analyze nineteen functionalities in four categories: creating, extending, using, and maintaining KGs. To the best of our knowledge, this is the first survey about KG platforms. Based on the findings of this survey and the needs in our domain, biodiversity research, we have designed our own KG platform. We present this platform, iKNOW, in the second part of the paper.
+
+The rest of the paper is organized as follows. Section 2 surveys existing KG management platforms. The common functionalities of platforms are discussed in Section 3. Our proposal for a KG management platform focussed on biodiversity, iKNOW, is presented in Section 4. The paper is concluded in Section 5.
+
+§ 2 LITERATURE REVIEW
+
+In this paper, we define a Knowledge Graph Platform as a web-based platform for creating, managing, and making use of KGs. Such platforms mostly cover the whole lifecycle of KG application and include relevant services or functionalities for interaction and management of KGs.
+
+We contrast these with efforts to build an individual, specific KG. There have been many such efforts in different domains: e.g., Ozymandias [2] in the biodiversity domain, BCKG [3] in the biomedical domain, and I40KG [4] in the industrial domain. These KGs were built once, and their associated websites now provide access to and usage of the KG. Such approaches are out of the scope of this paper. Rather, we focus on KG management platforms, which offer a set of operations on the KG, such as generation and updates.
+
+In the following subsections, we first present the survey methodology used in this paper, then we briefly summarize the existing KG management platforms and compare them in a general way.
+
+§ 2.1 SURVEY METHODOLOGY
+
+In this subsection, we describe our systematic approach to finding publications on KG platforms: we queried for the keyword "Knowledge Graph Platform" in the Google Scholar search engine ${}^{1}$. At the time of querying, this resulted in 162 papers (including citations and patents). We used the Publish or Perish 8 tool ${}^{2}$ to save the result of the query. The result is available in our GitHub repository ${}^{3}$. Among the list of papers, we selected the relevant papers manually. We aimed to select papers that focus on KG management platforms. Some papers appeared in the Google Scholar results only because our keyword exists in their texts (e.g., in the literature review section) but do not propose a new KG platform. We did not include such cases. Moreover, we did not consider survey papers and papers written in a language other than English. In our repository, we specified which papers have been selected, and for non-selected ones, we clarified the reason. As a result, we came up with 11 KG platforms, briefly detailed in the following subsection.
+
+${}^{1}$ https://scholar.google.com/ accessed on 09.02.2022
+
+${}^{2}$ https://harzing.com/blog/2021/10/publish-or-perish-version-8
+
+§ 2.2 EXISTING KG MANAGEMENT PLATFORMS
+
+In this section, we give a brief overview of existing platforms:
+
+ * BBN (Blue Brain Nexus) [5] is an open-source platform. The KG in this platform can be built from datasets generated from heterogeneous sources and formats. BBN has three main components: i) Nexus Delta, a set of services targeting developers for managing data and the knowledge graph lifecycle; ii) Nexus Fusion, a web-based user interface enabling users to store, view, query, access, and share (meta)data and manage knowledge graphs; and iii) Nexus Forge, a Python user interface enabling data and knowledge engineers to build knowledge graphs from various data sources and formats using data mappings, transformations, and validations.
+
+ * CPS (Corpus Processing Service) [6] is a cloud platform to create and serve Knowledge Graphs over a set of corpora. It uses state-of-the-art natural language understanding models to extract entities and relationships from documents.
+
+ * HAPE (Heaven Ape) [7] is a programmable KG platform. The architecture of HAPE is designed in three parts: the client-side, which provides various kinds of services to the users; the server-side, which provides different knowledge management and processing services; and the third part, which is the KG's knowledge base. The applicability of the platform has been shown over DBpedia data. Moreover, the quality of the created KG has been evaluated via metrics introduced in [8]. Although the authors claim in their paper that the platform is open to the public, to the best of our knowledge, there is no link to the platform's source code or an online web portal.
+
+ * Metaphactory [9] is an enterprise platform for building Knowledge Graph management applications. This platform supports different categories of users (end-users, expert users, and application developers), has a customizable UI, and enables the rapid building of use case-specific applications. Metaphactory allows configuring and managing connections to many data repositories. In this platform, data sources are virtually integrated with an ontology-based data access engine, i.e., on-the-fly integration of diverse data sources. The platform is assessed via assessment parameters introduced in [10].
+
+${}^{3}$ https://github.com/fusion-jena/iKNOW
+
+ * Meng et al. [11] proposed a power marketing KG platform. The authors used a Machine Learning (ML) method to extract knowledge from unstructured text. The knowledge instances are stored in a relational database, while the relationships between them are stored in a graph database.
+
+ * MONOLITH [12] is a KG platform that combines Ontology-based Data Management (OBDM) capabilities over relational and non-relational databases into one (virtual) data source. The functionalities provided by MONOLITH can be split into two groups: one dedicated to managing OWL ontologies and providing OBDM services, exploiting the mappings between ontology and database; the other to managing KGs and providing services over them. These two groups are linked together, allowing KGs to be built through semantic data access from the results of the ontology queries.
+
+ * News Hunter [13] is geared towards supporting journalism by aggregating and semantically integrating news from a variety of sources. It is based on a microservices architecture and consists of a number of independent services: First, an extensible set of harvesters aggregates information from individual sources or existing news. Harvested news items and relevant metadata are deduplicated and stored in a source database. A translator converts items into a canonical language; this allows for cross-language news linking and the application of the broad range of existing NLP (Natural Language Processing) tools. The next step, called Lifting in the paper, runs the extracted news items through an NLP pipeline which performs named-entity recognition as well as sentiment and topic analysis. Results of this step are stored in a graph database. ML-based classifiers are used to assign labels to news items, thereby annotating them with terms from a common ontology model. Via an enricher, the KG can be augmented with information from external sources, e.g., DBpedia Spotlight.
+
+ * TCMKG [14] is a KG platform for Traditional Chinese Medicine (TCM) based on deep learning methods. First, an ontology layer represents the knowledge-based diagnosis and treatment process. It includes core entities of the domain with their associated relations. Then, with the help of a named entity recognition (NER) model, TCM entities are extracted from unstructured data.
+
+ * UWKGM [15] is a modular web-based platform for KG management. It enables users to integrate different functionalities as RESTful API services into the platform to help different user roles customize the platform as needed. The platform consists of three main components: the backend (API), the frontend (UI), and the system manager (for installation, upgrading, and deployment). The embedded entity suggestion module enables automatic triple extraction and maintains human involvement for quality control.
+
+ * YABKO [16] is the successor of HAPE and aims to support life cycle research on KGs. Researchers can upload their KGs and tools to the YABKO platform, where they are freely available for other researchers' experiments. For any requested experiment, YABKO assigns the necessary resources (space, time, KGs, tools) to it. After an experiment finishes, short-term experiments are dissolved, while long-term ones can continue to exist on the condition of publishing their results. The core motivation for building YABKO is to help visitors use open-source techniques and resources to perform experiments on KGs and share experiences with other researchers.
+
+ * Yang et al. [17] proposed a cloud computing cultural knowledge platform over multiple data sources such as Chinese Wikis, lexical databases, and cultural websites. The platform restricts its knowledge to the field of Chinese public cultural services instead of common-sense knowledge. The platform has a set of services for building, updating, and maintaining the KG. It uses rule-based reasoning methods to analyze the existing KG relations to predict possible new relations.
+
+§ 2.3 COMPARING EXISTING KG MANAGEMENT PLATFORMS
+
+In Table 1, we summarize general information about the introduced KG platforms with respect to: their Name, the Year of release (based on the published paper), the Source Data Type used to build KGs, whether they target industry or Academia, their Open-Source accessibility, the availability of an Online Demo, whether they were tested in a Use Case Study, and, finally, the KG Construction Method supported by the platform. Looking at the table, one can observe that:
+
+ * most platforms have been introduced in the last three years. This shows that the field is still young and most likely still evolving. This observation is confirmed by our analysis of provided functionality (see below).
+
+ * the platforms are very heterogeneous with respect to the number and type of data sources they support.
+
+ * for KG construction, basically, all platforms follow an ETL (Extract, Transform, Load) process along with Machine Learning (ML) approaches. They differ in how adaptable this process is and, partially depending on the type of supported data sources, on the concrete steps involved in this process.
+
+ * a (to us) surprisingly high percentage of platforms are designed for use within industry (as opposed to academia). This may be one of the reasons why quite a few of these platforms are not open source.
+
+ * all platforms had a use case study to show the capabilities of the platform by describing a specific KG's usage in a selected application domain.
+
+§ 3 COMMON FUNCTIONALITIES IN KG MANAGEMENT PLATFORMS
+
+In this section, we take a closer look at the KG platforms, extract what functionalities they offer, and compare them with respect to these functionalities. We consider a functionality as offered by a platform if it is mentioned in the respective paper. Platforms may possess other functionalities not mentioned in the papers, so a missing entry does not necessarily mean a platform does not offer a certain functionality. Overall, many of the papers were surprisingly vague about what functionality the platforms offer, so that a clear decision was not always possible. From our analysis, we identified nineteen different functionalities which can be grouped into four categories as follows:
+
+Table 1: Comparing existing KG management platforms concerning their names, the year of release, the type of source data used to build KGs, targeting academia or not, being open-source, availability of an online demo, testing in a use case study, and the KG construction method. ${\checkmark }^{ * }$ means currently not available and - shows not mentioned.
+
+| No. | Platform | Year | Source Data Type | Academia | Open-Source | Online Demo | Use Case Study | KG Construction Method |
+| 1 | BBN [5] | 2021 | different types | ✓ | ✓ | ✓ | ✓ | customized ETL process |
+| 2 | CPS [6] | 2020 | text | ✘ | ✘ | ✘ | ✓ | Machine Learning |
+| 3 | HAPE [7] | 2020 | different types | ✓ | ✘ | ✘ | ✓ | - |
+| 4 | Metaphactory [9] | 2019 | different types | ✘ | ✘ | ✓ | ✓ | customized ETL process |
+| 5 | Meng et al. [11] | 2021 | unstructured text | ✘ | ✘ | ✘ | ✓ | Machine Learning |
+| 6 | MONOLITH [12] | 2019 | - | ✘ | ✘ | ✘ | ✓ | customized ETL process |
+| 7 | News Hunter [13] | 2020 | text | - | ✘ | ✘ | ✓ | Machine Learning |
+| 8 | TCMKG [14] | 2020 | different types | - | ✘ | ✘ | ✓ | Machine Learning |
+| 9 | UWKGM [15] | 2020 | unstructured text | - | ✓ | ✓ | ✓ | customized ETL process |
+| 10 | YABKO [16] | 2021 | different types | ✓ | ✘ | ✘ | ✓ | - |
+| 11 | Yang et al. [17] | 2017 | different types | - | ✘ | ✘ | ✓ | Machine Learning |
+
+ * Functionalities for creating a KG: The platform can support different functionalities to build the KG with the desired quality:
+
+ * Data preprocessing [5,7,14,17]: Before information from a data source can be used in a KG, several preprocessing steps may be needed. These include data cleaning and data transformation in a format suitable for ingestion.
+
+ * Entity and relation extraction [6,7,9,13-15,17]: In particular, when creating KGs out of unstructured information like documents, entity and relation extraction can require complex processing. But even for structured data, this step is often necessary.
+
+ * Schema generation [7,9,12-14,17]: If a KG is supposed to contain not just a set of instances, but also type information about them, a schema needs to be created.
+
+ * KG validation [5, 7, 9, 12, 16, 17]: When a KG combines data from different sources, the initial data cleaning step, which happens at the level of an individual source, may not be sufficient to ensure that the integrated KG is consistent. Thus, the platform may take a further step of quality checking and validation of the KG.
+
+ * Functionalities for extending and augmenting KGs: This group of functionalities allows for extending KGs with additional information from other sources or from within the KG itself. While cross-linking extends a $\mathrm{{KG}}$ with information provided somewhere else, a variety of techniques are used to extend KGs "from within". They include reasoning to infer hidden knowledge, KG refinement and the computation of KG embeddings as a basis for link prediction and similarity determination.
+
+ * Cross-linking [5,9,13,17]: This functionality enables the cross-linking of KG entities to other resources or KGs like Wikidata or DBpedia. According to the linked open data (LOD) principles [18], each knowledge resource on the web receives a stable, unique, and resolvable identifier.
+
+ * KG embedding [7,9,14-17]: This is a popular method in particular for link prediction and similarity detection and can help to uncover hidden information in a KG.
+
+ * KG refinement [5, 15-17]: In some cases, after checking the quality of the generated KG, a refinement process (e.g., validating the KG to identify errors and correcting inconsistent statements) can take place.
+
+ * Reasoning [7, 12, 13, 16, 17]: The reasoning functionality can help infer additional knowledge in a KG, mainly with the help of a reasoner. We consider this as KG augmentation, too.
+
+ * Functionalities for using KGs: Depending mostly on the targeted user group, platforms can support one or several ways to interact with the created KG:
+
+ * GUI (Graphical User Interface) [5-7,9,11-17]: A GUI in a platform is functionality that eases user interaction with the platform.
+
+ * Visualization [5,7,9,11,14,15,17]: The platform can provide different types of visualization of the KG to support better understanding. CPS [6] provides visualization only for building queries.
+
+ * Keyword search [5, 7, 9, 11, 12, 15-17]: This functionality enables searching for a keyword over the developed KG in the platform.
+
+ * Query endpoint [5-7, 9, 11-14, 16, 17]: Via a query endpoint, the information in the KG can be queried, mostly via SPARQL or graph queries.
+
+ * Query catalog [9, 12]: A query catalog enables the use of pre-defined (customized) queries and the storage of queries for future reuse.
+
+ * Functionalities for maintaining and updating KGs: Once a KG has been built, it may be desirable to manage access, keep track of provenance, update the KG with new or additional sources, and curate it.
+
+ * Provenance tracking [5, 6, 9, 13]: The platform can track the provenance of the KG's entities. Such functionality eases maintaining and updating the KG.
+
+ * Update KG [5, 9, 12, 14, 15]: A KG management platform can provide functionality to update and edit a previously generated KG. After this process, KG validation might be required.
+
+ * KG curation [5, 9, 15, 17]: The platform can have KG curation functionality that mostly relies on human curation.
+
+ * Different user roles [5, 7, 9, 11, 12, 15-17]: The platform can consider different user roles, such as end-users or expert users. This functionality can support different user groups with different levels of access to the platform's functionalities.
+
+ * User management and security [5-7, 9, 11, 12, 15-17]: This functionality manages user access based on their roles and checks the access level and security over the KG in the platform.
+
+ * Workflow management [5]: The platform can allow storing the creation workflow so that it can be replayed and re-executed.
+
+Table 2 shows the distribution of the functionalities across the KG management platforms. The functionalities are ordered from top to bottom based on how frequently they are available in the existing platforms. In the last row, we show the total number of supported functionalities of each platform. From this table, our lessons learned are:
+
+ * the functionalities in the "KG creation" category are a necessity; thus, they are covered by most platforms. However, one needs to keep in mind that the platforms differ significantly in what exactly they offer here. Partly, this depends on the supported source data types (e.g., platforms geared towards building KGs from text typically provide NLP-based entity extraction).
+
+ * comparatively little effort has been put into functionalities in the KG maintenance category.
+
+ * the graphical user interface is the most widely supported functionality, offered by all platforms.
+
+ * workflow management is the least supported functionality among the existing platforms.
+
+Overall, the table quite clearly shows that this is still a young and immature field, in which no clear set of commonly offered functionalities has evolved so far. We believe that this will happen over time. Meanwhile, potential users of a platform need to carefully check what their requirements are and whether a given platform meets them.
+
+§ 4 OUR PROPOSAL: A KG MANAGEMENT PLATFORM IN THE BIODIVERSITY DOMAIN
+
+Our work is motivated by a strong need for KGs in the biodiversity domain identified, e.g., by Page [2] and OpenBiodiv [19]. So far, in biodiversity as in many other domains, the few existing KGs have been created largely manually in one-off efforts. If the potential of KGs is to be leveraged for this important domain, it is our conviction that a KG management platform providing both generic and discipline-specific (e.g., dealing with species) functionality is needed that allows Low-Code (or even No-Code) development, maintenance, and usage of KGs. Using such technologies will lower the barriers for non-semantic-web experts to use and ultimately benefit from KGs to explore exciting new findings.
+
+The iKNOW project [20] aims to create such a platform, built around a semantic-based toolbox. The project is a joint effort by computer scientists and domain experts from the German Centre for Integrative Biodiversity Research
+
+Table 2: Distribution of functionalities with respect to existing KG management platforms. The functionalities are ordered from top to down based on their frequency of availability on the existing platforms. The last row shows the number of supported functionalities of each platform.
+
+
+(iDiv) ${}^{4}$ . The work benefits from the wealth of well-curated data sources and expert knowledge on their creation, cleaning, and harmonization available at iDiv. Thus, for now, iKNOW focuses on the (semi-)automatic, reproducible transformation of tabular biodiversity data into RDF statements. It also includes provenance tracking to ensure reproducibility and updatability. Further, options for visualization, search, and query are planned. Once established, this platform will be open-source and available to the biodiversity community. Thus, it can significantly contribute to making biodiversity data widely available, easily discoverable, and integrable.
+
+§ 4.1 WORKFLOW IN THE KG CREATION SCENARIO
+
+After the quite abstract high-level description of iKNOW above, let us now take a closer look at one key functionality, the creation of a new KG. In this paper, we view Knowledge Graph generation as a construction process from scratch, i.e., using a set of operations on one or more data sources to create a Knowledge Graph.
+
+${}^{4}$ https://www.idiv.de/en/index.html
+
+
+Fig. 1: Workflow in the KG Creation Scenario at iKNOW.
+
+Figure 1 shows the planned iKNOW workflow for the KG creation scenario. It is a generalization of the workflows found in the existing platforms. The workflow shows the data flow between the steps towards KG generation. Not all steps are mandatory; some optional processes in each step can add further value to the KG based on the user's needs.
+
+For every uploaded dataset, we build a sub-KG. It will be the subgraph of the main KG in iKNOW. In the first step, users go through the authentication process. The verified users can upload their datasets. If required, the data cleaning process will take place. We offer different tools for this step, which users can select and adjust based on their needs. As we observed, most uploaded data in iKNOW are well-curated, so not all datasets might require this step. For this reason, we consider it as an optional step.
+
+In the Entity Extraction step, we map the entities of the dataset to the corresponding concepts in the real world (which become the instances of the sub-KG). This mapping is the basis for interlinking entities with external KGs like Wikidata or domain-specific ones. Each mapped entity is a node in the KG. For this process, we have embedded different tools in iKNOW, from which users can select the desired tool along with the desired external KGs.
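+To make this step concrete, the following is a minimal, hypothetical sketch of how a reconciliation tool could look up candidate Wikidata items for an entity label via Wikidata's public search API. It is an illustration only and not the actual tooling embedded in iKNOW; the function name and the confirmation workflow are assumptions.
+
+```python
+import requests
+
+WIKIDATA_API = "https://www.wikidata.org/w/api.php"
+
+def reconcile_entity(label, language="en", limit=3):
+    """Return candidate Wikidata items (QID, label, description) for a textual label.
+
+    In a platform like iKNOW, a user could confirm or reject these candidates
+    during the Data Authoring step.
+    """
+    params = {
+        "action": "wbsearchentities",
+        "search": label,
+        "language": language,
+        "limit": limit,
+        "format": "json",
+    }
+    response = requests.get(WIKIDATA_API, params=params, timeout=10)
+    response.raise_for_status()
+    return [(hit["id"], hit.get("label", ""), hit.get("description", ""))
+            for hit in response.json().get("search", [])]
+
+# Example: map a species name from a tabular biodiversity dataset to Wikidata.
+print(reconcile_entity("Fagus sylvatica"))
+```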
+
+In the Relation Extraction step, the relations between the KG's nodes will be extracted via the user-selected tool. Note that in the entity and relation extraction steps, the tools return the extracted entities and relations to the user. Through our GUI, the user can edit them (Data Authoring step).
+
+Each column of the tabular dataset refers to a category in the real world. We consider the column types as classes in the KG. Together with the relations extracted in the previous step, the schema of the sub-KG is created in the Schema Generation step.
+
+In the Triple Generation step, (subject, predicate, object) triples are created based on the information extracted in the previous steps. Note that nodes in the KG are subjects and objects, and relationships are predicates. Triples are generated for both classes and instances in the sub-KG.
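+As an illustration of this step, the sketch below turns a single record of a tabular dataset into class and instance triples with rdflib. The namespace, column names, and class/property names are hypothetical and only stand in for whatever the previous steps have produced; they are not iKNOW's actual vocabulary.
+
+```python
+from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef
+
+# Hypothetical namespace for a sub-KG; not the project's actual IRIs.
+IK = Namespace("https://example.org/iknow/")
+
+def row_to_triples(graph, row):
+    """Translate one tabular record into (subject, predicate, object) triples."""
+    species = URIRef(IK[row["species_id"]])
+    site = URIRef(IK[row["site_id"]])
+    graph.add((species, RDF.type, IK.Species))   # column types become classes
+    graph.add((site, RDF.type, IK.Site))
+    graph.add((species, IK.observedAt, site))    # extracted relation as predicate
+    graph.add((species, RDFS.label, Literal(row["species_name"])))
+    return graph
+
+g = row_to_triples(Graph(), {"species_id": "sp_42", "site_id": "site_7",
+                             "species_name": "Fagus sylvatica"})
+print(g.serialize(format="turtle"))
+```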
+
+After these processes, the generated sub-KG can be used directly. However, one can take further steps such as: Triple Augmentation (generate new triples and extra relations to ease KG completion), Schema Refinement (refine the schema, e.g., via logical reasoning for the KG completion and correctness), Quality Checking (check the quality of the generated sub-KG), and Query Building (create customized SPARQL queries for the generated sub-KG).
+
+In the Pushing step of our platform, the generated KGs are first saved in a temporary repository (shown as "non-curated repository" in Figure 1). After manual data curation by domain experts in the Curation step, the KG is published in the main repository of our platform. With this step, we aim to increase the trust in and correctness of the information in the KG.
+
+In every step of our platform, all information regarding the user-selected tools with their parameters and settings, along with the initial dataset and intermediate results, is saved. With the help of this, users can redo previous steps (shown by arrows in both directions in Figure 1). Moreover, this enables us to track the provenance of the created sub-KG. For each step mentioned above, we plan to provide a tool-recommendation service to help the user select the right tool for every process. For that, we will consider different parameters, such as the characteristics of the dataset and the tools.
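+A minimal sketch of what such a per-step provenance record could look like is given below; the field names are assumptions made for illustration and do not reflect iKNOW's actual data model.
+
+```python
+from dataclasses import dataclass, field, asdict
+from datetime import datetime, timezone
+import json
+
+@dataclass
+class StepProvenance:
+    """Hypothetical record stored after each workflow step of a KG creation run."""
+    step: str              # e.g. "EntityExtraction"
+    tool: str              # user-selected tool for this step
+    parameters: dict       # tool settings chosen by the user
+    input_artifact: str    # identifier of the step input (dataset or intermediate result)
+    output_artifact: str   # identifier of the step result
+    executed_at: str = field(
+        default_factory=lambda: datetime.now(timezone.utc).isoformat())
+
+record = StepProvenance(
+    step="EntityExtraction",
+    tool="wikidata-reconciler",
+    parameters={"target_kg": "Wikidata", "threshold": 0.8},
+    input_artifact="datasets/plants.csv",
+    output_artifact="runs/42/entities.json",
+)
+print(json.dumps(asdict(record), indent=2))
+```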
+
+§ 4.2 IKNOW ARCHITECTURE
+
+Figure 2 shows the planned architecture of iKNOW in five layers:
+
+ * In the User Administration layer, access level and security will be controlled. Authorized users can generate or update the KG. All end-users can search and visualize the KG. The platform's admin can add new tools or functionalities and approve the user registration. The KG curator curates the recent changes on the KG (newly added sub-KG or updates on previous information on KG).
+
+ * The Web-based UI layer shows different scenarios for KG management: building a KG, updating the KG, visualizing the KG's triples, and keyword and SPARQL search.
+
+ * The Platform Services layer provides a set of required services for the KG management functionalities.
+
+ * The Data Access Infrastructure layer manages the communication between services and data storage.
+
+ * At the bottom level of the iKNOW platform, the Data Storage layer contains the graph database repository (triple management), provenance information, and user information management.
+
+
+Fig. 2: Architecture of iKNOW in five layers.
+
+§ 4.3 IMPLEMENTATION
+
+The iKNOW platform is currently under development (https://planthub.idiv.de/iknow). The Python web framework Django ${}^{5}$ is used for the backend with a PostgreSQL ${}^{6}$ database to maintain users, services, tools, datasets, and the KG generation parameters in the iKNOW platform (used in provenance tracking). We use the compiler Svelte ${}^{7}$ with SvelteKit as a framework for building web applications to create a user-friendly web interface. For security, maintenance, and provenance reasons, all tools from external providers used within the workflow will be executed in a sandbox using Docker ${}^{8}$ . For managing the triplestore, we use the graph database Blazegraph ${}^{9}$ . Any sub-KG created by an end-user is first placed in the non-curated triplestore. After curation by domain experts, the new sub-KG is added to the curated triplestore. The curated triplestore also serves as the base for SPARQL queries and the keyword search via the search engine Elasticsearch ${}^{10}$ .
+
+${}^{5}$ https://www.djangoproject.com
+
+${}^{6}$ https://www.postgresql.org/
+
+${}^{7}$ https://svelte.dev/
+
+${}^{8}$ https://www.docker.com/
+
+${}^{9}$ https://blazegraph.com/
+
+${}^{10}$ https://www.elastic.co/elasticsearch/
+
+iKNOW is a modular platform; this increases its flexibility and allows new tools to be added. Our ultimate goal is to provide a large set of tool choices for the end-user. Although only a few tools are embedded so far, we plan to add more tools for each functionality in the platform, so that users have a variety of choices for different needs and use cases. The open-source code and modular design make both the frontend and the backend of our platform easily extendable. We encourage users (new developers) to use or extend our reusable UI components to speed up their development.
+
+§ 5 OUTLOOK
+
+In this paper, we surveyed eleven KG management platforms and provided a general view of their differences with respect to the data sources used, KG construction approaches, and availability. Taking a closer look, we identified nineteen functionalities offered by one, several, or all of these platforms and categorized them into four groups along the lifecycle of a KG. We observed that none of the surveyed platforms supports all of the functionalities. The only category that all platforms strongly support is the creation of KGs. Beyond that, so far, there seems to be no agreement on a core set of functionalities. Even within the "creation" category, approaches vary a lot. Partly, this can be attributed to the data source types or user groups targeted by a platform. This, together with the fact that many of the platforms are not open source and/or not available so far, limits the choice of platform potential users have. They need to check very carefully whether a specific platform matches their needs.
+
+We did this analysis for our domain, biodiversity research. As a result, we presented our proposed platform, iKNOW.
+
+We conclude that further domain-specific platforms (or domain-specific extensions of general platforms) are needed to fully leverage the power of KGs across domains. We also recommend that platform developers strive to support KGs along their entire lifecycle, beyond just the creation stage. We believe that both developments will occur as the field matures.
+
+§ ACKNOWLEDGEMENTS
+
+The work described in this paper is conducted in the iKNOW Flexpool project of iDiv, the German Centre for Integrative Biodiversity Research, funded by DFG (project number 202548816). It is supported by iBID, iDiv's Biodiversity Data and Code Support unit. We thank our colleague Sven Thiel for comments on the manuscript.
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_md/Initial_manuscript.md b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..6c1a8367c9fc26d980f3bf34ab7ef34a3458d320
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,317 @@
+# Transformation of Node to Knowledge Graph Embeddings for Faster Link Prediction in Social Networks
+
+Archit Parnami* ${}^{1}$ , Mayuri Deshpande ${}^{2}$ , Anant Kumar Mishra ${}^{2}$ , and Minwoo Lee ${}^{1}$
+
+${}^{1}$ The University of North Carolina at Charlotte, NC, USA
+
+aparnami@uncc.edu, minwoo.lee@uncc.edu
+
+${}^{2}$ Siemens Corporate Technology, Charlotte, NC, USA
+
+Abstract. Recent advances in neural networks have solved common graph problems such as link prediction, node classification, node clustering, and node recommendation by developing embeddings of entities and relations into vector spaces. Graph embeddings encode the structural information present in a graph. The encoded embeddings can then be used to predict the missing links in a graph. However, obtaining the optimal embeddings for a graph can be a computationally challenging task, especially in an embedded system. Two techniques which we focus on in this work are 1) node embeddings from random walk based methods and 2) knowledge graph embeddings. Random walk based embeddings are computationally inexpensive to obtain but are sub-optimal, whereas knowledge graph embeddings perform better but are computationally expensive. In this work, we investigate a transformation model which converts node embeddings obtained from random walk based methods directly into embeddings obtained from knowledge graph methods, without an increase in the computational cost. Extensive experimentation shows that the proposed transformation model can be used for solving link prediction in real time.
+
+Keywords: Knowledge Graphs - Node Embeddings - Link Prediction.
+
+## 1 INTRODUCTION
+
+With the advancement in internet technology, online social networks have become part of people's everyday life. Their analysis can be used for targeted advertising, crime detection, detection of epidemics, behavioural analysis, etc. Consequently, a lot of research has been devoted to the computational analysis of these networks, as they represent interactions between groups of people or communities and it is of great interest to understand these underlying interactions. Generally, these networks are modeled as graphs where a node represents a person or entity and an edge represents an interaction, relationship, or communication between two of them. For example, in a social network such as Facebook or Twitter, people are represented by nodes and the existence of an edge between two nodes represents their friendship. Other examples include a network of products purchased together on an E-commerce website like Amazon, a network of scientists publishing in a conference where an edge represents a collaboration, or a network of employees in a company working on a common project.
+
+---
+
+* Work done while A. Parnami was an intern at Siemens.
+
+---
+
+An inherent property of social networks is that they are dynamic, i.e., over time new edges are added as a network grows. Therefore, understanding the likelihood of a future association between two nodes is a fundamental problem, commonly known as link prediction [19]. Concretely, link prediction is to predict whether there will be a connection between two nodes in the future based on the existing structure of the graph and the existing attribute information of the nodes. For example, in social networks, link prediction can suggest new friends; in E-commerce, link prediction can recommend products to be purchased together [11]; in bioinformatics, it can find interactions between proteins [2]; in co-authorship networks, it can suggest new collaborations; and in the security domain, link prediction can assist in identifying hidden groups of terrorists or criminals [3].
+
+Over the years, a large number of link prediction methods have been proposed [21]. These methods are classified based on different aspects such as the network evolution rules that they model, the type and amount of information they use, or their computational complexity. Similarity-based methods such as Common Neighbors [19], Jaccard's Coefficient, Adamic-Adar Index [1], Preferential Attachment [4], and Katz Index [16] use different graph similarity metrics to predict links in a graph. Embedding learning methods [18,2,13,25] take a matrix representation of the network and factorize it to learn a low-dimensional latent representation/embedding for each node. Recently proposed network embeddings such as DeepWalk [25] and node2vec [13] fall into this category since they implicitly factorize some matrices [27].
+
+Similar to these node embedding methods, recent years have also witnessed rapid growth in knowledge graph embedding methods. A knowledge graph (KG) is a graph whose nodes are entities of different types and whose edges are the various relations among them. Link prediction in such a graph is known as knowledge graph completion. It is similar to link prediction in social network analysis, but more challenging because of the presence of multiple types of nodes and edges. For knowledge graph completion, we not only determine whether there is a link between two entities, but also predict the specific type of the link. For this reason, traditional link prediction approaches are not capable of knowledge graph completion. To tackle this issue, a new research direction known as knowledge graph embedding has been proposed [24,8,31,20,15,7,28]. The main idea is to embed the components of a KG, including entities and relations, into continuous vector spaces, so as to simplify their manipulation while preserving the inherent structure of the KG.
+
+Neither of these two approaches, however, can generate "optimal" embeddings "quickly" for real-time link prediction on new graphs. Random walk based node embedding methods are computationally efficient but give poor results, whereas KG-based methods produce optimal results but are computationally expensive. Thus, in this work, we mainly focus on embedding learning methods (i.e., walk based node embedding methods and knowledge graph completion methods) and on making them capable of finding optimal embeddings quickly enough to meet real-time constraints in practical applications. To bridge the gap between computational time and performance of embeddings on link prediction, we make the following contributions in this work:
+
+- We compare the link prediction performance and computational cost of both random walk based node embedding and KG-based embedding methods, and empirically determine that random walk based node embedding methods are faster but give sub-optimal results on link prediction, whereas KG based embedding methods are computationally expensive but perform better on link prediction.
+
+- We propose a transformation model that takes node embeddings from random walk based node embedding methods and outputs near-optimal embeddings without an increase in computational cost.
+
+- We demonstrate the results of transformation through extensive experimentation on various social network datasets of different graph sizes and different combinations of node embeddings and KG embedding methods.
+
+## 2 Background
+
+### 2.1 Problem Definition
+
+Let ${G}_{\text{homo }} = \langle V, E, A\rangle$ be an unweighted, undirected homogeneous graph, where $V$ is the set of vertices, $E \subset V \times V$ is the set of observed links, and $A$ is the adjacency matrix. The graph represents the topological structure of the social network, in which an edge $e = \langle u, v\rangle \in E$ represents an interaction that took place between $u$ and $v$ . Let $U$ denote the universal set containing all $\left| V\right| \left( {\left| V\right| - 1}\right) /2$ possible edges. Then, the set of non-existent links is $U - E$ . Our assumption is that there are some missing links (edges that will appear in the future) in the set $U - E$ . The link prediction task is then: given the current network ${G}_{\text{homo }}$ , find these missing edges.
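+The following small sketch (with a toy graph and networkx, both assumptions made only for illustration) makes the candidate set $U - E$ explicit:
+
+```python
+from itertools import combinations
+import networkx as nx
+
+# Toy homogeneous, undirected graph standing in for G_homo.
+G = nx.Graph([("Bob", "Sam"), ("Sam", "Alice"), ("Alice", "Eve")])
+
+universal_set = set(combinations(sorted(G.nodes()), 2))  # all |V|(|V|-1)/2 pairs (U)
+observed_edges = {tuple(sorted(e)) for e in G.edges()}   # observed links (E)
+candidate_links = universal_set - observed_edges         # U - E, where missing links hide
+
+print(sorted(candidate_links))
+```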
+
+Similarly, let ${G}_{kg} = \langle V, E, A\rangle$ be a Knowledge Graph (KG). A KG is a directed graph whose nodes are entities and whose edges are subject-property-object triple facts. Each edge of the form (head entity, relation, tail entity) (denoted as $\langle h, r, t\rangle )$ indicates a relationship $r$ from entity $h$ to entity $t$ . For example, $\langle$ Bob, isFriendOf, Sam $\rangle$ and $\langle$ Bob, livesIn, NewYork $\rangle$ . Note that the entities and relations in a KG are usually of different types. Link prediction in KGs aims to predict the missing $h$ or $t$ for a relation fact triple $\langle h, r, t\rangle$ , as used in [9,6,8]. In this task, for each position of a missing entity, the system is asked to rank a set of candidate entities from the knowledge graph, instead of only giving one best result [9,8].
+
+We then formulate the problem of link prediction on graph $G$ such that $G \equiv {G}_{\text{homo }} \equiv {G}_{kg}$ , i.e., KG with only one type of entity and relation. Link prediction is then to predict the missing $h$ or $t$ for a relation fact triple $\langle h, r, t\rangle$ where both $h$ and $t$ are of same kind. For example $\langle {Bob},{isFriendOf},?\rangle$ or $\langle$ Sam, isFriendOf,? $\rangle$ .
+
+### 2.2 Graph Embedding Methods
+
+Graph embedding aims to represent a graph in a low-dimensional space that preserves as much graph property information as possible. The differences between graph embedding algorithms lie in how they define the graph property to be preserved: different algorithms have different notions of node (/edge/substructure/whole-graph) similarity and of how to preserve it in the embedded space. Formally, given a graph $G = \langle V, E, A\rangle$ , a node embedding is a mapping ${f}_{1} : {v}_{i} \rightarrow {\mathbf{y}}_{\mathbf{i}} \in {\mathbb{R}}^{d}\;\forall i \in \left\lbrack n\right\rbrack$ , where $d$ is the dimension of the embeddings, $n$ is the number of vertices, and ${f}_{1}$ preserves some proximity measure defined on the graph $G$ . If there are multiple types of links/relations in the graph then, similar to node embeddings, relation embeddings can be obtained as ${f}_{2} : {r}_{j} \rightarrow {\mathbf{y}}_{\mathbf{j}} \in {\mathbb{R}}^{d}\;\forall j \in \left\lbrack k\right\rbrack$ , where $k$ is the number of types of relations.
+
+Node Embeddings using Random Walks. Random walks have been used to approximate many properties of a graph, including node centrality [23] and similarity [26]. The key innovation of these methods is optimizing the node embeddings so that nodes have similar embeddings if they tend to co-occur on short random walks over the graph. Thus, instead of using a deterministic measure of graph proximity [5], these random walk methods employ a flexible, stochastic measure of graph proximity, which has led to superior performance in a number of settings [12]. Two well-known examples of random walk based methods are node2vec [13] and DeepWalk [25].
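+The sketch below illustrates this idea in the spirit of DeepWalk: uniform random walks are generated over the graph and fed to a skip-gram model (Word2Vec via gensim) so that nodes co-occurring on walks obtain similar embeddings. It is a simplified illustration, not the OpenNE implementation used in our experiments.
+
+```python
+import random
+import networkx as nx
+from gensim.models import Word2Vec
+
+def random_walks(G, num_walks=10, walk_length=20, seed=0):
+    """Generate uniform random walks over the graph (DeepWalk-style)."""
+    rng = random.Random(seed)
+    walks = []
+    for _ in range(num_walks):
+        for node in G.nodes():
+            walk = [node]
+            while len(walk) < walk_length:
+                neighbors = list(G.neighbors(walk[-1]))
+                if not neighbors:
+                    break
+                walk.append(rng.choice(neighbors))
+            walks.append([str(n) for n in walk])
+    return walks
+
+G = nx.karate_club_graph()
+walks = random_walks(G)
+# Skip-gram over the walks: nodes that co-occur on walks get similar embeddings.
+model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1, epochs=5)
+embedding_of_node_0 = model.wv["0"]
+```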
+
+KG Embeddings. KG embedding methods usually consist of three steps. The first step specifies the form in which entities and relations are represented in a continuous vector space. Entities are usually represented as vectors, i.e., deterministic points in the vector space [24,8,31]. In the second step, a scoring function ${f}_{r}\left( {h, t}\right)$ is defined on each fact $\langle h, r, t\rangle$ to measure its plausibility. Facts observed in the KG tend to have higher scores than those that have not been observed. Finally, to learn the entity and relation representations (i.e., embeddings), the third step solves an optimization problem that maximizes the total plausibility of observed facts, as detailed in [30]. The KG embedding methods we use for experiments in this paper are TransE [8], TransH [31], TransD [20], RESCAL [32] and SimplE [17].
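+As an illustration of the scoring-function idea, the snippet below sketches a TransE-style score $-\|h + r - t\|$; since our graphs have a single relation type, the relation vector can simply be kept constant (here zero). This is only an illustration of the scoring principle, not the OpenKE implementation used later.
+
+```python
+import numpy as np
+
+def transe_score(h, r, t, norm=2):
+    """TransE-style plausibility: the smaller ||h + r - t||, the more plausible <h, r, t>."""
+    return -np.linalg.norm(h + r - t, ord=norm)
+
+rng = np.random.default_rng(0)
+d = 32
+h, t = rng.normal(size=d), rng.normal(size=d)
+r = np.zeros(d)  # single relation type, kept constant as described in Section 3.2
+print(transe_score(h, r, t))
+```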
+
+
+
+Fig. 1: Transformation Model. Input Graph: Green edges are missing links and red edges represents present links. First, a random walk method outputs node embeddings (source) for a graph. These embeddings are then used to initialize KG embedding method, which outputs finetuned embeddings. A transformation model is then trained between source and finetuned embeddings.
+
+## 3 Methodology
+
+A transformation model is proposed to expedite the fine-tuning process of KG embedding methods. Let ${G}_{n, m}$ be a graph with $n$ vertices and $m$ edges. Given the node embeddings of the graph $G$ , we want to transform them into optimal node embeddings.
+
+### 3.1 Node Embedding Generation
+
+The input graph ${G}_{n, m}$ is fed into one of the random walk based graph embedding methods (node2vec [13] or DeepWalk [25]), which gives us the node embeddings. Let $f$ be a random walk based graph embedding method and let ${E}_{\text{source }}^{i}$ denote the output node embeddings:
+
+$$
+{E}_{\text{source }}^{i} = f\left( {G}^{i}\right) \tag{1}
+$$
+
+where ${G}^{i}$ is the ${i}^{th}$ graph in the dataset of graphs $D = \left\{ {{G}^{1},{G}^{2},\ldots }\right\}$ and ${E}_{\text{source }}^{i} \in$ ${\mathbb{R}}^{n \times d}$ with the embedding dimension $d$ .
+
+### 3.2 Knowledge Embedding Generation
+
+In a KG-based embedding algorithm (such as TransE), the input is a graph and the initial embeddings are randomly initialized. The algorithm uses a scoring function and optimizes the initial embeddings to output the trained embeddings for the given graph. Since we are working with a homogeneous graph with only one type of relation, we do not need to learn the embedding of the relation; hence it is kept constant and only the node embeddings are learned. Let ${E}_{\text{initial }}^{i}$ be the initial node embeddings, ${E}_{\text{target }}^{i}$ the trained embeddings, and $g$ the KG method with parameters $\alpha$ .
+
+$$
+{E}_{\text{target }}^{i} = g\left( {{G}^{i},{E}_{\text{initial }}^{i};\alpha }\right) \tag{2}
+$$
+
+where ${E}_{\text{target }}^{i} \in {R}^{n \times d}$ and ${E}_{\text{initial }}^{i} \in {R}^{n \times d}$ .
+
+Instead of using randomly initialized embeddings ${E}_{\text{initial }}^{i}$ to obtain target embeddings ${E}_{\text{target }}^{i}$ , we can initialize with ${E}_{\text{source }}^{i}$ in Eq. (1) as
+
+$$
+{E}_{\text{finetuned }}^{i} = g\left( {{G}^{i},{E}_{\text{source }}^{i};\alpha }\right) \tag{3}
+$$
+
+where ${E}_{\text{finetuned }}^{i} \in {R}^{n \times d}$ are the fine-tuned output embeddings. This idea of better initialization has also been explored previously in [22, 10], where it has been shown to result in embeddings of higher quality.
+
+### 3.3 Transformation Model with Self-Attention
+
+Using the node embeddings ${E}_{\text{source }}^{i}$ from Eq. (1) and the fine-tuned KG embeddings ${E}_{\text{finetuned }}^{i}$ from Eq. (3), we train a transformation model which can learn to transform the node embeddings from a node-based method to KG embeddings. We adopt self-attention [29] on the graph adjacency matrix, as explained in Algorithm 1:
+
+$$
+{E}_{\text{transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{source }}^{i};\theta }\right) \tag{4}
+$$
+
+where ${E}_{\text{transformed }}^{i} \in {R}^{n \times d}$ are the transformed embeddings and $\theta$ are the parameters of the self-attention model.
+
+The error between the fine-tuned and transformed embeddings is calculated using squared euclidean distance as:
+
+$$
+{E}_{\text{error }}^{i} = 1/n\sum {\begin{Vmatrix}{E}_{\text{transformed }}^{i} - {E}_{\text{finetuned }}^{i}\end{Vmatrix}}^{2}. \tag{5}
+$$
+
+The loss on batch $\mathbf{X}$ of graphs is measured as:
+
+$$
+\operatorname{Loss}\left( \mathbf{X}\right) = 1/b\mathop{\sum }\limits_{{i = 1}}^{b}{E}_{\text{error }}^{i} \tag{6}
+$$
+
+where $\mathbf{X} = \left\{ \left( {{E}_{\text{transformed }}^{i},{E}_{\text{finetuned }}^{i}}\right) \right\}$ and $b$ is the batch size. Since KG embeddings are trained from facts/triplets obtained from the adjacency matrix of the graph, a self-attention model reinforced with the information of the adjacency matrix, when applied to node embeddings, is able to learn the transformation function, as observed in our experiments (Figure 3). The proposed algorithm is summarized in Algorithm 2.
+
+Algorithm 1: Self-attention on graph adjacency matrix
+
+---
+
+Function SelfAttention $\left( {G}_{n,m}, {E}_{n \times d}\right)$ :
+
+    ${A}_{n \times n} =$ adjacency matrix of ${G}_{n,m}$
+
+    ${K}_{n \times d} =$ affine(E, d)
+
+    ${Q}_{n \times d} =$ affine(E, d)
+
+    ${\text{Logits}}_{n \times n} =$ matmul(Q, transpose(K))
+
+    ${\text{AttendedLogits}}_{n \times n} =$ Logits $+ A$
+
+    ${V}_{n \times d} =$ affine(E, d)
+
+    ${\text{Output}}_{n \times d} =$ matmul(AttendedLogits, V)
+
+    return Output
+
+---
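+A compact PyTorch sketch of Algorithm 1 could look as follows; the affine maps are assumed to be learned linear layers, and, following the pseudocode, no softmax normalization is applied to the attention logits.
+
+```python
+import torch
+import torch.nn as nn
+
+class GraphSelfAttention(nn.Module):
+    """Sketch of Algorithm 1: self-attention biased by the graph adjacency matrix."""
+
+    def __init__(self, dim):
+        super().__init__()
+        self.key = nn.Linear(dim, dim)    # affine(E, d)
+        self.query = nn.Linear(dim, dim)  # affine(E, d)
+        self.value = nn.Linear(dim, dim)  # affine(E, d)
+
+    def forward(self, E, A):
+        # E: (n, d) source node embeddings; A: (n, n) adjacency matrix.
+        K, Q, V = self.key(E), self.query(E), self.value(E)
+        logits = Q @ K.transpose(0, 1)   # (n, n) pairwise attention scores
+        attended = logits + A            # inject the graph structure
+        return attended @ V              # (n, d) transformed embeddings
+```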
+
+Algorithm 2: Training the transformation model
+
+---
+
+Input: Dataset of Graphs ${D}_{\text{train }} = \left\{ {{G}^{1},{G}^{2},\ldots ,{G}^{n}}\right\}$
+
+foreach ${G}^{i}$ in ${D}_{\text{train }}$ do
+
+ ${E}_{\text{source }}^{i} \leftarrow f\left( {G}^{i}\right)$
+
+end
+
+foreach ${G}^{i}$ in ${D}_{\text{train }}$ do
+
+ ${E}_{\text{finetuned }}^{i} \leftarrow g\left( {{G}^{i},{E}_{\text{source }}^{i};\alpha }\right)$
+
+end
+
+while true do
+
+ $\mathbf{B} = \left\{ \left( {{E}_{\text{source }}^{i},{E}_{\text{finetuned }}^{i}}\right) \right\}$ $\triangleright$ Sample batch
+
+ foreach ${E}_{\text{source }}^{i}$ in $\mathbf{B}$ do
+
+ ${E}_{\text{transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{source }}^{i};\theta }\right)$
+
+ end
+
+ $\mathbf{X} = \left\{ \left( {{E}_{\text{transformed }}^{i},{E}_{\text{finetuned }}^{i}}\right) \right\}$
+
+ $\theta \leftarrow \theta - \beta {\nabla }_{\theta }\operatorname{Loss}\left( \mathbf{X}\right)$ $\triangleright$ Update
+
+end
+
+---
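+A corresponding training-loop sketch for Algorithm 2, implementing the loss of Eqs. (5) and (6), is shown below; the data format of `batches` and the optimizer choice (Adam) are assumptions made for illustration.
+
+```python
+import torch
+
+def train_transformation(model, batches, epochs=100, lr=1e-3):
+    """Fit the transformation model (Algorithm 2) with the loss of Eqs. (5)-(6).
+
+    `batches` is a list of batches; each batch is a list of
+    (A, E_source, E_finetuned) tensor triples, one per graph.
+    """
+    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
+    for _ in range(epochs):
+        for batch in batches:
+            per_graph_losses = []
+            for A, E_source, E_finetuned in batch:
+                E_transformed = model(E_source, A)
+                # Eq. (5): mean squared Euclidean distance over the n nodes.
+                per_graph_losses.append(
+                    ((E_transformed - E_finetuned) ** 2).sum(dim=1).mean())
+            loss = torch.stack(per_graph_losses).mean()  # Eq. (6): average over the batch
+            optimizer.zero_grad()
+            loss.backward()
+            optimizer.step()
+    return model
+```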
+
+## 4 Experiments
+
+### 4.1 Datasets
+
+Yang et al. [33] introduced social network datasets with ground-truth communities. Each dataset $D$ is a network having a total of $N$ nodes, $E$ edges, and a set of communities (Table 1).
+
+| Dataset | Description | Nodes | Edges | Communities |
| YouTube | Friendship | 1,134,890 | 2,987,624 | 8,385 |
| DBLP | Co-authorship | 317,080 | 1,049,866 | 13,477 |
| Amazon | Co-purchasing | 334,863 | 925,872 | 75,149 |
| LiveJournal | Friendship | 3,997,962 | 34,681,189 | 287,512 |
| Orkut | Friendship | 3,072,441 | 117,185,083 | 6,288,363 |
+
+Table 1: Datasets
+
+
+
+Fig. 2: Histogram showing community size vs its frequency. DBLP, YouTube and Amazon datasets have smaller size communities and LiveJournal and Orkut have larger size communities.
+
+The communities in each dataset are of different sizes. They range from a small size (1-20) to bigger sizes (380-400). There are more communities with small sizes and their frequency decreases as their size increases. This trend is depicted in Figure 2.
+
+YouTube ${}^{3}$ , Orkut ${}^{3}$ and LiveJournal ${}^{3}$ are friendship networks where each community is a user-defined group. Nodes in the community represent users, and edges represent their friendship.
+
+DBLP ${}^{3}$ is a co-authorship network where two authors are connected if they have published at least one paper together. A community is represented by a publication venue, e.g., a journal or conference. Authors who have published in a certain journal or conference form a community.
+
+Amazon ${}^{3}$ co-purchasing network is based on the "Customers Who Bought This Item Also Bought" feature of the Amazon website. If a product $i$ is frequently co-purchased with product $j$ , the graph contains an undirected edge between $i$ and $j$ . Each connected component in a product category defined by Amazon acts as a community, where nodes represent products in the same category and edges indicate that they were purchased together.
+
+### 4.2 Training
+
+We consider each community in a dataset as an individual graph ${G}_{n, m}$ , with vertices representing the entities in the community and edges representing their relationships. For training the transformation model, we select communities of a particular size range, which act as the dataset $D$ of graphs (Table 2). We randomly disable 20% of the links (edges) in each graph to act as missing links for link prediction. In all the experiments, the embedding dimension is set to 32, which worked best in our pilot test. We used OpenNE ${}^{4}$ for generating node2vec and DeepWalk embeddings and OpenKE [14] for generating KG embeddings. The dataset $D$ of graphs is split into train, validation, and test splits of 64%, 16%, and 20% respectively.
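+A minimal sketch of this preparation (hiding 20% of each community graph's edges and splitting the set of graphs 64/16/20) is given below; it uses networkx and Python's random module and is only an illustration of the setup, not the exact scripts used for the experiments.
+
+```python
+import random
+import networkx as nx
+
+def hide_links(G, fraction=0.2, seed=0):
+    """Randomly disable a fraction of edges to act as missing links for evaluation."""
+    rng = random.Random(seed)
+    edges = list(G.edges())
+    hidden = rng.sample(edges, int(fraction * len(edges)))
+    observed = G.copy()
+    observed.remove_edges_from(hidden)
+    return observed, hidden
+
+def split_dataset(graphs, seed=0):
+    """64% / 16% / 20% train / validation / test split of the community graphs."""
+    rng = random.Random(seed)
+    graphs = list(graphs)
+    rng.shuffle(graphs)
+    n_train, n_val = int(0.64 * len(graphs)), int(0.16 * len(graphs))
+    return (graphs[:n_train],
+            graphs[n_train:n_train + n_val],
+            graphs[n_train + n_val:])
+
+# Example with a toy community graph.
+observed, hidden = hide_links(nx.karate_club_graph())
+print(len(observed.edges()), "observed edges,", len(hidden), "hidden links")
+```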
+
+---
+
+${}^{3}$ http://snap.stanford.edu/data/index.html#communities
+
+---
+
+| Dataset | Graph Size | Number of Graphs | Average Degree | Average Density |
| YouTube | 16-21 | 338 | 3.00 | 0.17 |
| DBLP | 16-21 | 654 | 4.93 | 0.29 |
| Amazon | 21-25 | 1425 | 4.00 | 0.18 |
| LiveJournal | 51-55 | 1504 | 6.11 | 0.12 |
| LiveJournal | 61-65 | 1101 | 7.20 | 0.11 |
| LiveJournal | 71-75 | 806 | 7.53 | 0.10 |
| LiveJournal | 81-85 | 672 | 6.58 | 0.08 |
| LiveJournal | 91-95 | 497 | 8.01 | 0.08 |
| LiveJournal | 101-105 | 400 | 6.85 | 0.06 |
| LiveJournal | 111-115 | 351 | 5.89 | 0.05 |
| LiveJournal | 121-125 | 332 | 7.67 | 0.06 |
| Orkut | 151-155 | 1868 | 7.20 | 0.04 |
| Orkut | 251-255 | 654 | 7.21 | 0.028 |
| Orkut | 351-355 | 335 | 7.33 | 0.020 |
+
+Table 2: Selected datasets and graph size for experiments.
+
+### 4.3 Evaluation Metrics
+
+For evaluation, we use MRR and Precision@K. The algorithm predicts a list of ranked candidates for an incoming query. A filtering operation removes pre-existing triples of the knowledge graph from this list. MRR computes the mean of the reciprocal rank of the correct candidate in the list, and Precision@K evaluates the rate of correct candidates appearing in the top $K$ candidates predicted. Due to space constraints, we only present the results for MRR. Results for Precision@K can be found at our GitHub ${}^{5}$ .
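+For clarity, a small sketch of how these two metrics can be computed from ranked candidate lists (after filtering) is shown below; the toy data is made up purely for illustration.
+
+```python
+def mean_reciprocal_rank(ranked_lists, targets):
+    """MRR over queries: reciprocal rank of the correct candidate in each ranked list."""
+    rr = [1.0 / (ranked.index(t) + 1) for ranked, t in zip(ranked_lists, targets)
+          if t in ranked]
+    return sum(rr) / len(ranked_lists)
+
+def precision_at_k(ranked_lists, targets, k=10):
+    """Fraction of queries whose correct candidate appears in the top-k predictions."""
+    hits = sum(1 for ranked, t in zip(ranked_lists, targets) if t in ranked[:k])
+    return hits / len(ranked_lists)
+
+# Toy example: two queries with ranked candidate entities.
+ranked = [["Sam", "Alice", "Eve"], ["Eve", "Bob", "Sam"]]
+correct = ["Alice", "Sam"]
+print(mean_reciprocal_rank(ranked, correct), precision_at_k(ranked, correct, k=2))
+```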
+
+## 5 Results & Discussions
+
+From the results depicted in Figure 3, we observe that the target KG embeddings (TransE, TransH, etc.) almost always outperform the random-walk based source embeddings (node2vec and DeepWalk), except in the case of SimplE and DistMult, where both methods perform poorly. This can also be observed in Figure 4.
+
+Finetuned KG embeddings achieve better or equivalent performance compared to target KG embeddings. This is confirmed by the ANOVA test in Figure 4, where there is no significant difference between the MRRs obtained from finetuned and target KG embeddings in most cases. Specifically, translational methods such as TransE, TransH, and TransD show equivalent performance for finetuned and target embeddings, whereas for SimplE, RESCAL, and DistMult the finetuned embeddings become better than the target embeddings as the graph size grows.
+
+---
+
+${}^{4}$ https://github.com/thunlp/OpenNE
+
+${}^{5}$ https://github.com/ArchitParnami/GraphProject
+
+---
+
+
+
+Fig. 3: Performance evaluation of different embeddings on link prediction using MRR (y-axis). Source (green) refers to embeddings from node2vec (left) and DeepWalk (right). Target (brown) refers to KG embeddings from TransE, TransH, TransD, SimplE, RESCAL, or DistMult. For each source and target pair, we evaluate finetuned (orange) embeddings (obtained by initializing target method with source embeddings) and transformed (red) embeddings (obtained by applying transformation model on source embeddings). Results are presented on different datasets of varying graph sizes.
+
+
+
+Fig. 4: ANOVA test of MRR scores from two embedding methods (Method 1 and Method 2). The difference of MRR scores between the two methods is significant when their p-values are $< {0.05}$ (light green) and not significant otherwise (light red). The values in each cell are the difference between the means of MRR scores from two methods (Method 2 - Method 1). The text in bold represents when Method 2 did better than Method 1. Source method refers to node2vec (left) and DeepWalk (right). Target method refers to TransE, TransH, TransD, SimplE, RESCAL, or DistMult in each row.
+
+
+
+Fig. 5: CPU Time (left y-axis) vs. Graph Size (x-axis) and Mean MRR (right y-axis) vs. Graph Size comparison of finetuned (TransE finetuned from node2vec) and transformed embeddings (from node2vec). As the graph size increases, the time to obtain embeddings from KG methods (TransE) also increases significantly. However, there is no significant increase in time for the transformation (from node2vec) once we have the transformation model. The mean MRR scores of both finetuned and transformed embeddings also drop with the increase in graph size; however, they perform equally well (for graphs <76). Note that the finetuning time and the transformation time both include the time to obtain node2vec embeddings as well.
+
+Transformed embeddings consistently outperform source embeddings and have performance similar to finetuned embeddings, at least for graphs of sizes up to 65. The performance drop starts from graph size 71-75 for the transformation to TransD from DeepWalk, and from 81-85 for the transformation to TransE from node2vec. For RESCAL, the transformation works for larger graphs with node2vec and up to size 121-125 with DeepWalk.
+
+As the graph size increases (top to bottom), the overall MRR scores decrease for all the embeddings, as expected. In Figure 5, we compare the computation time and MRR performance of transformed and finetuned embeddings where the source method is node2vec and the target method is TransE. It can be seen that the transformed embeddings give performance similar to the finetuned embeddings (without any significant increase in computational cost) up to graphs of size 71-75. Thereafter, the transformed embeddings perform poorly; we attribute this to the poor finetuned embeddings on which the transformation model was trained.
+
+## 6 Conclusion
+
+In this work, we have demonstrated that random-walk based node embedding (source) methods are computationally efficient but give sub-optimal results on link prediction in social networks, whereas KG based embedding (target & fine-tuned) methods perform better but are computationally expensive. For our requirement of generating optimal embeddings quickly for real-time link prediction, we proposed a self-attention based transformation model to convert walk-based embeddings to optimal KG embeddings. The proposed model works well for smaller graphs, but as the complexity of the graph increases, the transformation performance decreases. For future work, our goal is to explore better transformation models for bigger graphs.
+
+## References
+
+1. Adamic, L.A., Adar, E.: Friends and neighbors on the web. Social networks 25(3), 211-230 (2003)
+
+2. Airoldi, E.M., Blei, D.M., Fienberg, S.E., Xing, E.P., Jaakkola, T.: Mixed membership stochastic block models for relational data with application to protein-protein interactions. In: Proceedings of the international biometrics society annual meeting. vol. 15 (2006)
+
+3. Al Hasan, M., Chaoji, V., Salem, S., Zaki, M.: Link prediction using supervised learning. In: SDM06: workshop on link analysis, counter-terrorism and security (2006)
+
+4. Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286(5439), 509-512 (1999)
+
+5. Belkin, M., Niyogi, P.: Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Advances in neural information processing systems. pp. 585-591 (2002)
+
+6. Bordes, A., Glorot, X., Weston, J., Bengio, Y.: Joint learning of words and meaning representations for open-text semantic parsing. In: Artificial Intelligence and Statistics. pp. 127-135 (2012)
+
+7. Bordes, A., Glorot, X., Weston, J., Bengio, Y.: A semantic matching energy function for learning with multi-relational data. Machine Learning 94(2), 233-259 (2014)
+
+8. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Advances in neural information processing systems. pp. 2787-2795 (2013)
+
+9. Bordes, A., Weston, J., Collobert, R., Bengio, Y.: Learning structured embeddings of knowledge bases. In: Twenty-Fifth AAAI Conference on Artificial Intelligence (2011)
+
+10. Chen, H., Perozzi, B., Hu, Y., Skiena, S.: Harp: Hierarchical representation learning for networks. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
+
+11. Chen, H., Li, X., Huang, Z.: Link prediction approach to collaborative filtering. In: Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries. pp. 141-142. IEEE (2005)
+
+12. Goyal, P., Ferrara, E.: Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems 151, 78-94 (2018)
+
+13. Grover, A., Leskovec, J.: node2vec: Scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 855-864. ACM (2016)
+
+14. Han, X., Cao, S., Lv, X., Lin, Y., Liu, Z., Sun, M., Li, J.: Openke: An open toolkit for knowledge embedding. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 139-144 (2018)
+
+15. Jenatton, R., Roux, N.L., Bordes, A., Obozinski, G.R.: A latent factor model for highly multi-relational data. In: Advances in Neural Information Processing Systems. pp. 3167-3175 (2012)
+
+16. Katz, L.: A new status index derived from sociometric analysis. Psychometrika 18(1), 39-43 (1953)
+
+17. Kazemi, S.M., Poole, D.: Simple embedding for link prediction in knowledge graphs. In: Advances in Neural Information Processing Systems. pp. 4284-4295 (2018)
+
+18. Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer (8), 30-37 (2009)
+
+19. Liben-Nowell, D., Kleinberg, J.: The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology 58(7), 1019-1031 (2007)
+
+20. Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings for knowledge graph completion. In: Twenty-ninth AAAI conference on artificial intelligence (2015)
+
+21. Lü, L., Zhou, T.: Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications 390(6), 1150-1170 (2011)
+
+22. Luo, Y., Wang, Q., Wang, B., Guo, L.: Context-dependent knowledge graph embedding. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. pp. 1656-1661 (2015)
+
+23. Newman, M.E.: A measure of betweenness centrality based on random walks. Social Networks 27(1), 39-54 (2005)
+
+24. Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: Proceedings of the 28th International Conference on International Conference on Machine Learning. vol. 11, pp. 809-816 (2011)
+
+25. Perozzi, B., Al-Rfou, R., Skiena, S.: Deepwalk: Online learning of social representations. In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 701-710. ACM (2014)
+
+26. Pirotte, A., Renders, J.M., Saerens, M., et al.: Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge & Data Engineering (3), 355-369 (2007)
+
+27. Qiu, J., Dong, Y., Ma, H., Li, J., Wang, K., Tang, J.: Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. pp. 459-467. ACM (2018)
+
+28. Socher, R., Chen, D., Manning, C.D., Ng, A.: Reasoning with neural tensor networks for knowledge base completion. In: Advances in neural information processing systems. pp. 926-934 (2013)
+
+29. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: Advances in neural information processing systems. pp. 5998-6008 (2017)
+
+30. Wang, Q., Mao, Z., Wang, B., Guo, L.: Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering 29(12), 2724-2743 (2017)
+
+31. Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: Twenty-Eighth AAAI conference on artificial intelligence (2014)
+
+32. Yang, B., Yih, W.t., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 (2014)
+
+33. Yang, J., Leskovec, J.: Defining and evaluating network communities based on ground-truth. Knowledge and Information Systems 42(1), 181-213 (2015)
\ No newline at end of file
diff --git a/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_tex/Initial_manuscript.tex b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..994be88f16c0bfd3faf33cffb9a54d814d26f847
--- /dev/null
+++ b/papers/KGCW/KGCW 2022/KGCW 2022 Workshop/SZeAub5Ty9/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,294 @@
+§ TRANSFORMATION OF NODE TO KNOWLEDGE GRAPH EMBEDDINGS FOR FASTER LINK PREDICTION IN SOCIAL NETWORKS
+
+Archit Parnami* ${}^{1}$ , Mayuri Deshpande ${}^{2}$ , Anant Kumar Mishra ${}^{2}$ , and Minwoo Lee ${}^{1}$
+
+${}^{1}$ The University of North Carolina at Charlotte, NC, USA
+
+aparnami@uncc.edu, minwoo.lee@uncc.edu
+
+${}^{2}$ Siemens Corporate Technology, Charlotte, NC, USA
+
+Abstract. Recent advances in neural networks have solved common graph problems such as link prediction, node classification, node clustering, and node recommendation by developing embeddings of entities and relations into vector spaces. Graph embeddings encode the structural information present in a graph. The encoded embeddings can then be used to predict the missing links in a graph. However, obtaining the optimal embeddings for a graph can be computationally challenging, especially in an embedded system. Two techniques which we focus on in this work are 1) node embeddings from random walk based methods and 2) knowledge graph embeddings. Random walk based embeddings are computationally inexpensive to obtain but are sub-optimal, whereas knowledge graph embeddings perform better but are computationally expensive. In this work, we investigate a transformation model which converts node embeddings obtained from random walk based methods into embeddings obtained from knowledge graph methods directly, without an increase in computational cost. Extensive experimentation shows that the proposed transformation model can be used to solve link prediction in real time.
+
+Keywords: Knowledge Graphs - Node Embeddings - Link Prediction.
+
+§ 1 INTRODUCTION
+
+With the advancement of internet technology, online social networks have become part of people's everyday life. Their analysis can be used for targeted advertising, crime detection, detection of epidemics, behavioural analysis, etc. Consequently, a lot of research has been devoted to the computational analysis of these networks, as they represent interactions within a group of people or a community, and it is of great interest to understand these underlying interactions. Generally, these networks are modeled as graphs where a node represents a person or entity and an edge represents an interaction, relationship, or communication between two of them. For example, in a social network such as Facebook or Twitter, people are represented by nodes and the existence of an edge between two nodes represents their friendship. Other examples include a network of products purchased together on an E-commerce website like Amazon, a network of scientists publishing in a conference where an edge represents a collaboration, or a network of employees in a company working on a common project.
+
+* Work done while A. Parnami was an intern at Siemens.
+
+An inherent property of social networks is that they are dynamic, i.e., new edges are added over time as a network grows. Therefore, understanding the likelihood of a future association between two nodes is a fundamental problem, commonly known as link prediction [19]. Concretely, link prediction is to predict whether there will be a connection between two nodes in the future based on the existing structure of the graph and the existing attribute information of the nodes. For example, in social networks, link prediction can suggest new friends; in E-commerce, it can recommend products to be purchased together [11]; in bioinformatics, it can find interactions between proteins [2]; in co-authorship networks, it can suggest new collaborations; and in the security domain, it can assist in identifying hidden groups of terrorists or criminals [3].
+
+Over the years, a large number of link prediction methods have been proposed [21]. These methods are classified based on different aspects such as the network evolution rules that they model, the type and amount of information they use, or their computational complexity. Similarity-based methods such as Common Neighbors [19], Jaccard's Coefficient, Adamic-Adar Index [1], Preferential Attachment [4], and the Katz Index [16] use different graph similarity metrics to predict links in a graph. Embedding learning methods [18,2,13,25] take a matrix representation of the network and factorize it to learn a low-dimensional latent representation/embedding for each node. Recently proposed network embeddings such as DeepWalk [25] and node2vec [13] are in this category since they implicitly factorize some matrices [27].
+
+Similar to these node embedding methods, recent years have also witnessed a rapid growth in knowledge graph embedding methods. A knowledge graph (KG) is a graph whose nodes are entities of different types and whose edges are the various relations among them. Link prediction in such a graph is known as knowledge graph completion. It is similar to link prediction in social network analysis, but more challenging because of the presence of multiple types of nodes and edges. For knowledge graph completion, we not only determine whether there is a link between two entities, but also predict the specific type of the link. For this reason, traditional link prediction approaches are not capable of knowledge graph completion. To tackle this issue, a new research direction known as knowledge graph embedding has been proposed [24,8,31,20,15,7,28]. The main idea is to embed the components of a KG, including entities and relations, into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the KG.
+
+Neither of these two approaches, however, can generate "optimal" embeddings "quickly" for real-time link prediction on new graphs. Random walk based node embedding methods are computationally efficient but give poor results, whereas KG-based methods produce optimal results but are computationally expensive. Thus, in this work, we mainly focus on embedding learning methods (i.e., walk based node embedding methods and knowledge graph completion methods) which are capable of finding optimal embeddings quickly enough to meet real-time constraints in practical applications. To bridge the gap between the computational time and the link prediction performance of embeddings, we make the following contributions in this work:
+
+ * We compare the link prediction performance and computational cost of both random walk based node embedding and KG-based embedding methods, and empirically determine that random walk based node embedding methods are faster but give sub-optimal results on link prediction, whereas KG based embedding methods are computationally expensive but perform better on link prediction.
+
+ * We propose a transformation model that takes node embeddings from random walk based node embedding methods and outputs near-optimal embeddings without an increase in computational cost.
+
+ * We demonstrate the results of the transformation through extensive experimentation on various social network datasets of different graph sizes and different combinations of node embedding and KG embedding methods.
+
+§ 2 BACKGROUND
+
+§ 2.1 PROBLEM DEFINITION
+
+Let ${G}_{\text{ homo }} = \langle V,E,A\rangle$ be an unweighted, undirected homogeneous graph where $V$ is the set of vertices, $E$ is the set of observed links, i.e., $E \subset V \times V$ , and $A$ is the adjacency matrix. The graph $G$ represents the topological structure of the social network in which an edge $e = \langle u,v\rangle \in E$ represents an interaction that took place between $u$ and $v$ . Let $U$ denote the universal set containing all $\left( {\left| V\right| \times \left( {\left| V\right| - 1}\right) }\right) /2$ possible edges. Then, the set of non-existent links is $U - E$ . Our assumption is that there are some missing links (edges that will appear in the future) in the set $U - E$ . The link prediction task is then: given the current network ${G}_{\text{ homo }}$ , find these missing edges.
+
+Similarly, let ${G}_{kg} = \langle V,E,A\rangle$ be a knowledge graph (KG). A KG is a directed graph whose nodes are entities and whose edges are subject-property-object triple facts. Each edge of the form (head entity, relation, tail entity) (denoted as $\langle h,r,t\rangle$ ) indicates a relationship $r$ from entity $h$ to entity $t$ . For example, $\langle$ Bob, isFriendOf, Sam $\rangle$ and $\langle$ Bob, livesIn, NewYork $\rangle$ . Note that the entities and relations in a KG are usually of different types. Link prediction in KGs aims to predict the missing $h$ or $t$ for a relation fact triple $\langle h,r,t\rangle$ , as used in [9,6,8]. In this task, for each position of the missing entity, the system is asked to rank a set of candidate entities from the knowledge graph, instead of giving only one best result [9,8].
+
+We then formulate the problem of link prediction on a graph $G$ such that $G \equiv {G}_{\text{ homo }} \equiv {G}_{kg}$ , i.e., a KG with only one type of entity and relation. Link prediction is then to predict the missing $h$ or $t$ for a relation fact triple $\langle h,r,t\rangle$ where both $h$ and $t$ are of the same kind. For example, $\langle$ Bob, isFriendOf, ? $\rangle$ or $\langle$ Sam, isFriendOf, ? $\rangle$ .
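+
+To make this formulation concrete, the following is a minimal, illustrative sketch (not code from the paper) that casts an undirected edge list into KG-style $\langle h,r,t\rangle$ triples with a single relation type; the relation name connectedTo and the toy edges are assumptions for illustration.
+
+```python
+# Minimal sketch: casting an undirected homogeneous graph into KG-style
+# triples with a single (assumed) relation type "connectedTo".
+edges = [("Bob", "Sam"), ("Bob", "Alice"), ("Sam", "Eve")]  # toy example
+
+def to_triples(edge_list, relation="connectedTo"):
+    """Return (head, relation, tail) facts in both directions,
+    since an undirected edge carries no orientation."""
+    triples = []
+    for u, v in edge_list:
+        triples.append((u, relation, v))
+        triples.append((v, relation, u))
+    return triples
+
+print(to_triples(edges)[:2])
+# [('Bob', 'connectedTo', 'Sam'), ('Sam', 'connectedTo', 'Bob')]
+```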
+
+§ 2.2 GRAPH EMBEDDING METHODS
+
+Graph embedding aims to represent a graph in a low dimensional space which preserves as much graph property information as possible. The differences between graph embedding algorithms lie in how they define the graph property to be preserved. Different algorithms have different insights into node (/edge/substructure/whole-graph) similarities and how to preserve them in the embedded space. Formally, given a graph $G = \langle V,E,A\rangle$ , a node embedding is a mapping ${f}_{1} : {v}_{i} \rightarrow {\mathbf{y}}_{\mathbf{i}} \in {\mathbb{R}}^{d}\;\forall i \in \left\lbrack n\right\rbrack$ where $d$ is the dimension of the embeddings, $n$ the number of vertices, and the function ${f}_{1}$ preserves some proximity measure defined on graph $G$ . If there are multiple types of links/relations in the graph then, similar to node embeddings, relation embeddings can be obtained as $f : {r}_{j} \rightarrow {\mathbf{y}}_{\mathbf{j}} \in {\mathbb{R}}^{d}\;\forall j \in \left\lbrack k\right\rbrack$ where $k$ is the number of relation types.
+
+Node Embeddings using Random Walks. Random walks have been used to approximate many properties of a graph, including node centrality [23] and similarity [26]. Their key innovation is optimizing the node embeddings so that nodes have similar embeddings if they tend to co-occur on short random walks over the graph. Thus, instead of using a deterministic measure of graph proximity [5], these random walk methods employ a flexible, stochastic measure of graph proximity, which has led to superior performance in a number of settings [12]. Two well known examples of random walk based methods are node2vec [13] and DeepWalk [25].
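+
+As a concrete illustration of this family of methods, the sketch below builds DeepWalk-style embeddings by feeding short random walks to word2vec; it assumes the networkx and gensim libraries, a toy graph, and illustrative hyper-parameters rather than the settings used in this paper.
+
+```python
+# DeepWalk-style node embeddings: random walks treated as "sentences".
+import random
+import networkx as nx
+from gensim.models import Word2Vec
+
+G = nx.karate_club_graph()          # toy stand-in for a community graph
+
+def random_walk(G, start, length=10):
+    walk = [start]
+    for _ in range(length - 1):
+        nbrs = list(G.neighbors(walk[-1]))
+        if not nbrs:
+            break
+        walk.append(random.choice(nbrs))
+    return [str(n) for n in walk]   # Word2Vec expects token strings
+
+walks = [random_walk(G, n) for n in G.nodes() for _ in range(5)]
+model = Word2Vec(walks, vector_size=32, window=5, min_count=0, sg=1)
+E_source = {n: model.wv[str(n)] for n in G.nodes()}   # node embeddings
+```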
+
+KG Embeddings. KG embedding methods usually consist of three steps. The first step specifies the form in which entities and relations are represented in a continuous vector space. Entities are usually represented as vectors, i.e., deterministic points in the vector space [24, 8, 31]. In the second step, a scoring function ${f}_{r}\left( {h,t}\right)$ is defined on each fact $\langle h,r,t\rangle$ to measure its plausibility. Facts observed in the KG tend to have higher scores than those that have not been observed. Finally, to learn the entity and relation representations (i.e., embeddings), the third step solves an optimization problem that maximizes the total plausibility of observed facts, as detailed in [30]. The KG embedding methods which we use for the experiments in this paper are TransE [8], TransH [31], TransD [20], RESCAL [32] and SimplE [17].
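+
+As an illustration of the scoring step, the snippet below sketches a TransE-style plausibility function (not the OpenKE implementation): a fact $\langle h,r,t\rangle$ scores higher when $h + r$ lies close to $t$ .
+
+```python
+# Illustrative TransE-style scoring function (a sketch, not a library API).
+import numpy as np
+
+def transe_score(h, r, t, norm=1):
+    """Negative distance, so higher scores mean more plausible facts."""
+    return -np.linalg.norm(h + r - t, ord=norm)
+
+d = 32
+h, r, t = (np.random.randn(d) for _ in range(3))
+print(transe_score(h, r, t))
+```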
+
+Fig. 1: Transformation Model. Input graph: green edges are missing links and red edges represent existing links. First, a random walk method outputs node embeddings (source) for a graph. These embeddings are then used to initialize the KG embedding method, which outputs finetuned embeddings. A transformation model is then trained between the source and finetuned embeddings.
+
+§ 3 METHODOLOGY
+
+A transformation model is proposed to expedite the fine-tuning process with KG embedding methods. Let ${G}_{n,m}$ be a graph with $n$ vertices and $m$ edges. Given the node embeddings of the graph $G$ , we want to transform them into optimal node embeddings.
+
+§ 3.1 NODE EMBEDDING GENERATION
+
+The input graph ${G}_{n,m}$ is fed into one of the random walk based graph embedding methods (node2vec [13] or DeepWalk [25]), which gives us the node embeddings. Let $f$ be a random walk based graph embedding method and ${E}_{\text{ source }}^{i}$ denote the output node embeddings:
+
+$$
+{E}_{\text{ source }}^{i} = f\left( {G}^{i}\right) \tag{1}
+$$
+
+where ${G}^{i}$ is the ${i}^{th}$ graph in the dataset of graphs $D = \left\{ {{G}^{1},{G}^{2},\ldots }\right\}$ and ${E}_{\text{ source }}^{i} \in$ ${\mathbb{R}}^{n \times d}$ with the embedding dimension $d$ .
+
+§ 3.2 KNOWLEDGE EMBEDDING GENERATION
+
+In a KG-based embedding algorithm (such as TransE), the input is a graph and the initial embeddings are randomly initialized. The algorithm uses a scoring function and optimizes the initial embeddings to output the trained embeddings for the given graph. Since we are working with a homogeneous graph with only one type of relation, we do not need to learn the relation embedding; hence it is kept constant and only the node embeddings are learned. Let ${E}_{\text{ initial }}^{i}$ be the initial node embeddings, ${E}_{\text{ target }}^{i}$ the trained embeddings, and $g$ the KG method with parameters $\alpha$ .
+
+$$
+{E}_{\text{ target }}^{i} = g\left( {{G}^{i},{E}_{\text{ initial }}^{i};\alpha }\right) \tag{2}
+$$
+
+where ${E}_{\text{ target }}^{i} \in {\mathbb{R}}^{n \times d}$ and ${E}_{\text{ initial }}^{i} \in {\mathbb{R}}^{n \times d}$ .
+
+Instead of using randomly initialized embeddings ${E}_{\text{ initial }}^{i}$ to obtain target embeddings ${E}_{\text{ target }}^{i}$ , we can initialize with ${E}_{\text{ source }}^{i}$ in Eq. (1) as
+
+$$
+{E}_{\text{ finetuned }}^{i} = g\left( {{G}^{i},{E}_{\text{ source }}^{i};\alpha }\right) \tag{3}
+$$
+
+where ${E}_{\text{ finetuned }}^{i} \in {\mathbb{R}}^{n \times d}$ are the fine-tuned output embeddings. This idea of better initialization has also been explored previously in [22, 10], where it has been shown to result in embeddings of higher quality.
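+
+A minimal PyTorch sketch of this initialization is given below; here E_source stands in for the $n \times d$ output of Eq. (1) and is an assumption for illustration, not the experimental code.
+
+```python
+# Sketch of the "better initialization" idea in Eq. (3): start the KG method's
+# entity table from the random-walk embeddings instead of random vectors.
+import torch
+import torch.nn as nn
+
+E_source = torch.randn(100, 32)            # placeholder for the Eq. (1) output
+entity_emb = nn.Embedding.from_pretrained(E_source.clone(), freeze=False)
+# entity_emb is then optimized by the KG scoring loss, yielding E_finetuned.
+```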
+
+§ 3.3 TRANSFORMATION MODEL WITH SELF-ATTENTION
+
+Using the node embeddings ${E}_{\text{ source }}^{i}$ from Eq. (1) and the fine-tuned KG embeddings ${E}_{\text{ finetuned }}^{i}$ from Eq. (3), we train a transformation model which learns to transform the node embeddings from a node-based method into KG embeddings. We adopt self-attention [29] on the graph adjacency matrix, as explained in Algorithm 1:
+
+$$
+{E}_{\text{ transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{ source }}^{i};\theta }\right) \tag{4}
+$$
+
+where ${E}_{\text{ transformed }}^{i} \in {R}^{n \times d}$ are the transformed embeddings and $\theta$ are the parameters of the self-attention model.
+
+The error between the fine-tuned and transformed embeddings is calculated using the squared Euclidean distance as:
+
+$$
+{E}_{\text{ error }}^{i} = 1/n\sum {\begin{Vmatrix}{E}_{\text{ transformed }}^{i} - {E}_{\text{ finetuned }}^{i}\end{Vmatrix}}^{2}. \tag{5}
+$$
+
+The loss on batch $\mathbf{X}$ of graphs is measured as:
+
+$$
+\operatorname{Loss}\left( \mathbf{X}\right) = 1/b\mathop{\sum }\limits_{{i = 1}}^{b}{E}_{\text{ error }}^{i} \tag{6}
+$$
+
+where $\mathbf{X} = \left\{ \left( {{E}_{\text{ transformed }}^{i},{E}_{\text{ finetuned }}^{i}}\right) \right\}$ and $b$ is the batch size. Since KG embeddings are trained from facts/triplets which are obtained from the adjacency matrix of the graph, a self-attention model reinforced with information from the adjacency matrix, when applied to node embeddings, is able to learn the transformation function, as observed in our experiments (Figure 3). The proposed algorithm is summarized in Algorithm 2.
+
+Algorithm 1: Self-attention on graph adjacency matrix
+
+Function SelfAttention $\left( {{G}_{n,m},{E}_{n \times d}}\right)$
+
+ ${A}_{n \times n} =$ Adjacency Matrix of ${G}_{n,m}$
+
+ ${K}_{n \times d} =$ affine(E, d)
+
+ ${Q}_{n \times d} = \operatorname{affine}\left( {\mathrm{E},\mathrm{d}}\right)$
+
+ ${\text{ Logits }}_{n \times n} =$ matmul(Q, transpose(K))
+
+ ${\text{ AttendedLogits }}_{n \times n} =$ Logits $+ \mathrm{A}$
+
+ ${V}_{n \times d} =$ affine(E, d)
+
+ ${\text{ Output }}_{n \times d} =$ matmul(AttendedLogits, V)
+
+ return Output
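+
+The following PyTorch sketch is one possible reading of Algorithm 1, assuming that affine(E, d) denotes a learned linear layer; it is illustrative rather than the authors' released code.
+
+```python
+# Self-attention on the graph adjacency matrix (a sketch of Algorithm 1).
+import torch
+import torch.nn as nn
+
+class GraphSelfAttention(nn.Module):
+    def __init__(self, d):
+        super().__init__()
+        self.key = nn.Linear(d, d)
+        self.query = nn.Linear(d, d)
+        self.value = nn.Linear(d, d)
+
+    def forward(self, A, E):
+        # A: (n, n) adjacency matrix, E: (n, d) source node embeddings
+        K, Q, V = self.key(E), self.query(E), self.value(E)
+        logits = Q @ K.T            # (n, n) pairwise attention logits
+        attended = logits + A       # inject the graph structure
+        return attended @ V         # (n, d) transformed embeddings
+```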
+
+Algorithm 2: Training the transformation model
+
+Input: Dataset of Graphs ${D}_{\text{ train }} = \left\{ {{G}^{1},{G}^{2},\ldots ,{G}^{n}}\right\}$
+
+foreach ${G}^{i}$ in ${D}_{\text{ train }}$ do
+
+ ${E}_{\text{ source }}^{i} \leftarrow f\left( {G}^{i}\right)$
+
+end
+
+foreach ${G}^{i}$ in ${D}_{\text{ train }}$ do
+
+ ${E}_{\text{ finetuned }}^{i} \leftarrow g\left( {{G}^{i},{E}_{\text{ source }}^{i};\alpha }\right)$
+
+end
+
+while true do
+
+ $\mathbf{B} = \left\{ \left( {{E}_{\text{ source }}^{i},{E}_{\text{ finetuned }}^{i}}\right) \right\}$ ▷ Sample batch
+
+ foreach ${E}_{\text{ source }}^{i}$ in $\mathbf{B}$ do
+
+ ${E}_{\text{ transformed }}^{i} = \operatorname{SelfAttention}\left( {{G}^{i},{E}_{\text{ source }}^{i};\theta }\right)$
+
+ end
+
+ $\mathbf{X} = \left\{ \left( {{E}_{\text{ transformed }}^{i},{E}_{\text{ finetuned }}^{i}}\right) \right\}$
+
+ $\theta \leftarrow \theta - \beta {\nabla }_{\theta }\operatorname{Loss}\left( \mathbf{X}\right)$ ▷ Update
+
+end
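+
+A single optimization step of Algorithm 2 with the squared Euclidean objective of Eqs. (5)-(6) might look like the sketch below; model refers to the self-attention sketch above, and the batch layout is an assumption for illustration.
+
+```python
+# One training step of the transformation model (illustrative sketch).
+import torch
+
+def training_step(model, batch, optimizer):
+    # batch: list of (A, E_source, E_finetuned) tensors for b graphs
+    optimizer.zero_grad()
+    losses = []
+    for A, E_source, E_finetuned in batch:
+        E_transformed = model(A, E_source)
+        # Eq. (5): mean over nodes of the squared Euclidean distance
+        losses.append(((E_transformed - E_finetuned) ** 2).sum(dim=1).mean())
+    loss = torch.stack(losses).mean()    # Eq. (6): average over the batch
+    loss.backward()
+    optimizer.step()
+    return loss.item()
+```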
+
+§ 4 EXPERIMENTS
+
+§ 4.1 DATASETS
+
+Yang et al. [33] introduced social network datasets with ground-truth communities. Each dataset $D$ is a network with a total of $N$ nodes, $E$ edges, and a set of communities (Table 1).
+
+| Dataset | Description | Nodes | Edges | Communities |
+| --- | --- | --- | --- | --- |
+| YouTube | Friendship | 1,134,890 | 2,987,624 | 8,385 |
+| DBLP | Co-authorship | 317,080 | 1,049,866 | 13,477 |
+| Amazon | Co-purchasing | 334,863 | 925,872 | 75,149 |
+| LiveJournal | Friendship | 3,997,962 | 34,681,189 | 287,512 |
+| Orkut | Friendship | 3,072,441 | 117,185,083 | 6,288,363 |
+
+Table 1: Datasets.
+
+
+Fig. 2: Histogram showing community size vs. its frequency. The DBLP, YouTube, and Amazon datasets have smaller communities, while LiveJournal and Orkut have larger communities.
+
+The communities in each dataset are of different sizes, ranging from small (1-20) to larger (380-400). There are more communities of small size, and their frequency decreases as their size increases. This trend is depicted in Figure 2.
+
+YouTube ${}^{3}$ , Orkut ${}^{3}$ and LiveJournal ${}^{3}$ are friendship networks where each community is a user-defined group. Nodes in the community represent users, and edges represent their friendship.
+
+DBLP${}^{3}$ is a co-authorship network where two authors are connected if they have published at least one paper together. A community is represented by a publication venue, e.g., a journal or conference. Authors who published in a certain journal or conference form a community.
+
+The Amazon ${}^{3}$ co-purchasing network is based on the "Customers Who Bought This Item Also Bought" feature of the Amazon website. If a product $i$ is frequently co-purchased with product $j$ , the graph contains an undirected edge between $i$ and $j$ . Each connected component of a product category defined by Amazon acts as a community, where nodes represent products in the same category and edges indicate that they were purchased together.
+
+§ 4.2 TRAINING
+
+We consider each community in a dataset as an individual graph ${G}_{n,m}$ with vertices representing the entities in the community and edges representing their relationships. For training the transformation model, we select communities of a particular size range, which act as the dataset $D$ of graphs (Table 2). We randomly disable 20% of the links (edges) in each graph to act as missing links for link prediction. In all experiments, the embedding dimension is set to 32, which works best in our pilot test. We used OpenNE ${}^{4}$ for generating node2vec and DeepWalk embeddings and OpenKE [14] for generating KG embeddings. The dataset $D$ of graphs is split into train, validation, and test splits of 64%, 16%, and 20%, respectively.
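+
+The link-hiding step of this setup can be sketched as follows; this is an illustrative toy example with networkx, not the exact experimental code.
+
+```python
+# Hide a fraction of edges to serve as the "missing links" for evaluation.
+import random
+import networkx as nx
+
+def hide_edges(G, frac=0.2, seed=0):
+    rng = random.Random(seed)
+    edges = list(G.edges())
+    hidden = rng.sample(edges, int(frac * len(edges)))
+    G_observed = G.copy()
+    G_observed.remove_edges_from(hidden)
+    return G_observed, hidden      # train on G_observed, test on hidden
+
+G_observed, missing = hide_edges(nx.karate_club_graph())
+```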
+
+${}^{3}$ http://snap.stanford.edu/data/index.html#communities
+
+| Dataset | Graph Size | Number of Graphs | Average Degree | Average Density |
+| --- | --- | --- | --- | --- |
+| YouTube | 16-21 | 338 | 3.00 | 0.17 |
+| DBLP | 16-21 | 654 | 4.93 | 0.29 |
+| Amazon | 21-25 | 1425 | 4.00 | 0.18 |
+| LiveJournal | 51-55 | 1504 | 6.11 | 0.12 |
+| LiveJournal | 61-65 | 1101 | 7.20 | 0.11 |
+| LiveJournal | 71-75 | 806 | 7.53 | 0.10 |
+| LiveJournal | 81-85 | 672 | 6.58 | 0.08 |
+| LiveJournal | 91-95 | 497 | 8.01 | 0.08 |
+| LiveJournal | 101-105 | 400 | 6.85 | 0.06 |
+| LiveJournal | 111-115 | 351 | 5.89 | 0.05 |
+| LiveJournal | 121-125 | 332 | 7.67 | 0.06 |
+| Orkut | 151-155 | 1868 | 7.20 | 0.04 |
+| Orkut | 251-255 | 654 | 7.21 | 0.028 |
+| Orkut | 351-355 | 335 | 7.33 | 0.020 |
+
+Table 2: Selected datasets and graph sizes for experiments.
+
+§ 4.3 EVALUATION METRICS
+
+For evaluation, we use MRR and Precision@K. The algorithm predicts a ranked list of candidates for an incoming query. A filtering operation removes pre-existing triples in the knowledge graph from the list. MRR computes the mean of the reciprocal rank of the correct candidate in the list, and Precision@K evaluates the rate at which correct candidates appear in the top $K$ predicted candidates. Due to space constraints, we only present the results for MRR. Results for Precision@K can be found on our GitHub ${}^{5}$ .
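+
+For clarity, a small sketch of the filtered MRR computation is given below; the function name and data layout are assumptions for illustration.
+
+```python
+# Filtered MRR: known triples are removed from the ranked candidate list
+# before locating the correct answer (illustrative sketch).
+def mrr(ranked_lists, true_answers, known=()):
+    total = 0.0
+    for candidates, answer in zip(ranked_lists, true_answers):
+        filtered = [c for c in candidates if c == answer or c not in known]
+        total += 1.0 / (filtered.index(answer) + 1)
+    return total / len(ranked_lists)
+
+print(mrr([["a", "b", "c"]], ["b"], known={"a"}))   # -> 1.0 after filtering
+```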
+
+§ 5 RESULTS & DISCUSSIONS
+
+From the results depicted in Figure 3, we observe that the target KG embeddings (TransE, TransH, etc.) almost always outperform the random-walk based source embeddings (node2vec and DeepWalk), except in the case of SimplE and DistMult, where both methods perform poorly. This can also be observed in Figure 4.
+
+The finetuned KG embeddings achieve better or equivalent performance compared to the target KG embeddings. This is confirmed by the ANOVA test in Figure 4, where there is no significant difference between the MRRs obtained from the finetuned and target KG embeddings in most cases. Specifically, translation-based methods such as TransE, TransH, and TransD have equivalent performance for finetuned and target embeddings, whereas for SimplE, RESCAL, and DistMult the finetuned embeddings become better than the target embeddings as the graph size grows.
+
+${}^{4}$ https://github.com/thunlp/OpenNE
+
+${}^{5}$ https://github.com/ArchitParnami/GraphProject
+
+
+Fig. 3: Performance evaluation of different embeddings on link prediction using MRR (y-axis). Source (green) refers to embeddings from node2vec (left) and DeepWalk (right). Target (brown) refers to KG embeddings from TransE, TransH, TransD, SimplE, RESCAL, or DistMult. For each source and target pair, we evaluate finetuned (orange) embeddings (obtained by initializing target method with source embeddings) and transformed (red) embeddings (obtained by applying transformation model on source embeddings). Results are presented on different datasets of varying graph sizes.
+
+
+Fig. 4: ANOVA test of MRR scores from two embedding methods (Method 1 and Method 2). The difference of MRR scores between the two methods is significant when their p-values are $< {0.05}$ (light green) and not significant otherwise (light red). The values in each cell are the difference between the means of MRR scores from two methods (Method 2 - Method 1). The text in bold represents when Method 2 did better than Method 1. Source method refers to node2vec (left) and DeepWalk (right). Target method refers to TransE, TransH, TransD, SimplE, RESCAL, or DistMult in each row.
+
+
+Fig. 5: CPU time (left y-axis) vs. graph size (x-axis) and mean MRR (right y-axis) vs. graph size, comparing finetuned embeddings (TransE finetuned from node2vec) and transformed embeddings (from node2vec). As the graph size increases, the time to obtain embeddings from KG methods (TransE) also increases significantly. However, there is no significant increase in time for the transformation (from node2vec) once we have the transformation model. The mean MRR scores of both finetuned and transformed embeddings also drop with increasing graph size; however, they perform equally well (for graphs with fewer than 76 nodes). Note that finetuning time and transformation time both include the time to obtain node2vec embeddings.
+
+Transformed embeddings consistently outperform source embeddings and have similar performance to finetuned embeddings, at least for graphs of sizes up to 65. The performance drop starts at graph size 71-75 for the transformation from DeepWalk to TransD, and at 81-85 for the transformation from node2vec to TransE. For RESCAL, the transformation works for larger graphs with node2vec, and up to size 121-125 with DeepWalk.
+
+As the graph size increases (top to bottom), the overall MRR scores decrease for all embeddings, as expected. In Figure 5, we compare the computation time and MRR performance of transformed and finetuned embeddings where the source method is node2vec and the target method is TransE. It can be seen that the transformed embeddings give similar performance to finetuned embeddings (without any significant increase in computational cost) up to graphs of size 71-75. Thereafter, the transformed embeddings perform poorly; we attribute this to the poor finetuned embeddings on which the transformation model was trained.
+
+§ 6 CONCLUSION
+
+In this work, we have demonstrated that random-walk based node embedding (source) methods are computationally efficient but give sub-optimal results on link prediction in social networks, whereas KG based embedding (target & finetuned) methods perform better but are computationally expensive. For our requirement of generating optimal embeddings quickly for real-time link prediction, we proposed a self-attention based transformation model to convert walk-based embeddings into optimal KG embeddings. The proposed model works well for smaller graphs, but as the complexity of the graph increases, the transformation performance decreases. For future work, our goal is to explore better transformation models for bigger graphs.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..475fab1d3efac58402c58b14fa0ff477337a56b4
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,419 @@
+# Flashlight: Scalable Link Prediction with Effective Decoders
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Link prediction (LP) has been recognized as an important task in graph learning with broad practical applications. A typical application of LP is to retrieve the top scoring neighbors for a given source node, such as friend recommendation. These services require high inference scalability to find the top scoring neighbors from many candidate nodes at low latency. There are two popular decoders that recent LP models mainly use to compute the edge scores from node embeddings: the HadamardMLP and Dot Product decoders. After theoretical and empirical analysis, we find that the HadamardMLP decoders are generally more effective for LP. However, HadamardMLP lacks the scalability for retrieving top scoring neighbors on large graphs, since, to the best of our knowledge, there is no algorithm to retrieve the top scoring neighbors for HadamardMLP decoders in sublinear complexity. To make HadamardMLP scalable, we propose the Flashlight algorithm to accelerate the top scoring neighbor retrieval for HadamardMLP: a sublinear algorithm that progressively applies approximate maximum inner product search (MIPS) techniques with adaptively adjusted query embeddings. Empirical results show that Flashlight improves the inference speed of LP by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Our work paves the way for large-scale LP applications with the effective HadamardMLP decoders by greatly accelerating their inference.
+
+## 1 Introduction
+
+The goal of link prediction (LP) is to predict the missing links in a graph [1]. LP has drawn increasing attention in the past decade due to its broad practical applications [2]. For instance, LP can be used to recommend new friends on social media [3] and to recommend attractive items to customers on E-commerce sites [4], so as to improve the user experience. During inference, these applications demand that LP methods retrieve the top scoring neighbors for a source node at low latency. This is especially challenging on large graphs because the LP methods need to search many candidate nodes to find the top scoring neighbors.
+
+There are two main kinds of architecture followed by recent LP models. The first uses an encoder, e.g., GCN [5], to obtain node-level embeddings and a decoder, e.g., Dot Product, to get the edge scores between the paired nodes [6]. The second crops a subgraph for every edge and computes the edge score from the subgraph directly [7]. The inference speed of the second is much lower than that of the first, so we focus on the first kind of model to achieve fast inference on large graphs. In recent years, extensive research has focused on developing more expressive LP encoders [6, 8]. However, much less work pays attention to the essential impact of the choice of decoder on LP performance. In this work, we theoretically and empirically analyze two popular LP decoders, Dot Product and HadamardMLP (an MLP following the Hadamard Product), and find that the latter is generally more effective than the former.
+
+In practical applications, we should consider not only the effectiveness of LP, but also its inference efficiency. Many LP applications require fast retrieval of the top scoring neighbors for low-latency services [3, 9, 10]. For a Dot Product decoder, this retrieval can be approximated efficiently with sublinear time complexity [11]. However, to the best of our knowledge, no such sublinear algorithms exist for the top scoring neighbor retrieval of the HadamardMLP decoders. This means that for every source node, we have to iterate over all the nodes in the graph to compute the scores so as to find the top scoring neighbors for HadamardMLP, which is of linear complexity and cannot scale to large graphs.
+
+Figure 1: Two popular LP decoders: The Dot Product (left), equivalent to the element-wise summation following the Hadamard product, and the HadamardMLP decoder (right).
+
+To allow LP applications to enjoy the high effectiveness of HadamardMLP decoders while avoiding their poor inference scalability, we propose a scalable top scoring neighbor search algorithm named Flashlight. Flashlight progressively calls well-developed approximate maximum inner product search (MIPS) techniques for a few iterations. At every iteration, we analyze the retrieved neighbors and adaptively adjust the query embedding for Flashlight to find the missed high scoring neighbors. The Flashlight algorithm has sublinear time complexity for finding the top scoring neighbors of HadamardMLP decoders, allowing for fast and scalable inference. Empirical results show that Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Overall, our work paves the way for the use of effective LP decoders in practical settings by greatly accelerating their inference.
+
+## 2 Revisiting Link Prediction Decoders
+
+In this section, we formalize the link prediction (LP) problem and the LP decoders. Typically, an LP model includes an encoder that learns the node-level embeddings ${\mathbf{x}}_{i}, i \in \mathcal{V}$ , where $\mathcal{V}$ is the set of nodes, and a decoder $\phi : {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \rightarrow \mathbb{R}$ that combines the node-level embeddings of a pair of nodes ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$ into a single score ${s}_{ij}$ . The higher ${s}_{ij}$ is, the more likely the link between nodes $i$ and $j$ is to exist. The state-of-the-art models generally use graph neural networks as the encoders [5, 6, 8, 12, 13]. From here on, we mainly focus on the decoder $\phi$ .
+
+### 2.1 Dot Product Decoder
+
+The most common decoder of link prediction is the Dot Product $\left\lbrack {6,8,{10}}\right\rbrack$ :
+
+$$
+{s}_{ij} = {\phi }^{\text{dot }}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{:=}} {\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}, \tag{1}
+$$
+
+where $\cdot$ denotes the dot product.
+
+Training a link prediction model with the Dot Product decoder encourages the embeddings of connected nodes to be close to each other. Intuitively, the score ${s}_{ij}$ can be thought of as a measure of the squared Euclidean distance between the node embeddings ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$ , as ${\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}\end{Vmatrix}}^{2} = {\begin{Vmatrix}{\mathbf{x}}_{i}\end{Vmatrix}}^{2} - 2{\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j} + {\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}}^{2}$ , if $\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}$ is constant over the neighbors $j \in \mathcal{N}$ , e.g., after normalization [14]. Because the node embeddings represent the semantic information of nodes, Dot Product assumes homophily of the graph topology, i.e., semantically similar nodes are more likely to be connected.
+
+### 2.2 HadamardMLP (MLP following Hadamard Product) Decoder
+
+Multi-layer perceptrons (MLPs) are known to be universal approximators that can approximate any continuous function on a compact set [15]. An MLP layer can be defined as a function $f : {\mathbb{R}}^{{d}_{\text{in }}} \rightarrow {\mathbb{R}}^{{d}_{\text{out }}}$ :
+
+$$
+{f}_{\mathbf{W}}\left( \mathbf{x}\right) = \operatorname{ReLU}\left( {\mathbf{W}\mathbf{x}}\right) \tag{2}
+$$
+
+which is parameterized by the learnable weight $\mathbf{W} \in {\mathbb{R}}^{{d}_{\text{out }} \times {d}_{\text{in }}}$ (the bias, if it exists, can be represented by an additional column in $\mathbf{W}$ and an additional channel in the input $\mathbf{x}$ with value 1). ReLU is the activation function. In an MLP, several layers of $f$ are stacked; e.g., a 3-layer MLP can be formalized as ${f}_{{\mathbf{W}}_{3}}\left( {{f}_{{\mathbf{W}}_{2}}\left( {{f}_{{\mathbf{W}}_{1}}\left( \mathbf{x}\right) }\right) }\right)$ .
+
+
+
+Figure 2: HadamardMLP achieves higher Mean Reciprocal Rank (MRR, higher is better) than other decoders on the OGBL-CITATION2 [16] dataset with the encoder as GraphSAGE [12] and GCN [5]. More empirical results and the detailed settings are in Sec. 6.3.
+
+The state-of-the-art models widely use an MLP following the Hadamard Product between the paired nodes as the decoder (the HadamardMLP decoder for short) [6, 8, 10, 16]:
+
+$$
+{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{:=}} \operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}\left( {{f}_{{\mathbf{W}}_{L - 1}}\left( {\ldots {f}_{{\mathbf{W}}_{1}}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) \ldots }\right) }\right) , \tag{3}
+$$
+
+where $\odot$ denotes the Hadamard Product. Fig. 1 illustrates these two decoders: the Dot Product and the HadamardMLP.
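+
+A minimal PyTorch sketch of these two decoders is shown below; the hidden width and depth are illustrative choices, not the benchmark configuration.
+
+```python
+# Dot Product (Eq. 1) and HadamardMLP (Eq. 3) decoders, sketched in PyTorch.
+import torch
+import torch.nn as nn
+
+def dot_product_decoder(x_i, x_j):
+    return (x_i * x_j).sum(dim=-1)              # Eq. (1)
+
+class HadamardMLPDecoder(nn.Module):
+    def __init__(self, d, hidden=256):
+        super().__init__()
+        self.mlp = nn.Sequential(
+            nn.Linear(d, hidden), nn.ReLU(),
+            nn.Linear(hidden, hidden), nn.ReLU(),
+            nn.Linear(hidden, 1),
+        )
+
+    def forward(self, x_i, x_j):
+        return self.mlp(x_i * x_j).squeeze(-1)  # Eq. (3)
+```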
+
+### 2.3 Other Link Prediction Decoders
+
+In principle, any function that takes two vectors as input and outputs a scalar can act as the decoder. For example, there is the bilinear dot product decoder (the Bilinear decoder for short) [6]:
+
+$$
+{s}_{ij} = {\mathbf{h}}_{i}^{T}\mathbf{W}{\mathbf{h}}_{j}, \tag{4}
+$$
+
+where $\mathbf{W}$ is a learnable weight matrix, and the MLP following the concatenation of the paired node embeddings [6,10] (the ConcatMLP decoder for short):
+
+$$
+{s}_{ij} = \operatorname{MLP}\left( {{\mathbf{h}}_{i}\parallel {\mathbf{h}}_{j}}\right) \tag{5}
+$$
+
+among others. These two decoders are used much less than Dot Product and HadamardMLP in state-of-the-art LP models, possibly due to their lower effectiveness [6, 8, 10, 16].
+
+### 2.4 HadamardMLP is Generally More Effective than Other Decoders
+
+Dot Product demands homophily of the graph data to effectively infer links between nodes. In contrast, thanks to its universal approximation capability, an MLP can approximate any continuous function and thus does not demand homophily of the graph data for effective LP. This gap in expressiveness accounts for the performance difference between these two decoders on many datasets (see Sec. 6.3). We additionally show in Appendix A that a HadamardMLP can easily learn the Dot Product, which also partially accounts for the better effectiveness of the HadamardMLP decoders over the Dot Product. Existing work also finds that the effectiveness of Bilinear and ConcatMLP is generally worse than that of the HadamardMLP or Dot Product decoder [6, 8, 10, 16]. We confirm these findings more rigorously in the empirical results in Fig. 2 and more completely in Sec. 6.3.
+
+## 3 Scalability of Link Prediction Decoders
+
+Most academic studies focus on training runtime when discussing scalability. However, in industrial applications, the inference speed is often more important. The inference of many LP applications needs to retrieve the top scoring neighbors given a source node, e.g., recommending friends to a user. Given a source node, if there are $n$ nodes in the graph, then the inference time complexity is $\mathcal{O}\left( n\right)$ if the decoder needs to iterate over all $n$ nodes to compute the edge scores. For large scale applications, $n$ is typically in the range of millions, or even larger. The empirical results show that the inference time for finding the top scoring neighbors of a source node is longer than one second for HadamardMLP on the OGBL-CITATION2 dataset of nearly three million nodes (see Sec. 6.5).
+
+For a Dot Product decoder, the problem of finding the top scoring neighbors can be approximated efficiently. This is a well-studied problem, known as approximate maximum inner product search (MIPS) [17, 18] (see Sec. 5.2 for a comprehensive literature review). MIPS techniques allow the Dot Product's inference to be completed in a few milliseconds, even with millions of neighbors. There exists some work that tries to extend MIPS to the ConcatMLP [19, 20]. These methods place strict assumptions on the models' training and are not directly applicable to the HadamardMLP. To the best of our knowledge, no such sublinear techniques exist for the top scoring neighbor retrieval with the HadamardMLP [10], which is a complex nonlinear function.
+
+To summarize, the HadamardMLP decoder is not scalable for the real time LP services on large graphs, while the Dot Product decoder allows fast retrieval using the well established MIPS techniques.
+
+## 4 Flashlight: Scalable Link Prediction with Effective Decoders
+
+Sec. 2 has shown that the HadamardMLP decoder enjoys higher effectiveness than the Dot Product decoder, which supports the superior performance of HadamardMLP on many LP benchmarks. On the other hand, Sec. 3 has shown that the HadamardMLP is not scalable for real time LP applications on large graphs, while Dot Product supports the fast inference using the well-established MIPS techniques. In this section, we aim to devise fast inference algorithms for HadamardMLP to enable scalable LP with effective decoders.
+
+We exploit the advances in the well-developed MIPS techniques to accelerate the inference of HadamardMLP. Specifically, we divide the top scoring retrieval for HadamardMLP predictors into a sequence of MIPS queries. Our algorithm works in a progressive manner: the query embedding in every search is adaptively adjusted to find the high scoring neighbors missed in the last search.
+
+The challenge of retrieving the highest scoring neighbors for HadamardMLP is rooted in not knowing which neurons are activated: if we knew which neurons are activated, the nonlinear HadamardMLP would reduce to a linear model. On the $l$ th MLP layer, we define the mask matrix ${\mathbf{M}}_{\mathcal{A}, l} \in {\mathbb{R}}^{{d}_{l} \times {d}_{l}}$ to represent the set of activated neurons $\mathcal{A}$ as
+
+$$
+{M}_{ij} = \left\{ \begin{array}{ll} 1, & \text{ if }i = j\text{ and }i \in \mathcal{A} \\ 0, & \text{ otherwise } \end{array}\right. \tag{6}
+$$
+
+With ${\mathbf{M}}_{\mathcal{A}, l}$ , we reformulate the HadamardMLP decoder as:
+
+$$
+{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}{\mathbf{W}}_{L - 1}\ldots {\mathbf{M}}_{\mathcal{A},1}{\mathbf{W}}_{1}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)
+$$
+
+$$
+= \left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right) \cdot {\mathbf{x}}_{j} \tag{7}
+$$
+
+Because the vector ${\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}^{T}{\mathbf{w}}_{L}$ is determined by the weights of MLP and the activated neurons $\mathcal{A}$ , we term it as ${\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right)$ :
+
+$$
+{\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \mathrel{\text{:=}} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}^{T}{\mathbf{w}}_{L} \tag{8}
+$$
+
+Given the source node $i$ , because the score ${s}_{ij}$ is obtained by the dot product between $\left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}^{T}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right)$ and the neighbor embedding ${\mathbf{x}}_{j}$ , we term the former vector as the query embedding $\mathbf{q}$ :
+
+$$
+\mathbf{q} \mathrel{\text{:=}} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A}, L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} = {\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \odot {\mathbf{x}}_{i} \tag{9}
+$$
+
+In this way, we can reformulate the output of the decoder ${\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right)$ as
+
+$$
+{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = \mathbf{q} \cdot {\mathbf{x}}_{j}. \tag{10}
+$$
+
+In practice, we can use $\mathbf{q}$ as the query embedding in MIPS to retrieve the neighbors with the highest inner products, which correspond to the highest scores. The remaining issue is how to obtain the activated neurons $\mathcal{A}$ so as to form the query embedding $\mathbf{q}$ : different node pairs activate different neurons. Initially, without knowing which neurons are activated, we first assume all neurons are activated, i.e., we have the initial query embedding:
+
+$$
+\mathbf{q}\left\lbrack 1\right\rbrack = \left( {\mathop{\prod }\limits_{{i = 1}}^{{L - 1}}{\mathbf{W}}_{i}^{T}}\right) {\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} \tag{11}
+$$
+
+Algorithm 1 Flashlight: progressively "illuminates" the semantic space to retrieve the high scoring neighbors for the LP HadamardMLP decoders.
+
+---
+
+Input: A trained HadamardMLP decoder ${\phi }^{\text{MLP }}$ that outputs the logit ${s}_{ij}$ for the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$ . The
+
+set of nodes $\mathcal{V}$ . The node embedding set $\mathcal{X} = \left\{ {{\mathbf{x}}_{i} \mid i \in \mathcal{V}}\right\}$ . A source node $i$ . The number of iterations
+
+$T$ . The number of neighbors to retrieve at every iteration: $\mathbf{N} = \left\lbrack {{N}_{1},{N}_{2},\ldots ,{N}_{T}}\right\rbrack$ .
+
+Output: The recommended neighbors $\mathcal{N}$ for the source node $i$ .
+
+ Initialize the set of retrieved recommended neighbors $\mathcal{N} \leftarrow \varnothing$ .
+
+ Initialize the set of activated neurons as $\mathcal{A}\left\lbrack 0\right\rbrack$ as all the neurons in MLP.
+
+ for $t \leftarrow 1$ to $T$ do
+
+ Calculate the query embedding $\mathbf{q}\left\lbrack t\right\rbrack \leftarrow {\mathbf{x}}_{i} \odot {\operatorname{MLP}}_{\mathcal{A}\left\lbrack {t - 1}\right\rbrack }\left( \cdot \right)$ .
+
+ $\mathcal{N}\left\lbrack t\right\rbrack \leftarrow {N}_{t}$ neighbors in $\mathcal{X}$ that maximizes the inner product with $\mathbf{q}\left\lbrack t\right\rbrack$ .
+
+ $\mathcal{X} \leftarrow \mathcal{X} \smallsetminus \left\{ {{\mathbf{x}}_{j} \mid j \in \mathcal{N}\left\lbrack t\right\rbrack }\right\} .$
+
+ ${j}^{ \star }\left\lbrack t\right\rbrack = \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)$
+
+ $\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right)$ .
+
+ $\mathcal{N} \leftarrow \mathcal{N} \cup \mathcal{N}\left\lbrack t\right\rbrack$ .
+
+ return $\mathcal{N}$
+
+---
+
+This initial design reflects the general trend of increasing edge scores in LP, without restricting which neurons are activated. We use $\mathbf{q}\left\lbrack 1\right\rbrack$ as the query embedding to retrieve the neighbors with the highest inner products as $\mathcal{N}\left\lbrack 1\right\rbrack$ in the first iteration. Then, given the retrieved neighbors in the $t$ th iteration as $\mathcal{N}\left\lbrack t\right\rbrack$ , we analyze $\mathcal{N}\left\lbrack t\right\rbrack$ and adaptively adjust the query embedding $\mathbf{q}\left\lbrack {t + 1}\right\rbrack$ used in the next iteration to find more high scoring neighbors. Specifically, we feed $\mathcal{N}\left\lbrack t\right\rbrack$ forward through the MLP. We define the function $A\left( {\cdot , \cdot }\right)$ that returns the set of activated neurons of an MLP (the first input) for the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$ (the second input). Then we can use it to extract $\mathcal{A}$ as:
+
+$$
+\mathcal{A} = A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{12}
+$$
+
+Then, we obtain the set of activated neurons of the highest scored neighbor at the $t$ th iteration as:
+
+$$
+\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right) \text{, where }{j}^{ \star }\left\lbrack t\right\rbrack = \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{13}
+$$
+
+This implies that neighbors activating $\mathcal{A}\left\lbrack t\right\rbrack$ can obtain high edge scores. Then, if we take $\mathcal{A}\left\lbrack t\right\rbrack$ as the set of neurons assumed to be activated at the next query, we can find more high scoring neighbors. In this way, we set the neurons assumed to be activated in the next iteration to $\mathcal{A}\left\lbrack t\right\rbrack$ . We repeat the above iterations until enough neighbors are retrieved. The algorithm is summarized in Alg. 1.
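+
+The sketch below illustrates these iterations for a two-layer HadamardMLP decoder with score ${\mathbf{w}}_{2}^{T}\operatorname{ReLU}\left( {\mathbf{W}}_{1}\left( {\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}\right) \right)$ ; it is an illustrative reading of Alg. 1 in which an exact top-$k$ search stands in for an approximate MIPS backend such as ScaNN, and the tensor names are assumptions.
+
+```python
+# Flashlight iterations for a 2-layer HadamardMLP decoder (illustrative).
+import torch
+
+def flashlight(W1, w2, x_i, X, T=3, k=200):
+    # W1: (h, d), w2: (h,), x_i: (d,) source embedding, X: (n, d) candidates
+    retrieved, candidates = [], torch.arange(X.shape[0])
+    mask = torch.ones(W1.shape[0])               # start with all neurons active
+    for _ in range(T):
+        q = (W1.T @ (mask * w2)) * x_i           # query embedding, Eq. (9)
+        scores = X[candidates] @ q               # inner products (MIPS stand-in)
+        top = scores.topk(min(k, len(candidates))).indices
+        picked = candidates[top]
+        exact = torch.relu((x_i * X[picked]) @ W1.T) @ w2   # exact decoder scores
+        best = picked[exact.argmax()]
+        # neurons activated by the best neighbor drive the next query, Eq. (13)
+        mask = (torch.relu((x_i * X[best]) @ W1.T) > 0).float()
+        retrieved.extend(picked.tolist())
+        keep = torch.ones(len(candidates), dtype=torch.bool)
+        keep[top] = False
+        candidates = candidates[keep]
+    return retrieved
+```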
+
+We name our algorithm Flashlight because it works like a flashlight, progressively "illuminating" the semantic space to find the high scoring neighbors. The query embeddings are like the light sent from the flashlight, and the process of adjusting the query embeddings is like progressively adjusting the "light" from the "flashlight" by checking the "objects" found in the last "illumination".
+
+In the experiments, we find that our Flashlight algorithm effectively finds the top scoring neighbors among the massive set of candidate neighbors. For example, in Fig. 3, Flashlight is able to find the top 100 scoring neighbors out of nearly three million candidates by retrieving only 200 neighbors on the large OGBL-CITATION2 graph dataset for the HadamardMLP decoders.
+
+Complexity Analysis. Using MLP decoders to compute the LP probabilities of all the neighbors has complexity $\mathcal{O}\left( N\right)$ , where $N$ is the number of nodes in the whole graph. Finding the top scoring neighbors from the exact probabilities of all the neighbors also has linear complexity $\mathcal{O}\left( N\right)$ . Overall, using MLP decoders to find the top scoring neighbors has time complexity $\mathcal{O}\left( N\right)$ . In contrast, our Flashlight progressively calls the MIPS techniques a constant number of times, invariant to the graph data, which leads to the same sublinear complexity as MIPS. In conclusion, our Flashlight improves the scalability and applicability of HadamardMLP decoders by reducing their inference time complexity from linear to sublinear.
+
+Table 1: Statistics of datasets.
+
+| Dataset | OGBL-DDI | OGBL-COLLAB | OGBL-PPA | OGBL-CITATION2 |
+| --- | --- | --- | --- | --- |
+| #Nodes | 4,267 | 235,868 | 576,289 | 2,927,963 |
+| #Edges | 1,334,889 | 1,285,465 | 30,326,273 | 30,561,187 |
+
+## 5 Related Work
+
+### 5.1 Link Prediction Models
+
+Existing LP models can be categorized into three families: heuristic feature based [3, 9, 21-23], latent embedding based [12, 24-28], and neural network based ones. The neural network based link prediction models have mainly been developed in recent years and explore non-linear deep structural features with neural layers. Variational graph auto-encoders [13] predict links by encoding the graph with graph convolutional layers [5]. Two other state-of-the-art neural models, WLNM [29] and SEAL [30], use a graph labeling algorithm to transform the union neighborhood of two nodes (the enclosing subgraph) into a meaningful matrix and employ a convolutional neural layer or a novel graph neural layer, DGCNN [31], for encoding. More recently, [6, 8] summarized the architectures of LP models and formally defined the encoders and decoders.
+
+Different from previous work, we focus on analyzing the effectiveness of different LP decoders and improving the scalability of the effective ones. In practice, we find that the HadamardMLP decoders exhibit superior effectiveness but poor inference scalability. Our work significantly accelerates the inference of HadamardMLP decoders to make effective LP scalable.
+
+### 5.2 Maximum Inner Product Search
+
+Finding the top scoring neighbors for the Dot Product decoder at the sublinear time complexity is a well studied research problem, known as the approximate maximum inner product search (MIPS). There are several approaches to MIPS: sampling based [11, 32, 33], LSH-based [34-37], graph based [38-40], and quantization approaches [17, 18]. MIPS is a fundamental building block in various application domains [41-46], such as information retrieval [47, 48], pattern recognition [49, 50], data mining [51, 52], machine learning [53, 54], and recommendation systems [55, 56].
+
+With the explosive growth of dataset scale and the inevitable curse of dimensionality, MIPS is essential for offering scalable services. However, the HadamardMLP decoders are nonlinear, and no well-studied sublinear-complexity algorithms exist to find their top scoring neighbors [10]. In this work, we utilize the well-studied approximate MIPS techniques with adaptively adjusted query embeddings to find the top scoring neighbors for the MLP decoders in a progressive manner. Our method supports plug-and-play use during inference and significantly accelerates LP inference with the effective MLP decoders.
+
+## 6 Experiments
+
+In this section, we first compare the effectiveness of different LP decoders. We find that the HadamardMLP decoders generally perform better than other decoders. Then, we implement our Flashlight algorithm with LP models to show that Flashlight effectively retrieves the top scoring neighbors for the HadamardMLP decoders. As a result, the inference efficiency and scalability of HadamardMLP decoders are improved significantly by our work.
+
+### 6.1 Datasets
+
+We evaluate link prediction on Open Graph Benchmark (OGB) data [57]. We use four OGB datasets with different graph types: OGBL-DDI, OGBL-COLLAB, OGBL-CITATION2, and OGBL-PPA. OGBL-DDI is a homogeneous, unweighted, undirected graph representing the drug-drug interaction network. Each node represents a drug, and edges represent interactions between drugs. OGBL-COLLAB is an undirected graph representing a subset of the collaboration network between authors indexed by MAG. Each node represents an author, and edges indicate collaboration between authors. All nodes come with 128-dimensional features. OGBL-CITATION2 is a directed graph representing the citation network between a subset of papers extracted from MAG. Each node is a paper with 128-dimensional word2vec features. OGBL-PPA is an undirected, unweighted graph. Nodes represent proteins from 58 different species, and edges indicate biologically meaningful associations between proteins. The statistics of these datasets are presented in Table 1.
+
+Table 2: The test effectiveness comparison of LP decoders on four OGB datasets (DDI, COLLAB, PPA, and CITATION2) [16]. We report the results of the standard metrics averaged over 10 runs following existing work [6, 16]. HadamardMLP is more effective than other decoders. Flashlight effectively retrieves the top scoring neighbors for HadamardMLP and keeps its exact outputs.
+
+| Decoder | Dot Product | Bilinear | ConcatMLP | HadamardMLP | HadamardMLP w/ Flashlight |
+| --- | --- | --- | --- | --- | --- |
+| **OGBL-DDI** | | | | | |
+| GCN [5] | ${13.8} \pm {1.8}$ | ${16.1} \pm {1.2}$ | ${12.9} \pm {1.4}$ | ${37.1} \pm {5.1}$ | ${37.1} \pm {5.1}$ |
+| GraphSAGE [12] | ${36.5} \pm {2.6}$ | ${39.4} \pm {1.7}$ | ${34.2} \pm {1.9}$ | $\mathbf{{53.9} \pm {4.7}}$ | $\mathbf{{53.9} \pm {4.7}}$ |
+| Node2Vec [27] | ${11.6} \pm {1.9}$ | ${13.8} \pm {1.6}$ | ${10.8} \pm {1.7}$ | ${23.3} \pm {2.1}$ | ${23.3} \pm {2.1}$ |
+| **OGBL-COLLAB** | | | | | |
+| GCN [5] | ${42.9} \pm {0.7}$ | ${43.2} \pm {0.9}$ | ${42.3} \pm {1.0}$ | ${44.8} \pm {1.1}$ | ${44.8} \pm {1.1}$ |
+| GraphSAGE [12] | ${37.3} \pm {0.9}$ | ${41.5} \pm {0.8}$ | ${37.0} \pm {0.7}$ | ${48.1} \pm {0.8}$ | $\mathbf{{48.1} \pm {0.8}}$ |
+| Node2Vec [27] | ${27.7} \pm {1.1}$ | ${31.5} \pm {1.0}$ | ${27.2} \pm {0.8}$ | $\mathbf{{48.9} \pm {0.5}}$ | ${48.9} \pm {0.5}$ |
+| **OGBL-PPA** | | | | | |
+| GCN [5] | ${5.1} \pm {0.4}$ | ${5.8} \pm {0.5}$ | ${6.2} \pm {0.6}$ | ${18.7} \pm {1.3}$ | $\mathbf{{18.7} \pm {1.3}}$ |
+| GraphSAGE [12] | ${3.2} \pm {0.3}$ | ${6.5} \pm {0.7}$ | ${5.8} \pm {0.4}$ | ${16.6} \pm {2.4}$ | ${16.6} \pm {2.4}$ |
+| Node2Vec [27] | ${4.2} \pm {0.5}$ | ${7.8} \pm {0.6}$ | ${8.3} \pm {0.4}$ | $\mathbf{{22.3} \pm {0.8}}$ | $\mathbf{{22.3} \pm {0.8}}$ |
+| **OGBL-CITATION2** | | | | | |
+| GCN [5] | ${65.3} \pm {0.4}$ | ${69.0} \pm {0.8}$ | ${62.7} \pm {0.3}$ | $\mathbf{{84.7} \pm {0.2}}$ | $\mathbf{{84.7} \pm {0.2}}$ |
+| GraphSAGE [12] | ${62.2} \pm {0.7}$ | ${65.4} \pm {0.9}$ | ${60.8} \pm {0.6}$ | $\mathbf{{80.4} \pm {0.1}}$ | ${80.4} \pm {0.1}$ |
+| Node2Vec [27] | ${52.7} \pm {0.8}$ | ${54.1} \pm {0.6}$ | ${51.4} \pm {0.5}$ | ${61.4} \pm {0.1}$ | $\mathbf{{61.4} \pm {0.1}}$ |
+
+### 6.2 Hyper-parameter Settings
+
+For all experiments in this section, we report the average and standard deviation over ten runs with different random seeds. The results are reported on the best model selected using the validation data. We set the hyper-parameters of the used techniques and the considered baseline methods, e.g., the batch size, the number of hidden units, the optimizer, and the learning rate, as suggested by their authors. We use the recent MIPS method ScaNN [18] in the implementation of our Flashlight. For the hyper-parameters of our Flashlight, we have found in the experiments that the performance of Flashlight is robust to hyper-parameter changes over a broad range. Therefore, we simply set the number of iterations of our Flashlight to $T = 3$ and the number of retrieved neighbors per iteration to a constant 200 by default. We run all experiments on a machine with 80 Intel(R) Xeon(R) E5-2698 v4 @ 2.20GHz CPUs and a single NVIDIA V100 GPU with 16GB RAM.
+
+### 6.3 Effectiveness of Link Prediction Decoders
+
+We follow the standard benchmark settings of the OGB datasets to evaluate the effectiveness of LP with different decoders. The benchmark setting of OGBL-DDI is to predict drug-drug interactions given information on already known drug-drug interactions. The performance is evaluated by Hits@20: each true drug interaction is ranked among a set of approximately 100,000 randomly sampled negative drug interactions, and we count the ratio of positive edges that are ranked at the 20th place or above. The task of OGBL-COLLAB is to predict future author collaborations given the past collaborations. The evaluation metric is Hits@50, where each true collaboration is ranked among a set of 100,000 randomly sampled negative collaborations. The task of OGBL-PPA is to predict new association edges given the training edges. The evaluation metric is Hits@100, where each positive edge is ranked among 3,000,000 randomly sampled negative edges. The task of OGBL-CITATION2 is to predict missing citations given the existing citations. The evaluation metric is the Mean Reciprocal Rank (MRR), where the reciprocal rank of the true reference among 1,000 sampled negative candidates is calculated for each source node, and then the average is taken over all source nodes.
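+
+As a concrete illustration of these ranking metrics, the following NumPy sketch computes Hits@$K$ and the MRR from positive and negative edge scores; the variable names and toy scores are ours, not part of the benchmark code:
+
+```python
+import numpy as np
+
+def hits_at_k(pos_scores, neg_scores, k=20):
+    """Fraction of positive edges scored above the k-th highest negative score."""
+    kth_neg = np.sort(neg_scores)[-k]           # threshold given by the k-th best negative
+    return float(np.mean(pos_scores > kth_neg))
+
+def mrr(pos_score, neg_scores):
+    """Reciprocal rank of one true edge among its sampled negative candidates."""
+    rank = 1 + int(np.sum(neg_scores >= pos_score))
+    return 1.0 / rank
+
+pos = np.array([0.9, 0.4, 0.7])                 # toy positive edge scores
+neg = np.random.rand(1000)                      # toy sampled negative edge scores
+print(hits_at_k(pos, neg, k=20), np.mean([mrr(p, neg) for p in pos]))
+```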
+
+We implement the different decoders introduced in Sec. 2, including the Dot Product, Bilinear, ConcatMLP, and HadamardMLP decoders, over the LP encoders GCN [5], GraphSAGE [12], and Node2Vec [27], to compare the effects of different decoders on the LP effectiveness. We present the results on the OGBL-DDI, OGBL-COLLAB, OGBL-PPA, and OGBL-CITATION2 datasets in Table 2. We observe that the HadamardMLP decoder outperforms the other decoders on all encoders and datasets. Our Flashlight algorithm effectively retrieves the top scoring neighbors for the HadamardMLP decoder and keeps the exact LP probabilities of the HadamardMLP's output, which leads to identical results for the HadamardMLP decoder with and without Flashlight.
+
+Note that the benchmark settings of these datasets sample a small portion of negative edges for the test evaluation, which is not challenging enough to evaluate the scalability of LP decoders on retrieving the top scoring neighbors from massive candidates in practice.
+
+### 6.4 The Flashlight Algorithm Effectively Finds the Top Scoring Neighbors
+
+To evaluate the effectiveness of our Flashlight on retrieving the top scoring neighbors for the HadamardMLP decoder, we propose a more challenging test setting for the OGB LP datasets. Given a source node, we take its top 100 scoring neighbors under the HadamardMLP decoder as the ground truth for retrieval. We set the task as retrieving $k$ neighbors for a source node that match the ground-truth neighbors as closely as possible. We formally define the metric Recall $@k$ as the fraction of the ground-truth neighbors that appear among the top $k$ neighbors retrieved by a method.
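+
+For clarity, Recall $@k$ can be computed as in the following sketch (the identifiers are illustrative):
+
+```python
+def recall_at_k(retrieved_ids, ground_truth_ids, k):
+    """Fraction of the ground-truth top scoring neighbors among the first k retrieved."""
+    retrieved_topk = set(retrieved_ids[:k])
+    hits = sum(1 for j in ground_truth_ids if j in retrieved_topk)
+    return hits / len(ground_truth_ids)
+
+# toy usage: 100 ground-truth neighbors, 500 retrieved candidates
+ground_truth = list(range(100))
+retrieved = list(range(50)) + list(range(200, 650))
+print(recall_at_k(retrieved, ground_truth, k=100))   # 0.5
+```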
+
+We sample 1000 nodes as the source nodes from the OGBL-DDI and OGBL-CITATION2 datasets respectively for evaluation. We evaluate the effectiveness of our Flashlight algorithm by checking whether it can find the top scoring neighbors for every source node. We set the number of Flashlight iterations as 10 and the number of retrieved neighbors per iteration as 50. We present the Recall@$k$ for $k$ from 1 to 500 averaged over all the source nodes in Fig. 3. The "oracle" curve represents the performance of an optimal searcher, whose retrieved top $k$ neighbors are exactly the top $k$ scoring neighbors of HadamardMLP.
+
+
+
+Figure 3: Recall $@k$ is the fraction of the 100 top scoring neighbors of HadamardMLP ranked in the top $k$ neighbors retrieved by Flashlight. We report Recall $@k$ averaged over all the source nodes on OGBL-CITATION2 and OGBL-DDI.
+
+When $k = {100}$ , the 100 neighbors retrieved by our Flashlight cover more than ${80}\%$ of the ground-truth neighbors. When $k \geq {200}$ , the recall reaches ${100}\%$ . As a comparison, if we randomly sample the candidate neighbors for retrieval, Recall $@k$ grows linearly with $k$ and is less than $1 \times {10}^{-4}$ for $k = {100}$ on the OGBL-CITATION2 dataset. The curves of Flashlight are close to the optimal curve of the "oracle". These results demonstrate the high effectiveness of our Flashlight in finding the top scoring neighbors.
+
+On both the large OGBL-CITATION2 dataset and the smaller OGBL-DDI dataset, our Flashlight exhibits similar Recall $@k$ performance across different numbers $k$ of retrieved neighbors. This implies that our Flashlight can accurately find the top scoring neighbors for both small and large graphs.
+
+### 6.5 Inference Efficiency of Link Prediction with Our Flashlight Algorithm
+
+We use throughput to evaluate the inference speed of neighbor retrieval of different methods. The throughput is defined as the number of source nodes that a method can serve per second when retrieving the top 100 scoring neighbors. Besides the LP models that follow the encoder-decoder architecture, e.g., GraphSAGE [12], GCN [5], and PLNLP [6], there are subgraph based LP models, e.g., SUREL [7] and SEAL [58]. The common issue of the subgraph based models is their poor efficiency: they have to crop a separate subgraph for every node pair to calculate the LP probability for that pair. In this sense, the node embeddings cannot be shared across the LP calculations for different node pairs. This leads to a much lower inference speed for the subgraph based LP models than for the encoder-decoder LP models. We compare the inference efficiency of different methods on the OGBL-CITATION2 dataset in Fig. 4, where we present the inference speed of different methods when achieving ${100}\%$ Recall@100 for the top 100 scoring neighbors.
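+
+For reference, throughput can be measured as in the following sketch, where `retrieve_top_k` is a placeholder for any retrieval routine (e.g., Flashlight or a full scan):
+
+```python
+import time
+
+def measure_throughput(retrieve_top_k, source_nodes, k=100):
+    """Throughput = source nodes served per second when retrieving their top-k neighbors."""
+    start = time.perf_counter()
+    for src in source_nodes:
+        retrieve_top_k(src, k)       # placeholder retrieval call
+    return len(source_nodes) / (time.perf_counter() - start)
+
+# toy usage with a dummy retriever
+print(measure_throughput(lambda src, k: list(range(k)), source_nodes=range(100)))
+```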
+
+We observe that our Flashlight significantly accelerates the inference of the LP models GraphSAGE [12], GCN [5], and PLNLP [6] with the HadamardMLP decoders by more than 100 times. This gap will be even larger on larger datasets, because inference with our Flashlight has sublinear time complexity while the HadamardMLP decoders have linear complexity. Note that the y-axis is in logarithmic scale. The subgraph based methods SUREL [7] and SEAL [58] achieve throughputs lower than $1 \times {10}^{-2}$ and $1 \times {10}^{-3}$ respectively, which is not applicable to practical services that require latencies of milliseconds.
+
+
+
+Figure 4: The inference speed of different LP methods on the OGBL-CITATION2 dataset. The y-axis (throughput) is in logarithmic scale.
+
+
+
+Figure 5: The tradeoff between the inference speed (y-axis) and the effectiveness of finding the top scoring neighbors (x-axis) on the OGBL-CITATION2 (left) and OGBL-PPA (right) datasets.
+
+Taking a further step, we comprehensively evaluate the tradeoff between the inference speed and the effectiveness of finding the top scoring neighbors. Taking GraphSAGE as the encoder, we present the tradeoff curves between the throughput and the Recall@100 on the OGBL-CITATION2 and OGBL-PPA datasets in Fig. 5. As the baseline for our Flashlight, we take the HadamardMLP decoder with random sampling. For example, on the OGBL-CITATION2 dataset, when achieving a Recall@100 of more than 80%, the HadamardMLP with our Flashlight can serve more than 200 source nodes per second, while the HadamardMLP with random sampling serves less than 1 node per second. Overall, our Flashlight achieves a much better tradeoff between inference speed and effectiveness than the HadamardMLP with random sampling.
+
+## 7 Conclusion
+
+Our theoretical and empirical analysis suggests that the HadamardMLP decoders are a better default choice than the Dot Product in terms of LP effectiveness. Because there does not exist a well-developed sublinear-complexity algorithm for searching the top scoring neighbors of HadamardMLP, the HadamardMLP decoders are not scalable and cannot support fast inference on large graphs. To resolve this issue, we propose the Flashlight algorithm to accelerate the inference of LP models with HadamardMLP decoders. Flashlight progressively applies the well-studied MIPS techniques for a few iterations. We adaptively adjust the query embeddings at every iteration to find more high scoring neighbors. Empirical results show that our Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 graph. Overall, our work paves the way for the use of strong LP decoders in practical settings by greatly accelerating their inference.
+
+References
+
+[1] Linyuan Lü and Tao Zhou. Link prediction in complex networks: A survey. Physica A: statistical mechanics and its applications, 390(6):1150-1170, 2011. 1
+
+[2] Víctor Martínez, Fernando Berzal, and Juan-Carlos Cubero. A survey of link prediction in complex networks. ACM computing surveys (CSUR), 49(4):1-33, 2016. 1
+
+[3] Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social networks, 25(3): 211-230, 2003. 1, 6
+
+[4] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37, 2009. 1
+
+[5] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 1, 2, 3, 6, 7, 8
+
+[6] Zhitao Wang, Yong Zhou, Litao Hong, Yuanhang Zou, and Hanjing Su. Pairwise learning for neural link prediction. arXiv preprint arXiv:2112.02936, 2021. 1, 2, 3, 6, 7, 8, 13
+
+[7] Haoteng Yin, Muhan Zhang, Yanbang Wang, Jianguo Wang, and Pan Li. Algorithm and system co-design for efficient subgraph-based graph representation learning. arXiv preprint arXiv:2202.13538, 2022. 1, 8, 9
+
+[8] Chuxiong Sun and Guoshi Wu. Adaptive graph diffusion networks with hop-wise attention. arXiv preprint arXiv:2012.15024, 2020. 1, 2, 3, 6, 13
+
+[9] Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. The European Physical Journal B, 71(4):623-630, 2009. 1, 6
+
+[10] Steffen Rendle, Walid Krichene, Li Zhang, and John Anderson. Neural collaborative filtering vs. matrix factorization revisited. In Fourteenth ACM conference on recommender systems, pages 240-248, 2020. 1, 2, 3, 4, 6, 13, 14
+
+[11] Rui Liu, Tianyi Wu, and Barzan Mozafari. A bandit approach to maximum inner product search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4376-4383, 2019. 1, 6
+
+[12] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017. 2, 3, 6, 7, 8
+
+[13] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016. 2, 6
+
+[14] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, pages 974-983, 2018. 2
+
+[15] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303-314, 1989. 2
+
+[16] Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. Ogb-lsc: A large-scale challenge for machine learning on graphs. arXiv preprint arXiv:2103.09430, 2021. 3, 7
+
+[17] Xinyan Dai, Xiao Yan, Kelvin KW Ng, Jiu Liu, and James Cheng. Norm-explicit quantization: Improving vector quantization for maximum inner product search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 51-58, 2020. 4, 6
+
+[18] Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. Accelerating large-scale inference with anisotropic vector quantization. In International Conference on Machine Learning, pages 3887-3896. PMLR, 2020. 4, 6, 7
+
+[19] Shulong Tan, Zhixin Zhou, Zhaozhuo Xu, and Ping Li. Fast item ranking under neural network based measures. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 591-599, 2020. 4
+
+[20] Rihan Chen, Bin Liu, Han Zhu, Yaoxuan Wang, Qi Li, Buting Ma, Qingbo Hua, Jun Jiang, Yunlong Xu, Hongbo Deng, et al. Approximate nearest neighbor search under neural similarity metric for large-scale recommendation. arXiv preprint arXiv:2202.10226, 2022. 4
+
+[21] Gobinda G Chowdhury. Introduction to modern information retrieval. Facet publishing, 2010. 6
+
+[22] David Liben-Nowell and Jon Kleinberg. The link prediction problem for social networks. In Proceedings of the twelfth international conference on Information and knowledge management, pages 556-559, 2003.
+
+[23] Glen Jeh and Jennifer Widom. Simrank: a measure of structural-context similarity. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 538-543, 2002. 6
+
+[24] Aditya Krishna Menon and Charles Elkan. Link prediction via matrix factorization. In Joint european conference on machine learning and knowledge discovery in databases, pages 437- 452. Springer, 2011. 6
+
+[25] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710, 2014.
+
+[26] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067-1077, 2015.
+
+[27] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855-864, 2016. 7
+
+[28] Zhitao Wang, Chengyao Chen, and Wenjie Li. Predictive network representation learning for link prediction. In Proceedings of the 40th international ACM SIGIR conference on research and development in information retrieval, pages 969-972, 2017. 6
+
+[29] Muhan Zhang and Yixin Chen. Weisfeiler-lehman neural machine for link prediction. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 575-583, 2017. 6
+
+[30] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. Advances in neural information processing systems, 31, 2018. 6
+
+[31] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. 6
+
+[32] Edith Cohen and David D Lewis. Approximating matrix multiplication for pattern recognition tasks. Journal of Algorithms, 30(2):211-252, 1999. 6
+
+[33] Hsiang-Fu Yu, Cho-Jui Hsieh, Qi Lei, and Inderjit S Dhillon. A greedy approach for budgeted maximum inner product search. Advances in neural information processing systems, 30, 2017. 6
+
+[34] Qiang Huang, Guihong Ma, Jianlin Feng, Qiong Fang, and Anthony KH Tung. Accurate and fast asymmetric locality-sensitive hashing scheme for maximum inner product search. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1561-1570, 2018. 6
+
+[35] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric lshs for inner product search. In International Conference on Machine Learning, pages 1926-1934. PMLR, 2015.
+
+[36] Anshumali Shrivastava and Ping Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). Advances in neural information processing systems, 27, 2014.
+
+[37] Xiao Yan, Jinfeng Li, Xinyan Dai, Hongzhi Chen, and James Cheng. Norm-ranging lsh for maximum inner product search. Advances in Neural Information Processing Systems, 31, 2018. 6
+
+[38] Jie Liu, Xiao Yan, Xinyan Dai, Zhirong Li, James Cheng, and Ming-Chang Yang. Understanding and improving proximity graph based maximum inner product search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 139-146, 2020. 6
+
+[39] Stanislav Morozov and Artem Babenko. Non-metric similarity graphs for maximum inner product search. Advances in Neural Information Processing Systems, 31, 2018.
+
+[40] Zhixin Zhou, Shulong Tan, Zhaozhuo Xu, and Ping Li. Möbius transformation for fast inner product search on graph. Advances in Neural Information Processing Systems, 32, 2019. 6
+
+[41] Kazuo Aoyama, Kazumi Saito, Hiroshi Sawada, and Naonori Ueda. Fast approximate similarity search based on degree-reduced neighborhood graphs. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1055-1063, 2011. 6
+
+[42] Akhil Arora, Sakshi Sinha, Piyush Kumar, and Arnab Bhattacharya. Hd-index: Pushing the scalability-accuracy boundary for approximate knn search in high-dimensional spaces. arXiv preprint arXiv:1804.06829, 2018.
+
+[43] Cong Fu, Chao Xiang, Changxu Wang, and Deng Cai. Fast approximate nearest neighbor search with the navigating spreading-out graph. arXiv preprint arXiv:1707.00143, 2017.
+
+[44] Yu A Malkov and Dmitry A Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence, 42(4):824-836, 2018.
+
+[45] Philipp M Riegger. Literature survey on nearest neighbor search and search in graphs. 2010.
+
+[46] Wenhui Zhou, Chunfeng Yuan, Rong Gu, and Yihua Huang. Large scale nearest neighbors search based on neighborhood graph. In 2013 International Conference on Advanced Cloud and Big Data, pages 181-186. IEEE, 2013. 6
+
+[47] Myron Flickner, Harpreet Sawhney, Wayne Niblack, Jonathan Ashley, Qian Huang, Byron Dom, Monika Gorkani, Jim Hafner, Denis Lee, Dragutin Petkovic, et al. Query by image and video content: The qbic system. computer, 28(9):23-32, 1995. 6
+
+[48] Chun Jiang Zhu, Tan Zhu, Haining Li, Jinbo Bi, and Minghu Song. Accelerating large-scale molecular similarity search through exploiting high performance computing. In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 330-333. IEEE, 2019. 6
+
+[49] Thomas Cover and Peter Hart. Nearest neighbor pattern classification. IEEE transactions on information theory, 13(1):21-27, 1967. 6
+
+[50] Atsutake Kosuge and Takashi Oshima. An object-pose estimation acceleration technique for picking robot applications by using graph-reusing k-nn search. In 2019 First International Conference on Graph Computing (GC), pages 68-74. IEEE, 2019. 6
+
+[51] Qiang Huang, Jianlin Feng, Qiong Fang, Wilfred Ng, and Wei Wang. Query-aware locality-sensitive hashing scheme for $l_p$ norm. The VLDB Journal, 26(5):683-708, 2017. 6
+
+[52] Masajiro Iwasaki. Pruned bi-directed k-nearest neighbor graph for proximity search. In International Conference on Similarity Search and Applications, pages 20-33. Springer, 2016. 6
+
+[53] Yuan Cao, Heng Qi, Wenrui Zhou, Jien Kato, Keqiu Li, Xiulong Liu, and Jie Gui. Binary hashing for approximate nearest neighbor search on big data: A survey. IEEE Access, 6: 2039-2054, 2017. 6
+
+[54] Scott Cost and Steven Salzberg. A weighted nearest neighbor algorithm for learning with symbolic features. Machine learning, 10(1):57-78, 1993. 6
+
+[55] Yitong Meng, Xinyan Dai, Xiao Yan, James Cheng, Weiwen Liu, Jun Guo, Benben Liao, and Guangyong Chen. Pmd: An optimal transportation-based user distance for recommender systems. In European Conference on Information Retrieval, pages 272-280. Springer, 2020. 6
+
+[56] Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, pages 285-295, 2001. 6
+
+[57] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020. 6
+
+[58] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. Advances in Neural Information Processing Systems, 34:9061-9073, 2021. 8, 9
+
+[59] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242-252. PMLR, 2019. 13
+
+[60] Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks. In International conference on machine learning, pages 1908-1916. PMLR, 2014. 13
+
+## A Learning a Dot Product decoder with a HadamardMLP decoder is Easy
+
+We have discussed the limitations of the Dot Product decoder above. An interesting question is whether the HadamardMLP decoder can replace the Dot Product decoder by approximating it. If an MLP decoder can learn a dot product easily, it is safe to use the MLP decoder instead of the dot product one in most cases. Similar problems are actively studied in machine learning. Existing work implies that, in theory, the difficulty scales polynomially with the dimensionality $d$ and $1/\epsilon$ [10, 59, 60]. This motivates us to investigate the question empirically.
+
+
+
+Figure 6: An MLP decoder can learn a Dot Product decoder well with enough training data. The left and right figures show the MSE differences (y-axis) per epoch (x-axis) between the outputs of the dot product and the MLP decoders for different training set sizes, with the input embedding dimensionality $d = {64}$ and $d = {128}$ respectively. The naive output denotes outputs of all zeros.
+
+
+
+Figure 7: Test inverse MSE differences between the outputs of Dot Product and MLP decoders after convergence (y-axis) versus the training set size (x-axis).
+
+We set up a synthetic learning task where, given two embeddings ${\mathbf{x}}_{i},{\mathbf{x}}_{j} \in {\mathbb{R}}^{d}$ and a label ${\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}$ , we want to obtain an MLP function that approximates ${\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}$ from the inputs ${\mathbf{x}}_{i},{\mathbf{x}}_{j} \in {\mathbb{R}}^{d}$ . For this experiment, we create datasets based on an embedding matrix $\mathbf{E} \in {\mathbb{R}}^{{10}^{6} \times d}$ . We draw every row in $\mathbf{E}$ from $\mathcal{N}\left( {0,\mathbf{I}}\right)$ independently. Then, we uniformly sample (without replacement) ${10}^{4}$ and $S$ embedding pair combinations from $\mathbf{E}$ to form the test and training sets (no overlap), respectively.
+
+We train the MLP on the training set and evaluate it on the test set. For the architecture of the MLP, we keep it simple: we follow the existing work $\left\lbrack {6,8}\right\rbrack$ to set the number of layers to 2 and the number of hidden units to the input embedding dimensionality $d$ . For the optimizer, we also follow the existing work $\left\lbrack {6,8}\right\rbrack$ and choose the Adam optimizer. As evaluation metric, we compute the MSE (Mean Squared Error) differences between the predicted scores of the MLP and the dot product decoders. We also measure the MSE of a naive model that always predicts 0 (the average rating). Every experiment is repeated 5 times and we report the mean.
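+
+The following PyTorch sketch outlines this synthetic setup; it simplifies the sampling (pairs are drawn independently rather than strictly without replacement), uses a smaller embedding matrix than the paper's ${10}^{6}$ rows, and the hyper-parameters are illustrative:
+
+```python
+import torch
+import torch.nn as nn
+
+d, n_train, n_test = 64, 100_000, 10_000
+E = torch.randn(10**5, d)                             # embedding matrix, rows ~ N(0, I)
+
+def sample_pairs(n):
+    idx = torch.randint(0, E.size(0), (n, 2))
+    z = E[idx[:, 0]] * E[idx[:, 1]]                   # Hadamard-product input
+    return z, z.sum(dim=1)                            # label = dot product x_i . x_j
+
+x_train, y_train = sample_pairs(n_train)
+x_test, y_test = sample_pairs(n_test)
+
+mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))   # 2 layers, d hidden units
+opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
+
+for epoch in range(50):                               # full-batch training, illustrative
+    opt.zero_grad()
+    loss = nn.functional.mse_loss(mlp(x_train).squeeze(-1), y_train)
+    loss.backward()
+    opt.step()
+
+test_mse = nn.functional.mse_loss(mlp(x_test).squeeze(-1), y_test)
+naive_mse = (y_test ** 2).mean()                      # naive model that always predicts 0
+print(float(test_mse), float(naive_mse))
+```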
+
+Fig. 6 shows the approximation errors of the MLP per epoch for different numbers of training pairs and dimensions. The figure suggests that an MLP can easily approximate the dot product with enough training data. Consistent with the theory, the number of samples needed scales polynomially with increasing dimensions and decreasing errors. Anecdotally, we observe that the number of needed training samples is about $\mathcal{O}\left( {{d}^{\alpha }/{\epsilon }^{\beta }}\right)$ for $\alpha \approx 2,\beta \ll 1$ (see Fig. 7). In all cases, the MSE errors of the MLP decoder are negligible compared with the naive output.
+
+This experiment shows that an MLP can easily approximate the dot product with enough training data. We hope this can explain, at least partially, why the MLP decoder generally performs better than the dot product.
+
+Our conclusion seems to contradict the existing work [10], which claims that it is hard for ConcatMLP to learn a Dot Product. Actually, our conclusion does not conflict with that in [10]. The ConcatMLP decoder processes the concatenation of the paired embeddings instead of their Hadamard product as the HadamardMLP does. The HadamardMLP holds an inductive bias similar to the Dot Product, which makes it easy for the former to learn the latter. In fact, we show that a simple two-layer MLP with only two hidden units is equivalent to the Dot Product with specific weights. We assign the first layer weights of the two hidden units as $\mathbf{1}$ and $-\mathbf{1}$ and the second layer weights as $1$ and $-1$ . Then, we have its output as:
+
+$$
+{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = \operatorname{ReLU}\left( {\mathbf{1} \cdot \left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) }\right) - \operatorname{ReLU}\left( {-\mathbf{1} \cdot \left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) }\right) = \mathbf{1} \cdot \left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) = {\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}, \tag{14}
+$$
+
+which is equivalent to the Dot Product decoder. From this result, we find that any HadamardMLP decoder with at least two hidden units can, with a careful initialization, represent the Dot Product decoder exactly and thus can learn the Dot Product easily.
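+
+A quick numerical check of the construction in Eq. (14) (the tensor names are ours):
+
+```python
+import torch
+
+d = 16
+xi, xj = torch.randn(d), torch.randn(d)
+z = xi * xj                                           # Hadamard product fed to the MLP
+
+W1 = torch.stack([torch.ones(d), -torch.ones(d)])     # first-layer weights: rows 1 and -1
+w2 = torch.tensor([1.0, -1.0])                        # second-layer weights: 1 and -1
+
+s = w2 @ torch.relu(W1 @ z)                           # ReLU(1.z) - ReLU(-1.z) = 1.z
+print(torch.allclose(s, xi @ xj))                     # True: reproduces the dot product
+```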
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4b6bc2f2a8e64e7b4fb733703bba9016f729e63e
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/-H-AKyXZnHn/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,327 @@
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Link prediction (LP) has been recognized as an important task in graph learning with broad practical applications. A typical application of LP is to retrieve the top scoring neighbors for a given source node, such as friend recommendation. These services require high inference scalability to find the top scoring neighbors from many candidate nodes at low latencies. There are two popular decoders that recent LP models mainly use to compute the edge scores from node embeddings: the HadamardMLP and Dot Product decoders. After theoretical and empirical analysis, we find that the HadamardMLP decoders are generally more effective for LP. However, HadamardMLP lacks the scalability for retrieving top scoring neighbors on large graphs, since to the best of our knowledge, there does not exist an algorithm to retrieve the top scoring neighbors for HadamardMLP decoders in sublinear complexity. To make HadamardMLP scalable, we propose the Flashlight algorithm to accelerate the top scoring neighbor retrievals for HadamardMLP: a sublinear algorithm that progressively applies approximate maximum inner product search (MIPS) techniques with adaptively adjusted query embeddings. Empirical results show that Flashlight improves the inference speed of LP by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Our work paves the way for large-scale LP applications with the effective HadamardMLP decoders by greatly accelerating their inference.
+
+§ 1 INTRODUCTION
+
+The goal of link prediction (LP) is to predict the missing links in a graph [1]. LP has drawn increasing attention in the past decade due to its broad practical applications [2]. For instance, LP can be used to recommend new friends on social media [3] and to recommend attractive items to customers on E-commerce sites [4], so as to improve the user experience. During inference, these applications demand that the LP methods retrieve the top scoring neighbors for a source node at low latencies. This is especially challenging on large graphs because the LP methods need to search many candidate nodes to find the top scoring neighbors.
+
+There are two main kinds of architecture followed by the recent LP models. The first uses an encoder, e.g., GCN [5], to obtain the node-level embeddings and a decoder, e.g., Dot Product, to get the edge scores between the paired nodes [6]. The second crops a subgraph for every edge and computes the edge score from the subgraph directly [7]. The inference speed of the second is much lower than that of the first, so we focus on the first kind of model to achieve fast inference on large graphs. In recent years, extensive research has focused on developing more expressive LP encoders [6, 8]. However, much less work pays attention to the essential impact of the choice of decoder on LP performance. In this work, we theoretically and empirically analyze two popular LP decoders: Dot Product and HadamardMLP (an MLP following the Hadamard Product), and find that the latter is generally more effective than the former.
+
+In practical applications, we should not only consider the effectiveness of LP, but also inference efficiency. Many LP applications generally require fast retrieval of the top scoring neighbors for low-latency services $\left\lbrack {3,9,{10}}\right\rbrack$ . For a Dot Product decoder, this retrieval can be approximated efficiently at the sublinear time complexity [11]. However, to the best of our knowledge, no such sublinear algorithms exist for the top scoring neighbor retrievals of the HadamardMLP decoders. This means
+
+ < g r a p h i c s >
+
+Figure 1: Two popular LP decoders: The Dot Product (left), equivalent to the element-wise summation following the Hadamard product, and the HadamardMLP decoder (right).
+
+that for every source node, we have to iterate over all the nodes in the graph to compute the scores so as to find the top scoring neighbors for HadamardMLP, which is of linear complexity and cannot scale to large graphs.
+
+To allow LP applications to enjoy the high effectiveness of HadamardMLP decoders while avoiding their poor inference scalability, we propose a scalable top scoring neighbor search algorithm named Flashlight. Our Flashlight progressively calls the well-developed approximate maximum inner product search (MIPS) techniques for a few iterations. At every iteration, we analyze the retrieved neighbors and adaptively adjust the query embedding for Flashlight to find the high scoring neighbors missed so far. Our Flashlight algorithm has sublinear time complexity for finding the top scoring neighbors of HadamardMLP decoders, allowing for fast and scalable inference. Empirical results show that Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 dataset without sacrificing effectiveness. Overall, our work paves the way for the use of effective LP decoders in practical settings by greatly accelerating their inference.
+
+§ 2 REVISITING LINK PREDICTION DECODERS
+
+In this section, we formalize the link prediction (LP) problem and the LP decoders. Typically, many LP models include an encoder that learns the node-level embeddings ${\mathbf{x}}_{i},i \in \mathcal{V}$ , where $\mathcal{V}$ is the set of nodes, and a decoder $\phi : {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \rightarrow \mathbb{R}$ that combines the node-level embeddings of a pair of nodes ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$ into a single score ${s}_{ij}$ . The higher ${s}_{ij}$ is, the more likely the link between nodes $i$ and $j$ is to exist. The state-of-the-art models generally use graph neural networks as the encoders [5, 6, 8, 12, 13]. From here on, we mainly focus on the decoder $\phi$ .
+
+§ 2.1 DOT PRODUCT DECODER
+
+The most common decoder of link prediction is the Dot Product $\left\lbrack {6,8,{10}}\right\rbrack$ :
+
+$$
+{s}_{ij} = {\phi }^{\text{ dot }}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{ := }} {\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j}, \tag{1}
+$$
+
+where $\cdot$ denotes the dot product.
+
+Training a link prediction model with the Dot Product decoder encourages the embeddings of the connected nodes to be close to each other. Intuitively, the score ${s}_{ij}$ can be thought of as a measure of the squared Euclidean distance between the node embeddings ${\mathbf{x}}_{i},{\mathbf{x}}_{j}$ , as ${\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}\end{Vmatrix}}^{2} = {\begin{Vmatrix}{\mathbf{x}}_{i}\end{Vmatrix}}^{2} - 2{\mathbf{x}}_{i} \cdot {\mathbf{x}}_{j} + {\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}}^{2}$ , if $\begin{Vmatrix}{\mathbf{x}}_{j}\end{Vmatrix}$ is constant over the neighbors $j \in \mathcal{N}$ , e.g., after normalization [14]. Because the node embeddings represent the semantic information of nodes, Dot Product assumes homophily of the graph topology, i.e., that semantically similar nodes are more likely to be connected.
+
+§ 2.2 HADAMARDMLP (MLP FOLLOWING HADAMARD PRODUCT) DECODER
+
+Multi-layer perceptrons (MLPs) are known to be universal approximators that can approximate any continuous function on a compact set [15]. An MLP layer can be defined as a function $f : {\mathbb{R}}^{{d}_{\text{ in }}} \rightarrow$ ${\mathbb{R}}^{{d}_{\text{ out }}}$ :
+
+$$
+{f}_{\mathbf{W}}\left( \mathbf{x}\right) = \operatorname{ReLU}\left( {\mathbf{W}\mathbf{x}}\right) \tag{2}
+$$
+
+which is parameterized by the learnable weight $\mathbf{W} \in {\mathbb{R}}^{{d}_{\text{ out }} \times {d}_{\text{ in }}}$ (the bias, if it exists, can be represented by an additional column in $\mathbf{W}$ and an additional channel in the input $\mathbf{x}$ with value 1). ReLU is the activation function. In an MLP, several layers of $f$ are stacked, e.g., a 3-layer MLP can be formalized as ${f}_{{\mathbf{W}}_{3}}\left( {{f}_{{\mathbf{W}}_{2}}\left( {{f}_{{\mathbf{W}}_{1}}\left( \mathbf{x}\right) }\right) }\right)$ .
+
+ < g r a p h i c s >
+
+Figure 2: HadamardMLP achieves a higher Mean Reciprocal Rank (MRR, higher is better) than other decoders on the OGBL-CITATION2 [16] dataset with GraphSAGE [12] and GCN [5] as encoders. More empirical results and the detailed settings are in Sec. 6.3.
+
+The state-of-the-art models widely use an MLP following the Hadamard Product between the paired node embeddings as the decoder (HadamardMLP decoder for short) [6, 8, 10, 16]:
+
+$$
+{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) \mathrel{\text{ := }} \operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}\left( {{f}_{{\mathbf{W}}_{L - 1}}\left( {\ldots {f}_{{\mathbf{W}}_{1}}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) \ldots }\right) }\right) , \tag{3}
+$$
+
+where $\odot$ denotes the Hadamard Product. Fig. 1 illustrates these two decoders: the Dot Product and the HadamardMLP.
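+
+As a minimal PyTorch sketch of the two decoders defined in Eqs. (1) and (3) (the layer sizes are illustrative, not the settings used in our experiments):
+
+import torch
+import torch.nn as nn
+
+class DotProductDecoder(nn.Module):
+    # Eq. (1): s_ij = x_i . x_j
+    def forward(self, x_i, x_j):
+        return (x_i * x_j).sum(dim=-1)
+
+class HadamardMLPDecoder(nn.Module):
+    # Eq. (3): s_ij = MLP(x_i element-wise-times x_j)
+    def __init__(self, d, hidden=256):
+        super().__init__()
+        self.mlp = nn.Sequential(
+            nn.Linear(d, hidden), nn.ReLU(),
+            nn.Linear(hidden, hidden), nn.ReLU(),
+            nn.Linear(hidden, 1))
+    def forward(self, x_i, x_j):
+        return self.mlp(x_i * x_j).squeeze(-1)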
+
+§ 2.3 OTHER LINK PREDICTION DECODERS
+
+In principle, any function that takes two vectors as input and outputs a scalar can act as the decoder. For example, there is the bilinear dot product decoder (Bilinear decoder for short) [6]:
+
+$$
+{s}_{ij} = {\mathbf{h}}_{i}^{T}\mathbf{W}{\mathbf{h}}_{j}, \tag{4}
+$$
+
+where $\mathbf{W}$ is the learnable weight, and the MLP following the concatenation of the paired embeddings [6, 10] (ConcatMLP decoder for short):
+
+$$
+{s}_{ij} = \operatorname{MLP}\left( {{\mathbf{h}}_{i}\parallel {\mathbf{h}}_{j}}\right) \tag{5}
+$$
+
+where $\parallel$ denotes concatenation. These two decoders are used much less than Dot Product and HadamardMLP in the state-of-the-art LP models, possibly due to their lower effectiveness [6, 8, 10, 16].
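+
+For completeness, the Bilinear and ConcatMLP decoders of Eqs. (4) and (5) can be sketched analogously (again an illustrative sketch, not the exact implementation):
+
+import torch
+import torch.nn as nn
+
+class BilinearDecoder(nn.Module):
+    # Eq. (4): s_ij = h_i^T W h_j
+    def __init__(self, d):
+        super().__init__()
+        self.W = nn.Parameter(torch.randn(d, d) / d ** 0.5)
+    def forward(self, h_i, h_j):
+        return ((h_i @ self.W) * h_j).sum(dim=-1)
+
+class ConcatMLPDecoder(nn.Module):
+    # Eq. (5): s_ij = MLP(h_i concatenated with h_j)
+    def __init__(self, d, hidden=256):
+        super().__init__()
+        self.mlp = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, 1))
+    def forward(self, h_i, h_j):
+        return self.mlp(torch.cat([h_i, h_j], dim=-1)).squeeze(-1)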
+
+§ 2.4 HADAMARDMLP IS GENERALLY MORE EFFECTIVE THAN OTHER DECODERS
+
+Dot Product demands homophily of the graph data to effectively infer the links between nodes. In contrast, thanks to its universal approximation capability, an MLP can approximate any continuous function and thus does not demand homophily of the graph data for effective LP. This gap in expressiveness accounts for the performance difference of these two decoders on many datasets (see Sec. 6.3). We additionally show in Appendix A that a HadamardMLP can easily learn a Dot Product, which also partially accounts for the better effectiveness of the HadamardMLP decoders over the Dot Product. Existing work also finds that the effectiveness of Bilinear and ConcatMLP is generally worse than that of the HadamardMLP or Dot Product decoder [6, 8, 10, 16]. We confirm these findings in the empirical results in Fig. 2 and more completely in Sec. 6.3.
+
+§ 3 SCALABILITY OF LINK PREDICTION DECODERS
+
+Most academic studies focus on training runtime when discussing scalability. However, in industrial applications, the inference speed is often more important. The inference of many LP applications needs to retrieve the top scoring neighbors given a source node, e.g., recommending friends to a user. Given a source node, if there are $n$ nodes in the graph, then the inference time complexity is $\mathcal{O}\left( n\right)$ if the decoder needs to iterate over all the $n$ nodes to compute the edge scores. For large scale applications, $n$ is typically in the range of millions, or even larger. The empirical results show that the inference time of finding the top scoring neighbors for a source node is longer than one second for HadamardMLP on the OGBL-CITATION2 dataset of nearly three million nodes (see Sec. 6.5).
+
+For a Dot Product decoder, the problem of finding the top scoring neighbors can be approximated efficiently. This is a well-studied problem, known as approximate maximum inner product search (MIPS) [17, 18] (see Sec. 5.2 for a comprehensive literature review). MIPS techniques allow the Dot Product's inference to be completed in a few milliseconds, even with millions of neighbors. Some existing work tries to extend MIPS to the ConcatMLP [19, 20]. These methods hold strict assumptions on the models' training and are not directly applicable to the HadamardMLP. To the best of our knowledge, no such sublinear techniques exist for the top scoring neighbor retrieval with the HadamardMLP [10], which is a complex nonlinear function.
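+
+For intuition, the following NumPy sketch shows the exact (linear-time) reference computation that approximate MIPS libraries such as ScaNN [18] replace with sublinear indexes; the array sizes are illustrative:
+
+import numpy as np
+
+def exact_mips(query, candidates, k=100):
+    # Reference linear-time maximum inner product search; an approximate MIPS
+    # index such as ScaNN [18] replaces this exact scan at sublinear cost.
+    scores = candidates @ query                  # inner product with every candidate
+    topk = np.argpartition(-scores, k)[:k]       # indices of the k largest scores
+    return topk[np.argsort(-scores[topk])]       # sorted by descending score
+
+X = np.random.randn(100_000, 128).astype(np.float32)   # toy candidate embeddings
+q = np.random.randn(128).astype(np.float32)            # toy query embedding
+neighbors = exact_mips(q, X, k=100)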
+
+To summarize, the HadamardMLP decoder is not scalable for the real time LP services on large graphs, while the Dot Product decoder allows fast retrieval using the well established MIPS techniques.
+
+§ 4 FLASHLIGHT: SCALABLE LINK PREDICTION WITH EFFECTIVE DECODERS
+
+Sec. 2 has shown that the HadamardMLP decoder enjoys higher effectiveness than the Dot Product decoder, which supports the superior performance of HadamardMLP on many LP benchmarks. On the other hand, Sec. 3 has shown that the HadamardMLP is not scalable for real time LP applications on large graphs, while Dot Product supports the fast inference using the well-established MIPS techniques. In this section, we aim to devise fast inference algorithms for HadamardMLP to enable scalable LP with effective decoders.
+
+We exploit the advances in the well-developed MIPS techniques to accelerate the inference of HadamardMLP. Specifically, we divide the top scoring neighbor retrieval for HadamardMLP decoders into a sequence of MIPS queries. Our algorithm works in a progressive manner: the query embedding in every search is adaptively adjusted to find the high scoring neighbors missed in the previous searches.
+
+The challenge of retrieving the highest scoring neighbors for HadamardMLP is rooted in not knowing which neurons are activated: if we knew which neurons are activated, the nonlinear HadamardMLP would reduce to a linear model. On the $l$ th MLP layer, we define the mask matrix ${\mathbf{M}}_{\mathcal{A},l} \in {\mathbb{R}}^{{d}_{l} \times {d}_{l}}$ to represent the set of activated neurons $\mathcal{A}$ as
+
+$$
+{M}_{ij} = \left\{ \begin{array}{ll} 1, & \text{ if }i = j\text{ and }i \in \mathcal{A} \\ 0, & \text{ otherwise } \end{array}\right. \tag{6}
+$$
+
+With ${\mathbf{M}}_{\mathcal{A},l}$ , we reformulate the HadamardMLP decoder as:
+
+$$
+{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = {\mathbf{w}}_{L}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{W}}_{L - 1}\ldots {\mathbf{M}}_{\mathcal{A},1}{\mathbf{W}}_{1}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)
+$$
+
+$$
+= \left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right) \cdot {\mathbf{x}}_{j} \tag{7}
+$$
+
+Because the vector ${\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}^{T}{\mathbf{w}}_{L}$ is determined by the weights of MLP and the activated neurons $\mathcal{A}$ , we term it as ${\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right)$ :
+
+$$
+{\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \mathrel{\text{ := }} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}^{T}{\mathbf{w}}_{L} \tag{8}
+$$
+
+Given the source node $i$ , because the score ${s}_{ij}$ is obtained by the dot product between $\left( {{\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}^{T}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i}}\right)$ and the neighbor embedding ${\mathbf{x}}_{j}$ , we term the former vector as the query embedding $\mathbf{q}$ :
+
+$$
+\mathbf{q} \mathrel{\text{ := }} {\mathbf{W}}_{1}^{T}{\mathbf{M}}_{\mathcal{A},1}\ldots {\mathbf{W}}_{L - 1}^{T}{\mathbf{M}}_{\mathcal{A},L - 1}{\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} = {\operatorname{MLP}}_{\mathcal{A}}\left( \cdot \right) \odot {\mathbf{x}}_{i} \tag{9}
+$$
+
+In this way, we can reformulate the output of decoder ${\phi }^{MLP}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{\mathbf{j}}}\right)$ as
+
+$$
+{s}_{ij} = {\phi }^{\mathrm{{MLP}}}\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right) = \mathbf{q} \cdot {\mathbf{x}}_{j}. \tag{10}
+$$
+
+In practice, we can use $\mathbf{q}$ as the query embedding in MIPS to retrieve the neighbors with the highest inner products, which correspond to the highest scores. The issue is how to obtain the activated neurons $\mathcal{A}$ so as to compute the query embedding $\mathbf{q}$ , since different node pairs activate different neurons. Initially, without knowing which neurons are activated, we first assume all the neurons are activated, i.e., we have the initial query embedding as:
+
+$$
+\mathbf{q}\left\lbrack 1\right\rbrack = \left( {\mathop{\prod }\limits_{{l = 1}}^{{L - 1}}{\mathbf{W}}_{l}^{T}}\right) {\mathbf{w}}_{L} \odot {\mathbf{x}}_{i} \tag{11}
+$$
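+
+The collapse of the masked MLP into a single query vector (Eqs. (8), (9), and (11)) can be sketched as follows in NumPy; the weight shapes are illustrative:
+
+import numpy as np
+
+def collapse_mlp(weights, last_w, masks):
+    # Eq. (8): MLP_A = W_1^T M_1 ... W_{L-1}^T M_{L-1} w_L,
+    # where each 0/1 mask zeroes out the inactive ReLU units of its layer.
+    v = last_w
+    for W, m in zip(reversed(weights), reversed(masks)):
+        v = W.T @ (m * v)
+    return v
+
+def query_embedding(weights, last_w, masks, x_i):
+    # Eq. (9): q = MLP_A element-wise-times x_i
+    return collapse_mlp(weights, last_w, masks) * x_i
+
+# Eq. (11): initial query with all neurons assumed active (masks of ones)
+W1, W2 = np.random.randn(32, 16), np.random.randn(32, 32)   # illustrative weights
+wL, x_i = np.random.randn(32), np.random.randn(16)
+all_active = [np.ones(32), np.ones(32)]
+q1 = query_embedding([W1, W2], wL, all_active, x_i)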
+
+Algorithm 1 Flashlight: progressively "illuminates" the semantic space to retrieve the high scoring neighbors for the LP HadamardMLP decoders.
+
+Input: A trained HadamardMLP decoder ${\phi }^{\text{ MLP }}$ that outputs the logit ${s}_{ij}$ for the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$ . The set of nodes $\mathcal{V}$ . The node embedding set $\mathcal{X} = \left\{ {{\mathbf{x}}_{i} \mid i \in \mathcal{V}}\right\}$ . A source node $i$ . The number of iterations $T$ . The number of neighbors to retrieve at every iteration: $\mathbf{N} = \left\lbrack {{N}_{1},{N}_{2},\ldots ,{N}_{T}}\right\rbrack$ .
+
+Output: The recommended neighbors $\mathcal{N}$ for the source node $i$ .
+
+ 1: Initialize the set of retrieved recommended neighbors $\mathcal{N} \leftarrow \varnothing$ .
+ 2: Initialize the set of activated neurons $\mathcal{A}\left\lbrack 0\right\rbrack$ as all the neurons in the MLP.
+ 3: for $t \leftarrow 1$ to $T$ do
+ 4:     Calculate the query embedding $\mathbf{q}\left\lbrack t\right\rbrack \leftarrow {\mathbf{x}}_{i} \odot {\operatorname{MLP}}_{\mathcal{A}\left\lbrack {t - 1}\right\rbrack }\left( \cdot \right)$ .
+ 5:     $\mathcal{N}\left\lbrack t\right\rbrack \leftarrow$ the ${N}_{t}$ neighbors in $\mathcal{X}$ that maximize the inner product with $\mathbf{q}\left\lbrack t\right\rbrack$ .
+ 6:     $\mathcal{X} \leftarrow \mathcal{X} \smallsetminus \left\{ {{\mathbf{x}}_{j} \mid j \in \mathcal{N}\left\lbrack t\right\rbrack }\right\}$ .
+ 7:     ${j}^{ \star }\left\lbrack t\right\rbrack \leftarrow \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right)$ .
+ 8:     $\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right)$ .
+ 9:     $\mathcal{N} \leftarrow \mathcal{N} \cup \mathcal{N}\left\lbrack t\right\rbrack$ .
+10: return $\mathcal{N}$
+
+This initial design reflects the general trend of increasing the edge scores for LP, without restricting which neurons are activated. We use $\mathbf{q}\left\lbrack 1\right\rbrack$ as the query embedding to retrieve the highest inner product neighbors $\mathcal{N}\left\lbrack 1\right\rbrack$ in the first iteration. Then, given the neighbors $\mathcal{N}\left\lbrack t\right\rbrack$ retrieved in the $t$ th iteration, we analyze $\mathcal{N}\left\lbrack t\right\rbrack$ and adaptively adjust the query embedding $\mathbf{q}\left\lbrack {t + 1}\right\rbrack$ that we use in the next iteration to find more high scoring neighbors. Specifically, we feed the pairs in $\mathcal{N}\left\lbrack t\right\rbrack$ forward through the MLP. We define the function $A\left( {\cdot , \cdot }\right)$ that returns the set of activated neurons for an MLP (the first input) with the input ${\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}$ (the second input). Then we can use it to extract $\mathcal{A}$ as:
+
+$$
+\mathcal{A} = A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{12}
+$$
+
+Then, we obtain the set of activated neurons of the highest scored neighbor at the $t$ th iteration as:
+
+$$
+\mathcal{A}\left\lbrack t\right\rbrack \leftarrow A\left( {\operatorname{MLP}\left( \cdot \right) ,{\mathbf{x}}_{i} \odot {\mathbf{x}}_{{j}^{ \star }\left\lbrack t\right\rbrack }}\right) \text{ , where }{j}^{ \star }\left\lbrack t\right\rbrack = \arg \mathop{\max }\limits_{{j \in \mathcal{N}\left\lbrack t\right\rbrack }}\operatorname{MLP}\left( {{\mathbf{x}}_{i} \odot {\mathbf{x}}_{j}}\right) . \tag{13}
+$$
+
+This implies that the neighbors activating $\mathcal{A}\left\lbrack t\right\rbrack$ can obtain high edge scores. Therefore, we take $\mathcal{A}\left\lbrack t\right\rbrack$ as the set of neurons that we assume to be activated in the next iteration's query, so that we can find more high scoring neighbors. We repeat the above iterations until enough neighbors are retrieved. The algorithm is summarized in Alg. 1.
+
+We name our algorithm Flashlight because it works like a flashlight that progressively "illuminates" the semantic space to find the high scoring neighbors. The query embeddings are like the light sent from the flashlight, and our process of adjusting the query embeddings is like progressively redirecting the "light" of the "flashlight" based on the "objects" found in the last "illumination".
+
+In the experiments, we find that our Flashlight algorithm is effective at finding the top scoring neighbors from a massive set of candidate neighbors. For example, in Fig. 3, our Flashlight is able to find the top 100 scoring neighbors of the HadamardMLP decoder from nearly three million candidates in the large OGBL-CITATION2 graph dataset by retrieving only 200 neighbors.
+
+Complexity Analysis. Using MLP decoders to compute the LP probabilities of all the neighbors has complexity $\mathcal{O}\left( N\right)$ , where $N$ is the number of nodes in the whole graph. Finding the top scoring neighbors from the exact probabilities of all the neighbors also has linear complexity $\mathcal{O}\left( N\right)$ . Overall, using MLP decoders to find the top scoring neighbors has time complexity $\mathcal{O}\left( N\right)$ . In contrast, our Flashlight progressively calls the MIPS techniques a constant number of times, invariant to the graph size, which leads to the same sublinear complexity as MIPS. In conclusion, our Flashlight improves the scalability and applicability of HadamardMLP decoders by reducing their inference time complexity from linear to sublinear.
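+
+For concreteness, a self-contained NumPy sketch of Algorithm 1 is given below; it uses an exact inner-product scan in place of the approximate MIPS index (our implementation uses ScaNN [18]) and illustrative shapes:
+
+import numpy as np
+
+def relu(x):
+    return np.maximum(x, 0.0)
+
+def mlp_score(weights, last_w, z):
+    # HadamardMLP forward pass on z = x_i * x_j (also works on a batch of rows)
+    h = z
+    for W in weights:
+        h = relu(h @ W.T)
+    return h @ last_w
+
+def activated_masks(weights, z):
+    # The function A(., .) of Eq. (12): which ReLU units fire for input z
+    masks, h = [], z
+    for W in weights:
+        h = relu(h @ W.T)
+        masks.append((h > 0).astype(float))
+    return masks
+
+def query_embedding(weights, last_w, masks, x_i):
+    # Eqs. (8)-(9): collapse the masked MLP into a vector, then multiply with x_i
+    v = last_w
+    for W, m in zip(reversed(weights), reversed(masks)):
+        v = W.T @ (m * v)
+    return v * x_i
+
+def flashlight(weights, last_w, x_i, X, T=3, n_per_iter=200):
+    # Algorithm 1: progressively adjust the MIPS query to collect high scoring neighbors.
+    remaining = np.arange(X.shape[0])
+    masks = [np.ones(W.shape[0]) for W in weights]   # all neurons assumed active at first
+    retrieved = []
+    for _ in range(T):
+        q = query_embedding(weights, last_w, masks, x_i)
+        scores = X[remaining] @ q                    # exact MIPS stand-in; ScaNN in practice
+        top = remaining[np.argsort(-scores)[:n_per_iter]]
+        retrieved.extend(top.tolist())
+        remaining = np.setdiff1d(remaining, top)
+        best = top[np.argmax(mlp_score(weights, last_w, x_i * X[top]))]   # Eq. (13)
+        masks = activated_masks(weights, x_i * X[best])                   # assume A[t] next
+    return retrieved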
+
+Table 1: Statistics of datasets.
+
+Dataset   OGBL-DDI    OGBL-COLLAB   OGBL-PPA      OGBL-CITATION2
+#Nodes    4,267       235,868       576,289       2,927,963
+#Edges    1,334,889   1,285,465     30,326,273    30,561,187
+
+§ 5 RELATED WORK
+
+§ 5.1 LINK PREDICTION MODELS
+
+Existing LP models can be categorized into three families: heuristic feature based [3, 9, 21-23], latent embedding based [12, 24-28], and neural network based. The neural network based link prediction models have mainly been developed in recent years; they explore non-linear deep structural features with neural layers. Variational graph auto-encoders [13] predict links by encoding the graph with graph convolutional layers [5]. Two other state-of-the-art neural models, WLNM [29] and SEAL [30], use a graph labeling algorithm to transform the union neighborhood of two nodes (the enclosing subgraph) into a meaningful matrix and employ a convolutional neural layer or a novel graph neural layer, DGCNN [31], for encoding. More recently, $\left\lbrack {6,8}\right\rbrack$ summarized the architectures of LP models and formally defined the encoders and decoders.
+
+Different from the previous work, we focus on analyzing the effectiveness of different LP decoders and on improving the scalability of the most effective ones. In practice, we find that the HadamardMLP decoders exhibit superior effectiveness but poor inference scalability. Our work significantly accelerates the inference of HadamardMLP decoders to make effective LP scalable.
+
+§ 5.2 MAXIMUM INNER PRODUCT SEARCH
+
+Finding the top scoring neighbors for the Dot Product decoder in sublinear time is a well-studied research problem, known as approximate maximum inner product search (MIPS). There are several approaches to MIPS: sampling-based [11, 32, 33], LSH-based [34-37], graph-based [38-40], and quantization-based [17, 18]. MIPS is a fundamental building block in various application domains [41-46], such as information retrieval [47, 48], pattern recognition [49, 50], data mining [51, 52], machine learning [53, 54], and recommendation systems [55, 56].
+
+With the explosive growth of dataset scales and the inevitable curse of dimensionality, MIPS is essential to offer scalable services. However, the HadamardMLP decoders are nonlinear, and no well-studied sublinear-complexity algorithm exists to find their top scoring neighbors [10]. In this work, we utilize the well-studied approximate MIPS techniques with adaptively adjusted query embeddings to find the top scoring neighbors for the MLP decoders in a progressive manner. Our method supports plug-and-play use during inference and significantly accelerates LP inference with the effective MLP decoders.
+
+§ 6 EXPERIMENTS
+
+In this section, we first compare the effectiveness of different LP decoders. We find that the HadamardMLP decoders generally perform better than other decoders. Then, we implement our Flashlight algorithm with LP models to show that Flashlight effectively retrieves the top scoring neighbors for the HadamardMLP decoders. As a result, the inference efficiency and scalability of HadamardMLP decoders are improved significantly by our work.
+
+§ 6.1 DATASETS
+
+We evaluate link prediction on Open Graph Benchmark (OGB) data [57]. We use four OGB datasets with different graph types, including OGBL-DDI, OGBL-COLLAB, OGBL-CITATION2, and OGBL-PPA. OGBL-DDI is a homogeneous, unweighted, undirected graph, representing the drug-drug interaction network. Each node represents a drug. Edges represent interactions between drugs. OGBL-COLLAB is an undirected graph, representing a subset of the collaboration network between authors indexed by MAG. Each node represents an author and edges indicate the collaboration between authors. All nodes come with 128-dimensional features. OGBL-CITATION2 is a directed graph, representing the citation network between a subset of papers extracted from MAG. Each node is a paper with 128-dimensional word2vec features. OGBL-PPA is an undirected, unweighted graph. Nodes represent proteins from 58 different species, and edges indicate biologically meaningful associations between proteins. The statistics of these datasets are presented in Table 1.
+
+Table 2: The test effectiveness comparison of LP decoders on four OGB datasets (DDI, COLLAB, PPA, and CITATION2) [16]. We report the results of the standard metrics averaged over 10 runs following the existing work $\left\lbrack {6,{16}}\right\rbrack$ . HadamardMLP is more effective than other decoders. Flashlight effectively retrieves the top scoring neighbors for HadamardMLP and keeps its exact outputs.
+
+Decoder | Dot Product | Bilinear | ConcatMLP | HadamardMLP | HadamardMLP w/ Flashlight
+
+OGBL-DDI
+GCN [5] | ${13.8} \pm {1.8}$ | ${16.1} \pm {1.2}$ | ${12.9} \pm {1.4}$ | ${37.1} \pm {5.1}$ | ${37.1} \pm {5.1}$
+GraphSAGE [12] | ${36.5} \pm {2.6}$ | ${39.4} \pm {1.7}$ | ${34.2} \pm {1.9}$ | $\mathbf{{53.9} \pm {4.7}}$ | $\mathbf{{53.9} \pm {4.7}}$
+Node2Vec [27] | ${11.6} \pm {1.9}$ | ${13.8} \pm {1.6}$ | ${10.8} \pm {1.7}$ | ${23.3} \pm {2.1}$ | ${23.3} \pm {2.1}$
+
+OGBL-COLLAB
+GCN [5] | ${42.9} \pm {0.7}$ | ${43.2} \pm {0.9}$ | ${42.3} \pm {1.0}$ | ${44.8} \pm {1.1}$ | ${44.8} \pm {1.1}$
+GraphSAGE [12] | ${37.3} \pm {0.9}$ | ${41.5} \pm {0.8}$ | ${37.0} \pm {0.7}$ | ${48.1} \pm {0.8}$ | $\mathbf{{48.1} \pm {0.8}}$
+Node2Vec [27] | ${27.7} \pm {1.1}$ | ${31.5} \pm {1.0}$ | ${27.2} \pm {0.8}$ | $\mathbf{{48.9} \pm {0.5}}$ | ${48.9} \pm {0.5}$
+
+OGBL-PPA
+GCN [5] | ${5.1} \pm {0.4}$ | ${5.8} \pm {0.5}$ | ${6.2} \pm {0.6}$ | ${18.7} \pm {1.3}$ | $\mathbf{{18.7} \pm {1.3}}$
+GraphSAGE [12] | ${3.2} \pm {0.3}$ | ${6.5} \pm {0.7}$ | ${5.8} \pm {0.4}$ | ${16.6} \pm {2.4}$ | ${16.6} \pm {2.4}$
+Node2Vec [27] | ${4.2} \pm {0.5}$ | ${7.8} \pm {0.6}$ | ${8.3} \pm {0.4}$ | $\mathbf{{22.3} \pm {0.8}}$ | $\mathbf{{22.3} \pm {0.8}}$
+
+OGBL-CITATION2
+GCN [5] | ${65.3} \pm {0.4}$ | ${69.0} \pm {0.8}$ | ${62.7} \pm {0.3}$ | $\mathbf{{84.7} \pm {0.2}}$ | $\mathbf{{84.7} \pm {0.2}}$
+GraphSAGE [12] | ${62.2} \pm {0.7}$ | ${65.4} \pm {0.9}$ | ${60.8} \pm {0.6}$ | $\mathbf{{80.4} \pm {0.1}}$ | ${80.4} \pm {0.1}$
+Node2Vec [27] | ${52.7} \pm {0.8}$ | ${54.1} \pm {0.6}$ | ${51.4} \pm {0.5}$ | ${61.4} \pm {0.1}$ | $\mathbf{{61.4} \pm {0.1}}$
+
+§ 6.2 HYPER-PARAMETER SETTINGS
+
+For all experiments in this section, we report the average and standard deviation over ten runs with different random seeds. The results are reported on the best model selected using validation data. We set the hyper-parameters of the used techniques and the considered baseline methods, e.g., the batch size, the number of hidden units, the optimizer, and the learning rate, as suggested by their authors. We use the recent MIPS method ScaNN [18] in the implementation of our Flashlight. For the hyper-parameters of our Flashlight, we have found in the experiments that the performance of Flashlight is robust to changes of hyper-parameters over a broad range. Therefore, we simply set the number of Flashlight iterations to $T = 3$ and the number of retrieved neighbors per iteration to 200 by default. We run all experiments on a machine with 80 Intel(R) Xeon(R) E5-2698 v4 @ 2.20GHz CPUs and a single NVIDIA V100 GPU with 16GB RAM.
+
+§ 6.3 EFFECTIVENESS OF LINK PREDICTION DECODERS
+
+We follow the standard benchmark settings of the OGB datasets to evaluate the effectiveness of LP with different decoders. The benchmark setting of OGBL-DDI is to predict drug-drug interactions given information on already known drug-drug interactions. The performance is evaluated by Hits@20: each true drug interaction is ranked among a set of approximately 100,000 randomly-sampled negative drug interactions, and we count the ratio of positive edges that are ranked at the 20th place or above. The task of OGBL-COLLAB is to predict future author collaborations given past collaborations. The evaluation metric is Hits@50, where each true collaboration is ranked among a set of 100,000 randomly-sampled negative collaborations. The task of OGBL-PPA is to predict new association edges given the training edges. The evaluation metric is Hits@100, where each positive edge is ranked among 3,000,000 randomly-sampled negative edges. The task of OGBL-CITATION2 is to predict missing citations given existing citations. The evaluation metric is Mean Reciprocal Rank (MRR): the reciprocal rank of the true reference among 1,000 sampled negative candidates is calculated for each source node, and then the average is taken over all source nodes.
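+
+For reference, a minimal sketch of these two ranking metrics is given below; it follows the descriptions above rather than the official OGB evaluator, and the random scores are only stand-ins for decoder outputs.
+
+```python
+import numpy as np
+
+def hits_at_k(pos_scores, neg_scores, k):
+    """Fraction of positive edges scored above the k-th best sampled negative."""
+    kth_best_negative = np.sort(neg_scores)[::-1][k - 1]
+    return float(np.mean(pos_scores > kth_best_negative))
+
+def reciprocal_rank(pos_score, neg_scores):
+    """Reciprocal rank of the true edge among the sampled negative candidates."""
+    rank = 1 + int(np.sum(neg_scores >= pos_score))
+    return 1.0 / rank
+
+rng = np.random.default_rng(0)
+print(hits_at_k(rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 100_000), k=20))
+print(reciprocal_rank(1.5, rng.normal(0.0, 1.0, 1_000)))
+```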
+
+We implement different decoders as introduced in Sec. 2, including the Dot Product, Bilinear, ConcatMLP, and HadamardMLP decoders, over the LP encoders, including GCN [5], GraphSAGE [12], and Node2Vec [27], to compare the effects of different decoders on the LP effectiveness. We present the results on the OGBL-DDI, OGBL-COLLAB, OGBL-PPA, and OGBL-CITATION2 datasets in Table 2. We observe that the HadamardMLP decoder outperforms other decoders on all encoders and datasets. Our Flashlight algorithm effectively retrieves the top scoring neighbors for the HadamardMLP decoder and preserves the exact LP probabilities output by HadamardMLP, which leads to identical results for the HadamardMLP decoder with and without Flashlight.
+
+Note that the benchmark settings of these datasets sample a small portion of negative edges for the test evaluation, which is not challenging enough to evaluate the scalability of LP decoders on retrieving the top scoring neighbors from massive candidates in practice.
+
+§ 6.4 THE FLASHLIGHT ALGORITHM EFFECTIVELY FINDS THE TOP SCORING NEIGHBORS
+
+To evaluate the effectiveness of our Flashlight on retrieving the top scoring neighbors for the HadamardMLP decoder, we propose a more challenging test setting for the OGB LP datasets. Given a source node, we take its top 100 scoring neighbors under the HadamardMLP decoder as the ground truth for retrieval. We set the task as retrieving $k$ neighbors for a source node that match the ground-truth neighbors as closely as possible. We formally define the metric as Recall $@k$ , which is the portion of the ground-truth neighbors appearing among the top $k$ neighbors retrieved by different methods.
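+
+A minimal sketch of this metric is shown below; the scores are random stand-ins for decoder outputs, and the retriever used here is an exact (oracle) one, so its Recall@100 equals 1.
+
+```python
+import numpy as np
+
+def recall_at_k(retrieved, ground_truth, k):
+    """Portion of the ground-truth top scoring neighbors found in the top-k retrieved."""
+    return len(set(retrieved[:k]) & set(ground_truth)) / len(ground_truth)
+
+scores = np.random.default_rng(1).normal(size=10_000)  # stand-in decoder scores
+ground_truth = np.argsort(-scores)[:100].tolist()      # top 100 scoring neighbors
+retrieved = np.argsort(-scores)[:500].tolist()         # an oracle retriever
+print(recall_at_k(retrieved, ground_truth, k=100))     # 1.0 for the oracle
+```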
+
+We sample 1000 nodes as the source nodes from the OGBL-DDI and OGBL-CITATION2 datasets respectively for evaluation. We evaluate the effectiveness of our Flashlight algorithm by checking whether it can find the top scoring neighbors for every source node. We set the number of Flashlight iterations as 10 and the number of retrieved neighbors per iteration as 50. We present Recall $@k$ for $k$ from 1 to 500, averaged over all source nodes, in Fig. 3. The "oracle" curve represents the performance of an optimum searcher, whose retrieved top $k$ neighbors are exactly the top $k$ scoring neighbors of HadamardMLP.
+
+
+Figure 3: Recall $@k$ is the fraction of the 100 top scoring neighbors of HadamardMLP ranked in the top $k$ neighbors retrieved by Flashlight. We report Recall $@k$ averaged over all the source nodes on OGBL-CITATION2 and OGBL-DDI.
+
+When $k = {100}$ , the 100 neighbors retrieved by our Flashlight cover more than ${80}\%$ of the ground-truth neighbors. When $k \geq {200}$ , the recall reaches ${100}\%$ . As a comparison, if we randomly sample candidate neighbors for retrieval, Recall $@k$ grows linearly with $k$ and is less than $1 \times {10}^{-4}$ for $k = {100}$ on the OGBL-CITATION2 dataset. The curves of Flashlight are close to the optimum curve of the "oracle". These results demonstrate the high effectiveness of our Flashlight in finding the top scoring neighbors.
+
+On both the large OGBL-CITATION2 dataset and the smaller OGBL-DDI dataset, our Flashlight exhibits similar Recall $@k$ performance for different numbers $k$ of retrieved neighbors. This implies that our Flashlight can accurately find the top scoring neighbors for both small and large graphs.
+
+§ 6.5 INFERENCE EFFICIENCY OF LINK PREDICTION WITH OUR FLASHLIGHT ALGORITHM
+
+We use throughput to evaluate the inference speed of neighbor retrieval for different methods. The throughput is defined as the number of source nodes that a method can serve per second when retrieving the top 100 scoring neighbors. Besides the LP models that follow the encoder-decoder architecture, e.g., GraphSAGE [12], GCN [5], and PLNLP [6], there are subgraph based LP models, e.g., SUREL [7] and SEAL [58]. The common issue of the subgraph based models is poor efficiency: they have to crop a separate subgraph for every node pair to calculate the LP probability on that pair. As a result, the node embeddings cannot be shared across the LP calculations for different node pairs. This leads to a much lower inference speed for the subgraph based LP models than for the encoder-decoder LP models. We compare the inference efficiency of different methods on the OGBL-CITATION2 dataset in Fig. 4, where we present the inference speed of different methods when achieving ${100}\%$ Recall@100 for the top 100 scoring neighbors.
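+
+Throughput is measured as a simple wall-clock rate; a sketch of the measurement is given below, with `retrieve_top100` standing in for any retrieval method.
+
+```python
+import time
+
+def throughput(retrieve_top100, source_nodes):
+    """Source nodes served per second when retrieving their top 100 scoring neighbors."""
+    start = time.perf_counter()
+    for u in source_nodes:
+        retrieve_top100(u)
+    return len(source_nodes) / (time.perf_counter() - start)
+```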
+
+We observe that our Flashlight significantly accelerates the inference speed of the LP models GraphSAGE [12], GCN [5], and PLNLP [6] with the HadamardMLP decoders by more than 100 times. This gap will be even larger for datasets of larger scale, because inference with our Flashlight has sublinear time complexity while the plain HadamardMLP decoders have linear complexity. Note that the y-axis is in logarithmic scale. The subgraph based methods SUREL [7] and SEAL [58] achieve throughputs lower than $1 \times {10}^{-2}$ and $1 \times {10}^{-3}$ respectively, which is not applicable to practical services that require latencies of milliseconds.
+
+
+Figure 4: The inference speed of different LP methods on the OGBL-CITATION2 dataset. The y-axis (throughput) is in logarithmic scale.
+
+
+Figure 5: The tradeoff between the inference speed (y-axis) and the effectiveness of finding the top scoring neighbors (x-axis) on the OGBL-CITATION2 (left) and OGBL-PPA (right) datasets.
+
+Taking a further step, we comprehensively evaluate the tradeoff between the inference speed and the effectiveness of finding the top scoring neighbors. Taking GraphSAGE as the encoder, we present the tradeoff curves between the throughput and the Recall@100 on the OGBL-CITATION2 and OGBL-PPA datasets in Fig. 5. We take the HadamardMLP decoder with random sampling as the baseline for comparison. For example, on the OGBL-CITATION2 dataset, when achieving a Recall@100 of more than 80%, the HadamardMLP with our Flashlight can serve more than 200 source nodes per second, while the HadamardMLP with random sampling can serve less than 1 node per second. Overall, our Flashlight achieves a much better tradeoff between inference speed and effectiveness than the HadamardMLP with random sampling.
+
+§ 7 CONCLUSION
+
+Our theoretical and empirical analysis suggests that the HadamardMLP decoders are a better default choice than the Dot Product in terms of LP effectiveness. However, because there does not exist a well-developed sublinear-complexity algorithm for searching the top scoring neighbors of HadamardMLP, the HadamardMLP decoders are not scalable and cannot support fast inference on large graphs. To resolve this issue, we propose the Flashlight algorithm to accelerate the inference of LP models with HadamardMLP decoders. Flashlight progressively applies well-studied MIPS techniques for a few iterations, adaptively adjusting the query embeddings at every iteration to find more high scoring neighbors. Empirical results show that our Flashlight accelerates the inference of LP models by more than 100 times on the large OGBL-CITATION2 graph. Overall, our work paves the way for the use of strong LP decoders in practical settings by greatly accelerating their inference.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..0bec05a17c741f9c5023c9b677889369015edb9d
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,413 @@
+# Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+A key problem in network theory is how to reconfigure a graph in order to optimize a quantifiable objective. Given the ubiquity of networked systems, such work has broad practical applications in a variety of situations, ranging from drug and material design to telecommunications. The large decision space of possible reconfigurations, however, makes this problem computationally intensive. In this paper, we cast the problem of network rewiring for optimizing a specified structural property as a Markov Decision Process (MDP), in which a decision-maker is given a budget of modifications that are performed sequentially. We then propose a general approach based on the Deep Q-Network (DQN) algorithm and graph neural networks (GNNs) that can efficiently learn strategies for rewiring networks. We then discuss a cybersecurity case study, i.e., an application to the computer network reconfiguration problem for intrusion protection. In a typical scenario, an attacker might have a (partial) map of the system they plan to penetrate; if the network is effectively "scrambled", they would not be able to navigate it since their prior knowledge would become obsolete. This can be viewed as an entropy maximization problem, in which the goal is to increase the surprise of the network. Indeed, entropy acts as a proxy measurement of the difficulty of navigating the network topology. We demonstrate the general ability of the proposed method to obtain better entropy gains than random rewiring on synthetic and real-world graphs while being computationally inexpensive, as well as being able to generalize to larger graphs than those seen during training. Simulations of attack scenarios confirm the effectiveness of the learned rewiring strategies.
+
+## 1 Introduction
+
+A key problem in network theory is how to rewire a graph in order to optimize a given quantifiable objective. Addressing this problem might have applications in several domains, given that many systems of practical interest can be represented as graphs $\left\lbrack {{23},{24},{29},{49},{50}}\right\rbrack$ . A large body of literature studies how to construct and design networks in order to optimize some quantifiable goal, such as robustness in supply chain and wireless sensor networks [40, 53] or ADME properties of molecules $\left\lbrack {{19},{39}}\right\rbrack$ . Given the intractable number of distinct configurations of even relatively small networks, optimizing these structural and topological properties is generally a non-trivial task that has been approached from various angles in graph theory $\left\lbrack {{15},{18}}\right\rbrack$ and also studied from heuristic perspectives $\left\lbrack {{21},{35}}\right\rbrack$ . Exact solutions are too computationally expensive to obtain and heuristic methods are generally sub-optimal and do not generalize well to unseen instances.
+
+The adoption of graph neural networks (GNNs) [41] and deep reinforcement learning (RL) [36] techniques has led to promising approaches to the problem of optimizing graph processes or structure $\left\lbrack {{14},{16},{30}}\right\rbrack$ . A fundamental structural modification is rewiring, in which edges (e.g., links in a computer network) are reconfigured such that the topology is changed while their total number remains constant. The problem of rewiring to optimize a structural property has not been studied in the literature.
+
+In this paper, we present a solution to the network rewiring problem for optimizing a specified structural property. We formulate this task as a Markov Decision Process (MDP), in which a decision-maker is given a budget of rewiring operations that are performed sequentially. We then propose an approach based on the Deep Q-Network (DQN) algorithm and GNNs that can efficiently learn strategies for rewiring networks. We evaluate the method by means of a realistic cybersecurity case study. In particular, we assume a scenario in which an attacker has entered a computer network and aims to reach a particular node of interest. We also assume that the attacker has partial knowledge of the underlying graph topology, which is used to reach a given target inside the network. The goal is to learn a rewiring process for modifying the structure of the graph so as to disrupt the capability of the attacker to reach its target, all the while keeping the network operational. This can be seen as an example of moving target defense (MTD) [8]. We frame the solution as an entropy maximization problem, in which the goal is to increase the surprise of the network in order to disrupt the navigation of the attacker inside it. Indeed, entropy acts as a proxy measurement of the difficulty of this task, with an increase in entropy corresponding to an increase in its difficulty. In particular, we consider two measures of network entropy, namely Shannon entropy and the Maximal Entropy Random Walk (MERW), and we compare their effectiveness.
+
+More specifically, the contributions of this paper can be summarized as follows:
+
+- We formulate the problem of graph rewiring so as to maximize a global structural property as an MDP, in which a central decision-maker is given a certain budget of rewiring operations that are performed sequentially. We formulate an approach that combines GNN architectures and the DQN algorithm to learn an optimal set of rewiring actions by trial-and-error;
+
+- We present an extensive case study of the proposed approach in the context of defense against network intrusion by an attacker. We show that our method is able to obtain better gains in entropy than random rewiring, while scaling to larger networks than a local greedy search, and generalizing to larger out-of-distribution graphs in some cases. Furthermore, we demonstrate the effectiveness of this approach by simulating the movement of an attacker in the network, finding that indeed the applied modifications increase the difficulty for the attacker to reach its targets in both synthetic and real-world graph topologies.
+
+## 2 Related work
+
+RL for graph reconfiguration. Recently, an increasing amount of research has been conducted on the use of reinforcement learning in graph reconfiguration. In particular, in [14] a solution based on reinforcement learning for modifying graphs with the aim of attacking both node and graph classification is presented. In addition, the authors briefly introduce a defense method using adversarial training and edge removal, which decreases their proposed classifier attack rate slightly by $1\%$ . This defense strategy is however only effective on the attack strategy it is trained on and does not generalize. Instead, the authors of [34] use a reinforcement learning approach to learn an attack strategy for neural network classifiers of graph topologies based on edge rewiring, and show that they are able to achieve misclassification with changes that are less noticeable compared to edge and vertex removal and addition. Our paper focuses on a different problem that does not involve classification tasks, but the maximization of a given network objective function. In [16] reinforcement learning techniques are applied to the problem of optimizing the robustness of a graph by means of graph construction; the authors show that their proposed method is able to outperform existing techniques and generalize to different graphs. In the present work, we optimize a global structural property through rewiring instead of constructing a graph through edge addition.
+
+Graph robustness and attacks. A related research area is the optimization of graph robustness [37], which denotes the capacity of a graph to withstand targeted attacks and random failures. [42] demonstrates how small changes in complex networks such as an electricity system or the Internet can improve their robustness against malicious attacks. [6] investigates several heuristic reconfiguration techniques that aim to improve graph robustness without substantially modifying the network structure, and finds that preferential rewiring is superior to random rewiring. The authors of [11] extend this study to a framework that can accommodate multiple rewiring strategies and objectives. Several works have used information-based complexity metrics in the context of network defense or attack strategies: [27] proposes a network security metric to assess network vulnerability by measuring the Kolmogorov complexity of effective attack paths. The underlying reasoning is that the more complex attack paths have to be in order to harm a network, the less vulnerable a network is to external attacks. Furthermore, [25] investigates the vulnerability of complex networks, finding that attacks based on edge and vertex removal are substantially more effective when the network properties are recomputed after each attack.
+
+
+
+Figure 1: Illustrative example of the MDP timesteps comprising a single rewiring operation. The agent observes an initial state ${S}_{0} = \left( {{G}_{0},\varnothing ,\varnothing }\right)$ (first panel), from which it then selects a base node ${v}_{1} = \{ 1\}$ that will be rewired (second panel). Given the new state that contains the initial graph and the selected base node, the agent selects a target node ${v}_{2} = \{ 5\}$ to which an edge will be added (third panel). Finally, a third node ${v}_{3} = \{ 0\}$ is selected from the neighborhood of ${v}_{1} = \{ 1\}$ and the corresponding edge is removed (last panel). After a sequence of $b$ rewiring operations, the agent will receive a reward proportional to the improvement in the objective function $\mathcal{F}$ .
+
+Cybersecurity and network defense. In the last decade and in recent years in particular, a drastic surge in cyberattacks on governmental and industrial organizations has exposed the imminent vulnerability of global society to cyberthreats [43]. The targeted digital systems are generally structured as a network in which entities in the system communicate and share resources among each other. Typically, attackers seek to gain unauthorized access to the underlying network through an entry point and search for highly valuable nodes in order to infect these digital systems with malicious software such as viruses, ransomware and spyware [3], enabling them to extract sensitive information or control the functioning of the network [26]. Moving target defense (MTD) is a cybersecurity defense technique by which a network and the underlying software are dynamically changed to counteract attack strategies [4, 8, 9, 44, 51]. Most existing MTD techniques involve NP-hard problems, and approximate or heuristic solutions are often impractical [8]. We note that while most studies are applied to specific software architectures, which prevent them from being applied effectively to large scale deployments, in this work we focus on modeling this problem from an abstract, infrastructure-agnostic perspective.
+
+## 3 Graph rewiring as an MDP
+
+### 3.1 Problem statement
+
+We define a graph (network) as $G = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\}$ is the set of $n = \left| \mathcal{V}\right|$ vertices (nodes) and $\mathcal{E} = \left\{ {{e}_{1},\ldots ,{e}_{m}}\right\}$ is the set of $m = \left| \mathcal{E}\right|$ edges (links). A rewiring operation $\gamma \left( {G,{v}_{i},{v}_{j},{v}_{k}}\right)$ transforms the graph $G$ by adding the non-edge $\left( {{v}_{i},{v}_{j}}\right)$ and removing the existing edge $\left( {{v}_{i},{v}_{k}}\right)$ ; we denote the set of all such operations by $\Gamma$ . Given a budget $b \propto m$ of rewiring operations, and a global objective function $\mathcal{F}\left( G\right)$ to be maximized, the goal is to find the set of unique rewiring operations out of ${\Gamma }^{b}$ such that the resulting graph ${G}^{\prime }$ maximizes $\mathcal{F}\left( {G}^{\prime }\right)$ .
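+
+A rewiring operation can be expressed directly on a graph object; the sketch below (using networkx, purely for illustration) applies a single operation $\gamma$ and shows that the edge count is preserved.
+
+```python
+import networkx as nx
+
+def rewire(G, v_i, v_j, v_k):
+    """Apply gamma(G, v_i, v_j, v_k): add the non-edge (v_i, v_j), remove the edge (v_i, v_k)."""
+    assert not G.has_edge(v_i, v_j) and G.has_edge(v_i, v_k)
+    H = G.copy()
+    H.add_edge(v_i, v_j)
+    H.remove_edge(v_i, v_k)
+    return H
+
+G = nx.cycle_graph(6)
+H = rewire(G, 0, 3, 1)
+print(H.number_of_edges() == G.number_of_edges())  # True: rewiring preserves m
+```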
+
+Since the size of the set of possible rewirings grows rapidly with the graph size, we cast this problem as a sequential decision-making process, which is detailed below.
+
+### 3.2 MDP framework
+
+We let every rewiring operation consist of three sub-steps: 1) base node selection; 2) node selection for edge addition; and 3) node selection for edge removal. We precede the edge removal step by edge addition to suppress potential disconnections of the graph. The rewiring procedure is illustrated in Figure 1. For reducing the size of the decision space, we model each sub-step of the rewiring operation as a separate timestep in the MDP itself. Its elements are defined as:
+
+State. The state ${S}_{t}$ is the tuple ${S}_{t} = \left( {{G}_{t},{a}_{1},{a}_{2}}\right)$ , containing the graph ${G}_{t} = \left( {\mathcal{V},{\mathcal{E}}_{t}}\right)$ , the chosen base node ${a}_{1}$ , and the chosen addition node ${a}_{2}$ . The base node and addition node may be null $\left( \varnothing \right)$ depending on the rewiring operation sub-step.
+
+Actions. We specify three distinct action spaces ${\mathcal{A}}_{\widehat{t}}\left( {S}_{t}\right)$ , where $\widehat{t} \mathrel{\text{:=}} \left( \begin{array}{ll} t & \text{ mod }3 \end{array}\right)$ denotes the sub-step within a rewiring operation. Letting the degree of node $v$ be ${k}_{v}$ , they are defined as:
+
+$$
+{\mathcal{A}}_{0}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,\varnothing ,\varnothing }\right) }\right) = \left\{ {v \in \mathcal{V} \mid 0 < {k}_{v} < \left| \mathcal{V}\right| - 1}\right\} , \tag{1}
+$$
+
+$$
+{\mathcal{A}}_{1}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,{a}_{1},\varnothing }\right) }\right) = \left\{ {v \in \mathcal{V} \mid \left( {{a}_{1}, v}\right) \notin {\mathcal{E}}_{t}}\right\} , \tag{2}
+$$
+
+$$
+{\mathcal{A}}_{2}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,{a}_{1},{a}_{2}}\right) }\right) = \left\{ {v \in \mathcal{V} \mid \left( {{a}_{1}, v}\right) \in {\mathcal{E}}_{t} \smallsetminus \left( {{a}_{1},{a}_{2}}\right) }\right\} . \tag{3}
+$$
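+
+A direct reading of Equations (1)-(3) as code is sketched below, assuming a networkx graph; self-loops are excluded in the addition step as a natural simplification.
+
+```python
+import networkx as nx
+
+def action_space(G, substep, a1=None, a2=None):
+    """Admissible actions for the three rewiring sub-steps, per Eqs. (1)-(3)."""
+    n = G.number_of_nodes()
+    if substep == 0:    # base node: neither isolated nor connected to all others
+        return [v for v in G if 0 < G.degree(v) < n - 1]
+    if substep == 1:    # addition node: any non-neighbor of the base node
+        return [v for v in G if v != a1 and not G.has_edge(a1, v)]
+    if substep == 2:    # removal node: any neighbor of the base node except a2
+        return [v for v in G.neighbors(a1) if v != a2]
+    raise ValueError("substep must be 0, 1 or 2")
+
+G = nx.erdos_renyi_graph(8, 0.3, seed=0)
+print(action_space(G, 0))
+```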
+
+Transitions. Transitions are deterministic; the model $P\left( {{S}_{t} = {s}^{\prime } \mid {S}_{t - 1} = s,{A}_{t - 1} = {a}_{t - 1}}\right)$ transitions to state ${S}^{\prime }$ with probability 1, where:
+
+$$
+{S}^{\prime } = \left\{ \begin{array}{lll} \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1}}\right) ,{a}_{1},\varnothing }\right) , & \text{ if }3 \mid t + 2 & \text{ mark base node } \\ \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1} \cup \left( {{a}_{1},{a}_{2}}\right) }\right) ,{a}_{1},{a}_{2}}\right) , & \text{ if }3 \mid t & \text{ mark addition node \& add edge } \\ \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1} \smallsetminus \left( {{a}_{1},{a}_{3}}\right) }\right) ,\varnothing ,\varnothing }\right) , & \text{ if }3 \mid t + 1 & \text{ remove edge \& reset marked nodes } \end{array}\right. \tag{4}
+$$
+
+Rewards. The reward signal ${R}_{t}$ is proportional to the difference in the value of the objective function $\mathcal{F}$ before and after the graph reconfiguration. Furthermore, a key operational constraint in the domain we consider is that the network remains connected after the rewiring operations. Instead of running connectivity algorithms at every time-step to determine if a potential removed edge disconnects the graph, we encourage maintaining connectivity by giving a penalty $\bar{r} < 0$ at the end of the episode if the graph becomes disconnected. All rewards and penalties are provided at the final timestep $T$ , and no intermediate rewards are given. This enables the flexibility to discover long-term strategies that maximize the total cumulative reward of a sequence of reconfigurations rather than a single-step rewiring operation, even if the graph is disconnected during intermediate steps. Concretely, given an initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ , we define the reward function at timestep $t$ as:
+
+$$
+{R}_{t} = \left\{ \begin{array}{ll} {c}_{\mathcal{F}} \cdot \left( {\mathcal{F}\left( {G}_{t}\right) - \mathcal{F}\left( {G}_{0}\right) }\right) & \text{ if }t = T \land c\left( G\right) = 1, \\ \bar{r} & \text{ if }t = T \land c\left( G\right) \geq 2, \\ 0 & \text{ otherwise,} \end{array}\right. \tag{5}
+$$
+
+where $c\left( G\right)$ denotes the number of connected components of $G$ , and $\bar{r} < 0$ is the disconnection penalty. As the different objective functions may act on different scales, we use a reward scaling ${c}_{\mathcal{F}}$ , which we empirically establish for every objective function $\mathcal{F}$ .
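+
+The reward of Equation (5) only requires the initial and final graphs and a connectivity check; a minimal sketch (using networkx) follows.
+
+```python
+import networkx as nx
+
+def reward(G_t, G_0, t, T, objective, c_F=1.0, r_bar=-1.0):
+    """Terminal reward of Eq. (5): scaled objective gain if connected, penalty otherwise."""
+    if t < T:
+        return 0.0
+    if nx.number_connected_components(G_t) == 1:
+        return c_F * (objective(G_t) - objective(G_0))
+    return r_bar
+```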
+
+## 4 Reinforcement learning representation and parametrization
+
+In this section, we extend the graph representation and value function approximation parametrizations proposed in past work $\left\lbrack {{14},{16}}\right\rbrack$ for the problem of graph rewiring.
+
+### 4.1 Graph representation
+
+As the state and action spaces in network reconfiguration quickly become intractable for a sequence of rewiring operations, we require a graph representation that generalizes over similar states and actions. To this end, we use a GNN architecture that is based on a mean field inference method [46]. More specifically, we use a variant of the structure2vec [13] embedding method to represent every node ${v}_{i} \in \mathcal{V}$ in a graph $G = \left( {\mathcal{V},\mathcal{E}}\right)$ by an embedding vector ${\mu }_{i}$ . This embedding vector is constructed in an iterative process by linearly transforming feature vectors ${x}_{i}$ with a set of weights $\left\{ {{\theta }^{\left( 1\right) },{\theta }^{\left( 2\right) }}\right\}$ , aggregating the ${x}_{i}$ with the feature vectors of neighboring nodes ${v}_{j} \in {\mathcal{N}}_{i}$ , then applying the nonlinear Rectified Linear Unit (ReLU) activation function. Hence, at every step $l \in \left( {1,2,\ldots , L}\right)$ , embedding vectors are updated according to:
+
+$$
+{\mu }_{i}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {{\theta }^{\left( 1\right) }{x}_{i} + {\theta }^{\left( 2\right) }\mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}}}{\mu }_{j}^{\left( l\right) }}\right) , \tag{6}
+$$
+
+where all embedding vectors are initialized as ${\mu }_{i}^{\left( 0\right) } = \mathbf{0}$ . After $L$ iterations of feature aggregation, we obtain the node embedding vectors ${\mu }_{i} \equiv {\mu }_{i}^{\left( L\right) }$ . By summing the embedding vectors of nodes in a graph $G$ , we obtain its permutation-invariant embedding: $\mu \left( G\right) = \mathop{\sum }\limits_{{i \in \mathcal{V}}}{\mu }_{i}$ . These invariant graph embeddings represent part of the state that the RL agent observes. Aside from permutation invariance, such embeddings allow learned models to be applied to graphs of different sizes, potentially larger than those seen during training.
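+
+In matrix form, the update of Equation (6) and the graph-level readout can be written compactly; the NumPy sketch below uses randomly initialized weights purely for illustration.
+
+```python
+import numpy as np
+
+def embed(adj, X, theta1, theta2, L=3):
+    """L rounds of the Eq. (6) update followed by the sum readout."""
+    mu = np.zeros((adj.shape[0], theta1.shape[0]))
+    for _ in range(L):
+        mu = np.maximum(0.0, X @ theta1.T + (adj @ mu) @ theta2.T)
+    return mu, mu.sum(axis=0)   # node embeddings, permutation-invariant graph embedding
+
+rng = np.random.default_rng(0)
+n, d, p = 6, 4, 8
+adj = np.ones((n, n)) - np.eye(n)          # toy complete graph
+mu, graph_emb = embed(adj, rng.normal(size=(n, d)),
+                      rng.normal(size=(p, d)), rng.normal(size=(p, p)))
+print(mu.shape, graph_emb.shape)           # (6, 8) (8,)
+```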
+
+### 4.2 Value function approximation
+
+Due to the intractable size of the state-action space in graph reconfiguration tasks, we make use of neural networks to learn approximations of the state-action values $Q\left( {s, a}\right)$ [47]. More specifically, as the action spaces defined in Equation (1) are discrete, we use the DQN algorithm [36] to update the state-action values as follows:
+
+$$
+Q\left( {s, a}\right) \leftarrow Q\left( {s, a}\right) + \alpha \left\lbrack {r + \gamma \mathop{\max }\limits_{{{a}^{\prime } \in \mathcal{A}}}Q\left( {{s}^{\prime },{a}^{\prime }}\right) - Q\left( {s, a}\right) }\right\rbrack . \tag{7}
+$$
+
+The DQN algorithm uses an experience replay buffer [33] from which it samples previously observed transitions $\left( {s, a, r,{s}^{\prime }}\right)$ , and periodically synchronizes a target network with the parameters of the Q-network. The target network is used in the computation of the learning target for estimating the Q-value of the best action in the next timestep, making the learning more stable as the parameters are kept fixed between updates. We use three separate MLP parametrizations of the Q-function, each corresponding to one of the three sub-steps of the rewiring procedure:
+
+$$
+{Q}_{1}\left( {{S}_{t} = \left( {{G}_{t},\varnothing ,\varnothing }\right) ,{A}_{t}}\right) = {\theta }^{\left( 3\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 4\right) }\left\lbrack {{\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8a}
+$$
+
+$$
+{Q}_{2}\left( {{S}_{t} = \left( {{G}_{t},{a}_{1},\varnothing }\right) ,{A}_{t}}\right) = {\theta }^{\left( 5\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 6\right) }\left\lbrack {{\mu }_{{a}_{1}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8b}
+$$
+
+$$
+{Q}_{3}\left( {{S}_{t} = \left( {{G}_{t},{a}_{1},{a}_{2}}\right) ,{A}_{t}}\right) = {\theta }^{\left( 7\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 8\right) }\left\lbrack {{\mu }_{{a}_{1}} \oplus {\mu }_{{a}_{2}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8c}
+$$
+
+where $\oplus$ denotes concatenation. We highlight that, since the underlying structure2vec parameters shown in Equation (6) are shared, the combined set of the learnable parameters in our model is $\Theta = {\left\{ {\theta }^{\left( i\right) }\right\} }_{i = 1}^{8}$ . During validation and test time, we derive a greedy policy from the above learned Q-functions as $\arg \mathop{\max }\limits_{{a \in {\mathcal{A}}_{t}}}Q\left( {s, a}\right)$ . During training, however, we use a linearly decaying $\epsilon$ -greedy behavioral policy. We refer the reader to Appendix B for a detailed description of our implementation.
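+
+At decision time, the agent scores the admissible actions of the current sub-step with the corresponding Q-network and picks the argmax, or a random action with probability epsilon during training; a schematic sketch is shown below.
+
+```python
+import numpy as np
+
+def select_action(q_values, actions, epsilon=0.0, rng=None):
+    """Greedy (validation/test) or epsilon-greedy (training) action selection."""
+    rng = rng or np.random.default_rng()
+    if rng.random() < epsilon:
+        return actions[rng.integers(len(actions))]
+    return actions[int(np.argmax(q_values))]
+
+print(select_action(np.array([0.2, 1.3, -0.5]), ["node_0", "node_1", "node_2"]))
+```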
+
+## 5 Case study: network reconfiguration for intrusion defense
+
+In this section, we detail the specifics of our intrusion defense application scenario. We first present the definition of the objective functions we leverage, which act as proxy metrics for the difficulty of navigating the graph. Secondly, we detail the procedure we use for simulating attacker behavior during an intrusion, which will allow us to compare the pre- and post-rewiring costs of traversal.
+
+### 5.1 Objective functions for network obfuscation
+
+Our goal is to reconfigure the network so as to deter an attacker with partial knowledge of the network topology. Equivalently, we seek to modify the network so as to increase its surprise and render this prior knowledge obsolete, while keeping the network operational. A natural formalization of surprise is the concept of entropy, which measures the quantity of information encoded in a graph or, equivalently, its complexity.
+
+As measures of entropy, we investigate two graph quantities that are invariant to permutations in representation: the Shannon entropy of the degree distribution [2] and the Maximum Entropy Random Walk (MERW) [7] calculated from the spectrum of the adjacency matrix. The former captures the idea that graphs with heterogeneous degrees are less predictable than regular graphs, while the latter is related to random walks on the network. Whereas generic random walks generally do not maximize entropy [17], MERW uses a specific choice of transition probabilities that ensures every trajectory of fixed length is equiprobable, resulting in a maximal global entropy in the limit of infinite trajectory length. Although the local transition probabilities depend on the global structure of the graph, the generating process is local [7]. More formally, the two objective functions are formulated as follows: the Shannon entropy is defined as ${\mathcal{F}}_{\text{Shannon }}\left( G\right) = - \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}q\left( k\right) {\log }_{2}q\left( k\right)$ , where $q\left( k\right)$ is the degree distribution; MERW is defined as ${\mathcal{F}}_{\text{MERW }}\left( G\right) = \ln \lambda$ , where $\lambda$ is the largest eigenvalue of the adjacency matrix. In terms of time complexity, computing the Shannon entropy scales as $\mathcal{O}\left( n\right)$ . The calculation of MERW has instead an $\mathcal{O}\left( {n}^{3}\right)$ complexity due to the eigendecomposition required to compute the spectrum of the adjacency matrix.
+
+Figure 2: Illustrative example of the evaluation process for a network reconfiguration. (i) The graph is rewired by our approach, removing and adding the highlighted edges respectively. (ii) The leftmost nodes in the graph become unreachable by the attacker from the entry point marked E, and hence a path to them must be rediscovered by exploring the graph. (iii) To reach the nodes, the attacker pays a cost of 1 and 2 respectively for "unlocking" the previously unseen links along the highlighted paths. The total cost induced by the rewiring strategy is ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }} = 3$ .
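+
+The two objective functions above can be computed directly from a graph; a minimal sketch (using networkx and NumPy) is given below.
+
+```python
+import numpy as np
+import networkx as nx
+
+def shannon_entropy(G):
+    """Shannon entropy of the degree distribution q(k)."""
+    degree_counts = np.bincount([d for _, d in G.degree()])
+    q = degree_counts[degree_counts > 0] / G.number_of_nodes()
+    return float(-np.sum(q * np.log2(q)))
+
+def merw(G):
+    """MERW objective: log of the largest eigenvalue of the adjacency matrix."""
+    eigenvalues = np.linalg.eigvalsh(nx.to_numpy_array(G))
+    return float(np.log(eigenvalues.max()))
+
+G = nx.watts_strogatz_graph(30, 4, 0.1, seed=0)
+print(shannon_entropy(G), merw(G))
+```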
+
+It is worth noting that, in preliminary experiments, we have additionally investigated objective functions related to the Kolmogorov complexity. Also known as algorithmic complexity, this measure does not suffer from distributional dependencies [32]. As the Kolmogorov complexity is theoretically incomputable [10], we used graph compression algorithms such as bzip-2 [12] and Block Decomposition Methods [52] to approximate the Kolmogorov complexity. However, as these approximations depend on the representation of the graph such as the adjacency matrix, one has to consider many permutations of the graph representation. Compressing the representation for a sufficient number of permutations becomes infeasible even for small graphs. While the MERW objective function is also derived from the adjacency matrix through its largest eigenvalue, it does not suffer from this artifact as the spectrum of the adjacency matrix is invariant to permutations.
+
+### 5.2 Simulating and evaluating attacker behavior
+
+Given an initial connected and undirected graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ , we model the attacker as having entered the network through an arbitrary node $u \in \mathcal{V}$ , and having built a local map ${\mathcal{M}}_{0}^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{0}^{u}}\right)$ around this entry point, where ${\mathcal{V}}^{u} \subset \mathcal{V}$ is the set of nodes and ${\mathcal{E}}_{0}^{u} \subset {\mathcal{E}}_{0}$ is the set of edges in the map. The rewiring procedure transforms the initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ to the graph ${G}_{ * } = \left( {\mathcal{V},{\mathcal{E}}_{ * }}\right)$ , yielding the new local map ${\mathcal{M}}_{ * }^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{ * }^{u}}\right)$ that is unknown to the attacker. Our goal is to evaluate the effectiveness of the reconfiguration by measuring how "stale" the prior information of the attacker has become in comparison to the new map: if the attacker struggles to find its targets in the updated topology, the rewiring has succeeded.
+
+Let $\overline{{\mathcal{V}}^{u}}$ denote the set of nodes in the new local map ${\mathcal{M}}_{ * }^{u}$ that are unreachable through at least one trajectory composed of original edges ${E}_{0}^{u}$ in the old map. For each newly unreachable node ${v}_{i}$ , we measure the cost ${\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ of finding it with a forward random walk, in which the random walker only returns to the previous node if the current node has no other outgoing links. Every time the random walker encounters a link that is (i) not included in ${E}_{0}^{u}$ and (ii) not yet encountered during the random walk, the cost increases by one. This simulates the cost of having to explore the new graph topology due to the reconfigurations that were introduced. Finally, we let ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }} = \mathop{\sum }\limits_{{{v}_{i} \in {\mathcal{V}}^{u}}}{\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ denote the sum of the costs for all newly unreachable nodes, which is our metric for the effectiveness of a rewiring strategy. An illustrative example of a forward random walk and cost evaluation is shown in Figure 2, and a formal description is presented in Algorithm 1 in Appendix B to aid reproducibility.
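+
+The forward random walk cost can be simulated as below; this is a simplified sketch of the procedure (the full version is Algorithm 1 in Appendix B), assuming the attacker's known edges are given as a set of frozensets.
+
+```python
+import random
+import networkx as nx
+
+def forward_walk_cost(G_new, known_edges, entry, target, rng=random.Random(0)):
+    """Cost of reaching `target` from `entry`: +1 per newly seen link not in the old map."""
+    cost, seen, prev, current = 0, set(), None, entry
+    while current != target:
+        forward = [v for v in G_new.neighbors(current) if v != prev]
+        nxt = rng.choice(forward) if forward else prev   # only backtrack when stuck
+        edge = frozenset((current, nxt))
+        if edge not in known_edges and edge not in seen:
+            cost += 1
+            seen.add(edge)
+        prev, current = current, nxt
+    return cost
+
+G = nx.path_graph(5)                                     # toy rewired graph
+known = {frozenset((0, 1))}                              # attacker's stale map
+print(forward_walk_cost(G, known, entry=0, target=4))    # 3 newly discovered links
+```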
+
+## 6 Experiments
+
+### 6.1 Experimental setup
+
+Training and evaluation procedure. Our agent is trained on synthetic graphs of size $n = {30}$ that are generated using the graph models listed below. The given budget is ${15}\%$ of the total edges $m$ that are present in the initial graph. When performing the attacker simulations, the initial local map contains the subgraph induced by all nodes that are 2 hops away from the entry point, which is sampled without replacement from the node set. Training occurs separately for each graph model and objective $\mathcal{F}$ on a set of graphs ${\mathcal{G}}_{\text{train }}$ of size $\left| {\mathcal{G}}_{\text{train }}\right| = 6 \cdot {10}^{2}$ . Every 10 training steps, we measure the performance on a disjoint validation set ${\mathcal{G}}_{\text{validation }}$ of size $\left| {\mathcal{G}}_{\text{validation }}\right| = 2 \cdot {10}^{2}$ . We perform reconfiguration operations on a test set ${\mathcal{G}}_{\text{test }}$ of size $\left| {\mathcal{G}}_{\text{test }}\right| = {10}^{2}$ . To account for stochasticity, we train our models with 10 different seeds and present mean and confidence intervals accordingly. Further details about the experimental procedure (e.g., hyperparameter optimization) can be found in Appendix B.
+
+Synthetic graphs. We evaluate the approaches on graphs generated by the following models:
+
+Barabási-Albert (BA): A preferential attachment model where nodes joining the network are linked to $M$ nodes [5]. We consider values of ${M}_{ba} = 2$ and ${M}_{ba} = 1$ (abbreviated BA-2 and BA-1).
+
+Watts-Strogatz (WS): A model that starts with a ring lattice of nodes with degree $k$ . Each edge is rewired to a random node with probability $p$ , yielding characteristically small shortest path lengths [48]. We use $k = 4$ and $p = {0.1}$ .
+
+Erdős-Rényi (ER): A random graph model in which the existence of each edge is governed by a uniform probability $p$ [20]. We use $p = {0.15}$ .
+
+Real-world graphs. We also consider the real-world Unified Host and Network (UHN) dataset [45], which is a subset of network and host events from an enterprise network. We transform this dataset into a graph by identifying the bidirectional links between hosts appearing in these records, obtaining a graph with $n = {461}$ nodes and $m = {790}$ edges. Further information about this processing can be found in Appendix B.
+
+Baselines. We compare the approach against two baselines: Random, which acts in the same MDP as the agent but chooses actions uniformly, and Greedy, which is a shallow one-step search over all rewirings from a given configuration. The latter picks the rewiring that gives the largest improvement in $\mathcal{F}$ . As this search scales very poorly with graph size and budget, we only evaluate it on graphs of size 30 that are used to train the DQN as a comparison point for validating the learned strategies.
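+
+For clarity, a sketch of one step of the Greedy baseline is given below (assuming a networkx graph and an arbitrary objective); the exhaustive enumeration over node triples is what makes it impractical for larger graphs.
+
+```python
+import networkx as nx
+
+def greedy_rewiring_step(G, objective):
+    """Exhaustively score every admissible rewiring and return the best resulting graph."""
+    best_gain, best_graph = float("-inf"), None
+    base_value = objective(G)
+    for v_i in G:
+        for v_j in G:
+            if v_j == v_i or G.has_edge(v_i, v_j):
+                continue
+            for v_k in list(G.neighbors(v_i)):
+                H = G.copy()
+                H.add_edge(v_i, v_j)
+                H.remove_edge(v_i, v_k)
+                gain = objective(H) - base_value
+                if gain > best_gain:
+                    best_gain, best_graph = gain, H
+    return best_graph, best_gain
+
+G = nx.cycle_graph(8)
+H, gain = greedy_rewiring_step(G, lambda g: max(d for _, d in g.degree()))
+print(gain)
+```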
+
+### 6.2 Entropy maximization results
+
+We first consider the results for the maximization of the entropy-based objectives. The gains in entropy obtained by the methods on the held-out test set are shown in Table 1, while training curves are presented in Appendix A. The results demonstrate that the approach discovers better reconfiguration strategies than random rewiring in all cases, and even outperforms the greedy search in one setting. Furthermore, we evaluate the out-of-distribution generalization properties of the learned models along two dimensions: varying the graph size $n \in \left\lbrack {{10},{300}}\right\rbrack$ and the budget $b$ as a percentage of existing edges $\in \{ 5,{10},{15},{20},{25}\}$ . The results for this experiment (from which Greedy is excluded due to poor scalability) are shown in Figure 3. We find that, with the exception of the (BA, ${\mathcal{F}}_{\text{Shannon }}$ ) combination, the learned models generalize well to graphs substantially larger in size as well as varying rewiring budgets.
+
+Table 1: Entropy gains on test graphs with $n = {30}$ .
+
+| $\mathcal{F}$ | ${\mathcal{G}}_{\text{test }}$ | DQN | Greedy | Random |
+| --- | --- | --- | --- | --- |
+| $\Delta {\mathcal{F}}_{MERW}$ | BA-2 | ${0.197}_{\pm {0.002}}$ | ${0.225}_{\pm {0.003}}$ | $- {0.019}_{\pm {0.003}}$ |
+| | BA-1 | ${0.167}_{\pm {0.003}}$ | ${0.135}_{\pm {0.003}}$ | $- {0.045}_{\pm {0.004}}$ |
+| | ER | ${0.182}_{\pm {0.004}}$ | ${0.209}_{\pm {0.012}}$ | $- {0.005}_{\pm {0.003}}$ |
+| | WS | ${0.233}_{\pm {0.003}}$ | ${0.298}_{\pm {0.002}}$ | ${0.035}_{\pm {0.002}}$ |
+| $\Delta {\mathcal{F}}_{\text{Shannon }}$ | BA-2 | ${0.541}_{\pm {0.009}}$ | ${0.724}_{\pm {0.015}}$ | ${0.252}_{\pm {0.024}}$ |
+| | BA-1 | ${0.167}_{\pm {0.008}}$ | ${0.242}_{\pm {0.012}}$ | ${0.084}_{\pm {0.015}}$ |
+| | ER | ${0.101}_{\pm {0.012}}$ | ${0.400}_{\pm {0.023}}$ | $- {0.022}_{\pm {0.018}}$ |
+| | WS | ${0.926}_{\pm {0.016}}$ | ${1.116}_{\pm {0.022}}$ | ${0.567}_{\pm {0.036}}$ |
+
+### 6.3 Evaluating the reconfiguration impact
+
+We next evaluate the performance of the learned models for entropy maximization on the downstream task of disrupting the navigation of the graph by the attacker.
+
+
+
+Figure 3: Evaluation of the out-of-distribution generalization performance (higher is better) of the learned entropy maximization models as a function of graph size (top) and budget size (bottom). All models are trained on graphs with $n = {30}$ . In the bottom figure, the solid and dotted lines represent graphs with $n = {30}$ and $n = {100}$ respectively. Note the different $\mathrm{x}$ -axes used for ER graphs due to their high edge density.
+
+Synthetic graphs. The results for synthetic graphs are shown in Figure 4 in an out-of-distribution setting as a function of graph size, a regime in which the Greedy baseline is too expensive to scale. We find that the best proxy metric varies with the class of synthetic graphs - Shannon entropy performs better for BA graphs, MERW performs better for ER, and performance is similar for WS. Strong out-of-distribution generalization performance is observed for 3 out of 4 synthetic graph models. The results also show that, in the case of WS graphs, even though the performance in terms of the metric itself is high (as shown in Figure 3), the objective is not a suitable proxy for the downstream task in an out-of-distribution setting since the random walk cost decays rapidly. This might be explained by the fact that the graph topology is derived through a rewiring process of cliques of nodes of a given size.
+
+Real-world graphs. We also evaluate the models trained on synthetic graphs on the real-world graph constructed from the UHN dataset. Results are shown in Table 2. All but one of the trained models maintain a statistically significant random walk cost difference over the Random baseline. The best-performing models were trained on the (WS, ${\mathcal{F}}_{MERW}$ ) and (BA-1, ${\mathcal{F}}_{\text{Shannon }}$ ) combinations, obtaining total gains in random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }}$ of ${136}\%$ and ${125}\%$ respectively. The Greedy baseline is not applicable for a graph of this size.
+
+
+
+Figure 4: Evaluation of the learned rewiring strategies for entropy maximization on the downstream task of disrupting attacker navigation. All models are trained on graphs with $n = {30}$ . The random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }}$ (higher is better) is normalized by $n$ for meaningful comparisons. Note the different $\mathrm{x}$ -axis used for ER graphs due to their high edge density.
+
+
+## 7 Conclusion
+
+Summary. In this work, we have addressed the problem of graph reconfiguration for the optimization of a given property of a networked system, a computationally challenging problem given the generally large decision space. We have then formulated it as a Markov Decision Process that treats rewirings as sequential, and proposed an approach based on deep reinforcement learning and graph neural networks for efficient learning of network reconfigurations. As a case study, we have applied the proposed method to a cybersecurity scenario in which the task is to disrupt the navigation of potential intruders in a computer network. We have assumed that the goal of the intruder is to navigate the network given some knowledge about its topology. In order to disrupt the attack, we have designed a mechanism for increasing the level of surprise of the network through entropy maximization by means of network rewiring. More specifically, in terms of the objective of the optimization process, we have considered two entropy metrics that quantify the predictability of the network topology, and demonstrated that our method generalizes well on unseen graphs with varying rewiring budgets and different numbers of nodes. We have also validated the effectiveness of the learned models for increasing path lengths towards targeted nodes. The proposed approach outperforms the considered baselines on both synthetic and real-world graphs.
+
+Table 2: Total random walk cost of models applied to the real-world UHN graph $\left( {n = {461}, m = {790}}\right)$ .
+
+| | $\mathcal{F}$ | | ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{tot }}/n$ ( $\uparrow$ ) |
+| --- | --- | --- | --- |
+| DQN | ${\mathcal{F}}_{MERW}$ | BA-2 | ${3.087}_{\pm {0.225}}$ |
+| | | BA-1 | ${1.294}_{\pm {0.185}}$ |
+| | | ER | ${2.887}_{\pm {0.335}}$ |
+| | | WS | ${\mathbf{{4.888}}}_{\pm {0.568}}$ |
+| | ${\mathcal{F}}_{\text{Shannon }}$ | BA-2 | ${3.774}_{\pm {0.445}}$ |
+| | | BA-1 | ${\mathbf{{4.660}}}_{\pm {0.461}}$ |
+| | | ER | ${3.891}_{\pm {0.559}}$ |
+| | | WS | ${3.555}_{\pm {0.318}}$ |
+| Random | - | - | ${2.071}_{\pm {0.289}}$ |
+| Greedy | - | - | ∞ |
+
+Limitations and future work. An advantage of the proposed approach is that it does not require any knowledge of the exact position of the attacker as the traversal of the graph takes place. One may also consider a real-time scenario in which the network reconfiguration aims to "close off" the attacker given knowledge of their location, which may lead to a more efficient defense if such information is available. We have also adopted a simple model of attacker navigation (forward random walks). Different, more complex navigation strategies (e.g., targeting vulnerable machines) can also be considered. This knowledge might be integrated as part of the training process, for example by increasing the probability of rewiring of edges around these nodes through a corresponding reward structure (i.e., higher reward for protecting more sensitive nodes). More generally, we have identified an important application to cybersecurity, which might have a positive impact in safeguarding networks from malicious intrusions. With respect to potential dual-use, we note that the proposed defense mechanism cannot be exploited by attackers directly, since it requires knowledge of at least part of the underlying network topology.
+
+## References
+
+[1] Réka Albert and Albert-László Barabási. Statistical Mechanics of Complex Networks. Reviews of Modern Physics, 74:47-97, 2002.
+
+[2] Kartik Anand and Ginestra Bianconi. Entropy measures for networks: Toward an information theory of complex topologies. Physical Review E, 80(4), 2009. 5
+
+[3] Ross Anderson. Security Engineering: a Guide to Building Dependable Distributed Systems. John Wiley & Sons, 2020. 3
+
+[4] Abdullah Aydeger, Nico Saputro, Kemal Akkaya, and Mohammed Rahman. Mitigating crossfire attacks using SDN-based moving target defense. In LCN, pages 627-630. IEEE, 2016. 3
+
+[5] Albert-László Barabási and Réka Albert. Emergence of Scaling in Random Networks. Science, 286(5439):509-512, 1999. 7
+
+[6] Alina Beygelzimer, Geoffrey Grinstein, Ralph Linsker, and Irina Rish. Improving Network Robustness by Edge Modification. Physica A: Statistical Mechanics and its Applications, 357 (3-4):593-612,2005. 2
+
+[7] Zdzisław Burda, Jarosław Duda, Jean-Marc Luck, and Bartłomiej Wacław. Localization of the Maximal Entropy Random Walk. Physical Review Letters, 102(16), 2009. 5, 6
+
+[8] Gui-lin Cai, Bao-sheng Wang, Wei Hu, and Tian-zuo Wang. Moving target defense: state of the art and characteristics. Frontiers of Information Technology & Electronic Engineering, 17(11): 1122-1153, 2016. 2, 3
+
+[9] Thomas E Carroll, Michael Crouse, Errin W Fulp, and Kenneth S Berenhaut. Analysis of network address shuffling as a moving target defense. In ICC, pages 701-706. IEEE, 2014. 3
+
+[10] Gregory J. Chaitin. On the Length of Programs for Computing Finite Binary Sequences. Journal of the ACM, 13(4):547-569, 10 1966. 6
+
+[11] Hau Chan and Leman Akoglu. Optimizing network robustness by edge rewiring: a general framework. Data Mining and Knowledge Discovery, 30(5):1395-1425, 2016. 2
+
+[12] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, 2nd edition, 1991. 6
+
+[13] Hanjun Dai, Bo Dai, and Le Song. Discriminative Embeddings of Latent Variable Models for Structured Data. In ICML, volume 6, pages 3970-3986, 2016. 4, 14
+
+[14] Hanjun Dai, Hui Li, Tian Tian, Huang Xin, Lin Wang, Zhu Jun, and Song Le. Adversarial Attack on Graph Structured Data. In ICML, volume 3, pages 1799-1808, 2018. 1, 2, 4, 14
+
+[15] George B. Dantzig, D. Ray Fulkerson, and Selmer Johnson. Solution of a large scale traveling salesman problem. Operations Research, pages 393-410, 1954. 1
+
+[16] Victor-Alexandru Darvariu, Stephen Hailes, and Mirco Musolesi. Goal-directed graph construction using reinforcement learning. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 477(2254), 2021. 1, 2, 4, 14
+
+[17] Jarek Duda. From Maximal Entropy Random Walk to Quantum Thermodynamics. In Journal of Physics: Conference Series, volume 361, 2012. 6
+
+[18] Jack Edmonds and Richard M Karp. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. Journal of the Association for Computing Machinery, 19(2):248-264, 1972.1
+
+[19] Sean Ekins, J. Dana Honeycutt, and James T. Metz. Evolving molecules using multi-objective optimization: Applying to ADME/Tox. Drug Discovery Today, 15(11-12):451-460, 6 2010. 1
+
+[20] Paul Erdős and Alfréd Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17-60, 1960. 7
+
+[21] Arpita Ghosh and Stephen Boyd. Growing Well-connected Graphs. Proceedings of the 45th IEEE Conference on Decision & Control, 2006. 1
+
+[22] Xavier Glorot and Yoshua Bengio. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Journal of Machine Learning Research, volume 9, pages 249-256, 2010. 14
+
+[23] Nils Goldbeck, Panagiotis Angeloudis, and Washington Y. Ochieng. Resilience assessment for interdependent urban infrastructure systems using dynamic network flow models. Reliability Engineering and System Safety, 188:62-79, 8 2019. 1
+
+[24] Roger Guimerà, Stefano Mossa, Adrian Turtschi, and LA Nunes Amaral. The worldwide air transportation network: Anomalous centrality, community structure, and cities' global roles. Proceedings of the National Academy of Sciences, 102(22), 2005. 1
+
+[25] Petter Holme, Beom Jun Kim, Chang No Yoon, and Seung Kee Han. Attack Vulnerability of Complex Networks. Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, 65(5):14, 2002. 3
+
+[26] Keman Huang, Michael Siegel, and Stuart Madnick. Systematically Understanding the Cyber Attack Business: A Survey. ACM Computing Surveys (CSUR), 51(4):1-36, 2018. 3
+
+[27] Nwokedi Idika and Bharat Bhargava. A Kolmogorov Complexity Approach for Measuring Attack Path Complexity. In IFIP Advances in Information and Communication Technology, volume 354 AICT, pages 281-292, 2011. 2
+
+[28] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, volume 1, pages 448-456, 2015. 14
+
+[29] Steven Kearnes, Kevin Mccloskey, Marc Berndl, Vijay Pande, and Patrick Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30:595-608, 2016. 1
+
+[30] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In NeurIPS, 2017. 1
+
+[31] Diederik P Kingma and Jimmy Lei Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015.14
+
+[32] Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Texts in Computer Science. Springer International Publishing, 2019. 6
+
+[33] Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3):293-321, 1992. 5
+
+[34] Yao Ma, Suhang Wang, Lingfei Wu, and Jiliang Tang. Attacking Graph Convolutional Networks via Rewiring. In ICLR, 2020. 2
+
+[35] Madhav V Marathe, Heinz Breu, Harry B Hunt III, Shankar S Ravi, and Daniel J Rosenkrantz. Simple heuristics for unit disk graphs. Networks, 25(2):59-68, 3 1995. ISSN 1097-0037. 1
+
+[36] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. 1,5
+
+[37] Mark E.J. Newman. Networks. Oxford University Press, 2018. 2
+
+[38] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In NeurIPS, volume 32, 2019. 14
+
+[39] Douglas E.V. Pires, Tom L. Blundell, and David B. Ascher. pkCSM: Predicting small-molecule pharmacokinetic and toxicity properties using graph-based signatures. Journal of Medicinal Chemistry, 58(9):4066-4072, 5 2015. 1
+
+[40] Tie Qiu, Jie Liu, Weisheng Si, and Dapeng Oliver Wu. Robustness optimization scheme with multi-population co-evolution for scale-free wireless sensor networks. IEEE/ACM Transactions on Networking, 27(3):1028-1042, 2019. 1
+
+[41] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. 1
+
+[42] Christian M Schneider, André A Moreira, José S Andrade, Shlomo Havlin, and Hans J Herrmann. Mitigation of Malicious Attacks on Networks. Proceedings of the National Academy of Sciences, 108(10):3838-3841, 3 2011. 2
+
+[43] Bruce Schneier. Secrets and Lies: Digital Security in a Networked World. John Wiley & Sons, 2015. 3
+
+[44] Sailik Sengupta, Ankur Chowdhary, Dijiang Huang, and Subbarao Kambhampati. Moving target defense for the placement of intrusion detection systems in the cloud. In International Conference on Decision and Game Theory for Security, pages 326-345. Springer, 2018. 3
+
+[45] Melissa J. M. Turcotte, Alexander D. Kent, and Curtis Hash. Unified Host and Network Data Set, chapter 1, pages 1-22. World Scientific, 2018. 7, 14
+
+[46] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008. 4
+
+[47] Christopher J C H Watkins and Peter Dayan. Q-Learning. Machine Learning, 8:279-292, 1992. 5
+
+[48] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440, 1998. 7
+
+[49] Tian Xie and Jeffrey C Grossman. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Physical Review Letters, 120(14), 2018. 1
+
+[50] Jiaxuan You, Bowen Liu, Rex Ying, Vijay Pande, and Jure Leskovec. Graph Convolutional Policy Network for Goal-Directed Molecular Graph Generation. NeurIPS, 31, 2018. 1
+
+[51] Kimberly Zeitz, Michael Cantrell, Randy Marchany, and Joseph Tront. Designing a micro-moving target ipv6 defense for the internet of things. In IoTDI, pages 179-184. IEEE, 2017. 3
+
+[52] Hector Zenil, Santiago Hernández-Orozco, Narsis A Kiani, Fernando Soler-Toscano, Antonio Rueda-Toicen, and Jesper Tegnér. A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity. Entropy, 20(8):605, 2018. ISSN 10994300. 6
+
+[53] Kang Zhao, Kevin Scheibe, Jennifer Blackhurst, and Akhil Kumar. Supply Chain Network Robustness Against Disruptions: Topological Analysis, Measurement, and Optimization. IEEE Transactions on Engineering Management, 66(1):127-139, 2018. 1
+
+
+## A Additional results
+
+Computational cost of Greedy baseline. To demonstrate the poor scalability of the Greedy baseline, as discussed in Section 6.1, we perform an additional experiment that measures the wall clock time taken by the different approaches to complete a sequence of rewirings. Results are shown in Figure 5 for Barabási-Albert graphs $\left( {{M}_{ba} = 2}\right)$ as a function of graph size. Beyond graphs of size $n = {150}$ , we extrapolate by fitting polynomials of degree 5 and 4 for ${\mathcal{F}}_{\text{MERW }}$ and ${\mathcal{F}}_{\text{Shannon }}$ respectively.
+
+
+
+Figure 5: Wall clock time needed to complete a sequence of rewirings by the Greedy and DQN methods on Barabási-Albert graphs $\left( {{M}_{ba} = 2}\right)$ with a rewiring budget of 15%.
+
+The time needed to evaluate the Greedy baseline increases rapidly as the graph grows, while the trained DQN is computationally very efficient at deployment time. Hence, the Greedy baseline is not feasible beyond very small graphs, but it serves as a useful comparison point.
+
+Learning curves. Learning curves are shown in Figure 6, which captures the performance on the held-out validation set ${\mathcal{G}}_{\text{validation }}$ . We note that in many cases (e.g., BA $/{\mathcal{F}}_{MERW}$ ) the performance averaged across all seeds is misleadingly low compared to the baselines, an artifact of the variability of the validation set performance. We also show the performance of the worst-performing seed (dotted) and best-performing seed (dashed) to clarify this.
+
+
+
+Figure 6: MERW (upper half) and Shannon entropy (lower half) increase on the held-out validation set ${\mathcal{G}}_{\text{validation }}$ during training of the DQN algorithm. The dotted and dashed lines for the DQN algorithm represent the worst-performing and best-performing seeds respectively. Random and Greedy rewiring performance are shown for comparison. Graphs are of size $n = {30}$ and the rewiring budget is ${15}\%$ of the number of existing edges.
+
+## B Implementation and training details
+
+Codebase. The code for reproducing the results of this work will be made available in a future version. The DQN implementation we use is bootstrapped from the RNet-DQN codebase ${}^{1}$ in [16], which itself is based on the RL-S2V ${}^{2}$ implementation from [14] and S2V GNN ${}^{3}$ from [13]. Our neural network architecture is implemented with the deep learning library PyTorch [38].
+
+Infrastructure and runtimes. Experiments were carried out on a cluster of 8 machines, each equipped with 2 Intel Xeon E5-2630 v3 processors and 128GB RAM. On this infrastructure, all experiments reported in this paper took approximately 8 days to complete.
+
+MDP parameters. To improve numerical stability we scale the reward signals in Equation 5 by ${c}_{\mathcal{F}} = {10}^{1}$ for MERW-DQN and ${c}_{\mathcal{F}} = {10}^{2}$ for Shannon-DQN. We set the disconnection penalty ${\bar{r}}_{n} = - {10.0}$ . As we consider a finite horizon MDP, we set the discount factor $\gamma = 1$ .
+
+Model architectures and hyperparameters. In all experiments the same neural network architectures and hyperparameters are used in the three stages of the rewiring procedure as described in Section 3. The final MLPs described in Equation 8 contain a hidden layer of 128 units and a single-unit output layer representing the estimated state-action value. Batch normalization [28] is applied to the input of the final layer.
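+
+As a concrete illustration, the following is a minimal PyTorch sketch of one such Q-value head; the class and argument names are ours, and anything beyond the layer sizes stated above is an assumption rather than the authors' exact implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class QValueHead(nn.Module):
+    """Sketch of a final MLP from Equation 8: concatenated embeddings -> scalar Q-value."""
+    def __init__(self, in_dim: int, hidden_dim: int = 128):
+        super().__init__()
+        self.hidden = nn.Linear(in_dim, hidden_dim)   # 128-unit hidden layer
+        self.bn = nn.BatchNorm1d(hidden_dim)          # batch norm on the input of the final layer
+        self.out = nn.Linear(hidden_dim, 1)           # single-unit output: estimated Q(s, a)
+
+    def forward(self, z: torch.Tensor) -> torch.Tensor:
+        # z: concatenation of node and graph embeddings, shape (batch, in_dim)
+        return self.out(self.bn(torch.relu(self.hidden(z))))
+```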
+
+Table 3: Optimal initial learning rate ${\alpha }_{0}$ , message passing rounds $L$ and graph embedding dimension $\dim \left( {\mu }_{i}\right)$ found by a hyperparameter search.
+
+| DQN | G | ${\alpha }_{0}$ $\left\lbrack {10}^{-4}\right\rbrack$ | $L$ | $\dim \left( {\mu }_{i}\right)$ |
+| --- | --- | --- | --- | --- |
+| ${\mathcal{F}}_{MERW}$ | BA-2 | 5 | 3 | 128 |
+| | BA-1 | 5 | 6 | 128 |
+| | ER | 5 | 4 | 128 |
+| | WS | 10 | 6 | 128 |
+| ${\mathcal{F}}_{\text{Shannon }}$ | BA-2 | 10 | 3 | 64 |
+| | BA-1 | 5 | 6 | 64 |
+| | ER | 1 | 4 | 64 |
+| | WS | 10 | 6 | 64 |
+
+We performed an initial hyperparameter grid search on BA-2 graphs over the following search space: the initial learning rate ${\alpha }_{0} \in \{ 5,{10},{50}\} \cdot {10}^{-4}$ for MERW-DQN and ${\alpha }_{0} \in \{ 1,5,{10}\} \cdot {10}^{-4}$ for Shannon-DQN; the number of message-passing rounds $L \in \{ 3,4\}$ ; the latent dimension of the graph embedding $\dim \left( {\mu }_{i}\right) \in \{ {32},{64},{128}\}$ . Due to computational budget constraints, for BA-1, ER and WS graphs we only performed a hyperparameter search for the initial learning rate ${\alpha }_{0}$ over the same values as for BA-2 graphs, while setting the number of message passing rounds equal to the graph diameter $L = D$ and bootstrapping the latent dimension from the hyperparameter search on BA-2 graphs. Table 3 presents an overview of the optimal hyperparameter values used for the results presented in the paper.
+
+Training details. We train the models for 120,000 steps, and let the exploration parameter $\varepsilon$ decay linearly from $\varepsilon = {1.0}$ to $\varepsilon = {0.1}$ in the first 40,000 training steps, after which it is kept constant. The network parameters are initialized using Glorot initialization [22] and updated using the Adam optimizer [31]. We use a batch size of 50 graphs. The replay memory contains 12,000 instances and replaces the oldest entry when adding a new transition. The target network parameters are updated every 50 training steps.
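+
+For reference, a minimal sketch of the linear exploration schedule described above; the function name and signature are ours.
+
+```python
+def epsilon(step: int, eps_start: float = 1.0, eps_end: float = 0.1, decay_steps: int = 40_000) -> float:
+    """Linearly decaying epsilon for the epsilon-greedy behavior policy; constant after decay_steps."""
+    if step >= decay_steps:
+        return eps_end
+    return eps_start + (eps_end - eps_start) * step / decay_steps
+```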
+
+Graphs. The real-world UHN dataset [45] contains the network events from day 2 of the approximately 90 days of events collected on the Los Alamos National Laboratory enterprise network, and is pre-processed as follows: firstly, we build a directed graph in which nodes represent unique hosts in the dataset and directed links are constructed from the events between the hosts. Secondly, we filter the graph by removing all unidirectional links, transform it into an undirected graph, and keep only the largest connected component. Thirdly, we exclude nodes that have many single-degree neighbors, such as email servers, and furthermore only retain nodes with degree $\leq {80}$ . The graph obtained by this procedure is illustrated in Figure 7. We additionally note that, in all downstream experiments, graphs that are disconnected after rewiring are not considered in any of the evaluations.
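+
+A minimal networkx sketch of these filtering steps, assuming the raw events have already been converted into a directed host-to-host graph; the helper name and the simplified treatment of high-degree hubs are ours.
+
+```python
+import networkx as nx
+
+def preprocess_uhn(directed: nx.DiGraph, max_degree: int = 80) -> nx.Graph:
+    """Sketch of the UHN preprocessing described above (hypothetical helper)."""
+    # Keep only bidirectional links, i.e. edges that appear in both directions.
+    mutual = [(u, v) for u, v in directed.edges() if directed.has_edge(v, u)]
+    g = nx.Graph(mutual)
+    # Restrict to the largest connected component of the undirected graph.
+    g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
+    # Drop very high-degree hubs (e.g. servers with many single-degree neighbors).
+    g.remove_nodes_from([v for v, k in dict(g.degree()).items() if k > max_degree])
+    # Removing hubs may disconnect the graph; keep the largest component again.
+    g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
+    return g
+```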
+
+Reconfiguration impact evaluation. The algorithm we use for measuring the random walk cost ${\mathcal{C}}_{RW}$ induced by a sequence of rewirings is shown in Algorithm 1. We sample without replacement ${N}_{\text{synthetic }} = \min \{ n,{30}\}$ and ${N}_{\mathrm{{UHN}}} = n$ entry nodes for synthetic graphs and the UHN graph, respectively. After rewiring, we find the nodes that have become unreachable through at least one trajectory composed of the edges of the old map. We then perform a single random walk per missing target node as described in Section 5.2 and Algorithm 1.
+
+---
+
+${}^{1}$ https://github.com/VictorDarvariu/graph-construction-rl
+
+${}^{2}$ https://github.com/Hanjun-Dai/graph_adversarial_attack
+
+${}^{3}$ https://github.com/Hanjun-Dai/pytorch_structure2vec
+
+---
+
+Figure 7: The graph derived from the Unified Host and Network (UHN) data set. It contains $n = {461}$ nodes, $m = {790}$ edges, and has a diameter $D = {18}$ .
+
+Algorithm 1: Random walk cost evaluation
+
+---
+
+Data: ${G}_{ * }\left( {\mathcal{V},{\mathcal{E}}_{ * }}\right), u, {v}_{i} \in \mathcal{V}, {E}_{0}^{u} \subset {\mathcal{E}}_{0}$ ; $\;//\;u,{v}_{i}$ are the entry and target node respectively
+
+${\mathcal{C}}_{\mathrm{{RW}}} \leftarrow 0$ ;
+
+${\mathcal{E}}_{\text{visited }} \leftarrow {E}_{0}^{u}$ ; $\;//$ all edges of the old map are already known
+
+${v}_{t - 1},{v}_{t} \leftarrow u$ ; $\;//\;{v}_{t - 1},{v}_{t}$ are the previous and current position respectively
+
+${v}_{t + 1} \leftarrow \mathcal{U}\left( {\mathcal{N}}_{u}\right)$ ; $\;//\;{v}_{t + 1}$ is the next position, drawn uniformly from the neighbors of $u$
+
+while ${v}_{t + 1} \neq {v}_{i}$ do
+
+ ${e}_{t} \leftarrow \left( {{v}_{t},{v}_{t + 1}}\right)$ ;
+
+ if ${e}_{t} \notin {\mathcal{E}}_{\text{visited }}$ then
+
+  ${\mathcal{C}}_{\mathrm{{RW}}} \leftarrow {\mathcal{C}}_{\mathrm{{RW}}} + 1$ ;
+
+  add ${e}_{t}$ to ${\mathcal{E}}_{\text{visited }}$ ;
+
+ end
+
+ if ${k}_{{v}_{t + 1}} = 1$ then
+
+  ${v}_{t - 1} \leftarrow {v}_{t + 1}$ ; $\;//$ reverse the random walk at a dead end
+
+ else
+
+  ${v}_{t - 1} \leftarrow {v}_{t}$ ;
+
+  ${v}_{t} \leftarrow {v}_{t + 1}$ ;
+
+ end
+
+ ${v}_{t + 1} \leftarrow \mathcal{U}\left( {{\mathcal{N}}_{{v}_{t}} \smallsetminus {v}_{t - 1}}\right)$ ; $\;//$ choose the next node uniformly at random, excluding the previous one
+
+end
+
+${e}_{t} \leftarrow \left( {{v}_{t},{v}_{t + 1}}\right)$ ;
+
+if ${e}_{t} \notin {\mathcal{E}}_{\text{visited }}$ then
+
+ ${\mathcal{C}}_{\mathrm{{RW}}} \leftarrow {\mathcal{C}}_{\mathrm{{RW}}} + 1$ ;
+
+end
+
+---
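+
+For readers who prefer an executable form, a minimal Python/networkx sketch of Algorithm 1 follows; the function and variable names are ours, and an undirected, connected rewired graph without self-loops is assumed.
+
+```python
+import random
+import networkx as nx
+
+def random_walk_cost(g_star: nx.Graph, old_map_edges, entry, target) -> int:
+    """Cost of rediscovering `target` from `entry` by a forward random walk.
+    Edges in the attacker's old map are free; each newly seen edge costs 1."""
+    visited = {frozenset(e) for e in old_map_edges}
+    cost = 0
+    prev, cur = entry, entry
+    nxt = random.choice(list(g_star.neighbors(entry)))
+    while nxt != target:
+        edge = frozenset((cur, nxt))
+        if edge not in visited:
+            cost += 1
+            visited.add(edge)
+        if g_star.degree(nxt) == 1:       # dead end: reverse the walk
+            prev = nxt
+        else:
+            prev, cur = cur, nxt
+        nxt = random.choice([v for v in g_star.neighbors(cur) if v != prev])
+    if frozenset((cur, nxt)) not in visited:
+        cost += 1
+    return cost
+```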
+
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b50dd1f29c21c94e839ddd9d36a13d0df6fa3678
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/-vshFhHpKhX/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,259 @@
+§ DYNAMIC NETWORK RECONFIGURATION FOR ENTROPY MAXIMIZATION USING DEEP REINFORCEMENT LEARNING
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+A key problem in network theory is how to reconfigure a graph in order to optimize a quantifiable objective. Given the ubiquity of networked systems, such work has broad practical applications in a variety of situations, ranging from drug and material design to telecommunications. The large decision space of possible reconfigurations, however, makes this problem computationally intensive. In this paper, we cast the problem of network rewiring for optimizing a specified structural property as a Markov Decision Process (MDP), in which a decision-maker is given a budget of modifications that are performed sequentially. We then propose a general approach based on the Deep Q-Network (DQN) algorithm and graph neural networks (GNNs) that can efficiently learn strategies for rewiring networks. We then discuss a cybersecurity case study, i.e., an application to the computer network reconfiguration problem for intrusion protection. In a typical scenario, an attacker might have a (partial) map of the system they plan to penetrate; if the network is effectively "scrambled", they would not be able to navigate it since their prior knowledge would become obsolete. This can be viewed as an entropy maximization problem, in which the goal is to increase the surprise of the network. Indeed, entropy acts as a proxy measurement of the difficulty of navigating the network topology. We demonstrate the general ability of the proposed method to obtain better entropy gains than random rewiring on synthetic and real-world graphs while being computationally inexpensive, as well as being able to generalize to larger graphs than those seen during training. Simulations of attack scenarios confirm the effectiveness of the learned rewiring strategies.
+
+§ 1 INTRODUCTION
+
+A key problem in network theory is how to rewire a graph in order to optimize a given quantifiable objective. Addressing this problem might have applications in several domains, given that many systems of practical interest can be represented as graphs $\left\lbrack {{23},{24},{29},{49},{50}}\right\rbrack$ . A large body of literature studies how to construct and design networks in order to optimize some quantifiable goal, such as robustness in supply chain and wireless sensor networks [40, 53] or ADME properties of molecules $\left\lbrack {{19},{39}}\right\rbrack$ . Given the intractable number of distinct configurations of even relatively small networks, optimizing these structural and topological properties is generally a non-trivial task that has been approached from various angles in graph theory $\left\lbrack {{15},{18}}\right\rbrack$ and also studied from heuristic perspectives $\left\lbrack {{21},{35}}\right\rbrack$ . Exact solutions are too computationally expensive to obtain, and heuristic methods are generally sub-optimal and do not generalize well to unseen instances.
+
+The adoption of graph neural network (GNN) [41] and deep reinforcement learning (RL) [36] techniques has led to promising approaches to the problem of optimizing graph processes or structure $\left\lbrack {{14},{16},{30}}\right\rbrack$ . A fundamental structural modification is rewiring, in which edges (e.g., links in a computer network) are reconfigured such that the topology is changed while their total number remains constant. The problem of rewiring to optimize a structural property has not been studied in the literature.
+
+In this paper, we present a solution to the network rewiring problem for optimizing a specified structural property. We formulate this task as a Markov Decision Process (MDP), in which a decision-maker is given a budget of rewiring operations that are performed sequentially. We then propose an approach based on the Deep Q-Network (DQN) algorithm and GNNs that can efficiently learn strategies for rewiring networks. We evaluate the method by means of a realistic cybersecurity case study. In particular, we assume a scenario in which an attacker has entered a computer network and aims to reach a particular node of interest. We also assume that the attacker has partial knowledge of the underlying graph topology, which is used to reach a given target inside the network. The goal is to learn a rewiring process for modifying the structure of the graph so as to disrupt the capability of the attacker to reach its target, all the while keeping the network operational. This can be seen as an example of moving target defense (MTD) [8]. We frame the solution as an entropy maximization problem, in which the goal is to increase the surprise of the network in order to disrupt the navigation of the attacker inside it. Indeed, entropy acts as a proxy measurement of the difficulty of this task, with an increase in entropy corresponding to an increase in its difficulty. In particular, we consider two measures of network entropy, namely the Shannon entropy and the Maximal Entropy Random Walk (MERW), and we compare their effectiveness.
+
+More specifically, the contributions of this paper can be summarized as follows:
+
+ * We formulate the problem of graph rewiring so as to maximize a global structural property as an MDP, in which a central decision-maker is given a certain budget of rewiring operations that are performed sequentially. We formulate an approach that combines GNN architectures and the DQN algorithm to learn an optimal set of rewiring actions by trial-and-error;
+
+ * We present an extensive case study of the proposed approach in the context of defense against network intrusion by an attacker. We show that our method is able to obtain better gains in entropy than random rewiring, while scaling to larger networks than a local greedy search, and generalizing to larger out-of-distribution graphs in some cases. Furthermore, we demonstrate the effectiveness of this approach by simulating the movement of an attacker in the network, finding that indeed the applied modifications increase the difficulty for the attacker to reach its targets in both synthetic and real-world graph topologies.
+
+§ 2 RELATED WORK
+
+RL for graph reconfiguration. Recently, an increasing amount of research has been conducted on the use of reinforcement learning in graph reconfiguration. In particular, in [14] a solution based on reinforcement learning for modifying graphs with the aim of attacking both node and graph classification is presented. In addition, the authors briefly introduce a defense method using adversarial training and edge removal, which slightly decreases the attack rate against their proposed classifier by $1\%$ . This defense strategy is however only effective on the attack strategy it is trained on and does not generalize. Instead, the authors of [34] use a reinforcement learning approach to learn an attack strategy for neural network classifiers of graph topologies based on edge rewiring, and show that they are able to achieve misclassification with changes that are less noticeable compared to edge and vertex removal and addition. Our paper focuses on a different problem that does not involve classification tasks, but the maximization of a given network objective function. In [16] reinforcement learning techniques are applied to the problem of optimizing the robustness of a graph by means of graph construction; the authors show that their proposed method is able to outperform existing techniques and generalize to different graphs. In the present work, we optimize a global structural property through rewiring instead of constructing a graph through edge addition.
+
+Graph robustness and attacks. A related research area is the optimization of graph robustness [37], which denotes the capacity of a graph to withstand targeted attacks and random failures. [42] demonstrates how small changes in complex networks such as an electricity system or the Internet can improve their robustness against malicious attacks. [6] investigates several heuristic reconfiguration techniques that aim to improve graph robustness without substantially modifying the network structure, and finds that preferential rewiring is superior to random rewiring. The authors of [11] extend this study to a framework that can accommodate multiple rewiring strategies and objectives. Several works have used information-based complexity metrics in the context of network defense or attack strategies: [27] proposes a network security metric to assess network vulnerability by measuring the Kolmogorov complexity of effective attack paths. The underlying reasoning is that the more complex attack paths have to be in order to harm a network, the less vulnerable a network is to external attacks. Furthermore, [25] investigates the vulnerability of complex networks, finding that attacks based on edge and vertex removal are substantially more effective when the network properties are recomputed after each attack.
+
+
+Figure 1: Illustrative example of the MDP timesteps comprising a single rewiring operation. The agent observes an initial state ${S}_{0} = \left( {{G}_{0},\varnothing ,\varnothing }\right)$ (first panel), from which it then selects a base node ${v}_{1} = \{ 1\}$ that will be rewired (second panel). Given the new state that contains the initial graph and the selected base node, the agent selects a target node ${v}_{2} = \{ 5\}$ to which an edge will be added (third panel). Finally, a third node ${v}_{3} = \{ 0\}$ is selected from the neighborhood of ${v}_{1} = \{ 1\}$ and the corresponding edge is removed (last panel). After a sequence of $b$ rewiring operations, the agent will receive a reward proportional to the improvement in the objective function $\mathcal{F}$ .
+
+Cybersecurity and network defense. In the last decade and in recent years in particular, a drastic surge in cyberattacks on governmental and industrial organizations has exposed the imminent vulnerability of global society to cyberthreats [43]. The targeted digital systems are generally structured as a network in which entities in the system communicate and share resources among each other. Typically, attackers seek to gain unauthorized access to the underlying network through an entry point and search for highly valuable nodes in order to infect these digital systems with malicious software such as viruses, ransomware and spyware [3], enabling them to extract sensitive information or control the functioning of the network [26]. Moving target defense (MTD) is a cybersecurity defense technique by which a network and the underlying software are dynamically changed to counteract attack strategies [4, 8, 9, 44, 51]. Most existing MTD techniques involve NP-hard problems, and approximate or heuristic solutions are often impractical [8]. We note that while most studies are applied to specific software architectures, which prevents them from being applied effectively to large-scale deployments, in this work we focus on modeling this problem from an abstract, infrastructure-agnostic perspective.
+
+§ 3 GRAPH REWIRING AS AN MDP
+
+§ 3.1 PROBLEM STATEMENT
+
+We define a graph (network) as $G = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\}$ is the set of $n = \left| \mathcal{V}\right|$ vertices (nodes) and $\mathcal{E} = \left\{ {{e}_{1},\ldots ,{e}_{m}}\right\}$ is the set of $m = \left| \mathcal{E}\right|$ edges (links). A rewiring operation $\gamma \left( {G,{v}_{i},{v}_{j},{v}_{k}}\right)$ transforms the graph $G$ by adding the non-edge $\left( {{v}_{i},{v}_{j}}\right)$ and removing the existing edge $\left( {{v}_{i},{v}_{k}}\right)$ ; we denote the set of all such operations by $\Gamma$ . Given a budget $b \propto m$ of rewiring operations, and a global objective function $\mathcal{F}\left( G\right)$ to be maximized, the goal is to find the set of unique rewiring operations out of ${\Gamma }^{b}$ such that the resulting graph ${G}^{\prime }$ maximizes $\mathcal{F}\left( {G}^{\prime }\right)$ .
+
+Since the size of the set of possible rewirings grows rapidly with the graph size, we cast this problem as a sequential decision-making process, which is detailed below.
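+
+To make the operation concrete, a minimal networkx sketch of a single rewiring operation $\gamma$ follows; the function name is ours, and the assertions simply restate the non-edge/edge preconditions.
+
+```python
+import networkx as nx
+
+def rewire(g: nx.Graph, v_i, v_j, v_k) -> nx.Graph:
+    """gamma(G, v_i, v_j, v_k): add the non-edge (v_i, v_j) and remove the existing edge (v_i, v_k)."""
+    assert not g.has_edge(v_i, v_j) and g.has_edge(v_i, v_k)
+    h = g.copy()
+    h.add_edge(v_i, v_j)
+    h.remove_edge(v_i, v_k)
+    return h
+```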
+
+§ 3.2 MDP FRAMEWORK
+
+We let every rewiring operation consist of three sub-steps: 1) base node selection; 2) node selection for edge addition; and 3) node selection for edge removal. We perform the edge addition step before edge removal to suppress potential disconnections of the graph. The rewiring procedure is illustrated in Figure 1. To reduce the size of the decision space, we model each sub-step of the rewiring operation as a separate timestep in the MDP itself. Its elements are defined as:
+
+State. The state ${S}_{t}$ is the tuple ${S}_{t} = \left( {{G}_{t},{a}_{1},{a}_{2}}\right)$ , containing the graph ${G}_{t} = \left( {\mathcal{V},{\mathcal{E}}_{t}}\right)$ , the chosen base node ${a}_{1}$ , and the chosen addition node ${a}_{2}$ . The base node and addition node may be null $\left( \varnothing \right)$ depending on the rewiring operation sub-step.
+
+Actions. We specify three distinct action spaces ${\mathcal{A}}_{\widehat{t}}\left( {S}_{t}\right)$ , where $\widehat{t} \mathrel{\text{ := }} t \bmod 3$ denotes the sub-step within a rewiring operation. Letting the degree of node $v$ be ${k}_{v}$ , they are defined as:
+
+$$
+{\mathcal{A}}_{0}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,\varnothing ,\varnothing }\right) }\right) = \left\{ {v \in \mathcal{V} \mid 0 < {k}_{v} < \left| \mathcal{V}\right| - 1}\right\} , \tag{1}
+$$
+
+$$
+{\mathcal{A}}_{1}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,{a}_{1},\varnothing }\right) }\right) = \left\{ {v \in \mathcal{V} \mid \left( {{a}_{1},v}\right) \notin {\mathcal{E}}_{t}}\right\} , \tag{2}
+$$
+
+$$
+{\mathcal{A}}_{2}\left( {{S}_{t} = \left( {\left( {\mathcal{V},{\mathcal{E}}_{t}}\right) ,{a}_{1},{a}_{2}}\right) }\right) = \left\{ {v \in \mathcal{V} \mid \left( {{a}_{1},v}\right) \in {\mathcal{E}}_{t} \smallsetminus \left( {{a}_{1},{a}_{2}}\right) }\right\} . \tag{3}
+$$
+
+Transitions. Transitions are deterministic; the model $P\left( {{S}_{t} = {s}^{\prime } \mid {S}_{t - 1} = s,{A}_{t - 1} = {a}_{t - 1}}\right)$ transitions to state ${S}^{\prime }$ with probability 1, where:
+
+$$
+{S}^{\prime } = \left\{ \begin{array}{lll} \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1}}\right) ,{a}_{1},\varnothing }\right) , & \text{ if }3 \mid t + 2 & \text{ mark base node } \\ \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1} \cup \left( {{a}_{1},{a}_{2}}\right) }\right) ,{a}_{1},{a}_{2}}\right) , & \text{ if }3 \mid t & \text{ mark addition node \& add edge } \\ \left( {\left( {\mathcal{V},{\mathcal{E}}_{t - 1} \smallsetminus \left( {{a}_{1},{a}_{3}}\right) }\right) ,\varnothing ,\varnothing }\right) , & \text{ if }3 \mid t + 1 & \text{ remove edge \& reset marked nodes } \end{array}\right. \tag{4}
+$$
+
+Rewards. The reward signal ${R}_{t}$ is proportional to the difference in the value of the objective function $\mathcal{F}$ before and after the graph reconfiguration. Furthermore, a key operational constraint in the domain we consider is that the network remains connected after the rewiring operations. Instead of running connectivity algorithms at every time-step to determine if a potential removed edge disconnects the graph, we encourage maintaining connectivity by giving a penalty $\bar{r} < 0$ at the end of the episode if the graph becomes disconnected. All rewards and penalties are provided at the final timestep $T$ , and no intermediate rewards are given. This enables the flexibility to discover long-term strategies that maximize the total cumulative reward of a sequence of reconfigurations rather than a single-step rewiring operation, even if the graph is disconnected during intermediate steps. Concretely, given an initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ , we define the reward function at timestep $t$ as:
+
+$$
+{R}_{t} = \left\{ \begin{array}{ll} {c}_{\mathcal{F}} \cdot \left( {\mathcal{F}\left( {G}_{t}\right) - \mathcal{F}\left( {G}_{0}\right) }\right) & \text{ if }t = T \land c\left( G\right) = 1, \\ \bar{r} & \text{ if }t = T \land c\left( G\right) \geq 2, \\ 0 & \text{ otherwise, } \end{array}\right. \tag{5}
+$$
+
+where $c\left( G\right)$ denotes the number of connected components of $G$ , and $\bar{r} < 0$ is the disconnection penalty. As the different objective functions may act on different scales, we use a reward scaling ${c}_{\mathcal{F}}$ , which we empirically establish for every objective function $\mathcal{F}$ .
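+
+A minimal sketch of the terminal reward in Equation 5; the function signature and the callable arguments are ours, and the default values shown are only illustrative.
+
+```python
+def reward(t: int, T: int, g_t, g_0, objective, n_components, c_f: float = 1.0, r_bar: float = -10.0) -> float:
+    """Reward of Eq. 5: scaled objective improvement if the final graph is connected,
+    a fixed penalty if it is disconnected, and zero at all intermediate timesteps."""
+    if t != T:
+        return 0.0
+    if n_components(g_t) == 1:
+        return c_f * (objective(g_t) - objective(g_0))
+    return r_bar
+```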
+
+§ 4 REINFORCEMENT LEARNING REPRESENTATION AND PARAMETRIZATION
+
+In this section, we extend the graph representation and value function approximation parametrizations proposed in past work $\left\lbrack {{14},{16}}\right\rbrack$ for the problem of graph rewiring.
+
+§ 4.1 GRAPH REPRESENTATION
+
+As the state and action spaces in network reconfiguration quickly become intractable for a sequence of rewiring operations, we require a graph representation that generalizes over similar states and actions. To this end, we use a GNN architecture that is based on a mean field inference method [46]. More specifically, we use a variant of the structure2vec [13] embedding method to represent every node ${v}_{i} \in \mathcal{V}$ in a graph $G = \left( {\mathcal{V},\mathcal{E}}\right)$ by an embedding vector ${\mu }_{i}$ . This embedding vector is constructed in an iterative process by linearly transforming feature vectors ${x}_{i}$ with a set of weights $\left\{ {{\theta }^{\left( 1\right) },{\theta }^{\left( 2\right) }}\right\}$ , aggregating the ${x}_{i}$ with the feature vectors of neighboring nodes ${v}_{j} \in {\mathcal{N}}_{i}$ , then applying the nonlinear Rectified Linear Unit (ReLU) activation function. Hence, at every step $l \in \left( {1,2,\ldots ,L}\right)$ , embedding vectors are updated according to:
+
+$$
+{\mu }_{i}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {{\theta }^{\left( 1\right) }{x}_{i} + {\theta }^{\left( 2\right) }\mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}}}{\mu }_{j}^{\left( l\right) }}\right) , \tag{6}
+$$
+
+where all embedding vectors are initialized as ${\mu }_{i}^{\left( 0\right) } = \mathbf{0}$ . After $L$ iterations of feature aggregation, we obtain the node embedding vectors ${\mu }_{i} \equiv {\mu }_{i}^{\left( L\right) }$ . By summing the embedding vectors of nodes in a graph $G$ , we obtain its permutation-invariant embedding: $\mu \left( G\right) = \mathop{\sum }\limits_{{i \in \mathcal{V}}}{\mu }_{i}$ . These invariant graph embeddings represent part of the state that the RL agent observes. Aside from permutation invariance, such embeddings allow learned models to be applied to graphs of different sizes, potentially larger than those seen during training.
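+
+The update of Equation 6 can be sketched in PyTorch as follows; this is a simplified dense-adjacency version with our own class name, not the exact structure2vec implementation used in the paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class S2VEmbedding(nn.Module):
+    """Sketch of the mean-field embedding iteration of Equation 6."""
+    def __init__(self, feat_dim: int, emb_dim: int, n_rounds: int):
+        super().__init__()
+        self.theta1 = nn.Linear(feat_dim, emb_dim, bias=False)
+        self.theta2 = nn.Linear(emb_dim, emb_dim, bias=False)
+        self.n_rounds = n_rounds
+
+    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
+        # x: node features (n, feat_dim); adj: dense adjacency matrix (n, n)
+        mu = torch.zeros(x.size(0), self.theta1.out_features)   # mu^(0) = 0
+        for _ in range(self.n_rounds):
+            # aggregate neighbor embeddings and apply the nonlinearity
+            mu = torch.relu(self.theta1(x) + self.theta2(adj @ mu))
+        return mu   # the graph embedding is mu.sum(dim=0)
+```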
+
+§ 4.2 VALUE FUNCTION APPROXIMATION
+
+Due to the intractable size of the state-action space in graph reconfiguration tasks, we make use of neural networks to learn approximations of the state-action values $Q\left( {s,a}\right)$ [47]. More specifically, as the action spaces defined in Equations (1)-(3) are discrete, we use the DQN algorithm [36] to update the state-action values as follows:
+
+$$
+Q\left( {s,a}\right) \leftarrow Q\left( {s,a}\right) + \alpha \left\lbrack {r + \gamma \mathop{\max }\limits_{{{a}^{\prime } \in \mathcal{A}}}Q\left( {{s}^{\prime },{a}^{\prime }}\right) - Q\left( {s,a}\right) }\right\rbrack . \tag{7}
+$$
+
+The DQN algorithm uses an experience replay buffer [33] from which it samples previously observed transitions $\left( {s,a,r,{s}^{\prime }}\right)$ , and periodically synchronizes a target network with the parameters of the Q-network. The target network is used in the computation of the learning target for estimating the Q-value of the best action in the next timestep, making the learning more stable as the parameters are kept fixed between updates. We use three separate MLP parametrizations of the Q-function, each corresponding to one of the three sub-steps of the rewiring procedure:
+
+$$
+{Q}_{1}\left( {{S}_{t} = \left( {{G}_{t},\varnothing ,\varnothing }\right) ,{A}_{t}}\right) = {\theta }^{\left( 3\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 4\right) }\left\lbrack {{\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8a}
+$$
+
+$$
+{Q}_{2}\left( {{S}_{t} = \left( {{G}_{t},{a}_{1},\varnothing }\right) ,{A}_{t}}\right) = {\theta }^{\left( 5\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 6\right) }\left\lbrack {{\mu }_{{a}_{1}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8b}
+$$
+
+$$
+{Q}_{3}\left( {{S}_{t} = \left( {{G}_{t},{a}_{1},{a}_{2}}\right) ,{A}_{t}}\right) = {\theta }^{\left( 7\right) }\operatorname{ReLU}\left( {{\theta }^{\left( 8\right) }\left\lbrack {{\mu }_{{a}_{1}} \oplus {\mu }_{{a}_{2}} \oplus {\mu }_{{A}_{t}} \oplus \mu \left( {G}_{t}\right) }\right\rbrack }\right) , \tag{8c}
+$$
+
+where $\oplus$ denotes concatenation. We highlight that, since the underlying structure2vec parameters shown in Equation (6) are shared, the combined set of the learnable parameters in our model is $\Theta = {\left\{ {\theta }^{\left( i\right) }\right\} }_{i = 1}^{8}$ . During validation and test time, we derive a greedy policy from the above learned Q-functions as $\arg \mathop{\max }\limits_{{a \in {\mathcal{A}}_{t}}}Q\left( {s,a}\right)$ . During training, however, we use a linearly decaying $\epsilon$ -greedy behavioral policy. We refer the reader to Appendix B for a detailed description of our implementation.
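+
+As an illustration of the learning target in Equation 7, a minimal sketch follows; tensor shapes and the handling of terminal episodes are our own assumptions.
+
+```python
+import torch
+
+def dqn_target(r: torch.Tensor, next_q_target: torch.Tensor, done: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
+    """Target r + gamma * max_a' Q_target(s', a'); bootstrapping is disabled at terminal states.
+    next_q_target: target-network Q-values for all valid next actions, shape (batch, n_actions)."""
+    return r + gamma * (1.0 - done) * next_q_target.max(dim=-1).values
+```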
+
+§ 5 CASE STUDY: NETWORK RECONFIGURATION FOR INTRUSION DEFENSE
+
+In this section, we detail the specifics of our intrusion defense application scenario. We first present the definition of the objective functions we leverage, which act as proxy metrics for the difficulty of navigating the graph. Secondly, we detail the procedure we use for simulating attacker behavior during an intrusion, which will allow us to compare the pre- and post-rewiring costs of traversal.
+
+§ 5.1 OBJECTIVE FUNCTIONS FOR NETWORK OBFUSCATION
+
+Our goal is to reconfigure the network so as to deter an attacker with partial knowledge of the network topology. Equivalently, we seek to modify the network so as to increase the surprise of the network and render this prior knowledge obsolete, while keeping the network operational. A natural formalization of surprise is the concept of entropy, which measures the quantity of information encoded in a graph or, equivalently, its complexity.
+
+As measures of entropy, we investigate two graph quantities that are invariant to permutations in representation: the Shannon entropy of the degree distribution [2] and the Maximum Entropy Random Walk (MERW) [7] calculated from the spectrum of the adjacency matrix. The former captures the idea that graphs with heterogeneous degrees are less predictable than regular graphs, while the latter is related to random walks on the network. Whereas generic random walks generally do not
+
+ < g r a p h i c s >
+
+Figure 2: Illustrative example of the evaluation process for a network reconfiguration. (i) The graph is rewired by our approach, removing and adding the highlighted edges respectively. (ii) The leftmost nodes in the graph become unreachable by the attacker from the entry point marked E, and hence a path to them must be rediscovered by exploring the graph. (iii) To reach the nodes, the attacker pays a cost of 1 and 2 respectively for "unlocking" the previously unseen links along the highlighted paths. The total cost induced by the rewiring strategy is ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{ tot }} = 3$ .
+
+maximize entropy [17], MERW uses a specific choice of transition probabilities that ensures every trajectory of fixed length is equiprobable, resulting in a maximal global entropy in the limit of infinite trajectory length. Although the local transition probabilities depend on the global structure of the graph, the generating process is local [7]. More formally, the two objective functions are formulated as follows: the Shannon entropy is defined as ${\mathcal{F}}_{\text{ Shannon }}\left( G\right) = - \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}q\left( k\right) {\log }_{2}q\left( k\right)$ , where $q\left( k\right)$ is the degree distribution; MERW is defined as ${\mathcal{F}}_{\text{ MERW }}\left( G\right) = \ln \lambda$ , where $\lambda$ is the largest eigenvalue of the adjacency matrix. In terms of time complexity, computing the Shannon entropy scales as $\mathcal{O}\left( n\right)$ . The calculation of MERW has instead an $\mathcal{O}\left( {n}^{3}\right)$ complexity due to the eigendecomposition required to compute the spectrum of the adjacency matrix.
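+
+Both objective functions are simple to compute in practice; a minimal numpy/networkx sketch follows, with function names of our choosing.
+
+```python
+import numpy as np
+import networkx as nx
+
+def shannon_entropy(g: nx.Graph) -> float:
+    """Shannon entropy of the degree distribution q(k)."""
+    degrees = np.array([d for _, d in g.degree()])
+    counts = np.bincount(degrees)
+    q = counts[counts > 0] / degrees.size
+    return float(-(q * np.log2(q)).sum())
+
+def merw_entropy(g: nx.Graph) -> float:
+    """MERW entropy rate: natural log of the largest adjacency eigenvalue."""
+    lam = np.linalg.eigvalsh(nx.to_numpy_array(g)).max()
+    return float(np.log(lam))
+```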
+
+It is worth noting that, in preliminary experiments, we have additionally investigated objective functions related to the Kolmogorov complexity. Also known as algorithmic complexity, this measure does not suffer from distributional dependencies [32]. As the Kolmogorov complexity is theoretically incomputable [10], we used graph compression algorithms such as bzip-2 [12] and Block Decomposition Methods [52] to approximate the Kolmogorov complexity. However, as these approximations depend on the representation of the graph such as the adjacency matrix, one has to consider many permutations of the graph representation. Compressing the representation for a sufficient number of permutations becomes infeasible even for small graphs. While the MERW objective function is also derived from the adjacency matrix through its largest eigenvalue, it does not suffer from this artifact as the spectrum of the adjacency matrix is invariant to permutations.
+
+§ 5.2 SIMULATING AND EVALUATING ATTACKER BEHAVIOR
+
+Given an initial connected and undirected graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ , we model the attacker as having entered the network through an arbitrary node $u \in \mathcal{V}$ , and having built a local map ${\mathcal{M}}_{0}^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{0}^{u}}\right)$ around this entry point, where ${\mathcal{V}}^{u} \subset \mathcal{V}$ is the set of nodes and ${\mathcal{E}}_{0}^{u} \subset {\mathcal{E}}_{0}$ is the set of edges in the map. The rewiring procedure transforms the initial graph ${G}_{0} = \left( {\mathcal{V},{\mathcal{E}}_{0}}\right)$ to the graph ${G}_{ * } = \left( {\mathcal{V},{\mathcal{E}}_{ * }}\right)$ , yielding the new local map ${\mathcal{M}}_{ * }^{u} = \left( {{\mathcal{V}}^{u},{\mathcal{E}}_{ * }^{u}}\right)$ that is unknown to the attacker. Our goal is to evaluate the effectiveness of the reconfiguration by measuring how "stale" the prior information of the attacker has become in comparison to the new map: if the attacker struggles to find its targets in the updated topology, the rewiring has succeeded.
+
+Let $\overline{{\mathcal{V}}^{u}}$ denote the set of nodes in the new local map ${\mathcal{M}}_{ * }^{u}$ that are unreachable through at least one trajectory composed of original edges ${E}_{0}^{u}$ in the old map. For each newly unreachable node ${v}_{i}$ , we measure the cost ${\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ of finding it with a forward random walk, in which the random walker only returns to the previous node if the current node has no other outgoing links. Every time the random walker encounters a link that is (i) not included in ${E}_{0}^{u}$ and (ii) not yet encountered during the random walk, the cost increases by one. This simulates the cost of having to explore the new graph topology due to the reconfigurations that were introduced. Finally, we let ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{ tot }} = \mathop{\sum }\limits_{{{v}_{i} \in \overline{{\mathcal{V}}^{u}}}}{\mathcal{C}}_{\mathrm{{RW}}}\left( {v}_{i}\right)$ denote the sum of the costs for all newly unreachable nodes, which is our metric for the effectiveness of a rewiring strategy. An illustrative example of a forward random walk and cost evaluation is shown in Figure 2, and a formal description is presented in Algorithm 1 in Appendix B to aid reproducibility.
+
+§ 6 EXPERIMENTS
+
+§ 6.1 EXPERIMENTAL SETUP
+
+Training and evaluation procedure. Our agent is trained on synthetic graphs of size $n = {30}$ that are generated using the graph models listed below. The given budget is ${15}\%$ of the total edges $m$ that are present in the initial graph. When performing the attacker simulations, the initial local map contains the subgraph induced by all nodes that are 2 hops away from the entry point, which is sampled without replacement from the node set. Training occurs separately for each graph model and objective $\mathcal{F}$ on a set of graphs ${\mathcal{G}}_{\text{ train }}$ of size $\left| {\mathcal{G}}_{\text{ train }}\right| = 6 \cdot {10}^{2}$ . Every 10 training steps, we measure the performance on a disjoint validation set ${\mathcal{G}}_{\text{ validation }}$ of size $\left| {\mathcal{G}}_{\text{ validation }}\right| = 2 \cdot {10}^{2}$ . We perform reconfiguration operations on a test set ${\mathcal{G}}_{\text{ test }}$ of size $\left| {\mathcal{G}}_{\text{ test }}\right| = {10}^{2}$ . To account for stochasticity, we train our models with 10 different seeds and present mean and confidence intervals accordingly. Further details about the experimental procedure (e.g., hyperparameter optimization) can be found in Appendix B.
+
+Synthetic graphs. We evaluate the approaches on graphs generated by the following models:
+
+Barabási-Albert (BA): A preferential attachment model where nodes joining the network are linked to ${M}_{ba}$ existing nodes [5]. We consider values of ${M}_{ba} = 2$ and ${M}_{ba} = 1$ (abbreviated BA-2 and BA-1).
+
+Watts-Strogatz (WS): A model that starts with a ring lattice of nodes with degree $k$ . Each edge is rewired to a random node with probability $p$ , yielding characteristically small shortest path lengths [48]. We use $k = 4$ and $p = {0.1}$ .
+
+Erdős-Rényi (ER): A random graph model in which the existence of each edge is governed by a uniform probability $p$ [20]. We use $p = {0.15}$ .
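+
+For reference, these graph families can be generated with networkx using the parameters stated above (here at the training size $n = 30$); a minimal sketch:
+
+```python
+import networkx as nx
+
+n = 30
+ba2 = nx.barabasi_albert_graph(n, 2)          # BA-2
+ba1 = nx.barabasi_albert_graph(n, 1)          # BA-1
+ws = nx.watts_strogatz_graph(n, k=4, p=0.1)   # WS
+er = nx.erdos_renyi_graph(n, p=0.15)          # ER
+```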
+
+Real-world graphs. We also consider the real-world Unified Host and Network (UHN) dataset [45], which is a subset of network and host events from an enterprise network. We transform this dataset into a graph by identifying the bidirectional links between hosts appearing in these records, obtaining a graph with $n = {461}$ nodes and $m = {790}$ edges. Further information about this processing can be found in Appendix B.
+
+Baselines. We compare the approach against two baselines: Random, which acts in the same MDP as the agent but chooses actions uniformly, and Greedy, which is a shallow one-step search over all rewirings from a given configuration. The latter picks the rewiring that gives the largest improvement in $\mathcal{F}$ . As this search scales very poorly with graph size and budget, we only evaluate it on graphs of size 30 that are used to train the DQN as a comparison point for validating the learned strategies.
+
+§ 6.2 ENTROPY MAXIMIZATION RESULTS
+
+We first consider the results for the maximization of the entropy-based objectives. The gains in entropy obtained by the methods on the held-out test set are shown in Table 1, while training curves are presented in Appendix A. The results demonstrate that the approach discovers better reconfiguration strategies than random rewiring in all cases, and even the greedy search in one setting. Furthermore, we evaluate the out-of-distribution generalization properties of the learned models along two dimensions: varying the graph size $n \in \left\lbrack {{10},{300}}\right\rbrack$ and the budget $b$ as a percentage of existing edges $\in \{ 5,{10},{15},{20},{25}\}$ . The results for this experiment (from which Greedy is excluded due to poor scalability) are shown in Figure 3. We find that, with the exception of the (BA, ${\mathcal{F}}_{\text{ Shannon }}$ ) combination, the learned models generalize well to graphs substantially larger in size as well as varying rewiring budgets.
+
+Table 1: Entropy gains on test graphs with $n = {30}$ .
+
+| $\mathcal{F}$ | ${\mathcal{G}}_{\text{ test }}$ | DQN | Greedy | Random |
+| --- | --- | --- | --- | --- |
+| $\Delta {\mathcal{F}}_{MERW}$ | BA-2 | ${0.197}_{\pm {0.002}}$ | ${0.225}_{\pm {0.003}}$ | $- {0.019}_{\pm {0.003}}$ |
+| | BA-1 | ${0.167}_{\pm {0.003}}$ | ${0.135}_{\pm {0.003}}$ | $- {0.045}_{\pm {0.004}}$ |
+| | ER | ${0.182}_{\pm {0.004}}$ | ${0.209}_{\pm {0.012}}$ | $- {0.005}_{\pm {0.003}}$ |
+| | WS | ${0.233}_{\pm {0.003}}$ | ${0.298}_{\pm {0.002}}$ | ${0.035}_{\pm {0.002}}$ |
+| $\Delta {\mathcal{F}}_{\text{ Shannon }}$ | BA-2 | ${0.541}_{\pm {0.009}}$ | ${0.724}_{\pm {0.015}}$ | ${0.252}_{\pm {0.024}}$ |
+| | BA-1 | ${0.167}_{\pm {0.008}}$ | ${0.242}_{\pm {0.012}}$ | ${0.084}_{\pm {0.015}}$ |
+| | ER | ${0.101}_{\pm {0.012}}$ | ${0.400}_{\pm {0.023}}$ | $- {0.022}_{\pm {0.018}}$ |
+| | WS | ${0.926}_{\pm {0.016}}$ | ${1.116}_{\pm {0.022}}$ | ${0.567}_{\pm {0.036}}$ |
+
+§ 6.3 EVALUATING THE RECONFIGURATION IMPACT
+
+We next evaluate the performance of the learned models for entropy maximization on the downstream task of disrupting the navigation of the graph by the attacker.
+
+
+Figure 3: Evaluation of the out-of-distribution generalization performance (higher is better) of the learned entropy maximization models as a function of graph size (top) and budget size (bottom). All models are trained on graphs with $n = {30}$ . In the bottom figure, the solid and dotted lines represent graphs with $n = {30}$ and $n = {100}$ respectively. Note the different $\mathrm{x}$ -axes used for ER graphs due to their high edge density.
+
+Synthetic graphs. The results for synthetic graphs are shown in Figure 4 in an out-of-distribution setting as a function of graph size, a regime in which the Greedy baseline is too expensive to scale. We find that the best proxy metric varies with the class of synthetic graphs - Shannon entropy performs better for BA graphs, MERW performs better for ER, and performance is similar for WS. Strong out-of-distribution generalization performance is observed for 3 out of 4 synthetic graph models. The results also show that, in the case of WS graphs, even though the performance in terms of the metric itself is high (as shown in Figure 3), the objective is not a suitable proxy for the downstream task in an out-of-distribution setting since the random walk cost decays rapidly. This might be explained by the fact that the graph topology is derived through a rewiring process of cliques of nodes of a given size.
+
+Real-world graphs. We also evaluate the models trained on synthetic graphs on the real-world graph constructed from the UHN dataset. Results are shown in Table 2. All but one of the trained models maintain a statistically significant random walk cost difference over the Random baseline. The best-performing models were trained on the (WS, ${\mathcal{F}}_{MERW}$ ) and (BA-1, ${\mathcal{F}}_{\text{ Shannon }}$ ) combinations, obtaining total gains in random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{ tot }}$ of ${136}\%$ and ${125}\%$ respectively. The Greedy baseline is not applicable for a graph of this size.
+
+
+Figure 4: Evaluation of the learned rewiring strategies for entropy maximization on the downstream task of disrupting attacker navigation. All models are trained on graphs with $n = {30}$ . The random walk cost ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{ tot }}$ (higher is better) is normalized by $n$ for meaningful comparisons. Note the different $\mathrm{x}$ -axis used for ER graphs due to their high edge density.
+
+
+§ 7 CONCLUSION
+
+Summary. In this work, we have addressed the problem of graph reconfiguration for the optimization of a given property of a networked system, a computationally challenging problem given the generally large decision space. We have then formulated it as a Markov Decision Process that treats rewirings as sequential, and proposed an approach based on deep reinforcement learning and graph neural networks for efficient learning of network reconfigurations. As a case study, we have applied the proposed method to a cybersecurity scenario in which the task is to disrupt the navigation of potential intruders in a computer network. We have assumed that the goal of the intruder is to navigate the network given some knowledge about its topology. In order to disrupt the attack, we have designed a mechanism for increasing the level of surprise of the network through entropy maximization by means of network rewiring. More specifically, in terms of the objective of the optimization process, we have considered two entropy metrics that quantify the predictability of the network topology, and demonstrated that our method generalizes well on unseen graphs with varying rewiring budgets and different numbers of nodes. We have also validated the effectiveness of the learned models for increasing path lengths towards targeted nodes. The proposed approach outperforms the considered baselines on both synthetic and real-world graphs.
+
+Table 2: Total random walk cost of models applied to the real-world UHN graph $\left( {n = {461},m = {790}}\right)$ .
+
+| Method | $\mathcal{F}$ | $\mathcal{G}$ | ${\mathcal{C}}_{\mathrm{{RW}}}^{\text{ tot }}/n$ ( $\uparrow$ ) |
+| --- | --- | --- | --- |
+| DQN | ${\mathcal{F}}_{MERW}$ | BA-2 | ${3.087}_{\pm {0.225}}$ |
+| DQN | ${\mathcal{F}}_{MERW}$ | BA-1 | ${1.294}_{\pm {0.185}}$ |
+| DQN | ${\mathcal{F}}_{MERW}$ | ER | ${2.887}_{\pm {0.335}}$ |
+| DQN | ${\mathcal{F}}_{MERW}$ | WS | ${\mathbf{{4.888}}}_{\pm {0.568}}$ |
+| DQN | ${\mathcal{F}}_{\text{ Shannon }}$ | BA-2 | ${3.774}_{\pm {0.445}}$ |
+| DQN | ${\mathcal{F}}_{\text{ Shannon }}$ | BA-1 | ${\mathbf{{4.660}}}_{\pm {0.461}}$ |
+| DQN | ${\mathcal{F}}_{\text{ Shannon }}$ | ER | ${3.891}_{\pm {0.559}}$ |
+| DQN | ${\mathcal{F}}_{\text{ Shannon }}$ | WS | ${3.555}_{\pm {0.318}}$ |
+| Random | - | - | ${2.071}_{\pm {0.289}}$ |
+| Greedy | - | - | ∞ |
+
+Limitations and future work. An advantage of the proposed approach is that it does not require any knowledge of the exact position of the attacker as the traversal of the graph takes place. One may also consider a real-time scenario in which the network reconfiguration aims to "close off" the attacker given knowledge of their location, which may lead to a more efficient defense if such information is available. We have also adopted a simple model of attacker navigation (forward random walks). Different, more complex navigation strategies (e.g., targeting vulnerable machines) can also be considered. This knowledge might be integrated as part of the training process, for example by increasing the probability of rewiring of edges around these nodes through a corresponding reward structure (i.e., higher reward for protecting more sensitive nodes). More generally, we have identified an important application to cybersecurity, which might have a positive impact in safeguarding networks from malicious intrusions. With respect to potential dual-use, we note that the proposed defense mechanism cannot be exploited by attackers directly, since it requires knowledge of at least part of the underlying network topology.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c149750d6bfff58457d8886b912217fd7d29c25
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,357 @@
+# Learning Graph Search Heuristics
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Searching for a path between two nodes in a graph is one of the most well-studied and fundamental problems in computer science. In numerous domains such as robotics, AI, or biology, practitioners develop search heuristics to accelerate their pathfinding algorithms. However, it is a laborious and complex process to hand-design heuristics based on the problem and the structure of a given use case. Here we present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data by leveraging recent advances in imitation learning and graph representation learning. At training time, we aggregate datasets of search trajectories and ground-truth shortest path distances, which we use to train a specialized graph neural network-based heuristic function using backpropagation through steps of the pathfinding process. Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph sizes, and can be easily incorporated in an algorithm such as ${\mathrm{A}}^{ * }$ at test time. Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by ${58.5}\%$ on average, can be directly applied in diverse graphs ranging from biological networks to road networks, and allows for fast planning in time-critical robotics domains.
+
+## 1 Introduction
+
+Search heuristics are essential in several domains, including robotics, AI, biology, and chemistry [1-6]. For example, in robotics, complex robot geometries often yield slow collision checks, and search algorithms are constrained by the robot's onboard computation resources, requiring well-performing search heuristics that visit as few nodes as possible [1, 4]. In AI, domain-specific search heuristics are useful for improving the performance of inference engines operating on knowledge bases [3, 5]. Search heuristics have also previously been developed to reduce search effort in protein-protein interaction networks [6] and in planning chemical reactions that can synthesize target chemical products [2]. This broad set of applications underlines the importance of good search heuristics that are applicable to a wide range of problems.
+
+
+
+Figure 1: The goal is to navigate (find a path) from the start to the goal node. While BFS visits many nodes to find a start-to-goal path (left), one can use a heuristic based on the features of the nodes (e.g., Euclidean distance) on the graph to reduce the search effort (middle). We propose PHIL to learn a tailored search heuristic for a given graph, capable of reducing the number of visited nodes even further by exploiting the inductive biases of the graph (right).
+
+The search task can be formulated as a pathfinding problem on a graph, where given a graph, the task is to navigate and find a short feasible path from a start node to a goal node, while in the process visiting as few nodes as possible (Figure 1). The most straightforward approach would be to launch a search algorithm such as breadth-first search (BFS) and iteratively expand the graph from the start node until it reaches the goal node. Since BFS does not harness any prior knowledge about the graph, it usually visits many nodes before reaching the goal, which is expensive in cases such as robotics where visiting nodes is costly. To visit fewer nodes during the search, one may use domain-specific information about the graph via a heuristic function [7], which allows one to define a distance metric on graph nodes to prune directions that seem less promising to explore. Unfortunately, coming up with good search heuristics requires significant domain expertise and manual effort.
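+
+To make the role of the heuristic concrete, below is a minimal best-first (A*-style) search sketch with a pluggable heuristic; the function names are ours, unit edge costs are assumed, and a learned heuristic such as the one PHIL produces can be passed in as `heuristic`.
+
+```python
+import heapq
+
+def heuristic_search(neighbors, start, goal, heuristic):
+    """Expand nodes in order of g(v) + heuristic(v); nodes are assumed hashable and comparable."""
+    frontier = [(heuristic(start), 0, start)]
+    g_cost, parent = {start: 0}, {start: None}
+    while frontier:
+        _, g, v = heapq.heappop(frontier)
+        if v == goal:                      # reconstruct the start-to-goal path
+            path = []
+            while v is not None:
+                path.append(v)
+                v = parent[v]
+            return path[::-1]
+        for u in neighbors(v):
+            if u not in g_cost or g + 1 < g_cost[u]:
+                g_cost[u], parent[u] = g + 1, v
+                heapq.heappush(frontier, (g + 1 + heuristic(u), g + 1, u))
+    return None
+```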
+
+While there has been significant progress in designing search heuristics, it remains a challenging problem. Classical approaches $\left\lbrack {8,9}\right\rbrack$ tend to hand-design search heuristics, which requires domain knowledge and a lot of trial and error. To alleviate this problem, there has been significant development in general-purpose search heuristics based on trading-off greedy expansions and novelty-based exploration [10-13] or search problem simplifications [14-16]. These approaches alleviate some of the common pitfalls of goal-directed heuristics, but we demonstrate that if possible, it is useful to learn domain-specific heuristics that can better exploit problem structure.
+
+On the other hand, learning-based methods face a set of different challenges. Firstly, the data distribution is not i.i.d., as newly encountered graph nodes depend on past heuristic values, which means that supervised learning-based methods are not directly applicable. Secondly, heuristics should run fast, with ideally constant time complexity. Otherwise, the overall asymptotic time complexity of the search procedure could be increased. Finally, as the environment (search graph) sizes increase, reinforcement learning-based heuristic learning approaches tend to perform poorly [1]. State-of-the-art imitation learning-based methods can learn useful search heuristics [1]; however, these methods still rely on feature-engineering for a specific domain and do not generally guarantee a constant time complexity with respect to graph sizes.
+
+
+
+Figure 2: Main components of PHIL: On the left, using a greedy mixture policy induced by the current version of our parameterized heuristic ${h}_{\theta }$ and an oracle heuristic ${h}^{ * }$ (i.e., a heuristic that correctly determines distances between nodes), we roll-out a search trajectory from the start node to the goal node. Each trajectory step contains a set of newly added fringe nodes with bounded random subsets of their 1-hop neighborhoods and their oracle $\left( {h}^{ * }\right)$ distances to the goal node. Trajectories are aggregated throughout the training procedure. On the right, we use truncated backpropagation through time on each collected trajectory to train ${h}_{\theta }$ , where $\widehat{h}$ is the predicted distance between ${x}_{2}$ and ${x}_{g}$ , and ${z}_{2}$ is the updated state of the memory. Here, the memory captures the embedding of the graph visited so far.
+
+In this paper, we propose Path Heuristic with Imitation Learning (PHIL), a framework that extends the recent imitation learning-based heuristic search paradigm with a learnable explored graph memory. This means that PHIL learns a representation that allows it to capture the structure of the graph explored so far, so that it can then better select which node to explore next (Figure 2). We train our approach to predict the node-to-goal distances ( ${h}^{ * }$ in Figure 2) of graph nodes during search. To train our memory module, which captures the explored graph, we use truncated backpropagation through time (TBTT) [17], where we utilize ground-truth node-to-goal distances as a supervision signal at each search step. Our TBTT procedure is embedded within an adaptation of the AggreVaTe imitation learning algorithm [18]. PHIL also includes a specialized graph neural network architecture, which allows us to apply PHIL to diverse graphs from different domains.
+
+We evaluate PHIL on standard benchmark heuristic learning datasets (Section 5.1), diverse graph-based datasets from different domains (Section 5.2), and practical UAV flight use cases (Section 5.3). Experiments demonstrate that PHIL outperforms state-of-the-art heuristic learning methods by up to $4 \times$ . Further, PHIL performs within 4.9% of an oracle in indoor drone planning scenarios, which is up to a 21.5% reduction compared with commonly used approaches. In practice, our contributions enable practitioners to quickly extract useful search heuristics from their graph datasets without any hand-engineering.
+
+## 2 Preliminaries
+
+Graph search. Suppose that we are given an unweighted connected graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is a set of nodes, and $\mathcal{E}$ a corresponding set of edges. Further suppose that each node $i \in \mathcal{V}$ has corresponding features ${x}_{i} \in {\mathbb{R}}^{{D}_{v}}$ , and each edge $\left( {i, j}\right) \in \mathcal{E}$ has features ${e}_{ij} \in {\mathbb{R}}^{{D}_{e}}$ . Assume that we are also given a start node ${v}_{s} \in \mathcal{V}$ and a goal node ${v}_{g} \in \mathcal{V}$ . At any stage of our search algorithm, we can partition the nodes of our graph into three sets as $\mathcal{V} = \operatorname{CLOSE} \cup \mathrm{{OPEN}} \cup \mathrm{{REST}}$ , where CLOSE are the nodes already explored, OPEN are candidate nodes for exploration (i.e., all nodes connected to any node in CLOSE, but not yet in CLOSE), and REST is the rest of the graph. Each expansion moves a node from OPEN to CLOSE, and adds the neighbors of the given node from REST to OPEN. We call ${\mathcal{V}}_{\text{new }}$ the set of fringe nodes newly added at each search step. At the start of the search procedure, CLOSE $= \left\{ {v}_{s}\right\}$ and we expand nodes until ${v}_{g}$ is encountered (i.e., until ${v}_{g} \in$ CLOSE).
+
+Greedy best-first search. We can perform greedy best-first search using a greedy fringe expansion policy, such that we always expand the node $v \in$ OPEN that minimizes $h\left( {v,{v}_{g}}\right)$ . Here, $h : \mathcal{V} \times \mathcal{V} \rightarrow$ $\mathbb{R}$ is a tailored heuristic function for a given use case. In our work, we are interested in learning a function $h$ that predicts shortest path lengths, this way minimizing $\left| \text{CLOSE}\right|$ in a greedy best-first search regime.
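+
+The following sketch (illustrative only, not our implementation) makes the greedy best-first procedure concrete: OPEN is a priority queue keyed by $h\left( {v,{v}_{g}}\right)$ , and each iteration moves one node to CLOSE. Plugging in, e.g., the Euclidean distance between node features for $h$ corresponds to the middle setting of Figure 1, while a learned ${h}_{\theta }$ corresponds to PHIL at test time.
+
+```python
+import heapq
+
+def greedy_best_first(neighbors, v_s, v_g, h):
+    """Greedy best-first search: repeatedly expand the OPEN node with minimal
+    h(v, v_g). Returns CLOSE, the set of expanded nodes (the search effort)."""
+    close = set()
+    open_heap = [(h(v_s, v_g), v_s)]             # OPEN, keyed by heuristic value
+    in_open = {v_s}
+    while open_heap:
+        _, v = heapq.heappop(open_heap)
+        if v in close:
+            continue
+        close.add(v)                             # move v from OPEN to CLOSE
+        if v == v_g:
+            return close
+        for u in neighbors[v]:                   # newly reachable fringe nodes
+            if u not in close and u not in in_open:
+                in_open.add(u)
+                heapq.heappush(open_heap, (h(u, v_g), u))
+    return close
+
+# Example heuristic: Euclidean distance on 2D grid coordinates.
+h_euc = lambda v, g: ((v[0] - g[0]) ** 2 + (v[1] - g[1]) ** 2) ** 0.5
+```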
+
+Imitation of perfect heuristics. Partially observable Markov decision processes (POMDPs) are a suitable framework to describe the problem of learning search heuristics [1]. We take $s =$ (CLOSE, OPEN, REST) as our state; an action $a \in \mathcal{A}$ corresponds to moving a node from OPEN to CLOSE; and the observations $o \in \mathcal{O}$ are the features of the nodes newly included in OPEN. Note that one could consider an MDP framework to learn heuristics, but the time complexity of operating on the whole state is in most cases prohibitive. We also define a history $\psi \in \Psi$ as a sequence of observations $\psi = {o}_{1},{o}_{2},{o}_{3},\ldots$ Our work leverages the observation that a heuristic function that correctly determines the length of the shortest path between fringe nodes and the goal node will also yield a minimal |CLOSE| under greedy best-first search. For training, we adopt a perfect heuristic ${h}^{ * }$ , similar to [1], which has full information about $s$ during search. Such an oracle can provide ground-truth distances ${h}^{ * }\left( {s, v,{v}_{g}}\right)$ , where $v \in$ OPEN. To conclude, we define a greedy best-first search policy ${\pi }_{\theta }$ that uses a parameterized heuristic ${h}_{\theta }$ to expand nodes from OPEN with minimal heuristic values. One could also directly use a POMDP solver for the above-described problem, but this approach is usually infeasible due to the dimensionality of the search state [19].
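+
+On an unweighted graph, the ground-truth distances provided by ${h}^{ * }$ can be obtained, for instance, with a single BFS outwards from the goal node. The helper below is a minimal sketch under that assumption, intended only to make the supervision signal concrete.
+
+```python
+from collections import deque
+
+def oracle_distances(neighbors, v_g):
+    """Exact node-to-goal distances h*(., v_g) on an unweighted graph,
+    computed by BFS from the goal node."""
+    dist = {v_g: 0}
+    queue = deque([v_g])
+    while queue:
+        v = queue.popleft()
+        for u in neighbors[v]:
+            if u not in dist:
+                dist[u] = dist[v] + 1
+                queue.append(u)
+    return dist   # dist[v] = h*(v, v_g) for every node reachable from v_g
+```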
+
+## 3 Related Work
+
+General purpose heuristic design. There has been significant research in designing general-purpose heuristics for speeding up satisficing planning. The first set of approaches are based on simplifying the search problem for example using landmark heuristics [14, 16]. The next set of approaches aim to include novelty-based exploration in greedy best-first search [10-13]. The latter set of approaches showed state-of-the-art performance (best-first width search [12, 13], BFWS) in numerous settings. We show that in domains where data is available, it can be more effective to incorporate a learned heuristic into a greedy best-first search procedure.
+
+Learning heuristic search. There have been numerous previous works that attempt to learn search heuristics: Arfaee et al. [20] propose to improve heuristics iteratively, Virseda et al. [21] learn to combine heuristics to estimate graph node distances, Wilt et al. [22] and Garrett et al. [23] propose to learn node rankings, Thayer et al. [24] suggest to infer heuristics during a search, and Kim et al. [25] train a neural network to predict graph node distances. These methods generally do not consider the non-i.i.d. nature of heuristic search. Further, Bhardwaj et al. [1] propose SAIL, where heuristic learning is framed as an imitation learning problem with cost-to-go oracles. The SAIL heuristic uses hand-designed features tailored for obstacle avoidance, with a linear time-complexity in the number of explored grid nodes found to be colliding with an obstacle. Feature-engineering becomes more difficult as we attempt to learn heuristics on diverse graphs such as ones seen in Section 5.2, where we may need expert knowledge. Further, heuristics that do not have a constant time complexity in the size of the graph $\left\lbrack {1,{26} - {29}}\right\rbrack$ generally scale poorly with graph size and hence have constrained use cases. Recent approaches to learning heuristics include Retro* [2] by Chen et al., where a heuristic is learned in the context of AND-OR search trees for chemical retrosynthetic planning. Our work focuses on a more general graph setting.
+
+There has been significant progress on learning heuristics for NP-hard combinatorial optimization problems [30-32]. Still, these heuristic learning methods, due to their time complexities, are impractical for the application in polynomial-time search problems, on which this work focuses.
+
+Learning general purpose search. Learning general search policies is a very well-studied research area with a rich set of developments and applications. These include Monte Carlo Tree Search methods [33, 34], implicit planning methods [35-37], and imagination-based planning approaches $\left\lbrack {{38},{39}}\right\rbrack$ . Learning search heuristics can be seen as a special case of general purpose search, where the search problem is treated as a partially observable Markov decision process with restricted action evaluation (see Section 4), and with models running in $\mathcal{O}\left( 1\right)$ to remain competitive time-complexity-wise on problems where best-first search performs well. General purpose search methods do not take into account the above-mentioned constraints, which motivates the development of tailored approaches for learning heuristics $\left\lbrack {1,2}\right\rbrack$ .
+
+Imitation learning. Our approach builds on prior work in imitation learning (IL) with cost-to-go oracles. Cost-to-go oracles have been incorporated in the context of IL in methods such as SEARN [40], AggreVaTe [18], LOLS [41], AggrevaTeD [42], DART [43], and THOR [44]. SAIL [1] presents an AggreVaTe-based algorithm for learning heuristic search. We extend SAIL by incorporating a recurrent $Q$ -like function, in which sense our algorithm more closely resembles AggreVaTeD by Sun et al. [42]. While a recurrent policy can be easily incorporated in AggreVaTeD, we cannot use a policy to evaluate actions. This is due to the fact that we would either have to evaluate all actions in a state, which is computationally infeasible, or we would have to give up on taking actions that are not in the most recent version of the search fringe, which would degrade the performance (see Section 4).
+
+## 4 Path Heuristic with Imitation Learning
+
+Training objective. With the aim of minimizing |CLOSE| after search, our goal is to train a parameterized heuristic function ${h}_{\theta } : \Psi \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ to predict ground-truth node distances ${h}^{ * }$ and use ${h}_{\theta }$ within a greedy best-first policy ${\pi }_{\theta }$ at test time. More specifically, we assume access to a distribution over graphs ${P}_{\mathcal{G}}$ , a start-goal node distribution ${P}_{{v}_{sg}}\left( {\cdot \mid \mathcal{G}}\right)$ , and a time horizon $T$ . Moreover, we assume a joint state-history distribution $s,\psi \sim {P}_{s}\left( {\cdot \mid \mathcal{G}, t,{\pi }_{\theta },{v}_{s},{v}_{g}}\right)$ , where ${P}_{s}$ represents the probability of our search being in state $s$ at time $0 \leq t \leq T$ on graph $\mathcal{G}$ with pathfinding problem $\left( {{v}_{s},{v}_{g}}\right)$ , under a greedy best-first search policy ${\pi }_{\theta }$ that uses the heuristic ${h}_{\theta }$ . Hence, our goal can be summarized as minimizing the following objective:
+
+$$
+\mathcal{L}\left( \theta \right) = \underset{\begin{matrix} {\mathcal{G} \sim {P}_{\mathcal{G}},} \\ {\left( {{v}_{s},{v}_{g}}\right) \sim {P}_{{v}_{sg}},} \\ {t \sim \mathcal{U}\left( {0,\ldots , T}\right) ,} \\ {s,\psi \sim {P}_{s}} \end{matrix}}{\mathbb{E}}\left\lbrack {\frac{1}{\left| \mathrm{{OPEN}}\right| }\mathop{\sum }\limits_{{v \in \mathrm{{OPEN}}}}{\left( {h}^{ * }\left( s, v,{v}_{g}\right) - {h}_{\theta }\left( \psi , v,{v}_{g}\right) \right) }^{2}}\right\rbrack \tag{1}
+$$
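+
+For a single sampled state, the inner term of Eq. (1) is simply a mean squared error between oracle and predicted distances over the current OPEN set; a minimal PyTorch sketch with illustrative tensor values:
+
+```python
+import torch
+
+def open_set_loss(h_star_open, h_pred_open):
+    """Per-state term of Eq. (1): MSE between oracle distances h*(s, v, v_g)
+    and predictions h_theta(psi, v, v_g), averaged over all v in OPEN."""
+    return torch.mean((h_star_open - h_pred_open) ** 2)
+
+h_star = torch.tensor([3.0, 5.0, 2.0, 4.0])   # oracle node-to-goal distances for |OPEN| = 4
+h_pred = torch.tensor([2.5, 6.0, 2.0, 3.0])   # current heuristic predictions
+loss = open_set_loss(h_star, h_pred)          # the outer expectation is estimated by sampling
+```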
+
+Before we describe the algorithm that can be used to minimize $\mathcal{L}$ , we rewrite ${h}_{\theta }$ to include a memory digest component $\left( {z}_{t}\right)$ , which represents an embedding of $\psi$ at time step $t$ . Hence, ${h}_{\theta }$ becomes ${h}_{\theta } : {\mathbb{R}}^{d} \times \mathcal{O} \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ , where $d$ is the dimensionality of our memory’s embedding space. As opposed to previous methods [1], ${z}_{t}$ allows us to automatically extract relevant features for heuristic computations while reducing the computational complexity of the heuristic function.
+
+Algorithm 1: PHIL— Sequential Heuristic Training
+
+---
+
+Obtain hyperparameters $T,{\beta }_{0}, N, m,{t}_{\tau }$ ;
+
+Initialize $\mathcal{D} \leftarrow \varnothing ,{h}_{{\theta }_{1}}$ ;
+
+for $i = 1,\ldots , N$ do
+
+ Sample $\mathcal{G} \sim {P}_{\mathcal{G}}$ ;
+
+ Sample ${v}_{s},{v}_{g} \sim {P}_{{v}_{sg}}$ ;
+
+ Set $\beta \leftarrow {\beta }_{0}^{i}$ ;
+
+ Set mixture policy ${\pi }_{\text{mix }} \leftarrow \left( {1 - \beta }\right) * {\pi }_{{\theta }_{i}} + \beta * {\pi }^{ * }$ ;
+
+ Collect $m$ trajectories ${\tau }_{ij}$ as follows;
+
+ for $j = 1,\ldots , m$ do
+
+ Sample $t \sim \mathcal{U}\left( {0,\ldots , T - {t}_{\tau }}\right)$ ;
+
+ Roll-in $t$ time steps of ${\pi }_{{\theta }_{i}}$ to obtain ${z}_{t}$ and new state ${s}_{t} = \left( {{\mathrm{{CLOSE}}}^{0},{\mathrm{{OPEN}}}^{0},{\operatorname{REST}}^{0}}\right)$ ;
+
+ Roll-out trajectory ${\tau }_{ij}$ as follows;
+
+ for $k = 1,\ldots ,{t}_{\tau }$ do
+
+ Update ${s}_{t + k - 1}$ using ${\pi }_{\operatorname{mix}}$ to get new state ${s}_{t + k}$ and new fringe state ${\mathrm{{OPEN}}}^{k}$ ;
+
+ Obtain new fringe nodes ${\mathcal{V}}_{\text{new }} = {\mathrm{{OPEN}}}^{k} \smallsetminus {\mathrm{{OPEN}}}^{k - 1}$ ;
+
+ Update trajectory ${\tau }_{ij} \leftarrow {\tau }_{ij} \cup \left\{ \left( {{\mathcal{V}}_{\text{new }},{h}^{ * }\left( {{s}_{t + k},{\mathcal{V}}_{\text{new }},{v}_{g}}\right) }\right) \right\}$ ;
+
+ Update dataset $\mathcal{D} \leftarrow \mathcal{D} \cup \left\{ \left( {{\tau }_{ij},{z}_{t}}\right) \right\}$ or $\mathcal{D} \cup \left\{ \left( {{\tau }_{ij},0}\right) \right\}$ ;
+
+ Train ${h}_{{\theta }_{i}}$ using TBTT on each $\tau \in \mathcal{D}$ to get ${h}_{{\theta }_{i + 1}}$ ;
+
+return best performing ${h}_{{\theta }_{i}}$ on validation;
+
+---
+
+Further, as shown in [1], if we used ${h}_{\theta }$ to evaluate all actions in a state (i.e., recalculate the heuristic values of all nodes in OPEN), we would need a squared reduction in the number of expanded nodes compared with BFS for PHIL to bring performance benefits over BFS, which may not be achievable on all datasets. Hence, we constrain the heuristic to evaluate only the new OPEN nodes obtained after moving a node to CLOSE, calling this set of new fringe nodes ${\mathcal{V}}_{\text{new }}$ after each expansion. In practice, the policy ${\pi }_{\theta }$ yields an algorithm equivalent to greedy best-first search, with the heuristic function replaced by ${h}_{\theta }$ .
+
+### 4.1 Learning algorithm & architecture
+
+Imitation learning algorithm. In Algorithm 1, we present the pseudo-code of the IL algorithm used to train our heuristic models (Figure 3). The high-level idea of our algorithm is that we aggregate trajectories of search traces (i.e., sequences of new fringe nodes) and use truncated backpropagation through time to optimize ${h}_{\theta }$ after each data-collection step. In particular, after sampling a graph $\mathcal{G}$ and a search problem ${v}_{s},{v}_{g}$ , we use our greedy learned policy ${\pi }_{\theta }$ induced by ${h}_{\theta }$ to roll-in for $t \sim \mathcal{U}\left( {0,\ldots , T - {t}_{\tau }}\right)$ expansions, where $T$ is the episode time horizon, and ${t}_{\tau }$ is the roll-out length. From our roll-in, we obtain a new state $s = \left( {{\mathrm{{CLOSE}}}^{0},{\mathrm{{OPEN}}}^{0},{\mathrm{{REST}}}^{0}}\right)$ , and an initial memory state ${z}_{t}$ . After our roll-in, we roll-out for ${t}_{\tau }$ steps using our mixture policy ${\pi }_{mix}$ , which is obtained by probabilistically blending ${\pi }_{\theta }$ and the greedy best-first policy induced by the oracle heuristic ${\pi }^{ * }$ . In a roll-out, we collect sequences of new fringe nodes, together with their ground-truth distances to the goal ${v}_{g}$ , given by ${h}^{ * }$ . Once the roll-out is complete, we append the obtained trajectory and the initial state for the following optimization using backpropagation through time. Further analysis on the trade-offs between using rolled-in states ${z}_{t}$ or zeroed-out states for training can be found in the supplementary material.
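+
+To make the structure of Algorithm 1 explicit, the sketch below mirrors its outer loop in Python. All helpers and policy objects (`sample_graph`, `sample_problem`, `oracle`, `pi_theta`, `pi_star`, `tbtt_step`, and their methods) are placeholders for the components described above rather than our actual implementation, and the threading of the memory state is simplified.
+
+```python
+import random
+
+def train_phil(h_theta, pi_theta, pi_star, sample_graph, sample_problem,
+               oracle, tbtt_step, T=128, t_tau=32, beta0=0.7, N=40, m=1):
+    """Sketch of Algorithm 1 (PHIL training) with placeholder components."""
+    dataset = []                                     # aggregated trajectories D
+    for i in range(1, N + 1):
+        G = sample_graph()                           # G ~ P_G
+        v_s, v_g = sample_problem(G)                 # (v_s, v_g) ~ P_vsg
+        beta = beta0 ** i                            # oracle mixing weight decays over iterations
+        for _ in range(m):
+            t = random.randint(0, T - t_tau)
+            # Roll-in: t greedy expansions with the current learned heuristic.
+            state, z_t = pi_theta.roll_in(G, v_s, v_g, t)
+            trajectory, z = [], z_t
+            for _ in range(t_tau):                   # roll-out with the mixture policy
+                pi = pi_star if random.random() < beta else pi_theta
+                state, z, v_new = pi.expand(state, z, v_g)          # one fringe expansion
+                trajectory.append((v_new, oracle(state, v_new, v_g)))
+            dataset.append((trajectory, z_t))
+        tbtt_step(h_theta, dataset)                  # truncated BPTT over every trajectory in D
+    return h_theta
+```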
+
+Note that we could also use supervised learning-based approaches to sample a fixed dataset of $\left( {{v}_{s},{v}_{g},{h}^{ * }\left( {s,{v}_{s},{v}_{g}}\right) }\right)$ triples and train a model to predict node distances conditioned on their features. However, our experiments in Section 5 demonstrate that ignoring the non-i.i.d. nature of heuristic search negatively impacts model performance, with supervised learning-based methods performing up to ${40} \times$ worse.
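+
+For contrast, this supervised alternative amounts to fitting a regressor on a fixed, i.i.d. dataset of such triples. The sketch below is purely illustrative (the feature dimensionality, the `dist_from` helper, and the layer sizes are assumptions, with `dist_from` playing the role of the BFS-based oracle sketched in Section 2).
+
+```python
+import random
+import torch
+import torch.nn as nn
+
+def supervised_triples(nodes, feats, dist_from, num_pairs=1000):
+    """Fixed i.i.d. dataset of (x_s, x_g, h*) training examples, ignoring that at
+    test time the queried nodes depend on the search trajectory so far."""
+    xs, ys = [], []
+    for _ in range(num_pairs):
+        v_s, v_g = random.sample(nodes, 2)
+        xs.append(torch.cat([feats[v_s], feats[v_g]]))
+        ys.append(float(dist_from(v_g)[v_s]))        # exact shortest-path distance
+    return torch.stack(xs), torch.tensor(ys)
+
+D_v = 3                                              # illustrative node feature dimensionality
+sl_model = nn.Sequential(nn.Linear(2 * D_v, 256), nn.ReLU(),
+                         nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 1))
+```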
+
+Recurrent GNN architecture. In each forward pass, ${h}_{\theta }$ obtains a set of new fringe nodes ${\mathcal{V}}_{\text{new }}$ , the goal node ${v}_{g}$ , and the memory ${z}_{t}$ at time step $t$ . We represent each node in ${\mathcal{V}}_{\text{new }}$ using its features ${x}_{i} \in {\mathbb{R}}^{{D}_{v}}$ , and likewise the goal node ${v}_{g}$ using its features ${x}_{g} \in {\mathbb{R}}^{{D}_{v}}$ . Further, for each $i \in {\mathcal{V}}_{\text{new }}$ , we uniformly sample a set of at most $n \in {\mathbb{N}}_{ \geq 0}$ nodes from the 1-hop neighborhood of $i$ , calling this set ${\mathcal{N}}_{i}$ , with $\left| {\mathcal{N}}_{i}\right| \leq n$ .
+
+
+
+Figure 3: This figure demonstrates the core idea behind our IL algorithm. We present the roll-in phase on the left-hand side, where our policy is rolled in for $t$ steps to obtain state ${s}_{t}$ and embedding ${z}_{t}$ . On the right-hand side, we show the trajectory collection and training steps, where we aggregate the trajectory for downstream training (green) and use truncated backpropagation through time on the collected dataset (red).
+
+This sampling step produces a set of neighboring node features, where each $j \in {\mathcal{N}}_{i}$ has features ${x}_{j} \in {\mathbb{R}}^{{D}_{v}}$ and corresponding edge features ${e}_{ij} \in {\mathbb{R}}^{{D}_{e}}$ .
+
+${h}_{\theta }$ forward pass. Algorithm 2 presents a single forward pass of ${h}_{\theta }$ .
+
+---
+
+Algorithm 2: Heuristic func. $\left( {h}_{\theta }\right)$ forward pass
+
+---
+
+Obtain ${x}_{i},{x}_{j},{e}_{ij},{x}_{g},{z}_{t}$ ;
+
+${x}_{i} \leftarrow f\left( {{x}_{i},{x}_{g},{D}_{EUC}\left( {{x}_{i},{x}_{g}}\right) ,{D}_{COS}\left( {{x}_{i},{x}_{g}}\right) }\right) ;$
+
+${x}_{j} \leftarrow f\left( {{x}_{j},{x}_{g},{D}_{EUC}\left( {{x}_{j},{x}_{g}}\right) ,{D}_{COS}\left( {{x}_{j},{x}_{g}}\right) }\right) ;$
+
+${g}_{i} \leftarrow \phi \left( {{x}_{i},{ \oplus }_{j \in {\mathcal{N}}_{i}}\gamma \left( {{x}_{i},{x}_{j},{e}_{ij}}\right) }\right) ;$
+
+${g}_{i}^{\prime },{z}_{i, t + 1} \leftarrow \operatorname{GRU}\left( {{g}_{i},{z}_{t}}\right)$ ;
+
+${z}_{t + 1} \leftarrow \overline{{z}_{i, t + 1}}$ ;
+
+${\widehat{h}}_{i} \leftarrow \operatorname{MLP}\left( {{g}_{i}^{\prime },{x}_{g}}\right) ;$
+
+return ${\widehat{h}}_{i},{z}_{t + 1}$ ;
+
+---
+
+---
+
+The forward pass outputs the predicted distances ${\widehat{h}}_{i}$ of the new fringe nodes to the goal, together with an updated memory digest ${z}_{t + 1}$ . In Algorithm 2, $f,\phi ,\gamma$ , GRU [45], and MLP are each parameterized differentiable functions, with $\phi ,\gamma$ representing the update and message functions [46] of a graph neural network, respectively.
+
+In our forward pass, using the function $f$ , we first project ${x}_{i},{x}_{j}$ into a node embedding space, together with the goal features ${x}_{g}$ , and their Euclidean $\left( {D}_{EUC}\right)$ and cosine distances $\left( {D}_{COS}\right)$ . After that, using a 1-layer GNN, we perform a single convolution over each ${x}_{i}$ and the corresponding neighborhood ${\mathcal{N}}_{i}$ , to obtain ${g}_{i}$ . The specific GNN choice is a design decision left to the practitioner, and further analysis of GNN choices can be found in Appendix D. Our graph convolution processing step allows us to easily incorporate edge features and work with variable sizes of ${\mathcal{N}}_{i}$ . After the graph convolution, we apply the GRU module over each embedding ${g}_{i}$ to obtain hidden states ${z}_{i, t + 1}$ , and new embeddings ${g}_{i}^{\prime }$ . We compute the sample mean of ${z}_{i, t + 1}$ for each node $i \in {\mathcal{V}}_{\text{new }}$ to obtain a new hidden state ${z}_{t + 1}$ , and process ${g}_{i}^{\prime }$ with ${x}_{g}$ using an MLP to compute the distances between the graph nodes.
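+
+A PyTorch-style sketch of this forward pass is given below. A plain mean-aggregation message-passing step stands in for the DeeperGCN convolution used in our experiments, and all layer sizes, tensor shapes, and names are illustrative simplifications rather than the exact implementation.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class PhilHeuristic(nn.Module):
+    """Sketch of one h_theta forward pass (Algorithm 2); sizes are illustrative."""
+    def __init__(self, d_v, d_e, d=64):
+        super().__init__()
+        self.f = nn.Sequential(nn.Linear(2 * d_v + 2, d), nn.LeakyReLU())      # node/goal projection
+        self.gamma = nn.Sequential(nn.Linear(2 * d + d_e, d), nn.LeakyReLU())  # message function
+        self.phi = nn.Sequential(nn.Linear(2 * d, d), nn.LeakyReLU())          # update function
+        self.gru = nn.GRUCell(d, d)                                            # explored-graph memory
+        self.mlp = nn.Sequential(nn.Linear(d + d_v, d), nn.LeakyReLU(), nn.Linear(d, 1))
+
+    def forward(self, x_i, x_nbrs, e_ij, x_g, z_t):
+        # x_i: [B, d_v] new fringe nodes; x_nbrs: [B, n, d_v] sampled neighbours;
+        # e_ij: [B, n, d_e] edge features; x_g: [d_v] goal features; z_t: [d] memory digest.
+        def proj(x):
+            g = x_g.expand_as(x)
+            d_euc = torch.norm(x - g, dim=-1, keepdim=True)
+            d_cos = F.cosine_similarity(x, g, dim=-1).unsqueeze(-1)
+            return self.f(torch.cat([x, g, d_euc, d_cos], dim=-1))
+
+        h_i, h_j = proj(x_i), proj(x_nbrs)
+        msg = self.gamma(torch.cat([h_i.unsqueeze(1).expand_as(h_j), h_j, e_ij], dim=-1))
+        g_i = self.phi(torch.cat([h_i, msg.mean(dim=1)], dim=-1))            # one graph convolution
+        z_i = self.gru(g_i, z_t.unsqueeze(0).repeat(g_i.size(0), 1))         # g_i' = z_{i,t+1} here
+        z_next = z_i.mean(dim=0)                                             # permutation-invariant digest
+        goal = x_g.unsqueeze(0).repeat(g_i.size(0), 1)
+        h_hat = self.mlp(torch.cat([z_i, goal], dim=-1)).squeeze(-1)         # predicted distances
+        return h_hat, z_next
+
+# Shape check with B = 4 fringe nodes, n = 8 sampled neighbours, d_v = 2, d_e = 1.
+h = PhilHeuristic(d_v=2, d_e=1)
+h_hat, z1 = h(torch.randn(4, 2), torch.randn(4, 8, 2), torch.randn(4, 8, 1),
+              torch.randn(2), torch.zeros(64))
+```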
+
+Permutation invariant ${\mathcal{V}}_{\text{new }}$ embedding. There is a trade-off between processing new fringe nodes in batch, as in Algorithm 2, and processing them sequentially. Namely, when we process the nodes in batch, we do not use the in-batch observations to predict batch node values, which means that ${z}_{t}$ is slightly outdated. On the other hand, in PHIL, batch processing allows us to compute the heuristic values of all $v \in {\mathcal{V}}_{\text{new }}$ in parallel on a GPU and preserves the memory’s permutation invariance with respect to nodes in ${\mathcal{V}}_{\text{new }}$ . That is, because our observations are nodes and edges of a graph, the respective observation ordering usually does not contain inductive biases useful for predictions, which means that we can apply a permutation invariant operator such as the mean of all new states ${z}_{i, t + 1}$ to obtain an aggregated updated state. This approach provides additional scalability as we can process values in parallel, and PHIL does not have to infer permutation invariance in ${\mathcal{V}}_{\text{new }}$ from data.
+
+Runtime complexity. Since $\forall i \in {\mathcal{V}}_{\text{new }} : \left| {\mathcal{N}}_{i}\right| \leq n$ , Algorithm 2 together with neighborhood sampling runs in up to $n{c}_{1} + \left( {n + 1}\right) {c}_{2}$ operations per node $i \in {\mathcal{V}}_{\text{new }}$ , which is $\mathcal{O}\left( 1\right)$ with respect to the size of the graph. Here, ${c}_{1}$ is the maximal number of operations associated with evaluating a node, such as performing robot collision checks in dynamically constructed graphs, and ${c}_{2}$ is the maximal count of total model operations (e.g., $f$ and $\gamma$ operations) on the node set $\{ i\} \cup {\mathcal{N}}_{i}$ . In general, we expect to learn a better search heuristic with increasing $n$ (see Appendix D for ablations), but in some use cases ${c}_{1}$ may dominate the overall complexity, so the hyperparameter $n$ lets practitioners tune the trade-off between constant factors and search-effort minimization.
+
+## 5 Experiments
+
+In our experiments, we evaluate PHIL both on benchmark heuristic learning datasets [1] (Section 5.1) and on a diverse set of graph datasets (Section 5.2). Finally, we show that PHIL can be applied to efficient planning in the context of drone flight (Section 5.3). Our main goal is to assess how PHIL compares to baseline methods in terms of necessary expansions before the goal node is reached. Please refer to the supplementary material for information about baselines, an ablation study, and additional experiment details.
+
+### 5.1 Heuristic search in grids
+
+In Section 5.1, we evaluate PHIL on the 8 datasets of ${200} \times {200}$ 8-connected grid graphs by Bhardwaj et al. [1]. These datasets present challenging obstacle configurations for naive greedy planning heuristics, especially when ${v}_{s}$ is in the bottom-left of the grid and ${v}_{g}$ in the top-right. Each dataset contains 200 training graphs, 70 validation graphs, and 100 test graphs. Example graphs from each dataset can be found in Table 1.
+
+
+
+Figure 4: Example of PHIL escaping local search minima.
+
+We train PHIL with a hyperparameter configuration of $T = {128}$ , ${t}_{\tau } = {32}$ , ${\beta }_{0} = {0.7}$ , $n = 8$ , using rolled-in ${z}_{t}$ states as initial states for training. We use a 3-layer MLP of width 128 with LeakyReLU activations, followed by a DeeperGCN [47] graph convolution with softmax aggregation. Our memory's embedding dimensionality is 64. See Appendix C for an overview of our baselines and datasets.
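+
+For quick reference, this configuration can be summarized as follows (the dictionary and its field names are our own illustrative summary, not an actual configuration file):
+
+```python
+# Illustrative summary of the grid-experiment hyperparameters described above.
+phil_grid_config = dict(
+    T=128,                 # episode time horizon
+    t_tau=32,              # roll-out length
+    beta_0=0.7,            # initial oracle mixing weight
+    n=8,                   # max sampled 1-hop neighbours per fringe node
+    mlp=(3, 128),          # (depth, width) of the LeakyReLU MLP
+    gnn="DeeperGCN, softmax aggregation",
+    memory_dim=64,         # embedding dimensionality of z_t
+    init_state="rolled-in z_t",
+)
+```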
+
+| Dataset | SAIL | SL | CEM | QL | ${h}_{euc}$ | ${h}_{man}$ | A* | MHA* | BFWS | Neural A* | PHIL |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Alternating gaps | 0.039 | 0.432 | 0.042 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.34 | 0.546 | 0.024 |
+| Single Bugtrap | 0.158 | 0.214 | 0.057 | 1.000 | 0.184 | 0.192 | 1.000 | 0.286 | 0.099 | 0.394 | 0.077 |
+| Shifting gaps | 0.104 | 0.464 | 1.000 | 1.000 | 0.506 | 0.589 | 1.000 | 0.804 | 0.206 | 0.563 | 0.027 |
+| Forest | 0.036 | 0.043 | 0.048 | 0.121 | 0.041 | 0.043 | 1.000 | 0.075 | 0.039 | 0.399 | 0.027 |
+| Bugtrap+Forest | 0.147 | 0.384 | 0.182 | 1.000 | 0.410 | 0.337 | 1.000 | 3.177 | 0.149 | 0.651 | 0.135 |
+| Gaps+Forest | 0.221 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.401 | 0.580 | 0.039 |
+| Mazes | 0.103 | 0.238 | 0.479 | 0.399 | 0.185 | 0.171 | 1.000 | 0.279 | 0.095 | 1.000 | 0.069 |
+| Multiple Bugtraps | 0.479 | 0.480 | 1.000 | 0.835 | 0.648 | 0.617 | 1.000 | 0.876 | 0.169 | 0.331 | 0.136 |
+
+Table 1: Relative number of expanded graph nodes for PHIL and the baselines on the grid datasets (lower is better). We can observe that out of all baselines, SAIL performs best. PHIL outperforms SAIL by 58.5% on average over all datasets, with a maximal search-effort reduction of ${82.3}\%$ on the Gaps+Forest dataset.
+
+Discussion. As we can see in Table 1, PHIL outperforms the best baseline (SAIL) on all datasets, with an average reduction of ${58.5}\%$ in the nodes explored before ${v}_{g}$ is found. Qualitatively, observing Figure 5, we can attribute these results to PHIL's ability to reduce the redundancy in explored nodes during a search. Further, PHIL is also capable of escaping local minima, which is illustrated in Figure 4. However, note that we occasionally observe failure cases in practice, where PHIL gets stuck in a bugtrap-like structure. We discuss possible remedies and opportunities for future work in the supplementary material.
+
+Runtime & convergence speed. PHIL converges in up to $N = {36}$ iterations, with $m = 1$ , ${t}_{\tau } = {32}$ (i.e., after observing fewer than $N \cdot {t}_{\tau } \cdot \max \left( \left| {\mathcal{V}}_{\text{new }}\right| \right) \approx 9{,}216$ shortest path distances, where we take $\max \left( \left| {\mathcal{V}}_{\text{new }}\right| \right) = 8$ as the maximal size of ${\mathcal{V}}_{\text{new }}$ ). According to figures reported in [1], this is approximately $5 \times$ less data than it takes for SAIL to converge.
+
+
+
+Figure 5: In each image pair of this figure, we provide a qualitative comparison with the SAIL method. In particular, we show comparisons on the Shifting gaps, Gaps+Forest, Mazes, and Forest datasets. We can observe that PHIL (right) learns the appropriate heuristics for the given dataset and makes fewer redundant expansions than SAIL (left).
+
+### 5.2 Search in real-life graphs of different structures
+
+In this experiment, our goal is to demonstrate the general applicability of PHIL to various graphs. We train PHIL on 4 different groups of graph datasets: citation networks, biological networks, abstract syntax trees (ASTs), and road networks. For citation networks and road networks, we train and evaluate on the same graph and use 100 random $\left( {{v}_{s},{v}_{g}}\right)$ pairs for testing. For biological networks and ASTs, we use train/validation/test splits of 80/10/10, and for the OGB [48] datasets, we use the provided splits.
+
+| Group | Dataset | $\left| \mathcal{D}\right|$ | $\left| \bar{\mathcal{V}}\right|$ | $\left| \overline{\mathcal{E}}\right|$ | SL | A* | ${h}_{euc}$ | BFS | SAIL | BFWS | PHIL |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Citation Networks | Cora (Sen et al. [49]) | 1 | 2,708 | 5,429 | 2.201 | 2.067 | 1.000 | 4.001 | 0.669 | 1.378 | 0.475 |
+| | PubMed (Sen et al. [49]) | 1 | 19,717 | 44,338 | 2.157 | 2.983 | 1.000 | 3.853 | 1.196 | 1.000 | 0.745 |
+| | CiteSeer (Sen et al. [49]) | 1 | 3,327 | 4,732 | 1.636 | 1.487 | 1.000 | 2.190 | 1.062 | 0.951 | 0.599 |
+| | Coauthor (CS) (Shchur et al. [50]) | 1 | 18,333 | 81,894 | 1.571 | 1.069 | 1.000 | 2.820 | 1.941 | 1.026 | 0.835 |
+| | Coauthor (Physics) (Shchur et al. [50]) | 1 | 34,493 | 247,962 | 4.076 | 1.081 | 1.000 | 4.523 | - | 1.012 | 0.964 |
+| Biological Networks | OGBG-Molhiv (Hu et al. [48]) | 41,127 | 25.5 | 27.5 | 1.086 | 1.065 | 1.000 | 1.267 | 1.104 | 1.146 | 1.016 |
+| | PPI (Zitnik et al. [51]) | 24 | 2,372.67 | 34,113.16 | 0.772 | 0.831 | 1.000 | 5.618 | 1.746 | 3.941 | 0.658 |
+| | Proteins (Full) (Morris et al. [52]) | 1,113 | 39.06 | 72.82 | 0.995 | 0.997 | 1.000 | 2.645 | 0.891 | 0.966 | 0.831 |
+| | Enzymes (Morris et al. [52]) | 600 | 32.63 | 62.14 | 1.073 | 1.007 | 1.000 | 1.358 | 1.036 | 0.992 | 0.757 |
+| ASTs | OGBG-Code2 (Hu et al. [48]) | 452,741 | 125.2 | 124.2 | 1.196 | 1.013 | 1.000 | 1.267 | 1.029 | 0.817 | 1.219 |
+| Road Networks | OSMnx - Modena (Boeing [53]) | 1 | 29,324 | 38,309 | 2.904 | 3.085 | 1.000 | 3.493 | 1.182 | 0.997 | 0.489 |
+| | OSMnx - New York (Boeing [53]) | 1 | 54,128 | 89,618 | 39.424 | 36.529 | 1.000 | 63.352 | 1.583 | 1.013 | 0.962 |
+
+Table 2: Comparison of PHIL with baseline approaches on 4 groups of datasets: citation networks, biological networks, abstract syntax trees, and road networks. "-" denotes exceeding a 4-day training time limit. We can observe that, on average across all datasets, PHIL outperforms the best baseline per dataset by 13.4%. Excluding the OGBG datasets, this number becomes 19.5%.
+
+Similarly to Section 5.1, we use an MLP, here with four layers of width 128 and LeakyReLU activations, followed by a DeeperGCN [47] graph convolution with softmax aggregation. The node and edge features are those provided with each dataset, except for a few minor modifications discussed in Appendix A & Appendix C. As our learning-based baseline, we train an MLP of depth 5 and width 256 using supervised learning (SL).
+
+Discussion. The results presented in Table 2 suggest that PHIL can learn superior search heuristics compared with baseline methods, outperforming the top baselines per dataset in terms of visited nodes during a search by ${13.4}\%$ on average. Two datasets where PHIL fell short compared to other baselines are the OGBG-Molhiv and OGBG-Code2 datasets. The OGBG-Code2 dataset adopts a project split [54] and OGBG-Molhiv adopts a scaffold split [55], both of which ensure that graphs of different structure are present in the training & test sets. Although PHIL improved upon uninformed search (BFS) in the OGB datasets, structural graph consistency is explicitly discouraged in the above-mentioned OGBG splits. Without the OGBG datasets, PHIL improves on the top baselines per dataset by ${19.5}\%$ on average, and upon the Euclidean node feature heuristic $\left( {h}_{\text{euc }}\right)$ by ${20.4}\%$ . Note that we trained PHIL for up to $N = {60}$ iterations, so it only encountered a small subset of the pathfinding problems in the single-graph setting and had to generalize to learn useful heuristics. Even in Cora, the $\left| \mathcal{D}\right| = 1$ dataset with the fewest nodes, PHIL observed roughly 6,000 node distances during training, which is less than ${0.2}\%$ of the total distances in the Cora graph.
+
+### 5.3 Planning for drone flight
+
+In our final experiment, we use PHIL to plan collision-free paths in a practical drone flight use case within an indoor environment. We built our environment using the CoppeliaSim simulator [56] and the Ivy framework [57]. Figure 6 presents the environment, which we refer to as room adversarial in Table 3. For more detail about each environment, please refer to the supplementary material. We discretize the environments into 3D grid graphs of size ${50} \times {50} \times {25}$ and randomly remove 5 sub-graphs of size $5 \times 5 \times 5$ both during training and testing, this way simulating real-life planning scenarios with random obstacles. The hyperparameter configuration and the specific architecture we utilize are the same as in Section 5.1, except that $n = 4$ . The node features are 3D grid coordinates, and the baselines include supervised learning (SL), ${h}_{euc}$ , A*, and BFS, as in Sections 5.1 and 5.2. In Table 3 we report the ratio of expanded nodes with respect to ${h}_{euc}$ .
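+
+One way to reproduce this graph construction is sketched below. The helper name, the assumption of 6-connectivity, and the random block placement are our own illustration of the stated setup (a ${50} \times {50} \times {25}$ grid with five random $5 \times 5 \times 5$ blocks removed), not the simulator pipeline itself.
+
+```python
+import itertools
+import random
+
+def make_room_graph(dims=(50, 50, 25), n_blocks=5, block=5, seed=0):
+    """3D grid graph (as an adjacency dict) with random block-shaped obstacles removed.
+    Node features are simply the 3D grid coordinates, i.e. the node tuples themselves."""
+    rng = random.Random(seed)
+    removed = set()
+    for _ in range(n_blocks):                       # carve out 5x5x5 obstacle blocks
+        cx, cy, cz = (rng.randrange(d - block) for d in dims)
+        removed |= {(cx + i, cy + j, cz + k)
+                    for i in range(block) for j in range(block) for k in range(block)}
+    nodes = [v for v in itertools.product(*(range(d) for d in dims)) if v not in removed]
+    node_set = set(nodes)
+    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
+    return {v: [tuple(a + b for a, b in zip(v, o)) for o in offsets
+                if tuple(a + b for a, b in zip(v, o)) in node_set]
+            for v in nodes}
+```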
+
+
+
+Figure 6: This figure illustrates the room adversarial environment with an example planning problem (red) and the expanded graph by PHIL (blue).
+
+Video demo. We provide a video demonstration of PHIL running in room adversarial: https://cutt.ly/eniu5ax.
+
+| Dataset | SL | A* | ${h}_{euc}$ | BFS | SAIL | BFWS | PHIL | Shortest path |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Room simple | 1.124 | 76.052 | 1.000 | 291.888 | 0.973 | 1.286 | 0.785 | 0.782 |
+| Room adversarial | 2.022 | 67.215 | 1.000 | 238.768 | 0.944 | 1.583 | 0.895 | 0.853 |
+
+Table 3: Results of PHIL in the context of planning for indoor UAV flight. PHIL outperforms all baselines in both the room simple and room adversarial environments while remaining close performance-wise to the optimal number of expansions.
+
+Discussion. As we can observe in Table 3, PHIL outperforms all baselines in both environments. Interestingly, PHIL expands only approximately 0.3% more nodes than the minimum possible in the simple room and ${4.9}\%$ more in the adversarial room. The same figures for the greedy method $\left( {h}_{euc}\right)$ are ${27.8}\%$ and ${17.2}\%$ , respectively. These results indicate that PHIL is capable of learning planning strategies that are close to optimal in both simple and adversarial graphs, while the performance of naive heuristics degrades.
+
+### 5.4 Runtime Analysis
+
+We summarize test run-times of different approaches in Appendix G. PHIL runs 57.9% faster than BFWS and 32.2% faster than SAIL, and is only moderately slower than traditional A* (34.7%) and ${h}_{man}$ (18.3%). Although Neural A* is ${71.0}\%$ faster than PHIL because it casts the whole search process into matrix operations on images, it cannot be employed in a generic search setting.
+
+## 6 Conclusion
+
+In our work, we consider the problem of learning to search for feasible paths in graphs efficiently. We propose a model and a training procedure to learn search heuristics that can be easily deployed across diverse graphs, with tuneable trade-off parameters between constant factors and performance. Our results demonstrate that PHIL outperforms current state-of-the-art approaches and can be applied to various graphs with practical use cases.
+
+References
+
+[1] Mohak Bhardwaj, Sanjiban Choudhury, and Sebastian Scherer. Learning heuristic search via imitation. In Conference on Robot Learning, 2017. 1, 2, 3, 4, 5, 7, 13, 14, 15, 16, 19, 22
+
+[2] Binghong Chen, Chengtao Li, Hanjun Dai, and Le Song. Retro*: Learning retrosynthetic planning with neural guided a* search. In ICML, 2020. 1, 4
+
+[3] Martin Gebser, Benjamin Kaufmann, Javier Romero, Ramón Otero, Torsten Schaub, and Philipp Wanko. Domain-specific heuristics in answer set programming. In AAAI, 2013. 1
+
+[4] Thi Thoa Mac, Cosmin Copot, Duc Trung Tran, and Robin De Keyser. Heuristic approaches in robot path planning: A survey. In Robotics and Autonomous Systems, 2016. 1
+
+[5] Abhishek Sharma and Keith M. Goolsbey. Identifying useful inference paths in large commonsense knowledge bases by retrograde analysis. In AAAI, 2017. 1
+
+[6] Cheng-Yu Yeh, Hsiang-Yuan Yeh, Carlos Roberto Arias, and Von-Wun Soo. Pathway detection from protein interaction networks and gene expression data using color-coding methods and a* search algorithms. In The Scientific World Journal, 2012. 1
+
+[7] Judea Pearl. Heuristics: intelligent search strategies for computer problem solving. 1984. 2
+
+[8] Danish Khalidi, Dhaval Gujarathi, and Indranil Saha. T*: A heuristic search based path planning algorithm for temporal logic specifications. In ICRA, 2020. 2
+
+[9] Bhargav Adabala and Zlatan Ajanovic. A multi-heuristic search-based motion planning for autonomous parking. In 30th International Conference on Automated Planning and Scheduling: Planning and Robotics Workshop, 2020. 2
+
+[10] Fan Xie, Hootan Nakhost, and Martin Müller. Planning via random walk-driven local search. In Twenty-Second International Conference on Automated Planning and Scheduling, 2012. 2, 3
+
+[11] Fan Xie, Martin Müller, and Robert Holte. Adding local exploration to greedy best-first search in satisficing planning. In ${AAAI},{2014}$ .
+
+[12] Nir Lipovetzky and Hector Geffner. Best-first width search: Exploration and exploitation in classical planning. In AAAI, 2017. 3, 14
+
+[13] Florent Teichteil-Königsbuch, Miquel Ramirez, and Nir Lipovetzky. Boundary extension features for width-based planning with simulators on continuous-state domains. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, 2021. 2, 3, 14
+
+[14] Blai Bonet and Héctor Geffner. Planning as heuristic search. In Artificial Intelligence, pages 5-33, 2001. 2, 3
+
+[15] Lin Zhu and Robert Givan. Landmark extraction via planning graph propagation. 2003.
+
+[16] Silvia Richter and Matthias Westphal. The lama planner: Guiding cost-based anytime planning with landmarks. 2010. 2, 3
+
+[17] Ilya Sutskever. Training recurrent neural networks. University of Toronto, Toronto, Canada, 2013. 3
+
+[18] Stephane Ross and J Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. In arXiv preprint arXiv:1406.5979, 2014. 3, 4
+
+[19] Sanjiban Choudhury, Ashish Kapoor, Gireeja Ranade, Sebastian Scherer, and Debadeepta Dey. Adaptive information gathering via imitation learning. 2017. 3
+
+[20] Shahab Jabbari Arfaee, Sandra Zilles, and Robert C Holte. Learning heuristic functions for large state spaces. In Artificial Intelligence, 2011. 4
+
+[21] Jesús Virseda, Daniel Borrajo, and Vidal Alcázar. Learning heuristic functions for cost-based planning. In Planning and Learning, 2013. 4
+
+[22] Christopher Makoto Wilt and Wheeler Ruml. Building a heuristic for greedy search. In SOCS, 2015. 4
+
+[23] Caelan Reed Garrett, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. Learning to rank for synthesizing planning heuristics. In IJCAI, 2016. 4
+
+[24] Jordan Thayer, Austin Dionne, and Wheeler Ruml. Learning inadmissible heuristics during search. In Proceedings of the International Conference on Automated Planning and Scheduling, 2011. 4
+
+[25] Soonkyum Kim and Byungchul An. Learning heuristic a*: efficient graph search using neural network. In ICRA, 2020. 4
+
+[26] Yuka Ariki and Takuya Narihira. Fully convolutional search heuristic learning for rapid path planners. In arXiv preprint arXiv:1908.03343, 2019. 4
+
+[27] Ryo Terasawa, Yuka Ariki, Takuya Narihira, Toshimitsu Tsuboi, and Kenichiro Nagasaka. 3d-cnn based heuristic guided task-space planner for faster motion planning. In ICRA, 2020.
+
+[28] Ryo Yonetani, Tatsunori Taniai, Mohammadamin Barekatain, Mai Nishimura, and Asako Kanezaki. Path planning using neural a* search. In ICML, 2021. 14
+
+[29] Alberto Archetti, Marco Cannici, and Matteo Matteucci. Neural weighted a*: Learning graph costs and heuristics with differentiable anytime a*. 2021. 4
+
+[30] Elias B. Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In NeurIPS, 2017. 4
+
+[31] Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial optimization with graph convolutional networks and guided tree search. In NeurIPS, 2018.
+
+[32] Nikolaos Karalias and Andreas Loukas. Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs. In NeurIPS, 2020. 4
+
+[33] David Silver and Joel Veness. Monte-carlo planning in large pomdps. In NeurIPS, 2010. 4
+
+[34] Arthur Guez, Theophane Weber, Ioannis Antonoglou, Karen Simonyan, Oriol Vinyals, Daan Wierstra, Rémi Munos, and David Silver. Learning to search with mctsnets. In ICML, 2018. 4
+
+[35] Andreea Deac, Petar Veličković, Ognjen Milinković, Pierre-Luc Bacon, Jian Tang, and Mladen Nikolić. Xlvin: executed latent value iteration nets. In arXiv preprint arXiv:2010.13146, 2020. 4
+
+[36] Péter Karkus, David Hsu, and Wee Sun Lee. Qmdp-net: Deep learning for planning under partial observability. In NeurIPS, 2017.
+
+[37] Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration networks. In NeurIPS, 2016. 4
+
+[38] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racanière, David Reichert, Théophane Weber, Daan Wierstra, and Peter Battaglia. Learning model-based planning from scratch. In arXiv preprint arXiv:1707.06170, 2017. 4
+
+[39] Sébastien Racanière, Theophane Weber, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter W. Battaglia, Demis Hassabis, David Silver, and Daan Wierstra. Imagination-augmented agents for deep reinforcement learning. In NeurIPS, 2017. 4
+
+[40] Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. In Machine learning, 2009. 4
+
+[41] Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé III, and John Langford. Learning to search better than your teacher. In ICML, 2015. 4
+
+[42] Wen Sun, Arun Venkatraman, Geoffrey J. Gordon, Byron Boots, and J. Andrew Bagnell. Deeply aggrevated: differentiable imitation learning for sequential prediction. In ICML, 2017. 4
+
+[43] Michael Laskey, Jonathan Lee, Roy Fox, Anca Dragan, and Ken Goldberg. Dart: Noise injection for robust imitation learning. In Conference on robot learning, 2017. 4
+
+[44] Wen Sun, J. Andrew Bagnell, and Byron Boots. Truncated horizon policy search: combining reinforcement learning & imitation learning. In ICLR, 2018. 4
+
+[45] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP, 2014. 6
+
+[46] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In ICML, 2017. 6, 17
+
+[47] Guohao Li, Chenxin Xiong, Ali Thabet, and Bernard Ghanem. Deepergcn: All you need to train deeper gcns. In arXiv preprint arXiv:2006.07739, 2020. 7, 8, 17
+
+[48] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In NeurIPS, 2020. 8
+
+[49] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. In AI magazine, 2008. 8
+
+[50] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. In arXiv preprint arXiv:1811.05868, 2018. 8
+
+[51] Marinka Zitnik and Jure Leskovec. Predicting multicellular function through multi-layer tissue networks. In Bioinformatics, 2017. 8
+
+[52] Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. In arXiv preprint arXiv:2007.08663, 2020. 8
+
+[53] Geoff Boeing. Osmnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks. In Computers, Environment and Urban Systems, 2017. 8
+
+[54] Miltiadis Allamanis. The adverse effects of code duplication in machine learning models of code. In ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, 2019. 8
+
+[55] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. In Chemical science, 2018. 8
+
+[56] E. Rohmer, S. P. N. Singh, and M. Freese. Coppeliasim (formerly v-rep): a versatile and scalable robot simulation framework. In IROS, 2013. 9, 15
+
+[57] Daniel Lenton, Fabio Pardo, Fabian Falck, Stephen James, and Ronald Clark. Ivy: Templated deep learning for inter-framework portability. In arXiv preprint arXiv:2102.02886, 2021. 9, 15
+
+[58] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In ICML, 2019. 13
+
+[59] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In ACM SIGKDD, 2016. 13
+
+[60] Stuart Russell and Peter Norvig. Artificial intelligence: a modern approach. 2002. 13
+
+[61] Sandip Aine, Siddharth Swaminathan, Venkatraman Narayanan, Victor Hwang, and Maxim Likhachev. Multi-heuristic a*. In The International Journal of Robotics Research, 2016. 13, 14
+
+[62] Edo Cohen-Karlik, Avichai Ben David, and Amir Globerson. Regularizing towards permutation invariance in recurrent models. In NeurIPS, 2020. 13
+
+[63] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. In NeurIPS Deep Learning Workshop, 2013. 14
+
+[64] Pieter-Tjerk De Boer, Dirk P Kroese, Shie Mannor, and Reuven Y Rubinstein. A tutorial on the cross-entropy method. In Annals of operations research, 2005. 14
+
+[65] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018. 17
+
+[66] Petar Velickovic, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In ICLR, 2020. 18
+
+[67] Petar Velickovic. Tikz. https://github.com/PetarV-/TikZ, last accessed on 01/6/21. 20
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a1dec2db5812f22aa54a23a7ccf6aa12ea20b9fb
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/-xjStp_F9o/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,287 @@
+§ LEARNING GRAPH SEARCH HEURISTICS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Searching for a path between two nodes in a graph is one of the most well-studied and fundamental problems in computer science. In numerous domains such as robotics, AI, or biology, practitioners develop search heuristics to accelerate their pathfinding algorithms. However, it is a laborious and complex process to hand-design heuristics based on the problem and the structure of a given use case. Here we present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data by leveraging recent advances in imitation learning and graph representation learning. At training time, we aggregate datasets of search trajectories and ground-truth shortest path distances, which we use to train a specialized graph neural network-based heuristic function using backpropagation through steps of the pathfinding process. Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph sizes, and can be easily incorporated in an algorithm such as ${\mathrm{A}}^{ * }$ at test time. Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by ${58.5}\%$ on average, can be directly applied in diverse graphs ranging from biological networks to road networks, and allows for fast planning in time-critical robotics domains.
+
+§ 1 INTRODUCTION
+
+Search heuristics are essential in several domains, including robotics, AI, biology, and chemistry [1- 6]. For example, in robotics, complex robot geometries often yield slow collision checks, and search algorithms are constrained by the robot's onboard computation resources, requiring well-performing search heuristics that visit as few nodes as possible [1, 4]. In AI, domain-specific search heuristics are useful for improving the performance of inference engines operating on knowledge bases [3, 5]. Search heuristics have been previously also developed to reduce search efforts in protein-protein interaction networks [6] and in planning chemical reactions that can synthesize target chemical products [2]. This broad set of applications underlines the importance of good search heuristics that are applicable to a wide range of problems.
+
+
+Figure 1: The goal is to navigate (find a path) from the start to the goal node. While BFS visits many nodes to find a start-to-goal path (left), one can use a heuristic based on the features of the nodes (e.g., Euclidean distance) on the graph to reduce the search effort (middle). We propose PHIL to learn a tailored search heuristic for a given graph, capable of reducing the number of visited nodes even further by exploiting the inductive biases of the graph (right).
+
+The search task can be formulated as a pathfinding problem on a graph, where given a graph, the task is to navigate and find a short feasible path from a start node to a goal node, while in the process visiting as few nodes as possible (Figure 1). The most straightforward approach would be to launch a search algorithm such as breadth-first search (BFS) and iteratively expand the graph from the start node until it reaches the goal node. Since BFS does not harness any prior knowledge about the graph, it usually visits many nodes before reaching the goal, which is expensive in cases such as robotics where visiting nodes is costly. To visit fewer nodes during the search, one may use domain-specific information about the graph via a heuristic function [7], which allows one to define a distance metric on graph nodes to prune directions that seem less promising to explore. Unfortunately, coming up with good search heuristics requires significant domain expertise and manual effort.
+
+While there has been significant progress in designing search heuristics, it remains a challenging problem. Classical approaches $\left\lbrack {8,9}\right\rbrack$ tend to hand-design search heuristics, which requires domain knowledge and a lot of trial and error. To alleviate this problem, there has been significant development in general-purpose search heuristics based on trading-off greedy expansions and novelty-based exploration [10-13] or search problem simplifications [14-16]. These approaches alleviate some of the common pitfalls of goal-directed heuristics, but we demonstrate that if possible, it is useful to learn domain-specific heuristics that can better exploit problem structure.
+
+On the other hand, learning-based methods face a set of different challenges. Firstly, the data distribution is not i.i.d., as newly encountered graph nodes depend on past heuristic values, which means that supervised learning-based methods are not directly applicable. Secondly, heuristics should run fast, with ideally constant time complexity. Otherwise, the overall asymptotic time complexity of the search procedure could be increased. Finally, as the environment (search graph) sizes increase, reinforcement learning-based heuristic learning approaches tend to perform poorly [1]. State-of-the-art imitation learning-based methods can learn useful search heuristics [1]; however, these methods still rely on feature-engineering for a specific domain and do not generally guarantee a constant time complexity with respect to graph sizes.
+
+
+Figure 2: Main components of PHIL: On the left, using a greedy mixture policy induced by the current version of our parameterized heuristic ${h}_{\theta }$ and an oracle heuristic ${h}^{ * }$ (i.e., a heuristic that correctly determines distances between nodes), we roll-out a search trajectory from the start node to the goal node. Each trajectory step contains a set of newly added fringe nodes with bounded random subsets of their 1-hop neighborhoods and their oracle $\left( {h}^{ * }\right)$ distances to the goal node. Trajectories are aggregated throughout the training procedure. On the right, we use truncated backpropagation through time on each collected trajectory to train ${h}_{\theta }$ , where $\widehat{h}$ is the predicted distance between ${x}_{2}$ and ${x}_{g}$ , and ${z}_{2}$ is the updated state of the memory. Here, the memory captures the embedding of the graph visited so far.
+
+In this paper, we propose Path Heuristic with Imitation Learning (PHIL), a framework that extends the recent imitation learning-based heuristic search paradigm with a learnable explored graph memory. This means that PHIL learns a representation that allows it to capture the structure of the graph explored so far, so that it can then better select which node to explore next (Figure 2). We train our approach to predict the node-to-goal distances ( ${h}^{ * }$ in Figure 2) of graph nodes during search. To train our memory module, which captures the explored graph, we use truncated backpropagation through time (TBTT) [17], where we utilize ground-truth node-to-goal distances as a supervision signal at each search step. Our TBTT procedure is embedded within an adaptation of the AggreVaTe imitation learning algorithm [18]. PHIL also includes a specialized graph neural network architecture, which allows us to apply PHIL to diverse graphs from different domains.
+
+We evaluate PHIL on standard benchmark heuristic learning datasets (Section 5.1), diverse graph-based datasets from different domains (Section 5.2), and practical UAV flight use cases (Section 5.3). Experiments demonstrate that PHIL outperforms state-of-the-art heuristic learning methods by up to $4 \times$ . Further, PHIL performs within 4.9% of an oracle in indoor drone planning scenarios, which is up to a 21.5% reduction compared with commonly used approaches. In practice, our contributions enable practitioners to quickly extract useful search heuristics from their graph datasets without any hand-engineering.
+
+§ 2 PRELIMINARIES
+
+Graph search. Suppose that we are given an unweighted connected graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is a set of nodes, and $\mathcal{E}$ a corresponding set of edges. Further suppose that each node $i \in \mathcal{V}$ has corresponding features ${x}_{i} \in {\mathbb{R}}^{{D}_{v}}$ , and each edge $\left( {i,j}\right) \in \mathcal{E}$ has features ${e}_{ij} \in {\mathbb{R}}^{{D}_{e}}$ . Assume that we are also given a start node ${v}_{s} \in \mathcal{V}$ and a goal node ${v}_{g} \in \mathcal{V}$ . At any stage of our search algorithm, we can partition the nodes of our graph into three sets as $\mathcal{V} = \operatorname{CLOSE} \cup \mathrm{{OPEN}} \cup \mathrm{{REST}}$ , where CLOSE are the nodes already explored, OPEN are candidate nodes for exploration (i.e., all nodes connected to any node in CLOSE, but not yet in CLOSE), and REST is the rest of the graph. Each expansion moves a node from OPEN to CLOSE, and adds the neighbors of the given node from REST to OPEN. We call ${\mathcal{V}}_{\text{new }}$ the set of fringe nodes newly added at each search step. At the start of the search procedure, CLOSE $= \left\{ {v}_{s}\right\}$ and we expand nodes until ${v}_{g}$ is encountered (i.e., until ${v}_{g} \in$ CLOSE).
+
+Greedy best-first search. We can perform greedy best-first search using a greedy fringe expansion policy, such that we always expand the node $v \in$ OPEN that minimizes $h\left( {v,{v}_{g}}\right)$ . Here, $h : \mathcal{V} \times \mathcal{V} \rightarrow$ $\mathbb{R}$ is a tailored heuristic function for a given use case. In our work, we are interested in learning a function $h$ that predicts shortest path lengths, this way minimizing $\left| \text{ CLOSE }\right|$ in a greedy best-first search regime.
+
+Imitation of perfect heuristics. Partially observable Markov decision processes (POMDPs) are a suitable framework to describe the problem of learning search heuristics [1]. We take $s =$ (CLOSE, OPEN, REST) as our state; an action $a \in \mathcal{A}$ corresponds to moving a node from OPEN to CLOSE; and the observations $o \in \mathcal{O}$ are the features of the nodes newly included in OPEN. Note that one could consider an MDP framework to learn heuristics, but the time complexity of operating on the whole state is in most cases prohibitive. We also define a history $\psi \in \Psi$ as a sequence of observations $\psi = {o}_{1},{o}_{2},{o}_{3},\ldots$ Our work leverages the observation that a heuristic function that correctly determines the length of the shortest path between fringe nodes and the goal node will also yield a minimal |CLOSE| under greedy best-first search. For training, we adopt a perfect heuristic ${h}^{ * }$ , similar to [1], which has full information about $s$ during search. Such an oracle can provide ground-truth distances ${h}^{ * }\left( {s,v,{v}_{g}}\right)$ , where $v \in$ OPEN. To conclude, we define a greedy best-first search policy ${\pi }_{\theta }$ that uses a parameterized heuristic ${h}_{\theta }$ to expand nodes from OPEN with minimal heuristic values. One could also directly use a POMDP solver for the above-described problem, but this approach is usually infeasible due to the dimensionality of the search state [19].
+
+§ 3 RELATED WORK
+
+General purpose heuristic design. There has been significant research in designing general-purpose heuristics for speeding up satisficing planning. The first set of approaches is based on simplifying the search problem, for example, using landmark heuristics [14, 16]. The next set of approaches aims to include novelty-based exploration in greedy best-first search [10-13]. The latter approaches (best-first width search, BFWS [12, 13]) have shown state-of-the-art performance in numerous settings. We show that in domains where data is available, it can be more effective to incorporate a learned heuristic into a greedy best-first search procedure.
+
+Learning heuristic search. There have been numerous previous works that attempt to learn search heuristics: Arfaee et al. [20] propose to improve heuristics iteratively, Virseda et al. [21] learn to combine heuristics to estimate graph node distances, Wilt et al. [22] and Garrett et al. [23] propose to learn node rankings, Thayer et al. [24] suggest to infer heuristics during a search, and Kim et al. [25] train a neural network to predict graph node distances. These methods generally do not consider the non-i.i.d. nature of heuristic search. Further, Bhardwaj et al. [1] propose SAIL, where heuristic learning is framed as an imitation learning problem with cost-to-go oracles. The SAIL heuristic uses hand-designed features tailored for obstacle avoidance, with a time complexity linear in the number of explored grid nodes found to be colliding with an obstacle. Feature engineering becomes more difficult as we attempt to learn heuristics on diverse graphs such as the ones seen in Section 5.2, where we may need expert knowledge. Further, heuristics that do not have a constant time complexity in the size of the graph [1, 26-29] generally scale poorly with graph size and hence have constrained use cases. Recent approaches to learning heuristics include Retro* [2] by Chen et al., where a heuristic is learned in the context of AND-OR search trees for chemical retrosynthetic planning. Our work focuses on a more general graph setting.
+
+There has been significant progress on learning heuristics for NP-hard combinatorial optimization problems [30-32]. Still, due to their time complexities, these heuristic learning methods are impractical for application to the polynomial-time search problems on which this work focuses.
+
+Learning general purpose search. Learning general search policies is a very well-studied research area with a rich set of developments and applications. These include Monte Carlo Tree Search methods [33, 34], implicit planning methods [35-37], and imagination-based planning approaches [38, 39]. Learning search heuristics can be seen as a special case of general purpose search, where the search problem is treated as a partially observable Markov decision process with restricted action evaluation (see Section 4), and with models running in $\mathcal{O}\left( 1\right)$ to remain competitive time-complexity-wise on problems where best-first search performs well. General purpose search methods do not take into account the above-mentioned constraints, which motivates the development of tailored approaches for learning heuristics [1, 2].
+
+Imitation learning. Our approach builds on prior work in imitation learning (IL) with cost-to-go oracles. Cost-to-go oracles have been incorporated in the context of IL in methods such as SEARN [40], AggreVaTe [18], LOLS [41], AggreVaTeD [42], DART [43], and THOR [44]. SAIL [1] presents an AggreVaTe-based algorithm for learning heuristic search. We extend SAIL by incorporating a recurrent $Q$ -like function, in which sense our algorithm more closely resembles AggreVaTeD by Sun et al. [42]. While a recurrent policy can be easily incorporated in AggreVaTeD, we cannot use a policy to evaluate actions. This is because we would either have to evaluate all actions in a state, which is computationally infeasible, or we would have to give up on taking actions that are not in the most recent version of the search fringe, which would degrade performance (see Section 4).
+
+§ 4 PATH HEURISTIC WITH IMITATION LEARNING
+
+Training objective. With the aim of minimizing |CLOSE| after search, our goal is to train a parameterized heuristic function ${h}_{\theta } : \Psi \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ to predict ground-truth node distances ${h}^{ * }$ and use ${h}_{\theta }$ within a greedy best-first policy ${\pi }_{\theta }$ at test time. More specifically, we assume access to a distribution over graphs ${P}_{\mathcal{G}}$ , a start-goal node distribution ${P}_{{v}_{sg}}\left( {\cdot \mid \mathcal{G}}\right)$ , and a time horizon $T$ . Moreover, we assume a joint state-history distribution $s,\psi \sim {P}_{s}\left( {\cdot \mid \mathcal{G},t,{\pi }_{\theta },{v}_{s},{v}_{g}}\right)$ , where ${P}_{s}$ represents the probability of our search being in state $s$ at time $0 \leq t \leq T$ on graph $\mathcal{G}$ with pathfinding problem $\left( {{v}_{s},{v}_{g}}\right)$ , under a greedy best-first search policy ${\pi }_{\theta }$ using heuristic ${h}_{\theta }$ . Hence, our goal can be summarized as minimizing the following objective:
+
+$$
+\mathcal{L}\left( \theta \right) = \underset{\begin{matrix} {\mathcal{G} \sim {P}_{\mathcal{G}},} \\ {\left( {{v}_{s},{v}_{g}}\right) \sim {P}_{{v}_{sg}},} \\ {t \sim \mathcal{U}\left( {0,\ldots ,T}\right) ,} \\ {s,\psi \sim {P}_{s}} \end{matrix}}{\mathbb{E}}\left\lbrack {\frac{1}{\left| \mathrm{OPEN}\right| }\mathop{\sum }\limits_{{v \in \mathrm{OPEN}}}{\left( {h}^{ * }\left( {s,v,{v}_{g}}\right) - {h}_{\theta }\left( {\psi ,v,{v}_{g}}\right) \right) }^{2}}\right\rbrack \tag{1}
+$$
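+
+As an illustration, the inner term of Equation (1) for a single sampled state could be computed as in the following sketch, where `h_theta` and `h_star` are stand-ins for the learned heuristic and the oracle.
+
+```python
+import torch
+
+def state_loss(h_theta, h_star, psi, open_nodes, v_g):
+    """Mean squared error between oracle distances and predictions over OPEN."""
+    preds = torch.stack([h_theta(psi, v, v_g) for v in open_nodes])
+    targets = torch.tensor([h_star(v, v_g) for v in open_nodes], dtype=preds.dtype)
+    return torch.mean((targets - preds) ** 2)
+```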
+
+Before we describe the algorithm that can be used to minimize $\mathcal{L}$ , we rewrite ${h}_{\theta }$ to include a memory digest component $\left( {z}_{t}\right)$ , which represents an embedding of $\psi$ at time step $t$ . Hence, ${h}_{\theta }$ becomes ${h}_{\theta } : {\mathbb{R}}^{d} \times \mathcal{O} \times \mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ , where $d$ is the dimensionality of our memory's embedding space. As opposed to previous methods [1], ${z}_{t}$ allows us to automatically extract relevant features for heuristic computations and concurrently reduce the computational complexity of the heuristic function. Further, as shown in [1], if we used ${h}_{\theta }$ to evaluate all actions in a state (i.e., recalculate the heuristic values of all nodes in OPEN), we would need a squared reduction in the number of expanded nodes compared with BFS for PHIL to bring performance benefits over BFS, which may not be achievable on all datasets. Hence, we constrain the heuristic to evaluate only the new OPEN nodes obtained after moving a node to CLOSE, calling this set of new fringe nodes ${\mathcal{V}}_{\text{ new }}$ after each expansion. In practice, the policy ${\pi }_{\theta }$ yields an algorithm equivalent to greedy best-first search, with the heuristic function replaced by ${h}_{\theta }$ .
+
+Algorithm 1: PHIL - Sequential Heuristic Training
+
+1. Obtain hyperparameters $T,{\beta }_{0},N,m,{t}_{\tau }$ ;
+2. Initialize $\mathcal{D} \leftarrow \varnothing ,{h}_{{\theta }_{1}}$ ;
+3. for $i = 1,\ldots ,N$ do
+    1. Sample $\mathcal{G} \sim {P}_{\mathcal{G}}$ ;
+    2. Sample ${v}_{s},{v}_{g} \sim {P}_{{v}_{sg}}$ ;
+    3. Set $\beta \leftarrow {\beta }_{0}^{i}$ ;
+    4. Set mixture policy ${\pi }_{\text{ mix }} \leftarrow \left( {1 - \beta }\right) * {\pi }_{{\theta }_{i}} + \beta * {\pi }^{ * }$ ;
+    5. Collect $m$ trajectories ${\tau }_{ij}$ : for $j = 1,\ldots ,m$ do
+        1. Sample $t \sim \mathcal{U}\left( {0,\ldots ,T - {t}_{\tau }}\right)$ ;
+        2. Roll-in $t$ time steps of ${\pi }_{{\theta }_{i}}$ to obtain ${z}_{t}$ and new state ${s}_{t} = \left( {{\mathrm{{CLOSE}}}^{0},{\mathrm{{OPEN}}}^{0},{\mathrm{{REST}}}^{0}}\right)$ ;
+        3. Roll-out trajectory ${\tau }_{ij}$ : for $k = 1,\ldots ,{t}_{\tau }$ do
+            1. Update ${s}_{t + k - 1}$ using ${\pi }_{\operatorname{mix}}$ to get new state ${s}_{t + k}$ and new fringe state ${\mathrm{{OPEN}}}^{k}$ ;
+            2. Obtain new fringe nodes ${\mathcal{V}}_{\text{ new }} = {\mathrm{{OPEN}}}^{k} \smallsetminus {\mathrm{{OPEN}}}^{k - 1}$ ;
+            3. Update trajectory ${\tau }_{ij} \leftarrow {\tau }_{ij} \cup \left\{ \left( {{\mathcal{V}}_{\text{ new }},{h}^{ * }\left( {{s}_{t + k},{\mathcal{V}}_{\text{ new }},{v}_{g}}\right) }\right) \right\}$ ;
+        4. Update dataset $\mathcal{D} \leftarrow \mathcal{D} \cup \left\{ \left( {{\tau }_{ij},{z}_{t}}\right) \right\}$ or $\mathcal{D} \cup \left\{ \left( {{\tau }_{ij},0}\right) \right\}$ ;
+    6. Train ${h}_{{\theta }_{i}}$ using TBTT on each $\tau \in \mathcal{D}$ to get ${h}_{{\theta }_{i + 1}}$ ;
+4. return the best performing ${h}_{{\theta }_{i}}$ on validation;
+
+§ 4.1 LEARNING ALGORITHM & ARCHITECTURE
+
+Imitation learning algorithm. In Algorithm 1, we present the pseudo-code of the IL algorithm used to train our heuristic models (Figure 3). The high-level idea of our algorithm is that we aggregate trajectories of search traces (i.e., sequences of new fringe nodes) and use truncated backpropagation through time to optimize ${h}_{\theta }$ after each data-collection step. In particular, after sampling a graph $\mathcal{G}$ and a search problem ${v}_{s},{v}_{g}$ , we use our greedy learned policy ${\pi }_{\theta }$ induced by ${h}_{\theta }$ to roll in for $t \sim \mathcal{U}\left( {0,\ldots ,T - {t}_{\tau }}\right)$ expansions, where $T$ is the episode time horizon, and ${t}_{\tau }$ is the roll-out length. From our roll-in, we obtain a new state $s = \left( {{\mathrm{{CLOSE}}}^{0},{\mathrm{{OPEN}}}^{0},{\mathrm{{REST}}}^{0}}\right)$ and an initial memory state ${z}_{t}$ . After our roll-in, we roll out for ${t}_{\tau }$ steps using our mixture policy ${\pi }_{mix}$ , which is obtained by probabilistically blending ${\pi }_{\theta }$ and the greedy best-first policy ${\pi }^{ * }$ induced by the oracle heuristic. In a roll-out, we collect sequences of new fringe nodes, together with their ground-truth distances to the goal ${v}_{g}$ , given by ${h}^{ * }$ . Once the roll-out is complete, we append the obtained trajectory and the initial memory state to the dataset for the subsequent optimization using truncated backpropagation through time. Further analysis of the trade-offs between using rolled-in states ${z}_{t}$ or zeroed-out states for training can be found in the supplementary material.
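+
+A minimal sketch of one roll-out expansion under the mixture policy, read here as sampling which policy to follow at each step with probability $\beta$ ; this reading and the names below (`select_expansion`, `h_star_value`, `h_theta_value`) are assumptions for illustration only.
+
+```python
+import random
+
+def select_expansion(open_nodes, v_g, h_theta_value, h_star_value, beta, psi):
+    """Pick the next node to expand under the mixture policy pi_mix."""
+    if random.random() < beta:
+        # oracle policy pi*: expand the OPEN node with minimal true distance to v_g
+        return min(open_nodes, key=lambda v: h_star_value(v, v_g))
+    # learned policy pi_theta: expand the OPEN node with minimal predicted distance
+    return min(open_nodes, key=lambda v: h_theta_value(psi, v, v_g))
+```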
+
+Note that we could also use supervised learning-based approaches to sample a fixed dataset of $\left( {{v}_{s},{v}_{g},{h}^{ * }\left( {s,{v}_{s},{v}_{g}}\right) }\right)$ 3-tuples and train a model to predict node distances conditioned on their features. However, our experiments in Section 5 demonstrate that ignoring the non-i.i.d. nature of heuristic search negatively impacts model performance, with supervised learning-based methods performing up to ${40} \times$ worse.
+
+Recurrent GNN architecture. In each forward pass, ${h}_{\theta }$ obtains a set of new fringe nodes ${\mathcal{V}}_{\text{ new }}$ , the goal node ${v}_{g}$ , and the memory ${z}_{t}$ at time step $t$ . We represent each node in ${\mathcal{V}}_{\text{ new }}$ using its features ${x}_{i} \in {\mathbb{R}}^{{D}_{v}}$ , and likewise the goal node ${v}_{g}$ using its features ${x}_{g} \in {\mathbb{R}}^{{D}_{v}}$ . Further, for each $i \in {\mathcal{V}}_{\text{ new }}$ , we uniformly sample a bounded set of at most $n \in {\mathbb{N}}_{ \geq 0}$ nodes from the 1-hop neighborhood of $i$ , calling this set ${\mathcal{N}}_{i}$ , with $\left| {\mathcal{N}}_{i}\right| \leq n$ . This sampling step produces a set of neighboring node features, where each $j \in {\mathcal{N}}_{i}$ has features ${x}_{j} \in {\mathbb{R}}^{{D}_{v}}$ , and corresponding edge features ${e}_{ij} \in {\mathbb{R}}^{{D}_{e}}$ .
+
+Figure 3: This figure demonstrates the core idea behind our IL algorithm. We present the roll-in phase on the left-hand side, where our policy is rolled in for $t$ steps to obtain state ${s}_{t}$ and embedding ${z}_{t}$ . On the right-hand side, we show the trajectory collection and training steps, where we aggregate the trajectory for downstream training (green) and use truncated backpropagation through time on the collected dataset (red).
+
+${h}_{\theta }$ forward pass. Algorithm 2 presents a single forward pass of ${h}_{\theta }$ . The forward pass outputs predicted distances of the new fringe nodes to the goal ${\widehat{h}}_{i}$ , together with an updated memory digest ${z}_{t + 1}$ . In Algorithm 2, $f,\phi ,\gamma ,\operatorname{GRU}$ [45], and MLP are parameterized differentiable functions, with $\phi ,\gamma$ representing the update and message functions [46] of a graph neural network, respectively.
+
+Algorithm 2: Heuristic function $\left( {h}_{\theta }\right)$ forward pass
+
+1. Obtain ${x}_{i},{x}_{j},{e}_{ij},{x}_{g},{z}_{t}$ ;
+2. ${x}_{i} \leftarrow f\left( {{x}_{i},{x}_{g},{D}_{EUC}\left( {{x}_{i},{x}_{g}}\right) ,{D}_{COS}\left( {{x}_{i},{x}_{g}}\right) }\right)$ ;
+3. ${x}_{j} \leftarrow f\left( {{x}_{j},{x}_{g},{D}_{EUC}\left( {{x}_{j},{x}_{g}}\right) ,{D}_{COS}\left( {{x}_{j},{x}_{g}}\right) }\right)$ ;
+4. ${g}_{i} \leftarrow \phi \left( {{x}_{i},{ \oplus }_{j \in {\mathcal{N}}_{i}}\gamma \left( {{x}_{i},{x}_{j},{e}_{ij}}\right) }\right)$ ;
+5. ${g}_{i}^{\prime },{z}_{i,t + 1} \leftarrow \operatorname{GRU}\left( {{g}_{i},{z}_{t}}\right)$ ;
+6. ${z}_{t + 1} \leftarrow \overline{{z}_{i,t + 1}}$ ;
+7. ${\widehat{h}}_{i} \leftarrow \operatorname{MLP}\left( {{g}_{i}^{\prime },{x}_{g}}\right)$ ;
+8. return ${\widehat{h}}_{i},{z}_{t + 1}$ ;
+
+In our forward pass, using the function $f$ , we first project ${x}_{i},{x}_{j}$ into a node embedding space, together with the goal features ${x}_{g}$ , and their Euclidean $\left( {D}_{EUC}\right)$ and cosine distances $\left( {D}_{COS}\right)$ . After that, using a 1-layer GNN, we perform a single convolution over each ${x}_{i}$ and the corresponding neighborhood ${\mathcal{N}}_{i}$ , to obtain ${g}_{i}$ . The specific GNN choice is a design decision left to the practitioner, and further analysis of GNN choices can be found in Appendix D. Our graph convolution processing step allows us to easily incorporate edge features and work with variable sizes of ${\mathcal{N}}_{i}$ . After the graph convolution, we apply the GRU module over each embedding ${g}_{i}$ to obtain hidden states ${z}_{i,t + 1}$ , and new embeddings ${g}_{i}^{\prime }$ . We compute the sample mean of ${z}_{i,t + 1}$ for each node $i \in {\mathcal{V}}_{\text{ new }}$ to obtain a new hidden state ${z}_{t + 1}$ , and process ${g}_{i}^{\prime }$ with ${x}_{g}$ using an MLP to compute the distances between the graph nodes.
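+
+The sketch below gives a simplified PyTorch rendering of this forward pass, with a mean aggregator standing in for the DeeperGCN convolution used in our experiments; all module names and dimensions are illustrative rather than the exact implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class HeuristicNet(nn.Module):
+    """Simplified sketch of the recurrent heuristic h_theta (cf. Algorithm 2)."""
+
+    def __init__(self, d_v, d_e, hidden=128, mem=64):
+        super().__init__()
+        self.f = nn.Sequential(nn.Linear(2 * d_v + 2, hidden), nn.LeakyReLU())         # projection f
+        self.msg = nn.Sequential(nn.Linear(2 * hidden + d_e, hidden), nn.LeakyReLU())  # message gamma
+        self.upd = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.LeakyReLU())        # update phi
+        self.gru = nn.GRUCell(hidden, mem)
+        self.mlp = nn.Sequential(nn.Linear(mem + d_v, hidden), nn.LeakyReLU(),
+                                 nn.Linear(hidden, 1))
+
+    def embed(self, x, x_g):
+        # combine node features with goal features and their Euclidean / cosine distances
+        euc = torch.norm(x - x_g, dim=-1, keepdim=True)
+        cos = torch.cosine_similarity(x, x_g, dim=-1).unsqueeze(-1)
+        return self.f(torch.cat([x, x_g, euc, cos], dim=-1))
+
+    def forward(self, x_new, x_nbrs, e_nbrs, x_g, z_t):
+        # x_new: [B, d_v] new fringe nodes; x_nbrs: [B, n, d_v]; e_nbrs: [B, n, d_e];
+        # x_g: [d_v] goal features; z_t: [1, mem] memory digest
+        B, n, _ = x_nbrs.shape
+        x_g_b = x_g.view(1, -1).expand(B, -1)
+        xi = self.embed(x_new, x_g_b)
+        xj = self.embed(x_nbrs, x_g.view(1, 1, -1).expand(B, n, -1))
+        msgs = self.msg(torch.cat([xi.unsqueeze(1).expand(-1, n, -1), xj, e_nbrs], dim=-1))
+        gi = self.upd(torch.cat([xi, msgs.mean(dim=1)], dim=-1))   # one graph convolution (mean agg.)
+        z_i = self.gru(gi, z_t.expand(B, -1))                      # per-node hidden states z_{i,t+1}
+        z_next = z_i.mean(dim=0, keepdim=True)                     # permutation-invariant memory update
+        h_hat = self.mlp(torch.cat([z_i, x_g_b], dim=-1)).squeeze(-1)
+        return h_hat, z_next
+```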
+
+Permutation invariant ${\mathcal{V}}_{\text{ new }}$ embedding. There is a trade-off between processing new fringe nodes in batch, as in Algorithm 2, and processing them sequentially. Namely, when we process the nodes in batch, we do not use the in-batch observations to predict batch node values, which means that ${z}_{t}$ is slightly outdated. On the other hand, in PHIL, batch processing allows us to compute the heuristic values of all $v \in {\mathcal{V}}_{\text{ new }}$ in parallel on a GPU and preserves the memory's permutation invariance with respect to nodes in ${\mathcal{V}}_{\text{ new }}$ . That is, because our observations are nodes and edges of a graph, the respective observation ordering usually does not contain inductive biases useful for predictions, which means that we can apply a permutation invariant operator such as the mean of all new states ${z}_{i,t + 1}$ to obtain an aggregated updated state. This approach provides additional scalability, as we can process values in parallel, and PHIL does not have to infer permutation invariance in ${\mathcal{V}}_{new}$ from data.
+
+Runtime complexity. Since $\forall i \in {\mathcal{V}}_{\text{ new }} : \left| {\mathcal{N}}_{i}\right| \leq n$ , Algorithm 2 together with neighborhood sampling runs in up to $n{c}_{1} + \left( {n + 1}\right) {c}_{2}$ operations per node $i \in {\mathcal{V}}_{\text{ new }}$ , which is $\mathcal{O}\left( 1\right)$ with respect to the size of the graph. Here, ${c}_{1}$ is the maximal number of operations associated with evaluating a node, such as performing robot collision checks in dynamically constructed graphs, and ${c}_{2}$ is the maximal count of total model operations (e.g., $f$ and $\gamma$ operations) on the node set $\{ i\} \cup {\mathcal{N}}_{i}$ . In general, we expect to learn a better search heuristic with increasing $n$ (see Appendix D for ablations), but in some use cases, ${c}_{1}$ may dominate the overall complexity, which means the hyperparameter $n$ helps practitioners tune the trade-off between constant factors and search effort minimization.
+
+§ 5 EXPERIMENTS
+
+In our experiments, we evaluate PHIL both on benchmark heuristic learning datasets [1] (Section 5.1) as well as on a diverse set of graph datasets (Section 5.2). Finally, we show that PHIL can be applied to efficient planning in the context of drone flight (Section 5.3). Our main goal is to assess how PHIL compares to baseline methods in terms of the number of expansions necessary before the goal node is reached. Please refer to the supplementary material for information about baselines, an ablation study, and additional experiment details.
+
+§ 5.1 HEURISTIC SEARCH IN GRIDS
+
+In this section, we evaluate PHIL on 8 datasets of ${200} \times {200}$ 8-connected grid graphs by Bhardwaj et al. [1]. These datasets present challenging obstacle configurations for naive greedy planning heuristics, especially when ${v}_{s}$ is in the bottom-left of the grid and ${v}_{g}$ in the top-right. Each dataset contains 200 training graphs, 70 validation graphs, and 100 test graphs. Example graphs from each dataset can be found in Table 1.
+
+Figure 4: Example of PHIL escaping local search minima.
+
+We train PHIL with a hyperparameter configuration of $T = {128}$ , ${t}_{\tau } = {32},{\beta }_{0} = {0.7},n = 8$ , and using rolled-in ${z}_{t}$ states as initial states for training. We use a 3-layer MLP of width 128 with LeakyReLU activations, followed by a DeeperGCN [47] graph convolution with softmax aggregation. Our memory's embedding dimensionality is 64. See Appendix C for an overview of our baselines and datasets.
+
+| Dataset | Graph Examples | SAIL | SL | CEM | QL | ${h}_{euc}$ | ${h}_{man}$ | A* | MHA* | BFWS | Neural A* | PHIL |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Alternating gaps | | 0.039 | 0.432 | 0.042 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.34 | 0.546 | 0.024 |
+| Single Bugtrap | | 0.158 | 0.214 | 0.057 | 1.000 | 0.184 | 0.192 | 1.000 | 0.286 | 0.099 | 0.394 | 0.077 |
+| Shifting gaps | | 0.104 | 0.464 | 1.000 | 1.000 | 0.506 | 0.589 | 1.000 | 0.804 | 0.206 | 0.563 | 0.027 |
+| Forest | | 0.036 | 0.043 | 0.048 | 0.121 | 0.041 | 0.043 | 1.000 | 0.075 | 0.039 | 0.399 | 0.027 |
+| Bugtrap+Forest | | 0.147 | 0.384 | 0.182 | 1.000 | 0.410 | 0.337 | 1.000 | 3.177 | 0.149 | 0.651 | 0.135 |
+| Gaps+Forest | | 0.221 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.401 | 0.580 | 0.039 |
+| Mazes | | 0.103 | 0.238 | 0.479 | 0.399 | 0.185 | 0.171 | 1.000 | 0.279 | 0.095 | 1.000 | 0.069 |
+| Multiple Bugtraps | | 0.479 | 0.480 | 1.000 | 0.835 | 0.648 | 0.617 | 1.000 | 0.876 | 0.169 | 0.331 | 0.136 |
+
+Table 1: The number of expanded graph nodes of PHIL with respect to SAIL. We can observe that out of all baselines, SAIL performs best. PHIL outperforms SAIL by 58.5% on average over all datasets, with a maximal search effort reduction of ${82.3}\%$ in the Gaps+Forest dataset.
+
+Discussion. As we can see in Table 1, PHIL outperforms the best baseline (SAIL) on all datasets, with an average reduction of ${58.5}\%$ in the number of nodes explored before ${v}_{g}$ is found. Qualitatively, observing Figure 5, we can attribute these results to PHIL's ability to reduce the redundancy in explored nodes during a search. Further, PHIL is also capable of escaping local minima, as illustrated in Figure 4. However, note that we occasionally observe failure cases in practice, where PHIL gets stuck in a bugtrap-like structure. We discuss possible remedies and opportunities for future work in the supplementary material.
+
+Runtime & convergence speed. PHIL converges in up to $N = {36}$ iterations, with $m = 1,{t}_{\tau } = {32}$ (i.e., after observing fewer than $N * {t}_{\tau } * \max \left( \left| {\mathcal{V}}_{\text{ new }}\right| \right) \approx 9{,}216$ shortest path distances, where we take $\max \left( \left| {\mathcal{V}}_{\text{ new }}\right| \right) = 8$ as the maximal size of ${\mathcal{V}}_{\text{ new }}$ ). According to figures reported in [1], this is approximately $5 \times$ less data than it takes for SAIL to converge.
+
+Figure 5: In each image pair of this figure, we provide a qualitative comparison with the SAIL method. In particular, we show comparisons on the Shifting gaps, Gaps+Forest, Mazes, and Forest datasets. We can observe that PHIL (right) learns the appropriate heuristics for the given dataset and makes fewer redundant expansions than SAIL (left).
+
+§ 5.2 SEARCH IN REAL-LIFE GRAPHS OF DIFFERENT STRUCTURES
+
+In this experiment, our goal is to demonstrate the general applicability of PHIL to various graphs. We train PHIL on 4 different groups of graph datasets: citation networks, biological networks, abstract syntax trees (ASTs), and road networks. For citation networks and road networks, we use the same graph for training and evaluation, with 100 random ${v}_{s},{v}_{g}$ pairs for testing. For biological networks and ASTs, we typically use train/validation/test splits of 80/10/10, and for the OGB [48] datasets, we use the provided splits.
+
+| Group | Dataset | $\left| \mathcal{D}\right|$ | $\left| \bar{\mathcal{V}}\right|$ | $\left| \overline{\mathcal{E}}\right|$ | SL | A* | ${h}_{euc}$ | BFS | SAIL | BFWS | PHIL |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Citation Networks | Cora (Sen et al. [49]) | 1 | 2,708 | 5,429 | 2.201 | 2.067 | 1.000 | 4.001 | 0.669 | 1.378 | 0.475 |
+| Citation Networks | PubMed (Sen et al. [49]) | 1 | 19,717 | 44,338 | 2.157 | 2.983 | 1.000 | 3.853 | 1.196 | 1.000 | 0.745 |
+| Citation Networks | CiteSeer (Sen et al. [49]) | 1 | 3,327 | 4,732 | 1.636 | 1.487 | 1.000 | 2.190 | 1.062 | 0.951 | 0.599 |
+| Citation Networks | Coauthor (cs) (Schur et al. [50]) | 1 | 18,333 | 81,894 | 1.571 | 1.069 | 1.000 | 2.820 | 1.941 | 1.026 | 0.835 |
+| Citation Networks | Coauthor (physics) (Schur et al. [50]) | 1 | 34,493 | 247,962 | 4.076 | 1.081 | 1.000 | 4.523 | - | 1.012 | 0.964 |
+| Biological Networks | OGBG-Molhiv (Hu et al. [48]) | 41,127 | 25.5 | 27.5 | 1.086 | 1.065 | 1.000 | 1.267 | 1.104 | 1.146 | 1.016 |
+| Biological Networks | PPI (Zitnik et al. [51]) | 24 | 2,372.67 | 34,113.16 | 0.772 | 0.831 | 1.000 | 5.618 | 1.746 | 3.941 | 0.658 |
+| Biological Networks | Proteins (Full) (Morris et al. [52]) | 1,113 | 39.06 | 72.82 | 0.995 | 0.997 | 1.000 | 2.645 | 0.891 | 0.966 | 0.831 |
+| Biological Networks | Enzymes (Morris et al. [52]) | 600 | 32.63 | 62.14 | 1.073 | 1.007 | 1.000 | 1.358 | 1.036 | 0.992 | 0.757 |
+| ASTs | OGBG-Code2 (Hu et al. [48]) | 452,741 | 125.2 | 124.2 | 1.196 | 1.013 | 1.000 | 1.267 | 1.029 | 0.817 | 1.219 |
+| Road Networks | OSMnx - Modena (Boeing [53]) | 1 | 29,324 | 38,309 | 2.904 | 3.085 | 1.000 | 3.493 | 1.182 | 0.997 | 0.489 |
+| Road Networks | OSMnx - New York (Boeing [53]) | 1 | 54,128 | 89,618 | 39.424 | 36.529 | 1.000 | 63.352 | 1.583 | 1.013 | 0.962 |
+
+Table 2: Comparison of PHIL with baseline approaches on 4 groups of datasets: citation networks, biological networks, abstract syntax trees, and road networks. "-" denotes exceeding a 4-day training time limit. We can observe that, on average across all datasets, PHIL outperforms the best baseline per dataset by 13.4%. Excluding the OGBG datasets, this number becomes 19.5%.
+
+As in Section 5.1, our MLP has four layers of width 128 with LeakyReLU activations, and we use a DeeperGCN [47] graph convolution with softmax aggregation. The node and edge features are those provided in each dataset, except for a few minor modifications discussed in Appendix A and Appendix C. As a learning-based baseline, we train an MLP of depth 5 and width 256 using supervised learning (SL).
+
+Discussion. The results presented in Table 2 suggest that PHIL can learn superior search heuristics compared with baseline methods, outperforming the top baseline per dataset in terms of visited nodes during a search by ${13.4}\%$ on average. Two datasets where PHIL fell short compared to other baselines are OGBG-Molhiv and OGBG-Code2. The OGBG-Code2 dataset adopts a project split [54] and OGBG-Molhiv adopts a scaffold split [55], both of which ensure that graphs of different structure are present in the training and test sets. Although PHIL improved upon uninformed search (BFS) on the OGB datasets, structural graph consistency is explicitly discouraged by the above-mentioned OGBG splits. Without the OGBG datasets, PHIL improves on the top baselines per dataset by ${19.5}\%$ on average, and upon the Euclidean node feature heuristic $\left( {h}_{\text{ euc }}\right)$ by ${20.4}\%$ . Note that we trained PHIL for up to $N = {60}$ iterations, so in the single graph setting it only encountered a small subset of the pathfinding problems and had to generalize to learn useful heuristics. Even in Cora, the $\left| \mathcal{D}\right| = 1$ dataset with the fewest nodes, PHIL observed roughly 6,000 node distances during training, which is less than ${0.2}\%$ of the total distances in the Cora graph.
+
+§ 5.3 PLANNING FOR DRONE FLIGHT
+
+In our final experiment, we use PHIL to plan collision-free paths in a practical drone flight use case within an indoor environment. We built our environment using the CoppeliaSim simulator [56] and the Ivy framework [57]. Figure 6 presents the environment, which we refer to as room adversarial in Table 3. For more details about each environment, please refer to the supplementary material. We discretize the environments into 3D grid graphs of size ${50} \times {50} \times {25}$ , and randomly remove 5 sub-graphs of size $5 \times 5 \times 5$ both during training and testing, thereby simulating real-life planning scenarios with random obstacles. The hyperparameter configuration and the specific architecture we utilize are equivalent to Section 5.1, but with $n = 4$ . The node features are 3D grid coordinates, and the baselines include supervised learning (SL), ${h}_{euc}$ , A*, and BFS, as in Sections 5.1 and 5.2. In Table 3, we report the ratio of expanded nodes with respect to ${h}_{euc}$ .
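+
+As an illustration of this discretization, the following sketch builds a 6-connected 3D grid graph with randomly removed cubic sub-graphs; the parameter values mirror the description above, while the construction details (e.g., 6-connectivity, sampling of obstacle corners) are assumptions.
+
+```python
+import itertools
+import random
+
+def build_room_graph(dims=(50, 50, 25), n_obstacles=5, obstacle_size=5, seed=0):
+    """3D grid graph with randomly removed cubic sub-graphs acting as obstacles."""
+    rng = random.Random(seed)
+    nodes = set(itertools.product(*(range(d) for d in dims)))
+    # carve out cubic obstacles by discarding their nodes
+    for _ in range(n_obstacles):
+        corner = [rng.randrange(d - obstacle_size) for d in dims]
+        for offset in itertools.product(range(obstacle_size), repeat=3):
+            nodes.discard(tuple(c + o for c, o in zip(corner, offset)))
+    # 6-connected adjacency over the remaining free nodes
+    adj = {v: [] for v in nodes}
+    for v in nodes:
+        for axis in range(3):
+            for step in (-1, 1):
+                u = list(v); u[axis] += step; u = tuple(u)
+                if u in adj:
+                    adj[v].append(u)
+    return adj
+```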
+
+Figure 6: This figure illustrates the room adversarial environment with an example planning problem (red) and the expanded graph by PHIL (blue).
+
+Video demo. We provide a video demonstration of PHIL running in room adversarial: https://cutt.ly/eniu5ax.
+
+| Dataset | SL | A* | ${h}_{euc}$ | BFS | SAIL | BFWS | PHIL | Shortest path |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Room simple | 1.124 | 76.052 | 1.000 | 291.888 | 0.973 | 1.286 | 0.785 | 0.782 |
+| Room adversarial | 2.022 | 67.215 | 1.000 | 238.768 | 0.944 | 1.583 | 0.895 | 0.853 |
+
+Table 3: Results of PHIL in the context of planning for indoor UAV flight. PHIL outperforms all baselines in both the room simple and room adversarial environments while remaining close performance-wise to the optimal number of expansions.
+
+Discussion. As we can observe in Table 3, PHIL outperforms all baselines in both environments. Interestingly, PHIL expands only approximately 0.3% more nodes than the minimum possible in the simple room and ${4.9}\%$ more in the adversarial room. The same figures for the greedy method $\left( {h}_{euc}\right)$ are ${27.8}\%$ and ${17.2}\%$ , respectively. These results indicate that PHIL is capable of learning planning strategies that are close to optimal in both simple and adversarial graphs, while the performance of naive heuristics degrades.
+
+§ 5.4 RUNTIME ANALYSIS
+
+We summarize test run-times of different approaches in Appendix G. PHIL runs 57.9% faster than BFWS and 32.2% faster than SAIL, and is only moderately slower than traditional A* (34.7%) and ${h}_{man}$ (18.3%). Although Neural A* is ${71.0}\%$ faster than PHIL because it casts the whole search process into matrix operations on images, it cannot be employed in a generic search setting.
+
+§ 6 CONCLUSION
+
+In our work, we consider the problem of learning to search for feasible paths in graphs efficiently. We propose a model and a training procedure to learn search heuristics that can be easily deployed across diverse graphs, with tunable trade-off parameters between constant factors and performance. Our results demonstrate that PHIL outperforms current state-of-the-art approaches and can be applied to various graphs with practical use cases.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..0fa1b59cca183f0d4f8b951356c2e1126e577d2a
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,1001 @@
+# Graph Neural Network with Local Frame for Molecular Potential Energy Surface
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Modeling molecular potential energy surfaces is of pivotal importance in science. Graph Neural Networks have shown great success in this field. However, their message passing schemes need special designs to capture geometric information and fulfill symmetry requirements such as rotation equivariance, leading to complicated architectures. To avoid these designs, we introduce a novel local frame method to molecule representation learning and analyze its expressivity. Projected onto a frame, equivariant features like 3D coordinates are converted to invariant features, so that we can capture geometric information with these projections and decouple the symmetry requirement from GNN design. Theoretically, we prove that given non-degenerate frames, even ordinary GNNs can encode molecules injectively and reach maximum expressivity with coordinate projection and frame-frame projection. In experiments, our model uses a simple ordinary GNN architecture yet achieves state-of-the-art accuracy. The simpler architecture also leads to higher scalability. Our model takes only about ${30}\%$ of the inference time and ${10}\%$ of the GPU memory of the most efficient baselines.
+
+## 1 Introduction
+
+Prediction of molecular properties is widely used in fields such as material searching, drug designing, and understanding chemical reactions [1]. Among properties, potential energy surface (PES) [2], the relationship between the energy of a molecule and its geometry, is of pivotal importance as it can determine the dynamics of molecular systems and many other properties. Many computational chemistry methods have been developed for the prediction, but few can achieve both high precision and scalability.
+
+In recent years, machine learning (ML) methods have emerged, which are both accurate and efficient. Graph Neural Networks (GNNs) are promising among these ML methods. They have improved continuously [3-10] and achieved state-of-the-art performance on many benchmark datasets. Compared with popular GNNs used in other graph tasks [11], these models need special designs, as molecules are more than a graph composed of merely nodes and edges. Atoms are in the continuous 3D space, and the prediction targets like energy are sensitive to the coordinates of atoms. Therefore, GNNs for molecules must include geometric information. Moreover, these models should keep the symmetry of the target properties for generalization. For example, the energy prediction should be invariant to the coordinate transformations in $\mathrm{O}\left( 3\right)$ group, like rotation and reflection.
+
+All existing methods can keep the invariance. Some models $\left\lbrack {4,5,8}\right\rbrack$ use hand-crafted invariant features like distance, angle, and dihedral angle as the input of GNN. Others use equivariant representations, which change with the coordinate transformations. Among them, some $\left\lbrack {6,9,{12}}\right\rbrack$ use irreducible representations of the $\mathrm{{SO}}\left( 3\right)$ group. The other models $\left\lbrack {7,{10}}\right\rbrack$ manually design functions for equivariant and invariant representations. All these methods can keep invariance, but they vary in performance. Therefore, expressivity analysis is necessary. However, the symmetry requirement hinders the application of the existing theoretical framework for ordinary GNNs [13].
+
+By using the local frame, we decouple the symmetry requirement. As shown in Figure 1, our model, namely GNN-LF, first produces a frame (a set of bases of the ${\mathbb{R}}^{3}$ space) equivariant to $\mathrm{O}\left( 3\right)$ transformations. Then it projects the relative positions and frames of neighbor atoms onto the frame as edge features. Therefore, an ordinary GNN with no special design for symmetry can work on the graph with only invariant features. The expressivity of the GNN for molecules can also be proved using a framework for ordinary GNNs [13]. As the GNN needs no special design for symmetry, GNN-LF also has a simpler architecture and, thus, better scalability. Our model achieves state-of-the-art performance on the MD17 and QM9 datasets. It also uses only 30% of the time and 10% of the GPU memory of the fastest baseline on the PES task.
+
+Figure 1: An illustration of our model. One local frame is generated for each atom. Frames are used to transform geometric information into invariant representations. Then an ordinary GNN is applied.
+
+## 2 Preliminaries
+
+Ordinary GNN. Message passing neural network (MPNN) [14] is a common framework for GNNs. For each node, a message passing layer aggregates information from neighbors to update the node representation. The ${k}^{\text{th }}$ layer can be formulated as follows.
+
+$$
+{\mathbf{h}}_{v}^{\left( k\right) } = {\mathrm{U}}^{\left( k\right) }\left( {{\mathbf{h}}_{v}^{\left( k - 1\right) },\mathop{\sum }\limits_{{u \in N\left( v\right) }}{M}^{\left( k\right) }\left( {{\mathbf{h}}_{u}^{\left( k - 1\right) },{e}_{vu}}\right) }\right) \tag{1}
+$$
+
+where ${\mathbf{h}}_{v}^{\left( k\right) }$ is the representation of node $v$ at the ${k}^{\text{th }}$ layer, $N\left( v\right)$ is the set of neighbors of $v$ , ${\mathbf{h}}_{v}^{\left( 0\right) }$ is node $v$ 's feature vector, ${e}_{vu}$ is the feature of edge ${vu}$ , and ${U}^{\left( k\right) },{M}^{\left( k\right) }$ are functions.
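+
+A minimal PyTorch sketch of one such message passing layer, with the update function ${U}^{\left( k\right) }$ and message function ${M}^{\left( k\right) }$ realized as small MLPs; names and dimensions are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+class MessagePassingLayer(nn.Module):
+    """One MPNN layer: h_v <- U(h_v, sum_{u in N(v)} M(h_u, e_vu))."""
+
+    def __init__(self, node_dim, edge_dim, hidden=64):
+        super().__init__()
+        self.message = nn.Sequential(nn.Linear(node_dim + edge_dim, hidden), nn.SiLU())
+        self.update = nn.Sequential(nn.Linear(node_dim + hidden, node_dim), nn.SiLU())
+
+    def forward(self, h, edge_index, e):
+        # h: [N, node_dim]; edge_index: [2, E] with rows (source u, target v); e: [E, edge_dim]
+        src, dst = edge_index
+        msgs = self.message(torch.cat([h[src], e], dim=-1))                   # M(h_u, e_vu)
+        agg = torch.zeros(h.size(0), msgs.size(-1), dtype=h.dtype, device=h.device)
+        agg.index_add_(0, dst, msgs)                                          # sum over neighbors
+        return self.update(torch.cat([h, agg], dim=-1))                       # U(h_v, aggregated)
+```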
+
+Xu et al. [13] provide a theoretical framework for the expressivity of ordinary GNNs. A message passing layer reaches maximum expressivity when it encodes its neighbor nodes injectively. With several message passing layers, an MPNN can learn information about multi-hop neighbors.
+
+Modeling PES. PES is the relationship between molecular energy and geometry. Given a molecule with $N$ atoms, our model takes the kinds of atoms $z \in {\mathbb{Z}}^{N}$ and the 3D coordinates of atoms $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ as input to predict the energy $\widehat{E} \in \mathbb{R}$ of this molecule. It can also predict the force $\widehat{\overrightarrow{F}} = - {\nabla }_{\overrightarrow{r}}\widehat{E} \in {\mathbb{R}}^{N \times 3}$ .
+
+Equivariance. To formalize the symmetry requirement, we define equivariant and invariant functions as in [15].
+
+Definition 2.1. Given a function $h : \mathbb{X} \rightarrow \mathbb{Y}$ and a group $G$ acting on $\mathbb{X}$ and $\mathbb{Y}$ as $\star$ . We say that $h$ is
+
+$$
+G\text{-invariant: if } h\left( {g \star x}\right) = h\left( x\right) ,\;\forall x \in \mathbb{X}, g \in G \tag{2}
+$$
+
+$$
+G\text{-equivariant: if } h\left( {g \star x}\right) = g \star h\left( x\right) ,\;\forall x \in \mathbb{X}, g \in G \tag{3}
+$$
+
+The energy is invariant to the permutation of atoms, coordinates' translations, and coordinates' orthogonal transformations (rotations and reflections). GNN naturally keeps the permutation invariance. As the relative position ${\overrightarrow{r}}_{ij} = {\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j} \in {\mathbb{R}}^{1 \times 3}$ , which is invariant to translation, is used as the input to GNNs, the translation invariance can also be ensured. So we focus on orthogonal transformations. Orthogonal transformations of coordinates form the group $\mathrm{O}\left( 3\right) = \left\{ {Q \in {\mathbb{R}}^{3 \times 3} \mid Q{Q}^{T} = I}\right\}$ , where $I$ is the identity matrix. Representations are considered as functions of $z$ and $\overrightarrow{r}$ , so we can define equivariant and invariant representations.
+
+Definition 2.2. Representation $s$ is called an invariant representation if $s\left( {z,\overrightarrow{r}}\right) = s\left( {z,\overrightarrow{r}{o}^{T}}\right) ,\forall o \in$ $O\left( 3\right) , z \in {\mathbb{Z}}^{N},\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ . Representation $\overrightarrow{v}$ is called an equivariant representation if $\overrightarrow{v}\left( {z,\overrightarrow{r}}\right) {o}^{T} =$ $\overrightarrow{v}\left( {z,\overrightarrow{r}{o}^{T}}\right) ,\forall o \in O\left( 3\right) , z \in {\mathbb{Z}}^{N},\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ .
+
+Invariant and equivariant representations are also called scalar and vector representations respectively in some previous work [7].
+
+Frame is a special kind of equivariant representation. Throughout our theoretical analysis, a frame $\overrightarrow{E}$ is an orthogonal matrix in ${\mathbb{R}}^{3 \times 3}$ , i.e., $\overrightarrow{E}{\overrightarrow{E}}^{T} = I$ . GNN-LF generates a frame ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{3 \times 3}$ for each node $i$ . We will discuss how to generate the frames in Section 5.
+
+In Lemma 2.1, we introduce some basic operations of representations.
+
+## Lemma 2.1.
+
+- Any function of invariant representation $s$ will produce an invariant representation.
+
+- Let $s \in {\mathbb{R}}^{F}$ denote an invariant representation, $\overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ denote an equivariant representation. We define $s \circ \overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ as a matrix whose $\left( {i,j}\right)$ -th element is ${s}_{i}{\overrightarrow{v}}_{ij}$ . When $\overrightarrow{v} \in {\mathbb{R}}^{1 \times 3}$ , we first broadcast it along the first dimension. Then the output is also an equivariant representation.
+
+- Let $\overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ denote an equivariant representation, and $\overrightarrow{E} \in {\mathbb{R}}^{3 \times 3}$ denote an equivariant frame. The projection of $\overrightarrow{v}$ onto $\overrightarrow{E}$ , denoted as ${P}_{\overrightarrow{E}}\left( \overrightarrow{v}\right) \mathrel{\text{:=}} \overrightarrow{v}{\overrightarrow{E}}^{T}$ , is an invariant representation in ${\mathbb{R}}^{F \times 3}$ . For $\overrightarrow{v},{P}_{\overrightarrow{E}}$ is a bijective function. Its inverse ${P}_{\overrightarrow{E}}^{-1}$ converts an invariant representation $s \in {\mathbb{R}}^{F \times 3}$ to an equivariant representation in ${\mathbb{R}}^{F \times 3}$ : ${P}_{\overrightarrow{E}}^{-1}\left( s\right) = s\overrightarrow{E}$ .
+
+- Projection of $\overrightarrow{v}$ to a general equivariant representation ${\overrightarrow{v}}^{\prime } \in {\mathbb{R}}^{{F}^{\prime } \times 3}$ can also be defined. It produces an invariant representation in ${\mathbb{R}}^{F \times {F}^{\prime }},{P}_{{\overrightarrow{v}}^{\prime }}\left( \overrightarrow{v}\right) = \overrightarrow{v}{\overrightarrow{v}}^{\prime T}$ .
+
+Local Environment. Most PES models set a cutoff radius ${r}_{c}$ and encode the local environment of each atom as defined in Definition 2.3.
+
+Definition 2.3. Let ${r}_{ij}$ denote $\begin{Vmatrix}{\overrightarrow{r}}_{ij}\end{Vmatrix}$ . The local environment of atom $i$ is $L{E}_{i} = \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ , the set of invariant atom features ${s}_{j}$ (like atomic numbers) and relative positions ${\overrightarrow{r}}_{ij}$ of atoms $j$ within the sphere centered at $i$ with cutoff distance ${r}_{c}$ , where ${r}_{c}$ is usually a hyperparameter.
+
+In this work, orthogonal transformation of a set/sequence means transforming each element in the set/sequence. For example, an orthogonal transformation $o$ will map $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ to $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}{o}^{T}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ .
+
+## 3 Related work
+
+We classify existing ML models for PES into two classes: manual descriptors and GNNs. GNN-LF outperforms the representative of each kind in experiments.
+
+Manual Descriptor. These models first use manually designed functions with few learnable parameters to convert one molecule to a descriptor vector and then feed the vector into some ordinary ML models like kernel regression [16-18] and neural network [19-21] to produce the prediction. These methods are more scalable and data-efficient than GNNs. However, due to the hard-coded descriptors, they are less accurate and cannot process variable-size molecules or different kinds of atoms.
+
+GNN. These GNNs mainly differ in the way to incorporate geometric information.
+
+Invariant models use rotation-invariant geometric features only. Schutt et al. [3] and Schütt et al. [4] only consider the distance between atoms. Klicpera et al. [5] introduce angular features, and Gasteiger et al. [8] further use dihedral angles. Similar to GNN-LF, the input of the GNN is invariant. However, the features are largely hand-crafted and are not expressive enough, while our projections on frames are learnable and provably expressive. Moreover, as some features are of multiple atoms (for example, angle is a feature of three-atom tuple), the message passing scheme passes messages between node tuples rather than nodes, while GNN-LF uses an ordinary GNN with lower time complexity.
+
+Recent works have also utilized equivariant features, which will change as the input coordinates rotate. Some $\left\lbrack {6,9,{12}}\right\rbrack$ are based on irreducible representations of the ${SO}\left( 3\right)$ group. Though having certain theoretical expressivity guarantees [22], these methods and analyses are based on polynomial approximation. High-order tensors are needed to approximate complex functions like high-order polynomials. However, in implementation, only low-order tensors are used, and these models' empirical performance is not high. Other works $\left\lbrack {7,{10}}\right\rbrack$ model equivariant interactions in Cartesian space using both invariant and equivariant representations. They achieve good empirical performance but have no theoretical guarantees. Different sets of functions must be designed separately for different input and output types (invariant or equivariant representations), so their architectures are also complex. Our work adopts a completely different approach. We introduce $\mathrm{O}\left( 3\right)$ -equivariant frames and project all equivariant features on the frames. The expressivity can be proved using the existing framework [13] and needs no high-order tensors.
+
+"Frame" models. Some of existing methods [23, 24] designed for other tasks also use the term "frame". However, in conclusion, these methods differ significantly from ours in task, theory, and method as follows.
+
+- Most target properties of molecules are $\mathrm{O}\left( 3\right)$ -equivariant or invariant (including energy and force). Our model can fully describe symmetry, while existing models cannot. For example, a molecule and its mirroring must have the same energy, and GNN-LF will produce the same prediction while existing models cannot keep the invariance.
+
+- Our theoretical analysis removes group representation used in [22, 24].
+
+- Existing models use some schemes not learnable to initialize frames and update them. GNN-LF uses a learnable message passing scheme to produce frames and will not update them, leading to simpler architecture and lower overhead.
+
+- Only coordinate projection is used previously, while we add frame-frame projection.
+
+The comparison is detailed in Appendix F.
+
+## 4 How do frames boost expressivity?
+
+Though symmetry imposes constraints on our design, our primary focus is expressivity. Therefore, we only discuss how the frame boosts expressivity in this section. Our methods, implementations, and how our model keeps invariance will be detailed in Section 6 and Appendix J. Throughout this section, we assume the existence of frames, which will be discussed in Section 5.
+
+### 4.1 Decoupling symmetry requirement
+
+Though equivariant representations have been used for a long time, it is still unclear how to transform them ideally. Existing methods $\left\lbrack {7,{10},{15},{25}}\right\rbrack$ either have no theoretical guarantee or tend to use too many parameters. This section asks a fundamental question: can we use invariant representations instead of equivariant ones and keep expressivity?
+
+Given any frame $\overrightarrow{E}$ , the projection ${P}_{\overrightarrow{E}}\left( \overrightarrow{x}\right)$ will contain all the information of the input equivariant feature $\overrightarrow{x}$ , because the inverse projection function can recover $\overrightarrow{x}$ from the projection, ${P}_{\overrightarrow{E}}^{-1}\left( {{P}_{\overrightarrow{E}}\left( \overrightarrow{x}\right) }\right) = \overrightarrow{x}$ . Therefore, we can use ${P}_{\overrightarrow{E}}$ and ${P}_{\overrightarrow{E}}^{-1}$ to change the type (invariant or equivariant representation) of the input and output of any function without information loss.
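+
+A small numerical sketch of this point: rotating an equivariant feature and an equivariant frame by the same orthogonal matrix leaves the projection unchanged, and the inverse projection recovers the feature. All names below are illustrative.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+v = rng.normal(size=(4, 3))                    # equivariant representation (F = 4 vectors)
+E, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # an orthogonal frame, E E^T = I
+Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # an arbitrary O(3) transformation
+
+proj = v @ E.T                                 # P_E(v), an invariant representation
+proj_rot = (v @ Q.T) @ (E @ Q.T).T             # projection after transforming both v and E
+
+assert np.allclose(proj, proj_rot)             # the projection is O(3)-invariant
+assert np.allclose(proj @ E, v)                # the inverse projection recovers v
+```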
+
+Proposition 4.1. Given frame $\overrightarrow{E}$ and any equivariant function $g$ , there exists a function $\widetilde{g} =$ ${P}_{\overrightarrow{E}} \cdot g \cdot {P}_{\overrightarrow{E}}^{-1}$ which takes invariant representations as input and outputs invariant representations, where $\cdot$ is function composition. $g$ can be expressed with $\widetilde{g} : g = {P}_{\overrightarrow{E}}^{-1} \cdot \widetilde{g} \cdot {P}_{\overrightarrow{E}}$ .
+
+We can use a multilayer perceptron (MLP) to approximate the function $\widetilde{g}$ and thus achieve universal approximation for all $\mathrm{O}\left( 3\right)$ -equivariant functions. Proposition 4.1 motivates us to transform equivariant representations into projections in the beginning and then operate fully in the invariant representation space. Invariant representations can also be transformed back into equivariant predictions with the inverse projection operation if necessary.
+
+### 4.2 Projection boosts message passing layer
+
+The previous section discusses how projection decouples the symmetry requirement. This section shows that projections contain rich geometry information. Even ordinary GNNs can reach maximum expressivity with projections on frames, while existing models with hand-crafted invariant features are not expressive enough. The discussion is composed of two parts. Coordinate projection boosts the expressivity of one single message passing layer, and frame-frame projection boosts the whole GNN composed of multiple message passing layers.
+
+Note that in this section, we consider input ${x}_{1},{x}_{2}$ (local environment or the whole molecule) as equal if they can interconvert with some orthogonal transformation $\left( {\exists o \in \mathrm{O}\left( 3\right) , o\left( {x}_{1}\right) = {x}_{2}}\right)$ , because the invariant representations and energy prediction are invariant under $\mathrm{O}\left( 3\right)$ transformation. Therefore, injective mapping and maximum expressivity mean that function can differentiate inputs unequal in this sense.
+
+
+
+Figure 2: The green balls in the figure are the center atoms. We use balls with different colors to represent different kinds of atoms. (a) SchNet cannot distinguish two local environments due to the inability to capture angle. (b) DimeNet cannot distinguish two local environments with the same set of angles. Blue lines form a regular icosahedron and help visualization. The center atom is at the symmetrical center of the icosahedron. (c) Invariant models fail to pass the orientation information, while the projection of frame vectors can solve this problem. For simplicity, we only show one vector (orange) to represent the frame.
+
+Encoding local environment. Just as an MPNN can encode neighbor nodes injectively on a graph, GNN-LF can encode neighbor nodes injectively in 3D space. Other models can also be analyzed from the perspective of encoding local environments. GNNs for PES only collect messages from atoms within the sphere of radius ${r}_{c}$ , so one of their message passing layers is equivalent to encoding the local environments in Definition 2.3. When it maps local environments injectively, a single message passing layer reaches maximum expressivity.
+
+Some popular models are under-expressive. For example, as shown in Figure 2a, SchNet [4] only considers the distance between atoms and neglects the angular information, leading to the inability to differentiate some simple local environments. Moreover, Figure 2b illustrates that though DimeNet [5] adds angular information to message passing, its expressivity is still limited, which may be attributed to the loss of high-order geometric information like dihedral angle.
+
+In contrast, no information loss will happen when we use the coordinates projected on the frame.
+
+Theorem 4.1. There exists a function $\phi$ such that, given a frame ${\overrightarrow{E}}_{i}$ of atom $i$ , $\phi$ encodes the local environment of atom $i$ injectively into atom $i$ 's embedding:
+
+$$
+\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\operatorname{Concatenate}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) }\right) }\right) . \tag{4}
+$$
+
+Theorem 4.1 shows that an ordinary message passing layer can encode local environments injectively with coordinate projection as an edge feature.
+
+Passing messages across local environments. In physics, interaction between distant atoms is usually not negligible. Using one single message passing layer, which encodes atoms within cutoff radius only, leads to loss of such interaction. When using multiple message passing layers, GNN can pass messages between two distant atoms along a path of atoms and thus model the interaction.
+
+However, passing messages in multiple steps may lead to loss of information. For example, in Figure 2c, two molecules are different as a part of the molecule rotates. However, the local environment will not change. So the node representations, the messages passed between nodes, and finally, the energy prediction will not change while two molecules have different energy. This problem will also happen in previous PES models [4, 5]. Loss of information in multi-step message passing is a fundamental and challenging problem even for ordinary GNN [13].
+
+Nevertheless, the solution is simple in this special case. We can eliminate the information loss by frame-frame projection, i.e., projecting ${\overrightarrow{E}}_{j}$ (the frame of atom $j$ ) onto ${\overrightarrow{E}}_{i}$ (the frame of atom $i$ ). For example, in Figure 2c, as the molecule rotates, the frame vectors also rotate, leading to a change in the frame-frame projection, so our model can differentiate the two molecules. We also prove the effectiveness of frame-frame projection in theory.
+
+Theorem 4.2. Let $\mathcal{G}$ denote the graph in which node $i$ represents the atom $i$ and edge ${ij}$ exists iff ${r}_{ij} < {r}_{c}$ , where ${r}_{c}$ is the cutoff radius. Assuming frames exist, if $\mathcal{G}$ is a connected graph whose diameter is $L$ , a GNN with $L$ message passing layers of the form
+
+$$
+\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\text{ Concatenate }\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}}\right) }\right) }\right) \tag{5}
+$$
+
+can encode the whole molecule $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid j \in \{ 1,2,\ldots , n\} }\right\}$ injectively into the embedding of node $i$ .
+
+Figure 3: (a) The left part shows the symmetry of the water molecule, which has a rotation axis. Its equivariant vectors must be parallel to the rotation axis. However, with a frame composed of only one vector, its geometry can be described. The right part shows that with the projection of ${\overrightarrow{r}}_{ij}$ on the frame and the distance between two atoms, the angle $\theta$ and the position of the $j$ atom can be determined. (b) The left part is a molecule with central symmetry. Its global frame will be zero. However, when selected as the center (green), the atom's environment has no central symmetry.
+
+Theorem 4.2 shows that an ordinary GNN can encode the whole molecule injectively with coordinate projection and frame-frame projection as edge features.
+
+In conclusion, when frames exist, even ordinary GNN can encode molecule injectively and thus reach maximum expressivity with coordinate projection and frame-frame projection.
+
+## 5 How to build a frame?
+
+We present our frame generation method after discussing how to use frames, because the generation method's connection to expressivity is less direct. Whatever frame generation method is used, GNN-LF keeps its expressivity as long as the frame does not degenerate. A frame degenerates iff it has fewer than three linearly independent vectors. This section provides one feasible frame generation method.
+
+A straightforward idea is to use the invariant features of each atom, like the atomic number, to produce the frame. However, a function of invariant features can only produce invariant representations rather than equivariant frames. Therefore, we consider producing the frame from the local environment of each atom, which contains equivariant 3D coordinates. In Theorem 5.1, we prove that there exists a function mapping the local environment to an $\mathrm{O}\left( 3\right)$ -equivariant frame.
+
+Theorem 5.1. There exists an $O\left( 3\right)$ -equivariant function $g$ mapping the local environment $L{E}_{i}$ to an equivariant representation in ${\mathbb{R}}^{3 \times 3}$ . The output forms a frame if $\forall o \in O\left( 3\right) , o \neq I, o\left( {L{E}_{i}}\right) \neq L{E}_{i}$ .
+
+The frame produced by the function in Theorem 5.1 will not degenerate if the local environment has no symmetry elements, such as centers of inversion, axes of rotation, or mirror planes.
+
+Building a frame for a symmetric local environment remains a problem in our current implementation but will not seriously hamper our model. Firstly, our model can produce reasonable output even with symmetric input and is provably more expressive than the widely used model SchNet [4] (see Appendix G). Secondly, symmetric molecules are rare and form a zero-measure set. In our two representative real-world datasets, fewer than ${0.01}\%$ of molecules (about ten molecules out of several hundred thousand) are symmetric. Thirdly, symmetric geometry may still be captured with a degenerate frame. As shown in Figure 3a, water is a symmetric molecule, yet we can use a frame with one vector to describe its geometry. Based on node identity features and relational pooling [26], we also propose a scheme in Appendix H to completely solve the expressivity loss caused by degeneration. However, for scalability, we do not use it in GNN-LF.
+
+A message passing layer for frame generation. The existence of the frame generation function is proved in Theorem 5.1. Here we demonstrate how to implement it. There exists a universal framework for approximating $\mathrm{O}\left( 3\right)$ -equivariant functions [15] which can be used to implement the function in Theorem 5.1. For scalability, we use a simplified form of that framework which performs well empirically:
+
+$$
+{\overrightarrow{E}}_{i} = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{g}^{\prime }\left( {{r}_{ij},{s}_{j}}\right) \circ \frac{{\overrightarrow{r}}_{ij}}{{r}_{ij}}, \tag{6}
+$$
+
+where ${g}^{\prime }$ maps invariant features and distance to invariant weights and the entire framework reduces to a message passing process. The derivation is detailed in Appendix B.
+
+Local frame vs global frame. With the message passing framework in Equation 6, an individual frame, called local frame, is produced for each atom. These local frames can also be summed to produce a global frame.
+
+$$
+\overrightarrow{E} = \mathop{\sum }\limits_{{i = 1}}^{n}{\overrightarrow{E}}_{i} \tag{7}
+$$
+
+The global frame can replace local frames and keep the invariance of energy prediction. All previous analysis will still be valid if the frame degeneration does not happen. However, the global frame is more likely to degenerate than local frames. As shown in Figure 3b, the benzene molecule has central symmetry and produces a zero global frame. However, when choosing each atom as the center, the central symmetry is broken, and a non-zero local frame can be produced. We further formalize this intuition and prove that the global frame is more likely to degenerate in Appendix I.
+
+In conclusion, we can generate local frames with a message passing layer.
+
+## 6 GNN with local frame
+
+We formally introduce our GNN with local frame (GNN-LF) model. The whole architecture is detailed in Appendix C. The time and space complexity are $O\left( {Nn}\right)$ , where $N$ is the number of atoms in the molecule, and $n$ is the maximum number of neighbor atoms of one atom.
+
+Notations. Let $F$ denote the hidden dimension. We first convert the input features, coordinates $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ and atomic numbers $z \in {\mathbb{N}}^{N}$ , to a graph. The initial node feature ${s}_{i}^{\left( 0\right) } \in {\mathbb{R}}^{F}$ is an embedding of the atomic number ${z}_{i}$ . Edge ${ij}$ has two features: the edge weight ${w}_{ij}^{\left( e\right) } = \operatorname{cutoff}\left( {r}_{ij}\right)$ (where cutoff means the cutoff function), and the radial basis expansion of the distance ${s}_{ij}^{\left( e\right) } = \operatorname{rbf}\left( {r}_{ij}\right)$ . Edge weight ${w}_{ij}^{\left( e\right) }$ is not necessary for expressivity. However, to ensure that the energy prediction is a smooth function of coordinates, messages passed among atoms must be scaled with ${w}_{ij}^{\left( e\right) }$ [19]. These special functions are detailed in Appendix C.
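+
+To make the input conversion concrete, the following is a minimal PyTorch sketch (not the authors' released code) of how coordinates and atomic numbers could be turned into the graph described above. The function name, the dense $(N, N)$ tensor layout, and the rbf coefficients are illustrative assumptions; the cutoff and rbf formulas follow Equations 49 and 50 in Appendix C.
+
+```python
+import torch
+
+def build_invariant_graph(pos, z, r_cut=5.0, n_rbf=32, hidden=64):
+    """Illustrative sketch: convert coordinates (N, 3) and atomic numbers (N,)
+    into node embeddings and edge features, following the notation of Section 6."""
+    n = pos.shape[0]
+    vec = pos.unsqueeze(0) - pos.unsqueeze(1)                 # r_ij = r_j - r_i, shape (N, N, 3)
+    dist = vec.norm(dim=-1).clamp(min=1e-9)                   # pairwise distances
+    mask = (dist < r_cut) & ~torch.eye(n, dtype=torch.bool)   # neighbors within cutoff, no self loops
+
+    # Edge weight w_ij = cutoff(r_ij): smooth cosine cutoff (Eq. 49).
+    w = 0.5 * (1 + torch.cos(torch.pi * dist / r_cut)) * mask
+
+    # Radial basis expansion s_ij = rbf(r_ij) (Eq. 50); beta/mu values are assumed here.
+    mu = torch.linspace(0.0, 1.0, n_rbf)
+    beta = torch.full((n_rbf,), 10.0)
+    s_e = torch.exp(-beta * (torch.exp(-dist.unsqueeze(-1)) - mu) ** 2)
+
+    # Initial node feature: embedding of the atomic number.
+    emb = torch.nn.Embedding(100, hidden)
+    s0 = emb(z)
+    return s0, vec, dist, mask, w, s_e
+
+# Example usage on a toy 3-atom "molecule".
+pos = torch.randn(3, 3)
+z = torch.tensor([8, 1, 1])
+s0, vec, dist, mask, w, s_e = build_invariant_graph(pos, z)
+```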
+
+Producing frame. The message passing scheme for producing local frames implements Equation (6).
+
+$$
+{\overrightarrow{E}}_{i} = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{w}_{ij}^{\left( e\right) }\left( {{f}_{1}\left( {s}_{ij}^{\left( e\right) }\right) \odot {s}_{j}}\right) \circ \frac{{\overrightarrow{r}}_{ij}}{{r}_{ij}}, \tag{8}
+$$
+
+where ${f}_{1}$ is an MLP. Note that in implementation the frame ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{F \times 3}$ is not restricted to three vectors; the number of vectors equals the hidden dimension, so no extra linear layer is needed to change the hidden dimension. Our theoretical analysis remains valid because a frame in ${\mathbb{R}}^{F \times 3}$ can be considered an ensemble of $\frac{F}{3}$ frames in ${\mathbb{R}}^{3 \times 3}$.
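+
+As a hedged illustration of Equation 8, the snippet below sketches one way the frame-producing message passing could be written with dense PyTorch tensors. The module name, the SiLU MLP for ${f}_{1}$, and the $(N, N)$ edge layout are simplifying assumptions, not the paper's implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class FrameBlock(nn.Module):
+    """Sketch of Eq. 8: E_i = sum_j w_ij * (f1(s_ij) ⊙ s_j) ∘ (r_ij / r_ij)."""
+
+    def __init__(self, n_rbf, hidden):
+        super().__init__()
+        self.f1 = nn.Sequential(nn.Linear(n_rbf, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
+
+    def forward(self, s, s_e, w, vec, dist):
+        # s: (N, F) node features, s_e: (N, N, n_rbf), w: (N, N) edge weights,
+        # vec: (N, N, 3) relative coordinates r_ij, dist: (N, N) distances.
+        unit = vec / dist.clamp(min=1e-9).unsqueeze(-1)           # r_ij / r_ij
+        coeff = w.unsqueeze(-1) * self.f1(s_e) * s.unsqueeze(0)   # (N, N, F) invariant weights
+        # Channel-wise product with the unit vector, summed over neighbors j.
+        frames = torch.einsum('ijf,ijd->ifd', coeff, unit)        # (N, F, 3): one vector per channel
+        return frames
+
+# Toy usage with random tensors (shapes only, not a real molecule).
+N, F, R = 5, 64, 32
+fb = FrameBlock(n_rbf=R, hidden=F)
+E = fb(torch.randn(N, F), torch.randn(N, N, R), torch.rand(N, N),
+       torch.randn(N, N, 3), torch.rand(N, N) + 0.5)
+print(E.shape)  # torch.Size([5, 64, 3])
+```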
+
+Coordinate projection is as follows,
+
+$$
+{d}_{ij}^{1} = \frac{1}{{r}_{ij}}{\overrightarrow{r}}_{ij}{\overrightarrow{E}}_{i}^{T}. \tag{9}
+$$
+
+The projection in implementation is scaled by $\frac{1}{{r}_{ij}}$ to decouple the distance information in ${s}_{ij}^{\left( e\right) }$ .
+
+Frame-frame projection. ${\overrightarrow{E}}_{i}{\overrightarrow{E}}_{j}^{T}$ is a large matrix. Therefore, we only use the diagonal elements of the projection. To keep the expressivity, we transform the frame with two ordinary linear layers.
+
+$$
+{d}_{ij}^{2} = \operatorname{diag}\left( {{W}_{1}{\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}{W}_{2}^{T}}\right) . \tag{10}
+$$
+
+Adding the projections to edge features, we get a graph with invariant features only.
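+
+Both projections reduce to batched inner products. The following is a minimal sketch under the same dense-tensor assumptions as the earlier snippets; `W1` and `W2` stand for the two ordinary linear layers of Equation 10 and are represented here as plain weight matrices.
+
+```python
+import torch
+
+def coordinate_projection(vec, dist, frames):
+    """Eq. 9: d1_ij = (r_ij / r_ij) E_i^T, computed for all pairs."""
+    unit = vec / dist.clamp(min=1e-9).unsqueeze(-1)   # (N, N, 3)
+    return torch.einsum('ijd,ifd->ijf', unit, frames) # (N, N, F)
+
+def frame_frame_projection(frames, W1, W2):
+    """Eq. 10: d2_ij = diag(W1 E_j E_i^T W2^T); only the F diagonal entries are kept."""
+    a = torch.einsum('gf,jfd->jgd', W1, frames)       # W1 E_j, shape (N, F, 3)
+    b = torch.einsum('gf,ifd->igd', W2, frames)       # W2 E_i, shape (N, F, 3)
+    # diag(A B^T)_g = sum_d A[g, d] * B[g, d]; broadcast node j against node i.
+    return torch.einsum('jgd,igd->ijg', a, b)         # (N, N, F)
+
+# Toy usage (shapes only).
+N, F = 5, 64
+frames = torch.randn(N, F, 3)
+vec = torch.randn(N, N, 3)
+dist = vec.norm(dim=-1)
+d1 = coordinate_projection(vec, dist, frames)
+d2 = frame_frame_projection(frames, torch.randn(F, F), torch.randn(F, F))
+```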
+
+GNN working on the invariant graph. The message passing scheme uses the form in Theorem 4.1. Let ${s}_{i}^{\left( l\right) }$ denote the node representation produced by the ${l}^{\text{th}}$ message passing layer, with ${s}_{i}^{\left( 0\right) } = {s}_{i}$.
+
+$$
+{s}_{i}^{\left( l\right) } = \rho \left( {\mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{w}_{ij}^{\left( e\right) }\left( {{f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right) \odot {s}_{j}^{\left( l - 1\right) }}\right) }\right) , \tag{11}
+$$
+
+$$
+{f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right) = {g}_{1}\left( {s}_{ij}^{\left( e\right) }\right) \odot {g}_{2}\left( {{d}_{ij}^{1},{d}_{ij}^{2}}\right) . \tag{12}
+$$
+
+Table 1: Results on the MD17 dataset. Units: energy (E) (kcal/mol) and forces (F) (kcal/mol/Å).
+
+| Molecule | | FCHL | SchNet | DimeNet | GemNet | PaiNN | NequIP | TorchMD | GNN-LF |
+| Aspirin | E | 0.182 | 0.37 | 0.204 | - | 0.167 | - | 0.124 | 0.1342 |
+| | F | 0.478 | 1.35 | 0.499 | 0.2168 | 0.338 | 0.348 | 0.255 | 0.2018 |
+| Benzene | E | - | 0.08 | 0.078 | - | - | - | 0.056 | 0.0686 |
+| | F | - | 0.31 | 0.187 | 0.1453 | - | 0.187 | 0.201 | 0.1506 |
+| Ethanol | E | 0.054 | 0.08 | 0.064 | - | 0.064 | - | 0.054 | 0.0520 |
+| | F | 0.136 | 0.39 | 0.230 | 0.0853 | 0.224 | 0.208 | 0.116 | 0.0814 |
+| Malonaldehyde | E | 0.081 | 0.13 | 0.104 | - | 0.091 | - | 0.079 | 0.0764 |
+| | F | 0.245 | 0.66 | 0.383 | 0.1545 | 0.319 | 0.337 | 0.176 | 0.1259 |
+| Naphthalene | E | 0.117 | 0.16 | 0.122 | - | 0.166 | - | 0.085 | 0.1136 |
+| | F | 0.151 | 0.58 | 0.215 | 0.0553 | 0.077 | 0.097 | 0.060 | 0.0550 |
+| Salicylic acid | E | 0.114 | 0.20 | 0.134 | - | 0.166 | - | 0.094 | 0.1081 |
+| | F | 0.221 | 0.85 | 0.374 | 0.1048 | 0.195 | 0.238 | 0.135 | 0.1005 |
+| Toluene | E | 0.098 | 0.12 | 0.102 | - | 0.095 | - | 0.074 | 0.0930 |
+| | F | 0.203 | 0.57 | 0.216 | 0.0600 | 0.094 | 0.101 | 0.066 | 0.0543 |
+| Uracil | E | 0.104 | 0.14 | 0.115 | - | 0.106 | - | 0.096 | 0.1037 |
+| | F | 0.105 | 0.56 | 0.301 | 0.0969 | 0.139 | 0.173 | 0.094 | 0.0751 |
+| average | rank | 3.93 | 6.63 | 5.38 | 2.00 | 4.36 | 5.25 | 2.25 | 1.75 |
+
+where $\rho$ is an MLP. We further use a filter decomposition design as follows.
+
+The distance information ${s}_{ij}^{\left( e\right) }$ is easier to learn as it has been expanded with a set of bases, so a linear layer ${g}_{1}$ is enough. In contrast, projections need a more expressive MLP ${g}_{2}$ .
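+
+A rough PyTorch sketch of Equations 11 and 12 follows. It assumes the dense edge tensors from the earlier sketches; `g1` (a single linear layer on the distance expansion) and `g2` (a small MLP on the projections) realize the filter decomposition described above, and the same filter module can be reused across layers to implement the filter sharing discussed next.
+
+```python
+import torch
+import torch.nn as nn
+
+class SharedFilter(nn.Module):
+    """Eq. 12: f2 = g1(s_ij^(e)) ⊙ g2(d1_ij, d2_ij). Built once and shared by all layers."""
+
+    def __init__(self, n_rbf, hidden):
+        super().__init__()
+        self.g1 = nn.Linear(n_rbf, hidden)                        # distance part: a linear layer is enough
+        self.g2 = nn.Sequential(nn.Linear(2 * hidden, hidden),    # projection part: a more expressive MLP
+                                nn.SiLU(), nn.Linear(hidden, hidden))
+
+    def forward(self, s_e, d1, d2):
+        return self.g1(s_e) * self.g2(torch.cat([d1, d2], dim=-1))
+
+class InvariantMPLayer(nn.Module):
+    """Eq. 11: s_i^(l) = rho( sum_j w_ij * (f2 ⊙ s_j^(l-1)) )."""
+
+    def __init__(self, hidden):
+        super().__init__()
+        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
+
+    def forward(self, s, filt, w):
+        msg = w.unsqueeze(-1) * filt * s.unsqueeze(0)   # message from j to i, shape (N, N, F)
+        return self.rho(msg.sum(dim=1))                 # aggregate over neighbors j
+
+# Toy usage: the filter is computed once and reused by every message passing layer.
+N, F, R, L = 5, 64, 32, 4
+s, s_e, w = torch.randn(N, F), torch.randn(N, N, R), torch.rand(N, N)
+d1, d2 = torch.randn(N, N, F), torch.randn(N, N, F)
+filt = SharedFilter(R, F)(s_e, d1, d2)
+layers = nn.ModuleList([InvariantMPLayer(F) for _ in range(L)])
+for layer in layers:
+    s = layer(s, filt, w)
+```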
+
+Sharing filters. Generating different filters ${f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right)$ for each message passing layer is time-consuming. Therefore, we share filters between different layers. Experimental results show that sharing filters leads to minor performance loss and significant scalability gain.
+
+## 7 Experiment
+
+In this section, we compare GNN-LF with existing models and conduct an ablation study. We report the mean absolute error (MAE) on the test set (the lower, the better). All our results are averaged over three random splits. Baselines' results are taken from their papers. In tables, the best and the second-best results are shown in bold and underlined, respectively. Experimental settings are detailed in Appendix D.
+
+### 7.1 Modeling PES
+
+We first evaluate GNN-LF for modeling PES on the MD17 dataset [27], which consists of MD trajectories of small organic molecules. GNN-LF is compared with a manual descriptor model: FCHL [18]; invariant models: SchNet [4], DimeNet [5], and GemNet [8]; a model using irreducible representations: NequIP [9]; and models using equivariant representations: PaiNN [7] and TorchMD [10]. The results are shown in Table 1. GNN-LF outperforms all the baselines on $9/{16}$ targets and achieves the second-best performance on the other 7 targets. Our model achieves ${10}\%$ lower loss on average than GemNet, the best baseline. This outstanding performance verifies the effectiveness of the local frame method for modeling PES. Moreover, our model uses fewer parameters and only about 30% of the time and 10% of the GPU memory of the baselines, as shown in Appendix E.
+
+### 7.2 Ablation study
+
+We perform an ablation study to verify our model designs. The results are shown in Table 2.
+
+On average, ablation of frame-frame projection (NoDir2) leads to ${20}\%$ higher MAE, which verifies the necessity of frame-frame projection. The Global column replaces the local frames with the global frame, resulting in 100% higher loss, which verifies local frames' advantage over the global frame. Ablation of filter decomposition (NoDecomp) leads to 9% higher loss, indicating the advantage of separately processing distance and projections. Although using different filters for each message passing layer (NoShare) takes much more computation time (${1.67} \times$) and parameters (${3.55} \times$), it only leads to 0.01% lower loss on average, illustrating that sharing filters does little harm to expressivity.
+
+Table 2: Ablation results on the MD17 dataset. Units: energy (E) (kcal/mol) and forces (F) (kcal/mol/Å). GNN-LF does not use ${d}^{2}$ for some molecules, so the NoDir2 column is empty.
+
+| Molecule | | GNN-LF | NoDir2 | Global | NoDecomp | GNN-LF | NoShare |
+| Aspirin | E | 0.1342 | 0.1435 | 0.2280 | 0.1411 | 0.1342 | 0.1364 |
+| | F | 0.2018 | 0.2799 | 0.6894 | 0.2622 | 0.2018 | 0.1979 |
+| Benzene | E | 0.0686 | 0.0716 | 0.0972 | 0.0688 | 0.0686 | 0.0713 |
+| | F | 0.1506 | 0.1583 | 0.3520 | 0.1499 | 0.1506 | 0.1507 |
+| Ethanol | E | 0.0520 | 0.0532 | 0.0556 | 0.0518 | 0.0520 | 0.0514 |
+| | F | 0.0814 | 0.0930 | 0.1465 | 0.0847 | 0.0814 | 0.0751 |
+| Malonaldehyde | E | 0.0764 | 0.0776 | 0.0923 | 0.0765 | 0.0764 | 0.0790 |
+| | F | 0.1259 | 0.1466 | 0.3194 | 0.1321 | 0.1259 | 0.1210 |
+| Naphthalene | E | 0.1136 | 0.1152 | 0.1276 | 0.1254 | 0.1136 | 0.1168 |
+| | F | 0.0550 | 0.0834 | 0.2069 | 0.0553 | 0.0550 | 0.0547 |
+| Salicylic acid | E | 0.1081 | 0.1087 | 0.1224 | 0.1123 | 0.1081 | 0.1091 |
+| | F | 0.1048 | 0.1328 | 0.2890 | 0.1399 | 0.1048 | 0.1012 |
+| Toluene | E | 0.0930 | 0.0942 | 0.1000 | 0.0932 | 0.0930 | 0.0942 |
+| | F | 0.0543 | 0.0770 | 0.1659 | 0.0695 | 0.0543 | 0.0519 |
+| Uracil | E | 0.1037 | 0.1069 | 0.1075 | 0.1053 | 0.1037 | 0.1042 |
+| | F | 0.0751 | 0.0964 | 0.1901 | 0.0825 | 0.0751 | 0.0754 |
+
+Table 3: Results on the QM9 dataset.
+
+| Target | Unit | SchNet | DimeNet++ | Cormorant | PaiNN | TorchMD | GNN-LF |
+| $\mu$ | D | 0.033 | 0.0297 | 0.038 | 0.012 | 0.002 | 0.013 |
+| $\alpha$ | ${a}_{0}^{3}$ | 0.235 | 0.0435 | 0.085 | 0.045 | 0.01 | 0.0353 |
+| $\epsilon_{\mathrm{HOMO}}$ | meV | 41 | 24.6 | 34 | 27.6 | 21.2 | 23.5 |
+| $\epsilon_{\mathrm{LUMO}}$ | meV | 34 | 19.5 | 38 | 20.4 | 17.8 | 17.0 |
+| ${\Delta \epsilon }$ | meV | 63 | 32.6 | 61 | 45.7 | 38 | 37.1 |
+| $\langle {R}^{2}\rangle$ | ${a}_{0}^{2}$ | 0.073 | 0.331 | 0.961 | 0.066 | 0.015 | 0.037 |
+| ZPVE | meV | 1.7 | 1.21 | 2.027 | 1.28 | 2.12 | 1.19 |
+| ${U}_{0}$ | meV | 14 | 6.32 | 22 | 5.85 | 6.24 | 5.30 |
+| $U$ | meV | 19 | 6.28 | 21 | 5.83 | 6.30 | 5.24 |
+| $H$ | meV | 14 | 6.53 | 21 | 5.98 | 6.48 | 5.48 |
+| $G$ | meV | 14 | 7.56 | 20 | 7.35 | 7.64 | 6.84 |
+| ${C}_{v}$ | cal/mol/K | 0.033 | 0.023 | 0.026 | 0.024 | 0.026 | 0.022 |
+
+### 7.3 Other chemical properties
+
+Though designed for PES, our model can also predict other properties directly. The QM9 dataset [28] consists of ${134}\mathrm{k}$ stable small organic molecules. The task is to predict properties of these molecules from atomic numbers and coordinates. We compare our model with invariant models: SchNet [4] and DimeNet++ [29]; a model using irreducible representations: Cormorant [6]; and models using equivariant representations: PaiNN [7] and TorchMD [10]. Results are shown in Table 3. Our model outperforms all other models on $7/{12}$ tasks and achieves the second-best performance on 4 of the 5 remaining tasks, which illustrates that the local frame method has the potential to be applied to other fields.
+
+## 8 Conclusion
+
+This paper proposes GNN-LF, a simple and effective molecular potential energy surface model. It introduces a novel local frame method to decouple the symmetry requirement and capture rich geometric information. In theory, we prove that even ordinary GNNs can reach maximum expressivity with the local frame method. Furthermore, we propose ways to construct local frames. In experiments, our model outperforms all baselines in both scalability (using only 30% time and 10% GPU memory) and accuracy (10% lower loss). Ablation study also verifies the effectiveness of our designs.
+
+## References
+
+[1] Gunnar Schmitz, Ian Heide Godtliebsen, and Ove Christiansen. Machine learning for potential energy surfaces: An extensive database and assessment of methods. The Journal of Chemical Physics, 150(24):244113, 2019.
+
+[2] Errol G. Lewars. The Concept of the Potential Energy Surface, pages 9-49. Springer International Publishing, 2016.
+
+[3] K. T. Schütt, F. Arbabzadah, S. Chmiela, K.-R. Müller, and A. Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8:13890, 2017.
+
+[4] Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. In Advances in Neural Information Processing Systems 30, pages 991-1001, 2017.
+
+[5] Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. In International Conference on Learning Representations, 2020.
+
+[6] Brandon M. Anderson, Truong-Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. In Advances in Neural Information Processing Systems, pages 14510-14519, 2019.
+
+[7] Kristof Schütt, Oliver T. Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In International Conference on Machine Learning, volume 139, pages 9377-9388, 2021.
+
+[8] Johannes Gasteiger, Florian Becker, and Stephan Günnemann. GemNet: Universal directional graph neural networks for molecules. In Advances in Neural Information Processing Systems, 2021.
+
+[9] Simon L. Batzner, Tess E. Smidt, Lixin Sun, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, and Boris Kozinsky. SE(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. CoRR, abs/2101.03164, 2021.
+
+[10] Philipp Thölke and Gianni De Fabritiis. Equivariant transformers for neural network based molecular potentials. In International Conference on Learning Representations, 2022.
+
+[11] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
+
+[12] Nathaniel Thomas, Tess E. Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. CoRR, abs/1802.08219, 2018.
+
+[13] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
+
+[14] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263-1272, 2017.
+
+[15] Soledad Villar, David W. Hogg, Kate Storey-Fisher, Weichi Yao, and Ben Blum-Smith. Scalars are universal: Equivariant machine learning, structured like classical physics. In Advances in Neural Information Processing Systems, 2021.
+
+[16] Matthias Rupp, Alexandre Tkatchenko, Klaus-Robert Müller, and O. Anatole von Lilienfeld. Fast and accurate modeling of molecular atomization energies with machine learning. Physical Review Letters, 108:058301, 2012.
+
+[17] Stefan Chmiela, Huziel E. Sauceda, Igor Poltavsky, Klaus-Robert Müller, and Alexandre Tkatchenko. sGDML: Constructing accurate and data efficient molecular force fields using machine learning. Computer Physics Communications, 240:38-45, 2019.
+
+[18] Anders S. Christensen, Lars A. Bratholm, Felix A. Faber, and O. Anatole von Lilienfeld. FCHL revisited: Faster and more accurate quantum machine learning. The Journal of Chemical Physics, 152(4):044107, 2020.
+
+[19] Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Physical Review Letters, 98:146401, 2007.
+
+[20] J. S. Smith, O. Isayev, and A. E. Roitberg. ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost. Chemical Science, 8:3192-3203, 2017.
+
+[21] Linfeng Zhang, Jiequn Han, Han Wang, Wissam Saidi, Roberto Car, and Weinan E. End-to-end symmetry preserving inter-atomic potential energy model for finite and extended systems. In Advances in Neural Information Processing Systems, 2018.
+
+[22] Nadav Dym and Haggai Maron. On the universality of rotation equivariant point cloud networks. In International Conference on Learning Representations, 2021.
+
+[23] Shitong Luo, Jiahan Li, Jiaqi Guan, Yufeng Su, Chaoran Cheng, Jian Peng, and Jianzhu Ma. Equivariant point cloud analysis via learning orientations for message passing. In IEEE Conference on Computer Vision and Pattern Recognition, pages 16296-16305, 2021.
+
+[24] Weitao Du, He Zhang, Yuanqi Du, Qi Meng, Wei Chen, Nanning Zheng, Bin Shao, and Tie-Yan Liu. SE(3) equivariant graph neural networks with complete local frames. In International Conference on Machine Learning, 2022.
+
+[25] Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas J. Guibas. Vector neurons: A general framework for SO(3)-equivariant networks. In International Conference on Computer Vision, pages 12180-12189, 2021.
+
+[26] Omri Puny, Matan Atzmon, Heli Ben-Hamu, Edward J. Smith, Ishan Misra, Aditya Grover, and Yaron Lipman. Frame averaging for invariant and equivariant network design. In International Conference on Learning Representations, 2022.
+
+[27] Stefan Chmiela, Alexandre Tkatchenko, Huziel E. Sauceda, Igor Poltavsky, Kristof T. Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. Science Advances, 3(5):e1603015, 2017.
+
+[28] Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande. MoleculeNet: a benchmark for molecular machine learning. Chemical Science, 9:513-530, 2018.
+
+[29] Johannes Gasteiger, Shankari Giri, Johannes T. Margraf, and Stephan Günnemann. Fast and uncertainty-aware directional message passing for non-equilibrium molecules. In Machine Learning for Molecules Workshop, NeurIPS, 2020.
+
+[30] Nimrod Segol and Yaron Lipman. On universal equivariant set networks. In International Conference on Learning Representations, 2020.
+
+[31] Oliver Unke and Markus Meuwly. PhysNet: A neural network for predicting energies, forces, dipole moments and partial charges. Journal of Chemical Theory and Computation, 15(6):3678-3693, 2019.
+
+[32] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In International Conference on Machine Learning, 2021.
+
+[33] Wenbing Huang, Jiaqi Han, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang. Equivariant graph mechanics networks with constraints. In International Conference on Learning Representations, 2022.
+
+[34] Nadav Dym and Haggai Maron. On the universality of rotation equivariant point cloud networks. In International Conference on Learning Representations, 2021.
+
+[35] Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral CNN. In International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR, 2019.
+
+[36] Pim de Haan, Maurice Weiler, Taco Cohen, and Max Welling. Gauge equivariant mesh CNNs: Anisotropic convolutions on geometric graphs. In International Conference on Learning Representations, 2021.
+
+[37] Julian Suk, Pim de Haan, Phillip Lippe, Christoph Brune, and Jelmer M. Wolterink. Mesh convolutional neural networks for wall shear stress estimation in 3D artery models. In STACOM@MICCAI, volume 13131, pages 93-102, 2021.
+
+## A Proofs
+
+Due to repulsive forces, atoms cannot be too close to each other in stable molecules. Therefore, we assume that there exists an upper bound $N$ on the number of neighbor atoms.
+
+### A.1 Proof of Lemma 2.1
+
+Proof. For all functions $g$, invariant representations $s$, and transformations $o \in \mathrm{O}\left( 3\right)$, $g\left( {s\left( {z,\overrightarrow{r}{o}^{T}}\right) }\right) = g\left( {s\left( {z,\overrightarrow{r}}\right) }\right)$. Therefore, $g\left( s\right)$ is an invariant representation.
+
+For all invariant representation $s$ , equivariant representation $\overrightarrow{v}$ , and transformation $o \in \mathrm{O}\left( 3\right)$ , $s\left( {z,\overrightarrow{r}{o}^{T}}\right) \circ \overrightarrow{v}\left( {z,\overrightarrow{r}{o}^{T}}\right) = s\left( {z,\overrightarrow{r}}\right) \circ \left( {\overrightarrow{v}\left( {z,\overrightarrow{r}}\right) {o}^{T}}\right) = \left( {s\left( {z,\overrightarrow{r}}\right) \circ \overrightarrow{v}\left( {z,\overrightarrow{r}}\right) }\right) {o}^{T}$ . Therefore, $s \circ \overrightarrow{v}$ is an equivariant representation.
+
+For all equivariant representations $\overrightarrow{v}$, $\overrightarrow{v}\left( {z,\overrightarrow{r}{o}^{T}}\right) \overrightarrow{E}{\left( z,\overrightarrow{r}{o}^{T}\right) }^{T} = \overrightarrow{v}\left( {z,\overrightarrow{r}}\right) {o}^{T}o\overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{T} = \overrightarrow{v}\left( {z,\overrightarrow{r}}\right) \overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{T}$. ${P}_{\overrightarrow{E}}$ is invertible because ${P}_{\overrightarrow{E}}\left( \overrightarrow{v}\right) \overrightarrow{E} = \overrightarrow{v}{\overrightarrow{E}}^{T}\overrightarrow{E} = \overrightarrow{v}$. For all invariant representations $s \in {\mathbb{R}}^{F \times 3}$, $s\left( {z,\overrightarrow{r}{o}^{T}}\right) \overrightarrow{E}\left( {z,\overrightarrow{r}{o}^{T}}\right) = s\left( {z,\overrightarrow{r}}\right) \overrightarrow{E}\left( {z,\overrightarrow{r}}\right) {o}^{T}$.
+
+Similarly, $\overrightarrow{v}\left( {z,\overrightarrow{r}{o}^{T}}\right) {\overrightarrow{v}}^{\prime }{\left( z,\overrightarrow{r}{o}^{T}\right) }^{T} = \overrightarrow{v}\left( {z,\overrightarrow{r}}\right) {o}^{T}o{\overrightarrow{v}}^{\prime }{\left( z,\overrightarrow{r}\right) }^{T} = \overrightarrow{v}\left( {z,\overrightarrow{r}}\right) {\overrightarrow{v}}^{\prime }{\left( z,\overrightarrow{r}\right) }^{T}$ . Therefore, projection on general equivariant representations can also produce invariant representation.
+
+### A.2 Proof of Proposition 4.1
+
+Proof. Assume that $s$ is an invariant representation.
+
+$$
+\widetilde{g}\left( s\right) = {P}_{\overrightarrow{E}}\left( {g\left( {{P}_{\overrightarrow{E}}^{-1}\left( s\right) }\right) }\right) \tag{13}
+$$
+
+$$
+= g\left( {s{\left( \overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{-1}\right) }^{T}}\right) \overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{T}. \tag{14}
+$$
+
+The representation $\widetilde{g}\left( s\right)$ can be written as a function of $\left( {z,\overrightarrow{r}}\right)$. Then, we have
+
+$$
+\forall o \in \mathrm{O}\left( 3\right) ,\widetilde{g}\left( s\right) \left( {z,\overrightarrow{r}{o}^{T}}\right) = g\left( {s{\left( \overrightarrow{E}{\left( z,\overrightarrow{r}{o}^{T}\right) }^{-1}\right) }^{T}}\right) \overrightarrow{E}{\left( z,\overrightarrow{r}{o}^{T}\right) }^{T} \tag{15}
+$$
+
+$$
+= g\left( {s{\left( \overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{-1}\right) }^{T}{o}^{T}}\right) o\overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{T} \tag{16}
+$$
+
+$$
+= g\left( {s{\left( \overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{-1}\right) }^{T}}\right) {o}^{T}o\overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{T} \tag{17}
+$$
+
+$$
+= g\left( {s{\left( \overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{-1}\right) }^{T}}\right) \overrightarrow{E}{\left( z,\overrightarrow{r}\right) }^{T} \tag{18}
+$$
+
+$$
+= \widetilde{g}\left( s\right) \left( {z,\overrightarrow{r}}\right) . \tag{19}
+$$
+
+Therefore, $\widetilde{g}\left( s\right)$ is also an invariant representation.
+
+### A.3 Proof of Theorem 4.1
+
+We first prove that if the multisets of invariant features and coordinate projections are equal, the corresponding multisets of invariant features and coordinates differ from each other only by an orthogonal transformation.
+
+Lemma A.1. Given two frames ${\overrightarrow{E}}_{1},{\overrightarrow{E}}_{2}$ and two sets of atoms $\left\{ \left( {s}_{1, i},{\overrightarrow{r}}_{1, i}\right) \mid i = 1,2,\ldots , n\right\}$, $\left\{ \left( {s}_{2, i},{\overrightarrow{r}}_{2, i}\right) \mid i = 1,2,\ldots , n\right\}$: if $\left\{ \left( {s}_{1, i},{P}_{{\overrightarrow{E}}_{1}}\left( {\overrightarrow{r}}_{1, i}\right) \right) \mid i = 1,2,\ldots , n\right\} = \left\{ \left( {s}_{2, i},{P}_{{\overrightarrow{E}}_{2}}\left( {\overrightarrow{r}}_{2, i}\right) \right) \mid i = 1,2,\ldots , n\right\}$, then there exists $o \in O\left( 3\right)$ such that $\left\{ \left( {s}_{1, i},{\overrightarrow{r}}_{1, i}\right) \mid i = 1,2,\ldots , n\right\} = \left\{ \left( {s}_{2, i},{\overrightarrow{r}}_{2, i}{o}^{T}\right) \mid i = 1,2,\ldots , n\right\}$.
+
+Proof. As $\left\{ \left( {s}_{1, i},{P}_{{\overrightarrow{E}}_{1}}\left( {\overrightarrow{r}}_{1, i}\right) \right) \mid i = 1,2,\ldots , n\right\} = \left\{ \left( {s}_{2, i},{P}_{{\overrightarrow{E}}_{2}}\left( {\overrightarrow{r}}_{2, i}\right) \right) \mid i = 1,2,\ldots , n\right\}$, there exists a permutation $\pi : \{ 1,2,\ldots , n\} \rightarrow \{ 1,2,\ldots , n\}$ so that
+
+$$
+{s}_{1, i} = {s}_{2,\pi \left( i\right) },{P}_{{\overrightarrow{E}}_{1}}\left( {\overrightarrow{r}}_{1, i}\right) = {P}_{{\overrightarrow{E}}_{2}}\left( {\overrightarrow{r}}_{2,\pi \left( i\right) }\right) , \tag{20}
+$$
+
+$$
+{s}_{1, i} = {s}_{2,\pi \left( i\right) },{\overrightarrow{r}}_{1, i}{\overrightarrow{E}}_{1}^{T} = {\overrightarrow{r}}_{2,\pi \left( i\right) }{\overrightarrow{E}}_{2}^{T}, \tag{21}
+$$
+
+$$
+{s}_{1, i} = {s}_{2,\pi \left( i\right) },{\overrightarrow{r}}_{1, i} = {\overrightarrow{r}}_{2,\pi \left( i\right) }{\overrightarrow{E}}_{2}^{T}{\overrightarrow{E}}_{1}. \tag{22}
+$$
+
+As ${\overrightarrow{E}}_{1},{\overrightarrow{E}}_{2}$ are both orthogonal matrices, ${\overrightarrow{E}}_{2}^{T}{\overrightarrow{E}}_{1} \in \mathrm{O}\left( 3\right)$. Let $o$ denote ${\overrightarrow{E}}_{2}^{T}{\overrightarrow{E}}_{1}$; then
+
+$$
+\left\{ \left( {s}_{1, i},{\overrightarrow{r}}_{1, i}\right) \mid i = 1,2,\ldots , n\right\} = \left\{ \left( {s}_{2, i},{\overrightarrow{r}}_{2, i}{o}^{T}\right) \mid i = 1,2,\ldots , n\right\} . \tag{23}
+$$
+
+According to [30], there exist $\rho$ and $\varphi$ so that
+
+$$
+\rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\operatorname{Concatenate}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) }\right) }\right) \tag{24}
+$$
+
+encodes $\left\{ {\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ injectively. Let $\phi$ denote this function. According to Lemma A.1, $\phi$ encodes local environments injectively when the difference caused by an orthogonal transformation is neglected.
+
+### A.4 Proof of Theorem 4.2
+
+Notation: Given a molecule with atom coordinates $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ and atomic features (like embedding of atomic number) $s \in {\mathbb{R}}^{N \times F}$ , let $\mathcal{G}$ denote the undirected graph corresponding to the molecule. Node $i$ in $\mathcal{G}$ represents the atom $i$ in the molecule. $\mathcal{G}$ has edge ${ij}$ iff ${r}_{ij} < {r}_{c}$ , where ${r}_{c}$ is the cutoff radius. Let $d\left( {\mathcal{G}, i, j}\right)$ denote the shortest path distance between node $i$ and $j$ in graph $\mathcal{G}$ .
+
+Note that a single layer defined in Equation 5 can still encode the local environment, as extra frame-frame projection cannot lower the expressivity.
+
+Lemma A.2. Given a frame $\overrightarrow{E}$, with suitable functions $\rho$ and $\varphi$, the function $\phi$ defined in Equation 5 encodes the local environment injectively.
+
+Proof. According to Theorem 4.1, there exist ${\rho }^{\prime }$ and ${\varphi }^{\prime }$ so that ${\rho }^{\prime }\left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}{\varphi }^{\prime }\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) }\right) }\right)$ encodes the local environment injectively. Let $\rho = {\rho }^{\prime }$ and $\varphi \left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}}\right) }\right) = {\varphi }^{\prime }\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) }\right)$. Then $\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = {\rho }^{\prime }\left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}{\varphi }^{\prime }\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) }\right) }\right)$, which encodes the local environment injectively.
+
+Now we begin to prove Theorem 4.2.
+
+Proof. We use cat to represent concatenation throughout the proof. Let ${N}_{l}\left( i\right)$ denote $\{ j \mid d\left( {\mathcal{G}, i, j}\right) \leq l\}$. The ${l}^{\text{th}}$ message passing layer has the following form.
+
+$$
+{s}_{i}^{\left( l\right) } = {\rho }_{l}\left( {\mathop{\sum }\limits_{{j \in {N}_{1}\left( i\right) }}{\varphi }_{l}\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}^{\left( l - 1\right) }}\right) }\right) }\right) , \tag{25}
+$$
+
+where ${s}_{i}^{\left( 0\right) } = {s}_{i}$ .
+
+By induction on $l$, we prove that there exist ${\rho }_{l},{\varphi }_{l}$ so that ${s}_{i}^{\left( l\right) } = \psi \left( \left\{ {\operatorname{cat}\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) \mid j \in {N}_{l}\left( i\right) }\right\} \right)$. We first define some auxiliary functions.
+
+According to [30], there exists a multiset function $\psi$ mapping a multiset of invariant representations to an invariant representation injectively. $\psi$ can have the following form
+
+$$
+\psi \left( \left\{ {{x}_{i} \mid i \in \mathbb{I}}\right\} \right) = \mathop{\sum }\limits_{i}\varphi \left( {x}_{i}\right) , \tag{26}
+$$
+
+where $\mathbb{I}$ is some finite index set. As $\psi$ is injective, it has an inverse function.
+
+We define functions $m,{m}^{\prime },{m}^{\prime \prime }$ to extract invariant representations from concatenated node features.
+
+$$
+m\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}^{\left( 0\right) }}\right) }\right) = \operatorname{cat}\left( {{s}_{j}^{\left( 0\right) },{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) . \tag{27}
+$$
+
+
+$$
+{m}^{\prime }\left( {\operatorname{cat}\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) }\right) = {s}_{j}. \tag{28}
+$$
+
+
+$$
+{m}^{\prime \prime }\left( {\operatorname{cat}\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) }\right) = {P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) . \tag{29}
+$$
+
+Last but not least, there exists a function $T$ transforming coordinate projections from one frame to another.
+
+$$
+T\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{P}_{{\overrightarrow{E}}_{j}}\left( {\overrightarrow{r}}_{jk}\right) }\right) = {P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) + {P}_{{\overrightarrow{E}}_{j}}\left( {\overrightarrow{r}}_{jk}\right) {P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) = {P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ik}\right) \tag{30}
+$$
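+
+The identity in Equation 30 relies only on the orthogonality of the frames and on ${\overrightarrow{r}}_{ik} = {\overrightarrow{r}}_{ij} + {\overrightarrow{r}}_{jk}$. The following quick numerical sanity check (a PyTorch sketch with random orthogonal frames, not part of the proof) illustrates it.
+
+```python
+import torch
+
+def proj(v, E):
+    """P_E(v) = v E^T for row vectors or matrices of row vectors."""
+    return v @ E.T
+
+# Random positions and random orthogonal frames E_i, E_j (rows form orthonormal bases).
+pos = torch.randn(3, 3)
+E_i = torch.linalg.qr(torch.randn(3, 3)).Q
+E_j = torch.linalg.qr(torch.randn(3, 3)).Q
+
+r_ij = pos[1] - pos[0]
+r_jk = pos[2] - pos[1]
+r_ik = pos[2] - pos[0]
+
+lhs = proj(r_ij, E_i) + proj(r_jk, E_j) @ proj(E_j, E_i)   # Eq. 30, left-hand side
+rhs = proj(r_ik, E_i)                                      # Eq. 30, right-hand side
+print(torch.allclose(lhs, rhs, atol=1e-5))                 # True
+```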
+
+$l = 1$: let ${\varphi }_{1} = \varphi \circ m$ and let ${\rho }_{1}$ be the identity mapping.
+
+$l > 1$ : Assume for all ${l}^{\prime } < l,{s}_{i}^{\left( {l}^{\prime }\right) } = \psi \left( \left\{ {\operatorname{cat}\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) \mid j \in {N}_{{l}^{\prime }}\left( i\right) }\right\} \right)$ .
+
+${\varphi }_{l}$ has the following form.
+
+$$
+{\varphi }_{l}\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}^{\left( l - 1\right) }}\right) }\right) = \varphi (\psi (
+$$
+
+$$
+\left. \left\{ {\operatorname{cat}\left( {{m}^{\prime }\left( x\right) , T\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{m}^{\prime \prime }\left( x\right) }\right) }\right) \mid x \in {\psi }^{-1}\left( {s}_{j}^{\left( l - 1\right) }\right) }\right\} \right) )\text{.} \tag{31}
+$$
+
+Therefore
+
+$$
+{\varphi }_{l}\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}^{\left( l - 1\right) }}\right) }\right) = \varphi \left( {\psi \left( \left\{ {\operatorname{cat}\left( {{s}_{k},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ik}\right) }\right) \mid k \in {N}_{l - 1}\left( j\right) }\right\} \right) }\right) . \tag{32}
+$$
+
+Note ${\varphi }_{l}$ transforms coordinate projection from an old frame to a new frame.
+
+Therefore, the input of ${\rho }_{l}$, namely ${a}_{i}^{\left( l\right) }$, has the following form.
+
+$$
+{a}_{i}^{\left( l\right) } = \mathop{\sum }\limits_{{j \in N\left( i\right) }}{\varphi }_{l}\left( {\operatorname{cat}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}^{\left( l - 1\right) }}\right) }\right) \tag{33}
+$$
+
+$$
+= \mathop{\sum }\limits_{{j \in N\left( i\right) }}\varphi \left( {\psi \left( \left\{ {\operatorname{cat}\left( {{s}_{k},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ik}\right) }\right) \mid k \in {N}_{l - 1}\left( j\right) }\right\} \right) }\right) \tag{34}
+$$
+
+$$
+= \psi \left( \left\{ {\psi \left( \left\{ {\operatorname{cat}\left( {{s}_{k},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ik}\right) }\right) \mid k \in {N}_{l - 1}\left( j\right) }\right\} \right) \mid j \in N\left( i\right) }\right\} \right) \tag{35}
+$$
+
+We can transform ${a}_{i}^{\left( l\right) }$ to a set of sets of invariant representations with the following function.
+
+$$
+\eta \left( {a}_{i}^{\left( l\right) }\right) = \left\{ {{\psi }^{-1}\left( s\right) \mid s \in {\psi }^{-1}\left( {a}_{i}^{\left( l\right) }\right) }\right\} . \tag{36}
+$$
+
+Therefore, $\eta \left( {a}_{i}^{\left( l\right) }\right) = \left\{ {\left\{ {\operatorname{cat}\left( {{s}_{k},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ik}\right) }\right) \mid k \in {N}_{l - 1}\left( j\right) }\right\} \mid j \in N\left( i\right) }\right\}$
+
+We can use another function $\iota$ that unions the invariant representation sets in a set $\mathbb{S}$ into a single set of invariant representations.
+
+$$
+\iota \left( \mathbb{S}\right) = \mathop{\bigcup }\limits_{{s \in \mathbb{S}}}s. \tag{37}
+$$
+
+${\rho }_{l}$ has the following form.
+
+$$
+{\rho }_{l}\left( {a}_{i}^{\left( l\right) }\right) = \psi \circ \iota \circ \eta \left( {a}_{i}^{\left( l\right) }\right) . \tag{38}
+$$
+
+Therefore, the output is
+
+$$
+{\rho }_{l}\left( {a}_{i}^{\left( l\right) }\right) = \psi \left( \left\{ {\operatorname{cat}\left( {{s}_{k},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ik}\right) }\right) \mid k \in {N}_{l}\left( i\right) }\right\} \right) \tag{39}
+$$
+
+Therefore, $\forall l \in \mathbb{N}$ , there exists ${\rho }_{l},{\varphi }_{l}$ so that ${s}_{i}^{\left( l\right) } = \psi \left( \left\{ {\operatorname{cat}\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) \mid j \in {N}_{l}\left( i\right) }\right\} \right)$ .
+
+As $L$ is the diameter of $\mathcal{G}$, ${s}_{i}^{\left( L\right) } = \psi \left( \left\{ {\operatorname{cat}\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) \mid j \in {N}_{L}\left( i\right) }\right\} \right) = \psi \left( \left\{ {\operatorname{cat}\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) \mid j \in \{ 1,2,\ldots , n\} }\right\} \right)$. As $\psi$ is an injective function, a GNN with $L$ message passing layers defined in Equation 5 encodes $\left\{ {\left( {{s}_{j},{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) }\right) \mid j \in \{ 1,2,\ldots , n\} }\right\}$ injectively into ${s}_{i}^{\left( L\right) }$. According to Lemma A.1, this GNN encodes the whole molecule $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid j \in \{ 1,2,\ldots , n\} }\right\}$ injectively when the difference caused by an orthogonal transformation is neglected.
+
+### A.5 Proof of Theorem 5.1
+
+Proof. (1) We first prove there exists an $\mathrm{O}\left( 3\right)$ -equivariant function $g$ mapping the local environment ${\mathrm{{LE}}}_{i}$ to a frame ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{3 \times 3}$ . The frame has full rank if there does not exist $o \in \mathrm{O}\left( 3\right) , o \neq I, o\left( {L{E}_{i}}\right) =$ $L{E}_{i}$ .
+
+Let $\gamma$ denote a function mapping local environments to sets of vectors.
+
+$$
+\gamma \left( \left\{ {\left( {{\overrightarrow{r}}_{ij},{s}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \left\{ {\text{ Concatenate }\left( {{s}_{j}{\overrightarrow{r}}_{ij},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\} , \tag{40}
+$$
+
+in which ${s}_{j}$ is reshaped to $F \times 1$ and ${\overrightarrow{r}}_{ij}$ has shape $1 \times 3$. $\gamma$ is $\mathrm{O}\left( 3\right)$-equivariant. Therefore, we discuss the aggregation function on a set of equivariant representations, denoted as $\left\{ {{\overrightarrow{u}}_{i} \mid i = 1,2,\ldots , n,{\overrightarrow{u}}_{i} \in {\mathbb{R}}^{F \times 3}}\right\}$.
+
+Assume that $V = \left\{ {\left\{ {{\overrightarrow{u}}_{i} \mid i = 1,2,\ldots , n,{\overrightarrow{u}}_{i} \in {\mathbb{R}}^{F \times 3}}\right\} \mid n = 1,2,\ldots , N}\right\}$ , where $N$ is the upper bound of the size of local environment, is the set of sets of equivariant messages in local environment.
+
+An equivalence relation can be defined on $V : {v}_{1} \in V,{v}_{2} \in V,{v}_{1} \sim {v}_{2}$ iff there exists $o \in$ $\mathrm{O}\left( 3\right) , o\left( {v}_{1}\right) = {v}_{2}$ . Let $\widetilde{V} = V/ \sim$ denote the quotient set. For each equivalence class $\left\lbrack v\right\rbrack$ with no symmetry, a representative $v$ can be selected. We can define a function $r : \widetilde{V} - \{ \left\lbrack v\right\rbrack \mid \left\lbrack v\right\rbrack \in \widetilde{V},\exists v \in$ $\left\lbrack v\right\rbrack , o \in \mathrm{O}\left( 3\right) , o \neq I, o\left( v\right) = v\} \rightarrow V$ as $r\left( \left\lbrack v\right\rbrack \right) = v$ mapping each equivalence class with no symmetry to its representative. For a message set with no symmetry, the transformation from its representative to it is also unique. Let $h : V - \{ v \mid v \in V,\exists o \in \mathrm{O}\left( 3\right) , o \neq I, o\left( v\right) = v\} \rightarrow \mathrm{O}\left( 3\right)$ . $h\left( v\right) = o, o\left( {r\left( \left\lbrack v\right\rbrack \right) }\right) = v$ .
+
+Therefore, the function $g$ can take the form as follows.
+
+$$
+g\left( v\right) = \left\{ \begin{array}{ll} \left\lbrack \begin{array}{lll} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right\rbrack & \text{ if there exists }o \in \mathrm{O}\left( 3\right) , o \neq I,{ov} = v \\ h{\left( v\right) }^{T} & \text{ otherwise } \end{array}\right.
+$$
+
+Therefore, $g \circ \gamma$ is the required function.
+
+We further detail how to select the representative elements. We first define a linear order relation ${ \leq }_{l}$ on $V$: if ${v}_{1},{v}_{2} \in V$ and $\left| {v}_{1}\right| < \left| {v}_{2}\right|$, then ${v}_{1}{ < }_{l}{v}_{2}$. So we only need to consider the order relation between two sets of the same size $n$.
+
+We first define a function $\psi$ mapping message set to a sequence injectively.
+
+$$
+\psi \left( \left\{ {{u}_{i} \mid i = 1,2,\ldots , n,{u}_{i} \in {\mathbb{R}}^{F \times 3}}\right\} \right) = \left\lbrack {\operatorname{flatten}\left( {u}_{\pi \left( i\right) }\right) \mid i = 1,2,\ldots , n}\right\rbrack , \tag{41}
+$$
+
+where $\pi$ is a permutation sorting ${u}_{i}$ by lexicographical order.
+
+For all ${v}_{1},{v}_{2} \in V$ with $\left| {v}_{1}\right| = \left| {v}_{2}\right|$, ${v}_{1}{ \leq }_{l}{v}_{2}$ iff $\psi \left( {v}_{1}\right) \leq \psi \left( {v}_{2}\right)$ by lexicographical order. As the size of the local environment is bounded, the sequence is also of finite length. Therefore, the lexicographical order and thus the linear order relation ${ \leq }_{l}$ are well-defined.
+
+All permutations of $\{ 1,2,\ldots , n\}$ form a permutation set ${\Pi }_{n}$ .
+
+For all $\left\lbrack v\right\rbrack \in \widetilde{V}$, let $r\left( \left\lbrack v\right\rbrack \right) = \arg \mathop{\min }\limits_{{{v}^{\prime } \in \left\lbrack v\right\rbrack }}\psi \left( {v}^{\prime }\right)$. To show the existence of such a minimal sequence, we reformulate it.
+
+$$
+\mathop{\min }\limits_{{{v}^{\prime } \in \left\lbrack v\right\rbrack }}\psi \left( {v}^{\prime }\right) = \mathop{\min }\limits_{{\pi \in {\Pi }_{n}, o \in \mathrm{O}\left( 3\right) }}S\left( {o,\pi }\right) \tag{42}
+$$
+
+$$
+= \mathop{\min }\limits_{{\pi \in {\Pi }_{n}}}\mathop{\min }\limits_{{o \in \mathrm{O}\left( 3\right) }}S\left( {o,\pi }\right) , \tag{43}
+$$
+
+where $S\left( {o,\pi }\right) = \left\lbrack {\operatorname{flatten}\left( {{u}_{\pi \left( i\right) }{o}^{T}}\right) \mid i = 1,2,\ldots , n}\right\rbrack$. Each element of this sequence is continuous in $o$. We first fix $\pi$. As $\mathrm{O}\left( 3\right)$ is a compact group, $\mathop{\min }\limits_{{o \in \mathrm{O}\left( 3\right) }}S{\left( o,\pi \right) }_{1}$ exists, and ${L}_{1} = \{ o \mid o \in \mathrm{O}\left( 3\right) , S{\left( o,\pi \right) }_{1} = \mathop{\min }\limits_{{{o}^{\prime } \in \mathrm{O}\left( 3\right) }}S{\left( {o}^{\prime },\pi \right) }_{1}\}$ is still a compact set. Therefore, $\mathop{\min }\limits_{{o \in {L}_{1}}}S{\left( o,\pi \right) }_{2}$ exists. Let ${L}_{2} = \left\{ {o \mid o \in {L}_{1}, S{\left( o,\pi \right) }_{2} = \mathop{\min }\limits_{{{o}^{\prime } \in {L}_{1}}}S{\left( {o}^{\prime },\pi \right) }_{2}}\right\}$. Similarly, ${L}_{3},{L}_{4},\ldots ,{L}_{3Fn}$ can be defined, and they are non-empty sets. For all ${o}_{1},{o}_{2} \in {L}_{3Fn}$, as $S\left( {{o}_{1},\pi }\right) \leq S\left( {{o}_{2},\pi }\right)$ and $S\left( {{o}_{2},\pi }\right) \leq S\left( {{o}_{1},\pi }\right)$ by lexicographical order, $S\left( {{o}_{2},\pi }\right) = S\left( {{o}_{1},\pi }\right)$ and thus ${o}_{1}\left( v\right) = {o}_{2}\left( v\right)$. If $v$ has no symmetry, i.e., $\forall o \in \mathrm{O}\left( 3\right) , o \neq I, o\left( v\right) \neq v$, then ${o}_{1}\left( v\right) = {o}_{2}\left( v\right) \Rightarrow {o}_{1} = {o}_{2}$. Therefore, ${L}_{3Fn}$ contains a unique element ${o}_{v}^{\left( 0\right) }$ and $\mathop{\min }\limits_{{o \in \mathrm{O}\left( 3\right) }}S\left( {o,\pi }\right)$ is unique.
+
+As ${\Pi }_{n}$ is a finite set, if $\mathop{\min }\limits_{{o \in \mathrm{O}\left( 3\right) }}S\left( {o,\pi }\right)$ exists for all $\pi \in {\Pi }_{n}$, then $\mathop{\min }\limits_{{\pi \in {\Pi }_{n}}}\mathop{\min }\limits_{{o \in \mathrm{O}\left( 3\right) }}S\left( {o,\pi }\right)$ must exist. Therefore, the minimal sequence exists. As ${ \leq }_{l}$ is a linear order, the minimal sequence is unique. With the unique sequence, the unique representative can be determined.
+
+(2) Then we prove there does not exist $o \in \mathrm{O}\left( 3\right) , o \neq I, o\left( {L{E}_{i}}\right) = L{E}_{i}$ if the frame has full rank.
+
+The frame $\overrightarrow{E}$ is a function of local environment. If there exists
+
+$$
+o \in \mathrm{O}\left( 3\right) , o\left( {L{E}_{i}}\right) = L{E}_{i}.
+$$
+
+Then $\overrightarrow{E}\left( {o\left( {L{E}_{i}}\right) }\right) = \overrightarrow{E}\left( {L{E}_{i}}\right) {o}^{T} = \overrightarrow{E}\left( {L{E}_{i}}\right)$ .
+
+As $\overrightarrow{E}$ is an invertible matrix, $o = I$ . Therefore, $L{E}_{i}$ has no symmetry.
+
+## B Derivation of the message passing layer for frame generation
+
+The framework proposed by Villar et al. [15] is
+
+$$
+{h}_{n}\left( \left\{ {{\overrightarrow{m}}_{i1},{\overrightarrow{m}}_{i2},\ldots ,{\overrightarrow{m}}_{in}}\right\} \right) = \mathop{\sum }\limits_{{j = 1}}^{n}g\left( {{\overrightarrow{m}}_{ij},\left\{ {{\overrightarrow{m}}_{i1},\ldots ,{\overrightarrow{m}}_{in}}\right\} - \left\{ {\overrightarrow{m}}_{ij}\right\} }\right) \circ {\overrightarrow{m}}_{ij}, \tag{44}
+$$
+
+where ${h}_{n}$ is the aggregation function for $n$ messages and $g$ is an $\mathrm{O}\left( 3\right)$-invariant function. We can further reformulate it.
+
+$$
+{h}_{n}\left( \left\{ {{\overrightarrow{m}}_{i1},{\overrightarrow{m}}_{i2},\ldots ,{\overrightarrow{m}}_{in}}\right\} \right) = \mathop{\sum }\limits_{{j = 1}}^{n}{g}_{1}^{\left( n\right) }\left( {{g}_{2}^{\left( n\right) }\left( {\overrightarrow{m}}_{ij}\right) ,{h}_{n - 1}\left( {\left\{ {{\overrightarrow{m}}_{i1},{\overrightarrow{m}}_{i2},\ldots ,{\overrightarrow{m}}_{in}}\right\} - \left\{ {\overrightarrow{m}}_{ij}\right\} }\right) }\right) {\overrightarrow{m}}_{ij}, \tag{45}
+$$
+
+where ${g}_{1}^{\left( n\right) },{g}_{2}^{\left( n\right) }$ are two $\mathrm{O}\left( 3\right)$-invariant functions. With this equation, we can recursively build the $n$-message aggregation function ${h}_{n}$ from ${h}_{n - 1}$. Its universal approximation power has been proved in [15].
+
+However, as different nodes can have varied numbers of neighbors, they would have to use different aggregation functions, which is hard to implement. Therefore, we drop the recursive term ${h}_{n - 1}$.
+
+$$
+{h}_{n}\left( \left\{ \left\{ {{\overrightarrow{m}}_{i1},{\overrightarrow{m}}_{i2},\ldots ,{\overrightarrow{m}}_{in}}\right\} \right\} \right) = \mathop{\sum }\limits_{{j = 1}}^{n}g\left( {\overrightarrow{m}}_{ij}\right) {\overrightarrow{m}}_{ij}. \tag{46}
+$$
+
+The message ${\overrightarrow{m}}_{ij}$ can have the form Concatenate $\left( {1, s}\right) \circ {\overrightarrow{r}}_{ij}$ in Theorem 4.1. As $g$ is an invariant function, we can further simplify Equation 46.
+
+$$
+{h}_{n}\left( \left\{ \left\{ {{\overrightarrow{m}}_{i1},{\overrightarrow{m}}_{i2},\ldots ,{\overrightarrow{m}}_{in}}\right\} \right\} \right) = \mathop{\sum }\limits_{{j = 1}}^{n}{g}^{\prime }\left( {{r}_{ij},{s}_{j}}\right) \frac{{\overrightarrow{r}}_{ij}}{{r}_{ij}}, \tag{47}
+$$
+
+where ${g}^{\prime }$ is a function mapping invariant representations to invariant representations.
+
+## C Architecture of GNN-LF
+
+The full architecture is shown in Figure 4.
+
+Following Thölke and Fabritiis [10], we also use a neighborhood embedding block which aggregates neighborhood information as the initial atom feature.
+
+$$
+{s}_{i}^{\left( 0\right) } = {\operatorname{Emb}}_{1}\left( {z}_{i}\right) + \mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}{\operatorname{Emb}}_{2}\left( {z}_{j}\right) \odot f\left( {s}_{ij}^{\left( e\right) }\right) . \tag{48}
+$$
+
+where ${\mathrm{{Emb}}}_{1}$ and ${\mathrm{{Emb}}}_{2}$ are two embedding layers and $f$ is the filter function.
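+
+A possible dense-tensor sketch of the neighborhood embedding in Equation 48 is shown below; the module name, embedding sizes, and the linear filter `f` are illustrative assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class NeighborhoodEmbedding(nn.Module):
+    """Eq. 48: s_i^(0) = Emb1(z_i) + sum_j Emb2(z_j) ⊙ f(s_ij^(e))."""
+
+    def __init__(self, n_rbf, hidden, max_z=100):
+        super().__init__()
+        self.emb1 = nn.Embedding(max_z, hidden)
+        self.emb2 = nn.Embedding(max_z, hidden)
+        self.f = nn.Linear(n_rbf, hidden)
+
+    def forward(self, z, s_e, mask):
+        # z: (N,) atomic numbers, s_e: (N, N, n_rbf), mask: (N, N) neighbors within cutoff.
+        neigh = self.emb2(z).unsqueeze(0) * self.f(s_e)    # contribution of each neighbor j
+        neigh = (neigh * mask.unsqueeze(-1)).sum(dim=1)    # sum over j with r_ij < r_c
+        return self.emb1(z) + neigh
+```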
+
+These special functions are proposed by previous methods [19, 31].
+
+$$
+\operatorname{cutoff}\left( r\right) = \left\{ \begin{array}{l} \frac{1}{2}\left( {1 + \cos \frac{\pi r}{{r}_{c}}}\right) , r < {r}_{c} \\ 0, r \geq {r}_{c} \end{array}\right. \tag{49}
+$$
+
+$$
+{\operatorname{rbf}}_{k}\left( {r}_{ij}\right) = {e}^{-{\beta }_{k}{\left( \exp \left( -{r}_{ij}\right) - {\mu }_{k}\right) }^{2}}, \tag{50}
+$$
+
+where ${\beta }_{k},{\mu }_{k}$ are coefficients of the ${k}^{\text{th }}$ basis.
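+
+These two functions translate directly into a few lines of PyTorch. The sketch below follows Equations 49 and 50, with the basis coefficients ${\beta }_{k},{\mu }_{k}$ left as assumed values rather than the paper's actual initialization.
+
+```python
+import torch
+
+def cutoff(r, r_c):
+    """Smooth cosine cutoff (Eq. 49): 1/2 (1 + cos(pi r / r_c)) for r < r_c, else 0."""
+    return torch.where(r < r_c, 0.5 * (1 + torch.cos(torch.pi * r / r_c)), torch.zeros_like(r))
+
+def rbf(r, beta, mu):
+    """Exponential-Gaussian radial basis (Eq. 50): exp(-beta_k (exp(-r) - mu_k)^2)."""
+    return torch.exp(-beta * (torch.exp(-r.unsqueeze(-1)) - mu) ** 2)
+
+# Example: expand 10 sample distances into 32 basis functions.
+r = torch.linspace(0.5, 6.0, 10)
+beta = torch.full((32,), 10.0)      # assumed coefficients for the 32 bases
+mu = torch.linspace(0.0, 1.0, 32)
+w, s_e = cutoff(r, r_c=5.0), rbf(r, beta, mu)   # shapes (10,), (10, 32)
+```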
+
+For PES tasks, the output module is a sum pooling and a linear layer. Other invariant prediction tasks can also use this module. However, on the QM9 dataset, we design special output modules
+
+
+
+Figure 4: The architecture of GNN-LF. (a) The full architecture of GNN-LF contains four parts: an embedding block, a geo2filter block, message passing (MP) layers, and an output module. Embedding block consists of an embedding layer converting atomic numbers to learnable tensors and a neighborhood embedding block proposed by Thölke and Fabritiis [10]. (b) The geo2filter block builds a graph with the coordinates of atoms, passes messages to produce local frames, projects equivariant features onto the frames, and uses edge invariant features to produce edge filters. (c) A message passing layer filters atom representations with edge filters to produce messages and aggregates these messages to update atom embeddings. (d) The projection block produces ${d}^{1},{d}^{2}$ and concatenates them.
+
+for two properties. For dipole moment $\mu$ , given node representations $\left\lbrack {{s}_{i} \mid i = 1,2,\ldots , N}\right\rbrack$ and atom coordinates $\left\lbrack {{\overrightarrow{r}}_{i} \mid i = 1,2,\ldots , N}\right\rbrack$ , our prediction is as follows.
+
+$$
+\widehat{\mu } = \left| {\mathop{\sum }\limits_{i}\left( {{q}_{i} - {\operatorname{average}}_{j}\left( {q}_{j}\right) }\right) {\overrightarrow{r}}_{i}}\right| , \tag{51}
+$$
+
+where ${q}_{i} \in \mathbb{R}$, the predicted charge, is a function of ${s}_{i}$. We use a linear layer to convert ${s}_{i}$ to ${q}_{i}$. As the whole molecule is electroneutral, we use ${q}_{i} - {\operatorname{average}}_{j}\left( {q}_{j}\right)$.
+
+For electronic spatial extent $\left\langle {R}^{2}\right\rangle$ , we make use of atom mass (known constants) $\left\lbrack {{m}_{i} \mid i = 1,2,\ldots , N}\right\rbrack$ . The output module is as follows.
+
+$$
+{\overrightarrow{r}}_{c} = \frac{\mathop{\sum }\limits_{i}{m}_{i}{\overrightarrow{r}}_{i}}{\mathop{\sum }\limits_{i}{m}_{i}} \tag{52}
+$$
+
+$$
+\left\langle {\widehat{R}}^{2}\right\rangle = \left| {\mathop{\sum }\limits_{i}{x}_{i}{\left| {\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{c}\right| }^{2}}\right| , \tag{53}
+$$
+
+where ${x}_{i} \in \mathbb{R}$ is an invariant representation feature of node $i$ . We also use a linear layer to convert ${s}_{i}$ to ${x}_{i}$ .
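+
+Assuming node representations `s`, coordinates `pos`, and atomic masses `mass` are available, the two special output heads could be sketched as follows; the linear heads for ${q}_{i}$ and ${x}_{i}$ are hypothetical stand-ins for the layers described above.
+
+```python
+import torch
+import torch.nn as nn
+
+def dipole_output(s, pos, q_head):
+    """Eq. 51: mu_hat = | sum_i (q_i - mean_j q_j) r_i |, with q_i predicted from s_i."""
+    q = q_head(s).squeeze(-1)        # (N,) predicted charges
+    q = q - q.mean()                 # enforce electroneutrality
+    return (q.unsqueeze(-1) * pos).sum(dim=0).norm()
+
+def spatial_extent_output(s, pos, mass, x_head):
+    """Eqs. 52-53: center of mass r_c, then <R^2>_hat = | sum_i x_i |r_i - r_c|^2 |."""
+    r_c = (mass.unsqueeze(-1) * pos).sum(dim=0) / mass.sum()
+    x = x_head(s).squeeze(-1)        # (N,) invariant per-atom coefficients
+    return (x * ((pos - r_c) ** 2).sum(dim=-1)).sum().abs()
+
+# Toy usage.
+N, F = 5, 64
+s, pos, mass = torch.randn(N, F), torch.randn(N, 3), torch.rand(N) + 1.0
+q_head, x_head = nn.Linear(F, 1), nn.Linear(F, 1)
+mu_hat = dipole_output(s, pos, q_head)
+r2_hat = spatial_extent_output(s, pos, mass, x_head)
+```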
+
+## D Experiment settings
+
+Computing infrastructure. We use PyTorch for model development. Hyperparameter searching and model training are conducted on an NVIDIA A100 GPU. Inference times are measured on an NVIDIA RTX 3090 GPU.
+
+Training process. For the MD17/QM9 datasets, we set an upper bound (6000/1000) on the number of epochs and use an early stopping strategy that finishes training if the validation score does not improve for 500/50 epochs. We use the Adam optimizer and the ReduceLROnPlateau learning rate scheduler to optimize models.
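+
+In code, this optimization setup amounts to standard PyTorch components. The snippet below is a schematic sketch only: the model, data loaders, batch format, and exact scheduler arguments are placeholders, not the paper's released configuration.
+
+```python
+import torch
+import torch.nn as nn
+
+def train(model, train_loader, val_loader, lr=1e-3, max_epochs=6000, patience=500):
+    """Schematic loop: Adam + ReduceLROnPlateau + early stopping on the validation MAE."""
+    opt = torch.optim.Adam(model.parameters(), lr=lr)
+    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode='min', factor=0.8, patience=30)
+    best, bad_epochs = float('inf'), 0
+    for epoch in range(max_epochs):
+        model.train()
+        for batch in train_loader:
+            opt.zero_grad()
+            loss = nn.functional.l1_loss(model(batch), batch['target'])  # placeholder batch format
+            loss.backward()
+            opt.step()
+        model.eval()
+        with torch.no_grad():
+            val = sum(nn.functional.l1_loss(model(b), b['target']).item() for b in val_loader)
+        sched.step(val)
+        if val < best:
+            best, bad_epochs = val, 0
+        else:
+            bad_epochs += 1
+            if bad_epochs >= patience:   # stop after `patience` epochs without improvement
+                break
+    return best
+```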
+
+Table 4: The inference time and the GPU memory consumption of random batches of 32 molecules (16 molecules for GemNet) from the MD17 dataset. The format is inference time in ms / GPU memory consumption in MB.
+
+| | DimeNet | GemNet | TorchMD | GNN-LF | NoShare |
+| number of parameters | 2.1 M | 2.2 M | 1.3 M | 0.8 M ~ 1.3 M | 2.4 M ~ 5.3 M |
+| aspirin | 133/5790 | 612/15980 | 32/2065 | 10/279 | 22/883 |
+| benzene | 94/1831 | 393/3761 | 33/918 | 8/95 | 11/213 |
+| ethanol | 95/784 | 344/1565 | 32/532 | 8/54 | 11/115 |
+| malonaldehyde | 88/784 | 355/1565 | 32/532 | 7/68 | 10/127 |
+| naphthalene | 112/4470 | 498/11661 | 32/1694 | 9/175 | 15/491 |
+| salicylic acid | 92/3489 | 430/8182 | 34/1418 | 9/176 | 15/424 |
+| toluene | 113/3148 | 423/7153 | 45/1322 | 8/176 | 15/458 |
+| uracil | 107/1782 | 354/3735 | 32/907 | 8/99 | 14/302 |
+| average | 104/2760 | 426/6700 | 34/1174 | 9/140 | 14/377 |
+
+Model hyperparameter tuning. Hyperparameters were selected to minimize the L1 loss on the validation sets. The best hyperparameters selected for each model can be found in our code in the supplementary materials. For MD17/QM9, we fix the initial learning rate to 1e-3/3e-4, the batch size to ${16}/{64}$, and the hidden dimension to 256. The cutoff radius is selected from $\left\lbrack {4,{12}}\right\rbrack$. The number of message passing layers is selected from $\left\lbrack {4,8}\right\rbrack$. The dimension of the rbf is selected from $\left\lbrack {{32},{96}}\right\rbrack$. Please refer to our code for the detailed settings.
+
+Dataset split. We randomly split the molecule set into train/validation/test sets. For MD17, the sizes of the train and validation sets are 950 and 50, respectively; all remaining data is used for testing. For QM9, the sizes of the randomly split train/validation/test sets are 110000, 10000, and 10831, respectively.
+
+## E Scalability
+
+To assess the scalability of our model, we show the inference time for random MD17 batches of 32 molecules on an NVIDIA RTX 3090 GPU. The results are shown in Table 4. Note that GemNet consumes too much memory, so only batches of 16 molecules fit in the GPU. Our model takes only ${30}\%$ of the time and ${12}\%$ of the memory of the fastest baseline. Moreover, NoShare uses 260% more memory and 67% more computation time than GNN-LF with the filter sharing technique.
+
+## F Existing methods using frame
+
+Though some previous works $\left\lbrack {{23},{24},{32},{33}}\right\rbrack$ also use the term "frame" or designs similar to frames, their methods are very different from ours.
+
+The primary motivation of our work is to get rid of equivariant representations for higher and provable expressivity, a simpler architecture, and better scalability. We only use equivariant representations in the frame generation and projection process. After projection, all the remaining parts of our model operate only on invariant representations. In contrast, existing works $\left\lbrack {{23},{24},{32},{33}}\right\rbrack$ still use both equivariant and invariant representations, resulting in extra complexity even after using frames. For example, functions for equivariant and invariant representations still need to be defined separately, and complex operations are needed to mix the information contained in the two kinds of representations. In addition, our model beats the representative methods of this kind in both accuracy and scalability on potential energy surface prediction tasks.
+
+Other than the different primary motivation, our model has an entirely different architecture from existing ones.
+
+1. Generating frame: ClofNet [24] produces a frame for each pair of nodes and uses the cross product to produce the frame. Both EGNN [32] and GMN [33] use coordinate embeddings which are initialized with coordinates. Luo et al. [23] initialize the frame with zeros. These models then use some schemes to update the frames. Our model uses a novel message passing scheme to produce frames and does not update them, leading to a simpler architecture and low computation overhead.
+
+2. Projection: Existing models $\left\lbrack {{23},{24},{32},{33}}\right\rbrack$ only project equivariant features onto the frame, while we also use frame-frame projection, which is verified to be critical both experimentally and theoretically.
+
+3. Message passing layer: Existing models [23, 24, 32, 33] all use both invariant representation and equivariant features and pass both invariant and equivariant messages, which needs to mix invariant representations and equivariant representations, update invariant representations, and update equivariant representations, while our model only uses invariant representations, resulting in an entirely different and much simpler design with significantly higher scalability.
+
+4. Our design tricks, including the message passing scheme to produce frames, filter decomposition, and filter sharing, are not used in $\left\lbrack {{23},{24},{32},{33}}\right\rbrack$ . Our experiments and ablation study verify their effectiveness.
+
+Furthermore, existing models use different groups to describe symmetry. Luo et al. [23] and Du et al. [24] design $\mathrm{{SO}}\left( 3\right)$ -equivariant models, while our model is $\mathrm{O}\left( 3\right)$ -equivariant. We emphasize that this is not a constraint of our model but a requirement of the task. As most target properties of molecules are $\mathrm{O}\left( 3\right)$ -invariant or -equivariant (including the energy and force we aim to predict), our model can fully describe the symmetry.
+
+Our theoretical analysis is also novel. Luo et al. [23], Satorras et al. [32], and Huang et al. [33] provide no theoretical analysis of expressivity. Du et al. [24]'s analysis is primarily based on the framework of Dym and Maron [34], which in turn builds on polynomial approximation and group representation theory. The conclusion there is that a model needs many message passing layers to approximate high-order polynomials and achieve universality. Our theoretical analysis gets rid of polynomials and group representations and provides a much simpler argument. We also prove that one message passing layer proposed in our paper is enough to be universal.
+
+In summary, although also using "frame", our work differs significantly from all existing work in method, theory, and task.
+
+Gauge-equivariant CNNs. Gauge-equivariant methods [35-37] have never been used for the potential energy surface task. These methods seem similar to ours as they also project equivariant representations onto some selected orientations. However, the differences are apparent.
+
+1. Some of these methods are not strictly $\mathrm{O}\left( 3\right)$ -equivariant. For example, the model of de Haan et al. [36] is not strictly equivariant for angles $\neq {2\pi }/N$ , while our model (and all existing models for potential energy surfaces) is strictly $\mathrm{O}\left( 3\right)$ -equivariant. Loss of $\mathrm{O}\left( 3\right)$ -equivariance leads to high sample complexity.
+
+2. Building a grid is infeasible for potential energy surface tasks, as atoms can move in the whole space. Moreover, the energy prediction must be a smooth function of the coordinates of the atoms, so the space should not be discretized. The model of Suk et al. [37] works on a discrete grid and cannot be used for the molecular force field.
+
+3. Even if Suk et al. [37] achieved strict O(3)-equivariance at high complexity, their model only uses the tangent plane's angle and loses some information: only one angle relative to a reference neighbor is used. Such an angle is expressive enough in a 2D tangent space because the coordinate can be represented as $\left( {r\cos \theta , r\sin \theta }\right)$ . However, for molecules in 3D space, one angle is not enough (the coordinates can be represented as $\left( {r\cos \theta \sin \phi , r\sin \theta \sin \phi , r\cos \phi }\right)$ , and the angles in the tangent space only provide $\theta$ ). In contrast, we use the projection on three frame directions, so our model can fully capture the coordinates.
+
+4. Gauge-equivariant methods all use constrained kernels, which need careful design. Our model needs no specially designed kernel and can directly use the ordinary message passing scheme. Such a simple design leads to provable expressivity, a simpler architecture, and low time complexity. Our time complexity is $O\left( {Nn}\right)$ , while that of Suk et al. [37] is $O\left( {N{n}^{2}}\right)$ , where $N$ is the number of atoms and $n$ is the maximum node degree.
+
+## G Expressivity with symmetric input
+
+We use the notation of Equation 5. SchNet's message passing can be formalized as follows.
+
+$$
+\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\operatorname{Concatenate}\left( {{s}_{j},{r}_{ij}}\right) }\right) }\right) . \tag{54}
+$$
+
+In implementation, GNN-LF has the following form.
+
+$$
+\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\operatorname{Concatenate}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j},{r}_{ij}}\right) }\right) }\right) . \tag{55}
+$$
+
+Therefore, for all input molecules, GNN-LF is at least as expressive as SchNet.
+
+Theorem G.1. $\forall L \in {\mathbb{N}}^{ + }$ , for every $L$ -layer SchNet, there exists an $L$ -layer GNN-LF that produces the same output for all input molecules.
+
+Proof. Let the following equation denote the ${l}^{\text{th }}$ layer of SchNet.
+
+$$
+{\phi }_{l}^{\prime }\left( \left\{ {\left( {{s}_{j}^{\left( l\right) },{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = {\rho }_{l}^{\prime }\left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}{\varphi }_{l}^{\prime }\left( {\text{ Concatenate }\left( {{s}_{j}^{\left( l - 1\right) },{r}_{ij}}\right) }\right) }\right) . \tag{56}
+$$
+
+Let ${\rho }_{l} = {\rho }_{l}^{\prime }$ and ${\varphi }_{l}\left( {\operatorname{Concatenate}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j},{r}_{ij}}\right) }\right) = {\varphi }_{l}^{\prime }\left( {\operatorname{Concatenate}\left( {{s}_{j},{r}_{ij}}\right) }\right)$ , i.e., ${\varphi }_{l}$ neglects the projection inputs.
+
+With these choices, the ${l}^{\text{th }}$ layer of GNN-LF reduces to the following form.
+
+$$
+{\phi }_{l}\left( \left\{ {\left( {{s}_{j}^{\left( l\right) },{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = {\rho }_{l}\left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}{\varphi }_{l}\left( {\text{ Concatenate }\left( {{s}_{j}^{\left( l - 1\right) },{r}_{ij}}\right) }\right) }\right) . \tag{57}
+$$
+
+Therefore, this GNN-LF produces the same output as SchNet.
+
+## H How to overcome the frame degeneration problem
+
+As shown in Theorem 5.1, if the frame is O(3)-equivariant, no matter what scheme is used, the frame will degenerate when the input molecule is symmetric. In other words, the degeneration problem is rooted in the symmetry of the molecule. Therefore, we try to break the symmetry by assigning node identity features ${s}^{\prime }$ to atoms. The ${i}^{\text{th }}$ row of ${s}^{\prime }$ is $i$ . We concatenate $s$ and ${s}^{\prime }$ as the new node feature $\widetilde{s} \in {\mathbb{R}}^{N \times \left( {F + 1}\right) }$ . Let $\eta$ denote a function concatenating the node feature $s$ and the node identity features ${s}^{\prime }$ , $\eta \left( s\right) = \widetilde{s}$ . Its inverse function removes the node identity, ${\eta }^{-1}\left( \widetilde{s}\right) = s$ .
+
+In this section, we assume that the cutoff radius is large enough so that local environments cover the whole molecule. Let $\left\lbrack n\right\rbrack$ denote the sequence $1,2,\ldots , n$ . Let $s \in {\mathbb{R}}^{N \times F}$ denote the invariant atomic features, $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ denote the $3\mathrm{D}$ coordinates of atoms, and ${\overrightarrow{r}}_{i}$ , the ${i}^{\text{th }}$ row of $\overrightarrow{r}$ , denote the coordinates of atom $i$ . Let $\overrightarrow{r} - {\overrightarrow{r}}_{i}$ denote an $N \times 3$ matrix whose ${j}^{\text{th }}$ row is ${\overrightarrow{r}}_{j} - {\overrightarrow{r}}_{i}$ . We assume that $N > 1$ throughout this section.
+
+Now each atom in the molecule has a different feature, and frame generation becomes much simpler.
+
+Proposition H.1. With node identity features, there exists an O(3)-equivariant function mapping the local environment $L{E}_{i} = \left\{ {\left( {{\widetilde{s}}_{j},{\overrightarrow{r}}_{ij}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\}$ to a frame ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{3 \times 3}$ such that the first $\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right)$ rows of ${\overrightarrow{E}}_{i}$ form an orthonormal basis of $\operatorname{span}\left( \left\{ {{\overrightarrow{r}}_{ij} \mid j \in \left\lbrack N\right\rbrack }\right\} \right)$ while the other rows are zero.
+
+Proof. For node $i$ , we can use the following procedure to produce a frame.
+
+Initialize ${\overrightarrow{E}}_{i}$ as an empty matrix. For $j$ in $\left\lbrack {1,2,3,\ldots ,N}\right\rbrack$ , if ${\overrightarrow{r}}_{ij}$ is linearly independent of the row vectors in ${\overrightarrow{E}}_{i}$ , add ${\overrightarrow{r}}_{ij}$ as a row vector of ${\overrightarrow{E}}_{i}$ .
+
+Therefore, when the procedure finishes, the row vectors of ${\overrightarrow{E}}_{i}$ form a maximal linearly independent system of $\left\{ {{\overrightarrow{r}}_{ij} \mid j \in \left\lbrack N\right\rbrack }\right\}$ .
+
+Then, we use the Gram-Schmidt process to orthonormalize the non-empty row vectors in ${\overrightarrow{E}}_{i}$ , and fill the empty rows of ${\overrightarrow{E}}_{i}$ with zeros to form a $3 \times 3$ matrix. Therefore, the first $\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right)$ rows of ${\overrightarrow{E}}_{i}$ are orthonormal and can linearly express all vectors in $\left\{ {{\overrightarrow{r}}_{ij} \mid j \in \left\lbrack N\right\rbrack }\right\}$ . In other words, the first $\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right)$ rows of ${\overrightarrow{E}}_{i}$ form an orthonormal basis of $\operatorname{span}\left( \left\{ {{\overrightarrow{r}}_{ij} \mid j \in \left\lbrack N\right\rbrack }\right\} \right)$ .
+
+Note that ${\overrightarrow{r}}_{ij},\overrightarrow{0}$ are $\mathrm{O}\left( 3\right)$ -equivariant vectors. Therefore, the frame produced with this scheme is $\mathrm{O}\left( 3\right)$ -equivariant.
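+A minimal numpy sketch of this construction is given below. The function name and tolerance are ours; selecting independent vectors and orthonormalizing them are merged into one incremental Gram-Schmidt pass, which yields the same orthonormal basis of the span. The fixed iteration order over neighbors is well defined precisely because node identity features make atoms distinguishable.
+
+```python
+import numpy as np
+
+def build_frame(r_ij, tol=1e-8):
+    """Frame construction sketch for Proposition H.1.
+
+    r_ij: (N, 3) array of relative positions of all atoms w.r.t. atom i.
+    Returns a 3x3 matrix whose first rank rows form an orthonormal basis of
+    span({r_ij}) and whose remaining rows are zero.
+    """
+    basis = []
+    for v in r_ij:
+        w = v.astype(float).copy()
+        # Gram-Schmidt step: remove components along already selected directions.
+        for b in basis:
+            w -= (w @ b) * b
+        if np.linalg.norm(w) > tol:          # v is linearly independent of the basis
+            basis.append(w / np.linalg.norm(w))
+        if len(basis) == 3:
+            break
+    E = np.zeros((3, 3))                     # empty rows stay zero
+    for k, b in enumerate(basis):
+        E[k] = b
+    return E
+```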
+
+With the frame, GNN-LF has the universal approximation property.
+
+Proposition H.2. Assume that node identity features are used and the frame is produced by the method in Proposition H.1. Then every $O\left( 3\right)$ -invariant and translation-invariant function $f\left( {s,\overrightarrow{r}}\right)$ can be written as a function of the embedding of node 1 produced by one message passing layer proposed in Theorem 4.1.
+
+Proof. Let ${e}_{r} \in {\mathbb{R}}^{3 \times 3}$ denote a diagonal matrix whose first $r$ diagonal elements are 1 and others are 0.
+
+With node identity features and the method in Proposition H.1, the first $\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right)$ rows of ${\overrightarrow{E}}_{i}$ form an orthonormal basis of $\operatorname{span}\left( \left\{ {{\overrightarrow{r}}_{ij} \mid j \in \left\lbrack N\right\rbrack }\right\} \right)$ while the other rows are zero. In particular, every vector in $\left\{ {{\overrightarrow{r}}_{1j} \mid j \in \left\lbrack N\right\rbrack }\right\}$ can be written as a linear combination of rows in ${\overrightarrow{E}}_{1}$ , i.e., ${\overrightarrow{r}}_{1j} = {w}_{1j}{\overrightarrow{E}}_{1}$ . Therefore, the projection operation ${P}_{{\overrightarrow{E}}_{1}} : \left\{ {{\overrightarrow{r}}_{1j} \mid j \in \left\lbrack N\right\rbrack }\right\} \rightarrow \left\{ {{\overrightarrow{r}}_{1j}{\overrightarrow{E}}_{1}^{T} \mid j \in \left\lbrack N\right\rbrack }\right\}$ is injective, as ${P}_{{\overrightarrow{E}}_{1}}\left( {\overrightarrow{r}}_{1j}\right) {\overrightarrow{E}}_{1} = {w}_{1j}{\overrightarrow{E}}_{1}{\overrightarrow{E}}_{1}^{T}{\overrightarrow{E}}_{1} = {w}_{1j}{e}_{\operatorname{rank}\left( {\overrightarrow{E}}_{1}\right) }{\overrightarrow{E}}_{1} = {w}_{1j}{\overrightarrow{E}}_{1} = {\overrightarrow{r}}_{1j}.$
+
+According to the proof of Theorem 4.1, there exists an injective function $\phi$ such that the node embedding ${z}_{1} = \phi \left( \left\{ {\left( {{\widetilde{s}}_{j},{P}_{{\overrightarrow{E}}_{1}}\left( {\overrightarrow{r}}_{1j}\right) }\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \right)$ . Note that both ${\overrightarrow{E}}_{1}$ and ${z}_{1}$ are functions of $L{E}_{1}$ .
+
+Let $\tau$ denote a function $\left( {{z}_{1},{\overrightarrow{E}}_{1}}\right) = \tau \left( \left\{ {\left( {{\widetilde{s}}_{j},{\overrightarrow{r}}_{1j}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \right) .\;\forall o \in \mathrm{O}\left( 3\right) ,\left( {{z}_{1},{\overrightarrow{E}}_{1}{o}^{T}}\right) =$ $\tau \left( \left\{ {\left( {{\widetilde{s}}_{j},{\overrightarrow{r}}_{1j}{o}^{T}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \right)$ . Moreover, $\tau$ is also an invertible function because $\left\{ {\left( {{\widetilde{s}}_{j},{\overrightarrow{r}}_{1j}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} =$ $\left\{ {\left( {s, p{\overrightarrow{E}}_{1}}\right) \mid \left( {s, p}\right) \in {\phi }^{-1}\left( {z}_{1}\right) }\right\}$ .
+
+Because the last column of $\widetilde{s}$ is the node identity feature, there exists a bijective function $\psi$ converting a set of features to a matrix of features: $\psi \left( \left\{ {\left( {{\widetilde{s}}_{j},{\overrightarrow{r}}_{1j}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \right) = \left( {s, - \left( {\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) }\right)$ . Intuitively, it puts the features of the atom with node identity $i$ into the ${i}^{\text{th }}$ row of the feature matrix. Similarly, ${\psi }^{\prime }\left( \left\{ {\left( {{\widetilde{s}}_{j},{P}_{{\overrightarrow{E}}_{1}}\left( {\overrightarrow{r}}_{1j}\right) }\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \right) = \left( {s, - \left( {\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) {\overrightarrow{E}}_{1}^{T}}\right) .$
+
+As $f$ is a translation- and O(3)-invariant function, $f\left( {s,\overrightarrow{r}}\right) = f\left( {s,\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) = f\left( {s, - \left( {\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) }\right) =$ $f\left( {\psi \left( \left\{ {\left( {{\widetilde{s}}_{j},{\overrightarrow{r}}_{1j}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \right) }\right)$ . Let $g = f \circ \psi \circ {\tau }^{-1}, g\left( {{\overrightarrow{E}}_{1},{z}_{1}}\right) = f\left( {s,\overrightarrow{r}}\right)$ . Moreover, $\forall o \in \mathrm{O}\left( 3\right)$ , $g\left( {{\overrightarrow{E}}_{1}{o}^{T},{z}_{1}}\right) = f\left( {s,\overrightarrow{r}{o}^{T}}\right) = f\left( {s,\overrightarrow{r}}\right) .$
+
+Let $\operatorname{extend}\left( {\overrightarrow{E}}_{i}\right) \in O\left( 3\right)$ denote any orthogonal matrix whose first $\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right)$ rows equal the first rows of ${\overrightarrow{E}}_{i}$ . Therefore, $f\left( {s,\overrightarrow{r}}\right) = f\left( {s,\overrightarrow{r}\operatorname{extend}{\left( {\overrightarrow{E}}_{1}\right) }^{T}}\right) = g\left( {{\overrightarrow{E}}_{1}\operatorname{extend}{\left( {\overrightarrow{E}}_{1}\right) }^{T},{z}_{1}}\right) = g\left( {{e}_{\operatorname{rank}\left( {\overrightarrow{E}}_{1}\right) },{z}_{1}}\right) = {g}^{\prime }\left( {\operatorname{rank}\left( {\overrightarrow{E}}_{1}\right) ,{z}_{1}}\right)$ .
+
+Note that $\operatorname{rank}\left( {\overrightarrow{E}}_{1}\right) = \operatorname{rank}\left( {\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) = \operatorname{rank}\left( {{P}_{{\overrightarrow{E}}_{1}}\left( {\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) }\right) = \operatorname{rank}\left( {\iota \circ {\psi }^{\prime } \circ {\phi }^{-1}\left( {z}_{1}\right) }\right)$ , where $\iota$ is a selection function: $\iota \left( {s, - \left( {\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) {\overrightarrow{E}}_{1}^{T}}\right) = - \left( {\overrightarrow{r} - {\overrightarrow{r}}_{1}}\right) {\overrightarrow{E}}_{1}^{T}$ . Therefore, $f\left( {s,\overrightarrow{r}}\right) = {g}^{\prime }\left( {\operatorname{rank}\left( {\iota \circ {\psi }^{\prime } \circ {\phi }^{-1}\left( {z}_{1}\right) }\right) ,{z}_{1}}\right) = {g}^{\prime \prime }\left( {z}_{1}\right)$ .
+
+For simplicity, let the function $\varphi$ denote GNN-LF with node identity features (including adding node identity features, generating the frame, and one message passing layer proposed in Theorem 4.1); $\varphi \left( {s,\overrightarrow{r}}\right)$ is the embedding of node 1.
+
+Node identity features help avoid the expressivity loss caused by frame degeneration. However, GNN-LF's output is no longer permutation invariant. Therefore, we use the relational pooling method [26], which keeps permutation invariance at the cost of extra computation overhead.
+
+To illustrate this method, we first define some concepts. A function $\pi : \left\lbrack n\right\rbrack \rightarrow \left\lbrack n\right\rbrack$ is a permutation iff it is bijective. All permutations on $\left\lbrack n\right\rbrack$ form the permutation group ${S}_{n}$ . To keep permutation invariance, we compute the output for all possible atom permutations and average them. We define the permutation of a matrix as follows: for every matrix $M$ of shape $N \times m$ and $\forall \pi \in {S}_{N}$ , the ${i}^{\text{th }}$ row of $\pi \left( M\right)$ equals the ${\left( {\pi }^{-1}\left( i\right) \right) }^{\text{th }}$ row of $M$ .
+
+Proposition H.3. For all $O\left( 3\right)$ -invariant, permutation-invariant, and translation-invariant functions $f\left( {s,\overrightarrow{r}}\right)$ , there exist a GNN-LF $\varphi$ and some function $g$ with which $\frac{1}{N!}\mathop{\sum }\limits_{{\pi \in {S}_{N}}}g\left( {\varphi \left( {\pi \left( s\right) ,\pi \left( \overrightarrow{r}\right) }\right) }\right)$ is permutation invariant and equals $f\left( {s,\overrightarrow{r}}\right)$ .
+
+Proof. Define a "frame" (defined in Definition 1 in [26]) $F : V \rightarrow {2}^{{S}_{n}}$ , where $V$ is the embedding space. $\forall v \in V, F\left( v\right) = {S}_{n}$ . So the relational pooling of GNN-LF with node identity features $\langle g \circ \varphi {\rangle }_{F}\left( {s,\overrightarrow{r}}\right) = \frac{1}{N!}\mathop{\sum }\limits_{{\pi \in {S}_{N}}}g\left( {\varphi \left( {\pi \left( s\right) }\right) ,\pi \left( \overrightarrow{r}\right) }\right)$ . Note that the permutation operation $\pi$ and $\mathrm{O}\left( 3\right)$ operation $o$ commute: $\pi \left( {\overrightarrow{r}{o}^{T}}\right) = \pi \left( \overrightarrow{r}\right) {o}^{T}$ . According to Theorem 2 in [26], $\langle g \circ \varphi {\rangle }_{F}$ is permutation invariant.
+
+According to Theorem 4 in [26], if there exist functions ${g}^{\prime }$ and ${\varphi }^{\prime }$ such that ${g}^{\prime } \circ {\varphi }^{\prime } = f$ (the existence is shown in Proposition H.2), there also exist a GNN-LF $\varphi$ and a function $g$ such that $\langle g \circ \varphi {\rangle }_{F}\left( {s,\overrightarrow{r}}\right) = f\left( {s,\overrightarrow{r}}\right)$ .
+
+Therefore, we can completely solve the frame degeneration problem with the relational pooling trick and node identity features. However, the time complexity is up to $O\left( {N!{N}^{2}}\right)$ , so we only analyze this method theoretically.
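+For concreteness, a minimal sketch of the relational pooling trick is shown below; the names `relational_pooling`, `model`, and `readout` are ours, and the $O\left( {N!}\right)$ loop makes it usable only for very small molecules, in line with the purely theoretical role of this construction.
+
+```python
+import itertools
+import math
+import numpy as np
+
+def relational_pooling(s, r, model, readout):
+    """Average the output over all atom permutations to restore permutation invariance.
+
+    s: (N, F) invariant atom features, r: (N, 3) coordinates.
+    model(s, r) is assumed to return the embedding of node 1 (GNN-LF with node
+    identity features); readout maps the embedding to a scalar.
+    """
+    n = s.shape[0]
+    total = 0.0
+    for perm in itertools.permutations(range(n)):
+        idx = np.array(perm)                      # one atom ordering
+        total += readout(model(s[idx], r[idx]))   # same rows permuted in s and r
+    return total / math.factorial(n)              # average over all N! orderings
+```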
+
+## I Why is global frame more likely to degenerate than local frame?
+
+Let $\left\lbrack N\right\rbrack$ denote the sequence $1,2,\ldots ,N$ . $N$ is the number of atoms in the molecule.
+
+We first consider when local frame degenerates. As shown in Theorem 5.1, the degeneration happens if and only if the local environment is symmetric under some orthogonal transformations. $\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right) < 3 \Leftrightarrow \exists o \in \mathrm{O}\left( 3\right) , o \neq I,\left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{ij}{o}^{T}}\right) \mid {r}_{ij} < {r}_{c}}\right\} = \left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\} .$
+
+The global frame has the following form,
+
+$$
+\overrightarrow{E} = \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i} \tag{58}
+$$
+
+We first prove some properties of the function $\overrightarrow{E}$ .
+
+Proposition I.1. $\overrightarrow{E}$ is an $O\left( 3\right)$ -equivariant, translation-invariant, and permutation-invariant function.
+
+Proof. $\mathrm{O}\left( 3\right)$ -equivariance: $\forall o \in \mathrm{O}\left( 3\right) ,{\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r}{o}^{T}}\right) = {\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r}}\right) {o}^{T}$ . Therefore,
+
+$$
+\overrightarrow{E}\left( {s,\overrightarrow{r}{o}^{T}}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r}{o}^{T}}\right) = \left( {\mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r}}\right) }\right) {o}^{T} = \overrightarrow{E}\left( {s,\overrightarrow{r}}\right) {o}^{T}. \tag{59}
+$$
+
+Translation-invariance: For every translation $\overrightarrow{t} \in {\mathbb{R}}^{3}$ , let $\overrightarrow{r} + \overrightarrow{t}$ denote a matrix of shape $N \times 3$ whose ${i}^{\text{th }}$ row is ${\overrightarrow{r}}_{i} + \overrightarrow{t}$ . As ${\overrightarrow{E}}_{i}$ is a function of ${\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j} = {\overrightarrow{r}}_{i} + \overrightarrow{t} - \left( {{\overrightarrow{r}}_{j} + \overrightarrow{t}}\right)$ , we have ${\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r} + \overrightarrow{t}}\right) = {\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r}}\right)$ . Therefore,
+
+$$
+\overrightarrow{E}\left( {s,\overrightarrow{r} + t}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r} + t}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( {s,\overrightarrow{r}}\right) = \overrightarrow{E}\left( {s,\overrightarrow{r}}\right) . \tag{60}
+$$
+
+Permutation-invariance: for every permutation $\pi \in {S}_{N}$ , $\pi {\left( \overrightarrow{r}\right) }_{i} = {\overrightarrow{r}}_{{\pi }^{-1}\left( i\right) }$ .
+
+$$
+\overrightarrow{E}\left( {\pi \left( s\right) ,\pi \left( \overrightarrow{r}\right) }\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( {\pi \left( s\right) ,\pi \left( \overrightarrow{r}\right) }\right) \tag{61}
+$$
+
+$$
+= \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( \left\{ {\left( {\pi {\left( s\right) }_{j},\pi {\left( \overrightarrow{r}\right) }_{i} - \pi {\left( \overrightarrow{r}\right) }_{j}}\right) \mid \begin{Vmatrix}{\pi {\left( \overrightarrow{r}\right) }_{i} - \pi {\left( \overrightarrow{r}\right) }_{j}}\end{Vmatrix} < {r}_{c}}\right\} \right) \tag{62}
+$$
+
+$$
+= \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( \left\{ {\left( {{s}_{{\pi }^{-1}\left( j\right) },{\overrightarrow{r}}_{{\pi }^{-1}\left( i\right) } - {\overrightarrow{r}}_{{\pi }^{-1}\left( j\right) }}\right) \mid {r}_{{\pi }^{-1}\left( i\right) {\pi }^{-1}\left( j\right) } < {r}_{c}}\right\} \right) \tag{63}
+$$
+
+$$
+= \mathop{\sum }\limits_{{i = 1}}^{N}{\overrightarrow{E}}_{i}\left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) \tag{64}
+$$
+
+$$
+= \overrightarrow{E}\left( {s,\overrightarrow{r}}\right) \text{.} \tag{65}
+$$
+
+Then we prove a sufficient condition for global frame degeneration.
+
+Proposition I.2. $\operatorname{rank}\left( \overrightarrow{E}\right) < 3$ if there exist $\overrightarrow{t} \in {\mathbb{R}}^{3}$ and $o \in O\left( 3\right) , o \neq I$ such that $\left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{i} - \overrightarrow{t}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} = \left\{ {\left( {{s}_{i},\left( {{\overrightarrow{r}}_{i} - \overrightarrow{t}}\right) {o}^{T}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\}$ .
+
+Proof. As $\overrightarrow{E}$ is a permutation invariant function, $\overrightarrow{E} = \overrightarrow{E}\left( \left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{i}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} \right)$ .
+
+As $\overrightarrow{E}$ is a translation-invariant and $\mathrm{O}\left( 3\right)$ -equivariant function,
+
+$$
+\overrightarrow{E}\left( \left\{ {\left( {{s}_{i},\left( {{\overrightarrow{r}}_{i} - \overrightarrow{t}}\right) {o}^{T}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} \right) = \overrightarrow{E}\left( \left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{i} - \overrightarrow{t}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} \right) {o}^{T} = \overrightarrow{E}\left( \left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{i}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} \right) {o}^{T}. \tag{66}
+$$
+
+Therefore, under the condition $\left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{i} - \overrightarrow{t}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} = \left\{ {\left( {{s}_{i},\left( {{\overrightarrow{r}}_{i} - \overrightarrow{t}}\right) {o}^{T}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\}$ , we have
+
+$$
+\overrightarrow{E}\left( \left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{i}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} \right) {o}^{T} = \overrightarrow{E}\left( \left\{ {\left( {{s}_{i},{\overrightarrow{r}}_{i}}\right) \mid i \in \left\lbrack N\right\rbrack }\right\} \right) , \tag{67}
+$$
+
+$$
+\Rightarrow \overrightarrow{E}\left( \left\{ \left( {{s}_{i},{\overrightarrow{r}}_{i}}\right) \mid i \in \left\lbrack N\right\rbrack \right\} \right) \left( {I - {o}^{T}}\right) = 0\text{.} \tag{68}
+$$
+
+Therefore, by Sylvester's rank inequality, $\operatorname{rank}\left( \overrightarrow{E}\right) + \operatorname{rank}\left( {I - {o}^{T}}\right) - 3 \leq 0$ . As $I \neq {o}^{T}$ , $\operatorname{rank}\left( {I - {o}^{T}}\right) > 0$ , so $\operatorname{rank}\left( \overrightarrow{E}\right) < 3$ .
+
+The main difference between the degeneration conditions is the choice of origin. The local frame of atom $i$ degenerates when the molecule is symmetric with atom $i$ as the origin point, while the global frame degenerates if the molecule is symmetric with any origin point. Therefore, the global frame is more likely to degenerate.
+
+Corollary I.1. Assume the cutoff radius is large enough so that local environments contain all atoms. If there exists $i,\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right) < 3$ , then $\operatorname{rank}\left( \overrightarrow{E}\right) < 3$ .
+
+Proof. As $\operatorname{rank}\left( {\overrightarrow{E}}_{i}\right) < 3$ , $\exists o \in \mathrm{O}\left( 3\right) , o \neq I$ , $\left\{ {\left( {{s}_{j},\left( {{\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j}}\right) {o}^{T}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} = \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\}$ .
+
+Therefore,
+
+$$
+\left\{ {\left( {{s}_{j}, - \left( {{\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j}}\right) {o}^{T}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} = \left\{ {\left( {{s}_{j}, - \left( {{\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j}}\right) }\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \tag{69}
+$$
+
+$$
+\Rightarrow \left\{ {\left( {{s}_{j},\left( {{\overrightarrow{r}}_{j} - {\overrightarrow{r}}_{i}}\right) {o}^{T}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} = \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{j} - {\overrightarrow{r}}_{i}}\right) \mid j \in \left\lbrack N\right\rbrack }\right\} \tag{70}
+$$
+
+Let $\overrightarrow{t} = {\overrightarrow{r}}_{i}$ , according to Proposition I.2, $\operatorname{rank}\left( \overrightarrow{E}\right) < 3$ .
+
+Therefore, when the cutoff radius is large enough, the global frame will also degenerate if some local frame degenerates.
+
+## J How does GNN-LF keep O(3)-invariance?
+
+The input of GNN-LF is atomic numbers $z \in {\mathbb{Z}}^{N}$ and 3D coordinates $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ , where $N$ is the number of atoms in the molecule. The energy prediction produced by GNN-LF should be O(3)-invariant. To formalize, $\forall o \in \mathrm{O}\left( 3\right) ,\operatorname{GNN-LF}\left( {z,\overrightarrow{r}}\right) = \operatorname{GNN-LF}\left( {z,\overrightarrow{r}{o}^{T}}\right)$ . For example, when the input molecule rotates, the output of GNN-LF should not change.
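+This requirement can be sanity-checked numerically: apply a random orthogonal matrix to the coordinates and compare the two predictions. The sketch below assumes `gnn_lf` is a callable returning a scalar energy; the function name and tolerance are ours.
+
+```python
+import numpy as np
+
+def check_o3_invariance(gnn_lf, z, r, seed=0, atol=1e-5):
+    """Check GNN-LF(z, r) == GNN-LF(z, r @ o.T) for a random o in O(3)."""
+    rng = np.random.default_rng(seed)
+    # QR decomposition of a random Gaussian matrix yields an orthogonal matrix.
+    o, _ = np.linalg.qr(rng.standard_normal((3, 3)))
+    e1 = gnn_lf(z, r)
+    e2 = gnn_lf(z, r @ o.T)
+    return abs(e1 - e2) < atol
+```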
+
+We restate Definition 2.2 and Lemma 2.1 here.
+
+Definition J.1. Representation $s$ is called an invariant representation if $s\left( {z,\overrightarrow{r}}\right) = s\left( {z,\overrightarrow{r}{o}^{T}}\right) ,\forall o \in O\left( 3\right) , z \in {\mathbb{Z}}^{N},\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ . Representation $\overrightarrow{v}$ is called an equivariant representation if $\overrightarrow{v}\left( {z,\overrightarrow{r}}\right) {o}^{T} = \overrightarrow{v}\left( {z,\overrightarrow{r}{o}^{T}}\right) ,\forall o \in O\left( 3\right) , z \in {\mathbb{Z}}^{N},\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ .
+
+Lemma J.1.
+
+1. Any function of invariant representation $s$ will produce an invariant representation.
+
+2. Let $s \in {\mathbb{R}}^{F}$ denote an invariant representation, $\overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ denote an equivariant representation. We define $s \circ \overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ as a matrix whose $\left( {i,j}\right)$ th element is ${s}_{i}{\overrightarrow{v}}_{ij}$ . When $\overrightarrow{v} \in {\mathbb{R}}^{1 \times 3}$ , we first broadcast it along the first dimension. Then the output is also an equivariant representation.
+
+3. Let $\overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ denote an equivariant representation. $\overrightarrow{E} \in {\mathbb{R}}^{3 \times 3}$ denotes an equivariant frame. The projection of $\overrightarrow{v}$ to $\overrightarrow{E}$ , denoted as ${P}_{\overrightarrow{E}}\left( \overrightarrow{v}\right) \mathrel{\text{:=}} \overrightarrow{v}{\overrightarrow{E}}^{T}$ , is an invariant representation in ${\mathbb{R}}^{F \times 3}$ . For $\overrightarrow{v},{P}_{\overrightarrow{E}}$ is a bijective function. Its inverse ${P}_{\overrightarrow{E}}^{-1}$ converts an invariant representation $s \in {\mathbb{R}}^{F \times 3}$ to an equivariant representation in ${\mathbb{R}}^{F \times 3},{P}_{\overrightarrow{E}}^{-1}\left( s\right) = s\overrightarrow{E}$ .
+
+4. Projection of $\overrightarrow{v}$ to a general equivariant representation ${\overrightarrow{v}}^{\prime } \in {\mathbb{R}}^{{F}^{\prime } \times 3}$ can also be defined. It produces an invariant representation in ${\mathbb{R}}^{F \times {F}^{\prime }},{P}_{{\overrightarrow{v}}^{\prime }}\left( \overrightarrow{v}\right) = \overrightarrow{v}{\overrightarrow{v}}^{\prime T}$ .
+
+As shown in Figure 1, GNN-LF first generates a frame for each atom and projects the equivariant features of neighbor atoms onto the frame. A graph with only invariant features is then produced. An ordinary GNN is then used to process the graph and produce the output. We illustrate these steps one by one.
+
+Notations. The initial node feature of node $i$ , ${z}_{i} \in \mathbb{N}$ , is an integer atomic number, which a neural network cannot process directly. So we first use an embedding layer to transform ${z}_{i}$ to a float feature vector ${s}_{i} = s\left( {z}_{i}\right) \in {\mathbb{R}}^{F}$ , where $F$ is the hidden dimension. According to the first point of Lemma J.1, ${s}_{i}$ is an invariant representation.
+
+${\overrightarrow{r}}_{i} \in {\mathbb{R}}^{1 \times 3}$ , the 3D coordinates of atom $i$ , is an equivariant representation. ${\overrightarrow{r}}_{ij} = {\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j} \in {\mathbb{R}}^{1 \times 3}$ is the position of atom $i$ relative to atom $j$ . $\forall o \in \mathrm{O}\left( 3\right) ,{\overrightarrow{r}}_{ij}\left( {z,\overrightarrow{r}{o}^{T}}\right) = {\overrightarrow{r}}_{i}{o}^{T} - {\overrightarrow{r}}_{j}{o}^{T} = {\overrightarrow{r}}_{ij}\left( {z,\overrightarrow{r}}\right) {o}^{T}$ , so ${\overrightarrow{r}}_{ij}$ is an equivariant representation. ${r}_{ij} = \sqrt{{\overrightarrow{r}}_{ij}{\overrightarrow{r}}_{ij}^{T}} \in \mathbb{R}$ denotes the distance between atoms $i$ and $j$ . According to the fourth point of Lemma J.1, ${\overrightarrow{r}}_{ij}{\overrightarrow{r}}_{ij}^{T}$ is an invariant representation, and by the first point of Lemma J.1, ${r}_{ij}$ is thus also an invariant representation.
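+A minimal PyTorch sketch of this notation step is shown below; the names `embed`, `initial_features`, and the maximum atomic number of 100 are our assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+F_DIM = 256                                  # hidden dimension F
+embed = nn.Embedding(100, F_DIM)             # lookup table over atomic numbers (size assumed)
+
+def initial_features(z, r):
+    """z: (N,) long tensor of atomic numbers, r: (N, 3) coordinates."""
+    s = embed(z)                             # invariant node features s_i in R^F
+    r_ij = r.unsqueeze(1) - r.unsqueeze(0)   # (N, N, 3), r_ij = r_i - r_j (equivariant)
+    d_ij = r_ij.norm(dim=-1)                 # (N, N) pairwise distances (invariant)
+    return s, r_ij, d_ij
+```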
+
+Frame Generation. As shown in Equation 8, our frame has the following form.
+
+$$
+{\overrightarrow{E}}_{i} = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}\frac{w\left( {r}_{ij}\right) }{{r}_{ij}}\left( {f\left( {r}_{ij}\right) \odot {s}_{j}}\right) \circ {\overrightarrow{r}}_{ij}, \tag{71}
+$$
+
+where $w\left( {r}_{ij}\right) \in \mathbb{R}$ and $f\left( {r}_{ij}\right) \in {\mathbb{R}}^{F}$ denote two functions of ${r}_{ij}$ and $\odot$ denotes the Hadamard product. $\frac{w\left( {r}_{ij}\right) }{{r}_{ij}}\left( {f\left( {r}_{ij}\right) \odot {s}_{j}}\right)$ as a whole is a function of ${r}_{ij}$ and ${s}_{j}$ , which are both invariant representations. According to the first point of Lemma J.1, $\frac{w\left( {r}_{ij}\right) }{{r}_{ij}}\left( {f\left( {r}_{ij}\right) \odot {s}_{j}}\right)$ is an invariant representation in ${\mathbb{R}}^{F}$ . $\circ$ denotes the scale operation described in the second point of Lemma J.1, so $\frac{w\left( {r}_{ij}\right) }{{r}_{ij}}\left( {f\left( {r}_{ij}\right) \odot {s}_{j}}\right) \circ {\overrightarrow{r}}_{ij}$ is an equivariant representation. The frame of atom $i$ , namely ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{F \times 3}$ , is an equivariant representation, because ${\overrightarrow{E}}_{i}\left( {z,\overrightarrow{r}{o}^{T}}\right) = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}\left( {\frac{w\left( {r}_{ij}\right) }{{r}_{ij}}\left( {f\left( {r}_{ij}\right) \odot {s}_{j}}\right) \circ {\overrightarrow{r}}_{ij}{o}^{T}}\right) = \left( {\mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}\frac{w\left( {r}_{ij}\right) }{{r}_{ij}}\left( {f\left( {r}_{ij}\right) \odot {s}_{j}}\right) \circ {\overrightarrow{r}}_{ij}}\right) {o}^{T} = {\overrightarrow{E}}_{i}\left( {z,\overrightarrow{r}}\right) {o}^{T}.$
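+A dense PyTorch sketch of Equation 71 is given below. `generate_frames`, `w_fn`, and `f_fn` are placeholder names for the weight and radial filter functions; the released code likely uses sparse neighbor lists rather than the dense $N \times N$ form used here for clarity.
+
+```python
+import torch
+
+def generate_frames(s, vec_ij, d_ij, w_fn, f_fn, cutoff):
+    """Equation 71 (sketch): E_i = sum_j w(r_ij)/r_ij * (f(r_ij) ⊙ s_j) ∘ r_ij.
+
+    s: (N, F) invariant features, vec_ij: (N, N, 3) with vec_ij[i, j] = r_i - r_j,
+    d_ij: (N, N) distances. w_fn(d) -> (N, N), f_fn(d) -> (N, N, F).
+    Returns frames of shape (N, F, 3).
+    """
+    n = s.shape[0]
+    mask = (d_ij < cutoff) & ~torch.eye(n, dtype=torch.bool)  # j != i within cutoff
+    coeff = w_fn(d_ij) / d_ij.clamp(min=1e-9)                 # w(r_ij) / r_ij, (N, N)
+    filt = f_fn(d_ij) * s.unsqueeze(0)                        # (N, N, F): f(r_ij) ⊙ s_j
+    filt = filt * (coeff * mask).unsqueeze(-1)                # apply weight and neighbor mask
+    # scale operation ∘ and sum over neighbors j
+    return (filt.unsqueeze(-1) * vec_ij.unsqueeze(2)).sum(dim=1)
+```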
+
+Projection. The projection is composed of two parts, as shown in Equation 9 and Equation 6.
+
+$$
+{d}_{ij}^{1} = \frac{1}{{r}_{ij}}\left( {{\overrightarrow{r}}_{ij}{\overrightarrow{E}}_{i}^{T}}\right) ,\;\;{d}_{ij}^{2} = \operatorname{diag}\left( {{W}_{1}{\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}{W}_{2}^{T}}\right) , \tag{72}
+$$
+
+where ${W}_{1},{W}_{2} \in {\mathbb{R}}^{F \times F}$ are two learnable linear layers. According to the fourth point of Lemma J.1, ${\overrightarrow{r}}_{ij}{\overrightarrow{E}}_{i}^{T}$ is an invariant representation, so ${d}_{ij}^{1}$ is invariant. Likewise, ${\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}$ is an invariant representation, and ${d}_{ij}^{2} = \operatorname{diag}\left( {{W}_{1}{\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}{W}_{2}^{T}}\right)$ is a function of ${\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}$ , so ${d}_{ij}^{2}$ is invariant as well.
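+A PyTorch sketch of Equation 72 follows. The module name is ours; it exploits the identity $\operatorname{diag}\left( {{W}_{1}{\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}{W}_{2}^{T}}\right)_{f} = \langle {\left( {{W}_{1}{\overrightarrow{E}}_{j}}\right) }_{f},{\left( {{W}_{2}{\overrightarrow{E}}_{i}}\right) }_{f}\rangle$ to avoid forming the full $F \times F$ matrix.
+
+```python
+import torch
+import torch.nn as nn
+
+class Projection(nn.Module):
+    """Equation 72 (sketch): coordinate projection d1 and frame-frame projection d2."""
+    def __init__(self, hidden_dim):
+        super().__init__()
+        self.w1 = nn.Linear(hidden_dim, hidden_dim, bias=False)
+        self.w2 = nn.Linear(hidden_dim, hidden_dim, bias=False)
+
+    def forward(self, vec_ij, d_ij, frames):
+        # d1_ij = (1/r_ij) r_ij E_i^T, shape (N, N, F)
+        d1 = torch.einsum('ijk,ifk->ijf', vec_ij, frames)
+        d1 = d1 / d_ij.clamp(min=1e-9).unsqueeze(-1)
+        # d2_ij = diag(W1 E_j E_i^T W2^T), shape (N, N, F)
+        w1e = self.w1.weight @ frames        # (N, F, 3): W1 E_j for every node j
+        w2e = self.w2.weight @ frames        # (N, F, 3): W2 E_i for every node i
+        d2 = torch.einsum('jfk,ifk->ijf', w1e, w2e)
+        return d1, d2
+```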
+
+Graph Neural Network. We use an ordinary GNN to produce the energy prediction. The GNN takes ${s}_{i}$ as the input node features and $\left( {{r}_{ij},{d}_{ij}^{1},{d}_{ij}^{2}}\right)$ as the input edge features: GNN-LF $\left( {z,\overrightarrow{r}}\right) = \operatorname{GNN}\left( {\left\{ {{s}_{i} \mid i = 1,2,..,N}\right\} ,\left\{ {\left( {{r}_{ij},{d}_{ij}^{1},{d}_{ij}^{2}}\right) \mid i = 1,2,..,N, j = 1,2,..,N}\right\} }\right)$ . As all inputs of the GNN are invariant to $\mathrm{O}\left( 3\right)$ transformations, the energy prediction will also be $\mathrm{O}\left( 3\right)$ -invariant.
+
+Our GNN has an ordinary message passing scheme. The message from atom $j$ to atom $i$ is
+
+$$
+{m}_{ij} = {f}_{2}\left( {{r}_{ij},{d}_{ij}^{1},{d}_{ij}^{2}}\right) \odot {s}_{j}, \tag{73}
+$$
+
+where ${f}_{2}$ is a neural network whose output is in ${\mathbb{R}}^{F}$ . The message combines the features of edge $\left( {i,j}\right)$ and node $j$ . Each message passing layer updates the node feature ${s}_{i}$ :
+
+$$
+{s}_{i} \leftarrow {s}_{i} + g\left( {\mathop{\sum }\limits_{{j \in N\left( i\right) }}{m}_{ij}}\right) \tag{74}
+$$
+
+where $g$ is a multi-layer perceptron and $N\left( i\right)$ is the set of neighbor nodes of node $i$ .
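+The sketch below combines Equations 73 and 74 into one PyTorch layer. The specific MLP shapes for `f2` and `g` are our assumptions, and the raw distance is used directly as an edge feature instead of an rbf expansion for brevity.
+
+```python
+import torch
+import torch.nn as nn
+
+class MessagePassingLayer(nn.Module):
+    """Eqs. 73-74 (sketch): m_ij = f2(r_ij, d1_ij, d2_ij) ⊙ s_j, then s_i += g(sum_j m_ij)."""
+    def __init__(self, hidden_dim):
+        super().__init__()
+        self.f2 = nn.Sequential(nn.Linear(1 + 2 * hidden_dim, hidden_dim), nn.SiLU(),
+                                nn.Linear(hidden_dim, hidden_dim))
+        self.g = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
+                               nn.Linear(hidden_dim, hidden_dim))
+
+    def forward(self, s, d_ij, d1, d2, mask):
+        edge = torch.cat([d_ij.unsqueeze(-1), d1, d2], dim=-1)  # (N, N, 1 + 2F) edge features
+        m = self.f2(edge) * s.unsqueeze(0)                       # (N, N, F): filter ⊙ s_j
+        m = m * mask.unsqueeze(-1)                               # keep only neighbors j in N(i)
+        return s + self.g(m.sum(dim=1))                          # residual node update
+```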
+
+After several message passing layers, ${s}_{i}$ contains rich graph information. The energy prediction is
+
+$$
+\widehat{E} = h\left( {\mathop{\sum }\limits_{{i = 1}}^{N}{s}_{i}}\right) , \tag{75}
+$$
+
+where $h$ is a multi-layer perceptron.
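+A sketch of this readout is given below; the module name is ours. Forces, as described in the main text, could then be obtained as the negative gradient of this energy with respect to the coordinates via autograd.
+
+```python
+import torch
+import torch.nn as nn
+
+class EnergyReadout(nn.Module):
+    """Equation 75 (sketch): E_hat = h(sum_i s_i)."""
+    def __init__(self, hidden_dim):
+        super().__init__()
+        self.h = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
+                               nn.Linear(hidden_dim, 1))
+
+    def forward(self, s):
+        # Sum-pool node features over atoms, then map to a scalar energy.
+        return self.h(s.sum(dim=0)).squeeze(-1)
+```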
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..15a54fdf383ca7d358f0560085cee8b49dc032ce
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/0lSm-R82jBW/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,418 @@
+§ GRAPH NEURAL NETWORK WITH LOCAL FRAME FOR MOLECULAR POTENTIAL ENERGY SURFACE
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Modeling molecular potential energy surface is of pivotal importance in science. Graph Neural Networks have shown great success in this field. However, their message passing schemes need special designs to capture geometric information and fulfill symmetry requirements like rotation equivariance, leading to complicated architectures. To avoid these designs, we introduce a novel local frame method to molecule representation learning and analyze its expressivity. Projected onto a frame, equivariant features like 3D coordinates are converted to invariant features, so that we can capture geometric information with these projections and decouple the symmetry requirement from GNN design. Theoretically, we prove that given non-degenerate frames, even ordinary GNNs can encode molecules injectively and reach maximum expressivity with coordinate projection and frame-frame projection. In experiments, our model uses a simple ordinary GNN architecture yet achieves state-of-the-art accuracy. The simpler architecture also leads to higher scalability. Our model takes only about 30% of the inference time and 10% of the GPU memory of the most efficient baselines.
+
+§ 1 INTRODUCTION
+
+Prediction of molecular properties is widely used in fields such as material searching, drug designing, and understanding chemical reactions [1]. Among properties, potential energy surface (PES) [2], the relationship between the energy of a molecule and its geometry, is of pivotal importance as it can determine the dynamics of molecular systems and many other properties. Many computational chemistry methods have been developed for the prediction, but few can achieve both high precision and scalability.
+
+In recent years, machine learning (ML) methods have emerged, which are both accurate and efficient. Graph Neural Networks (GNNs) are promising among these ML methods. They have improved continuously [3-10] and achieved state-of-the-art performance on many benchmark datasets. Compared with popular GNNs used in other graph tasks [11], these models need special designs, as molecules are more than a graph composed of merely nodes and edges. Atoms are in the continuous 3D space, and the prediction targets like energy are sensitive to the coordinates of atoms. Therefore, GNNs for molecules must include geometric information. Moreover, these models should keep the symmetry of the target properties for generalization. For example, the energy prediction should be invariant to the coordinate transformations in $\mathrm{O}\left( 3\right)$ group, like rotation and reflection.
+
+All existing methods can keep the invariance. Some models $\left\lbrack {4,5,8}\right\rbrack$ use hand-crafted invariant features like distance, angle, and dihedral angle as the input of GNN. Others use equivariant representations, which change with the coordinate transformations. Among them, some $\left\lbrack {6,9,{12}}\right\rbrack$ use irreducible representations of the $\mathrm{{SO}}\left( 3\right)$ group. The other models $\left\lbrack {7,{10}}\right\rbrack$ manually design functions for equivariant and invariant representations. All these methods can keep invariance, but they vary in performance. Therefore, expressivity analysis is necessary. However, the symmetry requirement hinders the application of the existing theoretical framework for ordinary GNNs [13].
+
+By using the local frame, we decouple the symmetry requirement. As shown in Figure 1, our model, namely GNN-LF, first produces a frame (a set of bases of ${\mathbb{R}}^{3}$ space) equivariant to $\mathrm{O}\left( 3\right)$ transformations. Then it projects the relative positions and frames of neighbor atoms on the frame as the edge features. Therefore, an ordinary GNN with no special design for symmetry can work on the graph with only invariant features. The expressivity of the GNN for molecules can also be proved using a framework for ordinary GNNs [13]. As the GNN needs no special design for symmetry, GNN-LF also has a simpler architecture and, thus, better scalability. Our model achieves state-of-the-art performance on the MD17 and QM9 datasets. It also uses only 30% of the time and 10% of the GPU memory of the fastest baseline on the PES task.
+
+ < g r a p h i c s >
+
+Figure 1: An illustration of our model. One local frame is generated for each atom. Frames are used to transform geometric information into invariant representations. Then an ordinary GNN is applied.
+
+§ 2 PRELIMINARIES
+
+Ordinary GNN. Message passing neural network (MPNN) [14] is a common framework of GNNs. For each node, a message passing layer aggregates information from neighbors to update the node representations. The ${k}^{\text{ th }}$ layer can be formulated as follows.
+
+$$
+{\mathbf{h}}_{v}^{\left( k\right) } = {\mathrm{U}}^{\left( k\right) }\left( {{\mathbf{h}}_{v}^{\left( k - 1\right) },\mathop{\sum }\limits_{{u \in N\left( v\right) }}{M}^{\left( k\right) }\left( {{\mathbf{h}}_{u}^{\left( k - 1\right) },{e}_{vu}}\right) }\right) \tag{1}
+$$
+
+where ${\mathbf{h}}_{v}^{\left( k\right) }$ is the representations of node $v$ at the ${k}^{\text{ th }}$ layer, $N\left( v\right)$ is the set of neighbors of $v,{\mathbf{h}}_{v}^{\left( 0\right) }$ is the node $v$ ’s features, ${e}_{uv}$ is the features of edge ${uv}$ , and ${U}^{\left( k\right) },{M}^{\left( k\right) }$ are some functions.
+
+Xu et al. [13] provide a theoretical framework for the expressivity of ordinary GNNs. One message passing layer can encode neighbor nodes injectively and then reaches maximum expressivity. With several message passing layers, MPNN can learn the information of multi-hop neighbors.
+
+Modeling PES. PES is the relationship between molecular energy and geometry. Given a molecule with $N$ atoms, our model takes the kinds of atoms $z \in {\mathbb{Z}}^{N}$ and the $3\mathrm{D}$ coordinates of atoms $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ as input to predict the energy $\widehat{E} \in \mathbb{R}$ of this molecule. It can also predict the force $\widehat{\overrightarrow{F}} = - {\nabla }_{\overrightarrow{r}}\widehat{E} \in {\mathbb{R}}^{N \times 3}.$
+
+Equivariance. To formalize the symmetry requirement, we define equivariant and invariant functions as in [15].
+
+Definition 2.1. Given a function $h : \mathbb{X} \rightarrow \mathbb{Y}$ and a group $G$ acting on $\mathbb{X}$ and $\mathbb{Y}$ as $\star$ . We say that $h$ is
+
+$$
+G\text{ -invariant: }\;\text{ if }h\left( {g \star x}\right) = h\left( x\right) ,\forall x \in \mathbb{X},g \in G \tag{2}
+$$
+
+$$
+G\text{ -equivariant: if }h\left( {g \star x}\right) = g \star h\left( x\right) ,\forall x \in \mathbb{X},g \in G \tag{3}
+$$
+
+The energy is invariant to the permutation of atoms, coordinates' translations, and coordinates' orthogonal transformations (rotations and reflections). GNN naturally keeps the permutation invariance. As the relative position ${\overrightarrow{r}}_{ij} = {\overrightarrow{r}}_{i} - {\overrightarrow{r}}_{j} \in {\mathbb{R}}^{1 \times 3}$ , which is invariant to translation, is used as the input to GNNs, the translation invariance can also be ensured. So we focus on orthogonal transformations. Orthogonal transformations of coordinates form the group $\mathrm{O}\left( 3\right) = \left\{ {Q \in {\mathbb{R}}^{3 \times 3} \mid Q{Q}^{T} = I}\right\}$ , where $I$ is the identity matrix. Representations are considered as functions of $z$ and $\overrightarrow{r}$ , so we can define equivariant and invariant representations.
+
+Definition 2.2. Representation $s$ is called an invariant representation if $s\left( {z,\overrightarrow{r}}\right) = s\left( {z,\overrightarrow{r}{o}^{T}}\right) ,\forall o \in$ $O\left( 3\right) ,z \in {\mathbb{Z}}^{N},\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ . Representation $\overrightarrow{v}$ is called an equivariant representation if $\overrightarrow{v}\left( {z,\overrightarrow{r}}\right) {o}^{T} =$ $\overrightarrow{v}\left( {z,\overrightarrow{r}{o}^{T}}\right) ,\forall o \in O\left( 3\right) ,z \in {\mathbb{Z}}^{N},\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ .
+
+Invariant and equivariant representations are also called scalar and vector representations respectively in some previous work [7].
+
+Frame is a special kind of equivariant representation. Throughout our theoretical analysis, a frame $\overrightarrow{E}$ is an orthogonal matrix in ${\mathbb{R}}^{3 \times 3}$ , $\overrightarrow{E}{\overrightarrow{E}}^{T} = I$ . GNN-LF generates a frame ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{3 \times 3}$ for each node $i$ . We will discuss how to generate the frames in Section 5.
+
+In Lemma 2.1, we introduce some basic operations of representations.
+
+Lemma 2.1.
+
+ * Any function of invariant representation $s$ will produce an invariant representation.
+
+ * Let $s \in {\mathbb{R}}^{F}$ denote an invariant representation, $\overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ denote an equivariant representation. We define $s \circ \overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ as a matrix whose $\left( {i,j}\right)$ th element is ${s}_{i}{\overrightarrow{v}}_{ij}$ . When $\overrightarrow{v} \in {\mathbb{R}}^{1 \times 3}$ , we first broadcast it along the first dimension. Then the output is also an equivariant representation.
+
+ * Let $\overrightarrow{v} \in {\mathbb{R}}^{F \times 3}$ denote an equivariant representation. $\overrightarrow{E} \in {\mathbb{R}}^{3 \times 3}$ denotes an equivariant frame. The projection of $\overrightarrow{v}$ to $\overrightarrow{E}$ , denoted as ${P}_{\overrightarrow{E}}\left( \overrightarrow{v}\right) \mathrel{\text{ := }} \overrightarrow{v}{\overrightarrow{E}}^{T}$ , is an invariant representation in ${\mathbb{R}}^{F \times 3}$ . For $\overrightarrow{v},{P}_{\overrightarrow{E}}$ is a bijective function. Its inverse ${P}_{\overrightarrow{E}}^{-1}$ converts an invariant representation $s \in {\mathbb{R}}^{F \times 3}$ to an equivariant representation in ${\mathbb{R}}^{F \times 3},{P}_{\overrightarrow{E}}^{-1}\left( s\right) = s\overrightarrow{E}$ .
+
+ * Projection of $\overrightarrow{v}$ to a general equivariant representation ${\overrightarrow{v}}^{\prime } \in {\mathbb{R}}^{{F}^{\prime } \times 3}$ can also be defined. It produces an invariant representation in ${\mathbb{R}}^{F \times {F}^{\prime }},{P}_{{\overrightarrow{v}}^{\prime }}\left( \overrightarrow{v}\right) = \overrightarrow{v}{\overrightarrow{v}}^{\prime T}$ .
+
+Local Environment. Most PES models set a cutoff radius ${r}_{c}$ and encode the local environment of each atom as defined in Definition 2.3.
+
+Definition 2.3. Let ${r}_{ij}$ denote $\begin{Vmatrix}{\overrightarrow{r}}_{ij}\end{Vmatrix}$ . The local environment of atom $i$ is $L{E}_{i} = \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ , the set of invariant atom features ${s}_{j}$ (like atomic numbers) and relative positions ${\overrightarrow{r}}_{ij}$ of atoms $j$ within the sphere centered at $i$ with cutoff distance ${r}_{c}$ , where ${r}_{c}$ is usually a hyperparameter.
+
+In this work, orthogonal transformation of a set/sequence means transforming each element in the set/sequence. For example, an orthogonal transformation $o$ will map $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ to $\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}{o}^{T}}\right) \mid {r}_{ij} < {r}_{c}}\right\}$ .
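+A minimal numpy sketch of Definition 2.3 is given below; the function name is ours and, following the usual convention, the center atom itself is excluded from its own local environment.
+
+```python
+import numpy as np
+
+def local_environment(i, s, r, r_cut):
+    """Definition 2.3 (sketch): {(s_j, r_ij) | r_ij < r_cut} for center atom i."""
+    r_ij = r[i] - r                                  # relative positions r_i - r_j, (N, 3)
+    dist = np.linalg.norm(r_ij, axis=-1)
+    keep = (dist < r_cut) & (np.arange(len(r)) != i)  # within cutoff, excluding atom i
+    return s[keep], r_ij[keep]
+```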
+
+§ 3 RELATED WORK
+
+We classify existing ML models for PES into two classes: manual descriptors and GNNs. GNN-LF outperforms the representative of each kind in experiments.
+
+Manual Descriptor. These models first use manually designed functions with few learnable parameters to convert one molecule to a descriptor vector and then feed the vector into some ordinary ML models like kernel regression [16-18] and neural network [19-21] to produce the prediction. These methods are more scalable and data-efficient than GNNs. However, due to the hard-coded descriptors, they are less accurate and cannot process variable-size molecules or different kinds of atoms.
+
+GNN. These GNNs mainly differ in the way to incorporate geometric information.
+
+Invariant models use rotation-invariant geometric features only. Schutt et al. [3] and Schütt et al. [4] only consider the distance between atoms. Klicpera et al. [5] introduce angular features, and Gasteiger et al. [8] further use dihedral angles. Similar to GNN-LF, the input of the GNN is invariant. However, the features are largely hand-crafted and are not expressive enough, while our projections on frames are learnable and provably expressive. Moreover, as some features are of multiple atoms (for example, angle is a feature of three-atom tuple), the message passing scheme passes messages between node tuples rather than nodes, while GNN-LF uses an ordinary GNN with lower time complexity.
+
+Recent works have also utilized equivariant features, which will change as the input coordinates rotate. Some $\left\lbrack {6,9,{12}}\right\rbrack$ are based on irreducible representations of the ${SO}\left( 3\right)$ group. Though having certain theoretical expressivity guarantees [22], these methods and analyses are based on polynomial approximation. High-order tensors are needed to approximate complex functions like high-order polynomials. However, in implementation, only low-order tensors are used, and these models' empirical performance is not high. Other works $\left\lbrack {7,{10}}\right\rbrack$ model equivariant interactions in Cartesian space using both invariant and equivariant representations. They achieve good empirical performance but have no theoretical guarantees. Different sets of functions must be designed separately for different input and output types (invariant or equivariant representations), so their architectures are also complex. Our work adopts a completely different approach. We introduce $\mathrm{O}\left( 3\right)$ -equivariant frames and project all equivariant features on the frames. The expressivity can be proved using the existing framework [13] and needs no high-order tensors.
+
+"Frame" models. Some of existing methods [23, 24] designed for other tasks also use the term "frame". However, in conclusion, these methods differ significantly from ours in task, theory, and method as follows.
+
+ * Most target properties of molecules are $\mathrm{O}\left( 3\right)$ -equivariant or invariant (including energy and force). Our model can fully describe symmetry, while existing models cannot. For example, a molecule and its mirroring must have the same energy, and GNN-LF will produce the same prediction while existing models cannot keep the invariance.
+
+ * Our theoretical analysis removes group representation used in [22, 24].
+
+ * Existing models use non-learnable schemes to initialize frames and then update them. GNN-LF uses a learnable message passing scheme to produce frames and does not update them, leading to a simpler architecture and lower overhead.
+
+ * Only coordinate projection is used previously, while we add frame-frame projection.
+
+The comparison is detailed in Appendix F.
+
+§ 4 HOW DO FRAMES BOOST EXPRESSIVITY?
+
+Though symmetry imposes constraints on our design, our primary focus is expressivity. Therefore, we only discuss how the frame boosts expressivity in this section. Our methods, implementations, and how our model keeps invariance will be detailed in Section 6 and Appendix J. Throughout this section, we assume the existence of frames, which will be discussed in Section 5.
+
+§ 4.1 DECOUPLING SYMMETRY REQUIREMENT
+
+Though equivariant representations have been used for a long time, it is still unclear how to transform them ideally. Existing methods $\left\lbrack {7,{10},{15},{25}}\right\rbrack$ either have no theoretical guarantee or tend to use too many parameters. This section asks a fundamental question: can we use invariant representations instead of equivariant ones and keep expressivity?
+
+Given any frame $\overrightarrow{E}$ , the projection ${P}_{\overrightarrow{E}}\left( \overrightarrow{x}\right)$ contains all the information of the input equivariant feature $\overrightarrow{x}$ , because the inverse projection can recover $\overrightarrow{x}$ from the projection: ${P}_{\overrightarrow{E}}^{-1}\left( {{P}_{\overrightarrow{E}}\left( \overrightarrow{x}\right) }\right) = \overrightarrow{x}$ . Therefore, we can use ${P}_{\overrightarrow{E}}$ and ${P}_{\overrightarrow{E}}^{-1}$ to change the type (invariant or equivariant representation) of the input and output of any function without information loss.
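+This lossless round trip is easy to verify numerically when the frame is an orthogonal $3 \times 3$ matrix, as in our theoretical analysis; the snippet below is a sketch with names of our choosing.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+E, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthonormal frame, E E^T = I
+x = rng.standard_normal((5, 3))                   # an equivariant feature in R^{F x 3}, F = 5
+
+proj = x @ E.T          # P_E(x): the projection onto the frame
+x_back = proj @ E       # P_E^{-1}(proj) recovers x exactly
+assert np.allclose(x_back, x)
+```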
+
+Proposition 4.1. Given frame $\overrightarrow{E}$ and any equivariant function $g$ , there exists a function $\widetilde{g} =$ ${P}_{\overrightarrow{E}} \cdot g \cdot {P}_{\overrightarrow{E}}^{-1}$ which takes invariant representations as input and outputs invariant representations, where $\cdot$ is function composition. $g$ can be expressed with $\widetilde{g} : g = {P}_{\overrightarrow{E}}^{-1} \cdot \widetilde{g} \cdot {P}_{\overrightarrow{E}}$ .
+
+We can use a multilayer perceptron (MLP) to approximate the function $\widetilde{g}$ and thus achieve universal approximation for all $\mathrm{O}\left( 3\right)$ -equivariant functions. Proposition 4.1 motivates us to transform equivariant representations to projections in the beginning and then operate fully in the invariant representation space. Invariant representations can also be transformed back to equivariant predictions with the inverse projection operation if necessary.
+
+§ 4.2 PROJECTION BOOSTS MESSAGE PASSING LAYER
+
+The previous section discusses how projection decouples the symmetry requirement. This section shows that projections contain rich geometry information. Even ordinary GNNs can reach maximum expressivity with projections on frames, while existing models with hand-crafted invariant features are not expressive enough. The discussion is composed of two parts. Coordinate projection boosts the expressivity of one single message passing layer, and frame-frame projection boosts the whole GNN composed of multiple message passing layers.
+
+Note that in this section, we consider input ${x}_{1},{x}_{2}$ (local environment or the whole molecule) as equal if they can interconvert with some orthogonal transformation $\left( {\exists o \in \mathrm{O}\left( 3\right) ,o\left( {x}_{1}\right) = {x}_{2}}\right)$ , because the invariant representations and energy prediction are invariant under $\mathrm{O}\left( 3\right)$ transformation. Therefore, injective mapping and maximum expressivity mean that function can differentiate inputs unequal in this sense.
+
+ < g r a p h i c s >
+
+Figure 2: The green balls in the figure are the center atoms. We use balls with different colors to represent different kinds of atoms. (a) SchNet cannot distinguish two local environments due to the inability to capture angle. (b) DimeNet cannot distinguish two local environments with the same set of angles. Blue lines form a regular icosahedron and help visualization. The center atom is at the symmetrical center of the icosahedron. (c) Invariant models fail to pass the orientation information, while the projection of frame vectors can solve this problem. For simplicity, we only show one vector (orange) to represent the frame.
+
+Encoding local environment. Just as MPNN can encode neighbor nodes injectively on graphs, GNN-LF can encode neighbor nodes injectively in 3D space. Other models can also be analyzed from the perspective of encoding local environments. GNNs for PES only collect messages from atoms within the sphere of radius ${r}_{c}$ , so one of their message passing layers is equivalent to encoding the local environments in Definition 2.3. When mapping local environments injectively, a single message passing layer reaches maximum expressivity.
+
+Some popular models are under-expressive. For example, as shown in Figure 2a, SchNet [4] only considers the distance between atoms and neglects the angular information, leading to the inability to differentiate some simple local environments. Moreover, Figure 2b illustrates that though DimeNet [5] adds angular information to message passing, its expressivity is still limited, which may be attributed to the loss of high-order geometric information like dihedral angle.
+
+In contrast, no information loss will happen when we use the coordinates projected on the frame.
+
+Theorem 4.1. There exists a function $\phi$ such that, given a frame ${\overrightarrow{E}}_{i}$ of atom $i$, $\phi$ encodes the local environment of atom $i$ injectively into atom $i$'s embedding:
+
+$$
+\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\operatorname{Concatenate}\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{s}_{j}}\right) }\right) }\right) . \tag{4}
+$$
+
+Theorem 4.1 shows that an ordinary message passing layer can encode local environments injectively with coordinate projection as an edge feature.
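+
+As a concrete illustration of the structure of Equation 4, the PyTorch-style sketch below (our own simplification; module names, shapes, and hyperparameters are hypothetical) sums a per-neighbor network $\varphi$ over the concatenation of the projected relative coordinates and the neighbor features, and then applies $\rho$ to the aggregate.
+
+```python
+import torch
+import torch.nn as nn
+
+class LocalEnvEncoder(nn.Module):
+    """Sketch of phi in Eq. (4): encode a local environment from projected coordinates."""
+    def __init__(self, feat_dim: int, hidden: int = 64):
+        super().__init__()
+        self.varphi = nn.Sequential(nn.Linear(3 + feat_dim, hidden), nn.SiLU(),
+                                    nn.Linear(hidden, hidden))
+        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU())
+
+    def forward(self, r_ij, E_i, s_j):
+        # r_ij: (n_nbr, 3) relative coordinates; E_i: (3, 3) frame of atom i;
+        # s_j: (n_nbr, feat_dim) invariant neighbor features.
+        proj = r_ij @ E_i.T                            # coordinate projection P_{E_i}(r_ij)
+        msgs = self.varphi(torch.cat([proj, s_j], dim=-1))
+        return self.rho(msgs.sum(dim=0))               # permutation-invariant sum, then rho
+```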
+
+Passing messages across local environments. In physics, the interaction between distant atoms is usually not negligible. A single message passing layer, which only encodes atoms within the cutoff radius, loses such interactions. With multiple message passing layers, a GNN can pass messages between two distant atoms along a path of atoms and thus model the interaction.
+
+However, passing messages in multiple steps may lose information. For example, in Figure 2c, the two molecules differ because part of the molecule is rotated. However, the local environments do not change, so the node representations, the messages passed between nodes, and ultimately the energy prediction do not change, even though the two molecules have different energies. This problem also arises in previous PES models [4, 5]. Loss of information in multi-step message passing is a fundamental and challenging problem even for ordinary GNNs [13].
+
+Nevertheless, the solution is simple in this special case. We can eliminate the information loss by frame-frame projection, i.e., projecting ${\overrightarrow{E}}_{j}$ (the frame of atom $j$) onto ${\overrightarrow{E}}_{i}$ (the frame of atom $i$). For example, in Figure 2c, as the molecule rotates, the frame vectors also rotate, leading to a change in the frame-frame projection, so our model can differentiate the two molecules. We also prove the effectiveness of frame-frame projection in theory.
+
+Theorem 4.2. Let $\mathcal{G}$ denote the graph in which node $i$ represents atom $i$ and edge ${ij}$ exists iff ${r}_{ij} < {r}_{c}$, where ${r}_{c}$ is the cutoff radius. Assuming frames exist, if $\mathcal{G}$ is a connected graph with diameter $L$, a GNN with $L$ message passing layers of the following form can encode the whole molecule
+
+$$
+\phi \left( \left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij},{\overrightarrow{E}}_{j}}\right) \mid {r}_{ij} < {r}_{c}}\right\} \right) = \rho \left( {\mathop{\sum }\limits_{{{r}_{ij} < {r}_{c}}}\varphi \left( {\text{ Concatenate }\left( {{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{r}}_{ij}\right) ,{P}_{{\overrightarrow{E}}_{i}}\left( {\overrightarrow{E}}_{j}\right) ,{s}_{j}}\right) }\right) }\right) . \tag{5}
+$$
+
+$\left\{ {\left( {{s}_{j},{\overrightarrow{r}}_{ij}}\right) \mid j \in \{ 1,2,\ldots ,n\} }\right\}$ injectively into the embedding of node $i$.
+
+Figure 3: (a) The left part shows the symmetry of the water molecule, which has a rotation axis; its equivariant vectors must be parallel to this axis. Nevertheless, its geometry can be described with a frame composed of only one vector: the right part shows that the projection of ${\overrightarrow{r}}_{ij}$ on the frame, together with the distance between the two atoms, determines the angle $\theta$ and the position of atom $j$. (b) The left part is a molecule with central symmetry, so its global frame will be zero. However, when an atom (green) is selected as the center, its environment has no central symmetry.
+
+Theorem 4.2 shows that an ordinary GNN can encode the whole molecule injectively with coordinate projection and frame-frame projection as edge features.
+
+In conclusion, when frames exist, even an ordinary GNN can encode the molecule injectively, and thus reach maximum expressivity, with coordinate projection and frame-frame projection.
+
+§ 5 HOW TO BUILD A FRAME?
+
+We present our frame generation method only after discussing how frames are used, because the generation method's connection to expressivity is less direct. Whatever generation method is used, GNN-LF keeps its expressivity as long as the frame does not degenerate, where a frame degenerates iff it has fewer than three linearly independent vectors. This section provides one feasible frame generation method.
+
+A straightforward idea is to produce the frame from the invariant features of each atom, like the atomic number. However, any function of invariant features can only produce invariant representations, not equivariant frames. Therefore, we instead produce the frame from the local environment of each atom, which contains equivariant 3D coordinates. In Theorem 5.1, we prove that there exists a function mapping the local environment to an $\mathrm{O}\left( 3\right)$-equivariant frame.
+
+Theorem 5.1. There exists an $O\left( 3\right)$ -equivariant function $g$ mapping the local environment $L{E}_{i}$ to an equivariant representation in ${\mathbb{R}}^{3 \times 3}$ . The output forms a frame if $\forall o \in O\left( 3\right) ,o \neq I,o\left( {L{E}_{i}}\right) \neq L{E}_{i}$ .
+
+The frame produced by the function in Theorem 5.1 will not degenerate if the local environment has no symmetry elements, such as centers of inversion, axes of rotation, or mirror planes.
+
+Building a frame for a symmetric local environment remains a problem in our current implementation but will not seriously hamper our model. Firstly, our model can produce reasonable output even with symmetric input and is provably more expressive than a widely used model SchNet [4] (see Appendix G). Secondly, symmetric molecules are rare and form a zero-measure set. In our two representative real-world datasets, less than ${0.01}\%$ of molecules (about ten molecules in the whole datasets of several hundred thousand molecules) are symmetric. Thirdly, symmetric geometry may be captured with a degenerate frame. As shown in Figure 3a, water is a symmetric molecule. We can use a frame with one vector to describe its geometry. Based on node identity features and relational pooling [26], we also propose a scheme in Appendix H to completely solve the expressivity loss caused by degeneration. However, for scalability, we do not use it in GNN-LF.
+
+A message passing layer for frame generation. The existence of the frame generation function is proved in Theorem 5.1. Here we demonstrate how to implement it. There exists a universal framework for approximating $\mathrm{O}\left( 3\right)$-equivariant functions [15] which can be used to implement the function in Theorem 5.1. For scalability, we use a simplified form of that framework which performs well empirically:
+
+$$
+{\overrightarrow{E}}_{i} = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{g}^{\prime }\left( {{r}_{ij},{s}_{j}}\right) \circ \frac{{\overrightarrow{r}}_{ij}}{{r}_{ij}}, \tag{6}
+$$
+
+where ${g}^{\prime }$ maps invariant features and distance to invariant weights and the entire framework reduces to a message passing process. The derivation is detailed in Appendix B.
+
+Local frame vs. global frame. With the message passing framework in Equation 6, an individual frame, called a local frame, is produced for each atom. These local frames can also be summed to produce a global frame.
+
+$$
+\overrightarrow{E} = \mathop{\sum }\limits_{{i = 1}}^{n}{\overrightarrow{E}}_{i} \tag{7}
+$$
+
+The global frame can replace local frames and keep the invariance of energy prediction. All previous analysis will still be valid if the frame degeneration does not happen. However, the global frame is more likely to degenerate than local frames. As shown in Figure 3b, the benzene molecule has central symmetry and produces a zero global frame. However, when choosing each atom as the center, the central symmetry is broken, and a non-zero local frame can be produced. We further formalize this intuition and prove that the global frame is more likely to degenerate in Appendix I.
+
+In conclusion, we can generate local frames with a message passing layer.
+
+§ 6 GNN WITH LOCAL FRAME
+
+We formally introduce our GNN with local frame (GNN-LF) model. The whole architecture is detailed in Appendix C. The time and space complexity are $O\left( {Nn}\right)$ , where $N$ is the number of atoms in the molecule, and $n$ is the maximum number of neighbor atoms of one atom.
+
+Notations. Let $F$ denote the hidden dimension. We first convert the input features, coordinates $\overrightarrow{r} \in {\mathbb{R}}^{N \times 3}$ and atomic numbers $z \in {\mathbb{N}}^{N}$ , to a graph. The initial node feature ${s}_{i}^{\left( 0\right) } \in {\mathbb{R}}^{F}$ is an embedding of the atomic number ${z}_{i}$ . Edge ${ij}$ has two features: the edge weight ${w}_{ij}^{\left( e\right) } = \operatorname{cutoff}\left( {r}_{ij}\right)$ (where cutoff means the cutoff function), and the radial basis expansion of the distance ${s}_{ij}^{\left( e\right) } = \operatorname{rbf}\left( {r}_{ij}\right)$ . Edge weight ${w}_{ij}^{\left( e\right) }$ is not necessary for expressivity. However, to ensure that the energy prediction is a smooth function of coordinates, messages passed among atoms must be scaled with ${w}_{ij}^{\left( e\right) }$ [19]. These special functions are detailed in Appendix C.
+
+Producing frame. The message passing scheme for producing local frames implements Equation (6).
+
+$$
+{\overrightarrow{E}}_{i} = \mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{w}_{ij}^{\left( e\right) }\left( {{f}_{1}\left( {s}_{ij}^{\left( e\right) }\right) \odot {s}_{j}}\right) \circ \frac{{\overrightarrow{r}}_{ij}}{{r}_{ij}}, \tag{8}
+$$
+
+where ${f}_{1}$ is an MLP. Note that the frame ${\overrightarrow{E}}_{i} \in {\mathbb{R}}^{F \times 3}$ in the implementation is not restricted to three vectors; the number of vectors equals the hidden dimension. This design needs no extra linear layer to change the hidden dimension. Moreover, our theoretical analysis is still valid because a frame in ${\mathbb{R}}^{F \times 3}$ can be considered an ensemble of $\frac{F}{3}$ frames in ${\mathbb{R}}^{3 \times 3}$.
+
+Coordinate projection is as follows,
+
+$$
+{d}_{ij}^{1} = \frac{1}{{r}_{ij}}{\overrightarrow{r}}_{ij}{\overrightarrow{E}}_{i}^{T}. \tag{9}
+$$
+
+In the implementation, the projection is scaled by $\frac{1}{{r}_{ij}}$ so that the distance information is carried only by ${s}_{ij}^{\left( e\right) }$.
+
+Frame-frame projection. ${\overrightarrow{E}}_{i}{\overrightarrow{E}}_{j}^{T}$ is a large matrix. Therefore, we only use the diagonal elements of the projection. To keep the expressivity, we transform the frame with two ordinary linear layers.
+
+$$
+{d}_{ij}^{2} = \operatorname{diag}\left( {{W}_{1}{\overrightarrow{E}}_{j}{\overrightarrow{E}}_{i}^{T}{W}_{2}^{T}}\right) . \tag{10}
+$$
+
+Adding the projections to edge features, we get a graph with invariant features only.
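+
+The sketch below (our own simplified rendering of Equations (8)-(10) for a single center atom; tensor shapes and names are illustrative) produces the local frame from weighted unit directions, then computes the coordinate projection ${d}_{ij}^{1}$ and the diagonal frame-frame projection ${d}_{ij}^{2}$ as invariant edge features.
+
+```python
+import torch
+import torch.nn as nn
+
+feat_dim, rbf_dim, n_nbr = 64, 16, 8
+f1 = nn.Sequential(nn.Linear(rbf_dim, feat_dim), nn.SiLU(),
+                   nn.Linear(feat_dim, feat_dim))             # f_1 in Eq. (8)
+W1 = nn.Linear(feat_dim, feat_dim, bias=False)                # W_1 in Eq. (10)
+W2 = nn.Linear(feat_dim, feat_dim, bias=False)                # W_2 in Eq. (10)
+
+# Toy per-neighbor inputs for one center atom i.
+r_vec = torch.randn(n_nbr, 3)                  # relative coordinates r_ij
+r_len = r_vec.norm(dim=-1, keepdim=True)       # distances r_ij
+w_e   = torch.rand(n_nbr, 1)                   # cutoff weights w^(e)_ij
+s_e   = torch.randn(n_nbr, rbf_dim)            # radial basis expansion rbf(r_ij)
+s_nbr = torch.randn(n_nbr, feat_dim)           # neighbor features s_j
+
+# Eq. (8): local frame of atom i, one 3D vector per hidden channel.
+unit = r_vec / r_len
+E_i = ((w_e * f1(s_e) * s_nbr).unsqueeze(-1) * unit.unsqueeze(1)).sum(dim=0)   # (F, 3)
+
+# Eq. (9): coordinate projection of each neighbor direction on the frame.
+d1 = unit @ E_i.T                                                              # (n_nbr, F)
+
+# Eq. (10): diagonal of the linearly transformed frame-frame projection,
+# shown for a single neighboring frame E_j of the same shape.
+E_j = torch.randn(feat_dim, 3)
+d2 = torch.diagonal(W1.weight @ (E_j @ E_i.T) @ W2.weight.T)                   # (F,)
+```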
+
+GNN working on the invariant graph. The message passing scheme uses the form in Theorem 4.1. Let ${s}_{i}^{\left( l\right) }$ denote the node representation produced by the ${l}^{\text{th}}$ message passing layer, with ${s}_{i}^{\left( 0\right) } = {s}_{i}$.
+
+$$
+{s}_{i}^{\left( l\right) } = \rho \left( {\mathop{\sum }\limits_{{j \neq i,{r}_{ij} < {r}_{c}}}{w}_{ij}^{\left( e\right) }\left( {{f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right) \odot {s}_{j}^{\left( l - 1\right) }}\right) }\right) , \tag{11}
+$$
+
+$$
+{f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right) = {g}_{1}\left( {s}_{ij}^{\left( e\right) }\right) \odot {g}_{2}\left( {{d}_{ij}^{1},{d}_{ij}^{2}}\right) . \tag{12}
+$$
+
+Table 1: Results on the MD17 dataset. Units: energy (E) (kcal/mol) and forces (F) (kcal/mol/Å).
+
+| Molecule | | FCHL | SchNet | DimeNet | GemNet | PaiNN | NequIP | TorchMD | GNN-LF |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Aspirin | E | 0.182 | 0.37 | 0.204 | - | 0.167 | - | 0.124 | 0.1342 |
+| | F | 0.478 | 1.35 | 0.499 | 0.2168 | 0.338 | 0.348 | 0.255 | 0.2018 |
+| Benzene | E | - | 0.08 | 0.078 | - | - | - | 0.056 | 0.0686 |
+| | F | - | 0.31 | 0.187 | 0.1453 | - | 0.187 | 0.201 | 0.1506 |
+| Ethanol | E | 0.054 | 0.08 | 0.064 | - | 0.064 | - | 0.054 | 0.0520 |
+| | F | 0.136 | 0.39 | 0.230 | 0.0853 | 0.224 | 0.208 | 0.116 | 0.0814 |
+| Malonaldehyde | E | 0.081 | 0.13 | 0.104 | - | 0.091 | - | 0.079 | 0.0764 |
+| | F | 0.245 | 0.66 | 0.383 | 0.1545 | 0.319 | 0.337 | 0.176 | 0.1259 |
+| Naphthalene | E | 0.117 | 0.16 | 0.122 | - | 0.166 | - | 0.085 | 0.1136 |
+| | F | 0.151 | 0.58 | 0.215 | 0.0553 | 0.077 | 0.097 | 0.060 | 0.0550 |
+| Salicylic acid | E | 0.114 | 0.20 | 0.134 | - | 0.166 | - | 0.094 | 0.1081 |
+| | F | 0.221 | 0.85 | 0.374 | 0.1048 | 0.195 | 0.238 | 0.135 | 0.1005 |
+| Toluene | E | 0.098 | 0.12 | 0.102 | - | 0.095 | - | 0.074 | 0.0930 |
+| | F | 0.203 | 0.57 | 0.216 | 0.0600 | 0.094 | 0.101 | 0.066 | 0.0543 |
+| Uracil | E | 0.104 | 0.14 | 0.115 | - | 0.106 | - | 0.096 | 0.1037 |
+| | F | 0.105 | 0.56 | 0.301 | 0.0969 | 0.139 | 0.173 | 0.094 | 0.0751 |
+| Average rank | | 3.93 | 6.63 | 5.38 | 2.00 | 4.36 | 5.25 | 2.25 | 1.75 |
+
+where $\rho$ is an MLP. We further use the filter decomposition design of Equation (12): the distance information ${s}_{ij}^{\left( e\right) }$ has already been expanded in a set of basis functions and is therefore easy to learn, so a linear layer ${g}_{1}$ suffices, whereas the projections require a more expressive MLP ${g}_{2}$.
+
+Sharing filters. Generating different filters ${f}_{2}\left( {{s}_{ij}^{\left( e\right) },{d}_{ij}^{1},{d}_{ij}^{2}}\right)$ for each message passing layer is time-consuming. Therefore, we share filters between different layers. Experimental results show that sharing filters leads to minor performance loss and significant scalability gain.
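+
+The following PyTorch-style sketch (again our own simplification, with hypothetical shapes and a fixed-size neighbor list) shows the invariant message passing of Equations (11)-(12) with the decomposed filter shared across layers.
+
+```python
+import torch
+import torch.nn as nn
+
+class InvariantMessagePassing(nn.Module):
+    """Sketch of Eqs. (11)-(12): message passing on invariant edge features."""
+    def __init__(self, feat_dim: int, rbf_dim: int, proj_dim: int, num_layers: int = 4):
+        super().__init__()
+        # proj_dim is the combined size of the projections d1 and d2.
+        self.g1 = nn.Linear(rbf_dim, feat_dim)                       # linear filter on rbf
+        self.g2 = nn.Sequential(nn.Linear(proj_dim, feat_dim), nn.SiLU(),
+                                nn.Linear(feat_dim, feat_dim))       # MLP filter on projections
+        self.rho = nn.ModuleList([nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.SiLU())
+                                  for _ in range(num_layers)])
+
+    def forward(self, s, nbr_idx, w_e, s_e, d1, d2):
+        # s: (N, F) node features; nbr_idx: (N, n) neighbor indices;
+        # w_e: (N, n, 1) cutoff weights; s_e: (N, n, rbf_dim); d1, d2: projections.
+        filt = self.g1(s_e) * self.g2(torch.cat([d1, d2], dim=-1))   # Eq. (12), shared filter
+        for rho in self.rho:                                         # Eq. (11), per layer
+            msgs = w_e * (filt * s[nbr_idx])                         # gather neighbor features
+            s = rho(msgs.sum(dim=1))
+        return s
+```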
+
+§ 7 EXPERIMENT
+
+In this section, we compare GNN-LF with existing models and perform an ablation analysis. We report the mean absolute error (MAE) on the test set (the lower, the better). All our results are averaged over three random splits. Baselines' results are taken from their papers. The best and second-best results are shown in bold and underlined, respectively, in the tables. Experimental settings are detailed in Appendix D.
+
+§ 7.1 MODELING PES
+
+We first evaluate GNN-LF for modeling PES on the MD17 dataset [27], which consists of MD trajectories of small organic molecules. GNN-LF is compared with a manual descriptor model (FCHL [18]), invariant models (SchNet [4], DimeNet [5], GemNet [8]), a model using irreducible representations (NequIP [9]), and models using equivariant representations (PaiNN [7] and TorchMD [10]). The results are shown in Table 1. GNN-LF outperforms all the baselines on $9/{16}$ targets and achieves the second-best performance on the other 7 targets. Our model achieves ${10}\%$ lower loss on average than GemNet, the best baseline. This outstanding performance verifies the effectiveness of the local frame method for modeling PES. Moreover, our model also uses fewer parameters and only about 30% of the time and 10% of the GPU memory compared with the baselines, as shown in Appendix E.
+
+§ 7.2 ABLATION STUDY
+
+We perform an ablation study to verify our model designs. The results are shown in Table 2.
+
+On average, ablating frame-frame projection (NoDir2) leads to ${20}\%$ higher MAE, which verifies the necessity of frame-frame projection. The Global column replaces the local frames with the global frame, resulting in 100% higher loss, which verifies the advantage of local frames over the global frame. Ablating filter decomposition (NoDecomp) leads to 9% higher loss, indicating the advantage of processing distances and projections separately. Although using different filters for each message passing layer (NoShare) requires much more computation time (${1.67} \times$) and many more parameters (${3.55} \times$), it only leads to 0.01% lower loss on average, illustrating that sharing filters does little harm to expressivity.
+
+Table 2: Ablation results on the MD17 dataset. Units: energy (E) (kcal/mol) and forces (F) (kcal/mol/Å). GNN-LF does not use ${d}^{2}$ for some molecules, so the NoDir2 column is empty for those molecules.
+
+| Molecule | | GNN-LF | NoDir2 | Global | NoDecomp | GNN-LF | NoShare |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Aspirin | E | 0.1342 | 0.1435 | 0.2280 | 0.1411 | 0.1342 | 0.1364 |
+| | F | 0.2018 | 0.2799 | 0.6894 | 0.2622 | 0.2018 | 0.1979 |
+| Benzene | E | 0.0686 | 0.0716 | 0.0972 | 0.0688 | 0.0686 | 0.0713 |
+| | F | 0.1506 | 0.1583 | 0.3520 | 0.1499 | 0.1506 | 0.1507 |
+| Ethanol | E | 0.0520 | 0.0532 | 0.0556 | 0.0518 | 0.0520 | 0.0514 |
+| | F | 0.0814 | 0.0930 | 0.1465 | 0.0847 | 0.0814 | 0.0751 |
+| Malonaldehyde | E | 0.0764 | 0.0776 | 0.0923 | 0.0765 | 0.0764 | 0.0790 |
+| | F | 0.1259 | 0.1466 | 0.3194 | 0.1321 | 0.1259 | 0.1210 |
+| Naphthalene | E | 0.1136 | 0.1152 | 0.1276 | 0.1254 | 0.1136 | 0.1168 |
+| | F | 0.0550 | 0.0834 | 0.2069 | 0.0553 | 0.0550 | 0.0547 |
+| Salicylic acid | E | 0.1081 | 0.1087 | 0.1224 | 0.1123 | 0.1081 | 0.1091 |
+| | F | 0.1048 | 0.1328 | 0.2890 | 0.1399 | 0.1048 | 0.1012 |
+| Toluene | E | 0.0930 | 0.0942 | 0.1000 | 0.0932 | 0.0930 | 0.0942 |
+| | F | 0.0543 | 0.0770 | 0.1659 | 0.0695 | 0.0543 | 0.0519 |
+| Uracil | E | 0.1037 | 0.1069 | 0.1075 | 0.1053 | 0.1037 | 0.1042 |
+| | F | 0.0751 | 0.0964 | 0.1901 | 0.0825 | 0.0751 | 0.0754 |
+
+Table 3: Results on the QM9 dataset.
+
+| Target | Unit | SchNet | DimeNet++ | Cormorant | PaiNN | TorchMD | GNN-LF |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| $\mu$ | D | 0.033 | 0.0297 | 0.038 | 0.012 | 0.002 | 0.013 |
+| $\alpha$ | ${a}_{0}^{3}$ | 0.235 | 0.0435 | 0.085 | 0.045 | 0.01 | 0.0353 |
+| $\epsilon_{\mathrm{HOMO}}$ | meV | 41 | 24.6 | 34 | 27.6 | 21.2 | 23.5 |
+| $\epsilon_{\mathrm{LUMO}}$ | meV | 34 | 19.5 | 38 | 20.4 | 17.8 | 17.0 |
+| ${\Delta \epsilon }$ | meV | 63 | 32.6 | 61 | 45.7 | 38 | 37.1 |
+| $\langle {R}^{2}\rangle$ | ${a}_{0}^{2}$ | 0.073 | 0.331 | 0.961 | 0.066 | 0.015 | 0.037 |
+| ZPVE | meV | 1.7 | 1.21 | 2.027 | 1.28 | 2.12 | 1.19 |
+| ${U}_{0}$ | meV | 14 | 6.32 | 22 | 5.85 | 6.24 | 5.30 |
+| $U$ | meV | 19 | 6.28 | 21 | 5.83 | 6.30 | 5.24 |
+| $H$ | meV | 14 | 6.53 | 21 | 5.98 | 6.48 | 5.48 |
+| $G$ | meV | 14 | 7.56 | 20 | 7.35 | 7.64 | 6.84 |
+| ${C}_{v}$ | cal/mol/K | 0.033 | 0.023 | 0.026 | 0.024 | 0.026 | 0.022 |
+
+§ 7.3 OTHER CHEMICAL PROPERTIES
+
+Though designed for PES, our model can also predict other properties directly. The QM9 dataset [28] consists of ${134}\mathrm{k}$ stable small organic molecules. The task is to predict properties of these molecules from their atomic numbers and coordinates. We compare our model with invariant models (SchNet [4], DimeNet++ [29]), a model using irreducible representations (Cormorant [6]), and models using equivariant representations (PaiNN [7] and TorchMD [10]). Results are shown in Table 3. Our model outperforms all other models on $7/{12}$ tasks and achieves the second-best performance on 4 of the remaining 5 tasks, which illustrates that the local frame method has the potential to be applied to other fields.
+
+§ 8 CONCLUSION
+
+This paper proposes GNN-LF, a simple and effective molecular potential energy surface model. It introduces a novel local frame method to decouple the symmetry requirement and capture rich geometric information. In theory, we prove that even ordinary GNNs can reach maximum expressivity with the local frame method. Furthermore, we propose ways to construct local frames. In experiments, our model outperforms all baselines in both scalability (using only 30% of the time and 10% of the GPU memory) and accuracy (10% lower loss). The ablation study also verifies the effectiveness of our designs.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea4d117771f043a3b80fbcbeb883cb550a9fd1dd
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,428 @@
+# Graph Reinforcement Learning for Network Control via Bi-Level Optimization
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Dynamic network flow models have been extensively studied and widely used in the past decades to formulate many problems with great real-world impact, such as transportation, supply chain management, power grid control, and more. Within this context, time-expansion techniques currently represent a generic approach for solving control problems over dynamic networks. However, the complexity of these methods does not allow traditional approaches to scale to large networks, especially when these need to be solved recursively over a receding horizon (e.g., to yield a sequence of actions in model predictive control). Moreover, tractable optimization-based approaches are limited to simple linear deterministic settings, and are not able to handle environments with stochastic, non-linear, or unknown dynamics. In this work, we present dynamic network flow problems through the lens of reinforcement learning and propose a graph network-based framework that can handle a wide variety of problems and learn efficient algorithms without significantly compromising optimality. Instead of a naive and poorly-scalable formulation, in which agent actions (and thus network outputs) consist of actions on edges, we present a two-phase decomposition. The first phase consists of an RL agent specifying desired outcomes to the actions. The second phase exploits the problem structure to solve a convex optimization problem and achieve (as best as possible) these desired outcomes. This formulation leads to dramatically improved scalability and performance. We further highlight a collection of features that are potentially desirable to system designers, investigate design decisions, and present experiments showing the utility, scalability, and flexibility of our framework.
+
+## 1 Introduction
+
+Many economically critical real-world systems are well-modelled through the lens of control on graphs. Power generation [1-3]; road, rail, and air transportation systems [4, 5]; complex manufacturing systems, supply chain, and distribution networks [6, 7]; telecommunication networks [8-10]; and many other systems are fundamentally the problem of controlling flows of products, vehicles, or other quantities on graph-structured networks. Traditionally, these problems are approached through the definition of a dynamic network flow model (DNF) [11, 12]. Within this class of problems, Ford and Fulkerson [13, 14] proposed a generic approach, showing how one can use time-expansion techniques to (i) convert dynamic networks with discrete time horizon into static networks, and (ii) solve the problem using algorithms developed for static networks. However, this approach leads to networks that grow exponentially in the input size of the problem, thus not allowing traditional methods to scale to large networks. Moreover, the design of good heuristics or approximation algorithms for network flow problems often requires significant specialized knowledge and trial-and-error.
+
+In this paper, we argue that data-driven strategies have the potential to automate this challenging, tedious process and learn efficient algorithms without compromising optimality. To do so, we propose a graph network-based reinforcement learning framework that can handle a wide variety of network control problems. Specifically, we introduce a bi-level formulation that leads to dramatically improved scalability and performance by combining the strengths of mathematical optimization and learning-based approaches.
+
+## 2 Problem Setting: Dynamic Network Control
+
+To outline our problem formulation, we first define the linear problem, which is a classic convex problem formulation. We will then define a nonlinear, dynamic, non-convex problem setting that better corresponds to real-world instances. Much of the classical flow control literature and practice substitute the former linear problem for the latter nonlinear problem to yield tractable optimization problems [15-17]; we leverage the linear problem as an important algorithmic primitive. We consider the control of ${N}_{c}$ commodities on graphs, for example, vehicles in a transportation problem. A graph $\mathcal{G} = \{ \mathcal{V},\mathcal{E}\}$ is defined as a set $\mathcal{V}$ of ${N}_{v}$ nodes, and a set $\mathcal{E}$ of ${N}_{e}$ ordered pairs of nodes $(i, j)$ called edges, each described by a traversal time ${t}_{ij}$. We use ${\mathcal{N}}^{ + }\left( i\right) ,{\mathcal{N}}^{ - }\left( i\right) \subseteq \mathcal{V}$ for the sets of nodes having edges pointing away from or toward node $i$, respectively. We use ${s}_{i}^{t}\left( k\right) \in \mathbb{R}$ to denote the quantity of commodity $k$ at node $i$ and time $t$.${}^{1}$
+
+The Linear Network Control Problem. Within the linear model, our commodity quantities evolve in time as
+
+$$
+{s}_{i}^{t + 1} = {s}_{i}^{t} + {f}_{i}^{t} + {e}_{i}^{t},\;\forall i \in \mathcal{V} \tag{1}
+$$
+
+where ${f}_{i}^{t}$ denotes the change due to flow of commodities along edges and ${e}_{i}^{t}$ denotes the change due to exchange between commodities at the same graph node. We refer to this expression as the conservation of flow. We also accrue money as
+
+$$
+{m}^{t + 1} = {m}^{t} + {m}_{f}^{t} + {m}_{e}^{t}, \tag{2}
+$$
+
+where ${m}_{f}^{t},{m}_{e}^{t} \in \mathbb{R}$ denote the money gained due to flows and exchanges, respectively. Money can also be replaced with any other form of scalar reward, although it may be subject to, e.g., non-negativity constraints and is thus different from the notion of reward in the RL problem. Our overall problem formulation will typically be to control flows and exchanges so as to maximize money over one or more steps, subject to additional constraints such as flow limits through a particular edge. Please refer to Appendix A for a formal treatment of flow and exchange quantities, together with practical constraints within network control problems.
+
+The Nonlinear Dynamic Network Control Problem. The previous subsection presented a linear problem formulation that yields a convex optimization problem in the decision variables, i.e., the chosen flow and exchange values. However, the formulation is limited by the assumption of linearity and thus fails to capture a number of elements typical of real-world systems (please refer to Appendix A for a more complete treatment). Crucially, these nonlinear, time-varying, stochastic, or unknown elements lead to severe difficulties in applying the convex formulation derived in the previous subsection. A common approach is to solve a linearized version of the nonlinear problem at each timestep, which is a form of model predictive control (MPC), although this essentially discards some elements of the problem to achieve computational tractability. In this paper, we focus on solving the nonlinear problem (reflecting real, highly general problem statements) via a bi-level optimization approach, wherein the linear problem (which has been shown to be extremely useful in practice) is used as an inner control primitive.
+
+## 3 Methodology: The Bi-Level Formulation
+
+In this section we describe the bi-level formulation that is the primary contribution of this paper. We further introduce a more formal Markov decision process (MDP) for our problem setting, together with a discussion on practical elements for real-world problem formulations in Appendix B.
+
+The Bi-Level Formulation. We consider a discounted infinite-horizon MDP $\mathcal{M} = \left( {\mathcal{S},\mathcal{A}, P, R,\gamma }\right)$. Here, ${s}^{t} \in \mathcal{S}$ is the state and ${a}^{t} \in \mathcal{A}$ is the action, both at time $t$. The state in this setting consists of the commodity values at the nodes, as well as other available information; actions correspond to the aforementioned decision variables. The dynamics $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \left\lbrack {0,1}\right\rbrack$ are probabilistic, with $P\left( {{s}^{t + 1} \mid {s}^{t},{a}^{t}}\right)$ denoting a conditional distribution over ${s}^{t + 1}$. The reward function $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is real-valued, and not limited to strictly positive or negative rewards. Finally, we write the discount factor as $\gamma$, as is typical in the infinite-horizon RL formulation, although it is straightforward to instead consider a finite-horizon setting. Please refer to Appendix B.1 for further treatment of the MDP.
+
+The overall goal of the reinforcement learning problem setting is to find a policy ${\widetilde{\pi }}^{ * } \in \widetilde{\Pi }$ (where $\widetilde{\Pi }$ is the space of realizable Markovian policies) such that ${\widetilde{\pi }}^{ * } \in \arg \mathop{\max }\limits_{{\widetilde{\pi } \in \widetilde{\Pi }}}{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack$ ,
+
+---
+
+${}^{1}$ We consider several reduced views over these quantities, and maintain several notational rules. We write ${s}_{i}^{t} \in {\mathbb{R}}^{{N}_{c}}$ to denote the vector of all commodities; we write ${s}^{t}\left( k\right) \in {\mathbb{R}}^{{N}_{v}}$ to denote the vector of commodity $k$ at all nodes; we write ${s}_{i}\left( k\right) \in {\mathbb{R}}^{T}$ to denote commodity $k$ at node $i$ for all times $t$ . We can also apply any combination of these notation rules, yielding for example $s \in {\mathbb{R}}^{T \times {N}_{c} \times {N}_{v}}$ .
+
+---
+
+where $\tau = \left( {{s}^{0},{a}^{0},{s}^{1},{a}^{1},\ldots }\right)$ denotes the trajectory of states and actions. This policy formulation requires specifying a distribution over all flow/exchange actions, which may be an extremely large space. We instead consider a bi-level formulation
+
+$$
+{\pi }^{ * } \in \underset{\pi \in \Pi }{\arg \max }{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack \;\text{ s.t. }{a}^{t} = \operatorname{LCP}\left( {{\widehat{s}}^{t + 1},{s}^{t}}\right) \tag{3}
+$$
+
+where we consider a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ , which maps from the current state to a goal next state (or subset of the state, such as commodity values only). This goal next state is used in the linear control problem $\left( {\operatorname{LCP}\left( {\cdot , \cdot }\right) }\right)$ , which leverages a (slightly modified) one-step version of the linear problem formulation of Section 2 to map from desired next state to action. Thus, the resulting formulation is a bi-level optimization problem, whereby the policy $\widetilde{\pi }$ is the composition of the policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ and the solution to the linear control problem. Specifically, given a sample of ${\widehat{s}}^{t + 1}$ from the stochastic policy, we select concrete flow and exchange actions by solving the linear control problem, defined as
+
+$$
+\begin{aligned}
+\underset{{a}^{t}}{\arg \min }\;\; & d\left( {\widehat{s}}^{t + 1},{s}^{t + 1}\right) - R\left( {s}^{t},{a}^{t}\right) && \text{(4a)} \\
+\text{s.t. } & \text{Conservation of flow (1); Net flow (5); Flow reward (6);} && \text{(4b)} \\
+ & \text{Exchange conditions (7); Other constraints, e.g., (8) or (9)} && \text{(4c)}
+\end{aligned}
+$$
+
+where $d\left( {\cdot , \cdot }\right)$ is a chosen convex metric which penalizes deviation from the desired next state. The resultant problem, consisting of a convex objective subject to linear constraints, is convex and thus may be easily and inexpensively solved to choose actions ${a}^{t}$, even for very large problems.
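+
+For concreteness, the following CVXPY sketch solves a heavily simplified instance of the inner problem in Equation (4): a single commodity, no exchanges, an $\ell_1$ deviation penalty, and hypothetical variable names. The actual formulation includes the full constraint set of Appendix A.
+
+```python
+import cvxpy as cp
+import numpy as np
+
+def linear_control_problem(s_t, s_hat, A_in, A_out, value, f_max):
+    """One-step LCP sketch: choose edge flows so that the next state tracks the goal s_hat.
+
+    s_t, s_hat : (N_v,) current and desired commodity values (single commodity).
+    A_in, A_out: (N_v, N_e) 0/1 incidence matrices for inflows and outflows.
+    value      : (N_e,) per-unit flow reward (negative = cost); f_max: (N_e,) capacities.
+    """
+    f = cp.Variable(A_in.shape[1], nonneg=True)            # flows on edges
+    s_next = s_t + A_in @ f - A_out @ f                    # conservation of flow, Eq. (1)
+    reward = value @ f                                     # money change due to flows
+    objective = cp.Minimize(cp.norm1(s_hat - s_next) - reward)
+    constraints = [f <= f_max, s_next >= 0]                # capacity and non-negativity
+    cp.Problem(objective, constraints).solve()
+    return f.value
+
+# Toy usage on a 3-node path with edges (0 -> 1) and (1 -> 2).
+A_out = np.array([[1., 0.], [0., 1.], [0., 0.]])
+A_in  = np.array([[0., 0.], [1., 0.], [0., 1.]])
+flows = linear_control_problem(
+    s_t=np.array([2., 0., 0.]), s_hat=np.array([0., 0., 2.]),
+    A_in=A_in, A_out=A_out, value=np.array([-0.1, -0.1]), f_max=np.array([5., 5.]))
+```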
+
+As is standard in reinforcement learning, we will aim to solve this problem by learning the policy from data. This may be in the form of online learning [18] or learning from offline data [19]. There are large bodies of work on both problems, and our presentation will generally aim to be as agnostic as possible to the underlying reinforcement learning algorithm used. Of critical importance is the fact that the majority of reinforcement learning algorithms use likelihood ratio gradient estimation (typically referred to as the REINFORCE gradient estimator in RL [20]), which does not require path-wise backpropagation through the inner problem.
+
+We also note that our formulation assumes access to a model (the linear problem) that is a reasonable approximation of the true dynamics over short horizons. This short-term correspondence is central to our formulation: we exploit exact optimization when it is useful, and otherwise push the impacts of the nonlinearity over time in the learned policy. We assume this model is known in our experiments, but it could be identified independently. Please see Appendix C.1, C.2, and C.4 for a broader discussion.
+
+Network Architecture. To exploit the network structure of the problem we introduce a policy graph neural network architecture based on message passing neural networks [21] (Appendix B.2). As introduced in this section, the goal of RL is to learn a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ mapping to goal next states. Concretely, to obtain a valid probability density over next states, we define the output of our policy network to represent the concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{v}}$ of a Dirichlet distribution, such that ${\widehat{s}}^{t + 1} \sim \operatorname{Dir}\left( {{\widehat{s}}^{t + 1} \mid \alpha }\right)$ , although alternate output formulations are possible.
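+
+A minimal sketch of such a policy head is shown below (our own illustration; layer sizes and names are hypothetical). Per-node embeddings from the message passing layers are mapped to strictly positive Dirichlet concentration parameters, from which a goal distribution of commodity over nodes is sampled together with its log-probability for the likelihood-ratio gradient.
+
+```python
+import torch
+import torch.nn as nn
+
+class DirichletPolicyHead(nn.Module):
+    """Maps per-node embeddings to Dirichlet concentrations and samples a goal state."""
+    def __init__(self, embed_dim: int):
+        super().__init__()
+        self.out = nn.Sequential(nn.Linear(embed_dim, 1), nn.Softplus())   # alpha > 0
+
+    def forward(self, node_embeddings):
+        # node_embeddings: (N_v, embed_dim) output of the message passing layers.
+        alpha = self.out(node_embeddings).squeeze(-1) + 1e-6               # (N_v,)
+        dist = torch.distributions.Dirichlet(alpha)
+        s_hat = dist.sample()              # desired distribution of commodity over nodes
+        log_prob = dist.log_prob(s_hat)    # used by the REINFORCE-style gradient estimator
+        return s_hat, log_prob
+```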
+
+## 4 Experiments
+
+In this section, we compare against a number of benchmarks on an instance of network control with great real-world impact: the minimum cost flow problem. Within this context, the goal is to control commodities so as to move them from one or more source nodes to one or more sink nodes in the minimum time possible. Appendix E provides further details on both benchmarks and environments.
+
+Minimum cost flow through message passing. In this first experiment, we consider 3 different environments (Fig. 1), whose topologies require a different number of hops of message passing between source and sink nodes to select the best path. Results in Table 1 (2-hop, 3-hop, 4-hop) show that MPNN-RL achieves at least 87% of oracle performance. Table 1 further shows that agents based on graph convolutions (i.e., GCN [22], GAT [23]) fail to learn an effective flow optimization strategy. Following Xu et al. [24], we attribute this to the algorithmic alignment between the computational structure of MPNNs and the kind of computations needed to solve traditional network optimization problems (see Appendix C.3 for further discussion).
+
+Dynamic traversal times. In this experiment, we define time-dependent traversal times. In Fig. 2 and Table 1 (Dyn tt) we measure results on a dynamic network characterized by two change-points, i.e., time steps where the optimal path changes because of a change in traversal times. Results show how the proposed MPNN-RL is able to achieve above ${99}\%$ of oracle performance.
+
+Table 1: Average performance across multiple environments over 100 test episodes
+
+| Environment | Metric | Random | MLP-RL | GCN-RL | GAT-RL | MPNN-RL (ours) | Oracle |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 2-hops | Avg. Reward | 63 | 387 | 201 | 146 | 576 | 642 |
+| | % Oracle | 9.9% | 60.2% | 31.3% | 22.9% | 89.7% | - |
+| 3-hops | Avg. Reward | 1013 | 1084 | 1385 | 1257 | 1803 | 2014 |
+| | % Oracle | 50.3% | 53.8% | 68.7% | 62.4% | 89.5% | - |
+| 4-hops | Avg. Reward | 2033 | 2185 | 2303 | 2198 | 2807 | 3223 |
+| | % Oracle | 63.1% | 67.8% | 71.4% | 68.2% | 87.1% | - |
+| Dyn tt | Avg. Reward | -546 | -18 | 437 | 400 | 2306 | 2327 |
+| | % Oracle | -23.4% | -0.7% | 18.7% | 17.1% | 99.1% | - |
+| Dyn top | Avg. Reward | 810 | N/A | 1016 | 827 | 1599 | 1904 |
+| | % Oracle | 42.5% | N/A | 53.4% | 43.4% | $\mathbf{{83.9}\% }$ | - |
+| Capacity | Avg. Reward | 1495 | 1498 | 1557 | 1503 | 2145 | 2389 |
+| | % Oracle | 62.6% | 62.7% | 65.2% | 62.9% | 89.8% | - |
+| | Success Rate | 82% | 82% | 87% | 80% | 87% | 88% |
+| Multi-com | Avg. Reward | 2191 | 4045 | 3278 | 3206 | 6986 | 9701 |
+| | % Oracle | 22.5% | 41.7% | 33.8% | 33.0% | 72.0% | - |
+
+Dynamic topology. In this experiment, we assume a time-dependent topology, i.e., nodes and edges can be dropped or added during an episode. This case is substantially different from what most traditional approaches are able to handle: the locality of MPNN agents, together with the one-step implicit planning of RL, enables our framework to deal with multiple graph configurations during the same episode. Fig. 3 and Table 1 (Dyn top) show how MPNN-RL achieves 83.9% of oracle performance, clearly outperforming the other benchmarks. Crucially, these results highlight how agents based on MLPs result in highly inflexible network controllers, limited to a fixed topology.
+
+Capacity constraints. In this experiment, we relax the assumption that capacities ${\bar{f}}_{ij}$ are always able to accommodate any flow on the graph. Compared to previous sections, the lower capacities introduce the possibility of infeasible states. To measure this, the Success Rate computes the percentage of episodes which have been terminated successfully. Results in Table 1 (Capacity) highlight how MPNN-RL is able to achieve ${89.8}\%$ of oracle performance while being able to successfully terminate ${87}\%$ of episodes. Qualitatively, Fig. 4 shows a visualization of the policy for a specific test episode. The plots show how the MPNN-RL is able to learn the effects of capacity on the optimal strategy by allocating flow to a different node when the corresponding edge is approaching its capacity limit.
+
+Multi-commodity. In this scenario, we extend the architecture to deal with multiple commodities and source-sink combinations. Results in Table 1 (Multi-com) and Fig. 5 show how MPNN-RL is able to recover distinct policies for each policy head, and thus to operate multi-commodity flows successfully within the same network.
+
+Computational analysis. We study the computational cost of MPNN-RL compared to MPC-based solutions. As shown in Fig. 6, we compare the time necessary to compute a single network flow decision. We do so across varying dimensions of the underlying graph, ranging from 10 up to 400 nodes. As verified by this experiment, learning-based approaches exhibit computational complexity linear in the number of nodes and graph connectivity, without significant decay in performance.
+
+## 5 Outlook and Limitations
+
+Research in network flow models, in both theory and practice, is largely scattered across the control, management science, and optimization literature, potentially hindering scientific progress. In this work, we propose a general framework that could enable learning-based approaches to help address the open challenges in this space: handling nonlinear dynamics and scalability, among others. In the hope of fostering a unification of tools among the reinforcement learning and network control communities, we aimed to (i) keep the exposition as agnostic as possible to the underlying learning algorithm, and (ii) showcase the versatility of our framework through numerous controlled experiments. However, what we present here should be considered, in our opinion, as exciting preliminary results aiming to gather more traction in the ML community towards the solution of hugely impactful real-world problems in network control. Crucially, before learning-based frameworks can be considered a concrete alternative to current standards, several promising research directions remain open, particularly the extension of these concepts to large-scale applications.
+
+## References
+
+[1] Daniel Bienstock, Michael Chertkov, and Sean Harnett. Chance-constrained optimal power flow: Risk-aware network control under uncertainty. SIAM Review, 56(3):461-495, 2014. 1
+
+[2] Hermann W Dommel and William F Tinney. Optimal power flow solutions. IEEE Transactions on power apparatus and systems, (10):1866-1876, 1968.
+
+[3] M Huneault and Francisco D Galiana. A survey of the optimal power flow literature. IEEE transactions on Power Systems, 6(2):762-770, 1991. 1
+
+[4] Y. Wang, W. Y. Szeto, K. Han, and T. Friesz. Dynamic traffic assignment: A review of the methodological advances for environmentally sustainable road transportation applications. Transportation Research Part B: Methodological, 111:370-394, 2018. 1
+
+[5] D. Gammelli, K. Yang, J. Harrison, F. Rodrigues, F. C. Pereira, and M. Pavone. Graph neural network reinforcement learning for autonomous mobility-on-demand systems. In Proc. IEEE Conf. on Decision and Control, 2021. 1, 10
+
+[6] Haralambos Sarimveis, Panagiotis Patrinos, Chris D Tarantilis, and Chris T Kiranoudis. Dynamic modeling and control of supply chain systems: A review. Computers & operations research, 35(11):3530-3561, 2008. 1
+
+[7] Marcus A Bellamy and Rahul C Basole. Network analysis of supply chain systems: A systematic review and future research. Systems Engineering, 16(2):235-249, 2013. 1
+
+[8] Gabriel Jakobson and Mark Weissman. Real-time telecommunication network management: Extending event correlation with temporal constraints. In International Symposium on Integrated Network Management, pages 290-301, 1995. 1
+
+[9] John Edward Flood. Telecommunication networks. IET, 1997.
+
+[10] Vladimir Popovskij, Alexander Barkalov, and Larysa Titarenko. Control and adaptation in telecommunication systems: Mathematical Foundations, volume 94. Springer Science & Business Media, 2011. 1
+
+[11] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms and Applications. Prentice Hall, 1993. 1
+
+[12] B. Kotnyek. An annotated overview of dynamic network flows. INRIA, 2003. 1
+
+[13] L. R. Ford and D. R. Fulkerson. Constructing maximal dynamic flows from static flows. Operations Research, 6(3):419-433, 1958. 1
+
+[14] L. R. Ford and D. R. Fulkerson. Flows in Networks. Princeton Univ. Press, 1962. 1
+
+[15] Fangxing Li and Rui Bo. DCOPF-based LMP simulation: Algorithm, comparison with ACOPF, and sensitivity. IEEE Transactions on Power Systems, 22(4):1475-1485, 2007. 2
+
+[16] Rick Zhang, Federico Rossi, and Marco Pavone. Model predictive control of autonomous mobility-on-demand systems. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1382-1389, 2016.
+
+[17] Peter B Key and Graham A Cope. Distributed dynamic routing schemes. IEEE Communications Magazine, 28(10):54-58, 1990. 2
+
+[18] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2 edition, 2018. 3, 11
+
+[19] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv:2005.01643, 2020. 3
+
+[20] R.-J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992. 3
+
+[21] Gilmer J., Schoenholz S., Riley P., Vinyals O., and Dahl G. Neural message passing for quantum chemistry. In Int. Conf. on Machine Learning, 2017. 3
+
+[22] T.-N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Int. Conf. on Learning Representations, 2017. 3
+
+[23] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In Int. Conf. on Learning Representations, 2018. 3
+
+[24] K. Xu, J. Li, M. Zhang, S. Du, K. Kawarabayashi, and S. Jegelka. What can neural networks reason about? In Int. Conf. on Learning Representations, 2020. 3
+
+[25] H. Markowitz. Portfolio selection. Journal of Finance, 7(1):77-91, 1952. 10
+
+[26] H. U. Gerber and G. Pafumi. Utility functions: From risk theory to finance. North American Actuarial Journal, 2(3):74-91, 1998. 10
+
+[27] V. Konda and J. Tsitsiklis. Actor-critic algorithms. In Conf. on Neural Information Processing Systems, 1999. 10
+
+[28] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. 10
+
+[29] V. Mnih, A. Puigdomenech, M. Mirza, A. Graves, T.-P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Int. Conf. on Learning Representations, 2016. 10
+
+[30] D. Gammelli, K. Yang, J. Harrison, F. Rodrigues, F. C. Pereira, and M. Pavone. Graph meta-reinforcement learning for transferable autonomous mobility-on-demand. In ACM Int. Conf. on Knowledge Discovery and Data Mining, 2022. 10
+
+[31] K. Murota. Matrices and Matroids for Systems Analysis. Springer Science & Business Media, 1 edition, 2009. 10
+
+[32] Mario VF Pereira and Leontina MVG Pinto. Multi-stage stochastic optimization applied to energy planning. Mathematical Programming, 52(1):359-375, 1991. 11
+
+[33] Justin Dumouchelle, Rahul Patel, Elias B Khalil, and Merve Bodur. Neur2sp: Neural two-stage stochastic programming. arXiv:2205.12006, 2022. 11
+
+[34] Scott Fujimoto, David Meger, Doina Precup, Ofir Nachum, and Shixiang Shane Gu. Why should i trust you, bellman? the bellman error is a poor replacement for value error. arXiv:2201.12417, 2022. 11
+
+[35] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on stochastic programming: Modeling and theory. SIAM, second edition, 2014. 11
+
+[36] J. Rawlings and D. Mayne. Model predictive control: Theory and design. Nob Hill Publishing, 2013. 12
+
+[37] Tom Van de Wiele, David Warde-Farley, Andriy Mnih, and Volodymyr Mnih. Q-learning in enormous action spaces via amortized approximate maximization. arXiv:2001.08116, 2020. 12
+
+[38] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Int. Conf. on Machine Learning, 2017. 12
+
+[39] James Harrison, Apoorva Sharma, and Marco Pavone. Meta-learning priors for efficient online bayesian regression. In Workshop on Algorithmic Foundations of Robotics, pages 318-337, 2018.
+
+[40] A. Agrawal, S. Barratt, S. Boyd, E. Busseti, and W. M. Moursi. Differentiating through a conic program. Online, 2019.
+
+[41] A. Agrawal, S. Barratt, S. Boyd, and B. Stellato. Learning convex optimization control policies. In Learning for Dynamics & Control, 2019. 12
+
+[42] B. Amos and J. Z. Kolter. Optnet: Differentiable optimization as a layer in neural networks. In Int. Conf. on Machine Learning, 2017.
+
+[43] Benoit Landry, Joseph Lorenzetti, Zachary Manchester, and Marco Pavone. Bilevel optimization for planning through contact: A semidirect method. In The International Symposium of Robotics Research, pages 789-804, 2019.
+
+[44] Luke Metz, Niru Maheswaranathan, Jeremy Nixon, Daniel Freeman, and Jascha Sohl-Dickstein. Understanding and correcting pathologies in the training of learned optimizers. In Int. Conf. on Machine Learning, pages 4556-4565, 2019. 12
+
+[45] Aviv Tamar, Garrett Thomas, Tianhao Zhang, Sergey Levine, and Pieter Abbeel. Learning from the hindsight plan-episodic mpc improvement. In Proc. IEEE Conf. on Robotics and Automation, pages 336-343, 2017. 12
+
+[46] Brandon Amos, Ivan Jimenez, Jacob Sacks, Byron Boots, and J Zico Kolter. Differentiable mpc for end-to-end planning and control. Conf. on Neural Information Processing Systems, 31, 2018. 12
+
+[47] Brian Ichter, James Harrison, and Marco Pavone. Learning sampling distributions for robot motion planning. In Proc. IEEE Conf. on Robotics and Automation, pages 7087-7094, 2018. 12
+
+[48] Thomas Power and Dmitry Berenson. Variational inference mpc using normalizing flows and out-of-distribution projection. arXiv:2205.04667, 2022.
+
+[49] Brandon Amos and Denis Yarats. The differentiable cross-entropy method. In Int. Conf. on Machine Learning, pages 291-302, 2020. 12
+
+[50] Jacob Sacks and Byron Boots. Learning to optimize in model predictive control. In Proc. IEEE Conf. on Robotics and Automation, pages 10549-10556, 2022. 12
+
+[51] Xuesu Xiao, Tingnan Zhang, Krzysztof Marcin Choromanski, Tsang-Wei Edward Lee, Anthony Francis, Jake Varley, Stephen Tu, Sumeet Singh, Peng Xu, Fei Xia, Leila Takayama, Roy Frostig, Jie Tan, Carolina Parada, and Vikas Sindhwani. Learning model predictive controllers with real-time attention for real-world navigation. In Conf. on Robot Learning, 2022. 12
+
+[52] Priya Donti, Brandon Amos, and J Zico Kolter. Task-based end-to-end model learning in stochastic optimization. Conf. on Neural Information Processing Systems, 30, 2017. 12
+
+[53] Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990. 12
+
+[54] A. Paszke, S. Gross, F. Massa, A. Lerer, et al. Pytorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703, 2019. 12
+
+[55] IBM. ILOG CPLEX User's guide. IBM ILOG, 1987. 12
+
+[56] R. Zhang, F. Rossi, and M. Pavone. Model predictive control of Autonomous Mobility-on-Demand systems. In Proc. IEEE Conf. on Robotics and Automation, 2016. 13
+
+## A Dynamic Network Control
+
+In this section we make concrete both our linear and nonlinear problem formulations.
+
+Flows. We will denote flows along edge $(i, j)$ with ${f}_{ij}^{t}\left( k\right)$. From these flows, we have
+
+$$
+{f}_{i}^{t} = \mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ - }\left( i\right) }}{f}_{ji}^{t} - \mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ + }\left( i\right) }}{f}_{ij}^{t},\;\forall i \in \mathcal{V} \tag{5}
+$$
+
+which is the net flow (inflows minus outflows). As discussed, associated with each flow is a cost ${m}_{ij}^{t}\left( k\right)$ . Note that given this formulation, the total cost for all commodities can be written as ${m}_{ij}^{t} \cdot {f}_{ij}^{t} = {\left( {m}_{ij}^{t}\right) }^{\top }{f}_{ij}^{t}$ . Thus, we can write the total flow cost at time $t$ as
+
+$$
+{m}_{f}^{t} = \mathop{\sum }\limits_{{i \in \mathcal{V}}}\left( {\mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ - }\left( i\right) }}{m}_{ji}^{t} \cdot {f}_{ji}^{t} - \mathop{\sum }\limits_{{j \in {\mathcal{N}}^{ + }\left( i\right) }}{m}_{ij}^{t} \cdot {f}_{ij}^{t}}\right) . \tag{6}
+$$
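+
+As a small worked example (our own sketch, with the per-edge value collapsed into a single term rather than the per-node bookkeeping of Equation (6)), the snippet below accumulates the net flow of Equation (5) and a total flow reward for a single commodity.
+
+```python
+import numpy as np
+
+def net_flow_and_reward(n_nodes, flows, values):
+    """Net flow f_i at every node (Eq. (5)) and a simplified total flow reward.
+
+    flows, values: dicts mapping edge (i, j) to the flow f_ij and its per-unit value m_ij.
+    """
+    f = np.zeros(n_nodes)
+    m_f = 0.0
+    for (i, j), flow in flows.items():
+        f[j] += flow                        # inflow at the head node
+        f[i] -= flow                        # outflow at the tail node
+        m_f += values[(i, j)] * flow        # value accrued by moving flow along (i, j)
+    return f, m_f
+
+# Toy usage on a 3-node path: move 2 units from node 0 to node 2 in two hops.
+flows = {(0, 1): 2.0, (1, 2): 2.0}
+values = {(0, 1): -0.1, (1, 2): -0.1}       # negative value = cost of moving flow
+print(net_flow_and_reward(3, flows, values))   # net flow [-2, 0, 2], reward -0.4
+```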
+
+Exchanges. To define our exchange relations and their effect on commodity quantities and costs, we will write the effect which exchanges have on money for each node; we write this as ${m}_{i}^{t}$ . Thus, we have ${m}_{e}^{t} = \mathop{\sum }\limits_{{i \in \mathcal{V}}}{m}_{i}^{t}$ . The exchange relation takes the form
+
+$$
+\left\lbrack \begin{matrix} {e}_{i}^{t} \\ {m}_{i}^{t} \end{matrix}\right\rbrack = {E}_{i}^{t}{w}_{i}^{t} \tag{7}
+$$
+
+where ${E}_{i}^{t} \in {\mathbb{R}}^{\left( {N}_{c} + 1\right) \times {N}_{e}\left( i\right) }$ is an exchange matrix and ${w}_{i}^{t} \in {\mathbb{R}}^{{N}_{e}\left( i\right) }$ are the weights for each exchange. Each column of this exchange matrix denotes an (exogenous) exchange rate between commodities; for example, a column ${\left\lbrack -1,1,{0.1}\right\rbrack }^{\top }$ denotes that one unit of commodity one is exchanged for one unit of commodity two plus 0.1 units of money. Thus, the choice of exchange weights ${w}_{i}^{t}$ uniquely determines the exchanges ${e}_{i}^{t}$ and the money change due to exchanges, ${m}_{e}^{t}$.
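+
+As a tiny numerical illustration of Equation (7) (our own example values), consider a node with two commodities and a single exchange type:
+
+```python
+import numpy as np
+
+# Each column of E encodes an exchange rate over [commodity 1, commodity 2, money].
+E = np.array([[-1.0],     # one unit of commodity 1 is consumed ...
+              [ 1.0],     # ... producing one unit of commodity 2 ...
+              [ 0.1]])    # ... plus 0.1 units of money.
+w = np.array([3.0])       # execute this exchange three times
+
+delta = E @ w             # [e_i; m_i] = E_i^t w_i^t
+e_i, m_i = delta[:-1], delta[-1]
+print(e_i, m_i)           # commodity changes [-3, 3] and roughly 0.3 money
+```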
+
+Linear Constraints. We may impose additional (linear) constraints on the problem beyond the conservation of flow we have discussed so far. There are a few common examples that we may use in several applications. A common constraint is non-negativity of commodity values, which we may express as
+
+$$
+{s}_{i}^{t} \geq 0,\;\forall i, t. \tag{8}
+$$
+
+Note that this inequality is defined element-wise. A similar constraint can be defined for money. We may also impose constraints on flows and exchanges; thus, we may for example limit the flow of all commodities through a particular edge via
+
+$$
+\mathop{\sum }\limits_{{k = 1}}^{{N}_{c}}{f}_{ij}^{t}\left( k\right) \leq {\bar{f}}_{ij}^{t} \tag{9}
+$$
+
+where this sum could also be weighted per-commodity. These linear constraints are only a limited selection of some common examples; the space of possible constraints is extremely general and the particular choice of constraints is problem-specific.
+
+Elements breaking the linearity assumptions. Real-world systems are characterized by many factors that cannot be reliably modelled through the linear problem described in Section 2. In what follows, we discuss a (non-exhaustive) list of factors potentially breaking such linearity assumptions:
+
+- Stochasticity. Various stochastic elements can impact the problem. Commodity transitions in the previous section were defined as being deterministic; in practice in many problems, there are elements of stochasticity to these transitions. For example, random demand may reduce supply by an unpredictable amount; vehicles may be randomly added in a transportation problem; and packages may be lost in a supply chain setting. In addition to these state transitions, constraints may be stochastic as well: flow times or edge capacities may be stochastic, as when a road is shared with other users, or costs for flows and exchange may be stochastic.
+
+- Nonlinearity. Various elements of the state evolution, constraints, or cost function may be nonlinear. The objective may be chosen to be a risk-sensitive or robust metric applied to the distribution of outcomes, as is common in financial problems. The state evolution may have natural saturating behavior (e.g. automatic load shedding). Indeed, many real constraints will have natural nonlinear behavior.
+
+- Time-varying costs and constraints. Similar to the stochastic case, various quantities may be time-varying. However, it is possible that they are time-varying in a structured way, as opposed to randomly. For example, demand for transportation may vary over the time of day, or purchasing costs may vary over the year.
+
+- Unknown dynamics elements. While not a major focus of discussion in the paper up to this point, elements of the underlying dynamics may be partially or wholly unknown. Our reinforcement learning formulation is capable of addressing this by learning policies directly from data, in contrast to standard control techniques.
+
+## B Methodology
+
+In this section we discuss the full MDP formulation (including defining state and action spaces) and discuss algorithmic details.
+
+### B.1 The Dynamic Network MDP
+
+The problem setting for the full, dynamic network problem is best formulated, in the general case, as a partially observed MDP. We will present it as a Markovian (fully observed) decision process, where the choice of input features beyond commodity values is left to the user; a discussion of strategies for better handling partial observability is presented later in this section.
+
+We consider a discounted infinite-horizon MDP $\mathcal{M} = \left( {\mathcal{S},\mathcal{A}, P, R,\gamma }\right)$. Here, ${s}^{t} \in \mathcal{S}$ is the state and ${a}^{t} \in \mathcal{A}$ is the action, both at time $t$. The dynamics $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \left\lbrack {0,1}\right\rbrack$ are probabilistic, with $P\left( {{s}^{t + 1} \mid {s}^{t},{a}^{t}}\right)$ denoting a conditional distribution over ${s}^{t + 1}$. The reward function $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is real-valued, and not limited to strictly positive or negative rewards. Finally, we write the discount factor as $\gamma$, as is typical in the infinite-horizon RL formulation, although it is straightforward to instead consider a finite-horizon setting.
+
+State and state space. We will, generally, define the state to contain enough information to yield "good" Markovian policies. More formally, real-world network control problems are typically highly partially-observed; many features of the world impact the state evolution. However, a small number of features are typically of primary importance, and the impact of the other partially-observed elements can be modeled as stochastic disturbances.
+
+For our bi-level formulation, there are some state elements that are required. Our formulation requires, at each timestep, the commodity values ${s}^{t}$. Furthermore, the constraint values are required, such as costs, exchange rates, flow capacities, etc. If the graph topology is time-varying, the connectivity at time $t$ is also critical. The state values needed to fully define the one-step linear control problem are the only state elements that are strictly required. We refer to these constraint values as edge state elements: the state elements we have discussed so far are either properties of the graph nodes (commodity values) or of the edges (such as flow constraints). This distinction is of critical importance in our graph neural network architecture.
+
+In addition to these required elements, further state information may be incorporated. Generally, the choice of state elements depends on the information available to a system designer (what can be measured) and on the particular problem setting. Possible examples include forecasts of prices, exchange rates, or flow constraints at future times; exchange rates, for example, reflect notions of demand or supply. We note that such forecasts are almost always available, as they are necessary for solving the multi-step planning problem.
+
+Action and action space. As discussed in Section 2, the action is defined as all flows and exchange weights at all nodes/edges, ${a}^{t} = \left( {{f}^{t},{w}^{t}}\right)$ .
+
+Dynamics. The dynamics of the MDP, $P$ , describe the evolution of state elements. We split our discussion into two parts: the dynamics associated with the commodity time evolution and the dynamics of the non-commodity elements.
+
+The commodity dynamics are assumed to be reasonably well-modeled by the conservation of flow, (1), subject to the constraints; this forms the basis of the bi-level approach we describe in the next subsection. The primary element not included in the conservation of flow expression is possible stochasticity. For example, in transportation problems, vehicles may randomly drop out of service.
+
+The non-commodity dynamics are assumed to be substantially more complex. For example, prices to buy or sell (reflected in exchange rates) may have a complex dependency on past sales, current demand, and current supply (commodity values), as well as random exogenous factors. Thus, we place few assumptions on the evolution of non-commodity dynamics, and assume that current values are measurable.
+
+Reward. Throughout the paper, we will assume our full reward is the total discounted money earned over the (infinite) problem duration. This results in a stage-wise reward function that corresponds simply to the money earned in that time period, or
+
+$$
+R\left( {{s}^{t},{a}^{t}}\right) = {m}_{e}^{t} + {m}_{f}^{t}. \tag{10}
+$$
+
+Note that the (undiscounted) sum of rewards up to time $t$ is exactly ${m}^{t} - {m}^{0}$ , i.e., the money earned. It is typical in economics and finance to consider concave utility functions or risk metrics as opposed to the exact return $\left\lbrack {{25},{26}}\right\rbrack$ . However, such objectives do not admit a simple stage-wise reward decomposition as in the linear case. Thus, while addressing this concavity is important, we do not address it in this work.
+
+### B.2 Network Architecture and RL Details
+
+In this section we introduce the basic building blocks of our graph neural network architecture. Let ${\mathbf{x}}_{i} \in {\mathbb{R}}^{{D}_{\mathbf{x}}}$ and ${\mathbf{e}}_{ji} \in {\mathbb{R}}^{{D}_{\mathbf{e}}}$ denote the ${D}_{\mathbf{x}}$ -dimensional vector of node features of node $i$ and the ${D}_{\mathbf{e}}$ -dimensional vector of edge features from node $j$ to node $i$ , respectively.
+
+We define the update function of node features through the following message passing neural network (MPNN):
+
+$$
+{\mathbf{x}}_{i}^{\left( k\right) } = \mathop{\max }\limits_{{j \in {\mathcal{N}}^{ - }\left( i\right) }}{f}_{\theta }\left( {{\mathbf{x}}_{i}^{\left( k - 1\right) },{\mathbf{x}}_{j}^{\left( k - 1\right) },{\mathbf{e}}_{ji}}\right) , \tag{11}
+$$
+
+where $k$ indicates the $k$ -th layer of message passing in the GNN, with $k = 0$ indicating raw environment features, i.e., ${\mathbf{x}}_{i}^{\left( 0\right) } = {\mathbf{x}}_{i}$ , and where we use the element-wise max operator as the aggregation function in our proposed graph network.
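+To make the update in (11) concrete, the sketch below implements one max-aggregation message-passing layer in plain PyTorch. It is an illustrative reconstruction rather than the authors' code: the layer name, the single-linear-layer choice for $f_{\theta}$, and the edge-list representation are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class MaxMPNNLayer(nn.Module):
+    """One layer of the message passing update (11) with element-wise max aggregation."""
+    def __init__(self, d_x: int, d_e: int, d_out: int):
+        super().__init__()
+        # f_theta acts on the concatenation (x_i, x_j, e_ji); a single linear layer is assumed here.
+        self.f_theta = nn.Sequential(nn.Linear(2 * d_x + d_e, d_out), nn.ReLU())
+
+    def forward(self, x, edge_index, edge_attr):
+        # x: [N, d_x] node features; edge_index: [2, E] rows (source j, target i); edge_attr: [E, d_e]
+        src, dst = edge_index
+        msg = self.f_theta(torch.cat([x[dst], x[src], edge_attr], dim=-1))  # one message per edge
+        out = torch.full((x.size(0), msg.size(-1)), float("-inf"), dtype=msg.dtype, device=msg.device)
+        # scatter the messages and keep the element-wise max over each in-neighborhood
+        out = out.scatter_reduce(0, dst.unsqueeze(-1).expand_as(msg), msg, reduce="amax")
+        return torch.where(torch.isinf(out), torch.zeros_like(out), out)  # nodes with no in-edges map to zero
+```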
+
+We note that this network architecture can be used to define both the policy and the value function estimator, depending on the reinforcement learning algorithm of interest (e.g., actor-critic [27], value-based [28], etc.). As an example, in our implementation, we define two separate decoder architectures for the actor and critic networks of an Advantage Actor Critic (A2C) [29] algorithm. For the actor, we define the output of our policy network to represent the concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{v}}$ of a Dirichlet distribution, such that ${\mathbf{a}}_{t} \sim \operatorname{Dir}\left( {{\mathbf{a}}_{t} \mid \alpha }\right)$ , where the positivity of $\alpha$ is ensured by a Softplus nonlinearity. The critic, on the other hand, applies a global sum-pooling after $K$ layers of MPNN, thus computing a single value estimate for the entire network by aggregating information across all nodes in the graph.
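+As a hedged illustration of the actor head described above, the following sketch maps per-node embeddings (e.g., the output of the MPNN layers) to Dirichlet concentration parameters via a Softplus; the module name and the linear decoder are assumptions, not the authors' implementation.
+
+```python
+import torch
+import torch.nn as nn
+from torch.distributions import Dirichlet
+
+class DirichletActorHead(nn.Module):
+    def __init__(self, d_node: int):
+        super().__init__()
+        self.decode = nn.Linear(d_node, 1)   # one scalar per node
+        self.softplus = nn.Softplus()
+
+    def forward(self, node_embeddings):       # node_embeddings: [N_v, d_node]
+        alpha = self.softplus(self.decode(node_embeddings)).squeeze(-1) + 1e-6  # strictly positive
+        dist = Dirichlet(alpha)                # distribution over the N_v-dimensional simplex
+        action = dist.sample()                 # desired distribution of commodity over nodes
+        return action, dist.log_prob(action)   # log-prob feeds the A2C policy-gradient loss
+```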
+
+Exploration. In practice, we weight the penalty term $d\left( {\cdot , \cdot }\right)$ heavily so as to minimize greediness. Early in training, however, randomly initialized penalty terms can harm exploration. We found it sufficient to down-weight the penalty term early in training, so that inner action selection is initially biased toward short-term rewards, resulting in greedy action selection. There are many further possibilities for exploiting random penalty functions to induce exploration, which we discuss in the next section.
+
+Integer-valued flows. For several problem settings, it is desirable that the chosen flows be integer-valued. For example, in a transportation problem, we may wish to allocate some number of vehicles, which cannot be infinitely sub-divided [5, 30]. There are several ways to introduce integer-valued constraints to our framework. First, we note that because the RL agent is trained through policy gradient (and thus we do not require a differentiable inner problem), we can simply introduce integer constraints into the lower-level problem ${}^{2}$ . However, solving integer-constrained problems is typically expensive in practice. An alternate solution is to apply a heuristic rounding operation to the output of the inner problem; again, because of the choice of gradient estimator, this rounding does not need to be differentiable. Moreover, the RL policy learns to adapt to this rounding heuristic. Thus, we generally recommend this strategy over directly imposing integer constraints in the inner problem. A sketch of such a rounding step is given below.
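+The following is a minimal sketch of the rounding heuristic, assuming numpy and a non-negative capacity vector `f_max`; the function name and the nearest-integer choice are illustrative assumptions.
+
+```python
+import numpy as np
+
+def round_flows(relaxed_flows: np.ndarray, f_max: np.ndarray) -> np.ndarray:
+    """Round the relaxed LP flows to integers while respecting capacities."""
+    f_int = np.floor(relaxed_flows + 0.5)          # nearest-integer rounding
+    return np.clip(f_int, 0.0, np.floor(f_max))    # remain feasible w.r.t. integer capacities
+```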
+
+## C Discussion and Algorithmic Components
+
+In this section we discuss various elements of the proposed framework, highlight correspondences and design decisions, and discuss component-level extensions.
+
+---
+
+${}^{2}$ Note that several problems exhibit a total unimodularity property [31], for which the relaxed integer-valued problem is tight.
+
+---
+
+### C.1 Distance metric as value function
+
+The role of the distance metric (and the generated goal next state) is to capture the value of future reward in the greedy one-step inner optimization problem. This is closely related to the value function in dynamic programming and reinforcement learning, which in expectation captures the sum of future rewards for a particular policy. Indeed, under moderate technical assumptions, our linear problem formulation with stochasticity yields convex expected cost-to-go (the negative of the value) [32,33].
+
+There are several critical differences between our penalty term and a learned value function. First, a value function in a Markovian setting for a given policy is a function solely of state. For example, in the LCP, a value function would depend only on ${s}^{t + 1}$ . In contrast, our penalty term depends on ${\widehat{s}}^{t + 1}$ , which is the output of a policy that takes ${s}^{t}$ as an input. Thus, the penalty term is a function of both the current and the predicted next state. Given this, the penalty term is better understood as a local approximation of the value function, for which convex optimization is tractable, or as a form of state-action value function with a reduced action space (also referred to as a Q function).
+
+The second major distinction between the penalty term and a value function is particular to reinforcement learning. Value functions in modern RL are typically learned by minimizing the Bellman residual [18], although there is disagreement on whether this is a desirable objective [34]. In contrast, our policy is trained directly via policy gradient on the total reward (potentially incorporating value function control variates). Thus, the objective for this penalty method is better aligned with maximizing total reward.
+
+### C.2 Beyond a single-step inner problem
+
+Our formulation so far has considered a bi-level formulation in which the RL policy outputs a desired state at the next timestep, ${\widehat{s}}^{t + 1}$ ; this is then used in the lower-level problem to select actions. There are two relaxations to this procedure that can be incorporated here.
+
+First, the RL policy can output any future state, and direct optimization can happen over any horizon. We may parameterize the RL policy to return ${\widehat{s}}^{t + k}$ for $k \geq 1$ , in which case a multi-step optimization problem may be solved using the linear model. The potential risks of this approach are the linear (in horizon) growth in the number of variables in the inner problem and poor agreement between the linear model and the nonlinear dynamics over longer horizons. This is a strict generalization of our proposed method. The primary reason we have not adopted the multi-step formulation as the primary algorithm of this paper is that it requires modeling the dynamics of the non-commodity state variables. For example, it requires forecasting all constraint values, whereas our one-step formulation requires only knowledge of them at the current timestep. Forecasting of constraint values is closely linked to questions of (persistent) feasibility, which we do not consider in detail in this paper.
+
+Second, stochasticity may be directly integrated into the lower-level problem. The standard formulation for stochastic model predictive control (or stochastic multi-stage optimization) is the scenario formulation [35], in which a tree of outcomes is constructed via sampling noise realizations ${}^{3}$ . Within the one-step bi-level formulation, sampling ${N}_{n}$ noise realizations results in ${N}_{n}$ values of the next state, ${s}_{i}^{t + 1}, i = 1,\ldots ,{N}_{n}$ within the inner problem. The empirical mean loss
+
+$$
+{\mathbb{E}}_{{s}^{t + 1}}\left\lbrack {d\left( {{\widehat{s}}^{t + 1},{s}^{t + 1}}\right) }\right\rbrack - R\left( {{s}^{t},{a}^{t}}\right) \approx \frac{1}{{N}_{n}}\mathop{\sum }\limits_{{i = 1}}^{{N}_{n}}d\left( {{\widehat{s}}^{t + 1},{s}_{i}^{t + 1}}\right) - R\left( {{s}^{t},{a}^{t}}\right) \tag{12}
+$$
+
+can then be minimized. We emphasize that the actions are the same for each noise realization; this is the so-called non-anticipativity constraint. For one step, this formulation does not meaningfully increase the number of decision variables, although it will result in increased computational complexity. More importantly, multi-step optimization within the scenario tree approach yields exponential growth in the number of decision variables, which will rapidly result in intractability. We refer the reader to [35] for more details on scenario-based stochastic optimization. A compact sketch of the empirical objective in (12) is given below.
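+As an illustration of (12), the snippet below assembles the empirical scenario objective with cvxpy, assuming the sampled next states are expressions of a single shared action variable (which enforces non-anticipativity by construction); the helper name and the 1-norm choice for $d(\cdot,\cdot)$ are assumptions.
+
+```python
+import cvxpy as cp
+
+def scenario_objective(s_hat, sampled_next_states, stage_reward):
+    # sampled_next_states: list of N_n cvxpy expressions s_i^{t+1}, all built from the same action a^t
+    n = len(sampled_next_states)
+    penalty = sum(cp.norm1(s_hat - s_i) for s_i in sampled_next_states) / n  # empirical mean of d(.,.)
+    return cp.Minimize(penalty - stage_reward)
+```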
+
+### C.3 Algorithmic alignment
+
+The concept of algorithmic alignment refers to the fact that, although many neural network architectures have the capacity to represent a wide range of algorithms, not all networks are able to actually learn these algorithms. Intuitively, a network may learn and generalize better if it is able to represent a function (algorithm) "more easily." A notable example of this in the context of supervised learning is the relation between MLPs and CNNs in computer vision, where MLPs are theoretically universal approximators yet struggle to achieve satisfactory performance on most vision tasks. The difference in the MLP-RL results between Table 1 (2-hops) and Table 1 (3-hops, 4-hops) further confirms these concepts: the smaller dimensionality of the 2-hops environment leads to a smaller solution space, in which the MLPs are able to converge to relatively good policies. The 3-hops and 4-hops environments, on the other hand, are characterized by a significant increase in the number of edges and nodes, leading to a more challenging search for solutions in policy space.
+
+---
+
+${}^{3}$ We note that non-sampling strategies such as moment-matching formulations are also possible, although we will not discuss these methods herein.
+
+---
+
+### C.4 Computational efficiency
+
+Consider solving the full nonlinear control problem via direct optimization over a finite horizon ( $T$ timesteps), which corresponds to a model predictive control [36] formulation. How many total actions must be selected? The number of possible flows for a fully dense graph (worst case) is ${N}_{v}\left( {{N}_{v} - 1}\right)$ . In addition to this, there are $\mathop{\sum }\limits_{{i \in \mathcal{V}}}{N}_{e}\left( i\right)$ possible exchange actions; if we assume ${N}_{e}$ is the same for all nodes, this yields ${N}_{v}{N}_{e}$ possible actions. Finally, we have ${N}_{c}$ commodities. Thus, the worst-case number of actions to select is $T{N}_{c}{N}_{v}\left( {{N}_{v} + {N}_{e} - 1}\right)$ ; it is evident that for even moderate choices of each variable, the complexity of action selection in our problem formulation quickly grows beyond tractability.
+
+While moderately-sized problems may be tractable within the direct optimization setting, we aim to incorporate the impacts of stochasticity, nonlinearity, and uncertainty, which typically result in non-convexity. The reinforcement learning approach, in addition to being able to improve directly from data, reduces the number of actions required to those for a single step. If we were to directly parameterize a naive policy that outputs flows and exchanges, this would correspond to ${N}_{c}{N}_{v}\left( {{N}_{v} + {N}_{e} - 1}\right)$ actions. For even moderate values of ${N}_{c},{N}_{v},{N}_{e}$ , this can result in millions of actions. It is well-known that reinforcement learning algorithms struggle with high-dimensional action spaces [37], and thus this approach is unlikely to be successful. In contrast, our bi-level formulation requires only ${N}_{c}$ actions for the learned policy, while additionally leveraging the beneficial inductive biases over short time horizons.
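+For a sense of scale, with illustrative values not taken from the paper, a single-step naive parameterization with ${N}_{c} = {10}$ commodities, ${N}_{v} = {500}$ nodes, and ${N}_{e} = 5$ exchange options per node would already require
+
+$$
+{N}_{c}{N}_{v}\left( {{N}_{v} + {N}_{e} - 1}\right) = {10} \cdot {500} \cdot {504} = 2{,}520{,}000
+$$
+
+actions per timestep, whereas the bi-level formulation sidesteps this growth.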
+
+## D Related Work
+
+Bi-level optimization, in which one optimization problem depends on the solution to another and the two are thus nested, has recently become an important topic in machine learning, reinforcement learning, and control [38-44]. Of particular relevance to our framework are methods that combine principled control strategies with learned components in a hierarchical way. Examples include using LQR control in the inner problem with learnable cost and dynamics $\left\lbrack {{41},{45},{46}}\right\rbrack$ ; learning sampling distributions in planning and control $\left\lbrack {{47} - {49}}\right\rbrack$ ; or learning optimization strategies or goals for optimization-based control [50, 51].
+
+Numerous strategies for learning control with bi-level formulations have been proposed. A simple approach is to insert intermediate goals to train lower-level components, for example via imitation [47]. This approach is inherently limited by the choice of the intermediate objective; if this objective does not strongly correlate with the downstream task, learning could emphasize unnecessary elements or miss critical ones. An alternate strategy, which we take in this work, is to directly optimize through an inner controller. A large body of work has focused on exploiting exact solutions to the gradient of (convex) optimization problems at fixed points $\left\lbrack {{41},{46},{52}}\right\rbrack$ . This allows direct backpropagation through optimization problems, allowing them to be used as a generic component in a differentiable computation graph (or neural network). Our approach instead leverages likelihood ratio gradients (equivalently, policy gradient), an alternate zeroth-order gradient estimator [53]. This enables easy differentiation through lower-level optimization problems without the technical machinery necessitated by fixed-point differentiation.
+
+## E Experiments
+
+### E.1 Benchmarks
+
+All RL modules were implemented in PyTorch [54], and the IBM CPLEX solver [55] was used for the optimization problem. In our experiments, we compare the proposed framework with the following methods:
+
+Heuristics. In this class of methods, we focus on measuring performance of simple, domain-knowledge-driven rebalancing heuristics.
+
+1. Random policy: at each timestep, we sample the desired distribution from a Dirichlet prior with concentration parameter $\alpha = \left\lbrack {1,1,\ldots ,1}\right\rbrack$ . This benchmark provides a lower bound of performance by choosing desired goal states randomly.
+
+Learning-based. Within this class of methods, we focus on measuring how different architectures affect the quality of the solutions for the dynamic network control problem. For all methods, the A2C algorithm is kept fixed, thus the difference solely lies in the neural network architecture.
+
+2. MLP-RL: both the policy and the value function estimator are parametrized by feed-forward neural networks. In all our experiments, we use two layers of 32 hidden units and an output layer mapping to the output's support (e.g., a scalar value for the critic network). Through this comparison, we highlight the performance and flexibility of graph representations for network-structured data.
+
+3. GCN-RL: In all our experiments, we use three layers of graph convolution with 32 hidden units and a linear output layer mapping to the output's support. See below for a broader discussion of graph convolution operators.
+
+4. GAT-RL: In all our experiments, we use three layers of graph attention with 32 hidden units and a single attention head. The output is further computed by a linear output layer mapping to the output's support. Together with GCN-RL, this model represents an approach based on graph convolutions rather than explicit message passing along the edges (as in MPNNs). Through this comparison, we argue in favor of explicit, pair-wise messages along the edges, as opposed to sole aggregation of node features within a neighborhood. Specifically, we argue in favor of the alignment between MPNNs and the kind of computations required to solve flow optimization tasks, e.g., propagation of travel times and selection of the best path among a set of candidates (max aggregation).
+
+5. MPNN-RL: ours. We use three layers of MPNN with 32 hidden units, as defined in Section B.2, and a linear output layer mapping to the output's support.
+
+MPC-based. Within this class of methods, we focus on measuring the performance of MPC approaches that serve as state-of-the-art benchmarks for the dynamic network flow problem.
+
+6. MPC-Oracle: we directly optimize the flow using a standard formulation of MPC [56]. Notice that although the embedded optimization is a linear programming model, it may not meet the computational requirements of real-time applications (e.g., obtaining a solution within several seconds) for large-scale networks.
+
+### E.2 Environments
+
+- Minimum cost flow through message passing. Given a single-source, single-sink network, we assume travel times to be constant over the episode and requirements (i.e., demand) to be sampled at each time step as $\rho = {10} + {\psi }_{i},{\psi }_{i} \sim \operatorname{Uniform}\left\lbrack {-2,2}\right\rbrack$ . Capacities ${u}_{ij}$ are fixed to a very high positive number, thus not representing a constraint in practice. The cost ${m}_{ij}$ is set equal to the traversal time ${t}_{ij}$ . An episode has a duration of 30 time steps and terminates when there is no more flow traversing the network. To present a variety of scenarios to the agent at training time, we sample random travel times for each new episode as ${t}_{ij} \sim \operatorname{Uniform}\left\lbrack {0,{10}}\right\rbrack$ and use the topologies shown in Fig. 1. In our experiments, we apply as many layers of message passing as there are hops from source to sink node in the graph, e.g., $K = 2$ and $K = 3$ in the 2-hops and 3-hops environments, respectively.
+
+- Dynamic traversal times. To train our MPNN-RL, we select the 3-hops environment and generate travel times as follows for every episode: (i) sample random traversal times as ${t}_{ij} \sim \operatorname{Uniform}\left\lbrack {0,{10}}\right\rbrack$ , (ii) for every time step, gradually change the traversal time as ${t}_{ij} = {t}_{ij} + \psi ,\psi \sim \operatorname{Uniform}\left\lbrack {-1,1}\right\rbrack$ .
+
+- Capacity constraints. In this experiment, we focus on the 3-hops environment and assume a constant capacity ${\bar{f}}_{ij} = {20},\forall i, j \in \mathcal{V} : j \neq 7$ , while we keep a high value for all edges going into node 7 (i.e., the sink node), since low capacities there would too easily generate infeasible scenarios. From an RL perspective, we add the following edge-level features:
+
+- Edge-capacity ${\left\{ {\bar{f}}_{ij}^{t}\right\} }_{i, j \in \mathcal{V}}$ at the current time step $t$ .
+
+- Accumulated flow ${\left\{ {f}_{ij}^{t}\right\} }_{i, j \in \mathcal{V}}$ on edge ${ij}$ .
+
+- Multi-commodity. Let ${N}_{c}$ define the number of commodities to consider, indexed by $k$ . From an RL perspective, we extend our proposed policy graph neural network to represent an ${N}_{c}$ -dimensional Dirichlet distribution. Concretely, we define the output of the policy network to
+
+
+
+Figure 1: Graph topologies used for the message passing experiments: 2-hops (left), 3-hops (center), 4-hops (right). The source and sink nodes are represented by the left-most and right-most nodes, respectively. Values in proximity of the edges represent traversal times.
+
+
+
+Figure 2: Visualization of a trained instance of MPNN-RL on an environment with dynamic traversal times. We simulate a scenario where the optimal path changes three times (left, middle, and right) over the course of an episode. Shaded edges represent actions induced by the MPNN-RL.
+
represent the ${N}_{c} \times {N}_{v}$ concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{c} \times {N}_{v}}$ of a Dirichlet distribution over nodes for each commodity, such that ${\mathbf{a}}_{t} \sim \operatorname{Dir}\left( {{\mathbf{a}}_{t} \mid \alpha }\right)$ . In other words, to extend our approach to the multi-commodity setting, we define a multi-head policy network characterized by one head per commodity. In our experiments, we train our multi-head agent on the topology shown in Fig. 5, whereby we assume two parallel commodities: commodity A going from node 0 to node 10, and commodity B going from node 0 to node 11. We choose this topology so that the only way to solve the scenario is to discover distinct behaviours between the two network heads (i.e., the policy head controlling flow for commodity A needs to route flow along the upper path or it will receive no reward, and vice-versa for commodity B).
+
+- Computational analysis. In this experiment, we generate different versions of the 3-hops environment, where the different versions are characterized by intermediate layers with an increasing number of nodes and edges. The results are computed by applying the MPNN-RL agent pre-trained on the original 3-hops environment (i.e., characterized by 8 nodes in the graph). In light of this, Figure 6 showcases a promising degree of transfer and generalization among graphs of different dimensions.
+
+
+
+Figure 3: Visualization of a trained instance of MPNN-RL on an environment with dynamic topology. We simulate a scenario where the optimal path changes over the course of an episode because of the addition of a new path. Shaded edges represent actions induced by the MPNN-RL.
+
+
+
+Figure 4: Visualization of the MPNN-RL policy on the capacity constrained environment. (Top) The resulting flow ${f}_{ij}$ on the edges $0 \rightarrow 1,0 \rightarrow 2,0 \rightarrow 3$ . (Center) The accumulated flow on the same edges compared to the fixed capacity ${\bar{f}}_{ij} = {20}$ , represented as a dashed horizontal line. (Bottom) The desired distribution described by the MPNN-RL policy.
+
+
+
+Figure 5: Visualization of the multi-commodity environment. (Left) The topology considered during our experiments. (Center) A visualization of the policy for the first commodity A. (Right) A visualization of the policy for the second commodity B.
+
+
+
+Figure 6: Comparison of computation times between learning-based (blue) and control-based (orange) approaches. Green triangles represent the percentage performance of our RL framework compared to the oracle model.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9764f9079e62bb119b628d089ecb75356870136a
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/1sPcfSScGWO/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,140 @@
+§ GRAPH REINFORCEMENT LEARNING FOR NETWORK CONTROL VIA BI-LEVEL OPTIMIZATION
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Dynamic network flow models have been extensively studied and widely used in the past decades to formulate many problems with great real-world impact, such as transportation, supply chain management, power grid control, and more. Within this context, time-expansion techniques currently represent a generic approach for solving control problems over dynamic networks. However, the complexity of these methods does not allow traditional approaches to scale to large networks, especially when these need to be solved recursively over a receding horizon (e.g., to yield a sequence of actions in model predictive control). Moreover, tractable optimization-based approaches are limited to simple linear deterministic settings, and are not able to handle environments with stochastic, nonlinear, or unknown dynamics. In this work, we present dynamic network flow problems through the lens of reinforcement learning and propose a graph network-based framework that can handle a wide variety of problems and learn efficient algorithms without significantly compromising optimality. Instead of a naive and poorly-scalable formulation, in which agent actions (and thus network outputs) consist of actions on edges, we present a two-phase decomposition. The first phase consists of an RL agent specifying desired outcomes of the actions. The second phase exploits the problem structure to solve a convex optimization problem and achieve (as best as possible) these desired outcomes. This formulation leads to dramatically improved scalability and performance. We further highlight a collection of features that are potentially desirable to system designers, investigate design decisions, and present experiments showing the utility, scalability, and flexibility of our framework.
+
+§ 1 INTRODUCTION
+
+Many economically critical real-world systems are well-modelled through the lens of control on graphs. Power generation [1-3]; road, rail, and air transportation systems [4, 5]; complex manufacturing systems, supply chain, and distribution networks [6, 7]; telecommunication networks [8-10]; and many other systems are fundamentally the problem of controlling flows of products, vehicles, or other quantities on graph-structured networks. Traditionally, these problems are approached through the definition of a dynamic network flow model (DNF) [11, 12]. Within this class of problems, Ford and Fulkerson [13, 14] proposed a generic approach, showing how one can use time-expansion techniques to (i) convert dynamic networks with discrete time horizon into static networks, and (ii) solve the problem using algorithms developed for static networks. However, this approach leads to networks that grow exponentially in the input size of the problem, thus not allowing traditional methods to scale to large networks. Moreover, the design of good heuristics or approximation algorithms for network flow problems often requires significant specialized knowledge and trial-and-error.
+
+In this paper, we argue that data-driven strategies have the potential to automate this challenging, tedious process, and learn efficient algorithms without compromising optimality. To do so, we propose a graph network-based reinforcement learning framework that can handle a wide variety of network control problems. Specifically, we introduce a bi-level formulation that leads to dramatically improved scalability and performance by combining the strengths of mathematical optimization and learning-based approaches.
+
+§ 2 PROBLEM SETTING: DYNAMIC NETWORK CONTROL
+
+To outline our problem formulation, we first define the linear problem, which is a classic convex problem formulation. We will then define a nonlinear, dynamic, non-convex problem setting that better corresponds to real-world instances. Much of the classical flow control literature and practice substitutes the former linear problem for the latter nonlinear problem to yield tractable optimization problems [15-17]; we leverage the linear problem as an important algorithmic primitive. We consider the control of ${N}_{c}$ commodities on graphs, for example, vehicles in a transportation problem. A graph $\mathcal{G} = \{ \mathcal{V},\mathcal{E}\}$ is defined as a set $\mathcal{V}$ of ${N}_{v}$ nodes, and a set $\mathcal{E}$ of ${N}_{e}$ ordered pairs of nodes $\left( {i, j}\right)$ called edges, each described by a traversal time ${t}_{ij}$ . We use ${\mathcal{N}}^{ + }\left( i\right) ,{\mathcal{N}}^{ - }\left( i\right) \subseteq \mathcal{V}$ for the set of nodes having edges pointing away from or toward node $i$ , respectively. We use ${s}_{i}^{t}\left( k\right) \in \mathbb{R}$ to denote the quantity of commodity $k$ at node $i$ and time ${t}^{1}$ .
+
+The Linear Network Control Problem. Within the linear model, our commodity quantities evolve in time as
+
+$$
+{s}_{i}^{t + 1} = {s}_{i}^{t} + {f}_{i}^{t} + {e}_{i}^{t},\;\forall i \in \mathcal{V} \tag{1}
+$$
+
+where ${f}_{i}^{t}$ denotes the change due to flow of commodities along edges and ${e}_{i}^{t}$ denotes the change due to exchange between commodities at the same graph node. We refer to this expression as the conservation of flow. We also accrue money as
+
+$$
+{m}^{t + 1} = {m}^{t} + {m}_{f}^{t} + {m}_{e}^{t}, \tag{2}
+$$
+
+where ${m}_{f}^{t},{m}_{e}^{t} \in \mathbb{R}$ denote the money gained due to flows and exchanges respectively. Money can also be replaced with any other form of scalar reward, although it may be subject to e.g. non-negativity constraints and thus is different from the notion of reward in the RL problem. Our overall problem formulation will typically be to control flows and exchanges so as to maximize money over one or more steps subject to additional constraints such as, e.g., flow limitations through a particular edge. Please refer to Appendix A for a formal treatment of flow and exchange quantities, together with practical constraints within network control problems.
+
+The Nonlinear Dynamic Network Control Problem. The previous subsection presented a linear problem formulation that yields a convex optimization problem for the decision variables, namely the chosen flow and exchange values. However, the formulation is limited by the assumption of linearity, and thus fails to capture a number of elements typical of real-world systems (please refer to Appendix A for a more complete treatment). Crucially, these nonlinear, time-varying, stochastic, or unknown elements lead to severe difficulties in applying the convex formulation derived in the previous subsection. A common approach is to solve a linearized version of the nonlinear problem at each timestep, which is a form of model predictive control (MPC), although this essentially discards some elements of the problem to achieve computational tractability. In this paper, we focus on solving the nonlinear problem (reflecting real, highly general problem statements) via a bi-level optimization approach, wherein the linear problem (which has been shown to be extremely useful in practice) is used as an inner control primitive.
+
+§ 3 METHODOLOGY: THE BI-LEVEL FORMULATION
+
+In this section we describe the bi-level formulation that is the primary contribution of this paper. We further introduce a more formal Markov decision process (MDP) for our problem setting, together with a discussion on practical elements for real-world problem formulations in Appendix B.
+
+The Bi-Level Formulation. We consider a discounted infinite-horizon MDP $\mathcal{M} = \left( {\mathcal{S},\mathcal{A},P,R,\gamma }\right)$ . Here, ${s}^{t} \in \mathcal{S}$ is the state and ${a}^{t} \in \mathcal{A}$ is the action, both at time $t$ . The state in this setting comprises the commodity values at the nodes, as well as other available information; the actions correspond to the aforementioned decision variables. The dynamics, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \left\lbrack {0,1}\right\rbrack$ , are probabilistic, with $P\left( {{s}^{t + 1} \mid {s}^{t},{a}^{t}}\right)$ denoting a conditional distribution over ${s}^{t + 1}$ . The reward function $R : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is real-valued, and not limited to strictly positive or negative rewards. Finally, we write the discount factor as $\gamma$ , as is typical in the infinite-horizon RL formulation, although it is straightforward to instead consider a finite-horizon setting. Please refer to Appendix B.1 for further treatment of the MDP.
+
+The overall goal of the reinforcement learning problem setting is to find a policy ${\widetilde{\pi }}^{ * } \in \widetilde{\Pi }$ (where $\widetilde{\Pi }$ is the space of realizable Markovian policies) such that ${\widetilde{\pi }}^{ * } \in \arg \mathop{\max }\limits_{{\widetilde{\pi } \in \widetilde{\Pi }}}{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack$ ,
+
+${}^{1}$ We consider several reduced views over these quantities, and maintain several notational rules. We write ${s}_{i}^{t} \in {\mathbb{R}}^{{N}_{c}}$ to denote the vector of all commodities; we write ${s}^{t}\left( k\right) \in {\mathbb{R}}^{{N}_{v}}$ to denote the vector of commodity $k$ at all nodes; we write ${s}_{i}\left( k\right) \in {\mathbb{R}}^{T}$ to denote commodity $k$ at node $i$ for all times $t$ . We can also apply any combination of these notation rules, yielding for example $s \in {\mathbb{R}}^{T \times {N}_{c} \times {N}_{v}}$ .
+
+where $\tau = \left( {{s}^{0},{a}^{0},{s}^{1},{a}^{1},\ldots }\right)$ denotes the trajectory of states and actions. This policy formulation requires specifying a distribution over all flow/exchange actions, which may be an extremely large space. We instead consider a bi-level formulation
+
+$$
+{\pi }^{ * } \in \underset{\pi \in \Pi }{\arg \max }{\mathbb{E}}_{\tau }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}^{t},{a}^{t}}\right) }\right\rbrack \;\text{ s.t. }{a}^{t} = \operatorname{LCP}\left( {{\widehat{s}}^{t + 1},{s}^{t}}\right) \tag{3}
+$$
+
+where we consider a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ , which maps from the current state to a goal next state (or subset of the state, such as commodity values only). This goal next state is used in the linear control problem $\left( {\operatorname{LCP}\left( {\cdot , \cdot }\right) }\right)$ , which leverages a (slightly modified) one-step version of the linear problem formulation of Section 2 to map from desired next state to action. Thus, the resulting formulation is a bi-level optimization problem, whereby the policy $\widetilde{\pi }$ is the composition of the policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ and the solution to the linear control problem. Specifically, given a sample of ${\widehat{s}}^{t + 1}$ from the stochastic policy, we select concrete flow and exchange actions by solving the linear control problem, defined as
+
+$$
+\begin{array}{lll} \underset{{a}^{t}}{\arg \min } & d\left( {{\widehat{s}}^{t + 1},{s}^{t + 1}}\right) - R\left( {{s}^{t},{a}^{t}}\right) & \text{(4a)} \\ \text{s.t.} & \text{Conservation of flow (1); Net flow (5); Flow reward (6);} & \text{(4b)} \\ & \text{Exchange conditions (7); Other constraints, e.g., (8) or (9)} & \text{(4c)} \end{array}
+$$
+
+where $d\left( {\cdot , \cdot }\right)$ is a chosen convex metric which penalizes deviation from the desired next state. The resulting problem, consisting of a convex objective subject to linear constraints, is convex and may thus be easily and inexpensively solved to choose actions ${a}^{t}$ , even for very large problems. A simplified sketch of this lower-level problem is given below.
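+As a hedged illustration, the following sketch solves a pared-down, single-commodity instance of the lower-level problem (4) with cvxpy, keeping only conservation of flow and capacity bounds; the incidence matrices, the 1-norm choice for $d(\cdot,\cdot)$, and the function name are assumptions rather than the paper's exact formulation.
+
+```python
+import cvxpy as cp
+import numpy as np
+
+def solve_lcp(s_t, s_hat, A_in, A_out, reward_per_flow, f_max):
+    """Pick non-negative edge flows that track the goal state while collecting stage reward."""
+    # s_t, s_hat: [N_v] current and desired commodity values; A_in/A_out: [N_v, N_e] incidence maps
+    f = cp.Variable(A_in.shape[1], nonneg=True)
+    s_next = s_t + A_in @ f - A_out @ f                 # one-step conservation of flow, cf. (1)
+    stage_reward = reward_per_flow @ f                  # linear money earned by the chosen flows
+    problem = cp.Problem(
+        cp.Minimize(cp.norm1(s_hat - s_next) - stage_reward),
+        [f <= f_max, s_next >= 0],
+    )
+    problem.solve()
+    return f.value, s_next.value
+```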
+
+As is standard in reinforcement learning, we will aim to solve this problem via learning the policy from data. This may be in the form of online learning [18] or via learning from offline data [19]. There are large bodies of work on both problems, and our presentation will generally aim to be as-agnostic-as-possible to the underlying reinforcement learning algorithm used. Of critical importance is the fact that the majority of reinforcement learning algorithms use likelihood ratio gradient estimation (typically referred to as the REINFORCE gradient estimator in RL [20]), which does not require path-wise backpropagation through the inner problem.
+
+We also note that our formulation assumes access to a model (the linear problem) that is a reasonable approximation of the true dynamics over short horizons. This short-term correspondence is central to our formulation: we exploit exact optimization when it is useful, and otherwise push the impacts of the nonlinearity over time in the learned policy. We assume this model is known in our experiments, but it could be identified independently. Please see Appendix C.1, C.2, and C.4 for a broader discussion.
+
+Network Architecture. To exploit the network structure of the problem we introduce a policy graph neural network architecture based on message passing neural networks [21] (Appendix B.2). As introduced in this section, the goal of RL is to learn a stochastic policy $\pi \left( {{\widehat{s}}^{t + 1} \mid {s}^{t}}\right)$ mapping to goal next states. Concretely, to obtain a valid probability density over next states, we define the output of our policy network to represent the concentration parameters $\alpha \in {\mathbb{R}}_{ + }^{{N}_{v}}$ of a Dirichlet distribution, such that ${\widehat{s}}^{t + 1} \sim \operatorname{Dir}\left( {{\widehat{s}}^{t + 1} \mid \alpha }\right)$ , although alternate output formulations are possible.
+
+§ 4 EXPERIMENTS
+
+In this section, we compare against a number of benchmarks on an instance of network control with great real-world impact: the minimum cost flow problem. Within this context, the goal is to control commodities so as to move them from one or more source nodes to one or more sink nodes in the minimum time possible. Appendix E provides further details on both benchmarks and environments.
+
+Minimum cost flow through message passing. In this first experiment, we consider 3 different environments (Fig. 1), such that different topologies enforce a different number of required hops of message passing between source and sink nodes to select the best path. Results in Table 1 (2-hops, 3-hops, 4-hops) show how MPNN-RL is able to achieve at least 87% of oracle performance. Table 1 further shows how agents based on graph convolutions (i.e., GCN [22], GAT [23]) fail to learn an effective flow optimization strategy. As in Xu et al. [24], we argue in favor of the algorithmic alignment between the computational structure of MPNNs and the kind of computations needed to solve traditional network optimization problems (see Appendix C.3 for further discussion).
+
+Dynamic traversal times. In this experiment, we define time-dependent traversal times. In Fig. 2 and Table 1 (Dyn tt) we measure results on a dynamic network characterized by two change-points, i.e., time steps where the optimal path changes because of a change in traversal times. Results show how the proposed MPNN-RL is able to achieve above ${99}\%$ of oracle performance.
+
+Table 1: Average performance across multiple environments over 100 test episodes
+
+| Environment | Metric | Random | MLP-RL | GCN-RL | GAT-RL | MPNN-RL (ours) | Oracle |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 2-hops | Avg. Reward | 63 | 387 | 201 | 146 | 576 | 642 |
+| 2-hops | % Oracle | 9.9% | 60.2% | 31.3% | 22.9% | 89.7% | - |
+| 3-hops | Avg. Reward | 1013 | 1084 | 1385 | 1257 | 1803 | 2014 |
+| 3-hops | % Oracle | 50.3% | 53.8% | 68.7% | 62.4% | 89.5% | - |
+| 4-hops | Avg. Reward | 2033 | 2185 | 2303 | 2198 | 2807 | 3223 |
+| 4-hops | % Oracle | 63.1% | 67.8% | 71.4% | 68.2% | 87.1% | - |
+| Dyn tt | Avg. Reward | -546 | -18 | 437 | 400 | 2306 | 2327 |
+| Dyn tt | % Oracle | -23.4% | -0.7% | 18.7% | 17.1% | 99.1% | - |
+| Dyn top | Avg. Reward | 810 | N/A | 1016 | 827 | 1599 | 1904 |
+| Dyn top | % Oracle | 42.5% | N/A | 53.4% | 43.4% | **83.9%** | - |
+| Capacity | Avg. Reward | 1495 | 1498 | 1557 | 1503 | 2145 | 2389 |
+| Capacity | % Oracle | 62.6% | 62.7% | 65.2% | 62.9% | 89.8% | - |
+| Capacity | Success Rate | 82% | 82% | 87% | 80% | 87% | 88% |
+| Multi-com | Avg. Reward | 2191 | 4045 | 3278 | 3206 | 6986 | 9701 |
+| Multi-com | % Oracle | 22.5% | 41.7% | 33.8% | 33.0% | 72.0% | - |
+
+Dynamic topology. In this experiment we assume a time-dependent topology, i.e., nodes and edges can be dropped or added during an episode. This case is substantially different from what most traditional approaches are able to handle: the locality of MPNN agents, together with the one-step implicit planning of RL, enables our framework to deal with multiple graph configurations during the same episode. Fig. 3 and Table 1 (Dyn top) show how MPNN-RL achieves 83.9% of oracle performance, clearly outperforming the other benchmarks. Crucially, these results highlight how agents based on MLPs result in highly inflexible network controllers that are limited to a fixed topology.
+
+Capacity constraints. In this experiment, we relax the assumption that capacities ${\bar{f}}_{ij}$ are always able to accommodate any flow on the graph. Compared to the previous experiments, the lower capacities introduce the possibility of infeasible states. To measure this, the Success Rate computes the percentage of episodes that terminated successfully. Results in Table 1 (Capacity) highlight how MPNN-RL is able to achieve ${89.8}\%$ of oracle performance while successfully terminating ${87}\%$ of episodes. Qualitatively, Fig. 4 shows a visualization of the policy for a specific test episode. The plots show how the MPNN-RL is able to learn the effects of capacity on the optimal strategy by allocating flow to a different node when the corresponding edge is approaching its capacity limit.
+
+Multi-commodity. In this scenario, we extend the current architecture to deal with multiple commodities and source-sink combinations. Results in Table 1 (Multi-com) and Fig. 5 show how MPNN-RL is able to effectively recover a distinct policy for each policy head, thus being able to successfully operate multi-commodity flows within the same network.
+
+Computational analysis. We study the computational cost of MPNN-RL compared to MPC-based solutions. As shown in Fig. 6, we compare the time necessary to compute a single network flow decision. We do so across varying dimensions of the underlying graph, ranging from 10 up to 400 nodes. As verified by this experiment, learning-based approaches exhibit computational complexity linear in the number of nodes and graph connectivity, without significant decay in performance.
+
+§ 5 OUTLOOK AND LIMITATIONS
+
+Research in network flow models, in both theory and practice, is largely scattered across the control, management science, and optimization literature, potentially hindering scientific progress. In this work, we propose a general framework that could enable learning-based approaches to help address the open challenges in this space: handling nonlinear dynamics and scalability, among others. In the hope of fostering a unification of tools among the reinforcement learning and network control communities, we aimed to (i) keep the exposition as agnostic as possible to the underlying RL algorithm, and (ii) showcase the versatility of our framework through numerous controlled experiments. However, what we present here should be considered, in our opinion, as exciting preliminary results aiming to gather more traction in the ML community towards the solution of hugely impactful real-world problems in the field of network control. Crucially, much work remains before learning-based frameworks can be considered a concrete alternative to current standards; we believe this research opens several promising directions for extending these concepts to large-scale applications.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..23e89ec688830ae04c5c795540e3394e79b58363
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,755 @@
+# Graph-Time Convolutional Autoencoders
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+We introduce the graph-time convolutional autoencoder (GTConvAE), a novel spatiotemporal architecture tailored to unsupervised learning for multivariate time series on networks. The GTConvAE leverages product graphs to represent the time series and a principled joint spatiotemporal convolution over this product graph. Instead of fixing the product graph at the outset, we make it parametric to attend to the spatiotemporal coupling for the task at hand. On top of this, we propose temporal downsampling in the encoder to improve the receptive field in a spatiotemporal manner without affecting the network structure; in the decoder, we consider the corresponding upsampling operator. We prove that GTConvAEs with graph integral Lipschitz filters are stable to relative network perturbations, ultimately showing the role of the different components in the encoder and decoder. Numerical experiments for denoising and anomaly detection in solar and water networks corroborate our findings and showcase the effectiveness of the GTConvAE compared with state-of-the-art alternatives.
+
+## 1 Introduction
+
+Learning unsupervised representations from spatiotemporal network data is commonly encountered in applications concerning multivariate data denoising [1], anomaly detection [2], missing data imputation [3], and forecasting [4], to name just a few. The challenge is to develop models that jointly capture the spatiotemporal dependencies in a computation- and data-efficient manner, yet remain tractable enough to understand the role played by the network structure and the dynamics over it. The autoencoder family of functions is of interest in this setting, but vanilla spatiotemporal forms [5-7] that ignore the network structure suffer from the well-known curse of dimensionality and lack inductive learning capabilities [8].
+
+Upon leveraging the network as an inductive bias [9], graph-time autoencoders have been recently developed. These approaches are typically composed of two interleaving modules: one capturing the spatial dependencies via graph neural networks (GNNs) [10] and one capturing the temporal dependencies via temporal CNNs or LSTM networks. For example, the work in [1] uses an edge-varying GNN [11] followed by a temporal convolution for motion denoising. The work in [12] considers LSTMs and graph convolutions for variational spatiotemporal autoencoders, which have been further investigated in $\left\lbrack {3,{13}}\right\rbrack$ for spatiotemporal data imputation as a graph-based matrix completion problem and for dynamic topologies, respectively. Graph-time autoencoders over dynamic topologies have also been investigated in [14, 15]. Lastly, [4] embeds the temporal information into the edges of a graph and develops an autoencoder over this graph for forecasting purposes.
+
+By working disjointly, first on the graph and then on the temporal dimension of the graph embeddings, these approaches fail to capture the joint spatiotemporal dependencies present in the raw data. It is also challenging to analyze their theoretical properties and to attribute to what extent the benefit comes from one module over the other. This aspect has been investigated for supervised spatiotemporal learning via GNNs [16-21] but not for autoencoders. The two works elaborating on this are [2] and [22]. The work in [2] replicates the graph over time via the Cartesian product principle [23] and uses an order-one graph convolution [24] to learn spatiotemporal embeddings that are fed into an LSTM module to improve the temporal memory, ultimately giving more importance to the temporal dimension of the latent representation. Differently, [25] proposed a variational graph-time autoencoder whose encoder is based on [17] and whose decoder is a multi-layer perceptron; hence, it is suitable only for topological tasks such as dynamic link prediction, but not for tasks concerning time series over networks such as denoising or anomaly detection.
+
+In this paper, we propose a GTConvAE that, differently from [2], jointly captures the spatiotemporal coupling both in the raw data and in the intermediate higher-level representations. The GTConvAE operates over a parametric product graph [26] to attend to the spatiotemporal coupling for the task at hand rather than fixing it at the outset. Differently from [17], the GTConvAE has a symmetric structure with graph-time convolutions in both the encoder and the decoder, making it suitable for tasks concerning network time series. We also study the capability of the GTConvAE to transfer learning across different networks, which is of importance as practical topologies differ from the models used during training (e.g., because of model uncertainty, perturbations, or dynamics). The latter has been studied for traditional [27-29] and graph-time GNN models [20, 26, 30] but not for graph-time autoencoders.
+
+Our contribution in this paper is twofold. First, we propose a symmetric graph-time convolutional autoencoder that jointly captures the spatiotemporal coupling in the data, suited for tasks concerning multivariate time series over networks. The GTConvAE represents the time series as a graph signal over a product graph and uses the latter as an inductive bias to learn unsupervised representations. The product graph is parametric to attend to the coupling for the specific task, and it generalizes the popular choices of product graphs [31]. We also propose temporal downsampling/upsampling in the encoder/decoder to increase the spatiotemporal receptive field without affecting the network structure, hence preserving the inductive bias. Second, we prove that the GTConvAE is stable to relative perturbations on the spatial graph, highlighting the role played by the encoder, decoder, parametric product graph, convolutional filters, and downsampling/upsampling rate. Numerical experiments on denoising and anomaly detection over solar and water networks corroborate our findings and show competitive performance compared with more involved state-of-the-art alternatives.
+
+The rest of this paper is organized as follows. Section 2 formulates the GTConvAE model and Section 3 analyzes its theoretical properties. Numerical experiments are presented in Section 4 and conclusions in Section 5. The proofs are collected in the appendix.
+
+## 2 Graph-Time Convolutional Autoencoders
+
+The GTConvAE learns representations from $N$ -dimensional multivariate time series ${\mathbf{x}}_{t} \in {\mathbb{R}}^{N}, t = 1,\ldots , T$ , collected in matrix $\mathbf{X} \in {\mathbb{R}}^{N \times T}$ . These time series have a spatial network structure represented by a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ composed of $N$ nodes $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ and $M$ edges. The $n$ -th row of $\mathbf{X}$ contains the time series ${\mathbf{x}}^{n} = {\left\lbrack {x}_{1}\left( n\right) ,\ldots ,{x}_{T}\left( n\right) \right\rbrack }^{\top }$ on node ${v}_{n}$ and the $t$ -th column a graph signal ${\mathbf{x}}_{t} = {\left\lbrack {x}_{t}\left( 1\right) ,\ldots ,{x}_{t}\left( N\right) \right\rbrack }^{\top }$ at timestamp $t$ [32, 33]. For example, the time series could be nodal pressures measured over junction nodes in a water distribution network, while the pipe connections rule the spatial structure. The representations learned from the tuple $\{ \mathcal{G},\mathbf{X}\}$ can then be used, among others, for anomaly detection [5], denoising dynamic data over graphs [1], and missing data completion [3].
+
+The GTConvAE follows the standard encoder-decoder structure [34], but each module jointly captures the spatiotemporal structure in the data. We denote the GTConvAE as
+
+$$
+\widehat{\mathbf{X}} = \operatorname{GTConvAE}\left( {\mathbf{X},\mathcal{G};\mathcal{H}}\right) \mathrel{\text{:=}} \operatorname{DEC}\left( {\operatorname{ENC}\left( {\mathbf{X},\mathcal{G};{\mathcal{H}}_{e}}\right) ,\mathcal{G};{\mathcal{H}}_{d}}\right)
+$$
+
+where the encoder $\operatorname{ENC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{e}}\right)$ and decoder $\operatorname{DEC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{d}}\right)$ are non-linear parametric functions and where the set $\mathcal{H} = {\mathcal{H}}_{e} \cup {\mathcal{H}}_{d}$ collects all parameters. The encoder takes as input the graph $\mathcal{G}$ and the time series $\mathbf{X}$ and produces higher-level representations $\mathbf{Z} \in {\mathbb{R}}^{N \times {T}_{e}}$ . These representations are built in a layered manner where each layer comprises: i) a joint graph-time convolutional filter to capture the spatiotemporal dependencies in a principled manner; ii) a temporal downsampling module to increase the receptive field without affecting the network structure; and iii) a pointwise nonlinearity to obtain more complex representations. The decoder has a mirrored structure w.r.t. the encoder, taking as input $\mathbf{Z}$ and outputting an estimate $\widehat{\mathbf{X}}$ of the input. The model parameters are estimated end-to-end by minimizing a spatiotemporally regularized reconstruction loss $\mathcal{L}\left( {\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}}\right)$ .
+
+### 2.1 Product Graph Representation of Network Time Series
+
+GTConvAE uses product graphs to represent the spatiotemporal dependencies in $\mathbf{X}$ [23]. Product graphs have been proven successful for processing multivariate time series, such as imputing missing values [35, 36], denoising [37], providing a spatiotemporal Fourier analysis [33], as well as building vector autoregressive models [38], spatiotemporal scattering transforms [39], and graph-time neural networks [26]. Specifically, denote by $\mathbf{S} \in {\mathbb{R}}^{N \times N}$ the graph shift operator (GSO) of the spatial graph $\mathcal{G}$ , e.g., adjacency, Laplacian. Consider also a temporal graph ${\mathcal{G}}_{T} = \left( {{\mathcal{V}}_{T},{\mathcal{E}}_{T},{\mathbf{S}}_{T}}\right)$ , where the node set ${\mathcal{V}}_{T} = \{ 1,\ldots , T\}$ comprises the discrete-time instants, the edge set ${\mathcal{E}}_{T} \subseteq {\mathcal{V}}_{T} \times {\mathcal{V}}_{T}$ captures the temporal dependencies, e.g., a directed line or a cyclic graph, and ${\mathbf{S}}_{T} \in {\mathbb{R}}^{T \times T}$ is the respective GSO [40,41]. The time series ${\mathbf{x}}^{n}$ can now be seen as a graph signal over the temporal graph ${\mathbf{S}}_{T}$ , where ${x}_{t}\left( n\right)$ is the scalar value assigned to the $t$ -th node of ${\mathcal{G}}_{T}$ .
+
+The product graph representing the spatiotemporal patterns in $\mathbf{X}$ is denoted by ${\mathcal{G}}_{\diamond } = {\mathcal{G}}_{T}\diamond \mathcal{G} =$ $\left( {{\mathcal{V}}_{\diamond },{\mathcal{E}}_{\diamond },{\mathbf{S}}_{\diamond }}\right)$ . The node set ${\mathcal{V}}_{\diamond }$ is the Cartesian product between ${\mathcal{V}}_{T}$ and $\mathcal{V}$ which leads to ${NT}$ distinct spatiotemporal nodes ${i}_{\diamond } = \left( {n, t}\right)$ . The edge set ${\mathcal{E}}_{\diamond }$ connects these nodes and the GSO ${\mathbf{S}}_{\diamond } \in {\mathbb{R}}^{{NT} \times {NT}}$ is dictated by the product graph. Fixing the product graph implies fixing the spatiotemporal dependencies in the data, which may lead to wrong inductive biases. To avoid this and improve flexibility, we consider a parametric product graph whose GSO is of the form
+
+$$
+{\mathbf{S}}_{\diamond } = \mathop{\sum }\limits_{{i = 0}}^{1}\mathop{\sum }\limits_{{j = 0}}^{1}{s}_{ij}\left( {{\mathbf{S}}_{T}^{i} \otimes {\mathbf{S}}^{j}}\right) = \underset{\text{self-loops }}{\underbrace{{s}_{00}{\mathbf{I}}_{T} \otimes {\mathbf{I}}_{N}}} + \underset{\text{Cartesian }}{\underbrace{{s}_{01}{\mathbf{I}}_{T} \otimes \mathbf{S} + {s}_{10}{\mathbf{S}}_{T} \otimes {\mathbf{I}}_{N}}} + \underset{\text{Kronecker }}{\underbrace{{s}_{11}{\mathbf{S}}_{T} \otimes \mathbf{S}}}, \tag{1}
+$$
+
+where the scalar parameters $\left\{ {s}_{ij}\right\}$ weight the spatiotemporal connections and encompass the typical product graph choices such as the Kronecker, the Cartesian, and the strong product. By column-vectorizing $\mathbf{X}$ into ${\mathbf{x}}_{\diamond } = \operatorname{vec}\left( \mathbf{X}\right) \in {\mathbb{R}}^{NT}$ , we obtain a product graph signal assigning a real value to each spacetime node ${i}_{\diamond }$ . That is, the dynamic data ${\mathbf{x}}_{t}$ over $\mathcal{G}$ is now a static signal ${\mathbf{x}}_{\diamond }$ over the product graph ${\mathcal{G}}_{\diamond }$ .
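+
+The following minimal sketch, not taken from the paper's implementation, shows how the parametric product-graph GSO in (1) can be assembled with SciPy sparse Kronecker products; the spatial cycle graph, the temporal line graph, and the weights $s_{ij}$ are illustrative placeholders.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def parametric_product_gso(S, S_T, s):
+    """Assemble S_prod per (1): S is the N x N spatial GSO, S_T the T x T temporal GSO,
+    and s a dict mapping (i, j) to the scalar weights s_ij."""
+    N, T = S.shape[0], S_T.shape[0]
+    I_N, I_T = sp.identity(N), sp.identity(T)
+    return (s[0, 0] * sp.kron(I_T, I_N)            # self-loops
+            + s[0, 1] * sp.kron(I_T, S)            # spatial edges (Cartesian part)
+            + s[1, 0] * sp.kron(S_T, I_N)          # temporal edges (Cartesian part)
+            + s[1, 1] * sp.kron(S_T, S)).tocsr()   # spatiotemporal edges (Kronecker part)
+
+# Toy example: a 4-node spatial cycle and a 3-node directed temporal line graph.
+S = sp.csr_matrix(np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1))
+S_T = sp.csr_matrix(np.eye(3, k=-1))
+s = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.2}
+S_prod = parametric_product_gso(S, S_T, s)         # (N*T) x (N*T) = 12 x 12 sparse GSO
+```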
+
+### 2.2 Encoder
+
+The encoder is an ${L}_{e}$ -layered architecture in which each layer comprises a bank of product graph convolutional filters, temporal downsampling, and pointwise nonlinearities.
+
+GTConv filter captures the spatiotemporal patterns in the data matrix X. Given the parametric product graph representation ${\mathcal{G}}_{\diamond } = \left( {{\mathcal{V}}_{\diamond },{\mathcal{E}}_{\diamond },{\mathbf{S}}_{\diamond }}\right) \left\lbrack \text{cf. (1)}\right\rbrack$ and the product graph signal ${\mathbf{x}}_{\diamond } = \operatorname{vec}\left( \mathbf{X}\right)$ as input, the output of a graph-time convolutional filter of order $K$ is
+
+$$
+{\mathbf{y}}_{\diamond } = \mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{x}}_{\diamond } = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\mathbf{S}}_{\diamond }^{k}{\mathbf{x}}_{\diamond } \tag{2}
+$$
+
+where $\mathbf{h} = {\left\lbrack {h}_{0},\ldots ,{h}_{K}\right\rbrack }^{\top }$ are the filter parameters and $\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\mathbf{S}}_{\diamond }^{k}$ the filtering matrix. The filter in (2) is called convolutional as the output ${\mathbf{y}}_{\diamond }$ is a weighted linear combination of shifted graph signals over the product graph up to $K$ times [42]. Hence, the filter is spatiotemporally local in a neighborhood of radius $K$ . The filter locality does not only depend on the order $K$ but also on the type of product graph. For example, for a fixed $K$ , the Cartesian product is more localized than the strong product, which can be considered to have a longer spatiotemporal memory [26]. Consequently, learning parameters $\left\{ {s}_{ij}\right\}$ in (1) implies learning the multi-hop resolution radius.
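+
+As a rough illustration (assumptions: a small Cartesian product graph and arbitrary filter taps), the filter in (2) can be applied by repeatedly shifting the signal over the product graph rather than forming the matrix powers explicitly:
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def gtconv_filter(S_prod, x, h):
+    """Compute y = sum_k h[k] * S_prod^k @ x by repeated sparse shifts [cf. (2)]."""
+    y = h[0] * x
+    z = x.copy()
+    for k in range(1, len(h)):
+        z = S_prod @ z                 # one further graph-time shift
+        y = y + h[k] * z
+    return y
+
+# Small Cartesian product graph: 4-node spatial cycle, 3-node directed temporal line.
+S = sp.csr_matrix(np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1))
+S_T = sp.csr_matrix(np.eye(3, k=-1))
+S_prod = sp.kron(sp.identity(3), S) + sp.kron(S_T, sp.identity(4))
+
+x_prod = np.random.randn(12)                   # x_prod = vec(X) with N = 4, T = 3
+h = np.array([0.5, 0.3, 0.1, 0.05, 0.05])      # K = 4 illustrative filter taps
+y_prod = gtconv_filter(S_prod, x_prod, h)
+```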
+
+In the $\ell$ -th layer, the encoder has ${F}_{\ell - 1}$ product graph signal features ${\mathbf{x}}_{\diamond ,\ell - 1}^{1},\ldots ,{\mathbf{x}}_{\diamond ,\ell - 1}^{g},\ldots {\mathbf{x}}_{\diamond ,\ell - 1}^{{F}_{\ell - 1}}$ , processes these with a bank of ${F}_{\ell }{F}_{\ell - 1}$ filters and outputs ${F}_{\ell }$ product graph signal features as
+
+$$
+{\mathbf{y}}_{\diamond ,\ell }^{f} = \mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\mathbf{H}}^{fg}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{x}}_{\diamond ,\ell - 1}^{g},\;f = 1,\ldots {F}_{\ell }, \tag{3}
+$$
+
+which are the higher-level linear representations of the layer.
+
+Temporal downsampling reduces the temporal dimension of each output ${\left\{ {\mathbf{y}}_{\diamond ,\ell }^{f}\right\} }_{f}$ in (3) by downsampling it along the time axis with a rate $r$ . More specifically, we first transform the $f$ -th output ${\mathbf{y}}_{\diamond ,\ell }^{f} \in {\mathbb{R}}^{N{T}_{\ell - 1}^{e}}$ into a matrix ${\mathbf{Y}}_{\ell }^{f} = {\operatorname{vec}}^{-1}\left( {\mathbf{y}}_{\diamond ,\ell }^{f}\right) \in {\mathbb{R}}^{N \times {T}_{\ell - 1}^{e}}$ and then summarize every $r$ consecutive columns without overlap to obtain the downsampled matrix ${\mathbf{X}}_{d,\ell }^{f} \in {\mathbb{R}}^{N \times {T}_{\ell }^{e}}$ with ${T}_{\ell }^{e} < {T}_{\ell - 1}^{e}$ . The $(n, t)$ -th entry of ${\mathbf{X}}_{d,\ell }^{f}$ is computed as
+
+$$
+{\mathbf{X}}_{d,\ell }^{f}\left( {n, t}\right) = \operatorname{SUM}\left( {{\mathbf{Y}}_{\ell }^{f}\left( {n, r\left( {t - 1}\right) + 1 : {rt}}\right) }\right) ,\;f = 1,\ldots {F}_{\ell }, \tag{4}
+$$
+
+where $\operatorname{SUM}\left( \cdot \right)$ is a summary function over the temporal indices $r\left( {t - 1}\right) + 1$ to ${rt}$ . This summary function could be a simple downsampling (i.e., output the first column of the block ${\mathbf{Y}}_{\ell }^{f}\left( {n, r\left( {t - 1}\right) + 1 : {rt}}\right)$ ) or an aggregation function (e.g., the mean/max/min per spatial node).
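+
+A minimal sketch of the non-overlapping temporal downsampling in (4), assuming a max summary function and a temporal length divisible by $r$ (names are illustrative):
+
+```python
+import numpy as np
+
+def temporal_downsample(Y, r, summary=np.max):
+    """Summarize every r consecutive time columns of Y (N x T_prev) per node [cf. (4)]."""
+    N, T_prev = Y.shape
+    T_new = T_prev // r                        # assume T_prev is divisible by r
+    blocks = Y[:, :T_new * r].reshape(N, T_new, r)
+    return summary(blocks, axis=2)             # N x T_new downsampled matrix
+
+Y = np.arange(24, dtype=float).reshape(4, 6)   # N = 4 nodes, T_prev = 6 time steps
+X_d = temporal_downsample(Y, r=2)              # N x 3; max over each pair of columns
+```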
+
+This temporal downsampling increases the encoder spatiotemporal memory without affecting the spatial structure. That is, nodes whose temporal indices are $r$ apart become neighbors, which brings a longer memory into the next layer and increases the encoder receptive field. While spatial graph pooling could also be added [43], we do not advocate it for two reasons. First, the spatial graph acts as an inductive bias for the GTConvAE [9]; hence, changing the graph in the layers via graph reduction, coarsening, or alternatives will affect the spatial structure, ultimately changing the inductive bias. Second, the spatial graph often represents the communication channels for a distributed implementation of GTConv $\left\lbrack {{20},{42},{44}}\right\rbrack$ , and changing it may be physically impossible as sensor nodes have a limited transmission radius. An option in the latter setting may be zero-pad spatial pooling $\left\lbrack {{45},{46}}\right\rbrack$ , but it requires memorizing the indices where the zero-padding is applied, which may be challenging for large graphs.
+
+Activation functions nonlinearize the downsampled features to increase the representational capacity. We consider an entry-wise nonlinear function $\sigma \left( \cdot \right)$ such as the ReLU and produce the $\ell$ -th layer output as
+
+$$
+{\mathbf{X}}_{\ell + 1}^{f} = \sigma \left( {\mathbf{X}}_{d,\ell }^{f}\right) ,\;f = 1,\ldots {F}_{\ell }. \tag{5}
+$$
+
+The encoder performs operations (3)-(4)-(5) for all the ${L}_{e}$ layers to yield the encoded output
+
+$$
+{\mathbf{Z}}_{\diamond } \mathrel{\text{:=}} {\mathbf{X}}_{\diamond ,{L}_{e}} = \operatorname{ENC}\left( {{\mathbf{x}}_{\diamond ,0},\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{e},\mathbf{s}}\right) , \tag{6}
+$$
+
+where ${\mathbf{x}}_{\diamond ,0} \mathrel{\text{:=}} {\mathbf{x}}_{\diamond } \in {\mathbb{R}}^{NT}$ , ${\mathbf{Z}}_{\diamond } = \left\lbrack {{\mathbf{z}}_{\diamond }^{1},\ldots ,{\mathbf{z}}_{\diamond }^{{F}_{{L}_{e}}}}\right\rbrack \in {\mathbb{R}}^{N{T}_{{L}_{e}} \times {F}_{{L}_{e}}}$ , and we made explicit the dependence on the product graph parameters $\mathbf{s} = {\left\lbrack {s}_{00},{s}_{01},{s}_{10},{s}_{11}\right\rbrack }^{\top }$ [cf. (1)].
+
+### 2.3 Decoder
+
+Mirroring the encoder, the decoder reconstructs the input from the latent representations in (6). At a generic layer $\ell$ , the decoder performs graph-time convolutional filtering, followed by a temporal upsampling and a pointwise nonlinearity.
+
+GTConv filtering decodes the spatiotemporal latent representations from the encoder. Considering again ${F}_{\ell - 1}$ input features ${\mathbf{z}}_{\diamond ,\ell - 1}^{1},\ldots ,{\mathbf{z}}_{\diamond ,\ell - 1}^{g},\ldots ,{\mathbf{z}}_{\diamond ,\ell - 1}^{{F}_{\ell - 1}}$ and a filter bank of ${F}_{\ell }{F}_{\ell - 1}$ GTConv filters as per (2), the outputs are
+
+$$
+{\mathbf{y}}_{\diamond ,\ell }^{f} = \mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\mathbf{H}}^{fg}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{z}}_{\diamond ,\ell - 1}^{g},\;f = 1,\ldots {F}_{\ell }. \tag{7}
+$$
+
+Upsampling zero-pads the temporal values removed during downsampling [cf. (4)] so that the final GTConvAE output matches the dimension of $\mathbf{X}$ . Specifically, given the $f$ -th feature ${\mathbf{y}}_{\diamond ,\ell }^{f} \in {\mathbb{R}}^{N{T}_{\ell - 1}^{d}}$ from (7), we again transform it into a matrix ${\mathbf{Y}}_{\ell }^{f} = {\operatorname{vec}}^{-1}\left( {\mathbf{y}}_{\diamond ,\ell }^{f}\right) \in {\mathbb{R}}^{N \times {T}_{\ell - 1}^{d}}$ and obtain the upsampled matrix ${\mathbf{Z}}_{u,\ell }^{f} \in {\mathbb{R}}^{N \times {T}_{\ell }^{d}}$ whose $(n, t)$ -th entry is computed as
+
+$$
+{\mathbf{Z}}_{u,\ell }^{f}\left( {n, t}\right) = \left\{ \begin{array}{ll} {\mathbf{Y}}_{\ell }^{f}\left( {n,\lceil t/r\rceil }\right) ; & \text{ if }\exists k \in \mathbb{Z} : t = {kr} \\ 0; & \text{ o/w } \end{array}\right. \tag{8}
+$$
+
+where $\lceil \cdot \rceil$ is the ceiling function. ${}^{1}$ The GTConv filter bank in the next layer interpolates these zero-padded values from the downsampled ones. This implies that the downsampling rate in the
+
+---
+
+${}^{1}$ We considered the same down/up-sampling rate in each layer of the decoder and encoder; hence, because of the mirrored structure ${T}_{\ell }^{e}$ in (5) equals ${T}_{\ell - 1}^{d}$ in (8).
+
+---
+
+encoder cannot be too aggressive, otherwise information is lost; likewise, the filter orders in the decoder cannot be too small, otherwise they lack the interpolatory capacity to recover the zero-padded values.
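+
+The zero-padding upsampling in (8) admits an equally simple sketch (again only illustrative): the kept columns land at time indices that are multiples of $r$ , and all other columns are zero.
+
+```python
+import numpy as np
+
+def temporal_upsample(Y, r):
+    """Y: N x T_prev latent features; returns the N x (r * T_prev) zero-padded matrix of (8)."""
+    N, T_prev = Y.shape
+    Z = np.zeros((N, r * T_prev))
+    Z[:, r - 1::r] = Y      # column t = kr (1-based) holds Y[:, k-1], zeros elsewhere
+    return Z
+
+Y = np.arange(12, dtype=float).reshape(4, 3)   # N = 4, T_prev = 3
+Z_u = temporal_upsample(Y, r=2)                # N x 6; columns 2, 4, 6 (1-based) carry Y
+```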
+
+Activation functions again nonlinearize the upsampled features in (8) and yield
+
+$$
+{\mathbf{Z}}_{\ell }^{f} = \sigma \left( {\mathbf{Z}}_{u,\ell }^{f}\right) ,\;f = 1,\ldots {F}_{\ell }. \tag{9}
+$$
+
+The decoder performs operations (7)-(8)-(9) for all ${L}_{d}$ layers to yield the decoded output ${\widehat{\mathbf{x}}}_{\diamond } =$ ${\mathbf{z}}_{\diamond ,{L}_{d}} \in {\mathbb{R}}^{NT}$ , which also corresponds to the GTConvAE output
+
+$$
+{\widehat{\mathbf{x}}}_{\diamond } = {\mathbf{z}}_{\diamond ,{L}_{d}} = \operatorname{DEC}\left( {{\mathbf{Z}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{d},\mathbf{s}}\right) , \tag{10}
+$$
+
+where we match the dimensions by setting ${F}_{{L}_{d}} = 1$ .
+
+### 2.4 Loss Function
+
+Given (6) and (10), the end-to-end GTConvAE mapping can be detailed as
+
+$$
+{\widehat{\mathbf{x}}}_{\diamond } = \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};\mathcal{H},\mathbf{s}}\right) = \operatorname{DEC}\left( {\operatorname{ENC}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{e},\mathbf{s}}\right) ,\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{d},\mathbf{s}}\right) . \tag{11}
+$$
+
+The GTConv filter parameters in $\mathcal{H}$ and the product graph parameters in $\mathbf{s}$ are estimated by minimizing the loss function
+
+$$
+\mathcal{L}\left( {\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}}\right) = {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\begin{Vmatrix}{\mathbf{x}}_{\diamond } - {\widehat{\mathbf{x}}}_{\diamond }\end{Vmatrix}}_{2}\right\rbrack + \rho \parallel \mathbf{s}{\parallel }_{1}. \tag{12}
+$$
+
+where the first term measures the reconstruction error over the probability distribution $\mathcal{D}$ of the training set, whereas the second term imposes sparsity in the spatiotemporal dependencies of the product graph. Scalar $\rho > 0$ controls the trade-off between fitting and regularization, and a higher value implies a stronger spatiotemporal sparsity (from the $\ell_1$ -norm $\parallel \cdot {\parallel }_{1}$ ); i.e., sparser spatiotemporal attention.
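+
+A minimal sketch of the loss in (12), where the expectation is approximated by the mean over a batch of vectorized training samples; the array shapes and names are assumptions for illustration:
+
+```python
+import numpy as np
+
+def gtconvae_loss(X_batch, X_hat_batch, s, rho=0.2):
+    """X_batch, X_hat_batch: B x (N*T) arrays of inputs and reconstructions;
+    s: length-4 vector of product-graph weights [cf. (12)]."""
+    recon = np.mean(np.linalg.norm(X_batch - X_hat_batch, axis=1))  # E_D[||x - x_hat||_2]
+    sparsity = rho * np.sum(np.abs(s))                              # rho * ||s||_1
+    return recon + sparsity
+```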
+
+Complexity analysis: Denoting the maximum number of features over all layers by ${F}_{\max } = \max \left\{ {F}_{\ell }\right\}$ , the GTConvAE has $\left| \mathcal{H}\right| = \left( {{L}_{e} + {L}_{d}}\right) \left( {K + 1}\right) {F}_{\max }^{2}$ parameters. This is because each GTConv filter (2) has $K + 1$ parameters and in each layer a filter bank of at most ${F}_{\max }^{2}$ filters is used. Although the product graph has a large dimension, it is highly sparse, and the computational complexity of the GTConvAE is of order $\mathcal{O}\left( {{M}_{\diamond }\left| \mathcal{H}\right| }\right)$ , where ${M}_{\diamond } = {NT} + N{M}_{T} + {MT} + {2M}{M}_{T}$ is the number of edges of the product graph ( $M$ edges in the spatial graph and ${M}_{T}$ edges in the temporal graph). This is because each graph-time filter has a computational complexity of order $\mathcal{O}\left( {\left( {K + 1}\right) {M}_{\diamond }}\right)$ [26] and the GTConvAE consists of $\left( {{L}_{e} + {L}_{d}}\right) {F}_{\max }^{2}$ graph-time filters. Note that we consider a sampling rate of $r = 1$ to provide a worst-case analysis; the computational complexity reduces further for $r > 1$ .
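+
+For instance, with the configuration used later for denoising in Section 4.1 ( ${L}_{e} = {L}_{d} = 3$ , $K = 4$ , ${F}_{\max } = 8$ ), this count evaluates to at most $\left| \mathcal{H}\right| = (3+3)(4+1)\,{8}^{2} = 1920$ filter parameters, plus the four product graph weights in $\mathbf{s}$ .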
+
+## 3 Stability Analysis
+
+In this section, we conduct a stability analysis of the GTConvAE w.r.t. relative perturbations in the spatial graph. This stability analysis is motivated by the fact that we do not always have access to the ground truth spatial graph due to modeling issues or when the physical network undergoes slight changes over time. Hence, the spatial graph used for training differs from that used for testing; thus, having a stable GTConvAE is desirable to perform the tasks reliably.
+
+We consider the relative perturbation model proposed in [27]
+
+$$
+\widehat{\mathbf{S}} = \mathbf{S} + \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) \tag{13}
+$$
+
+where $\widehat{\mathbf{S}}$ is the perturbed GSO and $\mathbf{E}$ is the perturbation matrix with bounded operator norm $\parallel \mathbf{E}\parallel \leq \epsilon$ . This model ties the perturbation to the graph structure, i.e., a higher-degree node (a node with more or higher-weighted edges) is subject to a relatively larger perturbation.
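+
+The sketch below, a rough illustration rather than the paper's protocol, draws one such relative perturbation by sampling a symmetric $\mathbf{E}$ and rescaling it to a target operator norm $\epsilon$ :
+
+```python
+import numpy as np
+
+def relative_perturbation(S, eps, rng=None):
+    """Return S_hat = S + (S @ E + E @ S) with a symmetric E of operator norm eps [cf. (13)]."""
+    if rng is None:
+        rng = np.random.default_rng(0)
+    N = S.shape[0]
+    E = rng.standard_normal((N, N))
+    E = (E + E.T) / 2                           # symmetrize the perturbation
+    E = eps * E / np.linalg.norm(E, ord=2)      # rescale to spectral norm eps
+    return S + S @ E + E @ S
+
+S = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])   # small path-graph adjacency
+S_hat = relative_perturbation(S, eps=0.05)
+```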
+
+### 3.1 Spatiotemporal integral Lipschitz filters
+
+To investigate the stability of GTConvAE, we first characterize the graph-time convolutional filters in the spectral domain. Consider the eigendecompositions of the spatial GSO $\mathbf{S} = \mathbf{V}\mathbf{\Lambda }{\mathbf{V}}^{\mathrm{H}}$ and of the temporal GSO ${\mathbf{S}}_{T} = {\mathbf{V}}_{T}{\mathbf{\Lambda }}_{T}{\mathbf{V}}_{T}^{\mathrm{H}}$ . Matrices $\mathbf{V} = \left\lbrack {\mathbf{v}}_{1},\ldots ,{\mathbf{v}}_{N}\right\rbrack$ and ${\mathbf{V}}_{T} = \left\lbrack {\mathbf{v}}_{T,1},\ldots ,{\mathbf{v}}_{T, T}\right\rbrack$ collect the spatial and the temporal eigenvectors, respectively, and $\mathbf{\Lambda } = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{N}}\right)$ and ${\mathbf{\Lambda }}_{T} = \operatorname{diag}\left( {{\lambda }_{T,1},\ldots ,{\lambda }_{T, T}}\right)$ the corresponding eigenvalues. From (1), the eigendecomposition of the product graph GSO is ${\mathbf{S}}_{\diamond } = {\mathbf{V}}_{\diamond }{\mathbf{\Lambda }}_{\diamond }{\mathbf{V}}_{\diamond }^{\mathrm{H}}$ , where the eigenvectors ${\mathbf{V}}_{\diamond } = {\mathbf{V}}_{T} \otimes \mathbf{V}$ are the Kronecker product $\otimes$ of the respective eigenvector matrices and the eigenvalues ${\mathbf{\Lambda }}_{\diamond } = {\mathbf{\Lambda }}_{T}\diamond \mathbf{\Lambda }$ are defined by the product graph rule. As in graph signal processing [32], it is possible to characterize the joint graph-time Fourier transform of product graph signals. Specifically, the graph-time Fourier transform of signal ${\mathbf{x}}_{\diamond }$ is defined as $\widetilde{\mathbf{x}} = {\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}{\mathbf{x}}_{\diamond }$ and the eigenvalues in ${\mathbf{\Lambda }}_{\diamond }$ collect the graph-time frequencies of the product graph [33]. Applying this Fourier transform to the input and output of the GTConv filter in (2), we can write the filter input-output relation as ${\widetilde{\mathbf{y}}}_{\diamond } = \mathbf{H}\left( {\mathbf{\Lambda }}_{\diamond }\right) \widetilde{\mathbf{x}}$ , where ${\widetilde{\mathbf{y}}}_{\diamond }$ is the Fourier transform of the output and $\mathbf{H}\left( {\mathbf{\Lambda }}_{\diamond }\right)$ is an ${NT} \times {NT}$ diagonal matrix containing the filter frequency response on the main diagonal. This frequency response is of the form
+
+$$
+h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right) = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k} \tag{14}
+$$
+
+where ${\lambda }_{\diamond ,\left( {n, t}\right) } = {\lambda }_{T, t}\diamond {\lambda }_{n}$ indicates the eigenvalue of ${\mathbf{S}}_{\diamond }$ corresponding to the spatial index $n \in \left\lbrack N\right\rbrack$ and temporal index $t \in \left\lbrack T\right\rbrack$ of the product graph.
+
+The eigenvalues ${\lambda }_{\diamond ,\left( {n, t}\right) }$ can be considered as the frequencies of the product graph and can be ordered in ascending order of magnitude. We can then characterize the variation of the filter frequency response for two different spatial eigenvalues.
+
+Definition 1. A GTConv filter with a frequency response $h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right)$ is graph integral Lipschitz if there exists constant $C > 0$ such that for all frequencies ${\lambda }_{\diamond ,\left( {n, t}\right) },{\lambda }_{\diamond ,\left( {{n}^{\prime },{t}^{\prime }}\right) } \in {\mathbf{\Lambda }}_{\diamond }$ , it holds that
+
+$$
+\left| {h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right) - h\left( {\lambda }_{\diamond ,\left( {{n}^{\prime },{t}^{\prime }}\right) }\right) }\right| \leq C\frac{\left| {\lambda }_{n} - {\lambda }_{{n}^{\prime }}\right| }{\left| {{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}\right| /2}\text{ for all }\left\{ {{\lambda }_{n},{\lambda }_{{n}^{\prime }}}\right\} \in \mathbf{\Lambda }. \tag{15}
+$$
+
+Expression (15) states that the frequency response of a graph-time convolutional filter should vary at most sub-linearly across spatial frequencies, with a variation measured relative to their midpoint $\left| {{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}\right| /2$ . This implies
+
+$$
+\left| {{\lambda }_{n}\frac{\partial h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right) }{\partial {\lambda }_{n}}}\right| \leq C\text{ for all }{\lambda }_{n} \in \mathbf{\Lambda }\;\text{ and }\;{\lambda }_{\diamond ,\left( {n, t}\right) } \in {\mathbf{\Lambda }}_{\diamond } \tag{16}
+$$
+
+which means the integral Lipschitz filter cannot vary drastically at high frequencies. Hence, such a filter can discriminate low-frequency content but not high-frequency content.
+
+Definition 2. A graph-time convolutional filter has normalized frequency response if $\left| {h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right) }\right| \leq 1$ for all ${\lambda }_{\diamond ,\left( {n, t}\right) } \in {\mathbf{\Lambda }}_{\diamond }$ .
+
+This definition is a direct consequence of normalizing the filters' frequency response by its maximum value. We shall show next that a GTConvAE with filters satisfying Defs. 1 and 2 is stable to perturbations of the form (13).
+
+### 3.2 Stability result
+
+The following theorem with proof in Appendix A provides the main result.
+
+Theorem 1. Consider a GTConvAE with an ${L}_{e}$ -layer encoder and an ${L}_{d}$ -layer decoder having ${F}_{\ell } \leq {F}_{\max }$ and ${F}_{d,\ell } \leq {F}_{\max }$ features per layer in the encoder and decoder, respectively, and a summary function $\operatorname{SUM}\left( \cdot \right)$ performing pure downsampling with rate $r$ . Consider also that the filters are integral Lipschitz [cf. Def. 1] with a normalized frequency response [cf. Def. 2] and that the nonlinearities are 1-Lipschitz (e.g., ReLU, absolute value). Let this GTConvAE be trained over the product graph (1) and deployed over its perturbed version whose spatial GSO is given in (13) with a perturbation of at most $\parallel \mathbf{E}\parallel \leq \epsilon$ . The distance between the two models is upper bounded by
+
+$$
+\parallel \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}}\right) - \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}}\right) {\parallel }_{2} \leq \left( {{L}_{d} + {L}_{e}}\right) {r}^{-{L}_{e}/2}{\epsilon \Delta }{F}_{\max }^{{L}_{e} + {L}_{d} - 1}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2}, \tag{17}
+$$
+
+where $\Delta = {2C}\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\text{ max }}}\right) \left( {1 + \delta \sqrt{NT}}\right)$ , and $\delta = {\left( \parallel \mathbf{U} - \mathbf{V}{\parallel }^{2} + 1\right) }^{2} - 1$ with eigenvectors $\mathbf{U}$ from $\mathbf{E} = {\mathbf{{UMU}}}^{\mathrm{H}}$ and $\mathbf{V}$ from $\mathbf{S} = {\mathbf{{V\Lambda V}}}^{\mathrm{H}}$ .
+
+The result (17) states that the GTConvAE is stable to relative perturbations. It also suggests that the GTConvAE is less stable for larger product graphs $\left( \sqrt{NT}\right)$ since more nodes pass information over the perturbed edges. Moreover, making the model more complex by increasing the number of features or layers compromises stability, as more graph-time convolutional filters work on a perturbed graph $\left( {F}_{\max }^{{L}_{e} + {L}_{d} - 1}\right)$ . We also see that the stability improves with a sampling rate $r > 1$ because fewer nodes operate over the perturbed graph after downsampling. Furthermore, a deeper encoder implies more downsampling and, hence, improved stability; yet there is a trade-off among the terms ${r}^{-{L}_{e}/2}$ , ${F}_{\max }^{{L}_{e} + {L}_{d} - 1}$ , and ${L}_{e} + {L}_{d}$ in the bound. Finally, parameters ${s}_{01}$ and ${s}_{11}$ appear in the stability bound because they are the only ones composing the spatial edges; thus, minimizing $\parallel \mathbf{s}{\parallel }_{1}$ in (12) leads to improved stability.
+
+## 4 Numerical Results
+
+This section compares the GTConvAE with baseline solutions and competitive alternatives for time series denoising as well as anomaly detection with real data from solar irradiance and water networks. In all experiments, the ADAM optimizer with the standard hyperparameters is used and an unweighted directed line graph is considered for the temporal graph in (1).
+
+### 4.1 Denoising of solar irradiance time series
+
+We consider the task of denoising solar irradiance time series over $N = {75}$ solar cities around the northern region of the U.S., measured in GHI $\left( {W/{m}^{2}}\right)$ [4]. Each solar city is a vertex, and an undirected edge is set using the physical distances between the cities via a Gaussian threshold kernel with $\sigma = {0.25}$ and ${th} = {0.1}$ after normalizing the maximum weight to 1 [32]. The noise is generated via a zero-mean Gaussian distribution with a covariance matrix corresponding to the pseudo-inverse of the normalized graph Laplacian.
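+
+A sketch of one plausible reading of this graph construction (the exact ordering of normalization and thresholding is an assumption, and the coordinates are placeholders for the solar-city locations):
+
+```python
+import numpy as np
+
+def gaussian_threshold_graph(coords, sigma=0.25, th=0.1):
+    """coords: N x 2 positions; returns the N x N thresholded Gaussian-kernel adjacency."""
+    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
+    W = np.exp(-d**2 / (2 * sigma**2))
+    np.fill_diagonal(W, 0.0)          # no self-loops
+    W /= W.max()                      # normalize the maximum weight to 1
+    W[W < th] = 0.0                   # drop weak edges
+    return W
+
+coords = np.random.rand(75, 2)        # stand-in for the 75 solar-city locations
+A = gaussian_threshold_graph(coords)
+```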
+
+
+
+Figure 1: Denoising performance of the proposed GTConvAE and alternatives. The standard deviation for all the models is of order ${10}^{-2}$ .
+
+Experimental setup. We considered the first 2000 samples for training and validation (2000-2014) and the subsequent 200 (2014-2016) for testing. The input data is a single feature corresponding to the GHI measurement, and the product graph has $N = {75}$ spatial nodes and $T = 8$ temporal nodes. The GTConvAE has three layers with $\{ 8,4,2\}$ features in the encoder and reversely in the decoder; all filters are of order four and the normalized Laplacian is used as the GSO; the downsampling rate is $r = 2$ ; the summary function in (4) is the max; and the activations are ReLUs. The regularizer weight in (12) is $\rho = {0.2}$ and the learning rate is ${25} \times {10}^{-4}$ . We compared the GTConvAE with the following alternatives:
+
+- C3D [5]: A non-graph spatiotemporal autoencoder using three-dimensional CNNs.
+
+- ConvLSTMAE [7]: A non-graph spatiotemporal autoencoder using two-dimensional CNNs followed by LSTMs.
+
+- STGAE [1]: A modular spatiotemporal graph autoencoder that uses an edge varying filter for the graph dimension followed by temporal convolution.
+
+- Baseline GCNN [42]: An autoencoder built with a conventional graph convolutional neural network using the time series as features over the nodes. The shift operator is the normalized Laplacian matrix.
+
+The first two methods are considered to show the role of using a distance graph as an inductive bias. The third method is considered to compare the joint GTConvAE with disjoint alternatives, whereas the last model is considered to show the role of the sparse product graphs rather than treating the time series as node features. The parameters for all models are chosen via grid search from the ranges reported in Appendix B.
+
+Results. Fig. 1 shows the reconstruction normalized mean squared error (NMSE) for different signal-to-noise ratios (SNRs). The proposed GTConvAE performs comparably to STGAE at low SNRs and better at high SNRs. We attribute this improvement to the ability of the GTConvAE to capture
+
+Table 1: Comparison of different models in the BATADAL dataset. All metrics are the higher the better.
+
+| Model | ${N}_{A}$ | $\mathcal{S}$ | ${\mathcal{S}}_{\text{TTD}}$ | ${\mathcal{S}}_{\mathrm{CM}}$ | TPR | TNR |
+| --- | --- | --- | --- | --- | --- | --- |
+| STGCAE-LSTM [2] | 7 | 0.924 | 0.920 | 0.928 | 0.892 | 0.964 |
+| TGCN [47] | 7 | 0.931 | 0.934 | 0.928 | 0.885 | 0.971 |
+| GTConvAE (ours) | 7 | 0.940 | 0.928 | 0.952 | 0.922 | 0.981 |
+
+jointly the spatiotemporal patterns in the data while STGAE operates disjointly. We also see that in comparison with the baseline GCNN, the GTConvAE performs consistently better, highlighting the importance of the sparser product graphs and temporal downsampling. Finally, we also observe a superior performance compared with the non-graph alternatives C3D and ConvLSTMAE.
+
+### 4.2 Anomaly detection in water networks
+
+We now consider the task of detecting cyber-physical attacks on a water network. We considered the C-town network from the Battle of ATtack Detection ALgorithms (BATADAL) dataset comprising $N = {388}$ nodes (demand junctions, storage tanks, and reservoirs) and 8762 hourly measurements of 43 different node feature signals for a period of 12 months. We used the same setup as in [47] and considered a correlation graph from the data. The dataset provides a normal operating condition comprising recordings for the first 12 months and an anomalous event operating condition comprising 7 attacks over the successive 3 months. Refer to [48, 49] for more detail about the BATADAL dataset.
+
+Experimental setup. The normal operating condition data are used to train the model for one-step forecasting, which is then used for detecting anomalies. The anomalous event operating condition data are used for testing, and an anomaly is flagged if the prediction error exceeds a fixed threshold. We set the threshold intuitively to three times the error variance during training. The inputs are the 43 time series over the $N = {388}$ nodes and we considered $T = 6$ for the temporal graph dimension. The GTConvAE has two layers with $\{ 8,2\}$ features in the encoder and reversely in the decoder; all filters are of order $K = 4$ ; the downsampling rate is $r = 2$ ; the summary function in (4) is the max; and the activations are ReLUs. The regularizer weight in (12) is $\rho = {0.14}$ and the learning rate is $5 \times {10}^{-4}$ . We compared the performance against two graph-based alternatives:
+
+- STGCAE-LSTM [2]: A related solution to our method that uses a Cartesian spatiotemporal graph with graph convolutions followed by an LSTM in the latent domain.
+
+- TGCN [47]: A modular graph-based autoencoder using cascades of temporal convolutions and message passing.
+
+The parameters for all models are obtained via grid search from the ranges reported in Appendix C. We measure the performance via the $\mathcal{S}$ -score defined for the BATADAL dataset, which combines ${\mathcal{S}}_{\text{TTD}}$ , accounting for the time to detect anomalies, and ${\mathcal{S}}_{\mathrm{CM}}$ , accounting for the classification accuracy. The $\mathcal{S}$ -score is defined as
+
+$$
+\mathcal{S} = {0.5}\left( {{\mathcal{S}}_{\mathrm{{TTD}}} + {\mathcal{S}}_{\mathrm{{CM}}}}\right) = {0.5}\left( {\left( {1 - \frac{1}{{N}_{A}}\mathop{\sum }\limits_{{i = 1}}^{{N}_{A}}\frac{{\mathrm{{TTD}}}_{i}}{\Delta {\mathrm{T}}_{i}}}\right) + \frac{\mathrm{{TPR}} + \mathrm{{TNR}}}{2}}\right) , \tag{18}
+$$
+
+where ${N}_{A}$ is the number of attacks, ${\mathrm{TTD}}_{i}$ is the time to detect the $i$ -th attack, $\Delta {T}_{i}$ is the duration of the $i$ -th attack, TPR is the true positive rate, and TNR is the true negative rate.
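+
+For concreteness, a small sketch of (18) with illustrative numbers (not the reported results); the per-attack detection delays and durations are assumed to be expressed in the same time unit:
+
+```python
+import numpy as np
+
+def s_score(ttd, dT, tpr, tnr):
+    """ttd, dT: per-attack detection delays and durations (length N_A); tpr, tnr: scalars."""
+    s_ttd = 1.0 - np.mean(np.asarray(ttd) / np.asarray(dT))   # timing component S_TTD
+    s_cm = 0.5 * (tpr + tnr)                                   # classification component S_CM
+    return 0.5 * (s_ttd + s_cm), s_ttd, s_cm
+
+# Illustrative numbers only.
+S, S_ttd, S_cm = s_score(ttd=[2, 1, 3], dT=[30, 25, 40], tpr=0.92, tnr=0.98)
+```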
+
+Results: Table 1 shows that all the models managed to detect all of the attacks; however, the TGCN has a better performance in timing ${\mathcal{S}}_{\text{TTD}}$ . This is due to the calibration of the threshold with a validation dataset in their work, while we used a fixed intuitive threshold based only on training. In the accuracy of anomaly detection ${\mathcal{S}}_{\mathrm{CM}}$ , the GTConvAE outperforms the other two models, as the product graphs alongside downsampling enable it to learn the spatiotemporal patterns in the data effectively. Overall, the GTConvAE performs better than the other models by a small margin.
+
+
+
+Figure 2: Stability results for different scenarios of the GTConvAE and fixed product graphs. (a) Different SNRs in the topology. (b) Different graph sizes under a 4 dB perturbation. (c) Different sampling rates $r$ .
+
+### 4.3 Stability analysis
+
+To investigate the stability of the GTConvAE, we trained the model over a synthesized dataset so we could control all the settings, such as the spatial graph size $N$ . The graph is an undirected stochastic block model with 5 communities and $N \in \{ {50},{100},\ldots ,{500}\}$ nodes. The edges are drawn independently with probability 0.8 for nodes in the same community and 0.2 otherwise. Each data sample is a diffused signal over the graph, $\mathbf{X} = \left\lbrack {\mathbf{{Sx}},\ldots ,{\mathbf{S}}^{T}\mathbf{x}}\right\rbrack$ with $T = 6$ and $\mathbf{x}$ having a single non-zero entry at a random location. The autoencoder is used to reconstruct this data.
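+
+A minimal sketch of this data generation, under the stated stochastic block model and diffusion assumptions (sizes and seeds are illustrative):
+
+```python
+import numpy as np
+
+def sbm_adjacency(N, n_blocks=5, p_in=0.8, p_out=0.2, rng=np.random.default_rng(0)):
+    """Undirected stochastic block model adjacency with equally sized communities."""
+    labels = np.repeat(np.arange(n_blocks), N // n_blocks)
+    P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
+    A = (rng.random((N, N)) < P).astype(float)
+    A = np.triu(A, 1)                  # keep the upper triangle,
+    return A + A.T                     # then symmetrize (no self-loops)
+
+N, T = 50, 6
+S = sbm_adjacency(N)
+x = np.zeros(N)
+x[np.random.default_rng(1).integers(N)] = 1.0     # single non-zero entry at random
+X = np.stack([np.linalg.matrix_power(S, t + 1) @ x for t in range(T)], axis=1)  # N x T
+```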
+
+Experimental setup. The model has two encoder layers and two decoder layers with sampling rate $r = 2$ . The encoder layers have $\{ 8,4\}$ features, and the decoder mirrors them. All filters are of order four and the normalized graph Laplacian is used as the GSO. The activation functions are ReLUs and pure downsampling is considered. The regularizer weight is 0.25 and the learning rate is ${25} \times {10}^{-3}$ . The model is trained over graphs of different sizes and tested with a perturbed graph following the relative perturbation model in (13) for different SNR scenarios in the topology. We compare the stability of the GTConvAE with a learned product graph against the same autoencoder with fixed Cartesian and strong product graphs.
+
+Results. Fig. 2a reports the behavior of the GTConvAE under different noise scenarios. The GTConvAE is the most stable at medium and high SNRs as it leverages the sparsity in the spatiotemporal coupling. However, its performance drops more rapidly in low-SNR scenarios since its parameters are trained for the specific data and task. Fig. 2b shows the reconstruction error over graphs of different sizes. The GTConvAE is more stable than the other models, even for the larger graphs, for the same reason as before. All the models lose performance similarly as the size of the graph grows, which is consistent with the theoretical result in (17).
+
+## 5 Conclusion
+
+We introduced the GTConvAE as an unsupervised model for learning representations from multivariate time series over networks. The GTConvAE uses parametric product graphs to aggregate information from a spatiotemporal neighborhood while also learning the spatiotemporal couplings in the product graph. Leveraging its convolutional nature, we developed a spectral analysis of the GTConvAE, which in turn led to a stability analysis. The stability analysis states that the GTConvAE is stable to relative perturbations in the spatial graph as long as the graph-time filters vary smoothly over high spatiotemporal frequencies. Finally, numerical results showed that the GTConvAE compares well with state-of-the-art models on benchmark datasets and corroborated the stability results.
+
+## References
+
+[1] Kanglei Zhou, Zhiyuan Cheng, Hubert P. H. Shum, Frederick W. B. Li, and Xiaohui Liang. Stgae: Spatial-temporal graph auto-encoder for hand motion denoising. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 41-49, 2021. doi: 10.1109/ ISMAR52148.2021.00018. 1, 2, 7
+
+[2] Nanjun Li, Faliang Chang, and Chunsheng Liu. Human-related anomalous event detection via spatial-temporal graph convolutional autoencoder with embedded long short-term memory network. Neurocomputing, 490:482-494, 2022. ISSN 0925-2312. doi: https://doi.org/10.1016/ j.neucom.2021.12.023. 1, 2, 8
+
+[3] Tien Huu Do, Duc Minh Nguyen, Evaggelia Tsiligianni, Angel Lopez Aguirre, Valerio Panzica La Manna, Frank Pasveer, Wilfried Philips, and Nikos Deligiannis. Matrix completion with variational graph autoencoders: Application in hyperlocal air quality inference. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7535-7539, 2019. 1, 2
+
+[4] Manajit Sengupta, Yu Xie, Anthony Lopez, Aron Habte, Galen Maclaurin, and James Shelby. The national solar radiation data base (nsrdb). Renewable and Sustainable Energy Reviews, 89: 51-60, 2018. ISSN 1364-0321. 1, 7
+
+[5] Shifu Zhou, Wei Shen, Dan Zeng, Mei Fang, Yuanwang Wei, and Zhijiang Zhang. Spatial-temporal convolutional neural networks for anomaly detection and localization in crowded scenes. Signal Processing: Image Communication, 47:358-368, 2016. ISSN 0923-5965. 1, 2, 7
+
+[6] Yong Shean Chong and Yong Haur Tay. Abnormal event detection in videos using spatiotemporal autoencoder. In International symposium on neural networks, pages 189-196. Springer, 2017.
+
+[7] Weixin Luo, Wen Liu, and Shenghua Gao. Remembering history with convolutional lstm for anomaly detection. In 2017 IEEE International Conference on Multimedia and Expo (ICME), pages 439-444, 2017. doi: 10.1109/ICME.2017.8019325. 1, 7
+
+[8] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. 1
+
+[9] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. 1, 4
+
+[10] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4-24, 2021. doi: 10.1109/TNNLS.2020.2978386. 1
+
+[11] Elvin Isufi, Fernando Gama, and Alejandro Ribeiro. Edgenets:edge varying graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-1, 2021. doi: 10.1109/TPAMI.2021.3111054. 1
+
+[12] Wenchao Chen, Long Tian, Bo Chen, Liang Dai, Zhibin Duan, and Mingyuan Zhou. Deep variational graph convolutional recurrent network for multivariate time series anomaly detection. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 3621-3633. PMLR, 17-23 Jul 2022. 1
+
+[13] Sedigheh Mahdavi, Shima Khoshraftar, and Aijun An. Dynamic joint variational graph au-toencoders. In Peggy Cellier and Kurt Driessens, editors, Machine Learning and Knowledge Discovery in Databases, pages 385-401, Cham, 2020. Springer International Publishing. ISBN 978-3-030-43823-4. 1
+
+[14] Yue Hu, Ao Qu, and Dan Work. Detecting extreme traffic events via a context augmented graph autoencoder. ACM Transactions on Intelligent Systems and Technology (TIST), 2022. 1
+
+[15] Mounir Haddad, Cécile Bothorel, Philippe Lenca, and Dominique Bedart. Temporalizing static graph autoencoders to handle temporal networks. In Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 201-208, 2021. 1
+
+[16] C. Si, W. Chen, W. Wang, L. Wang, and T. Tan. An attention enhanced graph convolutional lstm network for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1227-1236, 2019. 1
+
+[17] Y. Seo, M. Defferrard, P. Vandergheynst, and X. Bresson. Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pages 362-373. Springer, 2018. 2
+
+[18] L. Ruiz, F. Gama, and A. Ribeiro. Gated graph recurrent neural networks. IEEE Transactions on Signal Processing, 68:6303-6318, 2020.
+
+[19] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
+
+[20] Samar Hadou, Charilaos I Kanatsoulis, and Alejandro Ribeiro. Space-time graph neural networks. arXiv preprint arXiv:2110.02880, 2021. 2, 4
+
+[21] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kaneza-shi, Tim Kaler, Tao Schardl, and Charles Leiserson. Evolvegcn: Evolving graph convolutional networks for dynamic graphs. 34:5363-5370, Apr. 2020. doi: 10.1609/aaai.v34i04.5984. 1
+
+[22] Yanbang Wang, Pan Li, Chongyang Bai, and Jure Leskovec. Tedic: Neural modeling of behavioral patterns in dynamic social interaction networks. In Proceedings of the Web Conference 2021, WWW '21, page 693-705, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383127. 1
+
+[23] Richard H Hammack, Wilfried Imrich, Sandi Klavžar, Wilfried Imrich, and Sandi Klavžar. Handbook of product graphs, volume 2. CRC press Boca Raton, 2011. 1, 3
+
+[24] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016. 1
+
+[25] Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. Variational graph recurrent neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. 2
+
+[26] Mohammad Sabbaqi and Elvin Isufi. Graph-time convolutional neural networks: Architecture and theoretical analysis. arXiv preprint arXiv:2206.15174, 2022. 2, 3, 5
+
+[27] Fernando Gama, Joan Bruna, and Alejandro Ribeiro. Stability properties of graph neural networks. IEEE Transactions on Signal Processing, 68:5680-5695, 2020. doi: 10.1109/TSP.2020.3026980. 2, 5, 13
+
+[28] Zhan Gao, Elvin Isufi, and Alejandro Ribeiro. Stability of graph convolutional neural networks to stochastic perturbations. Signal Processing, 188:108216, 2021. ISSN 0165-1684.
+
+[29] Henry Kenlay, Dorina Thano, and Xiaowen Dong. On the stability of graph convolutional neural networks under edge rewiring. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8513-8517, 2021. doi: 10.1109/ ICASSP39728.2021.9413474. 2
+
+[30] Luana Ruiz, Fernando Gama, and Alejandro Ribeiro. Gated graph recurrent neural networks. IEEE Transactions on Signal Processing, 68:6303-6318, 2020. doi: 10.1109/TSP.2020.3033962. 2
+
+[31] Aliaksei Sandryhaila and Jose M.F. Moura. Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure. IEEE Signal Processing Magazine, 31(5):80-90, 2014. 2
+
+[32] David I Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83-98, 2013. doi: 10.1109/MSP.2012.2235192. 2, 6, 7
+
+[33] Francesco Grassi, Andreas Loukas, Nathanaël Perraudin, and Benjamin Ricaud. A time-vertex signal processing framework: Scalable processing and meaningful representations for time-series on graphs. IEEE Transactions on Signal Processing, 66(3):817-829, 2017. 2, 3, 6
+
+[34] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016. 2
+
+[35] Kai Qiu, Xianghui Mao, Xinyue Shen, Xiaohan Wang, Tiejian Li, and Yuantao Gu. Time-varying graph signal reconstruction. IEEE Journal of Selected Topics in Signal Processing, 11 (6):870-883, 2017. doi: 10.1109/JSTSP.2017.2726969. 3
+
+[36] Vassilis N. Ioannidis, Daniel Romero, and Georgios B. Giannakis. Inference of spatio-temporal functions over graphs via multikernel kriged kalman filtering. IEEE Transactions on Signal Processing, 66(12):3228-3239, 2018. doi: 10.1109/TSP.2018.2827328. 3
+
+[37] Jhony H. Giraldo, Arif Mahmood, Belmar Garcia-Garcia, Dorina Thanou, and Thierry Bouwmans. Reconstruction of time-varying graph signals via sobolev smoothness. IEEE Transactions on Signal and Information Processing over Networks, 8:201-214, 2022. doi: 10.1109/TSIPN.2022.3156886. 3
+
+[38] Alberto Natali, Elvin Isufi, Mario Coutino, and Geert Leus. Learning time-varying graphs from online data. IEEE Open Journal of Signal Processing, 3:212-228, 2022. doi: 10.1109/OJSP. 2022.3178901. 3
+
+[39] Chao Pan, Siheng Chen, and Antonio Ortega. Spatio-temporal graph scattering transform. arXiv preprint arXiv:2012.03363, 2020. 3
+
+[40] Aliaksei Sandryhaila and José M. F. Moura. Discrete signal processing on graphs. IEEE Transactions on Signal Processing, 61(7):1644-1656, 2013. doi: 10.1109/TSP.2013.2238935. 3
+
+[41] Antonio Ortega, Pascal Frossard, Jelena Kovačević, José M. F. Moura, and Pierre Vandergheynst. Graph signal processing: Overview, challenges, and applications. Proceedings of the IEEE, 106 (5):808-828, 2018. 3
+
+[42] Fernando Gama, Elvin Isufi, Geert Leus, and Alejandro Ribeiro. Graphs, convolutions, and neural networks: From graph filters to graph neural networks. IEEE Signal Processing Magazine, 37(6):128-138, 2020. doi: 10.1109/MSP.2020.3016143. 3, 4, 7
+
+[43] Chuang Liu, Yibing Zhan, Chang Li, Bo Du, Jia Wu, Wenbin Hu, Tongliang Liu, and Dacheng Tao. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321, 2022. 4
+
+[44] Arbaaz Khan, Ekaterina Tolstaya, Alejandro Ribeiro, and Vijay Kumar. Graph policy gradients for large scale robot control. In Leslie Pack Kaelbling, Danica Kragic, and Komei Sugiura, editors, Proceedings of the Conference on Robot Learning, volume 100 of Proceedings of Machine Learning Research, pages 823-834. PMLR, 30 Oct-01 Nov 2020. 4
+
+[45] Fernando Gama, Antonio G. Marques, Geert Leus, and Alejandro Ribeiro. Convolutional neural network architectures for signals supported on graphs. IEEE Transactions on Signal Processing, 67(4):1034-1049, 2019. doi: 10.1109/TSP.2018.2887403. 4
+
+[46] E. Isufi and G. Mazzola. Graph-time convolutional neural networks. IEEE Data Science and Learning Workshop, 2021. 4
+
+[47] Lydia Tsiami and Christos Makropoulos. Cyber-physical attack detection in water distribution systems with temporal graph convolutional neural networks. Water, 13(9):1247, 2021. 8
+
+[48] Riccardo Taormina, Stefano Galelli, Nils Ole Tippenhauer, Elad Salomons, and Avi Ostfeld. Characterizing cyber-physical attacks on water distribution systems. Journal of Water Resources Planning and Management, 143(5):04017009, 2017. 8
+
+[49] Riccardo Taormina, Stefano Galelli, Nils Ole Tippenhauer, Elad Salomons, Avi Ostfeld, Demetrios G Eliades, Mohsen Aghashahi, Raanju Sundararajan, Mohsen Pourahmadi, M Katherine Banks, et al. Battle of the attack detection algorithms: Disclosing cyber attacks on water distribution networks. Journal of Water Resources Planning and Management, 144(8), 2018. 8
+
+## A Stability proof
+
+The proof is structured in three components. First we prove the graph-time convolutional filter is stable to perturbations. Then, we prove stability for the encoder and finally for the decoder. Throughout the proof we will use the following lemmas.
+
+Lemma 1. [27] Let $\mathbf{S} = {\mathbf{{V\Lambda V}}}^{\mathrm{H}}$ and $\mathbf{E} = {\mathbf{{UMU}}}^{\mathrm{H}}$ such that $\parallel \mathbf{E}\parallel \leq \epsilon$ . Assume that ${\mathbf{E}}_{V} = {\mathbf{{VMV}}}^{\mathrm{H}}$ is the projection of perturbation $\mathbf{E}$ over graph eigenspace of $\mathbf{S}$ , and $\mathbf{E} = {\mathbf{E}}_{V} + {\mathbf{E}}_{U}$ . For any eigenvector ${\mathbf{v}}_{n}$ of $\mathbf{S}$ it holds that
+
+$$
+\mathbf{E}{\mathbf{v}}_{n} = {m}_{n}{\mathbf{v}}_{n} + {\mathbf{E}}_{U}{\mathbf{v}}_{n} \tag{19}
+$$
+
+with $\begin{Vmatrix}{\mathbf{E}}_{U}\end{Vmatrix} \leq {\epsilon \delta }$ , where $\delta = {\left( \parallel \mathbf{U} - \mathbf{V}{\parallel }^{2} + 1\right) }^{2} - 1$ and ${m}_{n}$ is the $n$ -th eigenvalue of $\mathbf{M}$ . Recall that $\parallel \cdot \parallel$ represents the operator norm of a matrix.
+
+Lemma 2. Given the frequency response of a graph-time convolutional filter as $h\left( {\lambda }_{\diamond }\right) = \mathop{\sum }\limits_{{k = 1}}^{K}{h}_{k}{\lambda }_{\diamond }^{k}$ , the partial derivative w.r.t. the graph frequency $\lambda$ is
+
+$$
+\frac{\partial h\left( {\lambda }_{\diamond }\right) }{\partial \lambda } = \left( {{s}_{01} + {s}_{11}{\lambda }_{T}}\right) \mathop{\sum }\limits_{{k = 1}}^{K}k{h}_{k}{\lambda }_{\diamond }^{k - 1}. \tag{20}
+$$
+
+Proof. Using the product graph definition (1), we have
+
+$$
+\frac{\partial {\lambda }_{\diamond }}{\partial \lambda } = \frac{\partial \left( {{s}_{00} + {s}_{01}\lambda + {s}_{10}{\lambda }_{T} + {s}_{11}{\lambda }_{T}\lambda }\right) }{\partial \lambda } = {s}_{01} + {s}_{11}{\lambda }_{T}. \tag{21}
+$$
+
+Then,
+
+$$
+\frac{\partial h\left( {\lambda }_{\diamond }\right) }{\partial \lambda } = \frac{\partial h\left( {\lambda }_{\diamond }\right) }{\partial {\lambda }_{\diamond }} \times \frac{\partial {\lambda }_{\diamond }}{\partial \lambda } = \left( {\mathop{\sum }\limits_{{k = 1}}^{K}k{h}_{k}{\lambda }_{\diamond }^{k - 1}}\right) \left( {{s}_{01} + {s}_{11}{\lambda }_{T}}\right) \tag{22}
+$$
+
+completes the proof.
+
+To ease notation, let us also rearrange the parametric product graph GSO as
+
+$$
+{\mathbf{S}}_{\diamond } = \left( {{s}_{00}{\mathbf{I}}_{T} + {s}_{10}{\mathbf{S}}_{T}}\right) \otimes {\mathbf{I}}_{N} + \left( {{s}_{01}{\mathbf{I}}_{T} + {s}_{11}{\mathbf{S}}_{T}}\right) \otimes \mathbf{S} = {\mathbf{S}}_{T0} \otimes {\mathbf{I}}_{N} + {\mathbf{S}}_{T1} \otimes \mathbf{S} \tag{23}
+$$
+
+where ${\mathbf{S}}_{T0} = {s}_{00}{\mathbf{I}}_{T} + {s}_{10}{\mathbf{S}}_{T}$ collects the fully temporal edges and ${\mathbf{S}}_{T1} = {s}_{01}{\mathbf{I}}_{T} + {s}_{11}{\mathbf{S}}_{T}$ the edges ruled by the spatial graph.
+
+## GTConv filter stability.
+
+The difference of the filter operating on the perturbed and nominal graph is
+
+$$
+\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) - \mathbf{H}\left( {\widehat{\mathbf{S}}}_{\diamond }\right) = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\left( {{\widehat{\mathbf{S}}}_{\diamond }^{k} - {\mathbf{S}}_{\diamond }^{k}}\right) \tag{24}
+$$
+
+Leveraging the product GSO expansion (23) and the perturbation model $\widehat{\mathbf{S}} = \mathbf{S} + \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right)$ [cf. (13)], we can write the $k$ -th power of the perturbed product graph GSO as
+
+$$
+{\widehat{\mathbf{S}}}_{\diamond }^{k} = {\left( {\mathbf{S}}_{T0} \otimes {\mathbf{I}}_{N} + {\mathbf{S}}_{T1} \otimes \left( \mathbf{S} + \left( \mathbf{{SE}} + \mathbf{{ES}}\right) \right) \right) }^{k}
+$$
+
+$$
+= {\left( {\mathbf{S}}_{\diamond } + \left( {\mathbf{S}}_{T1} \otimes \left( \mathbf{{SE}} + \mathbf{{ES}}\right) \right) \right) }^{k} \tag{25}
+$$
+
+$$
+= {\mathbf{S}}_{\diamond }^{k} + \mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{S}}_{T1} \otimes \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) }\right) {\mathbf{S}}_{\diamond }^{k - r - 1} + \mathbf{D},
+$$
+
+where we applied the first-order Taylor expansion in the third line. Matrix D contains all terms of order $\mathcal{O}\left( {\epsilon }^{2}\right)$ and can be ignored.
+
+Substituting then (25) into (24), we get
+
+$$
+\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) - \mathbf{H}\left( {\widehat{\mathbf{S}}}_{\diamond }\right) = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{S}}_{T1} \otimes \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) }\right) {\mathbf{S}}_{\diamond }^{k - r - 1}. \tag{26}
+$$
+
+Upon applying the filters to an input ${\mathbf{x}}_{\diamond }$ we get the output difference ${\mathbf{y}}_{\diamond } - {\widehat{\mathbf{y}}}_{\diamond } = \left( {\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) - \mathbf{H}\left( {\widehat{\mathbf{S}}}_{\diamond }\right) }\right) {\mathbf{x}}_{\diamond }$ . Substituting into this the graph-time Fourier expansion of the input
+
+$$
+{\mathbf{x}}_{\diamond } = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}{\widetilde{x}}_{\left( n, t\right) }\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) \tag{27}
+$$
+
+with ${\widetilde{x}}_{\left( n, t\right) }$ the $(n, t)$ -th Fourier coefficient and $\left( {{\mathbf{v}}_{T, t},{\mathbf{v}}_{n}}\right)$ the eigenvector pair for the temporal and spatial GSOs [cf. Sec. 3.1], we can write the output difference as
+
+$$
+{\mathbf{y}}_{\diamond } - {\widehat{\mathbf{y}}}_{\diamond } = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}{\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{S}}_{T1} \otimes \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) }\right) {\mathbf{S}}_{\diamond }^{k - r - 1}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) . \tag{28}
+$$
+
+Since $\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right)$ is an eigenvector of ${\mathbf{S}}_{\diamond }$ , we have
+
+$$
+{\mathbf{S}}_{\diamond }^{k - r - 1}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) = {\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) \tag{29}
+$$
+
+which by substituting into (28) yields
+
+$$
+{\mathbf{y}}_{\diamond } - {\widehat{\mathbf{y}}}_{\diamond } = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}{\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{S}}_{T1} \otimes \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) }\right) \left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) \tag{30}
+$$
+
+where ${\lambda }_{\diamond ,\left( {n, t}\right) }$ is the eigenvalue of the product graph GSO ${\mathbf{S}}_{\diamond }$ for indices $(n, t)$ . Leveraging the mixed-product property of the Kronecker product ${}^{2}$ allows us to rewrite (30) as
+
+$$
+{\mathbf{y}}_{\diamond } - {\widehat{\mathbf{y}}}_{\diamond } = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}{\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{S}}_{T1}{\mathbf{v}}_{T, t} \otimes \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) {\mathbf{v}}_{n}}\right) . \tag{31}
+$$
+
+Replacing ${\mathbf{S}}_{T1} = {s}_{01}{\mathbf{I}}_{T} + {s}_{11}{\mathbf{S}}_{T}$ leads to
+
+$$
+{\widehat{\mathbf{y}}}_{\diamond } - {\mathbf{y}}_{\diamond } = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}\left( {{s}_{01} + {s}_{11}{\lambda }_{T, t}}\right) {\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{v}}_{T, t} \otimes \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) {\mathbf{v}}_{n}}\right) . \tag{32}
+$$
+
+Applying Lemma 1 results in
+
+$$
+{\widehat{\mathbf{y}}}_{\diamond } - {\mathbf{y}}_{\diamond } = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}\left( {{s}_{01} + {s}_{11}{\lambda }_{T, t}}\right) {\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{v}}_{T, t} \otimes \left( {\mathbf{S} + {\lambda }_{n}{\mathbf{I}}_{N}}\right) \left( {\underset{\text{term 1 }}{\underbrace{{m}_{n}{\mathbf{v}}_{n}}} + \underset{\text{term 2}}{\underbrace{{\mathbf{E}}_{U}{\mathbf{v}}_{n}}}}\right) }\right) ,
+$$
+
+(33)
+
+which leaves us with two terms that shall be discussed separately.
+
+For the first term, we have
+
+$$
+{\mathbf{t}}_{1} = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}2{\lambda }_{n}{m}_{n}\left( {{s}_{01} + {s}_{11}{\lambda }_{T, t}}\right) {\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) . \tag{34}
+$$
+
+By exploiting eigenvector property ${\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) = {\lambda }_{\diamond ,\left( {n, t}\right) }^{r}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right)$ we can rewrite (34) into
+
+$$
+{\mathbf{t}}_{1} = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}2{\lambda }_{n}{m}_{n}\left( {{s}_{01} + {s}_{11}{\lambda }_{T, t}}\right) {\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}k{h}_{k}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - 1}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) . \tag{35}
+$$
+
+Applying Lemma 2 leads to
+
+$$
+{\mathbf{t}}_{1} = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}2{m}_{n}{\widetilde{x}}_{\left( n, t\right) }{\lambda }_{n}\frac{\partial h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right) }{\partial {\lambda }_{n}}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) . \tag{36}
+$$
+
+For the second term, we have
+
+$$
+{\mathbf{t}}_{2} = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}\left( {{s}_{01} + {s}_{11}{\lambda }_{T, t}}\right) {\widetilde{x}}_{\left( n, t\right) }\mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}{\mathbf{S}}_{\diamond }^{r}\left( {{\mathbf{v}}_{T, t} \otimes \left( {\mathbf{S} + {\lambda }_{n}{\mathbf{I}}_{N}}\right) {\mathbf{E}}_{U}{\mathbf{v}}_{n}}\right) . \tag{37}
+$$
+
+---
+
+$$
+{}^{2}\left( {A \otimes B}\right) \left( {C \otimes D}\right) = {AC} \otimes {BD}
+$$
+
+---
+
+By substituting the eigendecomposition ${\mathbf{S}}_{\diamond }^{r} = \left( {{\mathbf{V}}_{T} \otimes \mathbf{V}}\right) {\mathbf{\Lambda }}_{\diamond }^{r}{\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}$ we get
+
+$$
+{\mathbf{t}}_{2} = \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}{\widetilde{x}}_{\left( n, t\right) }\left( {{\mathbf{V}}_{T} \otimes \mathbf{V}}\right) \operatorname{diag}\left( {\mathbf{g}}_{\left( n, t\right) }\right) {\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{E}}_{U}{\mathbf{v}}_{n}}\right) . \tag{38}
+$$
+
+where the entries of vectors ${\mathbf{g}}_{\left( n, t\right) } \in {\mathbb{R}}^{NT}$ for $n \in \left\lbrack N\right\rbrack$ and $t \in \left\lbrack T\right\rbrack$ are defined as
+
+$$
+{g}_{\left( n, t\right) }\left( {{n}^{\prime },{t}^{\prime }}\right) = \left( {{s}_{01} + {s}_{11}{\lambda }_{T, t}}\right) \left( {{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}\right) \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}\mathop{\sum }\limits_{{r = 0}}^{{k - 1}}{\lambda }_{\diamond ,\left( {n, t}\right) }^{k - r - 1}{\lambda }_{\diamond ,\left( {{n}^{\prime },{t}^{\prime }}\right) }^{r}
+$$
+
+$$
+= \left\{ \begin{matrix} 2{\lambda }_{n}\frac{\partial h\left( {\lambda }_{\diamond \left( {n, t}\right) }\right) }{\partial {\lambda }_{n}}; & \left( {n, t}\right) = \left( {{n}^{\prime },{t}^{\prime }}\right) \\ \left( {{s}_{01} + {s}_{11}{\lambda }_{T, t}}\right) \left( {h\left( {\lambda }_{\diamond \left( {n, t}\right) }\right) - h\left( {\lambda }_{\diamond \left( {{n}^{\prime },{t}^{\prime }}\right) }\right) }\right) \frac{{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}{{\lambda }_{n} - {\lambda }_{{n}^{\prime }}}; & \left( {n, t}\right) \neq \left( {{n}^{\prime },{t}^{\prime }}\right) \end{matrix}\right. \tag{39}
+$$
+
+With this in place, we now upper bound the two-norm of the difference ${\mathbf{y}}_{\diamond } - {\widehat{\mathbf{y}}}_{\diamond } = {\mathbf{t}}_{1} + {\mathbf{t}}_{2}$ by bounding each of the terms ${\mathbf{t}}_{1}$ and ${\mathbf{t}}_{2}$ separately. From $\parallel \mathbf{E}\parallel \leq \epsilon$ , we have that $\left| {m}_{n}\right| \leq \epsilon$ . Also, from the integral Lipschitz property of the filter [cf. Def. 1], we have $\left| {{\lambda }_{n}\partial h\left( {\lambda }_{\diamond ,\left( {n, t}\right) }\right) /\partial {\lambda }_{n}}\right| \leq C$ . Using these two bounds in (36), we can upper bound the norm of term ${\mathbf{t}}_{1}$ as
+
+$$
+{\begin{Vmatrix}{\mathbf{t}}_{1}\end{Vmatrix}}_{2} \leq {2\epsilon C}{\begin{Vmatrix}\mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}{\widetilde{x}}_{\left( n, t\right) }\left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right) \end{Vmatrix}}_{2} \leq {2\epsilon C}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2}, \tag{40}
+$$
+
+where the second inequality holds due to the Fourier transform definition (27).
+
+Moving on to ${\mathbf{t}}_{2}$ , we use the mixed-product property ${\mathbf{v}}_{T, t} \otimes {\mathbf{E}}_{U}{\mathbf{v}}_{n} = \left( {{\mathbf{I}}_{T} \otimes {\mathbf{E}}_{U}}\right) \left( {{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}}\right)$ and the submultiplicativity of the operator norm in (38) to obtain the upper bound
+
+$$
+{\begin{Vmatrix}{\mathbf{t}}_{2}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}\left| {\widetilde{x}}_{\left( n, t\right) }\right| \begin{Vmatrix}\left( {{\mathbf{V}}_{T} \otimes \mathbf{V}}\right) \end{Vmatrix}\begin{Vmatrix}{\operatorname{diag}\left( {\mathbf{g}}_{\left( n, t\right) }\right) }\end{Vmatrix}\begin{Vmatrix}{\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}\end{Vmatrix}\begin{Vmatrix}{{\mathbf{I}}_{T} \otimes {\mathbf{E}}_{U}}\end{Vmatrix}{\begin{Vmatrix}{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}\end{Vmatrix}}_{2}. \tag{41}
+$$
+
+From the integral Lipschitz property we can bound $\begin{Vmatrix}{\operatorname{diag}\left( {\mathbf{g}}_{\left( n, t\right) }\right) }\end{Vmatrix} \leq {2C}\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\max }}\right)$ in (39), where ${\lambda }_{T,\max }$ is the temporal eigenvalue with the largest absolute value. As ${\mathbf{V}}_{T} \otimes \mathbf{V}$ is an orthonormal basis, its operator norm is $\begin{Vmatrix}{{\mathbf{V}}_{T} \otimes \mathbf{V}}\end{Vmatrix} = 1$ , and the ${l}_{2}$ -norm of the eigenvectors is ${\begin{Vmatrix}{\mathbf{v}}_{T, t} \otimes {\mathbf{v}}_{n}\end{Vmatrix}}_{2} = 1$ . Lemma 1 states that $\begin{Vmatrix}{\mathbf{E}}_{U}\end{Vmatrix} \leq {\epsilon \delta }$ , which leads to $\begin{Vmatrix}{{\mathbf{I}}_{T} \otimes {\mathbf{E}}_{U}}\end{Vmatrix} \leq {\epsilon \delta }$ . Finally, the ${l}_{1}$ -norm can be bounded as $\mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\sum }\limits_{{n = 1}}^{N}\left| {\widetilde{x}}_{\left( n, t\right) }\right| = \parallel \widetilde{\mathbf{x}}{\parallel }_{1} \leq \sqrt{NT}\parallel \widetilde{\mathbf{x}}{\parallel }_{2} = \sqrt{NT}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2}$ . Substituting all the above bounds into (41) yields
+
+$$
+{\begin{Vmatrix}{\mathbf{t}}_{2}\end{Vmatrix}}_{2} \leq 2\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\max }}\right) {\epsilon C\delta }\sqrt{NT}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2}. \tag{42}
+$$
+
+Finally, by the triangle inequality, the GTConv filter difference is bounded as
+
+$$
+\begin{Vmatrix}{\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) - \mathbf{H}\left( {\widehat{\mathbf{S}}}_{\diamond }\right) }\end{Vmatrix} \leq 2\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\max }}\right) {\epsilon C}\left( {1 + \delta \sqrt{NT}}\right) = {\epsilon \Delta }. \tag{43}
+$$
+
+## Encoder stability.
+
+Consider an encoder with ${L}_{e}$ layers, each having ${F}_{\ell }$ features and sampling rate $r$ . We are interested in the output difference of the encoder
+
+$$
+{\begin{Vmatrix}\operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{2}^{2} = \mathop{\sum }\limits_{{f = 1}}^{{F}_{{L}_{e}}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2}^{2}. \tag{44}
+$$
+
+To ease exposition, we denote $\mathbf{H} \mathrel{\text{:=}} \mathbf{H}\left( \mathbf{S}\right)$ and $\widehat{\mathbf{H}} \mathrel{\text{:=}} \mathbf{H}\left( \widehat{\mathbf{S}}\right)$ . For the $f$ -th output encoder feature we have
+
+$$
+{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2} = {\begin{Vmatrix}\sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{S}_{r}\left( {\mathbf{H}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}\right) \right) - \sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{S}_{r}\left( {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\right) \right) \end{Vmatrix}}_{2} \tag{45}
+$$
+
+where ${S}_{r}\left( \cdot \right)$ is the sampling operator with rate $r$ , i.e., the $\operatorname{SUM}\left( \cdot \right)$ function in (4) acting as a pure downsampling without aggregation. The downsampling reduces the norm of each time series by a factor $1/\sqrt{r}$ , so ${\begin{Vmatrix}{\mathbf{y}}_{\diamond ,{L}_{e}}\end{Vmatrix}}_{2}$ is reduced by $1/\sqrt{r}$ . As the nonlinearity is 1-Lipschitz, i.e., $\left| {\sigma \left( a\right) - \sigma \left( b\right) }\right| \leq \left| {a - b}\right|$ , we can conclude the following inequality from (45) by the triangle inequality
+
+$$
+{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{\begin{Vmatrix}{\mathbf{H}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2}. \tag{46}
+$$
+
+We add and subtract ${\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}$ inside the ${l}_{2}$ -norm and use the triangle inequality once again for each of the input features $g$ to get
+
+$$
+{\begin{Vmatrix}{\mathbf{H}}_{{L}_{e}}^{fg}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}{\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2} \leq \parallel \left( {{\mathbf{H}}_{{L}_{e}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}}\right) {\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}{\parallel }_{2} + \parallel {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}\left( {{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}}\right) {\parallel }_{2}
+$$
+
+$$
+\leq \parallel {\mathbf{H}}_{{L}_{e}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}\parallel \parallel {\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}{\parallel }_{2} + \parallel {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}\parallel \parallel {\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}{\parallel }_{2}. \tag{47}
+$$
+
+The stability of the GTConv filter in (43) provides an upper bound for the first term as $\begin{Vmatrix}{{\mathbf{H}}_{{L}_{e}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{e}}^{fg}}\end{Vmatrix} \leq {\epsilon \Delta }$ , which is applicable to all the layers. Note that $\Delta$ depends on the temporal graph size, so it differs across layers due to the downsampling. However, we assume the largest temporal size $T$ so that the inequality holds for all the layers ${}^{3}$ . The second term is bounded by the spectral normalization assumption $\begin{Vmatrix}{\mathbf{H}}_{{L}_{e}}^{fg}\end{Vmatrix} \leq 1$ [cf. Def. 2]. Leveraging these bounds and replacing them in (46), we get
+
+$$
+{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e}}^{f} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e}}^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{{L}_{e} - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{x}}_{\diamond ,{L}_{e} - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,{L}_{e} - 1}^{g}\end{Vmatrix}}_{2}. \tag{48}
+$$
+
+This equation defines a recursion among the encoder layers with initial condition ${\mathbf{x}}_{\diamond ,0}^{g} = {\widehat{\mathbf{x}}}_{\diamond ,0}^{g} \mathrel{\text{:=}} {\mathbf{x}}_{\diamond }^{g}$ for all the input features. So for the $\ell$ -th layer, we can write
+
+$$
+{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f} - {\widehat{\mathbf{x}}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell - 1}^{g} - {\widehat{\mathbf{x}}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2}. \tag{49}
+$$
+
+To solve this recursive inequality, we first upper bound ${\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2}$ as
+
+$$
+{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}{\mathbf{x}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} \leq \frac{1}{\sqrt{r}}\mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2}, \tag{50}
+$$
+
+where the last inequality is due to the assumption $\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}\end{Vmatrix} \leq 1$ [Def. 2]. Solving this recursion leads to
+
+$$
+{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \frac{1}{{r}^{\ell /2}}\mathop{\prod }\limits_{{i = 1}}^{{\ell - 1}}{F}_{i}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2} = {r}^{-\ell /2}\mathop{\prod }\limits_{{i = 1}}^{{\ell - 1}}{F}_{i}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{51}
+$$
+
+Replacing (51) in (49) and solving the recursion considering the initial conditions, we get
+
+$$
+{\begin{Vmatrix}{\mathbf{x}}_{\diamond ,\ell }^{f} - {\widehat{\mathbf{x}}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq {r}^{-\ell /2}{\epsilon \Delta }\ell \mathop{\prod }\limits_{{i = 1}}^{{\ell - 1}}{F}_{i}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{52}
+$$
+
+Setting $\ell = {L}_{e}$ in (52) and replacing it in (44) yields
+
+$$
+{\begin{Vmatrix}\operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \operatorname{ENC}\left( {\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{F} \leq {L}_{e}{r}^{-{L}_{e}/2}{\epsilon \Delta }\sqrt{{F}_{{L}_{e}}}\mathop{\prod }\limits_{{n = 1}}^{{{L}_{e} - 1}}{F}_{n}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{53}
+$$
+
+## GTConvAE stability.
+
+Let ${\mathbf{Z}}_{\diamond } = \operatorname{ENC}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}}\right)$ be the input of the decoder and ${\mathbf{z}}_{\diamond ,{L}_{d}} = \operatorname{DEC}\left( {{\mathbf{Z}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}}\right)$ its output. To prove GTConvAE stability, we need to bound
+
+$$
+{\begin{Vmatrix}\mathrm{{DEC}}\left( {\mathbf{Z}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \mathrm{{DEC}}\left( {\mathbf{Z}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{2}^{2} = \mathop{\sum }\limits_{{f = 1}}^{{F}_{d,{L}_{d}}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2}^{2}. \tag{54}
+$$
+
+For each feature in the output we have
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} = {\begin{Vmatrix}\sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{U}_{r}\left( {\mathbf{H}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}\right) \right) - \sigma \left( \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{U}_{r}\left( {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\right) \right) \end{Vmatrix}}_{2} \tag{55}
+$$
+
+---
+
+${}^{3}$ It is possible to solve the recursive equation with ${\Delta }_{T}$ as a variable, but doing so clutters the inequalities with extra multipliers without adding insight into the bound.
+
+---
+
+where ${U}_{r}\left( \cdot \right)$ is an upsampling operator with rate $r$ that inserts zeros among the samples. The upsampling module leaves the ${l}_{2}$ -norm per time series unaffected and can be ignored. Given the 1-Lipschitz continuity of the activation function $\sigma \left( \cdot \right)$ , the following inequality can be concluded from (55) using the triangle inequality
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{\begin{Vmatrix}{\mathbf{H}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2}. \tag{56}
+$$
+
+Adding and subtracting ${\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}$ in the norm and leveraging the triangle inequality once again yields
+
+$$
+{\begin{Vmatrix}{\mathbf{H}}_{{L}_{d}}^{fg}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}{\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2} \leq \parallel \left( {{\mathbf{H}}_{{L}_{d}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}}\right) {\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}{\parallel }_{2} + \parallel {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}\left( {{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}}\right) {\parallel }_{2}
+$$
+
+$$
+\leq \parallel {\mathbf{H}}_{{L}_{d}}^{fg} - {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}\parallel \parallel {\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}{\parallel }_{2} + \parallel {\widehat{\mathbf{H}}}_{{L}_{d}}^{fg}\parallel \parallel {\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}{\parallel }_{2}, \tag{57}
+$$
+
+for $g = 1,\ldots ,{F}_{d,{L}_{d} - 1}$ . The first term is bounded by the GTConv filter stability in (43) and the second term is upper bounded because the filters are normalized, $\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}\end{Vmatrix} \leq 1$ [cf. Def. 2]. Substituting these two bounds into (56) yields
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,{L}_{d} - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d} - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d} - 1}^{g}\end{Vmatrix}}_{2}. \tag{58}
+$$
+
+This allows defining a recursion for the generic layer $\ell$ as
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell }^{f} - {\widehat{\mathbf{z}}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,\ell - 1}}{\epsilon \Delta }{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell - 1}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2}. \tag{59}
+$$
+
+For the first term on the right-hand side of (59), we have
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell }^{f}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,\ell - 1}}{\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}{\mathbf{z}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} \leq \mathop{\sum }\limits_{{g = 1}}^{{F}_{d,\ell - 1}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,\ell - 1}^{g}\end{Vmatrix}}_{2} \leq \mathop{\prod }\limits_{{j = 1}}^{{\ell - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2} \tag{60}
+$$
+
+because $\begin{Vmatrix}{\mathbf{H}}_{\ell }^{fg}\end{Vmatrix} \leq 1$ [cf. Def. 2]. Substituting (60) into (59) and unrolling the recursion from $\ell = {L}_{d}$ down to its initial conditions yields
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq {\epsilon \Delta }{L}_{d}\mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2} + \mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2}. \tag{61}
+$$
+
+For the initial conditions we have ${\mathbf{Z}}_{\diamond ,0} = {\mathbf{Z}}_{\diamond }$ ; however, the error caused by the spatial graph perturbation in the encoder enters here as an initial condition, where ${\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,0}^{f}\end{Vmatrix}}_{2}$ is bounded by the result in (53) for $f \in \left\lbrack {F}_{d,0}\right\rbrack$ .
+
+As the initial condition of the decoder states ${\mathbf{Z}}_{\diamond ,0} = {\mathbf{Z}}_{\diamond } = {\mathbf{X}}_{\diamond ,{L}_{e}}$ , we can set $\ell = {L}_{e}$ in (51) to obtain
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond }^{f}\end{Vmatrix}}_{2} \leq {r}^{-{L}_{e}/2}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{62}
+$$
+
+Substituting the encoder stability bound (53) for the initial condition $\mathop{\sum }\limits_{{g = 1}}^{{F}_{d,0}}{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,0}^{g} - {\widehat{\mathbf{z}}}_{\diamond ,0}^{g}\end{Vmatrix}}_{2}$ , and (62) into (61) results in
+
+$$
+{\begin{Vmatrix}{\mathbf{z}}_{\diamond ,{L}_{d}}^{f} - {\widehat{\mathbf{z}}}_{\diamond ,{L}_{d}}^{f}\end{Vmatrix}}_{2} \leq {L}_{d}{r}^{-{L}_{e}/2}{\epsilon \Delta }{F}_{d,0}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}
+$$
+
+$$
++ {L}_{e}{r}^{-{L}_{e}/2}{\epsilon \Delta }{F}_{d,0}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\prod }\limits_{{j = 1}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{63}
+$$
+
+Summing over all the output features completes the upper bound as
+
+$$
+{\begin{Vmatrix}\operatorname{GTConvAE}\left( {\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}\right) - \operatorname{GTConvAE}\left( {\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}\right) \end{Vmatrix}}_{2} \leq
+$$
+
+$$
+\left( {{L}_{d} + {L}_{e}}\right) {r}^{-{L}_{e}/2}{\epsilon \Delta }\sqrt{{F}_{d,{L}_{d}}}\mathop{\prod }\limits_{{i = 1}}^{{{L}_{e} - 1}}{F}_{i}\mathop{\prod }\limits_{{j = 0}}^{{{L}_{d} - 1}}{F}_{d, j}\mathop{\sum }\limits_{{g = 1}}^{{F}_{0}}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }^{g}\end{Vmatrix}}_{2}. \tag{64}
+$$
+
+Assuming ${F}_{0} = {F}_{d,{L}_{d}} = 1$ and ${F}_{\ell },{F}_{d,\ell } \leq {F}_{\max }$ for all layers completes the proof.
+
+## B Denoising solar irradiance time series
+
+In this appendix we provide extra information on the numerical experiment for denoising solar irradiance time series.
+
+SNR: An error vector ${\mathbf{e}}_{t} \sim \mathcal{N}\left( {0,{\mathbf{L}}^{ \dagger }}\right)$ is generated independently for each timestamp $t \in \left\lbrack T\right\rbrack$ . Matrix $\mathbf{L}$ is the normalized Laplacian and $\dagger$ denotes the pseudo-inverse. This noise varies smoothly over the spatial graph, which makes it more difficult to detect. Given the noise matrix $\sigma \mathbf{E} = \sigma \left\lbrack {{\mathbf{e}}_{1},\ldots ,{\mathbf{e}}_{T}}\right\rbrack \in {\mathbb{R}}^{N \times T}$ , we define the SNR as follows:
+
+$$
+{SNR} = {20}\log \frac{\parallel \mathbf{X}{\parallel }_{F}}{\sigma \parallel \mathbf{E}{\parallel }_{F}}, \tag{65}
+$$
+
+where $\sigma$ is used to control the SNR in the experiments.
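+
+As a rough illustration only, the following sketch draws the graph-smooth noise ${\mathbf{e}}_{t} \sim \mathcal{N}\left( {0,{\mathbf{L}}^{ \dagger }}\right)$ and evaluates (65) on a toy random graph; the graph, the data, and the base-10 logarithm (dB convention) are assumptions for the example, not details taken from the paper.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+N, T = 10, 50
+
+# Toy spatial graph and its normalized Laplacian (illustrative only).
+A = rng.random((N, N))
+A = (A + A.T) / 2
+np.fill_diagonal(A, 0)
+d_inv_sqrt = np.diag(A.sum(1) ** -0.5)
+L = np.eye(N) - d_inv_sqrt @ A @ d_inv_sqrt
+L_pinv = np.linalg.pinv(L)                                   # L^dagger
+
+# One smooth-over-the-graph noise draw e_t ~ N(0, L^dagger) per timestamp.
+E = rng.multivariate_normal(np.zeros(N), L_pinv, size=T).T   # N x T noise matrix
+
+X = rng.random((N, T))                                        # clean data placeholder
+sigma = 0.1                                                   # scales the noise, hence the SNR
+
+# SNR in dB (base-10 log assumed), cf. (65).
+snr_db = 20 * np.log10(np.linalg.norm(X, "fro") / (sigma * np.linalg.norm(E, "fro")))
+X_noisy = X + sigma * E
+```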
+
+Model parameters: The time window is searched over $T \in \{ 2,\ldots ,8\}$ . The number of layers for both the encoder and the decoder is selected from ${L}_{e} = {L}_{d} \in \{ 2,3\}$ . The number of features per layer is chosen from $F \in \{ {32},{16},8,4,2\}$ . The filter order is evaluated over $K \in \{ 2,3,4,5\}$ . The sampling rate is searched over $r \in \{ 1,2,3,4\}$ . All the aggregation functions have been tested. Finally, the regularizer weight is initially selected from the logarithmic interval $\rho \in \left\lbrack {{10}^{-2},{10}^{2}}\right\rbrack$ and fine-tuned around the optimal value.
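+
+A hypothetical enumeration of this search grid could look as follows (the scoring routine is a placeholder and not part of the paper):
+
+```python
+from itertools import product
+
+# Candidate values reported above; each configuration would be scored by a train/validate run.
+grid = {
+    "T": list(range(2, 9)),
+    "L": [2, 3],                        # L_e = L_d
+    "F": [32, 16, 8, 4, 2],
+    "K": [2, 3, 4, 5],
+    "r": [1, 2, 3, 4],
+    "rho": [1e-2, 1e-1, 1e0, 1e1, 1e2],
+}
+
+configs = [dict(zip(grid, combo)) for combo in product(*grid.values())]
+print(f"{len(configs)} candidate configurations")
+```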
+
+## C Anomaly detection in water networks
+
+In this appendix we provide extra information on numerical experiments for anomaly detection in water networks.
+
+Model parameters: The model parameters are evaluated and fine-tuned by sliding window back-testing. The time window is searched over $T \in \{ 2,\ldots ,8\}$ . The number of layers for both the encoder and the decoder is selected from ${L}_{e} = {L}_{d} \in \{ 2,3\}$ . The number of features per layer is chosen from $F \in \{ {128},{64},{32},{16},8,4,2\}$ . The filter order is evaluated over $K \in \{ 2,3,4,5\}$ . The sampling rate is searched over $r \in \{ 1,2,3\}$ . All the aggregation functions have been tested. Finally, the regularizer weight is initially selected from the logarithmic interval $\rho \in \left\lbrack {{10}^{-2},{10}^{2}}\right\rbrack$ and fine-tuned around the optimal value.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..757aa30ac83630282ee63bc57744d5d9bbefe2f3
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/2HqKwHaBwv/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,288 @@
+§ GRAPH-TIME CONVOLUTIONAL AUTOENCODERS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+We introduce the graph-time convolutional autoencoder (GTConvAE), a novel spatiotemporal architecture tailored to unsupervised learning for multivariate time series on networks. The GTConvAE leverages product graphs to represent the time series and a principled joint spatiotemporal convolution over this product graph. Instead of fixing the product graph at the outset, we make it parametric to attend to the spatiotemporal coupling for the task at hand. On top of this, we propose temporal downsampling in the encoder to improve the receptive field in a spatiotemporal manner without affecting the network structure; in the decoder, we consider the corresponding upsampling operator. We prove that GTConvAEs with graph integral Lipschitz filters are stable to relative network perturbations, ultimately showing the role of the different components in the encoder and decoder. Numerical experiments for denoising and anomaly detection in solar and water networks corroborate our findings and showcase the effectiveness of the GTConvAE compared with state-of-the-art alternatives.
+
+§ 1 INTRODUCTION
+
+Learning unsupervised representations from spatiotemporal network data is commonly encountered in applications concerning multivariate data denoising [1], anomaly detection [2], missing data imputation [3], and forecasting [4], to name just a few. The challenge is to develop models that jointly capture the spatiotemporal dependencies in a computation- and data-efficient manner while remaining tractable enough to understand the role played by the network structure and the dynamics over it. The autoencoder family of functions is of interest in this setting, but vanilla spatiotemporal forms [5-7] that ignore the network structure suffer from the well-known curse of dimensionality and lack inductive learning capabilities [8].
+
+Upon leveraging the network as an inductive bias [9], graph-time autoencoders have been recently developed. These approaches are typically composed of two interleaving modules: one capturing the spatial dependencies via graph neural networks (GNNs) [10] and one capturing the temporal dependencies via temporal CNNs or LSTM networks. For example, the work in [1] uses an edge-varying GNN [11] followed by a temporal convolution for motion denoising. The work in [12] considers LSTMs and graph convolutions for variational spatiotemporal autoencoders, which have been further investigated in $\left\lbrack {3,{13}}\right\rbrack$ for spatiotemporal data imputation as a graph-based matrix completion problem and for dynamic topologies, respectively. Graph-time autoencoders over dynamic topologies have also been investigated in [14, 15]. Lastly, [4] embeds the temporal information into the edges of a graph and develops an autoencoder over this graph for forecasting purposes.
+
+By working disjointly first on the graph and then on the temporal dimension of the graph embeddings, these approaches fail to capture the joint spatiotemporal dependencies present in the raw data. It is also challenging to analyze their theoretical properties and to attribute to what extent the benefit comes from one module over the other. This aspect has been investigated for supervised spatiotemporal learning via GNNs [16-21] but not for autoencoders. The two works elaborating on this are [2] and [22]. The work in [2] replicates the graph over time via the Cartesian product principle [23] and uses an order-one graph convolution [24] to learn spatiotemporal embeddings that are fed into an LSTM module to improve the temporal memory, ultimately giving more importance to the temporal dimension of the latent representation. Differently, [25] proposed a variational graph-time autoencoder whose encoder is based on [17] and whose decoder is a multi-layer perceptron; hence, it is suitable only for topological tasks such as dynamic link prediction but not for tasks concerning time series over networks such as denoising or anomaly detection.
+
+In this paper, we propose a GTConvAE that, differently from [2], jointly captures the spatiotemporal coupling both in the raw data and in the intermediate higher-level representations. The GTConvAE operates over a parametric product graph [26] to attend to the spatiotemporal coupling for the task at hand rather than fixing it at the outset. Differently from [17], the GTConvAE has a symmetric structure with graph-time convolutions in both encoder and decoder, making it suitable for tasks concerning network time series. We also study the capability of the GTConvAE to transfer learning across different networks, which is of importance as practical topologies differ from the models used during training (e.g., because of model uncertainty, perturbations, or dynamics). The latter has been studied for traditional [27-29] and graph-time GNN models [20, 26, 30] but not for graph-time autoencoders.
+
+Our contribution in this paper is twofold. First, we propose a symmetric graph-time convolutional autoencoder that jointly captures the spatiotemporal coupling in the data suited for tasks concerning multivariate time series over networks. The GTConvAE represents the time series as a graph signal over product graphs and uses the latter as an inductive bias to learn unsupervised representations. The product graph is parametric to attend to the coupling for the specific task, and it generalizes the popular choices of product graphs [31]. We also propose a temporal downsampling/upsampling in the encoder/decoder to increase the spatiotemporal receptive field without affecting the network structure; hence, preserving the inductive bias. Second, we prove GTConvAE is stable to relative perturbations on the spatial graph; highlighting the role played by the encoder, decoder, parametric product graph, convolutional filters, and downsampling/upsampling rate. Numerical experiments about denoising and anomaly detection over solar and water networks corroborate our findings and show a competitive performance compared with the more involved state-of-the-art alternatives.
+
+The rest of this paper is organized as follows. Section 2 formulates the GTConvAE model and Section 3 analyzes its theoretical properties. Numerical experiments are presented in Section 4 and conclusions in Section 5. The proofs are collected in the appendix.
+
+§ 2 GRAPH-TIME CONVOLUTIONAL AUTOENCODERS
+
+The GTConvAE learns representations from $N$ -dimensional multivariate time series ${\mathbf{x}}_{t} \in {\mathbb{R}}^{N}$ , $t = 1,\ldots ,T$ , collected in matrix $\mathbf{X} \in {\mathbb{R}}^{N \times T}$ . These time series have a spatial network structure represented by a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ composed of $N$ nodes $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ and $M$ edges. The $n$ -th row of $\mathbf{X}$ contains the time series ${\mathbf{x}}^{n} = {\left\lbrack {x}_{1}\left( n\right) ,\ldots ,{x}_{T}\left( n\right) \right\rbrack }^{\top }$ on node ${v}_{n}$ and the $t$ -th column contains a graph signal ${\mathbf{x}}_{t} = {\left\lbrack {x}_{t}\left( 1\right) ,\ldots ,{x}_{t}\left( N\right) \right\rbrack }^{\top }$ at timestamp $t$ [32, 33]. For example, the time series could be nodal pressures measured over junction nodes in a water distribution network, while the pipe connections dictate the spatial structure. The representations learned from the tuple $\{ \mathcal{G},\mathbf{X}\}$ can then be used, among others, for anomaly detection [5], denoising dynamic data over graphs [1], and missing data completion [3].
+
+The GTConvAE follows the standard encoder-decoder structure [34], but in each module it jointly captures the spatiotemporal structure in the data. We denote the GTConvAE as
+
+$$
+\widehat{\mathbf{X}} = \operatorname{GTConvAE}\left( {\mathbf{X},\mathcal{G};\mathcal{H}}\right) \mathrel{\text{ := }} \operatorname{DEC}\left( {\operatorname{ENC}\left( {\mathbf{X},\mathcal{G};{\mathcal{H}}_{e}}\right) ,\mathcal{G};{\mathcal{H}}_{d}}\right)
+$$
+
+where the encoder $\operatorname{ENC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{e}}\right)$ and decoder $\operatorname{DEC}\left( {\cdot ,\cdot ;{\mathcal{H}}_{d}}\right)$ are non-linear parametric functions and where set $\mathcal{H} = {\mathcal{H}}_{e} \cup {\mathcal{H}}_{d}$ collects all parameters. The encoder takes as input the graph $\mathcal{G}$ and the time series $\mathbf{X}$ and produces higher-level representations $\mathbf{Z} \in {\mathbb{R}}^{N \times {T}_{e}}$ . These representations are built in a layered manner where each layer comprises: $i$ ) a joint graph-time convolutional filter to capture the spatiotemporal dependencies in a principled manner; ii) a temporal downsampling module to increase the receptive field without affecting the network structure; and iii) a pointwise nonlinearity to have more complex representations. The decoder has a mirrored structure w.r.t. the encoder by taking as input $\mathbf{Z}$ and outputting an estimate of the input $\widehat{\mathbf{X}}$ . The model parameters are estimated end-to-end by minimizing a spatiotemporal regularized reconstruction loss $\mathcal{L}\left( {\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}}\right)$ .
+
+§ 2.1 PRODUCT GRAPH REPRESENTATION OF NETWORK TIME SERIES
+
+The GTConvAE uses product graphs to represent the spatiotemporal dependencies in $\mathbf{X}$ [23]. Product graphs have been proven successful for processing multivariate time series, such as imputing missing values [35, 36], denoising [37], providing a spatiotemporal Fourier analysis [33], as well as building vector autoregressive models [38], spatiotemporal scattering transforms [39], and graph-time neural networks [26]. Specifically, denote by $\mathbf{S} \in {\mathbb{R}}^{N \times N}$ the graph shift operator (GSO) of the spatial graph $\mathcal{G}$ , e.g., adjacency, Laplacian. Consider also a temporal graph ${\mathcal{G}}_{T} = \left( {{\mathcal{V}}_{T},{\mathcal{E}}_{T},{\mathbf{S}}_{T}}\right)$ , where the node set ${\mathcal{V}}_{T} = \{ 1,\ldots ,T\}$ comprises the discrete-time instants, the edge set ${\mathcal{E}}_{T} \subseteq {\mathcal{V}}_{T} \times {\mathcal{V}}_{T}$ captures the temporal dependencies, e.g., a directed line or a cyclic graph, and ${\mathbf{S}}_{T} \in {\mathbb{R}}^{T \times T}$ is the respective GSO [40,41]. The time series ${\mathbf{x}}^{n}$ can now be defined as a graph signal over the temporal graph ${\mathcal{G}}_{T}$ , where ${x}_{t}\left( n\right)$ is a scalar value assigned to the $t$ -th node of ${\mathcal{G}}_{T}$ .
+
+The product graph representing the spatiotemporal patterns in $\mathbf{X}$ is denoted by ${\mathcal{G}}_{\diamond } = {\mathcal{G}}_{T}\diamond \mathcal{G} =$ $\left( {{\mathcal{V}}_{\diamond },{\mathcal{E}}_{\diamond },{\mathbf{S}}_{\diamond }}\right)$ . The node set ${\mathcal{V}}_{\diamond }$ is the Cartesian product between ${\mathcal{V}}_{T}$ and $\mathcal{V}$ which leads to ${NT}$ distinct spatiotemporal nodes ${i}_{\diamond } = \left( {n,t}\right)$ . The edge set ${\mathcal{E}}_{\diamond }$ connects these nodes and the GSO ${\mathbf{S}}_{\diamond } \in {\mathbb{R}}^{{NT} \times {NT}}$ is dictated by the product graph. Fixing the product graph implies fixing the spatiotemporal dependencies in the data, which may lead to wrong inductive biases. To avoid this and improve flexibility, we consider a parametric product graph whose GSO is of the form
+
+$$
+{\mathbf{S}}_{\diamond } = \mathop{\sum }\limits_{{i = 0}}^{1}\mathop{\sum }\limits_{{j = 0}}^{1}{s}_{ij}\left( {{\mathbf{S}}_{T}^{i} \otimes {\mathbf{S}}^{j}}\right) = \underset{\text{ self-loops }}{\underbrace{{s}_{00}{\mathbf{I}}_{T} \otimes {\mathbf{I}}_{N}}} + \underset{\text{ Cartesian }}{\underbrace{{s}_{01}{\mathbf{I}}_{T} \otimes \mathbf{S} + {s}_{10}{\mathbf{S}}_{T} \otimes {\mathbf{I}}_{N}}} + \underset{\text{ Kronecker }}{\underbrace{{s}_{11}{\mathbf{S}}_{T} \otimes \mathbf{S}}}, \tag{1}
+$$
+
+where the scalar parameters $\left\{ {s}_{ij}\right\}$ attend the spatiotemporal connections and encompass the typical product graph choices such as the Kronecker, the Cartesian, and the strong product. By column-vectorizing $\mathbf{X}$ into ${\mathbf{x}}_{\diamond } = \operatorname{vec}\left( \mathbf{X}\right) \in {\mathbb{R}}^{NT}$ , we obtain a product graph signal assigning a real value to each spacetime node ${i}_{\diamond }$ . I.e., the dynamic data ${\mathbf{x}}_{t}$ over $\mathcal{G}$ is now a static signal ${\mathbf{x}}_{\diamond }$ over the product graph ${\mathcal{G}}_{\diamond }$ .
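+
+As an illustration, the following is a minimal sketch of how the parametric GSO in (1) could be assembled with sparse Kronecker products; the toy spatial and temporal shift operators and all variable names are assumptions for the example.
+
+```python
+import numpy as np
+from scipy import sparse
+
+def parametric_product_gso(S, S_T, s):
+    """S_diamond = sum_{i,j in {0,1}} s[i,j] * (S_T^i kron S^j), cf. (1)."""
+    I_N = sparse.identity(S.shape[0])
+    I_T = sparse.identity(S_T.shape[0])
+    return (s[0, 0] * sparse.kron(I_T, I_N)      # self-loops
+            + s[0, 1] * sparse.kron(I_T, S)      # spatial (Cartesian) part
+            + s[1, 0] * sparse.kron(S_T, I_N)    # temporal (Cartesian) part
+            + s[1, 1] * sparse.kron(S_T, S))     # Kronecker part
+
+# Toy usage: a directed line graph over T = 4 time steps and a random spatial GSO.
+T = 4
+S_T = sparse.diags([np.ones(T - 1)], [-1])       # temporal shift: edge t -> t+1
+S = sparse.random(6, 6, density=0.3, random_state=0)
+s = np.array([[0.0, 1.0], [1.0, 0.5]])           # illustrative [s00 s01; s10 s11]
+S_diamond = parametric_product_gso(S, S_T, s)    # shape (24, 24)
+```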
+
+§ 2.2 ENCODER
+
+The encoder is an ${L}_{e}$ -layered architecture in which each layer comprises a bank of product graph convolutional filters, temporal downsampling, and pointwise nonlinearities.
+
+GTConv filter captures the spatiotemporal patterns in the data matrix X. Given the parametric product graph representation ${\mathcal{G}}_{\diamond } = \left( {{\mathcal{V}}_{\diamond },{\mathcal{E}}_{\diamond },{\mathbf{S}}_{\diamond }}\right) \left\lbrack \text{ cf. (1) }\right\rbrack$ and the product graph signal ${\mathbf{x}}_{\diamond } = \operatorname{vec}\left( \mathbf{X}\right)$ as input, the output of a graph-time convolutional filter of order $K$ is
+
+$$
+{\mathbf{y}}_{\diamond } = \mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{x}}_{\diamond } = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\mathbf{S}}_{\diamond }^{k}{\mathbf{x}}_{\diamond } \tag{2}
+$$
+
+where $\mathbf{h} = {\left\lbrack {h}_{0},\ldots ,{h}_{K}\right\rbrack }^{\top }$ are the filter parameters and $\mathbf{H}\left( {\mathbf{S}}_{\diamond }\right) \mathrel{\text{ := }} \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\mathbf{S}}_{\diamond }^{k}$ the filtering matrix. The filter in (2) is called convolutional as the output ${\mathbf{y}}_{\diamond }$ is a weighted linear combination of shifted graph signals over the product graph up to $K$ times [42]. Hence, the filter is spatiotemporally local in a neighborhood of radius $K$ . The filter locality does not only depend on the order $K$ but also on the type of product graph. For example, for a fixed $K$ , the Cartesian product is more localized than the strong product, which can be considered to have a longer spatiotemporal memory [26]. Consequently, learning parameters $\left\{ {s}_{ij}\right\}$ in (1) implies learning the multi-hop resolution radius.
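+
+A minimal sketch of the graph-time convolution in (2), computed with $K$ shift-and-accumulate steps instead of forming powers of ${\mathbf{S}}_{\diamond }$ explicitly (a common implementation choice, not prescribed by the paper); the toy shift operator and filter taps are illustrative.
+
+```python
+import numpy as np
+
+def gtconv(S_diamond, x_diamond, h):
+    """y = sum_k h_k S_diamond^k x_diamond, computed with K shift-and-accumulate steps."""
+    y = h[0] * x_diamond
+    z = x_diamond.copy()
+    for hk in h[1:]:
+        z = S_diamond @ z            # one more hop over the product graph
+        y = y + hk * z
+    return y
+
+# Toy usage with a random sparse-ish matrix standing in for S_diamond.
+rng = np.random.default_rng(1)
+S_d = rng.random((12, 12)) * (rng.random((12, 12)) < 0.2)
+x = rng.standard_normal(12)
+y = gtconv(S_d, x, h=[0.5, 0.3, -0.2, 0.1, 0.05])   # filter order K = 4
+```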
+
+In the $\ell$ -th layer, the encoder has ${F}_{\ell - 1}$ product graph signal features ${\mathbf{x}}_{\diamond ,\ell - 1}^{1},\ldots ,{\mathbf{x}}_{\diamond ,\ell - 1}^{g},\ldots {\mathbf{x}}_{\diamond ,\ell - 1}^{{F}_{\ell - 1}}$ , processes these with a bank of ${F}_{\ell }{F}_{\ell - 1}$ filters and outputs ${F}_{\ell }$ product graph signal features as
+
+$$
+{\mathbf{y}}_{\diamond ,\ell }^{f} = \mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\mathbf{H}}^{fg}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{x}}_{\diamond ,\ell - 1}^{g},\;f = 1,\ldots {F}_{\ell }, \tag{3}
+$$
+
+which are the higher-level linear representation of the layer.
+
+Temporal downsampling reduces the temporal dimension of each output ${\left\{ {\mathbf{y}}_{\diamond ,\ell }^{f}\right\} }_{f}$ in (3) by downsampling along the temporal dimension with a rate $r$ . More specifically, we first transform the $f$ -th output ${\mathbf{y}}_{\diamond ,\ell }^{f} \in {\mathbb{R}}^{N{T}_{\ell - 1}^{e}}$ into a matrix ${\mathbf{Y}}_{\ell }^{f} = {\operatorname{vec}}^{-1}\left( {\mathbf{y}}_{\diamond ,\ell }^{f}\right) \in {\mathbb{R}}^{N \times {T}_{\ell - 1}^{e}}$ and then summarize every $r$ consecutive columns without overlap to obtain the downsampled matrix ${\mathbf{X}}_{d,\ell }^{f} \in {\mathbb{R}}^{N \times {T}_{\ell }^{e}}$ with ${T}_{\ell }^{e} < {T}_{\ell - 1}^{e}$ . The $\left( {n,t}\right)$ -th entry of ${\mathbf{X}}_{d,\ell }^{f}$ is computed as
+
+$$
+{\mathbf{X}}_{d,\ell }^{f}\left( {n,t}\right) = \operatorname{SUM}\left( {{\mathbf{Y}}_{\ell }^{f}\left( {n,r\left( {t - 1}\right) + 1 : {rt}}\right) }\right) ,\;f = 1,\ldots {F}_{\ell }, \tag{4}
+$$
+
+where $\operatorname{SUM}\left( \cdot \right)$ is a summary function over the temporal indices $r\left( {t - 1}\right) + 1$ to ${rt}$ . This summary function could be a simple downsampling (i.e., output the first column in the block ${\mathbf{Y}}_{\ell }^{f}(n,r\left( {t - 1}\right) + 1$ : ${rt}))$ or an aggregation function (e.g., the mean/max/min per spatial node).
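+
+A short sketch of the temporal downsampling in (4) for a pure downsampling and for mean/max aggregation; it assumes the temporal length is (or is truncated to be) divisible by $r$ , and all names are illustrative.
+
+```python
+import numpy as np
+
+def temporal_downsample(Y, r, mode="first"):
+    """Summarize every r consecutive columns of an (N, T) feature matrix, cf. (4)."""
+    N, T = Y.shape
+    blocks = Y[:, : (T // r) * r].reshape(N, T // r, r)   # truncate a possible remainder
+    if mode == "first":    # pure downsampling: keep the first column of each block
+        return blocks[:, :, 0]
+    if mode == "mean":     # aggregation alternatives, per spatial node
+        return blocks.mean(axis=2)
+    if mode == "max":
+        return blocks.max(axis=2)
+    raise ValueError(mode)
+
+Y = np.arange(24, dtype=float).reshape(3, 8)          # N = 3 nodes, T = 8 samples
+Y_down = temporal_downsample(Y, r=2, mode="max")      # shape (3, 4)
+```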
+
+This temporal downsampling increases the encoder spatiotemporal memory without affecting the spatial structure. I.e., temporal nodes that were $r$ steps apart become neighbors, which brings in a longer memory in the next layer and increases the encoder receptive field. While spatial graph pooling could also be added [43], we do not advocate it for two reasons. First, the spatial graph acts as an inductive bias for the GTConvAE [9]; hence, changing the graph in the layers via graph reduction, coarsening, or alternatives will affect the spatial structure, ultimately changing the inductive bias. Second, the spatial graph often represents the communication channels for a distributed implementation of the GTConv $\left\lbrack {{20},{42},{44}}\right\rbrack$ , and changing it may be physically impossible as sensor nodes have a limited transmission radius. An option in the latter setting may be a zero-pad spatial pooling $\left\lbrack {{45},{46}}\right\rbrack$ , but it requires memorizing the indices where the zero-padding is applied, which may be challenging for large graphs.
+
+Activation functions nonlinearize the downsampled features to increase the representational capacity. We consider an entry-wise nonlinear function $\sigma \left( \cdot \right)$ such as ReLU and produce layer $\ell$ -th output as
+
+$$
+{\mathbf{X}}_{\ell }^{f} = \sigma \left( {\mathbf{X}}_{d,\ell }^{f}\right) ,\;f = 1,\ldots {F}_{\ell }. \tag{5}
+$$
+
+The encoder performs operations (3)-(4)-(5) for all the ${L}_{e}$ layers to yield the encoded output
+
+$$
+{\mathbf{Z}}_{\diamond } \mathrel{\text{ := }} {\mathbf{X}}_{\diamond ,{L}_{e}} = \operatorname{ENC}\left( {{\mathbf{x}}_{\diamond ,0},\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{e},\mathbf{s}}\right) , \tag{6}
+$$
+
+where ${\mathbf{x}}_{\diamond ,0} \mathrel{\text{ := }} {\mathbf{x}}_{\diamond } \in {\mathbb{R}}^{NT},{\mathbf{Z}}_{\diamond } = \left\lbrack {{\mathbf{z}}_{\diamond }^{1},\ldots ,{\mathbf{z}}_{\diamond }^{{F}_{{L}_{e}}}}\right\rbrack \in {\mathbb{R}}^{N{T}_{{L}_{e}} \times {F}_{{L}_{e}}}$ , and we made explicit the dependence on the product graph parameters $\mathbf{s} = {\left\lbrack {s}_{00},{s}_{01},{s}_{10},{s}_{11}\right\rbrack }^{\top }$ [cf. (1)].
+
+§ 2.3 DECODER
+
+Mirroring the encoder, the decoder reconstructs the input from the latent representations in (6). At the generic layer $\ell$ , a graph-time convolutional filtering is performed, followed by a temporal upsampling and a pointwise nonlinearity.
+
+GTConv filtering decodes the spatiotemporal latent representations from the encoder. Considering again ${F}_{\ell - 1}$ input features ${\mathbf{z}}_{\diamond ,\ell - 1}^{1},\ldots ,{\mathbf{z}}_{\diamond ,\ell - 1}^{g},\ldots ,{\mathbf{z}}_{\diamond ,\ell - 1}^{{F}_{\ell - 1}}$ and a filter bank of ${F}_{\ell }{F}_{\ell - 1}$ GTConv filters as per (2), the outputs are
+
+$$
+{\mathbf{y}}_{\diamond ,\ell }^{f} = \mathop{\sum }\limits_{{g = 1}}^{{F}_{\ell - 1}}{\mathbf{H}}^{fg}\left( {\mathbf{S}}_{\diamond }\right) {\mathbf{z}}_{\diamond ,\ell - 1}^{g},\;f = 1,\ldots {F}_{\ell }. \tag{7}
+$$
+
+Upsampling zero-pads the removed temporal values during downsampling [cf. (4)] so that the final GTConvAE output matches the dimension of $\mathbf{X}$ . Specifically, given the $f$ -th feature ${\mathbf{y}}_{\diamond ,\ell }^{f} \in {\mathbb{R}}^{N{T}_{\ell - 1}^{d}}$ from (7), we again transform it into a matrix ${\mathbf{Y}}_{\ell }^{f} = {\operatorname{vec}}^{-1}\left( {\mathbf{y}}_{\diamond ,\ell }^{f}\right) \in {\mathbb{R}}^{N \times {T}_{\ell - 1}^{d}}$ and obtain the upsampled matrix ${\mathbf{Z}}_{u,\ell }^{f} \in {\mathbb{R}}^{N \times {T}_{\ell }^{d}}$ whose $\left( {n,t}\right)$ -th entry is computed as
+
+$$
+{\mathbf{Z}}_{u,\ell }^{f}\left( {n,t}\right) = \left\{ \begin{array}{ll} {\mathbf{Y}}_{\ell }^{f}\left( {n,\lceil t/r\rceil }\right) ; & \text{ if }\exists k \in \mathbb{Z} : t = {kr} \\ 0; & \text{ o/w } \end{array}\right. \tag{8}
+$$
+
+where $\lceil \cdot \rceil$ is the ceiling function. ${}^{1}$ The GTConv filter bank in the next layer interpolates these zero-padded values from the downsampled ones. This implies that the downsampling rate in the
+
+${}^{1}$ We considered the same down/up-sampling rate in each layer of the decoder and encoder; hence, because of the mirrored structure ${T}_{\ell }^{e}$ in (5) equals ${T}_{\ell - 1}^{d}$ in (8).
+
+encoder cannot be too aggressive, so as not to lose information, and the filter orders in the decoder cannot be too small, so as to retain a high interpolatory capacity.
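+
+A matching sketch of the zero-insertion upsampling in (8); it assumes the target temporal length is $rT$ and uses illustrative names.
+
+```python
+import numpy as np
+
+def temporal_upsample(Y, r):
+    """Zero-pad an (N, T) feature matrix to (N, r*T), keeping samples at t = kr, cf. (8)."""
+    N, T = Y.shape
+    Z = np.zeros((N, r * T))
+    Z[:, r - 1::r] = Y        # 1-based indices t = r, 2r, ... receive Y(:, ceil(t/r))
+    return Z
+
+Y = np.arange(12, dtype=float).reshape(3, 4)
+Z = temporal_upsample(Y, r=2)                 # shape (3, 8), zeros interleaved
+```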
+
+Activation functions again nonlinearize the upsampled features in (8) and yield
+
+$$
+{\mathbf{Z}}_{\ell }^{f} = \sigma \left( {\mathbf{Z}}_{u,\ell }^{f}\right) ,\;f = 1,\ldots {F}_{\ell }. \tag{9}
+$$
+
+The decoder performs operations (7)-(8)-(9) for all ${L}_{d}$ layers to yield the decoded output ${\widehat{\mathbf{x}}}_{\diamond } =$ ${\mathbf{z}}_{\diamond ,{L}_{d}} \in {\mathbb{R}}^{NT}$ , which also corresponds to the GTConvAE output
+
+$$
+{\widehat{\mathbf{x}}}_{\diamond } = {\mathbf{z}}_{\diamond ,{L}_{d}} = \operatorname{DEC}\left( {{\mathbf{Z}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{d},\mathbf{s}}\right) , \tag{10}
+$$
+
+where we match the dimensions by setting ${F}_{{L}_{d}} = 1$ .
+
+§ 2.4 LOSS FUNCTION
+
+Given (6) and (10), the GTConvAE in (1) can be detailed as
+
+$$
+{\widehat{\mathbf{x}}}_{\diamond } = \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};\mathcal{H},\mathbf{s}}\right) = \operatorname{DEC}\left( {\operatorname{ENC}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{e},\mathbf{s}}\right) ,\mathbf{S},{\mathbf{S}}_{T};{\mathcal{H}}_{d},\mathbf{s}}\right) . \tag{11}
+$$
+
+The GTConv filter parameters in $\mathcal{H}$ and the product graph parameters in $\mathbf{s}$ are estimated by minimizing the loss function
+
+$$
+\mathcal{L}\left( {\mathbf{X},\widehat{\mathbf{X}},\mathcal{G},\mathcal{H}}\right) = {\mathbb{E}}_{\mathcal{D}}\left\lbrack {\begin{Vmatrix}{\mathbf{x}}_{\diamond } - {\widehat{\mathbf{x}}}_{\diamond }\end{Vmatrix}}_{2}\right\rbrack + \rho \parallel \mathbf{s}{\parallel }_{1}. \tag{12}
+$$
+
+where the first term measures the reconstruction error over the probability distribution $\mathcal{D}$ of the training set, whereas the second term imposes sparsity on the spatiotemporal dependencies of the product graph. Scalar $\rho > 0$ controls the trade-off between fitting and regularization, and a higher value implies a stronger spatiotemporal sparsity (via the ${l}_{1}$ -norm $\parallel \cdot {\parallel }_{1}$ ); i.e., sparser spatiotemporal attention.
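+
+A minimal sketch of an empirical version of the loss in (12); the array shapes and names are assumptions for the example.
+
+```python
+import numpy as np
+
+def gtconvae_loss(x, x_hat, s, rho):
+    """Empirical version of (12): mean reconstruction error plus l1 sparsity on s.
+
+    x, x_hat: (NT, B) batch of vectorized inputs and reconstructions,
+    s: the four product-graph parameters, rho: regularization weight.
+    """
+    recon = np.mean(np.linalg.norm(x - x_hat, axis=0))
+    return recon + rho * np.sum(np.abs(s))
+
+# Toy usage with random arrays standing in for a batch.
+rng = np.random.default_rng(2)
+x, x_hat = rng.random((30, 5)), rng.random((30, 5))
+loss = gtconvae_loss(x, x_hat, s=np.array([0.0, 1.0, 1.0, 0.5]), rho=0.2)
+```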
+
+Complexity analysis: Denoting the maximum number of features over all layers by ${F}_{\max } = \max \left\{ {F}_{\ell }\right\}$ , the GTConvAE has $\left| \mathcal{H}\right| = \left( {{L}_{e} + {L}_{d}}\right) \left( {K + 1}\right) {F}_{\max }^{2}$ parameters. This is because each GTConv filter (2) has $K + 1$ parameters and in each layer a filter bank of at most ${F}_{max}^{2}$ filters is used. Although the product graph is of large dimension, it is highly sparse, and the computational complexity of the GTConvAE is of order $\mathcal{O}\left( {{M}_{\diamond }\left| \mathcal{H}\right| }\right)$ , where ${M}_{\diamond } = {NT} + N{M}_{T} + {MT} + {2M}{M}_{T}$ is the number of edges of the product graph ( $M$ edges in the spatial graph and ${M}_{T}$ edges in the temporal graph). This is because each graph-time filter has a computational complexity of order $\mathcal{O}\left( {\left( {K + 1}\right) {M}_{\diamond }}\right)$ [26] and the GTConvAE consists of $\left( {{L}_{e} + {L}_{d}}\right) {F}_{\max }^{2}$ graph-time filters. Note that we consider a sampling rate of $r = 1$ to provide a worst-case analysis, but the computational complexity can be further reduced for $r > 1$ .
+
+§ 3 STABILITY ANALYSIS
+
+In this section, we conduct a stability analysis of the GTConvAE w.r.t. relative perturbations in the spatial graph. This stability analysis is motivated by the fact that we do not always have access to the ground truth spatial graph due to modeling issues or when the physical network undergoes slight changes over time. Hence, the spatial graph used for training differs from that used for testing; thus, having a stable GTConvAE is desirable to perform the tasks reliably.
+
+We consider the relative perturbation model proposed in [27]
+
+$$
+\widehat{\mathbf{S}} = \mathbf{S} + \left( {\mathbf{{SE}} + \mathbf{{ES}}}\right) \tag{13}
+$$
+
+where $\widehat{\mathbf{S}}$ is the perturbed GSO and $\mathbf{E}$ is the perturbation matrix with bounded operator norm $\parallel \mathbf{E}\parallel \leq \epsilon$ . This model accounts for graph perturbations that depend on the graph structure, i.e., a higher-degree node (a node with higher-weighted incident edges) is relatively more prone to perturbation.
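+
+A sketch of how a perturbed GSO following (13) could be generated for experiments, drawing a random symmetric $\mathbf{E}$ and rescaling its operator norm to $\epsilon$ ; the toy GSO and names are illustrative.
+
+```python
+import numpy as np
+
+def relative_perturbation(S, eps, rng):
+    """Return S_hat = S + (S E + E S) with a random symmetric E such that ||E|| <= eps, cf. (13)."""
+    E = rng.standard_normal(S.shape)
+    E = (E + E.T) / 2
+    E *= eps / np.linalg.norm(E, 2)       # rescale the spectral (operator) norm to eps
+    return S + S @ E + E @ S, E
+
+rng = np.random.default_rng(3)
+S = rng.random((8, 8))
+S = (S + S.T) / 2                          # toy symmetric spatial GSO
+S_hat, E = relative_perturbation(S, eps=0.05, rng=rng)
+```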
+
+§ 3.1 SPATIOTEMPORAL INTEGRAL LIPSCHITZ FILTERS
+
+To investigate the stability of the GTConvAE, we first characterize the graph-time convolutional filters in the spectral domain. Consider the eigendecompositions of the spatial GSO $\mathbf{S} = \mathbf{V}\mathbf{\Lambda }{\mathbf{V}}^{\mathrm{H}}$ and of the temporal GSO ${\mathbf{S}}_{T} = {\mathbf{V}}_{T}{\mathbf{\Lambda }}_{T}{\mathbf{V}}_{T}^{\mathrm{H}}$ . Matrices $\mathbf{V} = \left\lbrack {{\mathbf{v}}_{1},\ldots ,{\mathbf{v}}_{N}}\right\rbrack$ and ${\mathbf{V}}_{T} = \left\lbrack {{\mathbf{v}}_{T,1},\ldots ,{\mathbf{v}}_{T,T}}\right\rbrack$ collect the spatial and the temporal eigenvectors, respectively, and $\mathbf{\Lambda } = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{N}}\right)$ and ${\mathbf{\Lambda }}_{T} = \operatorname{diag}\left( {{\lambda }_{T,1},\ldots ,{\lambda }_{T,T}}\right)$ the corresponding eigenvalues. From (1), the eigendecomposition of the product graph GSO is ${\mathbf{S}}_{\diamond } = {\mathbf{V}}_{\diamond }{\mathbf{\Lambda }}_{\diamond }{\mathbf{V}}_{\diamond }^{\mathrm{H}}$ with eigenvectors ${\mathbf{V}}_{\diamond } = {\mathbf{V}}_{T} \otimes \mathbf{V}$ being the Kronecker product $\otimes$ of the respective eigenvector matrices, and the eigenvalues ${\mathbf{\Lambda }}_{\diamond } = {\mathbf{\Lambda }}_{T}\diamond \mathbf{\Lambda }$ are defined by the product graph rule. As in graph signal processing [32], it is possible to characterize the joint graph-time Fourier transform of product graph signals. Specifically, the graph-time Fourier transform of a signal ${\mathbf{x}}_{\diamond }$ is defined as $\widetilde{\mathbf{x}} = {\left( {\mathbf{V}}_{T} \otimes \mathbf{V}\right) }^{\mathrm{H}}{\mathbf{x}}_{\diamond }$ and the eigenvalues in ${\mathbf{\Lambda }}_{\diamond }$ now collect the graph-time frequencies of the product graph [33]. Applying this Fourier transform on the input and output of the GTConv filter in (2), we can write the filter input-output relation as $\widetilde{\mathbf{y}} = \mathbf{H}\left( {\mathbf{\Lambda }}_{\diamond }\right) \widetilde{\mathbf{x}}$ , where $\widetilde{\mathbf{y}}$ is the Fourier transform of the output and $\mathbf{H}\left( {\mathbf{\Lambda }}_{\diamond }\right)$ is an ${NT} \times {NT}$ diagonal matrix containing the filter frequency response on the main diagonal. This frequency response is of the form
+
+$$
+h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) = \mathop{\sum }\limits_{{k = 0}}^{K}{h}_{k}{\lambda }_{\diamond ,\left( {n,t}\right) }^{k} \tag{14}
+$$
+
+where ${\lambda }_{\diamond ,\left( {n,t}\right) } = {\lambda }_{T,t}\diamond {\lambda }_{n}$ indicates the eigenvalue of ${\mathbf{S}}_{\diamond }$ corresponding to the spatial index $n \in \left\lbrack N\right\rbrack$ and temporal index $t \in \left\lbrack T\right\rbrack$ of the product graph.
+
+The eigenvalues ${\lambda }_{\diamond ,\left( {n,t}\right) }$ can be considered as the frequencies of the product graph and can be ordered in ascending order of magnitude. We can then characterize the variation of the filter frequency response for two different spatial eigenvalues.
+
+Definition 1. A GTConv filter with a frequency response $h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right)$ is graph integral Lipschitz if there exists constant $C > 0$ such that for all frequencies ${\lambda }_{\diamond ,\left( {n,t}\right) },{\lambda }_{\diamond ,\left( {{n}^{\prime },{t}^{\prime }}\right) } \in {\mathbf{\Lambda }}_{\diamond }$ , it holds that
+
+$$
+\left| {h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) - h\left( {\lambda }_{\diamond ,\left( {{n}^{\prime },{t}^{\prime }}\right) }\right) }\right| \leq C\frac{\left| {\lambda }_{n} - {\lambda }_{{n}^{\prime }}\right| }{\left| {{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}\right| /2}\text{ for all }\left\{ {{\lambda }_{n},{\lambda }_{{n}^{\prime }}}\right\} \in \mathbf{\Lambda }. \tag{15}
+$$
+
+Expression (15) states that the frequency response of the graph-time convolutional filter should vary sub-linearly in the spatial eigenvalues, with a rate that depends on the midpoint $\left| {{\lambda }_{n} + {\lambda }_{{n}^{\prime }}}\right| /2$ . This implies
+
+$$
+\left| {{\lambda }_{n}\frac{\partial h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) }{\partial {\lambda }_{n}}}\right| \leq C\text{ for all }{\lambda }_{n} \in \mathbf{\Lambda }\;\text{ and }\;{\lambda }_{\diamond ,\left( {n,t}\right) } \in {\mathbf{\Lambda }}_{\diamond } \tag{16}
+$$
+
+which means the integral Lipschitz filter cannot vary drastically at high frequencies. Hence, such a filter can discriminate low-frequency content but not high-frequency content.
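+
+For intuition, the following sketch numerically evaluates the left-hand side of (16) on a grid, using the parametric eigenvalue map ${\lambda }_{\diamond } = {s}_{00} + {s}_{01}{\lambda }_{n} + {s}_{10}{\lambda }_{T} + {s}_{11}{\lambda }_{T}{\lambda }_{n}$ implied by (1); the filter taps, eigenvalue ranges, and product-graph parameters are illustrative choices.
+
+```python
+import numpy as np
+
+# Illustrative filter taps and product-graph parameters (s00 omitted: it does not affect the derivative).
+h = np.array([0.4, 0.3, -0.1, 0.05])
+s01, s10, s11 = 1.0, 1.0, 0.5
+
+def dh(lam):
+    """Derivative h'(lambda) = sum_k k h_k lambda^{k-1}."""
+    return sum(k * hk * lam ** (k - 1) for k, hk in enumerate(h) if k > 0)
+
+lam_n = np.linspace(-2, 2, 201)     # spatial eigenvalue grid (illustrative range)
+lam_T = np.linspace(-1, 1, 5)       # temporal eigenvalues (illustrative)
+
+C_emp = 0.0
+for lt in lam_T:
+    lam_d = s01 * lam_n + s10 * lt + s11 * lt * lam_n
+    # |lambda_n d h(lambda_diamond)/d lambda_n| = |lambda_n h'(lambda_diamond) (s01 + s11 lambda_T)|
+    C_emp = max(C_emp, np.max(np.abs(lam_n * dh(lam_d) * (s01 + s11 * lt))))
+
+print(f"empirical constant over the grid: C >= {C_emp:.3f}")
+```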
+
+Definition 2. A graph-time convolutional filter has normalized frequency response if $\left| {h\left( {\lambda }_{\diamond ,\left( {n,t}\right) }\right) }\right| \leq 1$ for all ${\lambda }_{\diamond ,\left( {n,t}\right) } \in {\mathbf{\Lambda }}_{\diamond }$ .
+
+This definition is a direct consequence of normalizing the filters' frequency response by its maximum value. We show next that GTConvAEs with filters satisfying Defs. 1 and 2 are stable to perturbations of the form (13).
+
+§ 3.2 STABILITY RESULT
+
+The following theorem with proof in Appendix A provides the main result.
+
+Theorem 1. Consider a GTConvAE with an ${L}_{e}$ -layer encoder and an ${L}_{d}$ -layer decoder having ${F}_{\ell } \leq {F}_{\max }$ and ${F}_{d,\ell } \leq {F}_{\max }$ features per layer in encoder and decoder, respectively, and a summary function $\operatorname{SUM}\left( \cdot \right)$ performing pure downsampling with rate $r$ . Consider also the filters are integral Lipschitz [cf. Def. 1] with a normalized frequency response [cf. Def. 2] and that the nonlinearities are 1-Lipschitz (e.g., ReLU, absolute value). Let this GTConvAE be trained over the product graph (1) and deployed over its perturbed version whose spatial GSO is given in (13) with a perturbation of at most $\parallel \mathbf{E}\parallel \leq \epsilon$ . The distance between the two models is upper bounded by
+
+$$
+\parallel \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\mathbf{S},{\mathbf{S}}_{T}}\right) - \operatorname{GTConvAE}\left( {{\mathbf{x}}_{\diamond },\widehat{\mathbf{S}},{\mathbf{S}}_{T}}\right) {\parallel }_{2} \leq \left( {{L}_{d} + {L}_{e}}\right) {r}^{-{L}_{e}/2}{\epsilon \Delta }{F}_{max}^{{L}_{e} + {L}_{d} - 1}{\begin{Vmatrix}{\mathbf{x}}_{\diamond }\end{Vmatrix}}_{2}. \tag{17}
+$$
+
+where $\Delta = {2C}\left( {{s}_{01} + {s}_{11}{\lambda }_{T,\text{ max }}}\right) \left( {1 + \delta \sqrt{NT}}\right)$ , and $\delta = {\left( \parallel \mathbf{U} - \mathbf{V}{\parallel }^{2} + 1\right) }^{2} - 1$ with eigenvectors $\mathbf{U}$ from $\mathbf{E} = {\mathbf{{UMU}}}^{\mathrm{H}}$ and $\mathbf{V}$ from $\mathbf{S} = {\mathbf{{V\Lambda V}}}^{\mathrm{H}}$ .
+
+The result (17) states that the GTConvAE is stable against relative perturbations. It also suggests that the GTConvAE is less stable for larger product graphs $\left( \sqrt{NT}\right)$ since more nodes pass information over the perturbed edges. Moreover, making the model more complex by increasing the number of features or layers compromises stability as more graph-time convolutional filters work on a perturbed graph $\left( {F}_{\max }^{{L}_{e} + {L}_{d} - 1}\right)$ . We also see that the stability improves with the sampling rate $r > 1$ because fewer nodes operate over the perturbed graph after downsampling. Furthermore, for a deeper encoder we have more downsampling, hence the stability improves; yet there is a trade-off between improving the bound imposed by the terms ${r}^{-{L}_{e}/2},{F}_{\max }^{{L}_{e} + {L}_{d} - 1}$ , and ${L}_{e} + {L}_{d}$ . Finally, parameters ${s}_{01}$ and ${s}_{11}$ appear in the stability bound because they are the only ones composing the spatial edges; thus, minimizing $\parallel \mathbf{s}{\parallel }_{1}$ in (12) leads to improved stability.
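+
+To make these trade-offs tangible, the following short sketch evaluates the right-hand side of (17) for a few encoder depths; all numerical values are placeholders, not results from the paper.
+
+```python
+import numpy as np
+
+def stability_bound(L_e, L_d, r, F_max, eps, C, s01, s11, lam_T_max, delta, N, T, x_norm=1.0):
+    """Right-hand side of (17) for given architecture and perturbation parameters."""
+    Delta = 2 * C * (s01 + s11 * lam_T_max) * (1 + delta * np.sqrt(N * T))
+    return (L_d + L_e) * r ** (-L_e / 2) * eps * Delta * F_max ** (L_e + L_d - 1) * x_norm
+
+# Placeholder numbers: a deeper encoder tightens r^{-L_e/2} but loosens (L_e + L_d) and F_max^{L_e + L_d - 1}.
+for L_e in (2, 3, 4):
+    b = stability_bound(L_e, L_d=2, r=2, F_max=8, eps=0.05, C=1.0,
+                        s01=1.0, s11=0.5, lam_T_max=1.0, delta=0.1, N=75, T=8)
+    print(L_e, f"{b:.2f}")
+```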
+
+§ 4 NUMERICAL RESULTS
+
+This section compares the GTConvAE with baseline solutions and competitive alternatives for time series denoising as well as anomaly detection with real data from solar irradiance and water networks. In all experiments, the ADAM optimizer with the standard hyperparameters is used and an unweighted directed line graph is considered for the temporal graph in (1).
+
+§ 4.1 DENOISING OF SOLAR IRRADIANCE TIME SERIES
+
+We consider the task of denoising solar irradiance time series over $N = {75}$ solar cities around the northern region of the U.S. measured in GHI $\left( {W/{m}^{2}}\right)$ [4]. Each solar city is a vertex, and undirected edge weights are set from the physical distances between the cities via a thresholded Gaussian kernel with $\sigma = {0.25}$ and threshold ${th} = {0.1}$ , after normalizing the maximum weight to 1 [32]. The noise is generated via a zero-mean Gaussian distribution with a covariance matrix corresponding to the pseudo-inverse of the normalized graph Laplacian.
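+
+A sketch of this distance-based graph construction, under the assumption that the Gaussian kernel is applied to squared distances and that the weights are normalized before thresholding; the coordinates are placeholder data.
+
+```python
+import numpy as np
+
+def gaussian_threshold_graph(D, sigma=0.25, th=0.1):
+    """Weighted adjacency from pairwise distances via a thresholded Gaussian kernel."""
+    W = np.exp(-D**2 / (2 * sigma**2))
+    np.fill_diagonal(W, 0)
+    W = W / W.max()          # normalize the maximum weight to 1
+    W[W < th] = 0            # drop weak edges below the threshold
+    return W
+
+# Placeholder 2-D coordinates for 5 "cities"; D holds their pairwise distances.
+rng = np.random.default_rng(4)
+P = rng.random((5, 2))
+D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
+W = gaussian_threshold_graph(D)
+```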
+
+Figure 1: Denoising performance of the proposed GTConvAE and alternatives. The standard deviation for all the models is of order ${10}^{-2}$ .
+
+Experimental setup. We considered the first 2000 samples for training and validation (2000-2014) and the subsequent 200 (2014-2016) for testing. The input data is a single feature corresponding to the GHI measurement and the product graph has $N = {75}$ spatial nodes and $T = 8$ temporal nodes. The GTConvAE has three layers with $\{ 8,4,2\}$ features in the encoder and reversely in the decoder; all filters are 4th-order and normalized Laplacian is used as GSO; a downsampling rate of $r = 2$ ; a max function in (4); and ReLU activation functions. The regularizer weight in (12) is $\rho = {0.2}$ and the learning rate is ${25} \times {10}^{-4}$ . We compared the GTConvAE with the following alternatives:
+
+ * C3D [5]: non-graph spatiotemporal autoencoder using three-dimensional CNNs.
+
+ * ConvLSTMAE [7]: A non-graph spatiotemporal autoencoder using two-dimensional CNNs followed by LSTMs.
+
+ * STGAE [1]: A modular spatiotemporal graph autoencoder that uses an edge varying filter for the graph dimension followed by temporal convolution.
+
+ * Baseline GCNN [42]: An autoencoder built with a conventional graph convolutional neural network using the time series as features over the nodes. The shift operator is the normalized Laplacian matrix.
+
+The first two methods are considered to show the role of using a distance graph as an inductive bias. The third method is considered to compare the joint GTConvAE over disjoint alternatives, whereas the last model is considered to show the role of the sparse product graphs rather than treating time series as node features. The parameters for all models are chosen via grid search from the ranges reported in Appendix B.
+
+Results. Fig. 1 shows the reconstruction normalized mean squared error (NMSE) for different signal-to-noise ratios (SNRs). The proposed GTConvAE performs on par with STGAE at low SNRs and better at high SNRs. We attribute this improvement to the ability of the GTConvAE to capture the spatiotemporal patterns in the data jointly, while STGAE operates disjointly. We also see that the GTConvAE consistently outperforms the baseline GCNN, highlighting the importance of the sparser product graphs and temporal downsampling. Finally, we also observe superior performance compared with the non-graph alternatives C3D and ConvLSTMAE.
+
+Table 1: Comparison of different models on the BATADAL dataset. For all metrics, higher is better.
+
+| Model | ${N}_{A}$ | $\mathcal{S}$ | ${\mathcal{S}}_{\text{TTD}}$ | ${\mathcal{S}}_{\mathrm{CM}}$ | TPR | TNR |
+| --- | --- | --- | --- | --- | --- | --- |
+| STGCAE-LSTM [2] | 7 | 0.924 | 0.920 | 0.928 | 0.892 | 0.964 |
+| TGCN [47] | 7 | 0.931 | 0.934 | 0.928 | 0.885 | 0.971 |
+| GTConvAE (ours) | 7 | 0.940 | 0.928 | 0.952 | 0.922 | 0.981 |
+
+### 4.2 Anomaly Detection in Water Networks
+
+We now consider the task of detecting cyber-physical attacks on a water network. We considered the C-town network from the Battle of ATtack Detection ALgorithms (BATADAL) dataset comprising $N = {388}$ nodes (demand junctions, storage tanks, and reservoirs) and 8762 hourly measurements of 43 different node feature signals for a period of 12 months. We used the same setup as in [47] and considered a correlation graph from the data. The dataset provides a normal operating condition comprising recordings for the first 12 months and an anomalous event operating condition comprising 7 attacks over the successive 3 months. Refer to [48, 49] for more detail about the BATADAL dataset.
+
+Experimental setup. The normal operating condition data are used to train the model for one-step forecasting to be used for detecting anomalies. The anomalous event operating condition data is used for testing and an anomaly is flagged if the prediction error exceeds a fixed threshold. We set the threshold intuitively to three times the error variance during training. The inputs are the 43 time series over the $N = {388}$ nodes and we considered $T = 6$ for the temporal graph dimension. The GTConvAE has two layers with $\{ 8,2\}$ features in the encoder and reversely in the decoder; all filters are of order $K = 4$ ; a downsampling rate $r = 2$ ; a max function in (4); and ReLU activation functions. The regularizer weight in (12) is $\rho = {0.14}$ and learning rate is $5 \times {10}^{-4}$ . We compared the performance against two graph-based alternatives:
+
+ * STGCAE-LSTM [2]: A related solution to our method that uses a Cartesian spatiotemporal graph with graph convolutions followed by an LSTM in the latent domain.
+
+ * TGCN [47]: A modular graph-based autoencoder using cascades of temporal convolutions and message passing.
+
+The parameters for all models are obtained via grid search from the ranges reported in Appendix C. We measure the performance via the $\mathcal{S}$ -score defined for the BATADAL dataset, which combines ${\mathcal{S}}_{\text{TTD}}$ for the timeliness of anomaly detection and ${\mathcal{S}}_{\mathrm{CM}}$ for the classification accuracy. The $\mathcal{S}$ -score is defined as
+
+$$
+\mathcal{S} = {0.5}\left( {{\mathcal{S}}_{\mathrm{{TTD}}} + {\mathcal{S}}_{\mathrm{{CM}}}}\right) = {0.5}\left( {\left( {1 - \frac{1}{{N}_{A}}\mathop{\sum }\limits_{{i = 1}}^{{N}_{A}}\frac{{\mathrm{{TTD}}}_{i}}{\Delta {\mathrm{T}}_{i}}}\right) + \frac{\mathrm{{TPR}} + \mathrm{{TNR}}}{2}}\right) , \tag{18}
+$$
+
+where ${N}_{A}$ is the number of attacks, ${\mathrm{TTD}}_{i}$ is the time to detection of the $i$ -th attack, $\Delta {T}_{i}$ is the duration of the $i$ -th attack, TPR is the true positive rate, and TNR is the true negative rate.
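+
+The following sketch evaluates (18) from detection results; the per-attack detection times, durations, and confusion-matrix rates used in the example are purely hypothetical placeholders.
+
+```python
+import numpy as np
+
+def s_score(ttd, dT, tpr, tnr):
+    """BATADAL S-score of (18): average of the time-to-detection score and
+    the classification score (TPR + TNR) / 2."""
+    ttd, dT = np.asarray(ttd, float), np.asarray(dT, float)
+    s_ttd = 1.0 - np.mean(ttd / dT)   # earlier detection -> higher score
+    s_cm = (tpr + tnr) / 2.0
+    return 0.5 * (s_ttd + s_cm), s_ttd, s_cm
+
+# Hypothetical values for N_A = 7 attacks (detection delays vs. attack durations).
+S, S_ttd, S_cm = s_score(ttd=[2, 5, 1, 3, 4, 2, 6],
+                         dT=[30, 40, 25, 35, 50, 20, 45],
+                         tpr=0.92, tnr=0.98)
+```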
+
+Results. Table 1 shows that all models managed to detect all of the attacks; however, the TGCN performs better in detection timing ${\mathcal{S}}_{\text{TTD}}$ . This is because the threshold in [47] is calibrated on a validation dataset, while we used a fixed threshold based only on the training data. In the anomaly detection accuracy ${\mathcal{S}}_{\mathrm{CM}}$ , the GTConvAE outperforms the other two models, as the product graphs together with downsampling enable it to learn the spatiotemporal patterns in the data effectively. Overall, the GTConvAE performs better than the other models by a small margin.
+
+Figure 2: Stability results for different scenarios of the GTConvAE and fixed product graphs. (a) Different SNRs in the topology. (b) Different graph sizes in $4\mathrm{\;{dB}}$ perturbation. (c) Different sampling rates $r$ .
+
+### 4.3 Stability Analysis
+
+To investigate the stability of the GTConvAE, we trained the model on a synthesized dataset so we could control all the settings, such as the spatial graph size $N$ . The graph is an undirected stochastic block model with 5 communities and $N \in \{ {50},{100},\ldots ,{500}\}$ nodes. The edges are drawn independently with probability 0.8 for nodes in the same community and 0.2 otherwise. Each data sample is a diffused signal over the graph $\mathbf{X} = \left\lbrack {\mathbf{{Sx}},\ldots ,{\mathbf{S}}^{T}\mathbf{x}}\right\rbrack$ with $T = 6$ and $\mathbf{x}$ having a random non-zero entry. The autoencoder is used to reconstruct this data.
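+
+A minimal sketch of this data generation is given below, assuming the shift operator is the raw adjacency matrix and reading "a random non-zero entry" as a one-hot signal; the exact normalization used in the experiments may differ.
+
+```python
+import numpy as np
+
+def sbm_adjacency(N, n_comm=5, p_in=0.8, p_out=0.2, rng=None):
+    """Undirected stochastic block model with (roughly) equally sized communities."""
+    rng = rng or np.random.default_rng(0)
+    labels = np.repeat(np.arange(n_comm), int(np.ceil(N / n_comm)))[:N]
+    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
+    A = (rng.random((N, N)) < probs).astype(float)
+    A = np.triu(A, 1); A = A + A.T            # symmetrize, no self-loops
+    return A
+
+def diffused_sample(S, T=6, rng=None):
+    """One sample X = [Sx, S^2 x, ..., S^T x] with x a random one-hot signal."""
+    rng = rng or np.random.default_rng(1)
+    x = np.zeros(S.shape[0]); x[rng.integers(S.shape[0])] = 1.0
+    cols, z = [], x
+    for _ in range(T):
+        z = S @ z
+        cols.append(z)
+    return np.stack(cols, axis=1)             # shape (N, T)
+
+S = sbm_adjacency(N=100)
+X = diffused_sample(S)
+```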
+
+Experimental setup. The model has two layers in the encoder and decoder with sampling rate $r = 2$ . The encoder layers have $\{ 8,4\}$ features and the decoder the reverse. All filters are of order four and the normalized graph Laplacian is used as the GSO. The activation functions are ReLU and pure downsampling is considered. The regularizer weight is 0.25 and the learning rate is ${25} \times {10}^{-3}$ . The model is trained on graphs of different sizes and tested with a perturbed graph following the relative perturbation model in (13) for different SNR scenarios in the topology. We compare the stability of the GTConvAE with learned product graphs against the same autoencoder with fixed Cartesian and strong product graphs.
+
+Results. Fig. 2a shows the behavior of the GTConvAE under different noise scenarios in the topology. The GTConvAE is the most stable in medium and high SNRs as it leverages sparsity in the spatiotemporal coupling. However, its performance drops more rapidly in low SNR scenarios as its parameters are trained for the data and task. Fig. 2b shows the reconstruction error over graphs of different sizes. The GTConvAE is more stable than the other models, even for the larger graphs, for the same reason as before. All the models lose performance similarly as the size of the graph grows, which is consistent with the theoretical result in (17).
+
+## 5 Conclusion
+
+We introduced the GTConvAE as an unsupervised model for learning representations from multivariate time series over networks. The GTConvAE uses parametric product graphs to aggregate information from a spatiotemporal neighborhood while also learning the spatiotemporal couplings in the product graph. Leveraging its convolutional nature, we proposed a spectral analysis for the GTConvAE, which led to a stability analysis. The stability analysis states that the GTConvAE is stable against relative perturbations in the spatial graph as long as the graph-time filters vary smoothly over high spatiotemporal frequencies. Finally, numerical results showed that the GTConvAE compares well with state-of-the-art models on benchmark datasets and corroborated the stability results.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..dcd2c9ed7cf5d7421a60aea37b3c3dd6adb7f95c
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,568 @@
+# Piecewise-Velocity Model for Learning Continuous-time Dynamic Node Representations
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Networks have become indispensable and ubiquitous structures in many fields to model the interactions among different entities, such as friendship in social networks or protein interactions in biological graphs. A major challenge is to understand the structure and dynamics of these systems. Although networks evolve through time, most existing graph representation learning methods target only static networks. Whereas approaches have been developed for the modeling of dynamic networks, there is a lack of efficient continuous-time dynamic graph representation learning methods that can provide accurate network characterization and visualization in low dimensions while explicitly accounting for prominent network characteristics such as homophily and transitivity. In this paper, we propose the PIecewise-VElocity Model (PIVEM) for the representation of continuous-time dynamic networks. It learns dynamic embeddings in which the temporal evolution of nodes is approximated by piecewise linear interpolations based on a latent distance model with piecewise constant node-specific velocities. The model allows for analytically tractable expressions of the associated Poisson process likelihood with scalable inference invariant to the number of events. We further impose a scalable Kronecker-structured Gaussian Process prior on the dynamics, accounting for community structure, temporal smoothness, and disentangled (uncorrelated) latent embedding dimensions optimally learned to characterize the network dynamics. We show that PIVEM can successfully represent network structure and dynamics in ultra-low two- and three-dimensional embedding spaces. We further extensively evaluate the performance of the approach on various networks of different types and sizes and find that it outperforms existing relevant state-of-the-art methods in downstream tasks such as link prediction. In summary, PIVEM enables easily interpretable dynamic network visualizations and characterizations that can further improve our understanding of the intrinsic dynamics of time-evolving networks.
+
+## 1 Introduction
+
+With technological advancements in data storage and production systems, we have witnessed the massive growth of graph (or network) data in recent years, with many prominent examples, including social, technological, and biological networks from diverse disciplines [1]. Graphs provide an elegant way to store and represent the interactions among data points, and machine learning techniques on graphs have thus gained considerable attention for extracting meaningful information from these complex systems and performing various predictive tasks. In this regard, Graph Representation Learning (GRL) techniques have become a cornerstone of the field through their exceptional performance in many downstream tasks such as node classification and edge prediction. Unlike classical techniques relying on the extraction and design of handcrafted feature vectors peculiar to given networks, GRL approaches aim to design algorithms that automatically learn features optimally preserving various characteristics of networks in their induced latent space.
+
+Although many networks evolve through time and are subject to structural modifications with newly arriving nodes or emerging connections, GRL methods have primarily addressed static networks, in other words, a snapshot of the network at a specific time. However, recent years have seen increasing efforts toward modeling dynamic complex networks; see [2] for a review. Whereas most approaches have concentrated on discrete-time temporal networks built upon a collection of time-stamped snapshots (cf. [3-11]), the modeling of networks in continuous time has also been studied (cf. [12-15]). These approaches have been based on latent class [3, 4, 12-14] and latent feature [5-11, 15] modeling approaches, including advanced dynamic graph neural network representations [16, 17].
+
+Although these procedures have enabled the characterization of evolving networks, useful for downstream tasks such as link prediction and node classification, existing dynamic latent feature models are either formulated in discrete time or do not explicitly account for network homophily and transitivity in terms of their latent representations. Whereas latent class models typically provide interpretable representations at the level of groups, latent feature models generally rely on high-dimensional latent representations that are not easily amenable to visualization and interpretation. A further complication of most existing dynamic modeling approaches is their scalability, with complexity typically growing with the number of observed events and the number of network dyads.
+
+This work addresses the problem of embedding nodes in a continuous-time latent space and seeks to accurately model network interaction patterns using low-dimensional, scalable representations explicitly accounting for network homophily and transitivity. The main contributions of the paper can be summarized as follows:
+
+- We propose a novel scalable GRL method, the PIecewise-VElocity Model (PIVEM), to flexibly learn continuous-time dynamic node representations.
+
+- We present a framework balancing the trade-off between the smoothness of node trajectories in the latent space and model capacity accounting for the temporal evolution.
+
+- We show that the PIVEM can embed nodes accurately in very low dimensional spaces, i.e., $D = 2$ , such that it serves as a dynamic network visualization tool facilitating human insights into networks' complex, evolving structures.
+
+- The performance of the introduced approach is extensively evaluated in various downstream tasks, such as network reconstruction and link prediction. We show that it outperforms well-known baseline methods on a wide range of datasets. In addition, we propose an efficient model optimization strategy enabling the PIVEM to scale to large networks.
+
+Source code and other materials. The datasets, implementation of the method in Python, and all the generated animations can be found at the address: https://tinyurl.com/pivem.
+
+## 2 Related Work
+
+Work on the dynamic modeling of complex networks has attracted substantial attention in recent years and covers approaches for the modeling of dynamic structures at the level of groups (i.e., latent class models) as well as dynamic representation learning approaches based on latent feature models, including graph neural networks (GNNs). Whereas most attention has been given to discrete-time dynamic networks, a substantial body of work has also covered continuous-time modeling, as outlined below.
+
+### 2.1 Dynamic Latent Class Models
+
+Initial efforts for modeling continuously evolving networks have combined latent class models defined by stochastic block models [18, 19] with Hawkes processes [20, 21]. In the work of [12], co-dependent (through time) Hawkes processes were combined with the Infinite Relational Model [22] (Hawkes IRM), yielding a non-parametric Bayesian approach capable of expressing reciprocity between inferred groups of actors. Drawbacks of such a model are the computational cost of the imposed Markov-chain Monte-Carlo optimization, as well as its limitation to modeling only reciprocation effects. Scalability issues were addressed in [13] via the Block Hawkes Model (BHM), which utilizes variational inference and simplifies the Hawkes IRM model by associating only the inferred block structure pairs with a univariate point process. Recently, the BHM model was extended to decouple interactions between different pairs of nodes belonging to the same block pair through the use of independent univariate Hawkes processes, defining the Community Hawkes Independent Pairs model [14]. Whereas the above works have been based on continuous-time modeling of dynamic networks, the dynamic-IRM (dIRM) of [3] focused on the modeling of discrete-time networks by inducing an infinite Hidden Markov Model (IHMM) to account for transitions of nodes between communities over time. In [4], a dynamic hierarchical block model was proposed based on the modeling of change points, admitting dynamic node relocation within a Gibbs fragmentation tree. Despite the various advantages of such models, networks are constrained to be regarded and analyzed at a block level, which in many cases is restrictive.
+
+### 2.2 Dynamic Latent Feature Models
+
+Prominent works on node-level representations of continuous-time networks have originally considered feature propagation within the discrete-time network topology [23] or extended the random-walk frameworks of [6] and [7] to the temporal case, yielding the Continuous-Time Dynamic Network Embeddings model (CTDNE), which outperforms the aforementioned original approaches in multiple temporal settings. CTDNE provides a single temporal-aware node embedding, meaning that network and node evolution cannot be visualized and explored. A more flexible approach was designed in [24] (DyRep), where temporal node embeddings are learned under a so-called latent mediation process, combining an association process describing the dynamics of the network with a communication process describing the dynamics on the network. The DyRep model uses deep recurrent architectures to parameterize the intensity function of the point process, and thus the embedding space suffers from a lack of explainability. Graph neural networks (GNNs) can be extended to the analysis of continuous-time networks via the Temporal Graph Network (TGN) [17], where the classical encoder-decoder architecture is coupled with a memory cell.
+
+In the context of latent feature dynamic network models, Gaussian Processes (GPs) have been used to characterize the smoothness of the temporal dynamics. This includes the discrete-time dynamic network model considered in [8], in which latent factors were endowed with a GP prior based on radial basis function kernels imposing temporal smoothness within the latent representation. The approach was extended in [9] to impose stochastic differential equations for the evolution of latent factors. In [15], GPs were used for the modeling of continuous-time dynamic networks based on Poisson and Hawkes processes, respectively, including exogenous as well as endogenous features specified by a radial basis function prior.
+
+Latent Distance Models (LDMs), as proposed in [25], have recently been shown to outperform prominent GRL methods while utilizing very low dimensions in the static case [26, 27]. LDMs for temporal networks have mostly been studied in the discrete case [10], mainly considering diffusion dynamics in order to make predictions, as first studied in [28] and extended with popularity and activity effects [11]. While all these models express homophily (the tendency for similar nodes to be more likely to connect to each other than dissimilar ones) and transitivity ("a friend of a friend is a friend") in the dynamic case, they fail to account for continuous dynamics.
+
+Our work is inspired by these previous approaches for the modeling of dynamic complex networks. Specifically, we make use of the latent distance model formulation to account for homophily and transitivity, the Poisson process for the characterization of continuous-time dynamics, and a Gaussian Process prior based on the radial basis function kernel to account for temporal smoothness within the latent representation. Inspired by latent class models, we further impose a structured low-rank representation of nodes based on soft-assigning nodes to communities exhibiting similar temporal dynamics. Notably, we exploit how LDMs, as opposed to GNN approaches in general, can provide easily interpretable yet accurate network representations in ultra-low $D = 2$ dimensional spaces, facilitating accurate dynamic network visualization and interpretation.
+
+## 3 Proposed Approach
+
+Our main objective is to represent every node of a given network, $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , in a low-dimensional metric space, $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$ , in which the pairwise node proximities are characterized by distances in a continuous-time latent space (Objective 3.1). Since we address continuous-time dynamic networks, the interactions among nodes can vary through time, with new links appearing or disappearing at any moment. More precisely, we will presently consider undirected continuous-time networks:
+
+Definition 3.1. A continuous-time dynamic undirected graph on a time interval ${\mathcal{I}}_{T} \mathrel{\text{:=}} \left\lbrack {0, T}\right\rbrack$ is an ordered pair $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ where $\mathcal{V} = \{ 1,\ldots , N\}$ is a set of nodes and $\mathcal{E} \subseteq \left\{ {\{ i, j, t\} \in {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \mid 1 \leq }\right.$ $i < j \leq N\}$ is a set of events or edges.
+
+We will use the symbol $N$ to denote the number of nodes in the vertex set and ${\mathcal{E}}_{ij}\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq \mathcal{E}$ to indicate the set of edges between nodes $i$ and $j$ occurring on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$ . We note that the approach readily extends to directed and bipartite dynamic networks.
+
+### 3.1 Nonhomogeneous Poisson Point Processes
+
+Poisson Point Processes (PPPs) are a natural choice widely used to model the number of random events occurring in time or the locations of points in space. PPPs are parameterized by a quantity known as the rate or intensity, indicating the average density of points in the underlying space of the Poisson process. If the intensity depends on time or location, the point process is called a Nonhomogeneous PPP (Defn. 3.2), and it is typically adopted for applications in which the event points are not uniformly distributed [29].
+
+Definition 3.2. [Nonhomogeneous PPP] A counting process $\{ M\left( t\right) , t \geq 0\}$ is called a nonhomogeneous Poisson process with intensity function $\lambda \left( t\right) , t \geq 0$ if (i) $M\left( 0\right) = 0$ ,(ii) $M\left( t\right)$ has independent increments: i.e., $\left( {M\left( {t}_{1}\right) - M\left( {t}_{0}\right) }\right) ,\ldots ,\left( {M\left( {t}_{B}\right) - M\left( {t}_{B - 1}\right) }\right)$ are independent random variables for each $0 \leq {t}_{0} < \cdots < {t}_{B}$ , and (iii) $M\left( {t}_{u}\right) - M\left( {t}_{l}\right)$ is Poisson distributed with mean ${\int }_{{t}_{l}}^{{t}_{u}}\lambda \left( t\right) {dt}$ .
+
+In this paper, we consider continuous-time dynamic networks such that the events (or links/edges) among nodes can occur at any point in time. As we will examine in the following sections, these interactions do not necessarily exhibit any recurring characteristics; instead, they vary over time in many real networks. In this regard, we assume that the number of links, $M\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack$ , between a pair of nodes $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ follows a nonhomogeneous Poisson point process (NHPP) with intensity function ${\lambda }_{ij}\left( t\right)$ on the time interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ , and for a given network $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , the log-likelihood function can be written as
+
+$$
+\mathcal{L}\left( \Omega \right) \mathrel{\text{:=}} \log p\left( {\mathcal{G} \mid \Omega }\right) = \frac{1}{2}\mathop{\sum }\limits_{{\left( {i, j}\right) \in {\mathcal{V}}^{2}}}\left( {\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) - {\int }_{0}^{T}{\lambda }_{ij}\left( t\right) {dt}}\right) \tag{1}
+$$
+
+where ${\mathcal{E}}_{i, j} \subseteq \mathcal{E}\left\lbrack {0, T}\right\rbrack$ is the set of links of node pair $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ on the timeline ${\mathcal{I}}_{T} \mathrel{\text{:=}} \left\lbrack {0, T}\right\rbrack$ , and $\Omega = {\left\{ {\lambda }_{ij}\right\} }_{1 \leq i < j \leq N}$ indicates the set of intensity functions.
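+
+To make (1) concrete, the sketch below evaluates the contribution of a single dyad under an arbitrary intensity function. It approximates the integral numerically, whereas the proposed model later admits a closed-form integral; the quadrature here is only our simplification for illustration.
+
+```python
+import numpy as np
+
+def dyad_log_likelihood(event_times, intensity, T=1.0, n_grid=1000):
+    """NHPP log-likelihood of one node pair (inner term of Eq. (1)): the sum of
+    log-intensities at the events minus the integrated intensity over [0, T]."""
+    grid = np.linspace(0.0, T, n_grid)
+    integral = np.trapz(intensity(grid), grid)            # \int_0^T lambda(t) dt
+    log_events = np.sum(np.log(intensity(np.asarray(event_times))))
+    return log_events - integral
+
+# Toy example: a slowly oscillating intensity and a handful of event times.
+lam = lambda t: np.exp(1.0 + 0.5 * np.sin(2 * np.pi * t))
+ll = dyad_log_likelihood([0.1, 0.35, 0.4, 0.8], lam)
+```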
+
+### 3.2 Problem Formulation
+
+Without loss of generality, it can be assumed that the timeline starts from 0 and is bounded by $T \in {\mathbb{R}}^{ + }$ . Since the interactions among nodes can occur at any time point on ${\mathcal{I}}_{T} = \left\lbrack {0, T}\right\rbrack$ , we would like to identify an accurate continuous-time node representation $\{ r\left( {i, t}\right) {\} }_{\left( {i, t}\right) \in \mathcal{V} \times {\mathcal{I}}_{T}}$ defined using a low-dimensional latent space ${\mathbb{R}}^{D}\left( {D \ll N}\right)$ where $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow {\mathbb{R}}^{D}$ is a map indicating the embedding or representation of node $i \in \mathcal{V}$ at time point $t \in {\mathcal{I}}_{T}$ . We define our objective more formally as follows:
+
+Objective 3.1. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ be a continuous-time dynamic network and ${\lambda }^{ * } : {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \rightarrow \mathbb{R}$ be an unknown intensity function of a nonhomogeneous Poisson point process. For a given metric space $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$ , our purpose is to learn a function or representation $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow \mathrm{X}$ satisfying
+
+$$
+\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{d}_{\mathrm{X}}\left( {\mathbf{r}\left( {i, t}\right) ,\mathbf{r}\left( {j, t}\right) }\right) {dt} \approx \frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{\mathbf{\lambda }}^{ * }\left( {i, j, t}\right) {dt} \tag{2}
+$$
+
+for all $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ pairs, and for every interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$ .
+
+In this work, we consider the Euclidean metric on a $D$ -dimensional real vector space, $\mathrm{X} \mathrel{\text{:=}} {\mathbb{R}}^{D}$ and the embedding of node $i \in \mathcal{V}$ at time $t \in {\mathcal{I}}_{T}$ will be denoted by ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ .
+
+### 3.3 PIVEM: Piecewise-Velocity Model For Learning Continuous-time Embeddings
+
+We learn continuous-time node representations by employing the canonical exponential link-function defining the intensity function as
+
+$$
+{\lambda }_{ij}\left( t\right) \mathrel{\text{:=}} \exp \left( {{\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}}\right) \tag{3}
+$$
+
+where ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ and ${\beta }_{i} \in \mathbb{R}$ denote the embedding vector at time $t$ and the bias term of node $i \in \mathcal{V}$ , respectively. For given bias terms, it can be seen by Lemma 3.1 that this definition of the intensity function provides a guarantee for our goal given in Equation (2), so that a pair of nodes having a high number of interactions can be positioned close in the latent space. Although the squared Euclidean distance used in Equation (3) is not a metric, we employ it as a distance [27, 30].
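+
+A direct translation of Equation (3) reads as follows, assuming the embeddings are exposed through a callable `r(i, t)` (a hypothetical helper; the piecewise construction of Equation (4) below is one way to realize it).
+
+```python
+import numpy as np
+
+def intensity(i, j, t, r, beta):
+    """Eq. (3): lambda_ij(t) = exp(beta_i + beta_j - ||r_i(t) - r_j(t)||^2)."""
+    diff = r(i, t) - r(j, t)
+    return np.exp(beta[i] + beta[j] - diff @ diff)
+
+# Toy usage with a static placeholder embedding and zero biases.
+lam = intensity(0, 1, 0.3, r=lambda i, t: np.array([0.0, float(i)]), beta=np.zeros(2))
+```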
+
+Lemma 3.1. For given fixed bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ , the node embeddings, ${\left\{ {\mathbf{r}}_{i}\left( t\right) \right\} }_{i \in \mathcal{V}}$ , learned by optimizing the objective function given in Equation (1) satisfy
+
+$$
+\left| {\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) }\end{Vmatrix}{dt}}\right| \leq \sqrt{\left( {{\beta }_{i} + {\beta }_{j}}\right) - \log \left( {{p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }}\right) }\;\text{ for all }\left( {i, j}\right) \in {\mathcal{V}}^{2}
+$$
+
+where ${p}_{ij}$ is the probability of having more than ${m}_{ij}$ links between $i$ and $j$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ .
+
+Proof. Please see the appendix for the proof.
+
+Notably, constraining the approximation of the unknown intensity function by a metric space imposes the homophily property (i.e., similar nodes in the graph are placed close to each other in the embedding space). When a pair of nodes exhibits many interactions, it must have a high average intensity, so the term ${p}_{ij}{m}_{ij}/\left( {{t}_{u} - {t}_{l}}\right)$ in Lemma 3.1 converges to 1, and the average distance between the nodes is bounded by the sum of their bias terms. It can also be seen that the transitivity property holds up to some extent (i.e., if node $i$ is similar to $j$ and $j$ is similar to $k$ , then $i$ should also be similar to $k$ ) since we can bound the squared Euclidean distance [27, 31].
+
+Importantly, for a dynamic embedding, we would like the embeddings of a pair of nodes to be close to each other when they have many interactions during a particular time interval and far away from each other if they have few or no links. Note that the bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ are responsible for node-specific effects such as degree heterogeneity [31, 32], and they provide additional flexibility to the model by acting as scaling factors for the corresponding nodes so that, for instance, a hub node might have a high number of simultaneous interactions without getting close to the others in the latent space.
+
+Since our primary purpose is to learn continuous node representations in a latent space, we define the representation of node $i \in \mathcal{V}$ at time $t$ based on a linear model by ${\mathbf{r}}_{i}\left( t\right) \mathrel{\text{:=}} {\mathbf{x}}_{i}^{\left( 0\right) } + {\mathbf{v}}_{i}t$ . Here, ${\mathbf{x}}_{i}^{\left( 0\right) }$ can be considered as the initial position and ${\mathbf{v}}_{i}$ the velocity of the corresponding node. However, the linear model provides only minimal capacity for tracking the nodes and modeling their representations. Therefore, we reinterpret the given timeline ${\mathcal{I}}_{T} \mathrel{\text{:=}} \left\lbrack {0, T}\right\rbrack$ by dividing it into $B$ equally-sized bins, $\left\lbrack {{t}_{b - 1},{t}_{b}}\right) ,\left( {1 \leq b \leq B}\right)$ such that $\left\lbrack {0, T}\right\rbrack = \left\lbrack {0,{t}_{1}}\right) \cup \cdots \cup \left\lbrack {{t}_{B - 1},{t}_{B}}\right\rbrack$ where ${t}_{0} \mathrel{\text{:=}} 0$ and ${t}_{B} \mathrel{\text{:=}} T$ . By applying the linear model for each subinterval, we obtain a piecewise linear approximation of general intensity functions, strengthening the model's capacity. As a result, we can write the position of node $i$ at time $t \in {\mathcal{I}}_{T}$ as follows:
+
+$$
+{\mathbf{r}}_{i}\left( t\right) \mathrel{\text{:=}} {\mathbf{x}}_{i}^{\left( 0\right) } + {\Delta }_{B}{\mathbf{v}}_{i}^{\left( 1\right) } + {\Delta }_{B}{\mathbf{v}}_{i}^{\left( 2\right) } + \cdots + \left( {t{\;\operatorname{mod}\;\left( {\Delta }_{B}\right) }}\right) {\mathbf{v}}_{i}^{\left( \left\lfloor t/{\Delta }_{B}\right\rfloor + 1\right) } \tag{4}
+$$
+
+where ${\Delta }_{B}$ indicates the bin width, $T/B$ , and $\operatorname{mod}\left( \cdot \right)$ is the modulo operation used to compute the remaining time. Note that the piecewise interpretation of the timeline allows us to better track the paths of the nodes in the embedding space, and it can be seen from Theorem 3.2 that we can obtain more accurate trails by increasing the number of bins.
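+
+A minimal sketch of Equation (4) is shown below, with an initial-position matrix `x0` of shape (N, D) and a velocity tensor `v` of shape (B, N, D); the variable names are ours.
+
+```python
+import numpy as np
+
+def position(i, t, x0, v, T=1.0):
+    """Eq. (4): piecewise-linear embedding of node i at time t in [0, T]."""
+    B = v.shape[0]
+    delta = T / B                                 # bin width Delta_B
+    b = min(int(t // delta), B - 1)               # current bin (t = T falls in the last bin)
+    r = x0[i] + delta * v[:b, i].sum(axis=0)      # full bins traversed so far
+    return r + (t - b * delta) * v[b, i]          # remaining time in the current bin
+
+# Example: N = 3 nodes, D = 2 dimensions, B = 4 bins.
+rng = np.random.default_rng(0)
+x0, v = rng.normal(size=(3, 2)), rng.normal(size=(4, 3, 2))
+r_1 = position(1, 0.6, x0, v)
+```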
+
+Theorem 3.2. Let $\mathbf{f}\left( t\right) : \left\lbrack {0, T}\right\rbrack \rightarrow {\mathbb{R}}^{D}$ be a continuous embedding of a node. For any given $\epsilon > 0$ , there exists a continuous, piecewise-linear node embedding, $\mathbf{r}\left( t\right)$ , satisfying $\parallel \mathbf{f}\left( t\right) - \mathbf{r}\left( t\right) {\parallel }_{2} < \epsilon$ for all $t \in \left\lbrack {0, T}\right\rbrack$ where $\mathbf{r}\left( t\right) \mathrel{\text{:=}} {\mathbf{r}}^{\left( b\right) }\left( t\right)$ for all $\left( {b - 1}\right) {\Delta }_{B} \leq t < b{\Delta }_{B},\mathbf{r}\left( t\right) \mathrel{\text{:=}} {\mathbf{r}}^{\left( B\right) }\left( t\right)$ for $t = T$ and ${\Delta }_{B} = T/B$ for some $B \in {\mathbb{N}}^{ + }$ .
+
+Proof. Please see the appendix for the proof.
+
+Prior probability. In order to control the smoothness of the motion in the latent space, we employ a Gaussian Process (GP) [33] prior over the initial positions ${\mathbf{x}}^{\left( 0\right) } \in {\mathbb{R}}^{N \times D}$ and velocity vectors $\mathbf{v} \in {\mathbb{R}}^{B \times N \times D}$ . Hence, we suppose that $\operatorname{vect}\left( {\mathbf{x}}^{\left( 0\right) }\right) \oplus \operatorname{vect}\left( \mathbf{v}\right) \sim \mathcal{N}\left( {\mathbf{0},\mathbf{\Sigma }}\right)$ where $\mathbf{\Sigma } \mathrel{\text{:=}} {\lambda }^{2}\left( {{\sigma }_{\mathbf{\Sigma }}^{2}\mathbf{I} + \mathbf{K}}\right)$ is the covariance matrix with a scaling factor $\lambda \in \mathbb{R}$ . We use ${\sigma }_{\mathbf{\Sigma }} \in \mathbb{R}$ to denote the noise of the covariance, and $\operatorname{vect}\left( \mathbf{z}\right)$ is the vectorization operator stacking the columns to form a single vector. To reduce the number of parameters of the prior and enable scalable inference, we define $\mathbf{K}$ as a Kronecker product of three matrices, $\mathbf{K} \mathrel{\text{:=}} \mathbf{B} \otimes \mathbf{C} \otimes \mathbf{D}$ , respectively accounting for temporal-, node-, and dimension-specific covariance structures. Specifically, $\mathbf{B} \mathrel{\text{:=}} \left\lbrack {c}_{{\mathbf{x}}^{0}}\right\rbrack \oplus {\left\lbrack \exp \left( -{\left( {c}_{b} - {c}_{\widetilde{b}}\right) }^{2}/2{\sigma }_{\mathbf{B}}^{2}\right) \right\rbrack }_{1 \leq b,\widetilde{b} \leq B}$ is a $\left( {B + 1}\right) \times \left( {B + 1}\right)$ matrix intended to capture the smoothness of the velocities across time bins, where ${c}_{b} = \frac{{t}_{b - 1} + {t}_{b}}{2}$ is the center of the corresponding bin; the matrix is constructed by combining the radial basis function (RBF) kernel with a scalar term ${c}_{{\mathbf{x}}^{0}}$ corresponding to the initial position, which is decoupled from the structure of the velocities. The node-specific matrix, $\mathbf{C} \in {\mathbb{R}}^{N \times N}$ , is constructed as a low-rank product $\mathbf{C} \mathrel{\text{:=}} {\mathbf{Q}\mathbf{Q}}^{\top }$ where the row sums of $\mathbf{Q} \in {\mathbb{R}}^{N \times k}$ equal $1\left( {k \ll N}\right)$ , and it aims to extract covariation patterns in the motion of the nodes. Finally, we simply set the dimensionality matrix to the identity, $\mathbf{D} \mathrel{\text{:=}} \mathbf{I} \in {\mathbb{R}}^{D \times D}$ , in order to have uncorrelated dimensions.
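+
+For small toy sizes, the prior covariance can be assembled explicitly as in the sketch below; note that the scalable inference discussed in Subsection 3.4 never materializes the full Kronecker product, so this is purely illustrative and the hyper-parameter values are placeholders.
+
+```python
+import numpy as np
+from scipy.linalg import block_diag
+
+def prior_covariance(bin_centers, Q, D, c_x0=1.0, sigma_B=0.1, sigma_S=0.1, lam=1.0):
+    """Sigma = lam^2 (sigma_S^2 I + B (x) C (x) D) with B an RBF kernel over bin
+    centers plus a decoupled scalar for the initial position, C = Q Q^T low rank,
+    and D the identity over the embedding dimensions."""
+    c = np.asarray(bin_centers)
+    rbf = np.exp(-(c[:, None] - c[None, :]) ** 2 / (2 * sigma_B ** 2))
+    B = block_diag(np.array([[c_x0]]), rbf)        # (B+1) x (B+1)
+    C = Q @ Q.T                                    # N x N, rank k
+    D_mat = np.eye(D)                              # uncorrelated dimensions
+    K = np.kron(np.kron(B, C), D_mat)
+    return lam ** 2 * (sigma_S ** 2 * np.eye(K.shape[0]) + K)
+
+# Toy sizes: B = 3 bins, N = 4 nodes, k = 2 communities, D = 2 dimensions.
+rng = np.random.default_rng(0)
+Q = rng.random((4, 2)); Q = Q / Q.sum(axis=1, keepdims=True)   # row sums equal 1
+Sigma = prior_covariance(bin_centers=[1/6, 3/6, 5/6], Q=Q, D=2)
+```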
+
+To sum up, we can express our objective relying on the piecewise velocities with the prior as follows:
+
+$$
+\widehat{\Omega } = \underset{\Omega }{\arg \max }\frac{1}{2}\mathop{\sum }\limits_{{\left( {i, j}\right) \in {\mathcal{V}}^{2}}}\left( {\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) - {\int }_{0}^{T}{\lambda }_{ij}\left( t\right) {dt}}\right) + \log \mathcal{N}\left( {\left\lbrack \begin{matrix} {\mathbf{x}}^{\left( 0\right) } \\ \mathbf{v} \end{matrix}\right\rbrack ;\mathbf{0},\mathbf{\Sigma }}\right) \tag{5}
+$$
+
+where $\Omega = \left\{ {\mathbf{\beta },{\mathbf{x}}^{\left( 0\right) },\mathbf{v},{\sigma }_{\mathbf{\Sigma }},{\sigma }_{\mathbf{B}},{c}_{{\mathbf{x}}^{0}},\mathbf{Q}}\right\}$ is the set of hyper-parameters, and ${\lambda }_{ij}\left( t\right)$ is the intensity function as defined in Equation (3) based on the node embeddings, ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ .
+
+### 3.4 Optimization
+
+Our objective given in Equation (5) is not a convex function, so the learning strategy we follow is of great significance, both to escape local minima and for the quality of the representations. We start by randomly initializing the model’s hyper-parameters from $\left\lbrack {-1,1}\right\rbrack$ , except for the velocity tensor, which is set to 0 at the beginning. We adopt a sequential strategy in learning these parameters. In other words, we first optimize the initial positions and bias terms together, $\left\{ {{\mathbf{x}}^{\left( 0\right) },\mathbf{\beta }}\right\}$ , for a given number of epochs; then, we include the velocity tensor, $\{ \mathbf{v}\}$ , in the optimization process and repeat the training for the same number of epochs. Finally, we add the prior parameters and learn all model hyper-parameters together. We employ the Adam optimizer [34] with a learning rate of 0.1.
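+
+An illustrative sketch of this staged schedule in PyTorch is given below; the objective is a dummy placeholder standing in for the (penalized) negative of Equation (5), and only the three-stage structure, the zero-initialized velocities, and the Adam learning rate of 0.1 reflect the description above.
+
+```python
+import torch
+
+N, D, B = 50, 2, 10
+x0 = torch.empty(N, D).uniform_(-1, 1).requires_grad_()     # initial positions
+beta = torch.empty(N).uniform_(-1, 1).requires_grad_()      # node biases
+v = torch.zeros(B, N, D, requires_grad=True)                # velocities start at 0
+prior_params = [torch.empty(1).uniform_(-1, 1).requires_grad_() for _ in range(3)]
+
+def loss_fn():
+    # Placeholder objective; the real model minimizes the negative of Eq. (5).
+    return (x0.pow(2).sum() + beta.pow(2).sum() + v.pow(2).sum()
+            + sum(p.pow(2).sum() for p in prior_params))
+
+stages = [[x0, beta], [x0, beta, v], [x0, beta, v] + prior_params]
+for params in stages:                                       # sequential strategy
+    opt = torch.optim.Adam(params, lr=0.1)
+    for _ in range(100):                                    # epochs per stage
+        opt.zero_grad()
+        loss = loss_fn()
+        loss.backward()
+        opt.step()
+```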
+
+Computational issues and complexity. Note that we need to evaluate the log-intensity term in Equation (5) for each $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ and event time ${e}_{ij} \in {\mathcal{E}}_{ij}$ . Therefore, the computational cost required for the whole network is bounded by $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}\left| \mathcal{E}\right| }\right)$ . However, we can alleviate the computational cost by pre-computing certain coefficients at the beginning of the optimization process so that the complexity can be reduced to $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}B}\right)$ . We also have an explicit formula for the computation of the integral term since we utilize the squared Euclidean distance so that it can be computed in at most $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{2}\right)$ operations. Instead of optimizing the whole network at once, we apply a batching strategy over the set of nodes in order to reduce the memory requirements. As a result, we sample $\mathcal{S}$ nodes for each epoch. Hence, the overall complexity for the log-likelihood function is $\mathcal{O}\left( {{\mathcal{S}}^{2}B\mathcal{I}}\right)$ where $\mathcal{I}$ is the number of epochs and $\mathcal{S} \ll \left| \mathcal{V}\right|$ . Similarly, the prior can be computed in at most $\mathcal{O}\left( {{B}^{3}{D}^{3}{K}^{2}\mathcal{S}}\right)$ operations by using various algebraic properties such as Woodbury matrix identity and Matrix Determinant lemma [35]. To sum up, the complexity of the proposed approach is $\mathcal{O}\left( {B{\mathcal{S}}^{2}\mathcal{I} + {B}^{3}{D}^{3}{K}^{2}\mathcal{S}\mathcal{I}}\right)$ (Please see the appendix for the derivations and other details).
+
+## 4 Experiments
+
+In this section, we extensively evaluate the performance of the proposed PIecewise-VElocity Model against well-known baselines in challenging tasks over various datasets of different sizes and types. We consider all networks as undirected, and the event times of links are scaled to the interval $\left\lbrack {0,1}\right\rbrack$ for the consistency of the experiments. We use the finest granularity level of the given input timestamps, such as seconds or milliseconds. We provide a brief summary of the networks below; more details and various statistics are reported in Table 4 in the appendix. For all the methods, we learn node embeddings in a two-dimensional space $\left( {D = 2}\right)$ since one of the objectives of this work is to produce dynamic node embeddings facilitating human insight into a complex network.
+
+Experimental Setup. We first split the networks into two sets, such that the events occurring in the last 10% of the timeline are held out for prediction. Then, we randomly choose 10% of the node pairs among all possible dyads in the network for the graph completion task, and we ensure that each node in the residual network contains at least one event, keeping the number of nodes fixed. If a pair of nodes only contains events in the prediction set and these nodes do not have any other links during the training time, they are removed from the networks.
+
+For conducting the experiments, we generate the labeled dataset of links as follows: for the positive samples, we construct a small interval of length $2 \times {10}^{-3}$ around each event time (i.e., $\left\lbrack {e - {10}^{-3}, e + {10}^{-3}}\right\rbrack$ where $e$ is an event time). We randomly sample an equal number of time points and corresponding node pairs to form negative instances. If a sampled time point is not located inside the interval of a positive sample, we follow the same strategy to build an interval around it, and it is considered a negative instance. Otherwise, we sample another time point and dyad. Note that some networks contain a very high number of links, which leads to computational problems. Therefore, we subsample ${10}^{4}$ positive and negative instances if a network contains more than this.
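+
+A sketch of this labeling procedure is given below; the dictionary-based event representation and the exact rejection rule for negatives are our assumptions.
+
+```python
+import numpy as np
+
+def build_instances(events, n_nodes, T=1.0, half_width=1e-3, max_n=10_000, rng=None):
+    """Positive intervals around each event and an equal number of negative
+    intervals at random (dyad, time) locations; our reading is that a negative
+    time must not fall inside a positive interval of the sampled dyad."""
+    rng = rng or np.random.default_rng(0)
+    pos = [(i, j, e - half_width, e + half_width)
+           for (i, j), times in events.items() for e in times]
+    if len(pos) > max_n:
+        pos = [pos[k] for k in rng.choice(len(pos), max_n, replace=False)]
+    neg = []
+    while len(neg) < len(pos):
+        i, j = sorted(map(int, rng.choice(n_nodes, size=2, replace=False)))
+        t = rng.uniform(0.0, T)
+        if all(abs(t - e) > half_width for e in events.get((i, j), [])):
+            neg.append((i, j, t - half_width, t + half_width))
+    return pos, neg
+
+# `events` maps each dyad to its event times, here for a toy network of 5 nodes.
+pos, neg = build_instances({(0, 1): [0.12, 0.55], (2, 3): [0.4]}, n_nodes=5)
+```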
+
+Table 1: The performance evaluation for the network reconstruction experiment over various datasets.
+
+| Model | Synthetic($\pi$) ROC | Synthetic($\pi$) PR | Synthetic($\mu$) ROC | Synthetic($\mu$) PR | College ROC | College PR | Contacts ROC | Contacts PR | Email ROC | Email PR | Forum ROC | Forum PR | Hypertext ROC | Hypertext PR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| LDM | .563 | .539 | .669 | .642 | .951 | .944 | .860 | .835 | .954 | .948 | .909 | .897 | .818 | .797 |
+| NODE2VEC | .519 | .507 | .503 | .509 | .711 | .655 | .812 | .756 | .853 | .828 | .677 | .619 | .696 | .648 |
+| CTDNE | .518 | .522 | .499 | .505 | .689 | .656 | .599 | .584 | .630 | .645 | .643 | .608 | .540 | .545 |
+| PIVEM | .762 | .713 | .905 | .869 | .948 | .948 | .938 | .938 | .978 | .977 | .907 | .902 | .830 | .823 |
+
+Synthetic datasets. We generate two artificial networks in order to evaluate the behavior of the models in controlled experimental settings. (i) Synthetic $\left( \pi \right)$ is sampled from the prior distribution stated in Subsection 3.2. The hyper-parameters $\mathbf{\beta }$ , $K$ , and $B$ are set to 0, 20, and 100, respectively. (ii) Synthetic $\left( \mu \right)$ is constructed based on temporal block structures. The timeline is divided into 10 sub-intervals, and the nodes are randomly split into 20 groups for each interval. The links within each group are sampled from the Poisson distribution with a constant intensity of 5.
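+
+A sketch of how Synthetic $\left( \mu \right)$ can be generated is shown below; we read the constant intensity of 5 as the rate per unit time for each within-group dyad, which is an assumption on our part, as are the number of nodes and variable names.
+
+```python
+import numpy as np
+
+def synthetic_mu(N=200, n_intervals=10, n_groups=20, rate=5.0, rng=None):
+    """Temporal block structure: per sub-interval, nodes are shuffled into groups
+    and within-group dyads receive events from a homogeneous Poisson process
+    with intensity `rate` per unit time (our reading of the construction)."""
+    rng = rng or np.random.default_rng(0)
+    width = 1.0 / n_intervals
+    events = []                                       # list of (i, j, t) triples
+    for b in range(n_intervals):
+        labels = rng.permutation(np.arange(N) % n_groups)
+        for i in range(N):
+            for j in range(i + 1, N):
+                if labels[i] != labels[j]:
+                    continue
+                n_ev = rng.poisson(rate * width)
+                for t in rng.uniform(b * width, (b + 1) * width, size=n_ev):
+                    events.append((i, j, t))
+    return events
+
+events = synthetic_mu()
+```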
+
+Real Networks. The (iii) Hypertext network [36] was built on the radio badge records showing the interactions of the conference attendees for 2.5 days, and each event time indicates 20 seconds of active contact. Similarly, (iv) the Contacts network [37] was generated concerning the interactions of the individuals in an office environment. (v) Forum [38] is comprised of the activity data of university students on an online social forum system. (vi) CollegeMsg [39] indicates the private messages among the students on an online social platform. Finally, (vii) Eu-Email [40] was constructed based on the exchanged e-mail information among the members of European research institutions.
+
+Baselines. We compare the performance of our method with three baselines. We include the LDM with Poisson rate and node-specific biases [31, 41] since it is a static method with the closest formulation to ours. NODE2VEC (or N2V) [7], a very well-known GRL method, relies on the explicit generation of random walks and learns node embeddings based on node proximities within these walks. In our experiments, we tune its parameters (p, q) over $\{ {0.25},{0.5},1,2,4\}$ . Since it can also run over weighted networks, we additionally constructed a weighted graph based on the number of links through time and report the best score over both versions of the networks. CTDNE [42] is a dynamic node embedding approach performing temporal random walks over the network, but it is unable to produce instantaneous node representations and produces embeddings only for a given time. Therefore, we utilized the last time of the training set to obtain the representations. We provide the other details about the parameter settings of the baseline methods in the appendix.
+
+For our method, we set the parameter $K = {25}$ and the bin count $B = {100}$ to have enough capacity to track node interactions. For the regularization term $\left( \lambda \right)$ of the prior, we first mask ${20}\%$ of the dyads in the optimization of Equation (5). Furthermore, we train the model starting with $\lambda = {10}^{6}$ and then reduce it to one-tenth after every 100 epochs. The same procedure is repeated until $\lambda = {10}^{-6}$ , and we choose the $\lambda$ value minimizing the log-likelihood loss on the masked pairs. The final embeddings are then obtained by performing this annealing strategy, without any mask, down to this $\lambda$ value. We repeat this procedure 5 times and consider the best-performing run for learning the embeddings. The relative standard deviation of the experiments is always less than 0.5, and Figure 1a shows an illustrative example of tuning $\lambda$ over the Synthetic $\left( \pi \right)$ dataset with 5 random runs.
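+
+The annealing schedule can be sketched as follows, with a dummy placeholder standing in for the actual training of Equation (5) on the masked network; selecting by a validation loss on the masked dyads is our reading of the selection criterion.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def train_and_validate(lam):
+    """Placeholder for optimizing Eq. (5) with prior weight `lam` for 100 epochs
+    and returning the validation loss on the 20% masked dyads."""
+    return abs(np.log10(lam) - 1.0) + rng.normal(scale=0.01)  # dummy loss curve
+
+# Annealing: start at 1e6, divide by 10 every 100 epochs down to 1e-6, then keep
+# the lambda value that performs best on the masked (held-out) dyads.
+lambdas = [10.0 ** k for k in range(6, -7, -1)]
+losses = {lam: train_and_validate(lam) for lam in lambdas}
+best_lam = min(losses, key=losses.get)
+```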
+
+For the performance comparison of the methods, we provide the Area Under Curve (AUC) scores for the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves [43]. As the similarity measure of a node pair, we compute the intensity of the given instance for LDM and PIVEM. Since NODE2VEC and CTDNE rely on the SkipGram architecture [44], we use cosine similarity for them.
+
+Network Reconstruction. Our goal is to see how accurately a model can capture the interaction patterns among nodes and generate embeddings exhibiting their temporal relationships in a latent space. In this regard, we train the models on the residual network and generate sample sets as described previously. The performance of the models is reported in Table 1. Comparing the performance of PIVEM against the baselines, we observe favorable results across all networks, highlighting the importance and ability of PIVEM to account for and detect structure in a continuous time manner.
+
+Table 2: The performance evaluation for the network completion experiment over various datasets.
+
+| Model | Synthetic($\pi$) ROC | Synthetic($\pi$) PR | Synthetic($\mu$) ROC | Synthetic($\mu$) PR | College ROC | College PR | Contacts ROC | Contacts PR | Email ROC | Email PR | Forum ROC | Forum PR | Hypertext ROC | Hypertext PR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| LDM | .535 | .529 | .646 | .631 | .931 | .926 | .836 | .799 | .948 | .942 | .863 | .858 | .761 | .738 |
+| NODE2VEC | .519 | .511 | .747 | .677 | .685 | .637 | .787 | .744 | .818 | .777 | .635 | .592 | .596 | .588 |
+| CTDNE | .522 | .527 | .499 | .503 | .647 | .599 | .658 | .656 | .571 | .593 | .617 | .592 | .464 | .485 |
+| PIVEM | .750 | .696 | .874 | .851 | .935 | .934 | .873 | .864 | .951 | .953 | .879 | .875 | .770 | .712 |
+
+Table 3: The performance evaluation for the link prediction experiment over various datasets.
+
+| Model | Synthetic($\pi$) ROC | Synthetic($\pi$) PR | Synthetic($\mu$) ROC | Synthetic($\mu$) PR | College ROC | College PR | Contacts ROC | Contacts PR | Email ROC | Email PR | Forum ROC | Forum PR | Hypertext ROC | Hypertext PR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| LDM | .562 | .539 | .498 | .642 | .951 | .944 | .860 | .835 | .954 | .948 | .909 | .897 | .819 | .797 |
+| NODE2VEC | .518 | .506 | .498 | .502 | .705 | .676 | .783 | .716 | .825 | .807 | .635 | .605 | .748 | .739 |
+| CTDNE | .514 | .526 | .457 | .481 | .666 | .643 | .632 | .623 | .629 | .629 | .621 | .599 | .508 | .532 |
+| PIVEM | .716 | .689 | .474 | .485 | .891 | .887 | .876 | .884 | .964 | .964 | .894 | .895 | .756 | .767 |
+
+Network Completion. The network completion experiment is a relatively more challenging task than the reconstruction. Since we hide 10% of the network, the dyads containing events are also viewed as non-link pairs, and the temporal models should place these nodes in distant locations of the embedding space. However, it might be possible to predict these events accurately if the network links have temporal triangle patterns through certain time intervals. In Table 2, we report the AUC-ROC and PR-AUC scores for the network completion experiment. Once more, PIVEM outperforms the baselines (in most cases significantly). We again discovered evidence supporting the necessity for modeling and tracking temporal networks with time-evolving embedding representations.
+
+Future Prediction. Finally, we examine the performance of the models in the future prediction task. Here, the models are asked to forecast the last ${10}\%$ of the timeline. For PIVEM, the similarity between nodes is obtained by calculating the intensity function over the timeline of the training set (i.e., from 0 to 0.9), and we keep our previously described strategies for the baselines since they generate embeddings only for the last training time. Table 3 presents the performances of the models. It is noteworthy that while PIVEM outperforms the baselines significantly on the Synthetic $\left( \pi \right)$ network, it does not show promising results on Synthetic $\left( \mu \right)$ . Since the first network is compatible with our model, it successfully learns the dominant link pattern of the network. However, the second network conflicts with our model: it forms a completely different structure in every 0.1-long sub-interval of the timeline. For the real datasets, we observe mostly on-par results, especially with LDM. Some real networks contain link patterns that become "static" with respect to the future prediction task.
+
+We have previously described how we set the prior coefficient, $\lambda$ , and now we will examine the influence of the other hyperparameters over the $\operatorname{Synthetic}\left( \pi \right)$ dataset for network reconstruction.
+
+Influence of dimension size (D). We report the AUC-ROC and AUC-PR scores in Figure 1b. When we increase the dimension size, we observe a steady increase in performance. This is not a surprising result because we also increase the model's capacity with the dimension. However, the two-dimensional space still provides comparable performance in the experiments, facilitating human insight into networks' complex, evolving structures.
+
+Figure 2: Comparisons of the ground truth and learned representations in two-dimensional space.
+
+Influence of bin count (B). Figure 1c demonstrates the effect of the number of bins for the network reconstruction task. We generated the Synthetic $\left( \pi \right)$ network using 100 bins, and it can be seen that the performance stabilizes around ${2}^{6}$ bins, which indicates that PIVEM reaches sufficient capacity to model the interactions among nodes.
+
+Latent Embedding Animation. Although many GRL methods show high performance in downstream tasks, they generally require high-dimensional spaces, so a post-processing step has to be applied in order to visualize the node representations in a low-dimensional space. However, such processing distorts the embeddings, which can lead a practitioner to draw inaccurate conclusions about the data.
+
+As we have seen in the experimental evaluations, our proposed approach successfully learns embeddings in the two-dimensional space, and it also produces continuous-time representations. Therefore, it offers the ability to animate how the network evolves through time and can play a crucial role in grasping the underlying characteristics of the networks. As an illustrative example, Figure 2 compares the ground truth representations of Synthetic $\left( \pi \right)$ with the learned ones. The synthetic network consists of small communities of 5 nodes, and each color indicates one of these groups. Although the problem does not have a unique solution, it can be seen that our model successfully captures the clustering patterns in the network. We refer the reader to the supplementary materials for the full animation.
+
+## 5 Conclusion and Limitations
+
+In this paper, we have proposed a novel continuous-time dynamic network embedding approach, namely the Piecewise-Velocity Model (PIVEM). Its performance has been examined in various experiments, such as network reconstruction and completion tasks, over various networks and against well-known baselines. We demonstrated that it can accurately embed the nodes into a two-dimensional space. Therefore, it can be directly utilized to animate the learned node embeddings, and it can be beneficial for extracting the networks' underlying characteristics and foreseeing how they will evolve through time. We also showed that the model scales to large networks.
+
+Although our model successfully learns continuous-time representations, it is limited in the temporal patterns it can capture through the GP structure. Therefore, we plan to employ kernels other than the RBF, such as periodic kernels, in the prior. The optimization strategy of the proposed method might also be improved to better escape local minima. As a possible future direction, the algorithm can also be adapted to other graph types, such as directed and multi-layer networks.
+
+## References
+
+[1] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45(2): 167-256, 2003. 1
+
+[2] Bomin Kim, Kevin H Lee, Lingzhou Xue, and Xiaoyue Niu. A review of dynamic network models with latent variables. Statistics surveys, 12:105, 2018. 2
+
+[3] Katsuhiko Ishiguro, Tomoharu Iwata, Naonori Ueda, and Joshua Tenenbaum. Dynamic infinite relational model for time-varying relational data analysis. NeurIPS, 23, 2010. 2
+
+[4] Tue Herlau, Morten Mørup, and Mikkel Schmidt. Modeling temporal evolution and multiscale structure in networks. In International Conference on Machine Learning, pages 960-968. PMLR, 2013. 2, 3
+
+[5] Creighton Heaukulani and Zoubin Ghahramani. Dynamic probabilistic models for latent feature propagation in social networks. In International Conference on Machine Learning, pages 275-283. PMLR, 2013. 2
+
+[6] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In KDD, pages 701-710, 2014.
+
+[7] Aditya Grover and Jure Leskovec. Node2Vec: Scalable feature learning for networks. In KDD, pages 855-864, 2016. 3, 7, 13
+
+[8] Daniele Durante and David Dunson. Bayesian logistic gaussian process models for dynamic networks. In Artificial Intelligence and Statistics, pages 194-201. PMLR, 2014. 3
+
+[9] Daniele Durante and David B Dunson. Locally adaptive dynamic networks. The Annals of Applied Statistics, 10(4):2203-2232, 2016. 3
+
+[10] Bomin Kim, Kevin Lee, Lingzhou Xue, and Xiaoyue Niu. A review of dynamic network models with latent variables, 2017. URL https://arxiv.org/abs/1711.10421.
+
+[11] Daniel Sewell and Yuguo Chen. Latent space models for dynamic networks. JASA, 110, 2015.
+
+[12] Charles Blundell, Jeff Beck, and Katherine A Heller. Modelling reciprocating relationships with hawkes processes. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger, editors, NeurIPS, volume 25. Curran Associates, Inc., 2012. 2
+
+[13] Makan Arastuie, Subhadeep Paul, and Kevin S. Xu. CHIP: A Hawkes process model for continuous-time networks with scalable and consistent estimation, 2019. URL https://arxiv.org/abs/1908.06940.
+
+[14] Sylvain Delattre, Nicolas Fournier, and Marc Hoffmann. Hawkes processes on large networks. The Annals of Applied Probability, 26(1):216 - 261, 2016. 2
+
+[15] Xuhui Fan, Bin Li, Feng Zhou, and Scott Sisson. Continuous-time edge modelling using non-parametric point processes. NeurIPS, 34:2319-2330, 2021. 2, 3
+
+[16] Rakshit S. Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. Dyrep: Learning representations over dynamic graphs. In ICLR, 2019. 2
+
+[17] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael M. Bronstein. Temporal graph networks for deep learning on dynamic graphs. CoRR, abs/2006.10637, 2020. URL https://arxiv.org/abs/2006.10637.
+
+[18] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social networks, 5(2):109-137, 1983. 2
+
+[19] Krzysztof Nowicki and Tom A. B. Snijders. Estimation and prediction for stochastic blockstructures. JASA, 96(455):1077-1087, 2001. 2
+
+[20] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83-90, 1971. 2
+
+[21] Alan G. Hawkes. Point spectra of some mutually exciting point processes. J. R. Stat. Soc, 33 (3):438-443, 1971. 2
+
+[22] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In AAAI, page 381-388, 2006. 2
+
+[23] Creighton Heaukulani and Zoubin Ghahramani. Dynamic probabilistic models for latent feature propagation in social networks. In Sanjoy Dasgupta and David McAllester, editors, PMLR, volume 28, pages 275-283, 2013. 3
+
+[24] Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. Dyrep: Learning representations over dynamic graphs. In ICLR, 2019. 3
+
+[25] Peter D Hoff, Adrian E Raftery, and Mark S Handcock. Latent space approaches to social network analysis. JASA, 97(460):1090-1098, 2002. 3
+
+[26] Nikolaos Nakis, Abdulkadir Çelikkanat, Sune Lehmann Jørgensen, and Morten Mørup. A hierarchical block distance model for ultra low-dimensional graph representations, 2022. 3
+
+[27] Nikolaos Nakis, Abdulkadir Çelikkanat, and Morten Mørup. HM-LDM: A hybrid-membership latent distance model, 2022. 3, 4, 5
+
+[28] Purnamrita Sarkar and Andrew Moore. Dynamic social network analysis using latent space models. In Y. Weiss, B. Schölkopf, and J. Platt, editors, NeurIPS, volume 18, 2005. 3
+
+[29] Roy L. Streit. The Poisson Point Process, pages 11-55. Springer US, Boston, MA, 2010. 4
+
+[30] Maximillian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In NIPS, volume 30, 2017. 4
+
+[31] Pavel N. Krivitsky, Mark S. Handcock, Adrian E. Raftery, and Peter D. Hoff. Representing degree distributions, clustering, and homophily in social networks with latent cluster random effects models. Social Networks, 31(3):204-213, 2009. 5, 7, 13
+
+[32] Nikolaos Nakis, Abdulkadir Çelikkanat, Sune Lehmann Jørgensen, and Morten Mørup. A hierarchical block distance model for ultra low-dimensional graph representations, 2022. 5
+
+[33] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005. 5
+
+[34] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 6, 13
+
+[35] C.C. Aggarwal. Linear Algebra and Optimization for Machine Learning: A Textbook. Springer International Publishing, 2020. 6, 15
+
+[36] Lorenzo Isella, Juliette Stehlé, Alain Barrat, Ciro Cattuto, Jean-François Pinton, and Wouter Van den Broeck. What's in a crowd? analysis of face-to-face behavioral networks. Journal of Theoretical Biology, 271(1):166-180, 2011. 7, 13
+
+[37] Mathieu Génois and Alain Barrat. Can co-location be used as a proxy for face-to-face contacts? EPJ Data Science, 7(1):11, May 2018. 7, 13
+
+[38] Tore Opsahl. Triadic closure in two-mode networks: Redefining the global and local clustering coefficients. Social Networks, 35, 2010. 7, 13
+
+[39] Pietro Panzarasa, Tore Opsahl, and Kathleen M. Carley. Patterns and dynamics of users' behavior and interaction: Network analysis of an online community. Journal of the American Society for Information Science and Technology, 60(5):911-932, 2009. 7, 13
+
+[40] Ashwin Paranjape, Austin R. Benson, and Jure Leskovec. Motifs in temporal networks. In WSDM, pages 601-610, 2017. 7, 13
+
+[41] Peter D Hoff. Bilinear mixed-effects models for dyadic data. JASA, 100(469):286-295, 2005. 7, 13
+
+[42] Giang Hoang Nguyen, John Boaz Lee, Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, and Sungchul Kim. Continuous-time dynamic network embeddings. In The Web Conference, pages 969-976, 2018. 7, 13
+
+[43] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011. 7
+
+[44] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111-3119, 2013. 7, 13
+
+## A Appendix
+
+### A.1 Experiments
+
+We consider all networks used in the experiments as undirected, and the event times of links are scaled to the interval $\left\lbrack {0,1}\right\rbrack$ for consistency across experiments. We use the finest resolution level of the given input timestamps, such as seconds or milliseconds. We provide a brief summary of the networks below, and various statistics are reported in Table 4. The distribution of the events of each network through time is depicted in Figure 3.
+
+Synthetic datasets. We generate two artificial networks in order to evaluate the behavior of the models in controlled experimental settings. (i) Synthetic $\left( \pi \right)$ is sampled from the prior distribution stated in Subsection 3.2. The hyper-parameters $\mathbf{\beta }$, $K$ and $B$ are set to 0, 20 and 100, respectively. (ii) Synthetic $\left( \mu \right)$ is constructed based on temporal block structures. The timeline is divided into 10 intervals, and the node set is split into 20 groups. The links within each group are sampled from a Poisson distribution with a constant intensity of 5.
+
+
+
+Figure 3: Distribution of the links through time.
+
+Table 4: Statistics of networks. $\left| \mathcal{V}\right| :$ Number of nodes, $M :$ Number of pairs having at least one link, $\left| \mathcal{E}\right|$ : Total number of links, ${\left| {\mathcal{E}}_{ij}\right| }_{\max }$ : Max. number of links a pair of nodes has.
+
+| | $\left| \mathcal{V}\right|$ | $M$ | $\left| \mathcal{E}\right|$ | ${\left| {\mathcal{E}}_{ij}\right| }_{\max }$ |
+| --- | --- | --- | --- | --- |
+| Synthetic $\left( \mu \right)$ | 100 | 4,889 | 180,658 | 124 |
+| Synthetic $\left( \pi \right)$ | 100 | 3,009 | 22,477 | 32 |
+| College | 1,899 | 13,838 | 59,835 | 184 |
+| Contacts | 217 | 4,274 | 78,249 | 1,302 |
+| Hypertext | 113 | 2,196 | 20,818 | 1,281 |
+| Email | 986 | 16,064 | 332,334 | 4,992 |
+| Forum | 899 | 7,036 | 33,686 | 171 |
+
+Real Networks. (iii) The Hypertext network [36] was built from radio badge records capturing the interactions of conference attendees over 2.5 days, where each event time indicates 20 seconds of active contact. Similarly, (iv) the Contacts network [37] was generated from the interactions of individuals in an office environment. (v) The Forum network [38] comprises the activity data of university students on an online social forum system. (vi) The CollegeMsg network [39] contains the private messages among students on an online social platform. Finally, (vii) the Eu-Email network [40] was constructed based on the e-mails exchanged among the members of European research institutions.
+
+Baselines. We compare the performance of our method with three baselines. We include the LDM with Poisson rate and node-specific biases [31, 41] since it is the static method whose formulation is closest to ours. We randomly initialize the embeddings and bias terms and train the model with the Adam optimizer [34] for 500 epochs and a learning rate of 0.1. NODE2VEC (or N2V) [7], a very well-known GRL method, relies on the explicit generation of random walks starting from each node in the network and then learns node embeddings following the SkipGram [44] algorithm: it optimizes a softmax objective for the nodes lying within a fixed window around a chosen center node over the produced node sequences. It extends the DEEPWALK method [6] by introducing two additional parameters that bias the random walks. In our experiments, we tune the model's parameters $(p, q)$ over $\{ {0.25},{0.5},1,2,4\}$. Since NODE2VEC can also run over weighted networks, we additionally constructed a weighted graph based on the number of links through time and report the best score over both versions of the networks. CTDNE [42] is a dynamic node embedding approach performing temporal random walks over the network, but it cannot produce instantaneous node representations and yields embeddings only for a single given time. Therefore, we used the last time point of the training set to obtain the representations. We chose the recommended values for the common hyper-parameters of NODE2VEC and CTDNE, so the number of walks, walk length, and window size were set to 10, 80, and 10, respectively. We used the implementation provided by the StellarGraph Python package to produce the embeddings for CTDNE.
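+
+For orientation, the following is a minimal sketch of the uniform random-walk plus SkipGram pipeline that DEEPWALK-style baselines build on, using the hyper-parameters stated above (10 walks per node, walk length 80, window size 10). The graph, the `networkx`/`gensim` usage, and the tiny embedding dimension are illustrative assumptions, not the exact baseline implementations used in our experiments; NODE2VEC additionally biases the walks through its $(p, q)$ parameters.
+
+```python
+# Illustrative sketch only: uniform random walks + SkipGram (DeepWalk-style),
+# with 10 walks per node, walk length 80, and window size 10 as stated above.
+import random
+import networkx as nx
+from gensim.models import Word2Vec
+
+def random_walks(graph, num_walks=10, walk_length=80, seed=0):
+    rng = random.Random(seed)
+    walks, nodes = [], list(graph.nodes())
+    for _ in range(num_walks):
+        rng.shuffle(nodes)
+        for start in nodes:
+            walk = [start]
+            while len(walk) < walk_length:
+                nbrs = list(graph.neighbors(walk[-1]))
+                if not nbrs:
+                    break
+                walk.append(rng.choice(nbrs))
+            walks.append([str(v) for v in walk])
+    return walks
+
+G = nx.karate_club_graph()                      # placeholder static graph
+walks = random_walks(G)
+model = Word2Vec(sentences=walks, vector_size=2, window=10,
+                 min_count=0, sg=1, workers=1, epochs=5)
+emb = {v: model.wv[str(v)] for v in G.nodes()}  # node -> embedding vector
+```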
+
+Optimization of the proposed approach. Our objective given in Equation (5) is not a convex function, so the learning strategy we follow is of great importance both for escaping local minima and for the quality of the representations. We start by randomly initializing the model's hyper-parameters from $\left\lbrack {-1,1}\right\rbrack$, except for the velocity tensor, which is initially set to 0. We adopt a sequential strategy in learning these parameters: we first optimize the initial positions and bias terms together, $\left\{ {{\mathbf{x}}^{\left( 0\right) },\mathbf{\beta }}\right\}$, for a given number of epochs; then we include the velocity tensor, $\{ \mathbf{v}\}$, in the optimization and repeat the training for the same number of epochs. Finally, we add the prior parameters and learn all model hyper-parameters together. We employ the Adam optimizer [34] with a learning rate of 0.1, as sketched below.
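+
+A rough PyTorch-style sketch of this sequential strategy is given below; the parameter shapes and the placeholder objective are hypothetical stand-ins for the actual model, and the released implementation may differ.
+
+```python
+# Hypothetical sketch of the sequential learning strategy: parameters are
+# activated in stages (positions/biases -> velocities -> prior) and each stage
+# is trained with Adam for the same number of epochs.
+import torch
+
+N, D, B, epochs_per_stage = 100, 2, 100, 100
+x0   = torch.nn.Parameter(2 * torch.rand(N, D) - 1)   # initial positions in [-1, 1]
+beta = torch.nn.Parameter(2 * torch.rand(N) - 1)       # node biases in [-1, 1]
+v    = torch.nn.Parameter(torch.zeros(B, N, D))        # velocities start at 0
+prior_params = [torch.nn.Parameter(torch.rand(()))]    # placeholder prior hyper-parameter
+
+def negative_log_likelihood():
+    # Placeholder objective standing in for Equation (5).
+    return (x0.pow(2).sum() + beta.pow(2).sum() + v.pow(2).sum()
+            + sum(p.pow(2).sum() for p in prior_params))
+
+stages = [[x0, beta], [x0, beta, v], [x0, beta, v] + prior_params]
+for params in stages:
+    opt = torch.optim.Adam(params, lr=0.1)
+    for _ in range(epochs_per_stage):
+        opt.zero_grad()
+        loss = negative_log_likelihood()
+        loss.backward()
+        opt.step()
+```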
+
+For our method, we set the parameter $K = {25}$ and the bin count $B = {100}$ to have enough capacity to track node interactions. For the regularization term $\left( \lambda \right)$ of the prior, we first mask ${20}\%$ of the dyads in the optimization of Equation (5). We then train the model starting with $\lambda = {10}^{6}$ and reduce it to one-tenth after every 100 epochs. This procedure is repeated until $\lambda = {10}^{-6}$, and we choose the $\lambda$ value minimizing the negative log-likelihood of the masked pairs. The final node embeddings are then obtained by running this annealing strategy without any mask up to the chosen $\lambda$ value. We repeat this procedure 5 times with different initializations and consider the best-performing
+
+method to learn the embeddings. The relative standard deviation of the experiments is always less than 0.5 for all the networks, and we depict the negative log-likelihood of the masked pairs for the annealing strategy with 5 random runs in Figure 4.
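+
+The following small sketch illustrates the annealing schedule described above; `train_block` and `masked_nll` are hypothetical callbacks standing in for the actual training and masked-dyad evaluation routines.
+
+```python
+# Sketch of the lambda annealing schedule: train in blocks of 100 epochs while
+# shrinking lambda by a factor of 10 from 1e6 down to 1e-6, and keep the value
+# giving the lowest negative log-likelihood on the masked (held-out) dyads.
+def anneal_lambda(train_block, masked_nll):
+    """train_block(lam, epochs) trains with prior scale `lam`;
+    masked_nll() evaluates the masked pairs. Both are hypothetical callbacks."""
+    best_lam, best_nll = None, float("inf")
+    lam = 1e6
+    while lam >= 1e-6:
+        train_block(lam, epochs=100)
+        nll = masked_nll()
+        if nll < best_nll:
+            best_lam, best_nll = lam, nll
+        lam /= 10.0
+    return best_lam
+```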
+
+
+
+Figure 4: Negative log-likelihood of the masked pairs for the annealing strategy applied for tuning the $\lambda$ parameter with 5 random runs.
+
+### A.2 Computational Problems and Model Complexity
+
+Log-likelihood function. Note that we need to evaluate the log-intensity term in Equation (5) for each pair $\left( {i, j}\right) \in {\mathcal{V}}^{2}$ and event time ${e}_{ij} \in {\mathcal{E}}_{ij}$. Therefore, the computational cost required for the whole network is bounded by $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}\left| \mathcal{E}\right| }\right)$. However, we can alleviate this by computing certain coefficients once at the beginning of the optimization process. If we define ${\alpha }_{ij} \mathrel{\text{:=}} \left( {{e}_{ij} - {\Delta }_{B}\left( {{b}^{ * } - 1}\right) }\right)$, then the sum over the set of all events, ${\mathcal{E}}_{ij}^{{b}^{ * }}$, lying inside the ${b}^{ * }$'th bin (i.e., the events in $\left\lbrack {{\Delta }_{B}\left( {{b}^{ * } - 1}\right) ,{\Delta }_{B}{b}^{ * }}\right)$) can be rewritten as:
+
+$$
+\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) = \mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}\left( {{\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( {e}_{ij}\right) - {\mathbf{r}}_{j}\left( {e}_{ij}\right) \end{Vmatrix}}^{2}}\right)
+$$
+
+$$
+= \mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}\left( {{\beta }_{i} + {\beta }_{j}}\right) - \mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}{\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{{b = 1}}^{{{b}^{ * } - 1}}\Delta {\mathbf{v}}_{ij}^{\left( b\right) } + {\alpha }_{ij}\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }\end{Vmatrix}}^{2}
+$$
+
+$$
+= \mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}\left( {{\beta }_{i} + {\beta }_{j}}\right) - \mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}\left( {{\alpha }_{ij}^{2}{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }\end{Vmatrix}}^{2} + {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{{b = 1}}^{{{b}^{ * } - 1}}\Delta {\mathbf{v}}_{ij}^{\left( b\right) }\end{Vmatrix}}^{2} + 2{\alpha }_{ij}\left\langle {\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{{b = 1}}^{{{b}^{ * } - 1}}\Delta {\mathbf{v}}_{ij}^{\left( b\right) },\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }}\right\rangle }\right)
+$$
+
+$$
+= \left| {\mathcal{E}}_{ij}^{{b}^{ * }}\right| \left( {{\beta }_{i} + {\beta }_{j}}\right) - {\alpha }_{2}^{\left( {b}^{ * }\right) }{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }\end{Vmatrix}}^{2} - \left| {\mathcal{E}}_{ij}^{{b}^{ * }}\right| {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{{b = 1}}^{{{b}^{ * } - 1}}\Delta {\mathbf{v}}_{ij}^{\left( b\right) }\end{Vmatrix}}^{2} - 2{\alpha }_{1}^{\left( {b}^{ * }\right) }\left\langle {\Delta {\mathbf{x}}_{ij}^{\left( 0\right) } + {\Delta }_{B}\mathop{\sum }\limits_{{b = 1}}^{{{b}^{ * } - 1}}\Delta {\mathbf{v}}_{ij}^{\left( b\right) },\Delta {\mathbf{v}}_{ij}^{\left( {b}^{ * }\right) }}\right\rangle
+$$
+
+where ${\alpha }_{1}^{\left( {b}^{ * }\right) } \mathrel{\text{:=}} \mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}{\alpha }_{ij}$ and ${\alpha }_{2}^{\left( {b}^{ * }\right) } \mathrel{\text{:=}} \mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}^{{b}^{ * }}}}{\alpha }_{ij}^{2}$. Following the same strategy for each bin, the computational complexity can be reduced to $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}B}\right)$.
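+
+Concretely, only the event count $\left| {\mathcal{E}}_{ij}^{{b}^{ * }}\right|$ and the coefficients ${\alpha }_{1}^{\left( {b}^{ * }\right) }$ and ${\alpha }_{2}^{\left( {b}^{ * }\right) }$ are needed per pair and bin, and they can be computed once before optimization. A minimal numpy sketch of this precomputation, assuming a hypothetical event-list input format:
+
+```python
+# Precompute, for every node pair and bin, the event count |E_ij^b| and the
+# coefficients alpha_1 = sum(alpha_ij) and alpha_2 = sum(alpha_ij^2), so the
+# log-intensity sum no longer has to loop over individual events.
+import numpy as np
+
+def precompute_bin_stats(events, N, B, T=1.0):
+    """events: list of (i, j, t) tuples with 0 <= t < T (illustrative format)."""
+    delta_b = T / B
+    counts = np.zeros((N, N, B))
+    alpha1 = np.zeros((N, N, B))
+    alpha2 = np.zeros((N, N, B))
+    for i, j, t in events:
+        b = min(int(t // delta_b), B - 1)   # 0-based bin index of the event
+        a = t - delta_b * b                  # alpha_ij = e_ij - Delta_B * (b* - 1)
+        counts[i, j, b] += 1
+        alpha1[i, j, b] += a
+        alpha2[i, j, b] += a ** 2
+    return counts, alpha1, alpha2
+```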
+
+Since we use the squared Euclidean distance in the integral term of our objective, we can derive an exact formula for its computation (please see Lemma A.3 for the details). We need to evaluate it for all node pairs, so it requires at most $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{2}\right)$ operations. Hence, the complexity of the log-likelihood function is $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}B}\right)$. Instead of optimizing over the whole network at once, we apply a batching strategy over the set of nodes in order to reduce the memory requirements, sampling $\mathcal{S}$ nodes for each epoch. Hence, the overall complexity of the log-likelihood is $\mathcal{O}\left( {{\mathcal{S}}^{2}B\mathcal{I}}\right)$ where $\mathcal{I}$ is the number of epochs.
+
+Computation of the prior function. The covariance matrix, $\mathbf{\sum } \in {\mathbb{R}}^{{BND} \times {BND}}$, of the prior is defined by $\mathbf{\sum } \mathrel{\text{:=}} {\lambda }^{2}\left( {{\sigma }_{\mathbf{\sum }}^{2}\mathbf{I} + \mathbf{K}}\right)$ with a scaling factor $\lambda \in \mathbb{R}$ and a noise variance ${\sigma }_{\mathbf{\sum }}^{2} \in {\mathbb{R}}^{ + }$. The multivariate normal distribution is thus parametrized with a noise term ${\sigma }_{\mathbf{\sum }}^{2}\mathbf{I}$ and a matrix $\mathbf{K} \in {\mathbb{R}}^{{BND} \times {BND}}$ having a low-rank form. In other words, $\mathbf{K}$ is written as $\mathbf{B} \otimes \mathbf{C} \otimes \mathbf{D}$ where $\mathbf{B}$ is a block-diagonal matrix combining the parameter ${c}_{{\mathbf{x}}^{0}}$ with the RBF kernel $\exp \left( {-{\left( {c}_{b} - {c}_{{b}^{\prime }}\right) }^{2}/{\sigma }_{\mathbf{B}}^{2}}\right) \in {\mathbb{R}}^{B \times B}$ for the bin centers ${c}_{b} \mathrel{\text{:=}} \left( {{t}_{b - 1} + {t}_{b}}\right) /2$. The matrix capturing the node interactions, $\mathbf{C} \mathrel{\text{:=}} {\mathbf{{QQ}}}^{\top } \in {\mathbb{R}}^{N \times N}$, is defined with a low-rank matrix $\mathbf{Q} \in {\mathbb{R}}^{N \times k}$ whose row sums equal $1$ $\left( {k \ll N}\right)$, and we set $\mathbf{D} \mathrel{\text{:=}} \mathbf{I}{\mathbf{I}}^{\top } \in {\mathbb{R}}^{D \times D}$. By considering the Cholesky decomposition [35] of $\mathbf{B} \mathrel{\text{:=}} {\mathbf{{LL}}}^{\top }$, which exists since $\mathbf{B}$ is symmetric positive semi-definite, we can factorize $\mathbf{K} \mathrel{\text{:=}} {\mathbf{K}}_{f}{\mathbf{K}}_{f}^{\top }$ where ${\mathbf{K}}_{f} \mathrel{\text{:=}} \mathbf{L} \otimes \mathbf{Q} \otimes \mathbf{I}$.
+
+Note that the precision matrix, ${\mathbf{\sum }}^{-1}$, can be written using the Woodbury matrix identity [35] as follows:
+
+$$
+{\mathbf{\sum }}^{-1} = {\lambda }^{-2}{\left( {\sigma }_{\mathbf{\sum }}^{2}\mathbf{I} + {\mathbf{K}}_{f}{\mathbf{K}}_{f}^{\top }\right) }^{-1} = {\lambda }^{-2}\left( {{\sigma }_{\mathbf{\sum }}^{-2}\mathbf{I} - {\sigma }_{\mathbf{\sum }}^{-2}{\mathbf{K}}_{f}{\mathbf{R}}^{-1}{\mathbf{K}}_{f}^{\top }{\sigma }_{\mathbf{\sum }}^{-2}}\right)
+$$
+
+where the capacitance matrix $\mathbf{R} \mathrel{\text{:=}} {\mathbf{I}}_{BKD} + {\sigma }_{\mathbf{\sum }}^{-2}{\mathbf{K}}_{f}^{\top }{\mathbf{K}}_{f}$.
+
+The log-determinant of $\mathbf{\sum }$ can also be simplified by applying the matrix determinant lemma [35]:
+
+$$
+\log \left( {\det \left( \mathbf{\sum }\right) }\right) = \left( {BND}\right) \log \left( {\lambda }^{2}\right) + \log \left( {\det \left( {{\sigma }_{\mathbf{\sum }}^{2}{\mathbf{I}}_{BND} + {\mathbf{K}}_{f}{\mathbf{K}}_{f}^{\top }}\right) }\right)
+$$
+
+$$
+= \left( {BND}\right) \log \left( {\lambda }^{2}\right) + \log \left( {\det \left( {{\mathbf{I}}_{BKD} + {\sigma }_{\mathbf{\sum }}^{-2}{\mathbf{K}}_{f}^{\top }{\mathbf{K}}_{f}}\right) }\right) + \left( {BND}\right) \log \left( {\sigma }_{\mathbf{\sum }}^{2}\right)
+$$
+
+$$
+= \left( {BND}\right) \left( {\log \left( {\lambda }^{2}\right) + \log \left( {\sigma }_{\mathbf{\sum }}^{2}\right) }\right) + \log \left( {\det \left( \mathbf{R}\right) }\right)
+$$
+
+Note that the most demanding points in the computation of the prior are the calculation of the inverse and determinant terms and some matrix multiplication operations. Since $\mathbf{R}$ is a matrix of size ${BKD} \times {BKD}$, its inverse and determinant can be found in at most $\mathcal{O}\left( {{B}^{3}{K}^{3}{D}^{3}}\right)$ operations. We also need the term ${\mathbf{K}}_{f}{\mathbf{R}}^{-1}$, which can be computed in $\mathcal{O}\left( {{B}^{3}{D}^{3}{K}^{2}\left| \mathcal{V}\right| }\right)$ steps, so the number of operations required for the prior can be bounded by $\mathcal{O}\left( {{B}^{3}{D}^{3}{K}^{2}\left| \mathcal{V}\right| }\right)$. It is worth noting that we cannot directly apply the batching strategy to the computation of the inverse of the capacitance matrix, $\mathbf{R}$. However, we can compute it once, utilize it for the calculation of the log-prior over different sets of node samples, and recompute it only when the parameters are updated again.
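+
+As an illustration, the following numpy sketch checks the Woodbury-based inverse and the determinant-lemma-based log-determinant against dense computations on tiny, made-up dimensions; the factor sizes are hypothetical and only meant to mirror the structure ${\mathbf{K}}_{f} = \mathbf{L} \otimes \mathbf{Q} \otimes \mathbf{I}$.
+
+```python
+# Verify, on tiny hypothetical dimensions, that the Woodbury identity and the
+# matrix determinant lemma reproduce the dense inverse and log-determinant of
+# Sigma = lambda^2 (sigma^2 I + K_f K_f^T) with K_f = L kron Q kron I.
+import numpy as np
+
+rng = np.random.default_rng(0)
+B, N, D, K = 3, 4, 2, 2
+lam, sigma2 = 1.5, 0.1
+
+L = np.tril(rng.normal(size=(B, B)))          # Cholesky-like factor of the bin kernel
+Q = rng.dirichlet(np.ones(K), size=N)         # rows sum to one, low rank k << N
+Kf = np.kron(np.kron(L, Q), np.eye(D))        # (BND) x (BKD) factor
+M = B * N * D
+Sigma = lam**2 * (sigma2 * np.eye(M) + Kf @ Kf.T)
+
+# Woodbury: Sigma^{-1} = lam^{-2} (sigma^{-2} I - sigma^{-2} Kf R^{-1} Kf^T sigma^{-2})
+R = np.eye(B * K * D) + Kf.T @ Kf / sigma2    # capacitance matrix
+Sigma_inv = (np.eye(M) / sigma2 - (Kf @ np.linalg.solve(R, Kf.T)) / sigma2**2) / lam**2
+assert np.allclose(Sigma_inv, np.linalg.inv(Sigma))
+
+# Determinant lemma: log det(Sigma) = BND (log lam^2 + log sigma^2) + log det(R)
+logdet = M * (np.log(lam**2) + np.log(sigma2)) + np.linalg.slogdet(R)[1]
+assert np.isclose(logdet, np.linalg.slogdet(Sigma)[1])
+```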
+
+To sum up, the complexity of our proposed approach is $\mathcal{O}\left( {B\mathcal{I}{\mathcal{S}}^{2} + {B}^{3}{D}^{3}{K}^{2}\mathcal{S}\mathcal{I}}\right)$ where $\mathcal{S}$ is the batch size and $\mathcal{I}$ is the number of epochs.
+
+### A.3 Theoretical Results
+
+Lemma A.1. For given fixed bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ , the node embeddings, ${\left\{ {\mathbf{r}}_{i}\left( t\right) \right\} }_{i \in \mathcal{V}}$ , learned by optimizing the objective function given in Equation 1 satisfy
+
+$$
+\left| {\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) }\end{Vmatrix}{dt}}\right| \leq \sqrt{\left( {{\beta }_{i} + {\beta }_{j}}\right) - \log \left( {{p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }}\right) }\;\text{ for all }\left( {i, j}\right) \in {\mathcal{V}}^{2}
+$$
+
+where ${p}_{ij}$ is the probability of having more than ${m}_{ij}$ links between $i$ and $j$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ .
+
+Proof. Let ${X}_{ij} \mathrel{\text{:=}} \left| {{\mathcal{E}}_{ij}\left\lbrack {{t}_{l},{t}_{u}}\right) }\right|$ be the number of links between nodes $i, j \in \mathcal{V}$ following a nonhomogeneous Poisson process with intensity function, ${\lambda }_{ij}\left( t\right)$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ . By Markov’s inequality, it can be written that
+
+$$
+{p}_{ij} \mathrel{\text{:=}} \mathbb{P}\left\{ {{X}_{ij} \geq {m}_{ij}}\right\} \leq \frac{\mathbb{E}\left\lbrack {X}_{ij}\right\rbrack }{{m}_{ij}}
+$$
+
+$$
+= \frac{1}{{m}_{ij}}{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {{\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}}\right) {dt}
+$$
+
+$$
+= \frac{1}{{m}_{ij}}\exp \left( {{\beta }_{i} + {\beta }_{j}}\right) {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}}\right) {dt}
+$$
+
+$$
+\leq \frac{1}{{m}_{ij}}\left( {{t}_{u} - {t}_{l}}\right) \exp \left( {{\beta }_{i} + {\beta }_{j}}\right) \exp \left( {-\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}{dt}}\right)
+$$
+
+$$
+\leq \frac{1}{{m}_{ij}}\left( {{t}_{u} - {t}_{l}}\right) \exp \left( {{\beta }_{i} + {\beta }_{j}}\right) \exp \left( {-\frac{1}{{\left( {t}_{u} - {t}_{l}\right) }^{2}}{\left( {\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}dt\right) }^{2}}\right)
+$$
+
+where the last two lines follow from Jensen's inequality. Finally, it can be concluded that
+
+$$
+\left| {\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) }\end{Vmatrix}{dt}}\right| \leq \sqrt{\log \left( {\exp \left( {{\beta }_{i} + {\beta }_{j}}\right) \frac{\left( {t}_{u} - {t}_{l}\right) }{{m}_{ij}{p}_{ij}}}\right) }
+$$
+
+$$
+= \sqrt{\left( {{\beta }_{i} + {\beta }_{j}}\right) - \log \left( {{p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }}\right) }
+$$
+
+
+Theorem A.2. Let $\mathbf{f}\left( t\right) : \left\lbrack {0, T}\right\rbrack \rightarrow {\mathbb{R}}^{D}$ be a continuous embedding of a node. For any given $\epsilon > 0$ , there exists a continuous, piecewise-linear node embedding, $\mathbf{r}\left( t\right)$ , satisfying $\parallel \mathbf{f}\left( t\right) - \mathbf{r}\left( t\right) {\parallel }_{2} < \epsilon$ for all $t \in \left\lbrack {0, T}\right\rbrack$ where $\mathbf{r}\left( t\right) \mathrel{\text{:=}} {\mathbf{r}}^{\left( b\right) }\left( t\right)$ for all $\left( {b - 1}\right) {\Delta }_{B} \leq t < b{\Delta }_{B},\mathbf{r}\left( t\right) \mathrel{\text{:=}} {\mathbf{r}}^{\left( B\right) }\left( t\right)$ for $t = T$ and ${\Delta }_{B} = T/B$ for some $B \in {\mathbb{N}}^{ + }$ .
+
+Proof. Let $\mathbf{f}\left( t\right) : \left\lbrack {0, T}\right\rbrack \rightarrow {\mathbb{R}}^{D}$ be a continuous embedding so it is also uniformly continuous by the Heine-Cantor theorem since $\left\lbrack {0, T}\right\rbrack$ is a compact set. Then, we can find some $B \in {\mathbb{N}}^{ + }$ such that for every $t,\widetilde{t} \in \left\lbrack {0, T}\right\rbrack$ with $\left| {t - \widetilde{t}}\right| \leq {\Delta }_{B} \mathrel{\text{:=}} T/B$ implies $\parallel \mathbf{f}\left( t\right) - \mathbf{f}\left( \widetilde{t}\right) {\parallel }_{2} < \epsilon /2$ for any given $\epsilon > 0$ .
+
+Let us define ${\mathbf{r}}^{\left( b\right) }\left( t\right) = {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + {\mathbf{v}}_{b}\left( {t - \left( {b - 1}\right) {\Delta }_{B}}\right)$ recursively for each $b \in \{ 1,\ldots , B\}$ where ${\mathbf{r}}^{\left( 0\right) }\left( 0\right) \mathrel{\text{:=}} {\mathbf{x}}_{0} = \mathbf{f}\left( 0\right)$ and ${\mathbf{v}}_{b} \mathrel{\text{:=}} \frac{\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }{{\Delta }_{B}}$ . Then it can be seen that ${\mathbf{r}}^{\left( b\right) }\left( {b{\Delta }_{B}}\right) = \mathbf{f}\left( {b{\Delta }_{B}}\right)$ for all $b \in \{ 1,\ldots , B\}$ because
+
+$$
+{\mathbf{r}}^{\left( b\right) }\left( {b{\Delta }_{B}}\right) = {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + {\mathbf{v}}_{b}\left( {b{\Delta }_{B} - {\Delta }_{B}\left( {b - 1}\right) }\right)
+$$
+
+$$
+= {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + {\mathbf{v}}_{b}{\Delta }_{B}
+$$
+
+$$
+= {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + \left( \frac{\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }{{\Delta }_{B}}\right) {\Delta }_{B}
+$$
+
+$$
+= {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }\right)
+$$
+
+$$
+= {\mathbf{r}}^{\left( b - 2\right) }\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) + \left( {\mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) }\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }\right)
+$$
+
+$$
+= {\mathbf{r}}^{\left( b - 2\right) }\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 2}\right) {\Delta }_{B}}\right) }\right)
+$$
+
+$$
+= \cdots
+$$
+
+$$
+= {\mathbf{r}}^{\left( 0\right) }\left( 0\right) + \left( {\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( 0\right) }\right)
+$$
+
+$$
+= \mathbf{f}\left( {b{\Delta }_{B}}\right)
+$$
+
+where the last line follows from the fact that ${\mathbf{r}}^{\left( 0\right) }\left( 0\right) = {\mathbf{x}}_{0} = \mathbf{f}\left( 0\right)$ by definition. Therefore, for any given point $t \in \lbrack 0, T)$ and $b = \left\lfloor {t/{\Delta }_{B}}\right\rfloor + 1$, it can be seen that
+
+$$
+\parallel \mathbf{f}\left( t\right) - \mathbf{r}\left( t\right) {\parallel }_{2} = {\begin{Vmatrix}\mathbf{f}\left( t\right) - {\mathbf{r}}^{\left( b\right) }\left( t\right) \end{Vmatrix}}_{2}
+$$
+
+$$
+= {\begin{Vmatrix}\mathbf{f}\left( t\right) - \left( {\mathbf{r}}^{\left( b - 1\right) }\left( \left( b - 1\right) {\Delta }_{B}\right) + {\mathbf{v}}_{b}\left( t - \left( b - 1\right) {\Delta }_{B}\right) \right) \end{Vmatrix}}_{2}
+$$
+
+$$
+= {\begin{Vmatrix}\mathbf{f}\left( t\right) - \left( {\mathbf{r}}^{\left( b - 1\right) }\left( \left( b - 1\right) {\Delta }_{B}\right) + \left( \frac{\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }{{\Delta }_{B}}\right) \left( t - \left( b - 1\right) {\Delta }_{B}\right) \right) \end{Vmatrix}}_{2}
+$$
+
+$$
+= {\begin{Vmatrix}\left( \mathbf{f}\left( t\right) - {\mathbf{r}}^{\left( b - 1\right) }\left( \left( b - 1\right) {\Delta }_{B}\right) \right) + \left( \mathbf{f}\left( b{\Delta }_{B}\right) - \mathbf{f}\left( \left( b - 1\right) {\Delta }_{B}\right) \right) \left( \frac{t - \left( {b - 1}\right) {\Delta }_{B}}{{\Delta }_{B}}\right) \end{Vmatrix}}_{2}
+$$
+
+$$
+\leq \begin{Vmatrix}{\mathbf{f}\left( t\right) - {\mathbf{r}}^{\left( b - 1\right) }\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }\end{Vmatrix} + \begin{Vmatrix}{\mathbf{f}\left( {b{\Delta }_{B}}\right) - \mathbf{f}\left( {\left( {b - 1}\right) {\Delta }_{B}}\right) }\end{Vmatrix}
+$$
+
+$$
+< \frac{\epsilon }{2} + \frac{\epsilon }{2}
+$$
+
+$$
+= \epsilon
+$$
+
+where the inequality in the fifth line holds since $\left| \frac{t - \left( {b - 1}\right) {\Delta }_{B}}{{\Delta }_{B}}\right| \leq 1$.
+
+Lemma A.3 (Integral Computation). The integral of the intensity function, ${\lambda }_{ij}\left( t\right)$, from ${t}_{l}$ to ${t}_{u}$ is equal to
+
+$$
+{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {{\beta }_{ij} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}}\right) \mathrm{d}t = \frac{\sqrt{\pi }\exp \left( {{\beta }_{ij} + {r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) }{2\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}}{\left. \operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}}
+$$
+
+where ${\beta }_{ij} \mathrel{\text{:=}} {\beta }_{i} + {\beta }_{j}$ , $\Delta {\mathbf{x}}_{ij} \mathrel{\text{:=}} {\mathbf{x}}_{i}^{\left( 0\right) } - {\mathbf{x}}_{j}^{\left( 0\right) }$ , $\Delta {\mathbf{v}}_{ij} \mathrel{\text{:=}} {\mathbf{v}}_{i}^{\left( 1\right) } - {\mathbf{v}}_{j}^{\left( 1\right) }$ and ${r}_{ij} \mathrel{\text{:=}} \frac{\left\langle \Delta {\mathbf{v}}_{ij},\Delta {\mathbf{x}}_{ij}\right\rangle }{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}$ .
+
+Proof.
+
+$$
+{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}}\right) \mathrm{d}t = {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}^{2}{t}^{2} - 2\left\langle {\Delta {\mathbf{x}}_{ij},\Delta {\mathbf{v}}_{ij}}\right\rangle t - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) \mathrm{d}t
+$$
+
+$$
+= {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) }^{2} + {r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) \mathrm{d}t \tag{6}
+$$
+
+where ${r}_{ij} \mathrel{\text{:=}} \frac{\left\langle \Delta {\mathbf{v}}_{ij},\Delta {\mathbf{x}}_{ij}\right\rangle }{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}$ . The substitution $u = \begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}t + {r}_{ij}$ yields $\mathrm{d}u = \begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}\mathrm{d}t$ . Furthermore,
+
+we have
+
+$$
+{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) }^{2}}\right) \mathrm{d}t = \frac{1}{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}{\int }_{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{l} + {r}_{ij}}^{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{u} + {r}_{ij}}\exp \left( {-{u}^{2}}\right) \mathrm{d}u
+$$
+
+$$
+= \frac{1}{\begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}}\frac{\sqrt{\pi }}{2}\left( {\frac{2}{\sqrt{\pi }}{\int }_{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{l} + {r}_{ij}}^{\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}{t}_{u} + {r}_{ij}}\exp \left( {-{u}^{2}}\right) \mathrm{d}u}\right)
+$$
+
+$$
+= {\left. \frac{\sqrt{\pi }}{2\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}}\operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}} \tag{7}
+$$
+
+By using Equations (6) and (7), it can be obtained that
+
+$$
+{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}}\right) \mathrm{d}t = \exp \left( {{r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) {\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {-{\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) }^{2}}\right) \mathrm{d}t
+$$
+
+$$
+= \frac{\sqrt{\pi }\exp \left( {{r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) }{2\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}}{\left. \operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}}
+$$
+
+Therefore, we can conclude that
+
+$$
+{\int }_{{t}_{l}}^{{t}_{u}}\exp \left( {{\beta }_{ij} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij} + \Delta {\mathbf{v}}_{ij}t\end{Vmatrix}}^{2}}\right) \mathrm{d}t = \frac{\sqrt{\pi }\exp \left( {{\beta }_{ij} + {r}_{ij}^{2} - {\begin{Vmatrix}\Delta {\mathbf{x}}_{ij}\end{Vmatrix}}^{2}}\right) }{2\begin{Vmatrix}{\Delta {\mathbf{v}}_{ij}}\end{Vmatrix}}{\left. \operatorname{erf}\left( \begin{Vmatrix}\Delta {\mathbf{v}}_{ij}\end{Vmatrix}t + {r}_{ij}\right) \right| }_{t = {t}_{l}}^{t = {t}_{u}}
+$$
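+
+A short numerical sanity check of the closed-form integral above against quadrature, with arbitrary made-up values for the embedding differences and biases:
+
+```python
+# Numerically check the closed-form integral of the intensity (Lemma A.3)
+# against quadrature for arbitrary, made-up parameter values.
+import numpy as np
+from scipy.special import erf
+from scipy.integrate import quad
+
+rng = np.random.default_rng(1)
+dx, dv = rng.normal(size=2), rng.normal(size=2)   # Delta x_ij, Delta v_ij (D = 2)
+beta_ij, t_l, t_u = 0.3, 0.2, 0.9
+
+def intensity(t):
+    return np.exp(beta_ij - np.sum((dx + dv * t) ** 2))
+
+nv = np.linalg.norm(dv)
+r = np.dot(dv, dx) / nv
+closed_form = (np.sqrt(np.pi) * np.exp(beta_ij + r**2 - np.sum(dx**2)) / (2 * nv)
+               * (erf(nv * t_u + r) - erf(nv * t_l + r)))
+numeric, _ = quad(intensity, t_l, t_u)
+assert np.isclose(closed_form, numeric)
+```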
+
+
+### A.4 Table of Symbols
+
+The detailed list of the symbols used throughout the manuscript and their corresponding definitions can be found in Table 5.
+
+Table 5: Table of symbols
+
+| Symbol | Description |
+| --- | --- |
+| $\mathcal{G}$ | Graph |
+| $\mathcal{V}$ | Vertex set |
+| $\mathcal{E}$ | Edge set |
+| ${\mathcal{E}}_{ij}$ | Edge set of node pair $(i, j)$ |
+| $N$ | Number of nodes |
+| $D$ | Dimension size |
+| ${\mathcal{I}}_{T}$ | Time interval |
+| $T$ | Time length |
+| $B$ | Number of bins |
+| ${\beta }_{i}$ | Bias term of node $i$ |
+| ${\mathbf{x}}^{\left( 0\right) }$ | Initial position matrix |
+| ${\mathbf{v}}^{\left( b\right) }$ | Velocity matrix for bin $b$ |
+| ${\mathbf{r}}_{i}\left( t\right)$ | Position of node $i$ at time $t$ |
+| ${\lambda }_{ij}\left( t\right)$ | Intensity of node pair $(i, j)$ at time $t$ |
+| ${e}_{ij}$ | An event time of node pair $(i, j)$ |
+| $\mathbf{\sum }$ | Covariance matrix |
+| $\lambda$ | Scaling factor of the covariance |
+| ${\sigma }_{\mathbf{\sum }}$ | Noise variance |
+| ${\sigma }_{\mathbf{B}}$ | Lengthscale variable of the RBF kernel |
+| $\otimes$ | Kronecker product |
+| $\mathbf{I}$ | Identity matrix |
+| $\mathbf{B}$ | Bin interaction matrix |
+| $\mathbf{C}$ | Node interaction matrix |
+| $\mathbf{D}$ | Dimension interaction matrix |
+| $\mathbf{R}$ | Capacitance matrix |
+| $K$ | Latent dimension of $\mathbf{C}$ |
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f9b8b644aee2c21a6b58822f696df2ed8de46d1c
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/48WaBYh_zbP/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,258 @@
+§ PIECEWISE-VELOCITY MODEL FOR LEARNING CONTINUOUS-TIME DYNAMIC NODE REPRESENTATIONS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Networks have become indispensable and ubiquitous structures in many fields to model the interactions among different entities, such as friendship in social networks or protein interactions in biological graphs. A major challenge is to understand the structure and dynamics of these systems. Although networks evolve through time, most existing graph representation learning methods target only static networks. While approaches have been developed for the modeling of dynamic networks, there is a lack of efficient continuous-time dynamic graph representation learning methods that can provide accurate network characterization and visualization in low dimensions while explicitly accounting for prominent network characteristics such as homophily and transitivity. In this paper, we propose the Piecewise-Velocity Model (PIVEM) for the representation of continuous-time dynamic networks. It learns dynamic embeddings in which the temporal evolution of nodes is approximated by piecewise linear interpolations based on a latent distance model with piecewise constant node-specific velocities. The model allows for analytically tractable expressions of the associated Poisson process likelihood with scalable inference invariant to the number of events. We further impose a scalable Kronecker-structured Gaussian Process prior on the dynamics, accounting for community structure, temporal smoothness, and disentangled (uncorrelated) latent embedding dimensions optimally learned to characterize the network dynamics. We show that PIVEM can successfully represent network structure and dynamics in ultra-low two- and three-dimensional embedding spaces. We further extensively evaluate the performance of the approach on various networks of different types and sizes and find that it outperforms existing relevant state-of-the-art methods in downstream tasks such as link prediction. In summary, PIVEM enables easily interpretable dynamic network visualizations and characterizations that can further improve our understanding of the intrinsic dynamics of time-evolving networks.
+
+§ 1 INTRODUCTION
+
+With technological advancements in data storage and production systems, we have witnessed the massive growth of graph (or network) data in recent years, with many prominent examples, including social, technological, and biological networks from diverse disciplines [1]. Graphs provide a natural way to store and represent the interactions among data points, and machine learning techniques on graphs have thus gained considerable attention for extracting meaningful information from these complex systems and performing various predictive tasks. In this regard, Graph Representation Learning (GRL) techniques have become a cornerstone in the field through their exceptional performance in many downstream tasks such as node classification and edge prediction. Unlike classical techniques relying on the extraction and design of handcrafted feature vectors peculiar to given networks, GRL approaches aim to design algorithms that automatically learn features optimally preserving various characteristics of networks in their induced latent space.
+
+Although many networks evolve through time and are subject to modifications in structure with newly arriving nodes or emerging connections, GRL methods have primarily addressed static networks, in other words, a snapshot of the network at a specific time. However, recent years have seen increasing efforts toward modeling dynamic complex networks; see also [2] for a review. Whereas most approaches have concentrated on discrete-time temporal networks, built upon a collection of time-stamped networks (cf. [3-11]), modeling of networks in continuous time has also been studied (cf. [12-15]). These approaches have been based on latent class [3, 4, 12-14] and latent feature modeling approaches [5-11, 15], including advanced dynamic graph neural network representations [16, 17].
+
+Although these procedures have enabled the characterization of evolving networks, useful for downstream tasks such as link prediction and node classification, existing dynamic latent feature models are either formulated in discrete time or do not explicitly account for network homophily and transitivity in terms of their latent representations. Whereas latent class models typically provide interpretable representations at the level of groups, latent feature models generally rely on high-dimensional latent representations that are not easily amenable to visualization and interpretation. A further complication of most existing dynamic modeling approaches is their scalability, with complexity typically growing with the number of observed events and the number of network dyads.
+
+This work addresses the problem of embedding nodes in a continuous-time latent space and seeks to accurately model network interaction patterns using low-dimensional, scalable representations explicitly accounting for network homophily and transitivity. The main contributions of the paper can be summarized as follows:
+
+ * We propose a novel scalable GRL method, the Piecewise-Velocity Model (PIVEM), to flexibly learn continuous-time dynamic node representations.
+
+ * We present a framework balancing the trade-off between the smoothness of node trajectories in the latent space and model capacity accounting for the temporal evolution.
+
+ * We show that the PIVEM can embed nodes accurately in very low dimensional spaces, i.e., $D = 2$ , such that it serves as a dynamic network visualization tool facilitating human insights into networks' complex, evolving structures.
+
+ * The performance of the introduced approach is extensively evaluated in various downstream tasks, such as network reconstruction and link prediction. We show that it outperforms well-known baseline methods on a wide range of datasets. In addition, we propose an efficient model optimization strategy enabling PIVEM to scale to large networks.
+
+Source code and other materials. The datasets, implementation of the method in Python, and all the generated animations can be found at the address: https://tinyurl.com/pivem.
+
+§ 2 RELATED WORK
+
+The work on dynamic modeling of complex networks has attracted substantial attention in recent years and covers approaches for the modeling of dynamic structures at the level of groups (i.e., latent class models) as well as dynamic representation learning approaches based on latent feature models, including graph neural networks (GNNs). Whereas most attention has been given to discrete-time dynamic networks, a substantial body of work has also covered continuous-time modeling, as outlined below.
+
+§ 2.1 DYNAMIC LATENT CLASS MODELS
+
+Initial efforts for modeling continuously evolving networks have combined latent class models defined by stochastic block models [18, 19] with Hawkes processes [20, 21]. In the work of [12], co-dependent (through time) Hawkes processes were combined with the Infinite Relational Model [22] (Hawkes IRM), yielding a non-parametric Bayesian approach capable of expressing reciprocity between inferred groups of actors. Drawbacks of such a model are the computational cost of the imposed Markov chain Monte Carlo inference, as well as its limitation to modeling only reciprocation effects. Scalability issues were addressed in [13] via the Block Hawkes Model (BHM), which utilizes variational inference and simplifies the Hawkes IRM model by associating only the inferred block-structure pairs with a univariate point process. Recently, the BHM model was extended to decouple interactions between different pairs of nodes belonging to the same block pair through the use of independent univariate Hawkes processes, defining the Community Hawkes Independent Pairs model [14]. Whereas the above works are based on continuous-time modeling of dynamic networks, the dynamic IRM (dIRM) of [3] focused on the modeling of discrete-time networks by introducing an infinite Hidden Markov Model (IHMM) to account for transitions of nodes between communities over time. In [4], a dynamic hierarchical block model was proposed based on the modeling of change points admitting dynamic node relocation within a Gibbs fragmentation tree. Despite the various advantages of such models, networks are constrained to be regarded and analyzed at the block level, which in many cases is restrictive.
+
+§ 2.2 DYNAMIC LATENT FEATURE MODELS
+
+Prominent works on node-level representations of continuous-time networks have originally considered feature propagation within the discrete-time network topology [23] or extended the random-walk frameworks of [6] and [7] to the temporal case, yielding the Continuous-Time Dynamic Network Embeddings model (CTDNE), which outperforms the aforementioned original approaches in multiple temporal settings. CTDNE provides a single temporal-aware node embedding, meaning that network and node evolution cannot be visualized and explored. A more flexible approach was designed in [24] (DyRep), where temporal node embeddings are learned under a so-called latent mediation process, combining an association process describing the dynamics of the network with a communication process describing the dynamics on the network. The DyRep model uses deep recurrent architectures to parameterize the intensity function of the point process, and thus the embedding space suffers from a lack of explainability. Graph neural networks (GNNs) can be extended to the analysis of continuous-time networks via the Temporal Graph Network (TGN) [17], where the classical encoder-decoder architecture is coupled with a memory cell.
+
+In the context of latent feature dynamic network models, Gaussian Processes (GPs) have been used to characterize the smoothness of the temporal dynamics. This includes the discrete-time dynamic network model considered in [8], in which latent factors were endowed with a GP prior based on radial basis function kernels imposing temporal smoothness on the latent representation. The approach was extended in [9] to impose stochastic differential equations for the evolution of latent factors. In [15], GPs were used for the modeling of continuous-time dynamic networks based on Poisson and Hawkes processes, respectively, including exogenous as well as endogenous features specified by a radial basis function prior.
+
+Latent Distance Models (LDMs), as proposed in [25], have recently been shown to outperform prominent GRL methods using very low dimensions in the static case [26, 27]. LDMs for temporal networks have mostly been studied in the discrete case [10], mainly considering diffusion dynamics in order to make predictions, as first studied in [28] and extended with popularity and activity effects [11]. While all these models express homophily (a tendency where similar nodes are more likely to connect to each other than dissimilar ones) and transitivity ("a friend of a friend is a friend") in the dynamic case, they fail to account for continuous dynamics.
+
+Our work is inspired by these previous approaches for the modeling of dynamic complex networks. Specifically, we make use of the latent distance model formulation to account for homophily and transitivity, the Poisson process for the characterization of continuous-time dynamics, and a Gaussian Process prior based on the radial basis function kernel to account for temporal smoothness of the latent representation. Inspired by latent class models, we further impose a structured low-rank representation of nodes based on softly assigning nodes to communities exhibiting similar temporal dynamics. Notably, we exploit how LDMs, as opposed to GNN approaches in general, can provide easily interpretable yet accurate network representations in ultra-low $D = 2$ dimensional spaces, facilitating accurate dynamic network visualization and interpretation.
+
+§ 3 PROPOSED APPROACH
+
+Our main objective is to represent every node of a given network, $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$, in a low-dimensional metric space, $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$, in which the pairwise node proximities are characterized by their distances in a continuous-time latent space (Objective 3.1). Since we address continuous-time dynamic networks, the interactions among nodes can vary through time, with new links appearing or disappearing at any moment. More precisely, we will presently consider undirected continuous-time networks:
+
+Definition 3.1. A continuous-time dynamic undirected graph on a time interval ${\mathcal{I}}_{T} \mathrel{\text{ := }} \left\lbrack {0,T}\right\rbrack$ is an ordered pair $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ where $\mathcal{V} = \{ 1,\ldots ,N\}$ is a set of nodes and $\mathcal{E} \subseteq \left\{ {\left( {i,j,t}\right) \in {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \mid 1 \leq i < j \leq N}\right\}$ is a set of events or edges.
+
+We will use the symbol $N$ to denote the number of nodes in the vertex set and ${\mathcal{E}}_{ij}\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq \mathcal{E}$ to indicate the set of edges between nodes $i$ and $j$ occurring on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$. We note that the approach readily extends to directed and bipartite dynamic networks.
+
+§ 3.1 NONHOMOGENEOUS POISSON POINT PROCESSES
+
+Poisson Point Processes (PPPs) are a natural choice, widely used to model the number of random events occurring in time or their locations in space. PPPs are parameterized by a quantity known as the rate or intensity, indicating the average density of the points in the underlying space of the Poisson process. If the intensity depends on time or location, the point process is called a nonhomogeneous PPP (Defn. 3.2), and it is typically adopted for applications in which the event points are not uniformly distributed [29].
+
+Definition 3.2. [Nonhomogeneous PPP] A counting process $\{ M\left( t\right) ,t \geq 0\}$ is called a nonhomogeneous Poisson process with intensity function $\lambda \left( t\right) ,t \geq 0$ if (i) $M\left( 0\right) = 0$ ,(ii) $M\left( t\right)$ has independent increments: i.e., $\left( {M\left( {t}_{1}\right) - M\left( {t}_{0}\right) }\right) ,\ldots ,\left( {M\left( {t}_{B}\right) - M\left( {t}_{B - 1}\right) }\right)$ are independent random variables for each $0 \leq {t}_{0} < \cdots < {t}_{B}$ , and (iii) $M\left( {t}_{u}\right) - M\left( {t}_{l}\right)$ is Poisson distributed with mean ${\int }_{{t}_{l}}^{{t}_{u}}\lambda \left( t\right) {dt}$ .
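+
+For intuition, a brief sketch of simulating events from a nonhomogeneous Poisson process by thinning is given below; the intensity function is made up for illustration and is not part of the proposed model itself.
+
+```python
+# Simulate a nonhomogeneous Poisson process on [0, T] by thinning:
+# draw candidate arrivals from a homogeneous process with rate lambda_max and
+# keep each candidate t with probability lambda(t) / lambda_max.
+import numpy as np
+
+def sample_nhpp(intensity, T, lambda_max, seed=0):
+    rng = np.random.default_rng(seed)
+    events, t = [], 0.0
+    while True:
+        t += rng.exponential(1.0 / lambda_max)     # next candidate arrival
+        if t > T:
+            return np.array(events)
+        if rng.random() < intensity(t) / lambda_max:
+            events.append(t)
+
+# Example: an intensity that decays over time (purely illustrative).
+events = sample_nhpp(lambda t: 5.0 * np.exp(-t), T=1.0, lambda_max=5.0)
+```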
+
+In this paper, we consider continuous-time dynamic networks such that the events (or links/edges) among nodes can occur at any point in time. As we will examine in the following sections, these interactions do not necessarily exhibit any recurring characteristics; instead, they vary over time in many real networks. In this regard, we assume that the number of links, $M\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack$, between a pair of nodes $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ follows a nonhomogeneous Poisson point process (NHPP) with intensity function ${\lambda }_{ij}\left( t\right)$ on the time interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$, and for a given network $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$, the log-likelihood function can be written as
+
+$$
+\mathcal{L}\left( \Omega \right) \mathrel{\text{ := }} \log p\left( {\mathcal{G} \mid \Omega }\right) = \frac{1}{2}\mathop{\sum }\limits_{{\left( {i,j}\right) \in {\mathcal{V}}^{2}}}\left( {\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) - {\int }_{0}^{T}{\lambda }_{ij}\left( t\right) {dt}}\right) \tag{1}
+$$
+
+where ${\mathcal{E}}_{i,j} \subseteq \mathcal{E}\left\lbrack {0,T}\right\rbrack$ is the set of links of node pair $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ on the timeline ${\mathcal{I}}_{T} \mathrel{\text{ := }} \left\lbrack {0,T}\right\rbrack$ , and $\Omega = {\left\{ {\lambda }_{ij}\right\} }_{1 \leq i < j \leq N}$ indicates the set of intensity functions.
+
+§ 3.2 PROBLEM FORMULATION
+
+Without loss of generality, it can be assumed that the timeline starts from 0 and is bounded by $T \in {\mathbb{R}}^{ + }$ . Since the interactions among nodes can occur at any time point on ${\mathcal{I}}_{T} = \left\lbrack {0,T}\right\rbrack$ , we would like to identify an accurate continuous-time node representation $\{ r\left( {i,t}\right) {\} }_{\left( {i,t}\right) \in \mathcal{V} \times {\mathcal{I}}_{T}}$ defined using a low-dimensional latent space ${\mathbb{R}}^{D}\left( {D \ll N}\right)$ where $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow {\mathbb{R}}^{D}$ is a map indicating the embedding or representation of node $i \in \mathcal{V}$ at time point $t \in {\mathcal{I}}_{T}$ . We define our objective more formally as follows:
+
+Objective 3.1. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ be a continuous-time dynamic network and ${\lambda }^{ * } : {\mathcal{V}}^{2} \times {\mathcal{I}}_{T} \rightarrow \mathbb{R}$ be an unknown intensity function of a nonhomogeneous Poisson point process. For a given metric space $\left( {\mathrm{X},{d}_{\mathrm{X}}}\right)$, our purpose is to learn a function or representation $\mathbf{r} : \mathcal{V} \times {\mathcal{I}}_{T} \rightarrow \mathrm{X}$ satisfying
+
+$$
+\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{d}_{\mathrm{X}}\left( {\mathbf{r}\left( {i,t}\right) ,\mathbf{r}\left( {j,t}\right) }\right) {dt} \approx \frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}{\mathbf{\lambda }}^{ * }\left( {i,j,t}\right) {dt} \tag{2}
+$$
+
+for all $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ pairs, and for every interval $\left\lbrack {{t}_{l},{t}_{u}}\right\rbrack \subseteq {\mathcal{I}}_{T}$ .
+
+In this work, we consider the Euclidean metric on a $D$ -dimensional real vector space, $\mathrm{X} \mathrel{\text{ := }} {\mathbb{R}}^{D}$ and the embedding of node $i \in \mathcal{V}$ at time $t \in {\mathcal{I}}_{T}$ will be denoted by ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ .
+
+§ 3.3 PIVEM: PIECEWISE-VELOCITY MODEL FOR LEARNING CONTINUOUS-TIME EMBEDDINGS
+
+We learn continuous-time node representations by employing the canonical exponential link-function defining the intensity function as
+
+$$
+{\lambda }_{ij}\left( t\right) \mathrel{\text{ := }} \exp \left( {{\beta }_{i} + {\beta }_{j} - {\begin{Vmatrix}{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) \end{Vmatrix}}^{2}}\right) \tag{3}
+$$
+
+where ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ and ${\beta }_{i} \in \mathbb{R}$ denote the embedding vector at time $t$ and the bias term of node $i \in \mathcal{V}$, respectively. For given bias terms, it can be seen by Lemma 3.1 that this definition of the intensity function provides a guarantee for our goal given in Equation (2), and a pair of nodes having a high number of interactions can be positioned close in the latent space. Although the squared Euclidean distance used in Equation (3) is not a metric, we impose it as a distance [27, 30].
+
+Lemma 3.1. For given fixed bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ , the node embeddings, ${\left\{ {\mathbf{r}}_{i}\left( t\right) \right\} }_{i \in \mathcal{V}}$ , learned by optimizing the objective function given in Equation (1) satisfy
+
+$$
+\left| {\frac{1}{\left( {t}_{u} - {t}_{l}\right) }{\int }_{{t}_{l}}^{{t}_{u}}\begin{Vmatrix}{{\mathbf{r}}_{i}\left( t\right) - {\mathbf{r}}_{j}\left( t\right) }\end{Vmatrix}{dt}}\right| \leq \sqrt{\left( {{\beta }_{i} + {\beta }_{j}}\right) - \log \left( {{p}_{ij}\frac{{m}_{ij}}{\left( {t}_{u} - {t}_{l}\right) }}\right) }\;\text{ for all }\left( {i,j}\right) \in {\mathcal{V}}^{2}
+$$
+
+where ${p}_{ij}$ is the probability of having more than ${m}_{ij}$ links between $i$ and $j$ on the interval $\left\lbrack {{t}_{l},{t}_{u}}\right)$ .
+
+Proof. Please see the Appendix for the proof.
+
+Notably, constraining the approximation of the unknown intensity function by a metric space imposes the homophily property (i.e., similar nodes in the graph are placed close to each other in the embedding space). When we have a pair of nodes exhibiting a high number of interactions, they must have a high average intensity, so the term ${p}_{ij}\left( {{m}_{ij}/\left( {{t}_{u} - {t}_{l}}\right) }\right)$ in Lemma 3.1 converges to 1, and the average distance between the nodes is bounded by the sum of their bias terms. It can also be seen that the transitivity property holds up to some extent (i.e., if node $i$ is similar to $j$ and $j$ similar to $k$, then $i$ should also be similar to $k$) since we can bound the squared Euclidean distance [27, 31].
+
+Importantly, for a dynamic embedding, we would like the embeddings of a pair of nodes to be close to each other when they have many interactions during a particular time interval and far away from each other when they have few or no links. Note that the bias terms ${\left\{ {\beta }_{i}\right\} }_{i \in \mathcal{V}}$ are responsible for node-specific effects such as degree heterogeneity [31, 32], and they provide additional flexibility to the model by acting as scaling factors for the corresponding nodes so that, for instance, a hub node can have a high number of simultaneous interactions without getting close to the others in the latent space.
+
+Since our primary purpose is to learn continuous node representations in a latent space, we define the representation of node $i \in \mathcal{V}$ at time $t$ based on a linear model by ${\mathbf{r}}_{i}\left( t\right) \mathrel{\text{ := }} {\mathbf{x}}_{i}^{\left( 0\right) } + {\mathbf{v}}_{i}t$. Here, ${\mathbf{x}}_{i}^{\left( 0\right) }$ can be considered as the initial position and ${\mathbf{v}}_{i}$ the velocity of the corresponding node. However, such a linear model provides only a limited capacity for tracking the nodes and modeling their representations. Therefore, we reinterpret the given timeline ${\mathcal{I}}_{T} \mathrel{\text{ := }} \left\lbrack {0,T}\right\rbrack$ by dividing it into $B$ equally-sized bins, $\left\lbrack {{t}_{b - 1},{t}_{b}}\right)$, $\left( {1 \leq b \leq B}\right)$, such that $\left\lbrack {0,T}\right\rbrack = \left\lbrack {0,{t}_{1}}\right) \cup \cdots \cup \left\lbrack {{t}_{B - 1},{t}_{B}}\right\rbrack$ where ${t}_{0} \mathrel{\text{ := }} 0$ and ${t}_{B} \mathrel{\text{ := }} T$. By applying the linear model within each subinterval, we obtain a piecewise linear approximation of general intensity functions, strengthening the model's capacity. As a result, we can write the position of node $i$ at time $t \in {\mathcal{I}}_{T}$ as follows:
+
+$$
+{\mathbf{r}}_{i}\left( t\right) \mathrel{\text{ := }} {\mathbf{x}}_{i}^{\left( 0\right) } + {\Delta }_{B}{\mathbf{v}}_{i}^{\left( 1\right) } + {\Delta }_{B}{\mathbf{v}}_{i}^{\left( 2\right) } + \cdots + \left( t \bmod {\Delta }_{B}\right) {\mathbf{v}}_{i}^{\left( \left\lfloor t/{\Delta }_{B}\right\rfloor + 1\right) } \tag{4}
+$$
+
+where ${\Delta }_{B}$ indicates the bin width, $T/B$ , and $\bmod$ is the modulo operation used to compute the time elapsed within the current bin. Note that the piecewise interpretation of the timeline allows us to better track the paths of the nodes in the embedding space, and Theorem 3.2 shows that we can obtain more accurate trails by increasing the number of bins.
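+
+To make Equation (4) concrete, the following sketch (with illustrative names such as `x0` and `velocities`, not taken from the paper's implementation) evaluates the piecewise-linear position of a single node at an arbitrary time:
+
+```python
+import numpy as np
+
+def position(x0, velocities, t, T=1.0):
+    """Sketch of Equation (4): piecewise-linear position of one node at time t.
+
+    x0         : (D,) initial position of the node.
+    velocities : (B, D) velocity of the node in each of the B equally sized bins.
+    T          : length of the timeline [0, T].
+    """
+    B, _ = velocities.shape
+    delta = T / B                                  # bin width Delta_B
+    b = min(int(t // delta), B - 1)                # index of the bin containing t
+    r = x0 + delta * velocities[:b].sum(axis=0)    # full contribution of the earlier bins
+    r = r + (t - b * delta) * velocities[b]        # partial contribution of the current bin
+    return r
+```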
+
+Theorem 3.2. Let $\mathbf{f}\left( t\right) : \left\lbrack {0,T}\right\rbrack \rightarrow {\mathbb{R}}^{D}$ be a continuous embedding of a node. For any given $\epsilon > 0$ , there exists a continuous, piecewise-linear node embedding, $\mathbf{r}\left( t\right)$ , satisfying $\parallel \mathbf{f}\left( t\right) - \mathbf{r}\left( t\right) {\parallel }_{2} < \epsilon$ for all $t \in \left\lbrack {0,T}\right\rbrack$ where $\mathbf{r}\left( t\right) \mathrel{\text{ := }} {\mathbf{r}}^{\left( b\right) }\left( t\right)$ for all $\left( {b - 1}\right) {\Delta }_{B} \leq t < b{\Delta }_{B},\mathbf{r}\left( t\right) \mathrel{\text{ := }} {\mathbf{r}}^{\left( B\right) }\left( t\right)$ for $t = T$ and ${\Delta }_{B} = T/B$ for some $B \in {\mathbb{N}}^{ + }$ .
+
+Proof. Please see the appendix for the proof.
+
+Prior probability. In order to control the smoothness of the motion in the latent space, we employ a Gaussian Process (GP) [33] prior over the initial positions ${\mathbf{x}}^{\left( 0\right) } \in {\mathbb{R}}^{N \times D}$ and velocity vectors $\mathbf{v} \in {\mathbb{R}}^{B \times N \times D}$ . Hence, we suppose that $\operatorname{vect}\left( {\mathbf{x}}^{\left( 0\right) }\right) \oplus \operatorname{vect}\left( \mathbf{v}\right) \sim \mathcal{N}\left( {\mathbf{0},\mathbf{\Sigma }}\right)$ where $\mathbf{\Sigma } \mathrel{\text{ := }} {\lambda }^{2}\left( {{\sigma }_{\Sigma }^{2}\mathbf{I} + \mathbf{K}}\right)$ is the covariance matrix with a scaling factor $\lambda \in \mathbb{R}$ , ${\sigma }_{\Sigma } \in \mathbb{R}$ denotes the noise of the covariance, and $\operatorname{vect}\left( \cdot \right)$ is the vectorization operator stacking the columns of its argument into a single vector. To reduce the number of parameters of the prior and enable scalable inference, we define $\mathbf{K}$ as a Kronecker product of three matrices, $\mathbf{K} \mathrel{\text{ := }} \mathbf{B} \otimes \mathbf{C} \otimes \mathbf{D}$ , respectively accounting for temporal-, node-, and dimension-specific covariance structures. Specifically, $\mathbf{B} \mathrel{\text{ := }} \left\lbrack {c}_{{\mathbf{x}}^{0}}\right\rbrack \oplus {\left\lbrack \exp \left( -{\left( {c}_{b} - {c}_{\widetilde{b}}\right) }^{2}/2{\sigma }_{\mathbf{B}}^{2}\right) \right\rbrack }_{1 \leq b,\widetilde{b} \leq B}$ is a $\left( {B + 1}\right) \times \left( {B + 1}\right)$ matrix intended to capture the smoothness of the velocities across time bins, where ${c}_{b} = \frac{{t}_{b - 1} + {t}_{b}}{2}$ is the center of the corresponding bin; it combines the radial basis function (RBF) kernel with a scalar term ${c}_{{\mathbf{x}}^{0}}$ that decouples the initial positions from the structure of the velocities. The node-specific matrix, $\mathbf{C} \in {\mathbb{R}}^{N \times N}$ , is constructed as a product of a low-rank matrix, $\mathbf{C} \mathrel{\text{ := }} {\mathbf{{QQ}}}^{\top }$ where the row sums of $\mathbf{Q} \in {\mathbb{R}}^{N \times k}$ equal $1$ $\left( {k \ll N}\right)$ , and it aims to extract covariation patterns in the motion of the nodes. Finally, we simply set the dimension matrix to the identity, $\mathbf{D} \mathrel{\text{ := }} \mathbf{I} \in {\mathbb{R}}^{D \times D}$ , in order to have uncorrelated dimensions.
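+
+As a rough illustration of this prior, the sketch below assembles $\mathbf{\Sigma } = {\lambda }^{2}\left( {{\sigma }_{\Sigma }^{2}\mathbf{I} + \mathbf{B} \otimes \mathbf{C} \otimes \mathbf{D}}\right)$ from its three factors; the argument names are illustrative, and a practical implementation would exploit the Kronecker structure instead of forming the dense matrix:
+
+```python
+import numpy as np
+
+def prior_covariance(bin_centers, Q, dim, c_x0=1.0, sigma_B=1.0, sigma_cov=0.1, lam=1.0):
+    """Sketch of the GP prior covariance (all hyper-parameter names are illustrative).
+
+    bin_centers : (B,) centers of the time bins for the RBF kernel over velocities.
+    Q           : (N, k) low-rank factor with rows summing to one, so C = Q Q^T.
+    dim         : embedding dimension D; the dimension block is the identity.
+    """
+    B = len(bin_centers)
+    diff = bin_centers[:, None] - bin_centers[None, :]
+    rbf = np.exp(-diff ** 2 / (2.0 * sigma_B ** 2))        # (B, B) velocity block
+    B_mat = np.block([                                      # (B+1, B+1): x^(0) decoupled
+        [np.array([[c_x0]]), np.zeros((1, B))],
+        [np.zeros((B, 1)), rbf],
+    ])
+    C_mat = Q @ Q.T                                         # (N, N) node block
+    D_mat = np.eye(dim)                                     # uncorrelated dimensions
+    K = np.kron(np.kron(B_mat, C_mat), D_mat)
+    return lam ** 2 * (sigma_cov ** 2 * np.eye(K.shape[0]) + K)
+```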
+
+To sum up, we can express our objective relying on the piecewise velocities with the prior as follows:
+
+$$
+\widehat{\Omega } = \underset{\Omega }{\arg \max }\frac{1}{2}\mathop{\sum }\limits_{{\left( {i,j}\right) \in {\mathcal{V}}^{2}}}\left( {\mathop{\sum }\limits_{{{e}_{ij} \in {\mathcal{E}}_{ij}}}\log {\lambda }_{ij}\left( {e}_{ij}\right) - {\int }_{0}^{T}{\lambda }_{ij}\left( t\right) {dt}}\right) + \log \mathcal{N}\left( {\left\lbrack \begin{matrix} {\mathbf{x}}^{\left( 0\right) } \\ \mathbf{v} \end{matrix}\right\rbrack ;\mathbf{0},\mathbf{\Sigma }}\right) \tag{5}
+$$
+
+where $\Omega = \left\{ {\mathbf{\beta },{\mathbf{x}}^{\left( 0\right) },\mathbf{v},{\sigma }_{\Sigma },{\sigma }_{\mathbf{B}},{c}_{{\mathbf{x}}^{0}},\mathbf{Q}}\right\}$ is the set of hyper-parameters, and ${\lambda }_{ij}\left( t\right)$ is the intensity function as defined in Equation (3) based on the node embeddings, ${\mathbf{r}}_{i}\left( t\right) \in {\mathbb{R}}^{D}$ .
+
+### 3.4 Optimization
+
+Our objective given in Equation (5) is not a convex function, so the learning strategy we follow is of great significance both for escaping local minima and for the quality of the representations. We start by randomly initializing the model's hyper-parameters from $\left\lbrack {-1,1}\right\rbrack$ , except for the velocity tensor, which is set to 0 at the beginning. We adopt a sequential strategy in learning these parameters. In other words, we first optimize the initial positions and bias terms together, $\left\{ {{\mathbf{x}}^{\left( 0\right) },\mathbf{\beta }}\right\}$ , for a given number of epochs; then, we include the velocity tensor, $\{ \mathbf{v}\}$ , in the optimization process and repeat the training for the same number of epochs. Finally, we add the prior parameters and learn all model hyper-parameters together. We employ the Adam optimizer [34] with a learning rate of 0.1.
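+
+The staged schedule can be summarized by the hedged sketch below; `model`, its attributes, and `log_likelihood` are assumed placeholders for an implementation of Equation (5), and only the grouping of parameters into stages reflects the strategy described above:
+
+```python
+import torch
+
+def train_sequentially(model, log_likelihood, epochs_per_stage=100, lr=0.1):
+    """Stage 1: x0 and beta; Stage 2: add the velocities v; Stage 3: add the prior parameters."""
+    stages = [
+        [model.x0, model.beta],
+        [model.x0, model.beta, model.v],
+        [model.x0, model.beta, model.v, model.sigma, model.sigma_B, model.c_x0, model.Q],
+    ]
+    for params in stages:
+        optimizer = torch.optim.Adam(params, lr=lr)
+        for _ in range(epochs_per_stage):
+            optimizer.zero_grad()
+            loss = -log_likelihood(model)   # maximize the objective in Equation (5)
+            loss.backward()
+            optimizer.step()
+```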
+
+Computational issues and complexity. Note that we need to evaluate the log-intensity term in Equation (5) for each $\left( {i,j}\right) \in {\mathcal{V}}^{2}$ and event time ${e}_{ij} \in {\mathcal{E}}_{ij}$ . Therefore, the computational cost required for the whole network is bounded by $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}\left| \mathcal{E}\right| }\right)$ . However, we can alleviate the computational cost by pre-computing certain coefficients at the beginning of the optimization process so that the complexity can be reduced to $\mathcal{O}\left( {{\left| \mathcal{V}\right| }^{2}B}\right)$ . We also have an explicit formula for the computation of the integral term since we utilize the squared Euclidean distance so that it can be computed in at most $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{2}\right)$ operations. Instead of optimizing the whole network at once, we apply a batching strategy over the set of nodes in order to reduce the memory requirements. As a result, we sample $\mathcal{S}$ nodes for each epoch. Hence, the overall complexity for the log-likelihood function is $\mathcal{O}\left( {{\mathcal{S}}^{2}B\mathcal{I}}\right)$ where $\mathcal{I}$ is the number of epochs and $\mathcal{S} \ll \left| \mathcal{V}\right|$ . Similarly, the prior can be computed in at most $\mathcal{O}\left( {{B}^{3}{D}^{3}{K}^{2}\mathcal{S}}\right)$ operations by using various algebraic properties such as Woodbury matrix identity and Matrix Determinant lemma [35]. To sum up, the complexity of the proposed approach is $\mathcal{O}\left( {B{\mathcal{S}}^{2}\mathcal{I} + {B}^{3}{D}^{3}{K}^{2}\mathcal{S}\mathcal{I}}\right)$ (Please see the appendix for the derivations and other details).
+
+## 4 Experiments
+
+In this section, we extensively evaluate the performance of the proposed PIecewise-VElocity Model against well-known baselines on challenging tasks over datasets of various sizes and types. We consider all networks as undirected, and the event times of links are scaled to the interval $\left\lbrack {0,1}\right\rbrack$ for the consistency of the experiments. We use the finest granularity level of the given input timestamps, such as seconds or milliseconds. We provide a brief summary of the networks below; more details and various statistics are reported in Table 4 in the appendix. For all the methods, we learn node embeddings in a two-dimensional space $\left( {D = 2}\right)$ since one of the objectives of this work is to produce dynamic node embeddings facilitating human insights into a complex network.
+
+Experimental Setup. We first split the networks into two sets, such that the events occurring in the last 10% of the timeline are held out for prediction. Then, we randomly choose 10% of the node pairs among all possible dyads in the network for the graph completion task, and we ensure that each node in the residual network has at least one event, keeping the number of nodes fixed. If a pair of nodes only contains events in the prediction set and these nodes do not have any other links during the training period, they are removed from the network.
+
+For conducting the experiments, we generate the labeled dataset of links as follows: For the positive samples, we construct small intervals of length $2 \times {10}^{-3}$ around each event time (i.e., $\left\lbrack {e - {10}^{-3},e + {10}^{-3}}\right\rbrack$ where $e$ is an event time). We randomly sample an equal number of time points and corresponding node pairs to form negative instances. If a sampled time does not fall inside the interval of a positive sample for the same dyad, we follow the same strategy to build an interval around it, and it is considered a negative instance; otherwise, we sample another time point and dyad. Note that some networks contain a very high number of links, which leads to computational problems. Therefore, we subsample ${10}^{4}$ positive and negative instances whenever a network contains more than this.
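+
+A minimal sketch of this sampling procedure is given below, assuming event times are already scaled to $\left\lbrack {0,1}\right\rbrack$ and dyads are stored with $i < j$ ; the function and argument names are illustrative:
+
+```python
+import numpy as np
+
+def build_instances(events, n_nodes, half_width=1e-3, max_samples=10_000, seed=0):
+    """events: list of (i, j, t) links; each yields a positive interval [t - h, t + h]."""
+    rng = np.random.default_rng(seed)
+    positives = [(i, j, t - half_width, t + half_width) for i, j, t in events][:max_samples]
+    pos_times = {}
+    for i, j, t in events:
+        pos_times.setdefault((i, j), []).append(t)
+    negatives = []
+    while len(negatives) < len(positives):
+        i, j = sorted(rng.choice(n_nodes, size=2, replace=False).tolist())
+        t = rng.uniform(0.0, 1.0)
+        # reject a sampled time falling inside a positive interval of the same dyad
+        if any(abs(t - e) <= half_width for e in pos_times.get((i, j), [])):
+            continue
+        negatives.append((i, j, t - half_width, t + half_width))
+    return positives, negatives
+```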
+
+Table 1: The performance evaluation for the network reconstruction experiment over various datasets (each cell reports AUC-ROC / AUC-PR).
+
+| Method | Synthetic(π) | Synthetic(μ) | College | Contacts | Email | Forum | Hypertext |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| LDM | .563 / .539 | .669 / .642 | .951 / .944 | .860 / .835 | .954 / .948 | .909 / .897 | .818 / .797 |
+| NODE2VEC | .519 / .507 | .503 / .509 | .711 / .655 | .812 / .756 | .853 / .828 | .677 / .619 | .696 / .648 |
+| CTDNE | .518 / .522 | .499 / .505 | .689 / .656 | .599 / .584 | .630 / .645 | .643 / .608 | .540 / .545 |
+| PIVEM | .762 / .713 | .905 / .869 | .948 / .948 | .938 / .938 | .978 / .977 | .907 / .902 | .830 / .823 |
+
+Synthetic datasets. We generate two artificial networks in order to evaluate the behavior of the models in controlled experimental settings. (i) Synthetic $\left( \pi \right)$ is sampled from the prior distribution stated in Subsection 3.2. The hyper-parameters $\mathbf{\beta }$ , $K$ , and $B$ are set to 0, 20, and 100, respectively. (ii) Synthetic $\left( \mu \right)$ is constructed based on temporal block structures. The timeline is divided into 10 sub-intervals, and the nodes are randomly split into 20 groups for each interval. The links within each group are sampled from the Poisson distribution with a constant intensity of 5.
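+
+One possible way to generate such a temporal block-structured network is sketched below; the placement of event times within each interval is an assumption, since only the group structure and the Poisson intensity are specified above:
+
+```python
+import numpy as np
+
+def sample_block_network(n_nodes=100, n_intervals=10, n_groups=20, intensity=5.0, seed=0):
+    """Sketch of the Synthetic(mu) construction (all argument names are illustrative)."""
+    rng = np.random.default_rng(seed)
+    events = []
+    for b in range(n_intervals):
+        t_lo, t_hi = b / n_intervals, (b + 1) / n_intervals
+        groups = rng.integers(0, n_groups, size=n_nodes)   # new random groups per interval
+        for i in range(n_nodes):
+            for j in range(i + 1, n_nodes):
+                if groups[i] != groups[j]:
+                    continue
+                n_links = rng.poisson(intensity)           # links within the interval
+                times = rng.uniform(t_lo, t_hi, size=n_links)
+                events.extend((i, j, t) for t in times)
+    return events
+```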
+
+Real Networks. The (iii) Hypertext network [36] was built from the radio badge records of conference attendees over 2.5 days, where each event indicates 20 seconds of active contact. Similarly, (iv) the Contacts network [37] was generated from the interactions of individuals in an office environment. (v) Forum [38] comprises the activity data of university students on an online social forum system. (vi) CollegeMsg [39] contains the private messages exchanged among students on an online social platform. Finally, (vii) Eu-Email [40] was constructed from the e-mails exchanged among members of European research institutions.
+
+Baselines. We compare the performance of our method with three baselines. We include LDM with Poisson rate and node-specific biases [31, 41] since it is a static method with the formulation closest to ours. NODE2VEC (or N2V) [7], a very well-known GRL method, relies on the explicit generation of random walks and learns node embeddings from node proximities within these walks. In our experiments, we tune its parameters (p, q) over $\{ {0.25},{0.5},1,2,4\}$ . Since it can also run on weighted networks, we additionally constructed a weighted graph based on the number of links through time and report the best score over both versions of the networks. CTDNE [42] is a dynamic node embedding approach performing temporal random walks over the network. However, it cannot produce instantaneous node representations; it only produces embeddings for a given time. Therefore, we have utilized the last time point of the training set to obtain the representations. We provide further details about the parameter settings of the baselines in the appendix.
+
+For our method, we set the parameter $K = {25}$ and the bin count $B = {100}$ to have enough capacity to track node interactions. For the regularization term $\left( \lambda \right)$ of the prior, we first mask ${20}\%$ of the dyads in the optimization of Equation (5). Furthermore, we train the model by starting with $\lambda = {10}^{6}$ , and then we reduce it to one-tenth after every 100 epochs. The same procedure is repeated until $\lambda = {10}^{-6}$ , and we choose the $\lambda$ value maximizing the log-likelihood of the masked pairs. The final embeddings are then obtained by performing this annealing strategy without any mask up to the chosen $\lambda$ value. We repeat this procedure 5 times and consider the best-performing run in learning the embeddings. The relative standard deviation of the experiments is always less than 0.5, and Figure 1a shows an illustrative example of tuning $\lambda$ over the Synthetic $\left( \pi \right)$ dataset with 5 random runs.
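+
+The annealing procedure for $\lambda$ can be summarized by the following sketch, where `fit` and `masked_loglik` are assumed placeholders for training with a given prior weight and for scoring the held-out masked dyads:
+
+```python
+def anneal_lambda(fit, masked_loglik, epochs_per_step=100):
+    """Anneal lambda from 1e6 down to 1e-6 and keep the best value on the masked dyads."""
+    lam, scores = 1e6, []
+    while lam >= 1e-6:
+        fit(lam, epochs_per_step)             # train for a fixed number of epochs at this lambda
+        scores.append((lam, masked_loglik()))
+        lam /= 10.0                           # reduce lambda to one-tenth
+    best_lam = max(scores, key=lambda pair: pair[1])[0]
+    return best_lam
+```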
+
+For the performance comparison of the methods, we report the Area Under Curve (AUC) scores for the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves [43]. For LDM and PIVEM, we use the intensity computed for a given instance as the similarity measure of the node pair. Since NODE2VEC and CTDNE rely on the SkipGram architecture [44], we use cosine similarity for them.
+
+Network Reconstruction. Our goal is to see how accurately a model can capture the interaction patterns among nodes and generate embeddings exhibiting their temporal relationships in a latent space. In this regard, we train the models on the residual network and generate sample sets as described previously. The performance of the models is reported in Table 1. Comparing the performance of PIVEM against the baselines, we observe favorable results across all networks, highlighting the importance and ability of PIVEM to account for and detect structure in a continuous time manner.
+
+Table 2: The performance evaluation for the network completion experiment over various datasets (each cell reports AUC-ROC / AUC-PR).
+
+| Method | Synthetic(π) | Synthetic(μ) | College | Contacts | Email | Forum | Hypertext |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| LDM | .535 / .529 | .646 / .631 | .931 / .926 | .836 / .799 | .948 / .942 | .863 / .858 | .761 / .738 |
+| NODE2VEC | .519 / .511 | .747 / .677 | .685 / .637 | .787 / .744 | .818 / .777 | .635 / .592 | .596 / .588 |
+| CTDNE | .522 / .527 | .499 / .503 | .647 / .599 | .658 / .656 | .571 / .593 | .617 / .592 | .464 / .485 |
+| PIVEM | .750 / .696 | .874 / .851 | .935 / .934 | .873 / .864 | .951 / .953 | .879 / .875 | .770 / .712 |
+
+Table 3: The performance evaluation for the link prediction experiment over various datasets (each cell reports AUC-ROC / AUC-PR).
+
+| Method | Synthetic(π) | Synthetic(μ) | College | Contacts | Email | Forum | Hypertext |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| LDM | .562 / .539 | .498 / .642 | .951 / .944 | .860 / .835 | .954 / .948 | .909 / .897 | .819 / .797 |
+| NODE2VEC | .518 / .506 | .498 / .502 | .705 / .676 | .783 / .716 | .825 / .807 | .635 / .605 | .748 / .739 |
+| CTDNE | .514 / .526 | .457 / .481 | .666 / .643 | .632 / .623 | .629 / .629 | .621 / .599 | .508 / .532 |
+| PIVEM | .716 / .689 | .474 / .485 | .891 / .887 | .876 / .884 | .964 / .964 | .894 / .895 | .756 / .767 |
+
+Network Completion. The network completion experiment is a relatively more challenging task than the reconstruction. Since we hide 10% of the network, the dyads containing events are also viewed as non-link pairs, and the temporal models should place these nodes in distant locations of the embedding space. However, it might be possible to predict these events accurately if the network links have temporal triangle patterns through certain time intervals. In Table 2, we report the AUC-ROC and PR-AUC scores for the network completion experiment. Once more, PIVEM outperforms the baselines (in most cases significantly). We again discovered evidence supporting the necessity for modeling and tracking temporal networks with time-evolving embedding representations.
+
+Future Prediction. Finally, we examine the performance of the models on the future prediction task. Here, the models are asked to forecast the final ${10}\%$ of the timeline. For PIVEM, the similarity between nodes is obtained by calculating the intensity function over the timeline of the training set (i.e., from 0 to 0.9), and we keep our previously described strategies for the baselines since they generate embeddings only for the last training time. Table 3 presents the performances of the models. It is noteworthy that while PIVEM outperforms the baselines significantly on the Synthetic $\left( \pi \right)$ network, it does not show promising results on Synthetic $\left( \mu \right)$ . Since the first network is compatible with our model, it successfully learns the dominant link pattern of the network. However, the second network conflicts with our model: it forms a completely different structure in every tenth of the timeline. For the real datasets, we observe mostly on-par results, especially with LDM, as some real networks contain link patterns that become "static" with respect to the future prediction task.
+
+We have previously described how we set the prior coefficient, $\lambda$ , and now we will examine the influence of the other hyperparameters over the $\operatorname{Synthetic}\left( \pi \right)$ dataset for network reconstruction.
+
+Influence of dimension size (D). We report the AUC-ROC and AUC-PR scores in Figure 1b. When we increase the dimension size, we observe a steady increase in performance. This is not a surprising result because we also increase the model's capacity with the dimension. However, the two-dimensional space still provides comparable performance in the experiments, facilitating human insights into networks' complex, evolving structures.
+
+Figure 2: Comparisons of the ground truth and learned representations in two-dimensional space.
+
+Influence of bin count (B). Figure 1c demonstrates the effect of the number of bins on the network reconstruction task. Since the Synthetic $\left( \pi \right)$ network was generated with 100 bins, the performance stabilizes around ${2}^{6}$ bins, indicating that PIVEM then reaches enough capacity to model the interactions among nodes.
+
+Latent Embedding Animation. Although many GRL methods show high performance in downstream tasks, they generally require high-dimensional spaces, so a post-processing step has to be applied in order to visualize the node representations in a low-dimensional space. However, such processing distorts the embeddings, which can lead a practitioner to inaccurate conclusions about the data.
+
+As we have seen in the experimental evaluations, our proposed approach successfully learns embeddings in a two-dimensional space, and it also produces continuous-time representations. Therefore, it offers the ability to animate how the network evolves through time, which can play a crucial role in grasping the underlying characteristics of the networks. As an illustrative example, Figure 2 compares the ground truth representations of Synthetic $\left( \pi \right)$ with the learned ones. The synthetic network consists of small communities of 5 nodes, and each color indicates one of these groups. Although the problem does not have a unique solution, it can be seen that our model successfully captures the clustering patterns in the network. We refer the reader to the supplementary materials for the full animation.
+
+## 5 Conclusion and Limitations
+
+In this paper, we have proposed a novel continuous-time dynamic network embedding approach, the Piecewise Velocity Model (PIVEM). Its performance has been examined in various experiments, such as network reconstruction and completion tasks over various networks, against well-known baselines. We demonstrated that it can accurately embed the nodes into a two-dimensional space. Therefore, it can be directly utilized to animate the learned node embeddings, and it can be beneficial in extracting the networks' underlying characteristics and foreseeing how they will evolve through time. We also showed that the model can scale up to large networks.
+
+Although our model successfully learns continuous-time representations, it is unable to capture certain temporal patterns in the network, such as periodicity, with the current GP structure. Therefore, we plan to employ different kernels in the prior instead of the RBF, such as periodic kernels. The optimization strategy of the proposed method might also be improved to better escape local minima. As a possible future direction, the algorithm can be adapted to other graph types, such as directed and multi-layer networks.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..0847d30ea3fe3eea56d84c09bbd5f09769f452e8
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,564 @@
+# On the Expressiveness and Generalization of Hypergraph Neural Networks
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of HyperGNNs, that is, the set of functions they can realize. Our result is a hierarchy of problems they can solve, defined in terms of hyperparameters such as depth and edge arity. Next, we analyze the learning properties of these neural networks, focusing in particular on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by empirical results.
+
+## 1 Introduction
+
+Reasoning over graph-structured data is an important task in many applications, including molecule analysis, social network modeling, and knowledge graph reasoning [1-3]. While we have seen great success of various relational neural networks, such as Graph Neural Networks [GNNs; 4] and Neural Logic Machines [NLM; 5], in a variety of applications [6-8], we do not yet have a full understanding of how different design parameters, such as the depth of the neural network, affect the expressiveness of these models, or how effectively these models generalize from limited data.
+
+This paper analyzes the expressiveness and generalization of relational neural networks applied to hypergraphs, which are graphs with edges connecting more than two nodes. We formally show the "if and only if" conditions for the expressive power with respect to the edge arity. That is, $k$ -ary hypergraph neural networks are sufficient and necessary for realizing FOC- $k$ , a fragment of first-order logic involving at most $k$ variables. This is a helpful result because we can now determine whether a specific hypergraph neural network can solve a problem by understanding what form of logic formula can represent the solution to this problem. Next, we formally describe the relationship between expressiveness and non-constant-depth networks. We state a conjecture about the "depth hierarchy" and connect a potential proof of this conjecture to the distributed computing literature. Our results highlight that even when the inputs and outputs of models have only unary and binary relations, allowing higher-arity intermediate hyperedge representations increases the expressiveness.
+
+Furthermore, we prove that, under certain realistic assumptions, it is possible to train a hypergraph neural network on a finite set of small graphs such that it generalizes to arbitrarily large graphs. This ability is a result of the weight-sharing nature of hypergraph neural networks. We hope our work can serve as a foundation for designing hypergraph neural networks: to solve a specific problem, what arity is needed? What depth is needed? Will the model generalize structurally (i.e., to larger graphs)? Our theoretical results on learning are further supported by experiments that empirically demonstrate the theorems.
+
+## 2 Hypergraph Reasoning Problems and Hypergraph Neural Networks
+
+A hypergraph representation $G$ is a tuple (V, X), where $V$ is a set of entities (nodes), and $X$ is a set of hypergraph representation functions. Specifically, $X = \left\{ {{X}_{0},{X}_{1},{X}_{2},\cdots ,{X}_{k}}\right\}$ , where ${X}_{j} : \left( {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right) \rightarrow \mathcal{S}$ is a function mapping every tuple of $j$ nodes to a value. We call $j$ the arity of the hyperedge and $k$ the maximum arity of the input hyperedges. The range $\mathcal{S}$ can be any set of discrete labels describing relation types, a scalar number (e.g., the length of an edge), or a vector. In general, we will use the arity-0 representation function ${X}_{0}\left( \varnothing \right) \rightarrow \mathcal{S}$ to represent any global properties of the graph as a whole.
+
+A graph reasoning function $f$ is a mapping from a hypergraph representation $G = \left( {V, X}\right)$ to another hypergraph representation function $Y$ on $V$ . As concrete examples, asking whether a graph is fully connected is a graph classification problem, where the output $Y = \left\{ {Y}_{0}\right\}$ and ${Y}_{0}\left( \varnothing \right) \rightarrow {\mathcal{S}}^{\prime } = \{ 0,1\}$ is a global label; finding the set of disconnected subgraphs of size $k$ is a $k$ -ary hyperedge classification problem, where the output $Y = \left\{ {Y}_{k}\right\}$ assigns a label to each $k$ -ary hyperedge.
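+
+For concreteness, a hypergraph representation $G = (V, X)$ can be stored as a mapping from arities to dictionaries over node tuples; the sketch below is only illustrative and not tied to any particular implementation:
+
+```python
+from dataclasses import dataclass, field
+from typing import Any, Dict, Tuple
+
+@dataclass
+class HypergraphRepresentation:
+    """X[j] maps every j-tuple of nodes to a value in S; the empty tuple stores arity-0
+    (global) properties of the graph."""
+    nodes: set
+    X: Dict[int, Dict[Tuple, Any]] = field(default_factory=dict)
+
+    def value(self, *vs):
+        """Return X_j(v_1, ..., v_j) for the given tuple, or None if it is unset."""
+        return self.X.get(len(vs), {}).get(tuple(vs))
+
+# Example: a graph-classification output Y_0() = 1 is a single arity-0 entry, while an
+# edge label Y_2(u, v) is an arity-2 entry.
+```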
+
+There are two main motivations and constructions of a neural network applied to graph reasoning problems: message-passing-based and first-order-logic-inspired. Both approaches construct the computation graph layer by layer. The input to the entire neural network consists of the input features of nodes and hyperedges, while the output of the neural network is the per-node or per-edge prediction of desired properties, depending on the training task.
+
+In a nutshell, within each layer, message-passing-based hypergraph neural networks such as Higher-Order GNNs [9] perform message passing between each hyperedge and its neighbours. Specifically, the $j$ -th neighbour set of a hyperedge $u = \left( {{x}_{1},{x}_{2},\cdots ,{x}_{i}}\right)$ of arity $i$ is ${N}_{j}\left( u\right) = \left\{ \left( {{x}_{1},{x}_{2},\cdots ,{x}_{j - 1}, r,{x}_{j + 1},\cdots ,{x}_{i}}\right) \right\}$ , where $r \in V$ . Then, the set of all neighbours of hyperedge $u$ is the union of the ${N}_{j}$ ’s, for $j = 1,2,\cdots , i$ .
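+
+The neighbourhood construction can be written down directly; the sketch below enumerates the $j$ -th neighbour sets of a hyperedge and their union (the function name is illustrative):
+
+```python
+def neighbor_sets(u, V):
+    """N_j(u) replaces the j-th position of the hyperedge u = (x_1, ..., x_i) with every
+    node r in V; the full neighbourhood is the union of N_1(u), ..., N_i(u)."""
+    i = len(u)
+    N = {j: [u[:j - 1] + (r,) + u[j:] for r in V] for j in range(1, i + 1)}
+    all_neighbours = [t for tuples in N.values() for t in tuples]
+    return N, all_neighbours
+
+# e.g. neighbor_sets((1, 2), V=[1, 2, 3]) gives N_1 = [(r, 2) for r in V] and
+# N_2 = [(1, r) for r in V].
+```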
+
+On the other hand, first-order-logic-inspired hypergraph neural networks consider building neural networks that can emulate first-order logic formulas. Neural Logic Machines [NLM; 5] are defined in terms of a set of input hyperedges; each hyperedge of arity $k$ is represented by a vector of (possibly real) values obtained by applying all of the $k$ -ary predicates in the domain to the tuple of vertices it connects. Each layer in an NLM learns to apply a linear transformation with a nonlinear activation and quantification operators (analogous to the for-all $\forall$ and exists $\exists$ quantifiers in first-order logic) to these values. It is easy to prove, by construction, that given a sufficient number of layers and a sufficient maximum arity, NLMs can learn to realize any first-order logic formula. For readers who are not familiar with HO-GNNs [9] and NLMs [5], we include a mathematical summary of their computation graphs in Appendix B. Our analysis starts from the following theorem.
+
+Theorem 2.1. HO-GNNs [9] are equivalent to NLMs in terms of expressiveness. Specifically, a B-ary HO-GNN is equivalent to an NLM applied to $B + 1$ -ary hyperedges. Proofs are in Appendix B.3.
+
+Given Theorem 2.1, we can focus our analysis on a single type of hypergraph neural network. Specifically, we will focus on Neural Logic Machines [NLM; 5] because their architecture naturally aligns with first-order logic formula structures, which will aid parts of our analysis. An NLM is characterized by the hyperparameters $D$ (depth) and $B$ (maximum arity). We assume that $B$ is a constant, but $D$ can depend on the size of the input graph. We will use $\operatorname{NLM}\left\lbrack {D, B}\right\rbrack$ to denote an NLM family with depth $D$ and maximum arity $B$ . Other parameters, such as the width of the neural networks, affect the precise details of what functions can be realized, as they do in a regular neural network, but do not affect the analyses in this extended abstract. Furthermore, we will focus on neural networks with bounded precision, and briefly discuss how our results generalize to the unbounded-precision case.
+
+## 3 Expressiveness of Relational Neural Networks
+
+We start from a formal definition of hypergraph neural network expressiveness.
+
+Definition 3.1 (Expressiveness). We say a model family ${\mathcal{M}}_{1}$ is at least as expressive as ${\mathcal{M}}_{2}$ , written as ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$ , if for all ${M}_{2} \in {\mathcal{M}}_{2}$ , there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that ${M}_{1}$ can realize ${M}_{2}$ . A model family ${\mathcal{M}}_{1}$ is more expressive than ${\mathcal{M}}_{2}$ , written as ${\mathcal{M}}_{1} \succ {\mathcal{M}}_{2}$ , if ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$ and there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that no ${M}_{2} \in {\mathcal{M}}_{2}$ can realize ${M}_{1}$ .
+
+Arity Hierarchy We first aim to quantify how the maximum arity $B$ of the network’s representation affects its expressiveness and find that, in short, even if the inputs and outputs of neural networks are of low arity, the higher the maximum arity for intermediate layers, the more expressive the NLM is.
+
+Theorem 3.1 (Arity Hierarchy). For any maximum arity $B$ , there exists a depth ${D}^{ * }$ such that: $\forall D \geq {D}^{ * },\operatorname{NLM}\left\lbrack {D, B + 1}\right\rbrack$ is more expressive than $\operatorname{NLM}\left\lbrack {D, B}\right\rbrack$ . This theorem applies to both fixed-precision and unbounded-precision networks.
+
+Proof sketch: Our proof slightly extends the proof of Morris et al. [9]. First, the set of graphs distinguishable by $\operatorname{NLM}\left\lbrack {D, B}\right\rbrack$ is bounded by the graphs distinguishable by a $D$ -round order- $B$ Weisfeiler-Leman test [10]. If models in NLM $\left\lbrack {D, B}\right\rbrack$ cannot generate different outputs for two distinct hypergraphs ${G}_{1}$ and ${G}_{2}$ , but there exists $M \in \operatorname{NLM}\left\lbrack {D, B + 1}\right\rbrack$ that can generate different outputs for ${G}_{1}$ and ${G}_{2}$ , then we can construct a graph classification function $f$ that $\operatorname{NLM}\left\lbrack {D, B + 1}\right\rbrack$ (with some fixed precision) can realize but $\operatorname{NLM}\left\lbrack {D, B}\right\rbrack$ (even with unbounded precision) cannot.* The full proof is described in Appendix C.1.
+
+It is also important to quantify the minimum arity for realizing certain graph reasoning functions.
+
+Theorem 3.2 (FOL realization bounds). Let ${\mathrm{{FOC}}}_{B}$ denote a fragment of first order logic with at most $B$ variables, extended with counting quantifiers of the form ${\exists }^{ \geq n}\phi$ , which state that there are at least $n$ nodes satisfying formula $\phi$ [11].
+
+- (Upper Bound) Any function $f$ in ${\mathrm{{FOC}}}_{B}$ can be realized by $\mathrm{{NLM}}\left\lbrack {D, B}\right\rbrack$ for some $D$ .
+
+- (Lower Bound) There exists a function $f \in {\mathrm{{FOC}}}_{B}$ such that for all $D, f$ cannot be realized by $\operatorname{NLM}\left\lbrack {D, B - 1}\right\rbrack$ .
+
+Proof: The upper bound part of the claim has been proved by Barceló et al. [12] for $B = 2$ . The result generalizes easily to arbitrary $B$ because the counting quantifiers can be realized by sum aggregation. The lower bound part can be proved by applying Section 5 of [11], which shows that ${\mathrm{{FOC}}}_{B}$ is equivalent to the $\left( {B - 1}\right)$ -dimensional WL test in distinguishing non-isomorphic graphs. Given that $\mathrm{{NLM}}\left\lbrack {D, B - 1}\right\rbrack$ is equivalent to the $\left( {B - 2}\right)$ -dimensional WL test of graph isomorphism, there must be an ${\mathrm{{FOC}}}_{B}$ formula that distinguishes two non-isomorphic graphs that $\operatorname{NLM}\left\lbrack {D, B - 1}\right\rbrack$ cannot. Hence, ${\mathrm{{FOC}}}_{B}$ cannot be realized by $\mathrm{{NLM}}\left\lbrack {\cdot , B - 1}\right\rbrack$ .
+
+Depth Hierarchy We now study the dependence of the expressiveness of NLMs on depth $D$ . Neural networks are generally defined to have a fixed depth, but allowing them to have a depth that is dependent on the number of nodes $n = \left| V\right|$ in the graph can substantially increase their expressive power. In the following, we define a depth hierarchy by analogy to the time hierarchy in computational complexity theory [13], and we extend our notation to let $\operatorname{NLM}\left\lbrack {O\left( {f\left( n\right) }\right) , B}\right\rbrack$ denote the class of adaptive-depth NLMs in which the growth-rate of depth $D$ is bounded by $O\left( {f\left( n\right) }\right)$ .
+
+Conjecture 3.3 (Depth hierarchy). For any maximum arity $B$ , for any two functions $f$ and $g$ , if $g\left( n\right) = o\left( {f\left( n\right) /\log n}\right)$ , that is, $f$ grows logarithmically more quickly than $g$ , then fixed-precision $\operatorname{NLM}\left\lbrack {O\left( {f\left( n\right) }\right) , B}\right\rbrack$ is more expressive than fixed-precision $\operatorname{NLM}\left\lbrack {O\left( {g\left( n\right) }\right) , B}\right\rbrack$ .
+
+There is a closely related result for the congested clique model in distributed computing, where [14] proved that $\operatorname{CLIQUE}\left( {g\left( n\right) }\right) \varsubsetneq \operatorname{CLIQUE}\left( {f\left( n\right) }\right)$ if $g\left( n\right) = o\left( {f\left( n\right) }\right)$ . This result does not have the $\log n$ gap because the congested clique model allows $\log n$ bits to transmit between nodes at each iteration, while fixed-precision NLM allows only a constant number of bits. The reason why the result on congested clique can not be applied to fixed-precision NLMs is that congested clique assumes unbounded precision representation for each individual node.
+
+However, Conjecture 3.3 is not true for NLMs with unbounded precision, because there is an upper bound depth $O\left( {n}^{B - 1}\right)$ on a model’s expressive power. ${}^{ \dagger }$ That is, an unbounded-precision NLM cannot achieve stronger expressiveness by increasing its depth beyond $O\left( {n}^{B - 1}\right)$ .
+
+It is important to point out that, to realize a specific graph reasoning function, NLMs with different maximum arities $B$ may require different depths $D$ . Fürer [15] provides a general construction for problems that higher-dimensional NLMs can solve in asymptotically smaller depth than lower-dimensional NLMs. In the following, we give a concrete example for computing S-T Connectivity- $k$ , which asks whether there is a path of length $\leq k$ from $S$ to $T$ in a graph.
+
+Theorem 3.4 (S-T Connectivity- $k$ with Different Max Arity). For any function $f\left( k\right)$ , if $f\left( k\right) = o\left( k\right)$ , $\operatorname{NLM}\left\lbrack {O\left( {f\left( k\right) }\right) ,2}\right\rbrack$ cannot realize S-T Connectivity- $k$ . That is, S-T Connectivity- $k$ requires depth at least $O\left( k\right)$ for a relational neural network with a maximum arity of $B = 2$ . However, S-T Connectivity- $k$ can be realized by $\operatorname{NLM}\left\lbrack {O\left( {\log k}\right) ,3}\right\rbrack$ .
+
+Proof sketch. For any integer $k$ , we can construct a graph with two chains of length $k$ , so that if we mark two of the four ends as $S$ or $T$ , any $\operatorname{NLM}\left\lbrack {k - 1,2}\right\rbrack$ cannot tell whether $S$ and $T$ are on the same chain. The full proof is described in Appendix C.3.
+
+There are many important graph reasoning tasks that do not have known depth lower bounds, including all-pair connectivity and shortest distance [16, 17]. In Appendix C.3, we discuss the concrete complexity bounds for a series of graph reasoning problems.
+
+---
+
+${}^{ * }$ Note that the arity hierarchy is applied to fixed-precision and unbounded-precision separately. For example, $\operatorname{NLM}\left\lbrack {D, B}\right\rbrack$ with unbounded precision is incomparable with $\operatorname{NLM}\left\lbrack {D, B + 1}\right\rbrack$ with fixed precision.
+
+${}^{ \dagger }$ See appendix C. 2 for a formal statement and the proof.
+
+---
+
+## 4 Learning and Generalization in Relational Neural Networks
+
+Given our understanding of what functions can be realized by NLMs, we move on to the problem of learning them: can we effectively learn an NLM to solve a desired task given a sufficient number of input-output examples? In this paper, we show that applying enumerative training with examples up to some fixed graph size can ensure that the trained neural network generalizes to all graphs larger than those appearing in the training set.
+
+A critical determinant of the generalization ability for NLMs is the aggregation function they use. Specifically, Xu et al. [18] have shown that using sum as the aggregation function provides maximum expressiveness for graph neural networks. However, sum aggregation cannot be implemented in fixed-precision models with an arbitrary number of nodes, because as the graph size $n$ increases, the range of the sum aggregation also increases.
+
+Definition 4.1 (Fixed-precision aggregation function). An aggregation function is fixed precision if it maps from any finite set of inputs with values drawn from finite domains to a fixed finite set of possible output values; that is, the cardinality of the range of the function cannot grow with the number of elements in the input set. Two useful fixed-precision aggregation functions are max, which computes the dimension-wise maximum over the set of input values, and fixed-precision mean, which approximates the dimension-wise mean to a fixed decimal place.
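+
+As a small illustration, the two fixed-precision aggregators mentioned above could be implemented as follows (rounding to two decimals is an arbitrary example of a fixed decimal place):
+
+```python
+import numpy as np
+
+def max_aggregate(values):
+    """Dimension-wise maximum over a set of input vectors (a fixed-precision aggregator)."""
+    return np.max(np.stack(values), axis=0)
+
+def fixed_precision_mean(values, decimals=2):
+    """Dimension-wise mean rounded to a fixed number of decimals, so the set of possible
+    outputs stays finite no matter how many elements are aggregated."""
+    return np.round(np.mean(np.stack(values), axis=0), decimals)
+```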
+
+In order to focus on structural generalization in this section, we consider an enumerative training paradigm. When the input hypergraph representation domain $\mathcal{S}$ is a finite set, we can enumerate the set ${\mathcal{G}}_{ \leq N}$ of all possible input hypergraph representations of size bounded by $N$ . We first enumerate all graph sizes $n \leq N$ ; for each $n$ , we enumerate all possible values assigned to the hyperedges in the input. Given training size $N$ , we enumerate all inputs in ${\mathcal{G}}_{ \leq N}$ , associate with each one the corresponding ground-truth output representation, and train the model with these input-output pairs.
+
+This has much stronger data requirements than the standard sampling-based training mechanisms in machine learning. In practice, this can be approximated well when the input domain $\mathcal{S}$ is small and the input data distribution is approximately uniformly distributed. The enumerative learning setting is studied by the language identification in the limit community [19], in which it is called complete presentation. This is an interesting learning setting because even if the domain for each individual hyperedge representation is finite, as the graph size can go arbitrarily large, the number of possible inputs is enumerable but unbounded.
+
+Theorem 4.1 (Fixed-precision generalization under complete presentation). For any hypergraph reasoning function $f$ , if it can be realized by a fixed-precision relational neural network model family $\mathcal{M}$ , then there exists an integer $N$ , such that if we train the model with complete presentation on all input hypergraph representations of size at most $N$ , i.e., ${\mathcal{G}}_{ \leq N}$ , then for all $M \in \mathcal{M}$ ,
+
+$$
+\mathop{\sum }\limits_{{G \in {\mathcal{G}}_{ \leq N}}}1\left\lbrack {M\left( G\right) \neq f\left( G\right) }\right\rbrack = 0 \Rightarrow \forall G \in {\mathcal{G}}_{\infty } : M\left( G\right) = f\left( G\right) .
+$$
+
+That is, as long as $M$ fits all training examples, it will generalize to all possible hypergraphs in ${\mathcal{G}}_{\infty }$ .
+
+Proof. The key observation is that for any fixed vector representation length $W$ , there are only a finite number of distinctive models in a fixed-precision NLM family, independent of the graph size $n$ . Let ${W}_{b}$ be the number of bits in each intermediate representation of a fixed-precision NLM. There are at most ${\left( {2}^{{W}_{b}}\right) }^{{2}^{{W}_{b}}}$ different mappings from inputs to outputs. Hence, if $N$ is sufficiently large to enumerate all input hypergraphs, we can always identify the correct model in the hypothesis space.
+
+Our results are related to the algorithmic alignment approach [20, 21]. In contrast to their Probably Approximately Correct (PAC) learning bounds for sample efficiency, our expressiveness results directly quantify whether a hypergraph neural network can be trained to realize a specific function. Our generalization theorem also applies more generally than their result on learning the Max-Degree function, due to the assumption of fixed precision.
+
+## 5 Conclusion
+
+In this extended abstract, we have shown the substantial increase of expressive power due to higher-arity relations and increasing depth, and have characterized very powerful structural generalization from training on small graphs to performance on larger ones. We further discuss the relationship between these results and existing results in Appendix A. All theoretical results are further supported by the empirical results, discussed in Appendix D. Although many questions remain open about the overall generalization capacity of these models in continuous and noisy domains, we believe this work has shed some light on their utility and potential for application in a variety of problems.
+
+## References
+
+[1] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In ICML, 2017.
+
+[2] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In ESWC, 2018.
+
+[3] Qi Liu, Maximilian Nickel, and Douwe Kiela. Hyperbolic graph neural networks. IEEE Transactions on Knowledge and Data Engineering, 2017.
+
+[4] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2008.
+
+[5] Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. Neural logic machines. In ICLR, 2019.
+
+[6] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
+
+[7] Christian Merkwirth and Thomas Lengauer. Automatic generation of complementary descriptors with molecular graph networks. Journal of Chemical Information and Modeling, 45(5):1159-1168, 2005.
+
+[8] Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In ICLR, 2020.
+
+[9] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In AAAI, 2019.
+
+[10] AA Leman and B Weisfeiler. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsiya, 2(9):12-16, 1968.
+
+[11] Jin-Yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389-410, 1992.
+
+[12] Pablo Barceló, Egor V Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In ICLR, 2020.
+
+[13] Juris Hartmanis and Richard E Stearns. On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117:285-306, 1965.
+
+[14] Janne H Korhonen and Jukka Suomela. Towards a complexity theory for the congested clique. In SPAA, 2018.
+
+[15] Martin Fürer. Weisfeiler-Lehman refinement requires at least a linear number of iterations. In ICALP, 2001.
+
+[16] Mauricio Karchmer and Avi Wigderson. Monotone circuits for connectivity require super-logarithmic depth. SIAM J. Discrete Math., 3(2):255-265, 1990.
+
+[17] Shreyas Pai and Sriram V Pemmaraju. Connectivity lower bounds in broadcast congested clique. In PODC, 2019.
+
+[18] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
+
+[19] E Mark Gold. Language identification in the limit. Inf. Control., 10(5):447-474, 1967.
+
+[20] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? In ICLR, 2020.
+
+[21] Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. How neural networks extrapolate: From feedforward to graph neural networks. In ICLR, 2021.
+
+[22] Yuan Li, Alexander Razborov, and Benjamin Rossman. On the AC^0 complexity of subgraph isomorphism. SIAM J. Comput., 46(3):936-971, 2017.
+
+[23] Benjamin Rossman. Average-case complexity of detecting cliques. PhD thesis, Massachusetts Institute of Technology, 2010.
+
+[24] Andreas Loukas. What graph neural networks cannot learn: depth vs width. In ICLR, 2020.
+
+[25] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. In ICLR, 2019.
+
+[26] Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph neural networks. In AAAI, 2019.
+
+[27] Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha Talukdar. HyperGCN: A new method of training graph convolutional networks on hypergraphs. In NeurIPS, 2019.
+
+[28] Song Bai, Feihu Zhang, and Philip HS Torr. Hypergraph convolution and hypergraph attention. Pattern Recognition, 110:107637, 2021.
+
+[29] Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, and Huan Liu. Be more with less: Hypergraph attention networks for inductive text classification. In EMNLP, 2020.
+
+[30] Jing Huang and Jie Yang. UniGNN: A unified framework for graph and hypergraph neural networks. In IJCAI, 2021.
+
+[31] Sandra Kiefer and Pascal Schweitzer. Upper bounds on the quantifier depth for graph differentiation in first order logic. In LICS, 2016.
+
+[32] Efim A Dinic. Algorithm for solution of a problem of maximum flow in networks with power estimation. In Soviet Math. Doklady, volume 11, pages 1277-1280, 1970.
+
+
+## Appendix
+
+The appendix is organized as follows. In Appendix A, we discuss related work. In Appendix B, we provide a formalization of the two types of hypergraph neural networks discussed in the main paper and prove their equivalence. In Appendix C, we prove the theorems for the arity hierarchy and provide concrete examples for the expressiveness analyses. Finally, in Appendix D, we include additional experimental results to empirically illustrate the application of the theorems discussed in the paper.
+
+## A Related Work
+
+Solving problems on graphs of arbitrary size is studied in many fields. NLMs can be viewed as circuit families with constrained architecture. In distributed computation, the congested clique model can be viewed as 2-arity NLMs, where nodes have identities as extra information. Common graph problems including sub-structure detection $\left\lbrack {{22},{23}}\right\rbrack$ and connectivity $\left\lbrack {16}\right\rbrack$ are studied for lower bounds in terms of depth, width and communication. This has been connected to GNNs for deriving expressiveness bounds [24].
+
+Studies have been conducted on the expressiveness of GNNs and their variants. [18] provide an illuminating characterization of GNN expressiveness in terms of the WL graph isomorphism test. [12] reviewed GNNs from the logical perspective and rigorously refined their logical expressiveness with respect to fragments of first-order logic. [5] proposed Neural Logic Machines (NLMs) to reason about higher-order relations, and showed that increasing the order increases expressiveness. It is also possible to gain expressiveness using unbounded computation time, as shown by the work of Dehghani et al. [25] on dynamic halting in transformers.
+
+Interestingly, GNNs may generalize to larger graphs. Xu et al. [20, 21] have studied the notion of algorithmic alignment to quantify such structural generalization. Dong et al. [5] provided empirical results showing that NLMs generalize to much larger graphs on certain tasks. Xu et al. [20] analyzed and compared the sample complexity of graph neural networks, which differs from our notion of expressiveness for realizing functions. Xu et al. [21] showed empirically on some problems (e.g., Max-Degree, Shortest Path, and the n-body problem) that algorithmic alignment helps GNNs extrapolate, and theoretically proved the improvement from algorithmic alignment on the Max-Degree problem. In this extended abstract, instead of focusing on computing specific graph problems, we analyze how GNNs can extrapolate to larger graphs in the general case, based on the assumption of fixed-precision computation.
+
+## B Hypergraph Neural Networks
+
+We now introduce two important hypergraph neural network implementations that can be trained to solve graph reasoning problems: Higher-Order Graph Neural Networks [HO-GNN; 9] and Neural Logic Machines [NLM; 5]. They are equivalent to each other in terms of expressiveness. Showing this equivalence allows us to focus the rest of the paper on analyzing a single model type, with the understanding that the conclusions generalize to a broader class of hypergraph neural networks.
+
+### B.1 Higher-order Graph Neural Networks
+
+Higher-order Graph Neural Networks [HO-GNNs; 9] are Graph Neural Networks (GNNs) that apply to hypergraphs. A GNN is usually defined based on two message passing operations.
+
+- Edge update: the feature of each edge is updated by features of its ends.
+
+- Node update: the feature of each node is updated by the features of all edges adjacent to it.
+
+However, computing only node-wise and edge-wise features does not handle higher-order relations, such as triangles in the graph. In order to obtain more expressive power, GNNs have been extended to hypergraphs of higher arity [9]. Specifically, HO-GNNs on a $B$ -ary hypergraph maintain features for all $B$ -tuples of nodes, and the neighborhood is extended to $B$ -tuples accordingly: the feature of the tuple $\left( {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right)$ is updated by the $\left| V\right|$ -element multiset (one element for each $u \in V$ ) of $B$ -tuples of features
+
+$$
+\left( {{H}_{i - 1}\left\lbrack {u,{v}_{2},\cdots ,{v}_{B}}\right\rbrack ,{H}_{i - 1}\left\lbrack {{v}_{1}, u,{v}_{3},\cdots ,{v}_{B}}\right\rbrack ,\cdots ,{H}_{i - 1}\left\lbrack {{v}_{1},\cdots ,{v}_{B - 1}, u}\right\rbrack }\right) \tag{B.1}
+$$
+
+
+
+Figure 1: The overall architecture of our Neural Logic Machines (NLMs). It follows the computation graph of NLM [5] and can be applied to hypergraphs.
+
+where ${H}_{i - 1}\left\lbrack \mathbf{v}\right\rbrack$ is the feature of tuple $\mathbf{v}$ from the previous iteration.
+
+We now introduce the formal definition of the high-dimensional message passing. We denote $\mathbf{v}$ as a $B$ -tuple of nodes $\left( {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right)$ , and generalize the neighborhood to a higher dimension by defining the neighborhood of $\mathbf{v}$ as all node tuples that differ from $\mathbf{v}$ at exactly one position.
+
+$$
+\operatorname{Neighbors}\left( {\mathbf{v}, u}\right) = \left( {\left( {u,{v}_{2},\cdots ,{v}_{B}}\right) ,\left( {{v}_{1}, u,{v}_{3},\cdots ,{v}_{B}}\right) ,\cdots ,\left( {{v}_{1},\cdots ,{v}_{B - 1}, u}\right) }\right) \tag{B.2}
+$$
+
+$$
+N\left( \mathbf{v}\right) = \{ \text{ Neighbors }\left( {\mathbf{v}, u}\right) \mid u \in V\} \tag{B.3}
+$$
+
+The message passing scheme then naturally generalizes to high-dimensional features using the high-dimensional neighborhood.
+
+$$
+{\operatorname{Received}}_{i}\left\lbrack \mathbf{v}\right\rbrack = \mathop{\sum }\limits_{u}\left( {{\mathrm{{NN}}}_{1}\left( {{H}_{i - 1}\left\lbrack \mathbf{v}\right\rbrack ;{\operatorname{CONCAT}}_{{\mathbf{v}}^{\prime } \in \text{ neighbors }\left( {\mathbf{v}, u}\right) }{H}_{i - 1}\left\lbrack {\mathbf{v}}^{\prime }\right\rbrack }\right) }\right) \tag{B.4}
+$$
+
+### B.2 Neural Logic Machines
+
+An NLM is a multi-layer neural network that operates on hypergraph representations, in which the hypergraph representation functions are represented as tensors. The input is a hypergraph representation (V, X). There are then several computational layers, each of which produces a hypergraph representation with nodes $V$ and a new set of representation functions. Specifically, a $B$ -ary NLM produces hypergraph representation functions with arities from 0 up to a maximum hyperedge arity of $B$ . We let ${T}_{i, j}$ denote the tensor representation for the output at layer $i$ and arity $j$ . Each entry in the tensor is a mapping from a set of node indices $\left( {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right)$ to a vector in a latent space ${\mathbb{R}}^{W}$ . Thus, ${T}_{i, j}$ is a tensor of $j + 1$ dimensions, with the first $j$ dimensions corresponding to the $j$ -tuple of nodes and the last to the feature dimension. For convenience, we write ${T}_{0, \cdot }$ for the input hypergraph representation and ${T}_{D, \cdot }$ for the output of the NLM.
+
+Fig. 1a shows the overall architecture of NLMs. It has $D \times B$ computation blocks, namely relational reasoning layers (RRLs). Each block ${\mathrm{{RRL}}}_{i, j}$ , illustrated in Fig. 1b, takes the output from neighboring arities in the previous layer, ${T}_{i - 1, j - 1},{T}_{i - 1, j}$ and ${T}_{i - 1, j + 1}$ , and produces ${T}_{i, j}$ . Below we show the computation of each primitive operation in an RRL.
+
+The expand operation takes tensor ${T}_{i - 1, j - 1}$ (arity $j - 1$ ) and produces a new tensor ${T}_{i - 1, j - 1}^{E}$ of arity $j$ . The reduce operation takes tensor ${T}_{i - 1, j + 1}$ (arity $j + 1$ ) and produces a new tensor ${T}_{i - 1, j + 1}^{R}$ of arity $j$ . Mathematically,
+
+$$
+{T}_{i - 1, j - 1}^{E}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right\rbrack = {T}_{i - 1, j - 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{j - 1}}\right\rbrack ;
+$$
+
+$$
+{T}_{i - 1, j + 1}^{R}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right\rbrack = {\operatorname{Agg}}_{{v}_{j + 1}}\left\{ {{T}_{i - 1, j + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{j},{v}_{j + 1}}\right\rbrack }\right\} .
+$$
+
+Here, Agg is called the aggregation function of a NLM. For example, a sum aggregation function takes the summation along the dimension $j + 1$ of the tensor, and a max aggregation function takes the max along that dimension.
+
+The concat (concatenate) operation $\oplus$ is applied at the "vector representation" dimension. The permute operation generates a new tensor of the same arity, but it fuses the representations of hyperedges that share the same set of entities in different orders, such as $\left( {{v}_{1},{v}_{2}}\right)$ and $\left( {{v}_{2},{v}_{1}}\right)$. Mathematically, for a tensor $X$ of arity $j$, if $Y = \operatorname{Permute}\left( X\right)$, then
+
+$$
+Y\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right\rbrack = \mathop{\operatorname{Concat}}\limits_{{\sigma \in {S}_{j}}}\left\{ {X\left\lbrack {{v}_{{\sigma }_{1}},{v}_{{\sigma }_{2}},\cdots ,{v}_{{\sigma }_{j}}}\right\rbrack }\right\} ,
+$$
+
+where $\sigma \in {S}_{j}$ iterates over all permutations of $\{ 1,2,\cdots ,j\}$. ${\mathrm{{NN}}}_{j}$ is a multi-layer perceptron (MLP) applied to each entry of the tensor produced after permutation, followed by a nonlinearity (e.g., ReLU).
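+
+As an illustration of these primitives, the following NumPy sketch implements Expand, Reduce, and Permute on dense tensors whose last axis is the feature dimension. It is illustrative only; the MLPs and learned weights are omitted.
+
+```python
+import itertools
+import numpy as np
+
+def expand(T):
+    """Expand: copy an arity-(j-1) tensor along a new node dimension, producing arity j."""
+    n = T.shape[0]
+    return np.repeat(T[..., np.newaxis, :], n, axis=-2)
+
+def reduce_(T, agg=np.max):
+    """Reduce: aggregate an arity-(j+1) tensor over its last node dimension, producing arity j."""
+    return agg(T, axis=-2)
+
+def permute(T):
+    """Permute: concatenate the features of all orderings of the same set of nodes."""
+    j = T.ndim - 1                                    # arity; the last axis holds features
+    perms = [np.transpose(T, axes=list(sigma) + [j]) for sigma in itertools.permutations(range(j))]
+    return np.concatenate(perms, axis=-1)
+
+T2 = np.random.rand(4, 4, 3)    # arity-2 tensor over 4 nodes, feature width 3
+print(expand(T2).shape)         # (4, 4, 4, 3): arity 3
+print(reduce_(T2).shape)        # (4, 3): arity 1
+print(permute(T2).shape)        # (4, 4, 6): features of (v1, v2) and (v2, v1) concatenated
+```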
+
+It is important to note that we intentionally name the MLPs ${\mathrm{{NN}}}_{j}$ instead of ${\mathrm{{NN}}}_{i, j}$. In generalized relational neural networks, for a given arity $j$, all MLPs across all layers $i$ are shared. It is straightforward to see that this "weight-shared" model can realize a "non-weight-shared" NLM that uses different weights for MLPs at different layers, as long as the number of layers is a constant: with a sufficiently large representation vector, we can simulate the computation of applying different transformations by constructing block matrix weights (a more formal argument is given as Lemma B.1 below). The advantage of this weight sharing is that the network can be easily extended to a "recurrent" model. For example, we can apply the NLM for a number of layers that is a function of $n$, where $n$ is the number of nodes in the input graph. Thus, we will use the terms layers and iterations interchangeably.
+
+Handling high-arity features and using deeper models usually increase the computational cost. In Appendix B.5, we show that the time and space complexity of $\operatorname{NLM}\left\lbrack {D, B}\right\rbrack$ is $O\left( {D{n}^{B}}\right)$.
+
+Note that even when hyperparameters such as the maximum arity and the number of iterations are fixed, a NLM is still a model family $\mathcal{M}$ : the weights for MLPs will be trained on some data. Furthermore, each model $M \in \mathcal{M}$ is a NLM with a specific set of MLP weights.
+
+### B.3 Expressiveness Equivalence of Relational Neural Networks
+
+Since we are going to study both constant-depth and adaptive-depth graph neural networks, we first prove the following lemma (for general multi-layer neural networks), which helps us simplify the analysis.
+
+Lemma B.1. A neural network with representation width $W$ that has $D$ different layers ${\mathrm{{NN}}}_{1},\cdots ,{\mathrm{{NN}}}_{D}$ can be realized by a neural network that applies a single layer ${\mathrm{{NN}}}^{\prime }$ for $D$ iterations with width $\left( {D + 1}\right) \left( {W + 1}\right)$ .
+
+Proof. The representation for ${\mathrm{{NN}}}^{\prime }$ can be partitioned into $D + 1$ segments, each of length $W + 1$. Each segment consists of a "flag" element and a $W$-element representation, all of which are 0 initially, except for the first segment, where the flag is set to 1 and the representation is the input.
+
+${\mathrm{{NN}}}^{\prime }$ has the weights for all ${\mathrm{{NN}}}_{1},\cdots ,{\mathrm{{NN}}}_{D}$, where the weights of ${\mathrm{{NN}}}_{i}$ are used to compute the representation in segment $i + 1$ from the representation in segment $i$. Additionally, at each iteration, segment $i + 1$ can only be computed if the flag in segment $i$ is 1, in which case the flag of segment $i + 1$ is set to 1. Clearly, after $D$ iterations, the output of ${\mathrm{{NN}}}_{D}$ is the representation in segment $D + 1$.
+
+Due to Lemma B.1, we consider neural networks that recurrently apply the same layer because a) they are as expressive as those using layers with different weights, b) it is easier to analyze a single neural network layer than $D$ layers, and c) they naturally generalize to neural networks that run for an adaptive number of iterations (e.g., GNNs that run $O\left( {\log n}\right)$ iterations, where $n$ is the size of the input graph).
+
+We first describe a framework for quantifying whether two hypergraph neural network models are equally expressive on regression tasks (which is more general than classification problems). The framework views expressiveness from the perspective of computation. Specifically, we will prove the expressiveness equivalence between models by showing that their computations can be aligned.
+
+In complexity theory, we usually show that a problem is at least as hard as another by reducing the latter to the former. Similarly, for the expressiveness of NLMs, we can construct a reduction from model family $\mathcal{A}$ to model family $\mathcal{B}$ to show that $\mathcal{B}$ can realize all computations that $\mathcal{A}$ can, or even more. Formally, we have the following definition.
+
+Definition B.1 (Expressiveness reduction). For two model families $\mathcal{A}$ and $\mathcal{B}$ , we say $\mathcal{A}$ can be reduced to $\mathcal{B}$ if and only if there is a function $r : \mathcal{A} \rightarrow \mathcal{B}$ such that for each model instance $A \in \mathcal{A}$ , $r\left( A\right) \in \mathcal{B}$ and $A$ have the same outputs on all inputs. In this case, we say $\mathcal{B}$ is at least as expressive as $\mathcal{A}$ .
+
+Definition B.2 (Expressiveness equivalence). For two model families $\mathcal{A}$ and $\mathcal{B}$, if $\mathcal{A}$ and $\mathcal{B}$ can be reduced to each other, then $\mathcal{A}$ and $\mathcal{B}$ are equally expressive. Note that this definition of expressiveness equivalence generalizes to both classification and regression tasks.
+
+Equivalence between HO-GNNs and NLMs. We will prove the equivalence between HO-GNNs and NLMs by making reductions in both directions.
+
+Lemma B.2. A $B$ -ary HO-GNN with depth $D$ can be realized by a NLM with maximum arity $B + 1$ and depth ${2D}$ .
+
+Proof. We prove Lemma B.2 by showing that one layer of a GNN on $B$-ary hypergraphs can be realized by two NLM layers with maximum arity $B + 1$.
+
+Firstly, a GNN layer maintains features of $B$-tuples, which are correspondingly stored in an NLM layer at arity $B$. We then realize the message passing scheme using the NLM features of arities $B$ and $B + 1$ in two steps.
+
+Recall the message passing scheme generalized to high dimensions (to distinguish, we use $H$ for HO-GNN features and $T$ for NLM features.)
+
+$$
+{\operatorname{Received}}_{i}\left( \mathbf{v}\right) = \mathop{\sum }\limits_{u}\left( {{\mathrm{{NN}}}_{1}\left( {{H}_{i - 1, B}\left\lbrack \mathbf{v}\right\rbrack ;{\operatorname{CONCAT}}_{{\mathbf{v}}^{\prime } \in \text{ neighbors }\left( {\mathbf{v}, u}\right) }{H}_{i - 1}\left\lbrack {\mathbf{v}}^{\prime }\right\rbrack }\right) }\right) \tag{B.5}
+$$
+
+At the first step, the Expand operation first raises the arity to $B + 1$ by appending an unrelated variable $u$ at the end, and the Permute operation can then swap $u$ with each of the elements (or perform no swap). In particular, ${T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack$ will be expanded to
+
+$$
+{T}_{i + 1, B + 1}\left\lbrack {u,{v}_{2},{v}_{3},\cdots ,{v}_{B},{v}_{1}}\right\rbrack ,{T}_{i + 1, B + 1}\left\lbrack {{v}_{1}, u,{v}_{3},\cdots ,{v}_{B},{v}_{2}}\right\rbrack ,\cdots ,
+$$
+
+$$
+{T}_{i + 1, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B - 1}, u,{v}_{B}}\right\rbrack \text{, and}{T}_{i + 1, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B - 1},{v}_{B}, u}\right\rbrack
+$$
+
+Hence, ${T}_{i + 1, B + 1}\left\lbrack {{v}_{1},{v}_{2},{v}_{3},\cdots ,{v}_{B}, u}\right\rbrack$ receives the features from
+
+$$
+{T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack ,{T}_{i, B}\left\lbrack {u,{v}_{2},{v}_{3},\cdots ,{v}_{B}}\right\rbrack ,{T}_{i, B}\left\lbrack {{v}_{1}, u,{v}_{3},\cdots ,{v}_{B}}\right\rbrack ,\cdots ,{T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B - 1}, u}\right\rbrack
+$$
+
+These features match the input of ${\mathrm{{NN}}}_{1}$ in equation B.5, and in this layer ${\mathrm{{NN}}}_{1}$ can be applied to compute the term inside the summation.
+
+At the second step, the last element is reduced to obtain the message that tuple $\mathbf{v}$ should receive, so that $\mathbf{v}$ can be updated. Since each HO-GNN layer can be realized by two such NLM layers, each $B$-ary HO-GNN with depth $D$ can be realized by an NLM of maximum arity $B + 1$ and depth ${2D}$.
+
+To complete the proof we need to find a reduction from NLMs of maximum arity $B + 1$ to $B$ -ary HO-GNNs. The key observation here is that the features of $\left( {B + 1}\right)$ -tuples in NLMs can only be expanded from sub-tuples, and the expansion and reduction involving $\left( {B + 1}\right)$ -tuples can be simulated by the message passing process.
+
+Lemma B.3. The feature of a $\left( {B + 1}\right)$-tuple, ${T}_{i, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B + 1}}\right\rbrack$, can be computed from the following tuples:
+
+$$
+\left( {{T}_{i, B}\left\lbrack {{v}_{2},{v}_{3},\cdots ,{v}_{B + 1}}\right\rbrack ,{T}_{i, B}\left\lbrack {{v}_{1},{v}_{3},\cdots ,{v}_{B + 1}}\right\rbrack ,\cdots ,{T}_{i, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack }\right) .
+$$
+
+Proof. Lemma B.3 holds because $\left( {B + 1}\right)$-dimensional representations can either be computed from themselves at the previous iteration, or expanded from $B$-dimensional representations. Since representations at all previous iterations $j < i$ can be contained in ${T}_{i, B}$, it is sufficient to compute ${T}_{i, B + 1}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B + 1}}\right\rbrack$ from all of its $B$-ary sub-tuples. We now construct the HO-GNN for a given NLM to show the existence of the reduction.
+
+Lemma B.4. A NLM of maximum arity $B + 1$ and depth $D$ can be realized by a $B$ -ary HO-GNN with no more than $D$ iterations.
+
+Proof. We can realize the Expand and Reduce operations with only the $B$-dimensional features using the broadcast message passing scheme. Note that Expand and Reduce between $B$-dimensional features and $\left( {B + 1}\right)$-dimensional features in the NLM is a special case where Lemma B.3 applies.
+
+Let us start with Expand and Reduce operations between features of arity $B$ or lower. For each $b$-dimensional feature in the NLM, we keep ${n}^{\underline{b}}\,{n}^{B - b}$ copies of it ${}^{\ddagger}$ and store them in the representation of every $B$-tuple that has a sub-tuple ${}^{§}$ that is a permutation of the $b$-tuple. That is, for each $B$-tuple in the $B$-ary HO-GNN and for each of its sub-tuples of length $b$, we store $b!$ representations corresponding to every permutation of the $b$-tuple in the NLM. Keeping representations for all sub-tuple permutations makes it possible to realize the Permute operation. It is also easy to see that the Expand operation is already realized, as all features of arity lower than $B$ are naturally expanded to arity $B$ by filling in all possible combinations of the remaining elements. Finally, the Reduce operation can be realized using broadcast message passing on a certain position of the tuple.
+
+Now let us move to the special case: the Expand and Reduce operations between features of arities $B$ and $B + 1$. Lemma B.3 suggests how the $\left( {B + 1}\right)$-dimensional features are stored in $B$-dimensional representations in GNNs, and we now show how the Reduce can be realized by message passing.
+
+We first plug Lemma B.3 into the HO-GNN message passing, which gives ${\operatorname{Received}}_{i}\left\lbrack \mathbf{v}\right\rbrack$ as
+
+$$
+\mathop{\sum }\limits_{u}\left( {{\mathrm{{NN}}}_{1}\left( {{T}_{i - 1, B}\left\lbrack {{v}_{2},{v}_{3},\cdots ,{v}_{B}, u}\right\rbrack ,{T}_{i - 1, B}\left\lbrack {{v}_{1},{v}_{3},\cdots ,{v}_{B}, u}\right\rbrack ,\cdots ,{T}_{i - 1, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack }\right) }\right)
+$$
+
+Note that the last term ${T}_{i - 1, B}\left\lbrack {{v}_{1},{v}_{2},\cdots ,{v}_{B}}\right\rbrack$ is contained in ${H}_{i - 1}\left\lbrack \mathbf{v}\right\rbrack$ in equation B.5, and the other terms are contained in ${H}_{i - 1}\left\lbrack {\mathbf{v}}^{\prime }\right\rbrack$ for ${\mathbf{v}}^{\prime } \in$ neighbors$\left( {\mathbf{v}, u}\right)$. Hence, equation B.5 is sufficient to simulate the Reduce operation.
+
+Theorem B.5. $B$ -ary HO-GNNs are equally expressive as NLMs with maximum arity $B + 1$ .
+
+Proof. This is a direct consequence of combining Lemma B.2 and Lemma B.4.
+
+### B.4 Expressiveness of hypergraph convolution and attention
+
+There exist other variants of hypergraph neural networks. In particular, hypergraph convolution [26-28], attention [29], and message passing [30] focus on updating node features rather than tuple features through hyperedges. These approaches can be viewed as instances of hypergraph neural networks, and they have lower time complexity because they do not model all high-arity tuples. However, they are less expressive than the standard hypergraph neural networks with equal max arity.
+
+These approaches can be formulated as two steps at each iteration. At the first step, each hyperedge is updated by the features of the nodes it connects.
+
+$$
+{h}_{i, e} = {\mathrm{{AGG}}}_{v \in e}{f}_{i - 1, v} \tag{B.6}
+$$
+
+At the second step, each node is updated by the features of hyperedges connecting it.
+
+$$
+{f}_{i, v} = {\mathrm{{AGG}}}_{e \ni v}{h}_{i, e} \tag{B.7}
+$$
+
+where ${f}_{i, v}$ is the feature of node $v$ at iteration $i$, and ${h}_{i, e}$ is the aggregated message passed through hyperedge $e$ at iteration $i$.
+
+It is not hard to see that equation B.6 can be realized by $B$ iterations of NLM layers with Expand operations, where $B$ is the max arity of hyperedges: each node feature is expanded to every higher-arity feature that contains the node, and these are aggregated at the tuple corresponding to each hyperedge. Then, equation B.7 can also be realized by $B$ iterations of NLM layers with Reduce operations, as the tuple feature is eventually reduced to a single node contained in the tuple.
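+
+The following NumPy sketch illustrates this two-step node-to-hyperedge-to-node scheme (equations B.6 and B.7); the learned per-step transformations are omitted and the aggregation choice is an assumption made for illustration.
+
+```python
+import numpy as np
+
+def hypergraph_conv_step(node_feats, hyperedges, agg=np.max):
+    """One iteration of node -> hyperedge -> node message passing (eqs. B.6 and B.7)."""
+    # Step 1 (eq. B.6): each hyperedge aggregates the features of the nodes it connects.
+    edge_feats = [agg(node_feats[list(e)], axis=0) for e in hyperedges]
+    # Step 2 (eq. B.7): each node aggregates the features of the hyperedges containing it.
+    new_feats = node_feats.copy()
+    for v in range(node_feats.shape[0]):
+        incident = [h for h, e in zip(edge_feats, hyperedges) if v in e]
+        if incident:
+            new_feats[v] = agg(np.stack(incident), axis=0)
+    return new_feats
+
+feats = np.random.rand(5, 4)
+edges = [(0, 1, 2), (2, 3), (3, 4, 0)]
+print(hypergraph_conv_step(feats, edges).shape)   # (5, 4)
+```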
+
+---
+
+${}^{ \ddagger }{n}^{\underline{k}} = n \times \left( {n - 1}\right) \times \cdots \times \left( {n - k + 1}\right) .$
+
+${}^{§}$ The sub-tuple does not have to be consecutive; it can be any subset of the tuple that keeps the element order.
+
+---
+
+This approach has lower complexity than the GNNs we study applied to hyperedges, because it only requires communication between nodes and the hyperedges connected to them, which takes $O\left( {\left| V\right| \cdot \left| E\right| }\right)$ time at each iteration. In comparison, NLMs take $O\left( {\left| V\right| }^{B}\right)$ time because NLMs keep features of every tuple up to max arity $B$ and allow communication from tuples to tuples instead of only between tuples and single nodes. Below we provide an example that this approach cannot solve but NLMs can.
+
+Consider a graph with 6 nodes and 6 edges forming two triangles (1,2,3) and (4,5,6). Because of the symmetry, the representation of each node is identical throughout the hypergraph message passing rounds. Hence, it is impossible for these models to conclude that (1,2,3) is a triangle but (4,2,3) is not based only on the node representations, because they are identical. In contrast, NLMs with max arity 3 can solve this (as the standard triangle detection problem in Table 1).
+
+### B.5 The Time and Space Complexity of NLMs
+
+Handling high-arity features and using deeper models usually increase the computational cost in terms of time and space. As an instance of the RelNN architecture, NLMs with depth $D$ and max arity $B$ take $O\left( {D{n}^{B}}\right)$ time when applied to graphs of size $n$. This is because both the Expand and Reduce operations have linear time complexity with respect to the input size (which is $O\left( {n}^{B}\right)$ at each iteration). If we need to record the computational history (which is typically the case when training the network using back-propagation), the space complexity is the same as the time complexity.
+
+GNNs applied to $(B - 1)$-ary hyperedges with depth $D$ are equally expressive as RelNNs with depth $O\left( D\right)$ and max arity $B$. Although only features of arity up to $B - 1$ are kept in their architecture, the broadcast message passing scheme scales up the complexity by a factor of $O\left( n\right)$, so they also have time and space complexity $O\left( {D{n}^{B}}\right)$. Here the width of the feature tensors $W$ is treated as a constant.
+
+## C Arity and Depth Hierarchy: Proofs and Analysis
+
+### C.1 Proof of Theorem 3.1: Arity Hierarchy.
+
+[9] connected high-dimensional GNNs with high-dimensional WL tests. Specifically, they showed that $B$-ary HO-GNNs are equally expressive as the $B$-dimensional WL test on the graph isomorphism problem. In Theorem B.5 we proved that $B$-ary HO-GNNs are equivalent to NLMs of maximum arity $B + 1$ in terms of expressiveness. Hence, an NLM of maximum arity $B + 1$ can distinguish two non-isomorphic graphs if and only if the $B$-dimensional WL test can distinguish them.
+
+However, Cai et al. [11] provided a construction that, for every $B$, generates a pair of non-isomorphic graphs that cannot be distinguished by the $(B - 1)$-dimensional WL test but can be distinguished by the $B$-dimensional WL test. Let ${G}_{B}^{1}$ and ${G}_{B}^{2}$ be such a pair of graphs.
+
+Since NLMs of maximum arity $B + 1$ are equally expressive as $B$-ary HO-GNNs, there must be such an NLM that classifies ${G}_{B}^{1}$ and ${G}_{B}^{2}$ into different labels. However, such an NLM cannot be realized by any NLM of maximum arity $B$, because those are proven to have identical outputs on ${G}_{B}^{1}$ and ${G}_{B}^{2}$.
+
+In the other direction, NLMs of maximum arity $B + 1$ can directly realize NLMs of maximum arity $B$ , which completes the proof.
+
+### C.2 Upper Depth Bound for Unbounded-Precision NLM.
+
+The idea for proving an upper bound on depth is to connect NLMs to the WL test and use the $O\left( {n}^{B}\right)$ upper bound on the number of iterations of the $B$-dimensional test [31]; FOC formulas are the key connection.
+
+For any fixed $n$, the $B$-dimensional WL test divides all graphs of size $n$, ${\mathcal{G}}_{ = n}$, into a set of equivalence classes $\left\{ {{\mathcal{C}}_{1},{\mathcal{C}}_{2},\cdots ,{\mathcal{C}}_{m}}\right\}$, where two graphs belong to the same class if they cannot be distinguished by the WL test. We have shown that NLMs of maximum arity $B + 1$ must produce the same output for all graphs in the same equivalence class. Thus, any NLM of maximum arity $B + 1$ can be viewed as a labeling over ${\mathcal{C}}_{1},\cdots ,{\mathcal{C}}_{m}$.
+
+As stated by Cai et al. [11], the $B$-dimensional WL test is as powerful as ${\mathrm{{FOC}}}_{B + 1}$ in differentiating graphs. Combined with the $O\left( {n}^{B}\right)$ upper bound on the number of WL test iterations, for each ${\mathcal{C}}_{i}$ there must be an ${\mathrm{{FOC}}}_{B + 1}$ formula of quantifier depth $O\left( {n}^{B}\right)$ that exactly recognizes ${\mathcal{C}}_{i}$ over ${\mathcal{G}}_{ = n}$.
+
+Finally, with unbounded precision, for any $f\left( n\right)$, an NLM of maximum arity $B + 1$ and depth $f\left( n\right)$ can compute all ${\mathrm{{FOC}}}_{B + 1}$ formulas with quantifier depth $f\left( n\right)$. Note that there is only a finite number of such formulas, because the superscript of the counting quantifiers is bounded by $n$.
+
+For any graph in some class ${\mathcal{C}}_{i}$ , the class can be determined by evaluating these FOC formulas, and then the label is determined. Therefore, any NLM of maximum arity $B + 1$ can be realized by a NLM of maximum arity $B + 1$ and depth $O\left( {n}^{B}\right)$ .
+
+### C.3 Graph Problems
+
+| | Classification Tasks | Regression Tasks |
+| --- | --- | --- |
+| $B = 4$ | 4-Clique Detection NLM$[O(1), 4]$ | 4-Clique Count NLM$[O(1), 4]$ |
+| $B = 3$ | Triangle Detection NLM$[O(1), 3]$; Bipartiteness NLM$[O(\log n), 3]^{\star}$; All-Pair Connectivity NLM$[O(\log n), 3]^{\star}$; All-Pair Connectivity-$k$ NLM$[O(\log k), 3]^{\star}$ | All-Pair Distance NLM$[O(\log n), 3]^{\star}$ |
+| $B = 2$ | ${\mathrm{FOC}}_{2}$ Realization NLM$[\cdot, 2]$ [12]; 3/4-Link Detection NLM$[O(1), 2]$; S-T Connectivity NLM$[O(n), 2]$; S-T Connectivity-$k$ NLM$[O(k), 2]$ | S-T Distance NLM$[O(n), 2]$; Max Degree NLM$[O(1), 2]$; Max Flow NLM$[O(n^{3}), 2]^{\star}$ |
+| $B = 1$ | Node Color Majority NLM$[O(1), 1]$ | Count Red Nodes NLM$[O(1), 1]$ |
+
+Table 1: The minimum depth and arity of NLMs for solving graph classification and regression tasks. The $\star$ symbol indicates that these are conjectured lower bounds.
+
+We list a number of example graph classification and regression tasks, providing their definitions and the best currently known NLMs for learning them from data. For some of the problems, we also show why they cannot be solved by simpler models, or mark this as an open problem.
+
+Node Color Majority. Each node is assigned a color $c \in \mathcal{C}$ where $\mathcal{C}$ is a finite set of all colors. The model needs to predict which color the most nodes have.
+
+Using a single layer with sum aggregation, the model can count the number of nodes of color $c$ for each $c \in \mathcal{C}$ on its global representation.
+
+Count Red Nodes. Each node is assigned a color of red or blue. The model needs to count the number of red nodes.
+
+Similarly, using a single layer with sum aggregation, the model can count the number of red nodes on its global representation.
+
+3-Link Detection. Given an unweighted, undirected graph, the model needs to detect whether there is a triple of nodes $(a, b, c)$ such that $a \neq c$ and $(a, b)$ and $(b, c)$ are edges.
+
+This is equivalent to checking whether there exists a node with degree at least 2. We can use a Reduce operation with sum aggregation to compute the degree of each node, and then use a Reduce operation with max aggregation to check whether the maximum node degree is greater than or equal to 2.
+
+Note that this cannot be done with 1 layer, because the edge information is necessary for the problem and requires at least 2 layers to be passed to the global representation.
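+
+As an illustration, this computation amounts to the following two aggregation steps over the adjacency matrix (a NumPy transcription, not an NLM implementation):
+
+```python
+import numpy as np
+
+def has_3_link(adj):
+    """3-link detection: sum-reduce to node degrees, then max-reduce to a global answer."""
+    degrees = adj.sum(axis=1)     # layer 1: sum aggregation over neighbors
+    return degrees.max() >= 2     # layer 2: max aggregation to the global representation
+
+path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])    # path a-b-c contains a 3-link
+print(has_3_link(path))                               # True
+print(has_3_link(np.array([[0, 1], [1, 0]])))         # a single edge does not: False
+```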
+
+4-Link Detection. Given an unweighted, undirected graph, the model needs to detect whether there is a 4-tuple of nodes $(a, b, c, d)$ such that $a \neq c$, $b \neq d$, and $\left( {a, b}\right) ,\left( {b, c}\right) ,\left( {c, d}\right)$ are edges (note that a triangle is also a 4-link).
+
+This problem is equivalent to checking whether there is an edge between two nodes that both have degree $\geq 2$. We can first reduce the edge information to compute the degree of each node, and then expand it back to 2-dimensional representations, so that we can check for each edge whether the degrees of its endpoints are $\geq 2$. The results are then reduced to the global representation with an existential quantifier (realized by max aggregation) in 2 layers.
+
+Triangle Detection. Given an unweighted, undirected graph, the model is asked to determine whether there is a triangle in the graph, i.e., a tuple $(a, b, c)$ such that $\left( {a, b}\right) ,\left( {b, c}\right) ,\left( {c, a}\right)$ are all edges.
+
+This problem can be solved by NLM $[4, 3]$: we first expand the edge information to 3-dimensional representations and determine for each 3-tuple whether it forms a triangle. The results over 3-tuples require 3 layers to be passed to the global representation.
+
+We can prove that Triangle Detection indeed requires arity at least 3. Let $k$-regular graphs be graphs where each node has degree $k$. Consider two $k$-regular graphs, both with $n$ nodes, such that exactly one of them contains a triangle ${}^{1}$. NLMs of arity 2 have been proven to be no stronger than the WL test at distinguishing graphs, and thus cannot distinguish these two graphs (the WL test cannot distinguish any two $k$-regular graphs of equal size).
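+
+For intuition, the arity-3 computation can be written directly over the adjacency matrix: mark every 3-tuple that forms a triangle, then reduce to a global answer (an illustrative NumPy sketch, not the trained model).
+
+```python
+import numpy as np
+
+def has_triangle(adj):
+    """Triangle detection: an arity-3 'expand and check', followed by three reductions."""
+    is_tri = np.einsum('ab,bc,ca->abc', adj, adj, adj)   # entry (a, b, c) is 1 iff a triangle
+    return bool(is_tri.max() > 0)
+
+triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
+cycle6 = np.roll(np.eye(6, dtype=int), 1, axis=1)
+cycle6 = cycle6 + cycle6.T                               # 2-regular and triangle-free
+print(has_triangle(triangle), has_triangle(cycle6))      # True False
+```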
+
+4-Clique Detection and Counting. Given an undirected graph, check the existence of, or count the number of, tuples $(a, b, c, d)$ such that there is an edge between every pair of nodes in the tuple.
+
+This problem can be easily solved by an NLM with arity 4 that first expands the edge information to the 4-dimensional representations and, for each tuple, determines whether it is a 4-clique. The information of all 4-tuples is then reduced 4 times to the global representation (sum aggregation can be used for counting).
+
+Though we did not find an explicit counter-example construction for detecting 4-cliques with NLMs of arity 3, we conjecture that this problem cannot be solved by NLMs of arity 3 or lower.
+
+Connectivity. The connectivity problems are defined on unweighted, undirected graphs. The S-T connectivity problem provides two nodes $S$ and $T$ (labeled with specific colors), and the model needs to predict whether they are connected by some path. The all-pair connectivity problem requires the model to answer this for every pair of nodes. Connectivity-$k$ problems have the additional requirement that the distance between the pair of nodes cannot exceed $k$.
+
+S-T connectivity-$k$ can be solved by an NLM of arity 2 with $k$ iterations. Assume $S$ is colored with color $c$; at every iteration, every node with color $c$ spreads the color to its neighbors. After $k$ iterations, it is then sufficient to check whether $T$ has the color $c$.
+
+With NLMs of arity 3, we can use $O\left( {\log k}\right)$ matrix multiplications to solve connectivity-$k$ between every pair of nodes. Since matrix multiplication can naturally be realized by NLMs of arity 3 with two layers, the all-pair connectivity problems can be solved with $O\left( {\log k}\right)$ layers.
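+
+The squaring idea can be illustrated directly on the adjacency matrix; the sketch below computes all-pair connectivity with $O(\log n)$ squarings of the reachability matrix (an illustration of the squaring trick only, not the trained model; an exact connectivity-$k$ answer would instead threshold min-plus distances, as in the Distance problems below).
+
+```python
+import numpy as np
+
+def all_pair_connectivity(adj):
+    """All-pair connectivity via O(log n) squarings of the reachability matrix."""
+    n = adj.shape[0]
+    reach = ((adj + np.eye(n)) > 0).astype(int)      # reachable within one hop (or self)
+    for _ in range(int(np.ceil(np.log2(max(n, 2))))):
+        reach = ((reach @ reach) > 0).astype(int)    # each squaring doubles the radius
+    return reach.astype(bool)
+
+# two disjoint paths: 0-1-2 and 3-4
+adj = np.zeros((5, 5))
+for a, b in [(0, 1), (1, 2), (3, 4)]:
+    adj[a, b] = adj[b, a] = 1
+print(all_pair_connectivity(adj)[0, 2], all_pair_connectivity(adj)[0, 3])   # True False
+```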
+
+Theorem C.1 (S-T connectivity-$k$ with NLM). S-T connectivity-$k$ cannot be solved by an NLM of maximum arity 2 within $o\left( k\right)$ iterations.
+
+Proof. We construct two graphs, each with ${2k}$ nodes ${u}_{1},\cdots ,{u}_{k},{v}_{1},\cdots ,{v}_{k}$. In both graphs, there are edges $\left( {{u}_{i},{u}_{i + 1}}\right)$ and $\left( {{v}_{i},{v}_{i + 1}}\right)$ for $1 \leq i \leq k - 1$, i.e., two chains of length $k$. We then set $S = {u}_{1}, T = {u}_{k}$ in the first graph and $S = {u}_{1}, T = {v}_{k}$ in the second.
+
+We will analyze GNNs, as NLMs are proven to be equivalent to them up to scaling the depth by a constant factor. Now consider the node refinement process where each node $x$ is refined by the multiset of labels of $x$’s neighbors and the multiset of labels of $x$’s non-neighbors.
+
+Let ${C}_{j}^{\left( i\right) }\left( x\right)$ be the label of $x$ in graph $j$ after $i$ iterations, at the beginning, WLOG, we have
+
+$$
+{C}_{1}^{\left( 0\right) }\left( {u}_{1}\right) = 1,\;{C}_{1}^{\left( 0\right) }\left( {u}_{k}\right) = 2,\qquad {C}_{2}^{\left( 0\right) }\left( {u}_{1}\right) = 1,\;{C}_{2}^{\left( 0\right) }\left( {v}_{k}\right) = 2,
+$$
+
+and all other nodes are labeled as 0 .
+
+Then we can prove by induction: after $i \leq \frac{k}{2} - 1$ iterations, for $1 \leq t \leq i + 1$ we have
+
+$$
+{C}_{1}^{\left( i\right) }\left( {u}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {u}_{t}\right) ,\quad {C}_{1}^{\left( i\right) }\left( {v}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {v}_{t}\right)
+$$
+
+
+$$
+{C}_{1}^{\left( i\right) }\left( {u}_{k - t + 1}\right) = {C}_{2}^{\left( i\right) }\left( {v}_{k - t + 1}\right) ,\quad {C}_{1}^{\left( i\right) }\left( {v}_{k - t + 1}\right) = {C}_{2}^{\left( i\right) }\left( {u}_{k - t + 1}\right)
+$$
+
+---
+
+${}^{1}$ Such constructions are common. One example is $k = 2, n = 6$, where the graph may consist of either two separate triangles or one hexagon.
+
+---
+
+and for $i + 2 \leq t \leq k - i - 1$ we have
+
+$$
+{C}_{1}^{\left( i\right) }\left( {u}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {u}_{t}\right) ,\quad {C}_{1}^{\left( i\right) }\left( {v}_{t}\right) = {C}_{2}^{\left( i\right) }\left( {v}_{t}\right)
+$$
+
+This is true because, before $\frac{k}{2}$ iterations have been run, the multiset of all node labels is identical for the two graphs (call it ${S}^{\left( i\right) }$). Hence each node $x$ is effectively refined by its neighbors and ${S}^{\left( i\right) }$, where ${S}^{\left( i\right) }$ is the same for all nodes. Therefore, before $\frac{k}{2}$ iterations have been run, which is when the messages from $S$ and $T$ first meet in the first graph, a GNN cannot distinguish the two graphs, and thus cannot solve connectivity with distance $k - 1$.
+
+Max Degree. The max degree problem gives a graph and asks the model to output the maximum degree of its nodes.
+
+As mentioned for 3-link detection, one layer for computing the degree of each node and another layer for taking the max over nodes are sufficient.
+
+Max Flow. The max flow problem gives a directed graph with capacities on its edges and indicates two nodes $S$ and $T$. The model is then asked to compute the amount of max-flow from $S$ to $T$.
+
+Notice that the Breadth First Search (BFS) component in Dinic's algorithm [32] can be implemented on NLMs, as it does not require node identities (all newly visited nodes can augment to their unvisited neighbors in parallel). Since the BFS runs for $O\left( n\right)$ iterations and Dinic's algorithm runs BFS $O\left( {n}^{2}\right)$ times, max-flow can be solved by NLMs within $O\left( {n}^{3}\right)$ iterations.
+
+Distance. Given a graph with weighted edges, compute the length of the shortest path between a specified node pair (S-T Distance) or between all node pairs (All-Pair Distance).
+
+These are similar to the connectivity problems, except that the distance problems additionally record the minimum distance from $S$ (for S-T) or between every pair of nodes (for all-pair), which can be updated using the min operator (using min-plus matrix multiplication in the all-pair case).
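+
+For concreteness, the all-pair case can be sketched with min-plus matrix squaring (illustrative NumPy only; $O(\log n)$ squarings suffice because shortest paths have at most $n - 1$ edges).
+
+```python
+import numpy as np
+
+def min_plus(A, B):
+    """Min-plus product: (A * B)[i, j] = min_m A[i, m] + B[m, j]."""
+    return np.min(A[:, :, None] + B[None, :, :], axis=1)
+
+def all_pair_distance(weights):
+    """All-pair shortest distances via repeated min-plus squaring."""
+    dist = weights.copy()
+    np.fill_diagonal(dist, 0.0)
+    for _ in range(int(np.ceil(np.log2(max(weights.shape[0], 2))))):
+        dist = min_plus(dist, dist)
+    return dist
+
+INF = np.inf
+w = np.array([[INF, 2.0, INF],
+              [2.0, INF, 3.0],
+              [INF, 3.0, INF]])
+print(all_pair_distance(w)[0, 2])   # 5.0
+```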
+
+## D Experiments
+
+We now study how our theoretical results on model expressiveness and learning apply to relational neural networks trained with gradient descent on practically meaningful problems. We begin by describing two synthetic benchmarks: graph substructure detection and relational reasoning.
+
+In the graph substructure detection dataset, there are several tasks of predicting whether the input graph contains a sub-graph with a specific structure. The tasks are: 3-link (length-3 path), 4-link, triangle, and 4-clique. These are important graph properties with many potential applications.
+
+The relational reasoning dataset is composed of two family-relationship prediction tasks and two connectivity-prediction tasks. They are all binary edge classification tasks. In the family-relationship prediction task, the input contains the mother and father relationships, and the task is to predict the grandparent and uncle relationships between all pairs of entities. In the connectivity-prediction tasks, the input is the set of edges of an undirected graph, and the task is to predict, for all pairs of nodes, whether they are connected by a path of length $\leq 4$ (connectivity-4) and whether they are connected by a path of arbitrary length (connectivity). The data generation for all datasets is described in Appendix D.1.
+
+### D.1 Experiment Setup
+
+For all problems, we have 800 training samples, 100 validation samples, and 300 test samples for each different $n$ we are testing the models on.
+
+We then provide the details of how we synthesize the data. For most of the problems, we generate the graph by randomly selecting from all potential edges, i.e., the Erdős-Rényi model. We sample the number of edges around $n$, ${2n}$, $n\log n$, and ${n}^{2}/2$. For all problems, with ${50}\%$ probability the graph is first divided into 2, 3, 4, or 5 components of equal size, where we use the first generated component to fill in the edges of the remaining components; some random edges are added afterwards. This makes the data contain more isomorphic sub-graphs, which we found empirically challenging.
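+
+The following Python sketch mirrors this generation procedure; the exact edge-count handling and the number of extra random edges are assumptions made for illustration.
+
+```python
+import numpy as np
+
+def random_edges(n, m, rng):
+    """m distinct undirected edges chosen uniformly among all node pairs."""
+    pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
+    idx = rng.choice(len(pairs), size=min(m, len(pairs)), replace=False)
+    return {pairs[i] for i in idx}
+
+def generate_graph(n, rng):
+    """Sample an Erdos-Renyi-style graph, optionally built from duplicated components."""
+    m = int(rng.choice([n, 2 * n, n * np.log(n), n ** 2 / 2]))      # target edge count
+    if rng.random() < 0.5:                                          # duplicated-component variant
+        parts = int(rng.choice([2, 3, 4, 5]))
+        size = n // parts
+        block = random_edges(size, max(1, m // parts), rng)         # first component
+        edges = {(a + p * size, b + p * size) for p in range(parts) for a, b in block}
+        edges |= random_edges(n, max(1, m // 10), rng)              # a few extra random edges
+    else:
+        edges = random_edges(n, m, rng)
+    return edges
+
+print(len(generate_graph(10, np.random.default_rng(0))))
+```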
+
+Substructure Detection. To generate a graph that does not contain a certain substructure, we randomly add edges until we reach a maximal graph that does not contain the substructure or until we reach the edge limit.
+
+| Model | Agg. | 3-link ($n{=}10$) | 3-link ($n{=}30$) | 4-link ($n{=}10$) | 4-link ($n{=}30$) | triangle ($n{=}10$) | triangle ($n{=}30$) | 4-clique ($n{=}10$) | 4-clique ($n{=}30$) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1-ary GNN | Max | $70.0 \pm 0.0$ | $82.7 \pm 0.0$ | $92.0 \pm 0.0$ | $91.7 \pm 0.0$ | $73.7 \pm 3.2$ | $50.2 \pm 1.8$ | $55.3 \pm 4.0$ | $46.2 \pm 1.3$ |
+| | Sum | $100.0 \pm 0.0$ | $89.4 \pm 0.4$ | $100.0 \pm 0.0$ | $86.1 \pm 1.2$ | $77.7 \pm 8.5$ | $48.6 \pm 1.6$ | $53.7 \pm 0.6$ | $55.2 \pm 0.8$ |
+| 2-ary NLM | Max | $65.3 \pm 0.6$ | $54.0 \pm 0.6$ | $93.0 \pm 0.0$ | $95.7 \pm 0.0$ | $51.0 \pm 1.7$ | $49.2 \pm 0.4$ | $55.0 \pm 0.0$ | $45.7 \pm 0.0$ |
+| | Sum | $100.0 \pm 0.0$ | $88.3 \pm 0.0$ | $100.0 \pm 0.0$ | $67.4 \pm 16.4$ | $82.0 \pm 2.6$ | $48.3 \pm 0.0$ | $53.0 \pm 0.0$ | $54.4 \pm 1.5$ |
+| 2-ary GNN | Max | $78.7 \pm 0.6$ | $76.0 \pm 17.3$ | $97.7 \pm 4.0$ | $98.6 \pm 2.5$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $55.0 \pm 0.0$ | $45.7 \pm 0.0$ |
+| | Sum | $100.0 \pm 0.0$ | $51.2 \pm 7.9$ | $100.0 \pm 0.0$ | $45.7 \pm 7.6$ | $100.0 \pm 0.0$ | $49.2 \pm 1.0$ | $61.0 \pm 5.6$ | $54.3 \pm 0.0$ |
+| 3-ary NLM | Max | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $59.0 \pm 6.9$ | $45.9 \pm 0.4$ |
+| | Sum | $100.0 \pm 0.0$ | $87.6 \pm 11.0$ | $100.0 \pm 0.0$ | $65.4 \pm 14.3$ | $100.0 \pm 0.0$ | $80.6 \pm 8.8$ | $73.7 \pm 13.8$ | $53.3 \pm 8.8$ |
+| 3-ary GNN | Max | $79.0 \pm 0.0$ | $86.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $84.0 \pm 0.0$ | $93.3 \pm 0.0$ |
+| | Sum | $100.0 \pm 0.0$ | $84.1 \pm 18.6$ | $100.0 \pm 0.0$ | $61.1 \pm 15.0$ | $100.0 \pm 0.0$ | $95.1 \pm 7.3$ | $80.5 \pm 0.7$ | $66.2 \pm 19.6$ |
+| 4-ary NLM | Max | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $82.0 \pm 1.7$ | $93.1 \pm 0.2$ |
+| | Sum | $100.0 \pm 0.0$ | $59.1 \pm 5.3$ | $100.0 \pm 0.0$ | $67.7 \pm 24.1$ | $100.0 \pm 0.0$ | $82.1 \pm 12.8$ | $84.0 \pm 0.0$ | $67.0 \pm 18.9$ |
+
+Table 2: Overall accuracy on graph substructure detection problems. All models are trained on $n = {10}$ and tested on both $n = {10}$ and $n = {30}$. The standard error of all values is computed based on three random seeds.
+
+For generating a graph that does contain the substructure, we first generate one that does not, and then randomly replace present edges with missing edges until we detect the substructure in the graph. This aims to change the label from "No" to "Yes" while minimizing the change to the overall graph properties; we found that data generated using this edge replacement is much more difficult for neural networks than graphs generated randomly from scratch.
+
+Family Tree. We generate the family trees using an algorithm modified from [5]. We add people to the family one by one. When a person is added, with probability $p$ we try to find a single woman and a single man, marry them, and let the new person be their child; otherwise, the new person is introduced as an unrelated person. Every new person is marked as single, and their gender is set with a coin flip.
+
+We adjust $p$ based on the fraction of the population that is single: $p = {0.7}$ when more than ${40}\%$ of the population is single, $p = {0.3}$ when less than ${20}\%$ is single, and $p = {0.5}$ otherwise.
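+
+The procedure can be sketched as follows (illustrative Python only; details such as the exact marriage rule and tie-breaking are assumptions).
+
+```python
+import random
+
+def generate_family(num_people, seed=0):
+    """Grow a family tree person by person, following the procedure described above."""
+    rng = random.Random(seed)
+    people = []                          # each person: dict(gender, single, mother, father)
+    for _ in range(num_people):
+        singles = [i for i, p in enumerate(people) if p["single"]]
+        frac = len(singles) / max(len(people), 1)
+        p = 0.7 if frac > 0.4 else 0.3 if frac < 0.2 else 0.5
+        person = {"gender": rng.choice("MF"), "single": True, "mother": None, "father": None}
+        women = [i for i in singles if people[i]["gender"] == "F"]
+        men = [i for i in singles if people[i]["gender"] == "M"]
+        if rng.random() < p and women and men:
+            m, f = rng.choice(women), rng.choice(men)
+            people[m]["single"] = people[f]["single"] = False     # marry them
+            person["mother"], person["father"] = m, f             # the new person is their child
+        people.append(person)
+    return people
+
+print(len(generate_family(20)))
+```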
+
+Connectivity. For the connectivity problems, we use a generation method similar to that of substructure detection. We sample the query pairs so that the labels are balanced.
+
+### D.2 Model Implementation Details
+
+For all models, we use a hidden dimension of 128, except for the 3-dimensional HO-GNN and the 4-dimensional NLM, where we use a hidden dimension of 64.
+
+All models have 4 layers, each with its own parameters, except for connectivity, where we use recurrent models that apply the second layer $k$ times, with $k$ sampled from the integers in $\left\lbrack {2\log n,3\log n}\right\rbrack$. These depths are proven to be sufficient for solving the problems (unless the model itself cannot solve them).
+
+All models are trained for 100 epochs using the Adam optimizer with learning rate $3 \times {10}^{-4}$, decayed at epochs 50 and 80.
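+
+A hedged PyTorch sketch of this schedule is shown below; the decay factor of 0.1, the loss function, and the `model`/`train_loader` objects are assumptions made for illustration.
+
+```python
+import torch
+
+def train(model, train_loader, epochs=100):
+    """Adam at 3e-4 with step decay at epochs 50 and 80, as described above."""
+    optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
+    # decay factor 0.1 is an assumption; the text only states when the decay happens
+    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 80], gamma=0.1)
+    loss_fn = torch.nn.BCEWithLogitsLoss()      # binary classification tasks (assumed)
+    for _ in range(epochs):
+        for inputs, labels in train_loader:
+            optimizer.zero_grad()
+            loss = loss_fn(model(inputs), labels)
+            loss.backward()
+            optimizer.step()
+        scheduler.step()
+```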
+
+We have varied the depth, the hidden dimension, and the activation function of the different models. We select a sufficient hidden dimension and depth for every model and problem (i.e., we stop when increasing the depth or hidden dimension no longer increases the accuracy). We tried linear, ReLU, and Sigmoid activation functions, and ReLU performed best over all combinations of models and tasks.
+
+### D.3 Results
+
+Our main results on all datasets are shown in Table 2 and Table 3. We empirically compare relational neural networks with different maximum arities $B$, different model architectures (GNN and NLM), and different aggregation functions (max and sum). All models use sigmoidal activation for all MLPs. For each task on both datasets, we train on a set of small graphs and test the trained model on both graphs of the training size and larger graphs (the specific sizes are given in the table captions). We summarize the findings below.
+
+| Model | Agg. | grandparent ($n{=}20$) | grandparent ($n{=}80$) | uncle ($n{=}20$) | uncle ($n{=}80$) | connectivity-4${}^{\parallel}$ ($n{=}10$) | connectivity-4 ($n{=}80$) | connectivity ($n{=}10$) | connectivity ($n{=}80$) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1-ary GNN | Max | $84.0 \pm 0.3$ | $64.8 \pm 0.0$ | $93.6 \pm 0.3$ | $66.1 \pm 0.0$ | $72.6 \pm 3.6$ | $67.5 \pm 0.5$ | $85.6 \pm 0.3$ | $75.1 \pm 1.9$ |
+| | Sum | $84.7 \pm 0.1$ | $64.4 \pm 0.0$ | $94.3 \pm 0.2$ | $66.2 \pm 0.0$ | $79.6 \pm 0.1$ | $68.3 \pm 0.1$ | $87.1 \pm 0.3$ | $75.0 \pm 0.2$ |
+| 2-ary NLM | Max | $82.3 \pm 0.5$ | $65.6 \pm 0.1$ | $93.1 \pm 0.0$ | $66.6 \pm 0.0$ | $91.2 \pm 0.2$ | $51.0 \pm 0.6$ | $88.9 \pm 2.6$ | $67.1 \pm 4.8$ |
+| | Sum | $82.9 \pm 0.1$ | $64.6 \pm 0.1$ | $93.4 \pm 0.0$ | $66.7 \pm 0.2$ | $96.0 \pm 0.4$ | $68.3 \pm 0.5$ | $84.0 \pm 0.0$ | $71.9 \pm 0.0$ |
+| 2-ary GNN | Max | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $84.0 \pm 0.0$ | $71.9 \pm 0.0$ |
+| | Sum | $100.0 \pm 0.0$ | $35.7 \pm 0.0$ | $100.0 \pm 0.0$ | $33.9 \pm 0.0$ | $100.0 \pm 0.0$ | $51.3 \pm 5.3$ | $84.0 \pm 0.0$ | $71.9 \pm 0.0$ |
+| 3-ary NLM | Max | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.1$ |
+| | Sum | $100.0 \pm 0.0$ | $35.7 \pm 0.0$ | $100.0 \pm 0.0$ | $50.8 \pm 29.4$ | $100.0 \pm 0.0$ | $77.8 \pm 11.8$ | $100.0 \pm 0.0$ | $88.2 \pm 8.0$ |
+| 3-ary ${\mathrm{NLM}}_{\mathrm{HE}}$ | Max | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | $100.0 \pm 0.0$ | N/A | N/A | N/A | N/A |
+| | Sum | $100.0 \pm 0.0$ | $35.7 \pm 0.0$ | $100.0 \pm 0.0$ | $33.8 \pm 29.4$ | N/A | N/A | N/A | N/A |
+
+Table 3: Overall accuracy on relational reasoning problems. Models for family-relationship prediction are trained on $n = {20}$, while models for the connectivity problems are trained on $n = {10}$. All models are tested on $n = {80}$. The standard error of all values is computed based on three random seeds. The 3-ary NLMs marked with "HE" have hyperedges in their inputs, where each family is represented by a 3-ary hyperedge instead of two parent-child edges; the results are similar to those with binary edges.
+
+Expressiveness. We have shown a theoretical expressiveness equivalence between GNNs and NLMs applied to hypergraphs: a GNN applied to $B$-ary hyperedges is equivalent to a $\left( {B + 1}\right)$-ary NLM. Tables 2 and 3 further suggest that they perform similarly on these tasks when trained with gradient descent.
+
+Formally, triangle detection requires NLMs with arity at least $B = 3$. Accordingly, all NLMs with arity $B = 2$ fail on this task, while models with $B = 3$ perform well. Formally, 4-clique detection is realizable by NLMs with maximum arity $B = 4$, but we failed to reliably train models to reach perfect accuracy on this problem. It is not yet clear what the cause of this behavior is.
+
+Structural generalization. We discussed the structural generalization properties of NLMs in Section 4, in a learning setting based on fixed-precision networks and enumerative training. This setting can be approximated by training NLMs with max aggregation and sigmoidal activation on sufficient data.
+
+We run a case study on the connectivity-4 problem, examining how the generalization performance changes as the test graph size gradually becomes larger. Figure 2 shows how these models generalize as the graph size increases from 10 to 80. From the curves we can see that only models with sufficient expressiveness reach ${100}\%$ accuracy on graphs of the training size, and among them the models using max aggregation generalize to larger graphs with no performance drop. The 2-ary GNN and the 3-ary NLM that use max aggregation have sufficient expressiveness and better generalization: they achieve ${100}\%$ accuracy on the original graph size and generalize perfectly to larger graphs.
+
+
+
+Figure 2: How the performance of models drop when generalizing to larger graphs on the problem connectivity-4 (trained on graphs with size 10).
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..80a7539783bf90e68f3e96a4823fd0d5e127552f
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/4FlyRlNSUh/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,107 @@
+§ ON THE EXPRESSIVENESS AND GENERALIZATION OF HYPERGRAPH NEURAL NETWORKS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (Hyper-GNNs). Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes. Our first contribution is a fine-grained analysis of the expressiveness of Hyper-GNNs, that is, the set of functions that they can realize. Our result is a hierarchy of problems they can solve, defined in terms of various hyperparameters such as depths and edge arities. Next, we analyze the learning properties of these neural networks, especially focusing on how they can be trained on a finite set of small graphs and generalize to larger graphs, which we term structural generalization. Our theoretical results are further supported by the empirical results.
+
+§ 1 INTRODUCTION
+
+Reasoning over graph-structured data is an important task in many applications, including molecule analysis, social network modeling, and knowledge graph reasoning [1-3]. While we have seen great success of various relational neural networks, such as Graph Neural Networks [GNNs; 4] and Neural Logic Machines [NLM; 5], in a variety of applications [6-8], we do not yet have a full understanding of how different design parameters, such as the depth of the neural network, affect the expressiveness of these models, or how effectively these models generalize from limited data.
+
+This paper analyzes the expressiveness and generalization of relational neural networks applied to hypergraphs, which are graphs with edges connecting more than two nodes. We formally show the "if and only if" conditions for the expressive power with respect to the edge arity: $k$-ary hypergraph neural networks are sufficient and necessary for realizing ${\mathrm{FOC}}_{k}$, a fragment of first-order logic that involves at most $k$ variables. This is a helpful result because we can now determine whether a specific hypergraph neural network can solve a problem by understanding what form of logic formula can represent the solution to that problem. Next, we formally describe the relationship between expressiveness and non-constant-depth networks. We state a conjecture about the "depth hierarchy," and connect a potential proof of this conjecture to the distributed computing literature. Our results highlight that even when the inputs and outputs of models have only unary and binary relations, allowing intermediate hyperedge representations increases the expressiveness.
+
+Furthermore, we prove that, under certain realistic assumptions, it is possible to train a hypergraph neural network on a finite set of small graphs such that it generalizes to arbitrarily large graphs. This ability is a result of the weight-sharing nature of hypergraph neural networks. We hope our work can serve as a foundation for designing hypergraph neural networks: to solve a specific problem, what arity do you need? What depth do you need? Will your model generalize structurally (i.e., to larger graphs)? Our theoretical results on learning are further supported by experiments that empirically demonstrate the theorems.
+
+§ 2 HYPERGRAPH REASONING PROBLEMS AND HYPERGRAPH NEURAL NETWORKS
+
+A hypergraph representation $G$ is a tuple $(V, X)$, where $V$ is a set of entities (nodes), and $X$ is a set of hypergraph representation functions. Specifically, $X = \left\{ {{X}_{0},{X}_{1},{X}_{2},\cdots ,{X}_{k}}\right\}$, where ${X}_{j} : \left( {{v}_{1},{v}_{2},\cdots ,{v}_{j}}\right) \rightarrow \mathcal{S}$ is a function mapping every tuple of $j$ nodes to a value. We call $j$ the arity of the hyperedge, and $k$ is the max arity of the input hyperedges. The range $\mathcal{S}$ can be any set of discrete labels that describes a relation type, a scalar number (e.g., the length of an edge), or a vector. In general, we will use the arity-0 representation function ${X}_{0}\left( \varnothing \right) \rightarrow \mathcal{S}$ to represent any global properties of the graph as a whole.
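+
+As a concrete (and purely illustrative) data structure, a hypergraph representation can be stored as a set of nodes together with arity-indexed feature dictionaries; the Python class below is a sketch under these assumptions, not the paper's implementation.
+
+```python
+from dataclasses import dataclass, field
+from typing import Any, Dict, Tuple
+
+@dataclass
+class HypergraphRepresentation:
+    """A hypergraph representation G = (V, X): nodes plus arity-indexed feature functions."""
+    nodes: list
+    # X[j] maps each j-tuple of nodes to a value; X[0][()] holds global graph properties
+    X: Dict[int, Dict[Tuple, Any]] = field(default_factory=dict)
+
+    def max_arity(self) -> int:
+        return max(self.X, default=0)
+
+# a 3-node graph with one labeled binary relation and a global label
+G = HypergraphRepresentation(
+    nodes=[0, 1, 2],
+    X={0: {(): "not_connected"}, 2: {(0, 1): "edge", (1, 2): "edge"}},
+)
+print(G.max_arity())   # 2
+```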
+
+A graph reasoning function $f$ is a mapping from a hypergraph representation $G = \left( {V,X}\right)$ to another hyperedge representation function $Y$ on $V$. As concrete examples, asking whether a graph is fully connected is a graph classification problem, where the output $Y = \left\{ {Y}_{0}\right\}$ and ${Y}_{0}\left( \varnothing \right) \rightarrow {\mathcal{S}}^{\prime } = \{ 0,1\}$ is a global label; finding the set of disconnected subgraphs of size $k$ is a $k$-ary hyperedge classification problem, where the output $Y = \left\{ {Y}_{k}\right\}$ is a label for each $k$-ary hyperedge.
+
+There are two main motivations and constructions of a neural network applied to graph reasoning problems: message-passing-based and first-order-logic-inspired. Both approaches construct the computation graph layer by layer. The input to the entire neural network consists of the input features of nodes and hyperedges, while the output of the neural network is the per-node or per-edge prediction of desired properties, depending on the training task.
+
+In a nutshell, within each layer, message-passing-based hypergraph neural networks, such as Higher-Order GNNs [9], perform message passing between each hyperedge and its neighbours. Specifically, the $j$-th neighbour set of a hyperedge $u = \left( {{x}_{1},{x}_{2},\cdots ,{x}_{i}}\right)$ of arity $i$ is ${N}_{j}\left( u\right) = \left\{ \left( {{x}_{1},{x}_{2},\cdots ,{x}_{j - 1},r,{x}_{j + 1},\cdots ,{x}_{i}}\right) \right\}$, where $r \in V$. The set of all neighbours of hyperedge $u$ is then the union of all the ${N}_{j}$’s, for $j = 1,2,\cdots ,i$.
+
+On the other hand, first-order-logic-inspired hypergraph neural networks consider building neural networks that can emulate first-order logic formulas. Neural Logic Machines [NLM; 5] are defined in terms of a set of input hyperedges; each hyperedge of arity $k$ is represented by a vector of (possibly real) values obtained by applying all of the $k$-ary predicates in the domain to the tuple of vertices it connects. Each layer in an NLM learns to apply a linear transformation with a nonlinear activation and quantification operators (analogous to the for-all $\forall$ and exists $\exists$ quantifiers in first-order logic) to these values. It is easy to prove, by construction, that given a sufficient number of layers and maximum arity, NLMs can learn to realize any first-order-logic formula. For readers who are not familiar with HO-GNNs [9] and NLMs [5], we include a mathematical summary of their computation graphs in Appendix B. Our analysis starts from the following theorem.
+
+Theorem 2.1. HO-GNNs [9] are equivalent to NLMs in terms of expressiveness. Specifically, a B-ary HO-GNN is equivalent to an NLM applied to $B + 1$ -ary hyperedges. Proofs are in Appendix B.3.
+
+Given Theorem 2.1, we can focus our analysis on a single type of hypergraph neural network. Specifically, we will focus on Neural Logic Machines [NLM; 5] because their architecture naturally aligns with first-order logic formula structures, which will aid some of our analysis. An NLM is characterized by hyperparameters $D$ (depth) and $B$ (maximum arity). We assume that $B$ is a constant, but $D$ can depend on the size of the input graph. We will use $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ to denote the NLM family with depth $D$ and max arity $B$. Other parameters, such as the width of the neural networks, affect the precise details of which functions can be realized, as in a regular neural network, but do not affect the analyses in this extended abstract. Furthermore, we will focus on neural networks with bounded precision, and briefly discuss how our results generalize to the unbounded-precision case.
+
+§ 3 EXPRESSIVENESS OF RELATIONAL NEURAL NETWORKS
+
+We start from a formal definition of hypergraph neural network expressiveness.
+
+Definition 3.1 (Expressiveness). We say a model family ${\mathcal{M}}_{1}$ is at least as expressive as ${\mathcal{M}}_{2}$, written as ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$, if for all ${M}_{2} \in {\mathcal{M}}_{2}$ there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that ${M}_{1}$ can realize ${M}_{2}$. A model family ${\mathcal{M}}_{1}$ is more expressive than ${\mathcal{M}}_{2}$, written as ${\mathcal{M}}_{1} \succ {\mathcal{M}}_{2}$, if ${\mathcal{M}}_{1} \succcurlyeq {\mathcal{M}}_{2}$ and there exists ${M}_{1} \in {\mathcal{M}}_{1}$ such that no ${M}_{2} \in {\mathcal{M}}_{2}$ can realize ${M}_{1}$.
+
+Arity Hierarchy We first aim to quantify how the maximum arity $B$ of the network’s representation affects its expressiveness and find that, in short, even if the inputs and outputs of neural networks are of low arity, the higher the maximum arity for intermediate layers, the more expressive the NLM is.
+
+Theorem 3.1 (Arity Hierarchy). For any maximum arity $B$ , there exists a depth ${D}^{ * }$ such that: $\forall D \geq {D}^{ * },\operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ is more expressive than $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ . This theorem applies to both fixed-precision and unbounded-precision networks.
+
+Proof sketch: Our proof slightly extends the proof of Morris et al. [9]. First, the set of graphs distinguishable by $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ is bounded by the graphs distinguishable by a $D$-round order-$B$ Weisfeiler-Leman test [10]. If models in $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ cannot generate different outputs for two distinct hypergraphs ${G}_{1}$ and ${G}_{2}$, but there exists $M \in \operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ that can generate different outputs for ${G}_{1}$ and ${G}_{2}$, then we can construct a graph classification function $f$ that $\operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ (with some fixed precision) can realize but $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ (even with unbounded precision) cannot.* The full proof is described in Appendix C.1.
+
+It is also important to quantify the minimum arity for realizing certain graph reasoning functions.
+
+Theorem 3.2 (FOL realization bounds). Let ${\mathrm{{FOC}}}_{B}$ denote a fragment of first order logic with at most $B$ variables, extended with counting quantifiers of the form ${\exists }^{ \geq n}\phi$ , which state that there are at least $n$ nodes satisfying formula $\phi$ [11].
+
+ * (Upper Bound) Any function $f$ in ${\mathrm{{FOC}}}_{B}$ can be realized by $\mathrm{{NLM}}\left\lbrack {D,B}\right\rbrack$ for some $D$ .
+
+ * (Lower Bound) There exists a function $f \in {\mathrm{{FOC}}}_{B}$ such that for all $D,f$ cannot be realized by $\operatorname{NLM}\left\lbrack {D,B - 1}\right\rbrack$ .
+
+Proof: The upper bound part of the claim was proved by Barceló et al. [12] for $B = 2$. The result generalizes easily to arbitrary $B$ because the counting quantifiers can be realized by sum aggregation. The lower bound part can be proved by applying Section 5 of [11], in which it is shown that ${\mathrm{{FOC}}}_{B}$ is equivalent to the $(B - 1)$-dimensional WL test in distinguishing non-isomorphic graphs. Given that $\mathrm{{NLM}}\left\lbrack {D,B - 1}\right\rbrack$ is equivalent to the $(B - 2)$-dimensional WL test of graph isomorphism, there must be an ${\mathrm{{FOC}}}_{B}$ formula that distinguishes two non-isomorphic graphs that $\operatorname{NLM}\left\lbrack {D,B - 1}\right\rbrack$ cannot. Hence, ${\mathrm{{FOC}}}_{B}$ cannot be realized by $\mathrm{{NLM}}\left\lbrack {\cdot ,B - 1}\right\rbrack$.
+
+Depth Hierarchy We now study the dependence of the expressiveness of NLMs on depth $D$ . Neural networks are generally defined to have a fixed depth, but allowing them to have a depth that is dependent on the number of nodes $n = \left| V\right|$ in the graph can substantially increase their expressive power. In the following, we define a depth hierarchy by analogy to the time hierarchy in computational complexity theory [13], and we extend our notation to let $\operatorname{NLM}\left\lbrack {O\left( {f\left( n\right) }\right) ,B}\right\rbrack$ denote the class of adaptive-depth NLMs in which the growth-rate of depth $D$ is bounded by $O\left( {f\left( n\right) }\right)$ .
+
+Conjecture 3.3 (Depth hierarchy). For any maximum arity $B$ and any two functions $f$ and $g$, if $g\left( n\right) = o\left( {f\left( n\right) /\log n}\right)$, that is, $f$ grows faster than $g$ by more than a logarithmic factor, then fixed-precision $\operatorname{NLM}\left\lbrack {O\left( {f\left( n\right) }\right) ,B}\right\rbrack$ is more expressive than fixed-precision $\operatorname{NLM}\left\lbrack {O\left( {g\left( n\right) }\right) ,B}\right\rbrack$.
+
+There is a closely related result for the congested clique model in distributed computing, where [14] proved that $\operatorname{CLIQUE}\left( {g\left( n\right) }\right) \varsubsetneq \operatorname{CLIQUE}\left( {f\left( n\right) }\right)$ if $g\left( n\right) = o\left( {f\left( n\right) }\right)$ . This result does not have the $\log n$ gap because the congested clique model allows $\log n$ bits to transmit between nodes at each iteration, while fixed-precision NLM allows only a constant number of bits. The reason why the result on congested clique can not be applied to fixed-precision NLMs is that congested clique assumes unbounded precision representation for each individual node.
+
+However, Conjecture 3.3 is not true for NLMs with unbounded precision, because there is an upper bound depth $O\left( {n}^{B - 1}\right)$ for a model’s expressive power. ${}^{ \dagger }$ That is, an unbounded-precision NLM cannot achieve stronger expressiveness by increasing its depth beyond $O\left( {n}^{B - 1}\right)$ .
+
+It is important to point out that, to realize a specific graph reasoning function, NLMs with different maximum arity $B$ may require different depth $D$ . Fürer [15] provides a general construction for problems that higher-dimensional NLMs can solve in asymptotically smaller depth than lower-dimensional NLMs. In the following we give a concrete example for computing S-T Connectivity- $k$ , which asks whether there is a path of length $\leq k$ from $S$ to $T$ in a graph.
+
+Theorem 3.4 (S-T Connectivity- $k$ with Different Max Arity). For any function $f\left( k\right)$ , if $f\left( k\right) = o\left( k\right)$ , $\operatorname{NLM}\left\lbrack {O\left( {f\left( k\right) }\right) ,2}\right\rbrack$ cannot realize S-T Connectivity- $k$ . That is, S-T Connectivity- $k$ requires depth $\Omega \left( k\right)$ for a relational neural network with a maximum arity of $B = 2$ . However, S-T Connectivity- $k$ can be realized by $\operatorname{NLM}\left\lbrack {O\left( {\log k}\right) ,3}\right\rbrack$ .
+
+Proof sketch. For any integer $k$ , we can construct a graph with two chains of length $k$ , so that if we mark two of the four ends as $S$ or $T$ , any $\operatorname{NLM}\left\lbrack {k - 1,2}\right\rbrack$ cannot tell whether $S$ and $T$ are on the same chain. The full proof is described in Appendix C.3.
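+
+To give some intuition for the $O\left( {\log k}\right)$ upper bound with arity 3, the sketch below (ours, not the NLM construction itself) decides S-T Connectivity- $k$ with $O\left( {\log k}\right)$ boolean matrix products; each product joins over an intermediate node, which is exactly the kind of ternary relation a maximum-arity-3 model can maintain:
+
+```python
+import numpy as np
+
+def bool_mult(x: np.ndarray, y: np.ndarray) -> np.ndarray:
+    # Boolean matrix product: entry (u, v) is 1 iff some w links u -> w -> v.
+    return ((x @ y) > 0).astype(int)
+
+def st_connectivity_k(adj: np.ndarray, s: int, t: int, k: int) -> bool:
+    """True iff nodes s and t are joined by a path of length at most k."""
+    n = adj.shape[0]
+    step = ((adj > 0) | np.eye(n, dtype=bool)).astype(int)  # reachable within <= 1 hop
+    reach = np.eye(n, dtype=int)                            # reachable within 0 hops
+    while k > 0:  # exponentiation by squaring: O(log k) products overall
+        if k & 1:
+            reach = bool_mult(reach, step)
+        step = bool_mult(step, step)
+        k >>= 1
+    return bool(reach[s, t])
+```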
+
+There are many important graph reasoning tasks that do not have known depth lower bounds, including all-pair connectivity and shortest distance [16, 17]. In Appendix C.3, we discuss the concrete complexity bounds for a series of graph reasoning problems.
+
+${}^{ * }$ Note that the arity hierarchy applies to fixed-precision and unbounded-precision models separately. For example, $\operatorname{NLM}\left\lbrack {D,B}\right\rbrack$ with unbounded precision is incomparable with $\operatorname{NLM}\left\lbrack {D,B + 1}\right\rbrack$ with fixed precision.
+
+${}^{ \dagger }$ See Appendix C.2 for a formal statement and the proof.
+
+§ 4 LEARNING AND GENERALIZATION IN RELATIONAL NEURAL NETWORKS
+
+Given our understanding of what functions can be realized by NLMs, we move on to the problem of learning them: can we effectively learn an NLM that solves a desired task, given a sufficient number of input-output examples? In this paper, we show that applying enumerative training with examples up to some fixed graph size can ensure that the trained neural network generalizes to all graphs, including those larger than any appearing in the training set.
+
+A critical determinant of the generalization ability for NLMs is the aggregation function they use. Specifically, Xu et al. [18] have shown that using sum as the aggregation function provides maximum expressiveness for graph neural networks. However, sum aggregation cannot be implemented in fixed-precision models with an arbitrary number of nodes, because as the graph size $n$ increases, the range of the sum aggregation also increases.
+
+Definition 4.1 (Fixed-precision aggregation function). An aggregation function is fixed precision if it maps from any finite set of inputs with values drawn from finite domains to a fixed finite set of possible output values; that is, the cardinality of the range of the function cannot grow with the number of elements in the input set. Two useful fixed-precision aggregation functions are max, which computes the dimension-wise maximum over the set of input values, and fixed-precision mean, which approximates the dimension-wise mean to a fixed decimal place.
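+
+As an illustration (ours, with hypothetical function names), the first two aggregators below are fixed precision in the sense of Definition 4.1, whereas sum is not, because the range of its output grows with the number of inputs:
+
+```python
+import numpy as np
+
+def max_aggregate(values: np.ndarray) -> np.ndarray:
+    # Dimension-wise maximum: outputs stay inside the (finite) input value domain.
+    return values.max(axis=0)
+
+def fixed_precision_mean(values: np.ndarray, decimals: int = 2) -> np.ndarray:
+    # Dimension-wise mean rounded to a fixed decimal place, so the number of
+    # possible outputs is bounded independently of how many inputs there are.
+    return np.round(values.mean(axis=0), decimals)
+
+def sum_aggregate(values: np.ndarray) -> np.ndarray:
+    # Not fixed precision: the output range grows with the size of the input set.
+    return values.sum(axis=0)
+```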
+
+In order to focus on structural generalization in this section, we consider an enumerative training paradigm. When the input hypergraph representation domain $\mathcal{S}$ is a finite set, we can enumerate the set ${\mathcal{G}}_{ \leq N}$ of all possible input hypergraph representations of size at most $N$ . We first enumerate all graph sizes $n \leq N$ ; for each $n$ , we enumerate all possible values assigned to the hyperedges in the input. Given training size $N$ , we enumerate all inputs in ${\mathcal{G}}_{ \leq N}$ , associate with each one the corresponding ground-truth output representation, and train the model with these input-output pairs.
+
+This has much stronger data requirements than the standard sampling-based training mechanisms in machine learning. In practice, it can be approximated well when the input domain $\mathcal{S}$ is small and the input data distribution is approximately uniform. The enumerative learning setting is studied by the language identification in the limit community [19], in which it is called complete presentation. This is an interesting learning setting because even if the domain for each individual hyperedge representation is finite, the number of possible inputs is enumerable but unbounded, since the graph size can grow arbitrarily large.
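+
+For concreteness, here is a minimal sketch (ours, with a hypothetical function name) of complete presentation in the simplest case, binary labels on arity-2 hyperedges, i.e. simple undirected graphs:
+
+```python
+from itertools import combinations, product
+
+def enumerate_graphs_up_to(max_size: int):
+    """Yield every simple undirected graph with at most max_size nodes as an
+    (n, edge_set) pair; pairing each with its ground-truth output gives the
+    complete-presentation training set for graphs of size <= max_size."""
+    for n in range(1, max_size + 1):
+        pairs = list(combinations(range(n), 2))
+        for bits in product((0, 1), repeat=len(pairs)):
+            yield n, {p for p, bit in zip(pairs, bits) if bit}
+
+# Example: there are 1 + 2 + 8 = 11 such graphs with at most 3 nodes.
+print(sum(1 for _ in enumerate_graphs_up_to(3)))  # 11
+```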
+
+Theorem 4.1 (Fixed-precision generalization under complete presentation). For any hypergraph reasoning function $f$ , if it can be realized by a fixed-precision relational neural network model family $\mathcal{M}$ , then there exists an integer $N$ , such that if we train the model with complete presentation on all input hypergraph representations of size at most $N$ , i.e., ${\mathcal{G}}_{ \leq N}$ , then for all $M \in \mathcal{M}$ ,
+
+$$
+\mathop{\sum }\limits_{{G \in {\mathcal{G}}_{ \leq N}}}1\left\lbrack {M\left( G\right) \neq f\left( G\right) }\right\rbrack = 0 \Rightarrow \forall G \in {\mathcal{G}}_{\infty } : M\left( G\right) = f\left( G\right) .
+$$
+
+That is, as long as $M$ fits all training examples, it will generalize to all possible hypergraphs in ${\mathcal{G}}_{\infty }$ .
+
+Proof. The key observation is that for any fixed vector representation length $W$ , there are only a finite number of distinct models in a fixed-precision NLM family, independent of the graph size $n$ . Let ${W}_{b}$ be the number of bits in each intermediate representation of a fixed-precision NLM. There are at most ${\left( {2}^{{W}_{b}}\right) }^{{2}^{{W}_{b}}}$ different mappings from inputs to outputs. Hence, if $N$ is sufficiently large, enumerating all input hypergraphs of size at most $N$ suffices to identify the correct model in the hypothesis space.
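+
+For illustration (our arithmetic, not part of the proof), with ${W}_{b} = 2$ bits per intermediate representation there are at most ${\left( {2}^{2}\right) }^{{2}^{2}} = {4}^{4} = {256}$ such mappings, independent of how large the input graphs are.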
+
+Our results are related to the algorithmic alignment approach [20, 21]. In contrast to their Probably Approximately Correct (PAC) learning bounds for sample efficiency, our expressiveness results directly quantify whether a hypergraph neural network can be trained to realize a specific function. Our generalization theorem applies more generally than their result on learning the Max-Degree function, due to the assumption of fixed precision.
+
+§ 5 CONCLUSION
+
+In this extended abstract, we have shown the substantial increase in expressive power due to higher-arity relations and increased depth, and have characterized very powerful structural generalization from training on small graphs to performance on larger ones. We further discuss the relationship between these results and existing results in Appendix A. All theoretical results are further supported by empirical results, discussed in Appendix D. Although many questions remain open about the overall generalization capacity of these models in continuous and noisy domains, we believe this work has shed some light on their utility and potential for application in a variety of problems.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..027e9e3a9862cb424681057453f0621a03d22dc7
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,343 @@
+# Beyond 1-WL with Local Ego-Network Encodings
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Identifying similar network structures is key to capture graph isomorphisms and learn representations that exploit structural information encoded in graph data. This work shows that ego-networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler-Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego-networks into sparse vectors that enrich Message Passing (MP) Graph Neural Networks (GNNs) beyond 1-WL expressivity. We describe formally the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on seven GNN architectures.
+
+## 1 Introduction
+
+Novel approaches for learning on graphs have appeared in recent years within the machine learning community [1]. Notably, the introduction of Graph Convolutional Networks [2, 3] led to a broad body of research aiming to efficiently capture network interactions, leveraging spectral information [4], scaling beyond seen nodes [5], or generalizing attention to Graph Neural Networks (GNNs) [6]. Underlying this family of GNN models is the Message Passing (MP) mechanism [7]. In MP-GNNs, a node is represented by iteratively aggregating feature 'messages' from its neighbours based on edge connectivity, with successful applications in several domains [7-11]. However, it has recently been shown that message passing limits the representational power of GNNs, which are bound by the Weisfeiler-Lehman (1-WL) test [12]. As such, MP-GNNs cannot reach the expressivity of $k$ -dimensional WL generalizations [13,14] ( $k$ -WL) or analogous MATLANG [15,16] languages. Casting MP-GNN representations in these terms [17] has driven recent research towards expressivity.
+
+To improve expressivity, recent approaches extend message-passing, leveraging topological information from cell complexes [18], extending the message-passing mechanism with sub-graph information [19-22], propagating messages through $k$ network hops [23], introducing relative positioning information for network vertices [24], or using higher order $k$ -vertex tuples to reach $k$ -WL expressivity [14]. In another direction, methods such as Provably Powerful Graph Networks (PPGN) [25] are guaranteed to be as expressive as the 3-WL test at cubic time and quadratic memory costs. More recently, GNNML3 [26] introduced a network architecture with equal memory and time costs to MP-GNNs, but experimentally capable of 3-WL expressivity, introducing spectral information through a preprocessing step with cubic worst-case time complexity.
+
+The aforementioned approaches improve expressivity by extending MP-GNN architectures, often evaluating on standardized benchmarks [27-29]. However, identifying the optimal approach for novel domains remains unclear and requires costly architecture search. In this work, we present IGEL, an Inductive Graph Encoding of Local information allowing MP-GNN and Deep Neural Network (DNN) models to go beyond 1-WL expressivity without modifying model architectures. IGEL is closely related to the Weisfeiler-Lehman isomorphism test, and produces inductive representations of vertex structures that can be introduced into MP-GNN models. IGEL reframes capturing 1-WL information irrespective of model architecture as a pre-processing step that simply extends node attributes.
+
+## 2 IGEL: Ego-Networks As Sparse Inductive Representations.
+
+Given a graph $G = \left( {V, E}\right)$ , we define $n = \left| V\right|$ and $m = \left| E\right|$ ; ${d}_{G}\left( v\right)$ is the degree of a node $v$ in $G$ and ${d}_{\max }$ is the maximum degree. For $u, v \in V$ , ${l}_{G}\left( {u, v}\right)$ is their shortest distance, and $\operatorname{diam}\left( G\right) = \max \left( {{l}_{G}\left( {u, v}\right) \mid u, v \in V}\right)$ is the diameter of $G$ . ${\mathcal{N}}_{G}^{\alpha }\left( v\right)$ is the set of neighbours of $v$ in $G$ up to distance $\alpha$ (Equation 1), and ${\mathcal{E}}_{v}^{\alpha }$ is the $\alpha$ -depth ego-network centered on $v$ (Equation 2):
+
+$$
+{\mathcal{N}}_{G}^{\alpha }\left( v\right) = \left\{ {u \mid u \in V \land {l}_{G}\left( {u, v}\right) \leq \alpha }\right\} , \tag{1}
+$$
+
+$$
+{\mathcal{E}}_{v}^{\alpha } = \left( {{V}^{\prime },{E}^{\prime }}\right) \subseteq G,\text{ s.t. } u \in {\mathcal{N}}_{G}^{\alpha }\left( v\right) \Leftrightarrow u \in {V}^{\prime },\; \left( {u, v}\right) \in {E}^{\prime } \subseteq E \Leftrightarrow u, v \in {V}^{\prime }. \tag{2}
+$$
+
+Let $\{ \{ \cdot \} \}$ denote a lexicographically-ordered multi-set. Algorithm 1 shows the 1-WL test, where hash maps a multi-set to an equivalence class shared by all nodes with matching multi-set encodings after a 1-WL iteration. The output of 1-WL is in ${\mathbb{N}}^{n}$ , mapping each node to a color, bounded by $n$ distinct colors if each node is uniquely colored. Higher-order variants of the WL test (denoted $k$ -WL) operate on $k$ -tuples of vertices, such that colors are assigned to $k$ -vertex tuples. If two graphs ${G}_{1},{G}_{2}$ are not distinguishable by the $k$ -WL test (that is, their coloring histograms match), they are $k$ -WL equivalent, denoted ${G}_{1}{ \equiv }_{k - \mathrm{{WL}}}{G}_{2}$ . Due to the hashing step, 1-WL does not preserve distance information in the encoding, and perturbations (e.g. a different color in a neighbour) produce different node-level representations. IGEL addresses both limitations, improving expressivity in the process.
+
+### 2.1 The IGEL Algorithm
+
+Intuitively, IGEL encodes a vertex $v$ with the multi-set of ordered degree sequences observed at each distance up to $\alpha$ within ${\mathcal{E}}_{v}^{\alpha }$ , following 1-WL-style steps with two modifications. First, the hashing step is removed and replaced by computing the union of multi-sets across steps $\left( \cup \right)$ ; second, the iteration number is explicitly introduced in the representation, with the output multi-set ${e}_{v}^{\alpha }$ shown in Algorithm 2.
+
+In order to be used as vertex features, the multi-set can be represented as a sparse vector ${\operatorname{IGEL}}_{\text{vec }}^{\alpha }\left( v\right)$ , where the $i$ -th index contains the frequency of path-length and degree pairs $\left( {\lambda ,\delta }\right)$ . Degrees greater than ${d}_{\max }$ are capped to ${d}_{\max }$ , and vector indices are given by a bijective function $f : \left( {\mathbb{N},\mathbb{N}}\right) \mapsto \mathbb{N}$ , illustrated in Figure 1:
+
+${\operatorname{IGEL}}_{\text{vec }}^{\alpha }{\left( v\right) }_{i} = \left| \left\{ {\left( {\lambda ,\delta }\right) \in {e}_{v}^{\alpha }\text{ s.t. }f\left( {\lambda ,\delta }\right) = i}\right\} \right| .$
+
+${G}_{1} = \left( {{V}_{1},{E}_{1}}\right)$ and ${G}_{2} = \left( {{V}_{2},{E}_{2}}\right)$ are IGEL-equivalent for $\alpha$ if the sorted multi-set containing node representations is the same for ${G}_{1}$ and ${G}_{2}$ :
+
+$$
+{G}_{1}{ \equiv }_{\text{IGEL }}^{\alpha }{G}_{2} \Leftrightarrow \left\{ \left\{ {{e}_{{v}_{1}}^{\alpha } : \forall {v}_{1} \in {V}_{1}}\right\} \right\} = \left\{ \left\{ {{e}_{{v}_{2}}^{\alpha } : \forall {v}_{2} \in {V}_{2}}\right\} \right\} . \tag{3}
+$$
+
+As such, IGEL is a variant of the 1-WL algorithm shown in Algorithm 1, executed within the ego-network ${\mathcal{E}}_{v}^{\alpha }$ of each vertex rather than on the whole graph.
+
+
+
+Figure 1: IGEL encoding of the green vertex. Dashed region denotes ${\mathcal{E}}_{v}^{\alpha }\left( {\alpha = 2}\right)$ . The green vertex is at distance 0 , blue vertices at 1 and red vertices at 2. Labels show degrees in ${\mathcal{E}}_{v}^{\alpha }$ . The frequency of $\left( {\lambda ,\delta }\right)$ tuples forming ${\operatorname{IGEL}}_{\text{vec }}^{\alpha }\left( v\right)$ is: $\{ \left( {0,2}\right) : 1,\left( {1,2}\right) : 1,\left( {1,4}\right) : 1,\left( {2,3}\right) : 2,\left( {2,4}\right) : 1\}$ .
+
+Algorithm 1 1-WL (Color refinement).
+
+---
+
+Input: $G = \left( {V, E}\right)$
+
+ 1: ${c}_{v}^{0} \mathrel{\text{:=}} \operatorname{hash}\left( \left\{ \left\{ {{d}_{G}\left( v\right) }\right\} \right\} \right) \forall v \in V$ ; $i \mathrel{\text{:=}} 0$
+
+ 2: do
+
+ 3: $i \mathrel{\text{:=}} i + 1$ ; ${c}_{v}^{i} \mathrel{\text{:=}} \operatorname{hash}\left( \left\{ \left\{ {{c}_{u}^{i - 1} : \mathop{\forall }\limits_{{u \neq v}}u \in {\mathcal{N}}_{G}^{1}\left( v\right) }\right\} \right\} \right) \forall v \in V$
+
+ 4: while ${c}_{v}^{i} \neq {c}_{v}^{i - 1}$ for some $v \in V$
+
+Output: ${c}_{v}^{i} : V \mapsto \mathbb{N}$
+
+---
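+
+For reference, here is a compact Python sketch of the refinement loop above (ours, not the paper's code); unlike the listing, it also keeps each node's own colour in the signature, which is the standard formulation, and it uses canonical integer colours instead of raw hashes so that two graphs can be compared by their colour histograms:
+
+```python
+import networkx as nx
+
+def wl_colors(graph: nx.Graph, rounds: int = None) -> dict:
+    """1-WL colour refinement in the spirit of Algorithm 1."""
+    colors = {v: graph.degree(v) for v in graph}           # c^0 derived from {{d_G(v)}}
+    for _ in range(rounds or graph.number_of_nodes()):     # n rounds suffice to stabilize
+        sig = {v: (colors[v], tuple(sorted(colors[u] for u in graph.neighbors(v))))
+               for v in graph}
+        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
+        colors = {v: relabel[sig[v]] for v in graph}
+    return colors
+```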
+
+Algorithm 2 IGEL Encoding.
+
+---
+
+Input: $G = \left( {V, E}\right) ,\alpha : \mathbb{N}$
+
+ 1: ${e}_{v}^{0} \mathrel{\text{:=}} \left\{ \left\{ {\left( {0,{d}_{G}\left( v\right) }\right) }\right\} \right\} \forall v \in V$
+
+ 2: for $i \mathrel{\text{:=}} 1$ to $\alpha$ do
+
+ 3: ${e}_{v}^{i} \mathrel{\text{:=}} {e}_{v}^{i - 1} \cup \left\{ \left\{ {\left( {i,{d}_{{\mathcal{E}}_{v}^{\alpha }}\left( u\right) }\right) : \forall u \in {\mathcal{N}}_{G}^{\alpha }\left( v\right) \mid {l}_{G}\left( {u, v}\right) = i}\right\} \right\} \forall v \in V$
+
+ 4: end for
+
+Output: ${e}_{v}^{\alpha } : V \mapsto \{ \{ \left( {\mathbb{N},\mathbb{N}}\right) \} \}$
+
+---
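+
+A minimal Python sketch of Algorithm 2 and of ${\operatorname{IGEL}}_{\text{vec }}^{\alpha }$ (ours, assuming NetworkX is available; the pairing function used for indexing is one concrete bijection, not necessarily the $f$ of Figure 1):
+
+```python
+from collections import Counter
+import networkx as nx
+
+def igel_encoding(graph: nx.Graph, v, alpha: int) -> Counter:
+    """Multi-set e_v^alpha of (distance, degree) pairs, with degrees taken
+    inside the ego-network E_v^alpha (for alpha >= 1 the root's degree equals d_G(v))."""
+    ego = nx.ego_graph(graph, v, radius=alpha)               # E_v^alpha
+    dist = nx.single_source_shortest_path_length(ego, v)     # l_G(u, v) within the ego-network
+    return Counter((dist[u], ego.degree(u)) for u in ego.nodes)
+
+def igel_vector(encoding: Counter, alpha: int, d_max: int) -> list:
+    """IGEL_vec: frequencies of (lam, delta) pairs, indexed by the bijection
+    f(lam, delta) = lam * (d_max + 1) + delta, with degrees capped at d_max."""
+    vec = [0] * ((alpha + 1) * (d_max + 1))
+    for (lam, delta), count in encoding.items():
+        vec[lam * (d_max + 1) + min(delta, d_max)] += count
+    return vec
+```
+
+Run on the green vertex of Figure 1 with $\alpha = 2$ , this sketch should reproduce the $\left( {\lambda ,\delta }\right)$ frequencies listed in the caption.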
+
+Space complexity. IGEL’s worst case space complexity is $\mathcal{O}\left( {\alpha \cdot n \cdot {d}_{\max }}\right)$ , conservatively assuming that every node requires ${d}_{\max }$ parameters at each of the $\alpha$ depths from the center of the ego-network.
+
+Time complexity. For IGEL, each vertex has at most ${d}_{\max }$ neighbours, so the $\alpha$ iterations traverse geometrically larger ego-networks with up to ${\left( {d}_{\max }\right) }^{\alpha }$ vertices, upper bounded by $m$ . Thus IGEL’s time complexity follows $\mathcal{O}\left( {n \cdot \min \left( {m,{\left( {d}_{\max }\right) }^{\alpha }}\right) }\right)$ , with $\mathcal{O}\left( {n \cdot m}\right)$ when $\alpha \geq \operatorname{diam}\left( G\right)$ .
+
+## 3 Theoretical and Experimental Findings
+
+First, we analyze IGEL's expressive power with respect to 1-WL and recent improvements. Second, we measure the impact of IGEL as an additional input to enrich existing MP-GNN architectures.
+
+### 3.1 Expressivity: Which Graphs are IGEL-Distinguishable?
+
+In this section, we discuss the increased expressivity of IGEL with respect to 1-WL, and identify expressivity upper-bounds for graphs that are indistinguishable under MATLANG and the 3-WL test.
+
+- Relationship to 1-WL. IGEL is capable of distinguishing graphs that are indistinguishable by the 1-WL test, e.g., $d$ -regular graphs. A graph is $d$ -regular if all nodes have degree $d$ . $d$ -regular graphs with equal cardinality are indistinguishable by 1-WL. Specifically, for any pair of $d$ -regular graphs ${G}_{1}$ and ${G}_{2}$ such that $\left| {V}_{1}\right| = \left| {V}_{2}\right|$ , ${G}_{1}{ \equiv }_{1 - \mathrm{{WL}}}{G}_{2}$ (see Appendix A for details).
+
+
+
+Figure 2: IGEL encodings for two cospectral 4-regular graphs from [30]. IGEL distinguishes 4 kinds of structures within the graphs (associated with every node as a, b, c, and d). The two graphs can be distinguished since the encoded structures and their frequencies do not match.
+
+However, there exist $d$ -regular graphs that can be distinguished by IGEL, as shown in Figure 2. Since the graph is $d$ -regular, tracing Algorithm 1 shows that the 1-WL test assigns the same color to all nodes and stabilizes after one iteration. In contrast, IGEL with $\alpha = 1$ identifies 4 kinds of structures with different frequencies between the graphs-thus being able to distinguish them.
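+
+Building on the igel_encoding sketch from Section 2.1 (again ours, under the same assumptions), graph-level IGEL equivalence (Equation 3) can be checked by comparing sorted multi-sets of node encodings, which is the comparison used for pairs such as the one in Figure 2:
+
+```python
+import networkx as nx
+
+def igel_equivalent(g1: nx.Graph, g2: nx.Graph, alpha: int) -> bool:
+    """Graph-level IGEL equivalence: equal sorted multi-sets of node encodings."""
+    def profile(g: nx.Graph) -> list:
+        return sorted(tuple(sorted(igel_encoding(g, v, alpha).items())) for v in g)
+    return profile(g1) == profile(g2)
+```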
+
+- Expressivity upper bounds. We identify an upper expressivity bound for IGEL, where the method fails to distinguish graphs, e.g. Strongly Regular Graphs (Definition 1) with equal parameters (Theorem 1; see Appendix B for details).
+
+Definition 1. An $n$ -vertex $d$ -regular graph is strongly regular, denoted $\operatorname{SRG}\left( {n, d,\beta ,\gamma }\right)$ , if adjacent vertices have $\beta$ neighbours in common, and non-adjacent vertices have $\gamma$ neighbours in common.
+
+Theorem 1. IGEL cannot distinguish SRGs when $n, d$ , and $\beta$ are the same, regardless of the values of $\gamma$ (equal or not). IGEL with $\alpha = 1$ can only distinguish SRGs with different values of $n, d$ , or $\beta$ , while IGEL with $\alpha = 2$ can only distinguish SRGs with different values of $n$ or $d$ .
+
+Our findings show that IGEL is a powerful representation, capable of distinguishing 1-WL equivalent graphs such as those in Figure 2, which, as cospectral graphs, are known to be expressible in strictly more powerful MATLANG sub-languages than 1-WL [16]. Additionally, the upper bound on Strongly Regular Graphs is a hard ceiling on expressivity since SRGs are known to be indistinguishable by 3-WL [31]. IGEL shares the experimental upper bound of expressivity of recent methods such as GNNML3 [26]. Furthermore, IGEL can provably reach comparable expressivity on SRGs with respect to sub-graph methods implemented within MP-GNN architectures (see Appendix B, subsection B.2), such as Nested GNNs [21] and GNN-AK [22], which are known to be no less powerful than 3-WL, and the ESAN framework when leveraging ego-networks with root-node flags as a subgraph sampling policy (EGO+) [19], which is as powerful as the 3-WL test.
+
+### 3.2 Experimental Evaluation
+
+We evaluate ${\operatorname{IGEL}}_{\text{vec }}^{\alpha }\left( v\right)$ as a method of producing architecture-agnostic vertex features on five tasks: graph classification, isomorphism detection, graphlet counting, link prediction, and node classification.
+
+Experimental Setup. We reproduce results from [26], introducing IGEL as features on graph classification, isomorphism detection, and graphlet counting, comparing the performance of adding/removing IGEL on six GNN architectures. We also evaluate IGEL on link prediction against transductive baselines, and on node classification as an additional feature used in MLPs without message-passing.
+
+Notation. The following formatting denotes significant (as per paired t-tests) positive, negative, and insignificant differences after introducing IGEL, with the best results per task / dataset underlined.
+
+Table 1: Per-model graph classification accuracy metrics on TU data sets. Each cell shows the average accuracy of the model and data set in that row and column, with IGEL (left) and without IGEL (right).
+
+| Model | Enzymes | $\mathbf{{Mutag}}$ | Proteins | PTC |
+| --- | --- | --- | --- | --- |
+| $\mathbf{{MLP}}$ | 41.10>26.18 ${}^{ \circ }$ | ${87.61} > {84.61}^{ \circ }$ | 75.43~75.01 | ${64.59} > {62.79}^{ \circ }$ |
+| GCN | ${54.48} > {48.60}^{ \circ }$ | ${89.61} > {85.42}^{ \circ }$ | 75.67>74.50 ${}^{ \circ }$ | 65.76~65.21 |
+| $\mathbf{{GAT}}$ | 54.88~54.95 | 90.00>86.14 ${}^{ \circ }$ | 73.44>70.51 ${}^{ \circ }$ | 66.29~66.29 |
+| GIN | ${{54.77} > {53.44}}^{ * }$ | 89.56~88.33 | ${73.32} > {72.05}^{ \circ }$ | 61.44~60.21 |
+| Chebnet | 61.88~62.23 | 91.44>88.33 ${}^{ \circ }$ | 74.30>66.94 ${}^{ \circ }$ | 64.79~63.87 |
+| GNNML3 | ${61.42} < {62.79}^{ \circ }$ | ${92.50} > {91.47}^{ * }$ | 75.54>62.32 ${}^{ \circ }$ | ${64.26} < {66.10}^{ \circ }$ |
+
+${}^{ * }$ $p < {0.01}$ , ${}^{ \circ }$ $p < {0.0001}$
+
+Table 2: Mean $\pm$ stddev of the best IGEL-augmented graph classification model and reported results on $k$ -hop, GSN, and ESAN from $\left\lbrack {{19},{20},{23}}\right\rbrack$ . Best performing baselines underlined.
+
+| Model | Mutag | Proteins | PTC |
+| --- | --- | --- | --- |
+| IGEL (best) | ${92.5} \pm {1.2}$ | ${75.7} \pm {0.3}$ | ${66.3} \pm {1.3}$ |
+| $k$ -hop [23] ${}^{ \dagger }$ | ${87.9} \pm {1.2}^{\diamond }$ | ${75.3} \pm {0.4}$ | - |
+| GSN [20] ${}^{ \dagger }$ | ${92.2} \pm {7.5}$ | ${76.6} \pm {5.0}$ | ${68.2} \pm {7.2}$ |
+| ESAN [19] ${}^{ \dagger }$ | ${91.1} \pm {7.0}$ | ${76.7} \pm {4.1}$ | ${69.2} \pm {6.5}$ |
+
+$\dagger$ : Results as reported by $\left\lbrack {{19},{20},{23}}\right\rbrack$ .
+
+— Graph Classification. Table 1 shows graph classification results on the TU molecule data sets [28]. We evaluate differences in mean accuracy between 10 runs with (left) / without (right) IGEL. We do not tune network hyper-parameters and establish statistical significance through paired t-tests, with $p < {0.01}$ (*) and $p < {0.0001}$ ( ${}^{ \circ }$ ). Our results show that introducing IGEL improves the performance of all MP-GNN models on the Mutag and Proteins data sets. On the Enzymes and PTC data sets, results are mixed: for all models other than GNNML3, IGEL either significantly improves accuracy (on MLPNet, GCN, and GIN on Enzymes), or does not have a negative impact on performance.
+
+In Table 2, we compare the best IGEL results from Table 1 with reported results for expressive baselines: $k$ -hop GNNs [23], GSNs [20], and ESAN [19]. All results are comparable to IGEL except Mutag, where IGEL significantly outperforms $k$ -hop with $p < {0.0001}$ . When comparing IGEL and best performing baselines for every data set, no differences are statistically significant $\left( {p > {0.01}}\right)$ .
+
+- Isomorphism Detection & Graphlet Counting. Adding IGEL to the six models in Table 1 on the EXP [32] graph isomorphism task produces significant improvements: all GNN models distinguish all non-isomorphic yet 1-WL equivalent EXP graph pairs with IGEL vs. 50% accuracy without IGEL (i.e. random guessing). Likewise, IGEL significantly improves GNN performance on the RandomGraph data set [33] counting triangles, tailed triangles and the custom 1-WL graphlets proposed by [26] (see detailed results on Appendix C).
+
+- Link Prediction & Node Classification. We test IGEL on edge / node level tasks to assess its use as a baseline in non-GNN settings. On a transductive link prediction task, we train DeepWalk [34] style embeddings of IGEL encodings rather than node identities on the Facebook and CA-AstroPh graphs [35]. IGEL-derived embeddings outperform transductive baselines modelling link prediction as an edge-level binary classification task, measuring 0.976 vs. 0.968 (Facebook) and 0.984 vs. 0.937 (CA-AstroPh) AUC comparing IGEL vs. node2vec [36]. On multi-label node classification on PPI [5], we train an MLP (i.e. no message passing) with node features and IGEL encodings. Our MLP shows better micro-F1 (0.850) when $\alpha = 1$ than MP-GNN architectures such as GraphSAGE (0.768, as reported in [6]), but underperforms compared to a 3-layer GAT (0.973 micro-F1 from [6]).
+
+- Experimental Summary. Introducing IGEL yields comparable performance to state-of-the-art methods without architectural modifications, including when compared to strong baseline models focused on WL expressivity such as GNNML3, $k$ -hop, GSN or ESAN. Furthermore, IGEL achieves this at a lower computational cost, in comparison for instance with GNNML3, which requires a $\mathcal{O}\left( {n}^{3}\right)$ eigen-decomposition step to introduce spectral channels. Finally, IGEL can also be used in transductive settings (link prediction) as well as node-level tasks (node classification), where it outperforms strong transductive baselines and enhances models without message-passing, such as MLPs. As such, we believe IGEL is an attractive baseline with a clear relationship to the 1-WL test that can be used to improve MP-GNN expressivity without the need for costly architecture search.
+
+## 4 Conclusions
+
+We presented IGEL, a novel vertex representation algorithm on unattributed graphs allowing MP-GNN architectures to go beyond 1-WL expressivity. We showed that IGEL is related to and more expressive than the 1-WL test, and formally proved an expressivity upper bound on certain families of Strongly Regular Graphs. Finally, our experimental results indicate that introducing IGEL in existing MP-GNN architectures yields comparable performance to state-of-the-art methods, without architectural modifications and at lower computational costs than other approaches.
+
+## References
+
+[1] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. ArXiv, abs/2104.13478, 2021. 1
+
+[2] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pages 2014-2023, New York, USA, 2016. 1
+
+[3] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR, 2017. 1
+
+[4] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. 1
+
+[5] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30, 2017. 1, 4
+
+[6] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.1,4
+
+[7] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70, page 1263-1272, 2017. 1
+
+[8] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM International Conference on Knowledge Discovery & Data Mining, pages 974-983, 2018.
+
+[9] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015.
+
+[10] Bidisha Samanta, Abir De, Gourhari Jana, Vicenç Gómez, Pratim Chattaraj, Niloy Ganguly, and Manuel Gomez-Rodriguez. NEVAE: A deep generative model for molecular graphs. Journal of Machine Learning Research, 21(114):1-33, 2020.
+
+[11] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and koray kavukcuoglu. Interaction networks for learning about objects, relations and physics. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. 1
+
+[12] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. 1
+
+[13] Martin Grohe. Descriptive Complexity, Canonisation, and Definable Graph Structure Theory. Lecture Notes in Logic. Cambridge University Press, 2017. 1
+
+[14] Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):4602-4609, Jul. 2019. 1
+
+[15] Robert Brijder, Floris Geerts, Jan Van Den Bussche, and Timmy Weerwag. On the expressive power of query languages for matrices. ACM Trans. Database Syst., 44(4), oct 2019. 1
+
+[16] Floris Geerts. On the expressive power of linear algebra on graphs. Theory of Computing Systems, 65:1-61, 01 2021. 1, 3
+
+[17] Christopher Morris, Yaron Lipman, Haggai Maron, Bastian Rieck, Nils M. Kriege, Martin Grohe, Matthias Fey, and Karsten Borgwardt. Weisfeiler and Leman go machine learning: The story so far. Weisfeiler and Leman go Machine Learning: The Story so far, 2021. 1
+
+[18] Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Liò, Guido F Montufar, and Michael Bronstein. Weisfeiler and Lehman go cellular: CW networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 2625-2640. Curran Associates, Inc., 2021. 1
+
+[19] Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. In International Conference on Learning Representations, 2022. 1, 3, 4
+
+[20] Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M. Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting, 2021. 4
+
+[21] Muhan Zhang and Pan Li. Nested graph neural networks. arXiv preprint arXiv:2110.13197, 2021. 3
+
+[22] Lingxiao Zhao, Wei Jin, Leman Akoglu, and Neil Shah. From stars to subgraphs: Uplifting any GNN with local structure awareness. In International Conference on Learning Representations, 2022.1,3
+
+[23] Giannis Nikolentzos, George Dasoulas, and Michalis Vazirgiannis. k-hop graph neural networks. Neural Networks, 130:195-205, 2020. 1, 4
+
+[24] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 7134- 7143, Long Beach, California, USA, 09-15 Jun 2019. PMLR. 1
+
+[25] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. 1
+
+[26] Muhammet Balcilar, Pierre Héroux, Benoit Gaüzère, Pascal Vasseur, Sébastien Adam, and Paul Honeine. Breaking the limits of message passing graph neural networks. In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021. 1, 3, 4, 10
+
+[27] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020. 1
+
+[28] Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. In ${ICML}$ 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020), 2020. URL www.graphlearning.io. 4
+
+[29] Jiaxuan You, Rex Ying, and Jure Leskovec. Design space for graph neural networks. In NeurIPS, 2020. 1
+
+[30] Edwin R Van Dam and Willem H Haemers. Which graphs are determined by their spectrum? Linear Algebra and its Applications, 373:241-272, 2003. 3
+
+[31] V. Arvind, Frank Fuhlbrück, Johannes Köbler, and Oleg Verbitsky. On weisfeiler-leman invariance: Subgraph counts and related graph properties. Journal of Computer and System Sciences, 113:42-59, 2020. 3
+
+[32] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 2112-2118. International Joint Conferences on Artificial Intelligence Organization, 8 2021.4
+
+[33] Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 10383-10395. Curran Associates, Inc., 2020. 4, 10
+
+[34] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701-710, 2014. 4
+
+[35] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection, June 2014. URL http://snap.stanford.edu/data.4
+
+[36] Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855-864, 2016. 4
+
+## A 1-WL Expressivity and Regular Graphs.
+
+Remark 1 shows that 1-WL, as defined in Algorithm 1, is unable to distinguish $d$ -regular graphs:
+
+Remark 1. Let ${G}_{1}$ and ${G}_{2}$ be two $d$ -regular graphs such that $\left| {V}_{1}\right| = \left| {V}_{2}\right|$ . Tracing Algorithm 1, all vertices in ${V}_{1},{V}_{2}$ share the same initial color due to $d$ -regularity: $\forall v \in {V}_{1}\bigcup {V}_{2};{c}_{v}^{0} = \operatorname{hash}\left( {\{ \{ d\} \} }\right)$ . After the first color refinement iteration, consider the colorings of ${G}_{1}$ and ${G}_{2}$ :
+
+- $\forall {v}_{1} \in {V}_{1};{c}_{{v}_{1}}^{1} \mathrel{\text{:=}} \operatorname{hash}\left( \left\{ \left\{ {{c}_{{u}_{1}}^{0} : \mathop{\forall }\limits_{{{u}_{1} \neq {v}_{1}}}{u}_{1} \in {\mathcal{N}}_{{G}_{1}}^{1}\left( {v}_{1}\right) }\right\} \right\} \right) ,$
+
+- $\forall {v}_{2} \in {V}_{2};{c}_{{v}_{2}}^{1} \mathrel{\text{:=}} \operatorname{hash}\left( \left\{ \left\{ {{c}_{{u}_{2}}^{0} : \mathop{\forall }\limits_{{{u}_{2} \neq {v}_{2}}}{u}_{2} \in {\mathcal{N}}_{{G}_{2}}^{1}\left( {v}_{2}\right) }\right\} \right\} \right) .$
+
+Since $\forall {v}_{1} \in {V}_{1},{v}_{2} \in {V}_{2};d = \left| {{\mathcal{N}}_{{G}_{1}}^{1}\left( {v}_{1}\right) }\right| = \left| {{\mathcal{N}}_{{G}_{2}}^{1}\left( {v}_{2}\right) }\right|$ , substituting ${c}_{{v}_{1}}^{1},{c}_{{v}_{2}}^{1}$ in the next iteration step yields $\left\{ \left\{ {\operatorname{hash}\left( {c}_{{v}_{1}}^{1}\right) : \forall {v}_{1} \in {V}_{1}}\right\} \right\} = \left\{ \left\{ {\operatorname{hash}\left( {c}_{{v}_{2}}^{1}\right) : \forall {v}_{2} \in {V}_{2}}\right\} \right\}$ . Thus, on any pair of $d$ -regular graphs with equal cardinality, 1-WL stabilizes after one iteration and produces equal colorings for all nodes on both graphs, regardless of whether ${G}_{1}$ and ${G}_{2}$ are isomorphic, as Figure 2 shows.
+
+## B Proof of Theorem 1.
+
+In this appendix, we provide a proof of Theorem 1, showing that IGEL cannot distinguish certain pairs of SRGs with equal parameters of $n$ (cardinality), $d$ (degree), $\beta$ (shared edges between adjacent nodes), and $\gamma$ (shared edges between non-adjacent nodes). Let $\{ \{ \cdot \} {\} }^{d}$ denote a repeated multi-set in which each item appears $d$ times, and let ${e}_{G}^{\alpha } = \left\{ \left\{ {{e}_{v}^{\alpha } : \forall v \in V}\right\} \right\}$ be short-hand notation for the IGEL encoding of $G$ , defined as the sorted multi-set containing IGEL encodings of all nodes in $G$ .
+
+Proof. Per Remark 2 and Remark 3, SRGs have a maximum diameter of two, and IGEL encodings are equal for all $\alpha \geq \operatorname{diam}\left( G\right)$ . Thus, given $G = \operatorname{SRG}\left( {n, d,\beta ,\gamma }\right)$ , only $\alpha \in \{ 1,2\}$ produce different encodings of $G$ . It can be shown that ${e}_{v}^{1}$ can only distinguish different values of $n, d$ and $\beta$ , and ${e}_{v}^{2}$ can only distinguish different values of $n$ and $d$ :
+
+- Let $\alpha = 1 : \forall v \in V,{\mathcal{E}}_{v}^{1} = \left( {{V}^{\prime },{E}^{\prime }}\right)$ s.t. ${V}^{\prime } = {\mathcal{N}}_{G}^{1}\left( v\right)$ . Since $G$ is $d$ -regular, $v$ is the center of ${\mathcal{E}}_{v}^{1}$ and has $d$ neighbours. By the SRG definition, the $d$ neighbours of $v$ each have $\beta$ shared neighbours with $v$ , plus an edge with $v$ . Thus, for any SRGs ${G}_{1},{G}_{2}$ where ${n}_{1} = {n}_{2},{d}_{1} = {d}_{2}$ , and ${\beta }_{1} = {\beta }_{2}$ , the encodings ${e}_{{G}_{1}}^{1} = {e}_{{G}_{2}}^{1}$ are equal, as seen by expanding ${e}_{v}^{1}$ in Algorithm 2:
+
+$$
+{e}_{v}^{1} = \{ \{ \left( {0, d}\right) \} \} \cup \{ \{ \left( {1,\beta + 1}\right) \} {\} }^{d}
+$$
+
+- Let $\alpha = 2 : \forall v \in V,{\mathcal{E}}_{v}^{2} = G$ as $\forall u \in V, u \in {\mathcal{N}}_{G}^{2}\left( v\right)$ when $\operatorname{diam}\left( G\right) \leq 2$ . $G$ is $d$ -regular, so $\forall v \in V, d = {d}_{{\mathcal{E}}_{v}^{2}}\left( v\right) = {d}_{G}\left( v\right)$ . Thus, for any SRGs ${G}_{1},{G}_{2}$ s.t. ${n}_{1} = {n}_{2}$ and ${d}_{1} = {d}_{2}$ , ${e}_{{G}_{1}}^{2} = {e}_{{G}_{2}}^{2}$ , containing $n$ equal ${e}_{v}^{2}$ encodings by expanding Algorithm 2:
+
+$$
+{e}_{v}^{2} = \{ \{ \left( {0, d}\right) \} \} \cup \{ \{ \left( {1, d}\right) \} {\} }^{d} \cup \{ \{ \left( {2, d}\right) \} {\} }^{n - d - 1}
+$$
+
+Thus, IGEL cannot distinguish pairs of SRGs when $n, d$ , and $\beta$ are the same, and between any value of $\gamma$ (equal or different between the pair). IGEL when $\alpha = 1$ can only distinguish SRGs with different values of $n, d$ , and $\beta$ , while IGEL when $\alpha = 2$ can only distinguish SRGs with different values of $n$ and $d$ .
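+
+As a concrete illustration (ours, not part of the original proof), consider the Petersen graph, an $\operatorname{SRG}\left( {{10},3,0,1}\right)$ : the expansions above give
+
+$$
+{e}_{v}^{1} = \{ \{ \left( {0,3}\right) \} \} \cup \{ \{ \left( {1,1}\right) \} {\} }^{3},\quad {e}_{v}^{2} = \{ \{ \left( {0,3}\right) \} \} \cup \{ \{ \left( {1,3}\right) \} {\} }^{3} \cup \{ \{ \left( {2,3}\right) \} {\} }^{6}
+$$
+
+for every vertex $v$ , so the encodings depend only on $\left( {n, d,\beta }\right)$ for $\alpha = 1$ and only on $\left( {n, d}\right)$ for $\alpha = 2$ , exactly as the theorem states.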
+
+We note that it is straightforward to extend IGEL so that different values of $\gamma$ can be distinguished. We explore one possible extension in subsection B.2.
+
+### B.1 Additional Remarks used by Proof 1.
+
+Remark 2. For any $G = \operatorname{SRG}\left( {n, d,\beta ,\gamma }\right)$ , $\operatorname{diam}\left( G\right) \leq 2$ .
+
+Note that by definition of SRGs, $n$ affects cardinality while $d$ and $\beta$ control adjacent vertex connectivity at 1-hop. For $\gamma$ , we have to consider two cases: when $\gamma \geq 1$ and when $\gamma = 0$ :
+
+- Let $\gamma \geq 1$ : by definition, $\forall u, v \in V$ s.t. $\left( {u, v}\right) \notin E,\exists w \in V$ s.t. $\left( {u, w}\right) \in E \land \left( {v, w}\right) \in E$ . Thus, $\forall \left( {u, v}\right) \in E,{l}_{G}\left( {u, v}\right) = 1$ and $\forall \left( {u, v}\right) \notin E,{l}_{G}\left( {u, v}\right) = 2$ .
+
+- Let $\gamma = 0 : \forall u, v \in V$ , if $\left( {u, v}\right) \notin E$ then $\nexists w \in V$ s.t. $\left( {u, w}\right) \in E \land \left( {v, w}\right) \in E$ as $w$ is in common between $u$ and $v$ . Then, $\forall u, v, w \in V$ s.t. $\left( {u, v}\right) \in E,\left( {u, w}\right) \in E \Leftrightarrow \left( {v, w}\right) \in E$ -hence, only nodes and their neighbours can be in common. Thus: $\forall u, v \in V$ s.t. $u \neq v,{l}_{G}\left( {u, v}\right) = 1$ .
+
+Given both scenarios, we can conclude that for any $\gamma \in \mathbb{N},\forall u, v \in V,{l}_{G}\left( {u, v}\right) \leq 2$ and thus $\operatorname{diam}\left( G\right) \leq 2$ .
+
+Remark 3. For any finite graph $G$ , there is only a finite range of $\alpha \in \mathbb{N}$ over which IGEL encodings differ. For values of $\alpha$ at least the diameter of the graph (that is, $\alpha \geq \operatorname{diam}\left( G\right)$ ), it holds that ${e}_{v}^{\alpha } = {e}_{v}^{\alpha + 1}$ as ${\mathcal{E}}_{v}^{\alpha } = {\mathcal{E}}_{v}^{\alpha + 1} = G$ .
+
+### B.2 Improving Expressivity on the $\gamma$ Parameter.
+
+IGEL as presented is unable to distinguish between any values of $\gamma$ in SRGs. However, IGEL can be trivially extended to distinguish between such pairs of SRGs, bringing it to parity with methods such as the EGO+ policy in ESAN, NGNNs, and GNN-AK.
+
+Intuitively, IGEL is unable to distinguish $\gamma$ because its $\left( {\lambda ,\delta }\right)$ tuples are unable to represent relationships between vertices at different distances (e.g. the $\gamma$ parameter). The structural feature definition may be extended to compute the degree between 'distance layers' in the sub-graphs, addressing this pitfall. This means modifying ${e}_{v}^{i}$ in Algorithm 2:
+
+$$
+{e}_{v}^{i} = {e}_{v}^{i - 1} \cup \left\{ {\rho \left( {u, v}\right) : \forall u \in {\mathcal{N}}_{G}^{\alpha }\left( v\right) \mid {l}_{G}\left( {u, v}\right) \in \{ i, i + 1\} }\right\}
+$$
+
+where:
+
+$$
+\rho \left( {u, v}\right) = \left( {{l}_{{\mathcal{E}}_{v}^{\alpha }}\left( {u, v}\right) ,{d}_{{\mathcal{E}}_{v}^{\alpha }}^{0}\left( {u, v}\right) ,{d}_{{\mathcal{E}}_{v}^{\alpha }}^{1}\left( {u, v}\right) }\right)
+$$
+
+and ${d}_{G}^{p}\left( {u, v}\right)$ generalizes ${d}_{G}\left( u\right)$ to count edges of $u$ at a relative distance $p$ of $v$ in $G = \left( {V, E}\right)$ :
+
+$$
+{d}_{G}^{p}\left( {u, v}\right) = \left| \left\{ {\left( {u, w}\right) \in E : w \in V\text{ s.t. }{l}_{G}\left( {u, w}\right) = {l}_{G}\left( {u, v}\right) + p}\right\} \right| .
+$$
+
+It can be shown that this definition of ${e}_{v}^{i}$ is strictly more powerful at distinguishing SRGs, following an expansion of Algorithm 2 with $\alpha = 2$ :
+
+$$
+{e}_{v}^{2} = \{ \{ \left( {0,0, d}\right) \} \} \cup \{ \{ \left( {1,\beta ,\gamma }\right) \} {\} }^{d} \cup \{ \{ \left( {2, d - \gamma ,0}\right) \} {\} }^{n - d - 1}
+$$
+
+Proof. For any $G = \operatorname{SRG}\left( {n, d,\beta ,\gamma }\right) ,\forall v \in V,{l}_{{\mathcal{E}}_{v}^{2}}\left( {v, v}\right) = 0$ and there are $d$ edges towards its neighbours, thus the root is encoded as $\left( {0,0, d}\right)$ . Each neighbour is at ${l}_{{\mathcal{E}}_{v}^{2}}\left( {u, v}\right) = 1$ , with $\beta$ edges among each other, and $\gamma$ with vertices not adjacent to $v$ , thus $\left( {1,\beta ,\gamma }\right)$ , where $d = 1 + \beta + \gamma$ . By definition, every vertex $w \in V$ s.t. $\left( {u, w}\right) \notin E$ has $\gamma$ neighbours shared with $v$ , and $d$ neighbours overall. Per Remark 2, the maximum diameter of $G$ is two, hence ${l}_{{\mathcal{E}}_{v}^{2}}\left( {v, w}\right) = 2$ and for any such $w$ , the representation is $\left( {2, d - \gamma ,0}\right)$ .
+
+## C Extended Results on Isomorphism Detection and Graphlet Counting.
+
+In this section we summarize additional results on isomorphism detection and graphlet counting.
+
+### C.1 Isomorphism Detection.
+
+We provide a detailed breakdown of isomorphism detection performance after introducing IGEL in Table 3, complementing our summary in subsection 3.2.
+
+- Graph8c. On the Graph8c dataset ${}^{1}$ , introducing IGEL significantly reduces the number of graph pairs erroneously identified as isomorphic for all MP-GNN models, as shown in Table 3. Furthermore, IGEL allows a linear baseline, employing a sum readout function over input feature vectors and then projecting onto a 10-component space, to fail on only 1571 non-isomorphic pairs, fewer than the errors made by GCNs (4196) or GATs (1827) without IGEL. Additionally, we find that all Graph8c graphs can be distinguished if the IGEL encodings for $\alpha = 1$ and $\alpha = 2$ are concatenated. We do not explore the expressivity of combinations of $\alpha$ in this work, but hypothesize that concatenated encodings over several $\alpha$ may be more expressive.
+
+Table 3: Graph isomorphism detection results. The IGEL column denotes whether IGEL is used or not in the configuration. For Graph8c, we describe graph pairs erroneously detected as isomorphic. For EXP classify, we show the accuracy of distinguishing non-isomorphic graphs in a binary classification task.
+
+| Model | +IGEL | Graph8c (#Errors) | EXP Classify (Accuracy) |
+| --- | --- | --- | --- |
+| Linear | No | 6.242M | 50% |
+| | $\mathbf{{Yes}}$ | 1571 | 97.25% |
+| $\mathbf{{MLP}}$ | No | 293K | 50% |
+| | $\mathbf{{Yes}}$ | 1487 | 100% |
+| GCN | No | 4196 | 50% |
+| | $\mathbf{{Yes}}$ | 5 | 100% |
+| GAT | No | 1827 | 50% |
+| | $\mathbf{{Yes}}$ | 5 | 100% |
+| GIN | No | 571 | 50% |
+| | $\mathbf{{Yes}}$ | 5 | 100% |
+| Chebnet | No | 44 | 50% |
+| | $\mathbf{{Yes}}$ | 1 | 100% |
+| GNNML3 | No | 0 | 100% |
+| | $\mathbf{{Yes}}$ | 0 | 100% |
+
+— Empirical Results on Strongly Regular Graphs. We also evaluate IGEL on ${\mathrm{{SR25}}}^{2}$ , which contains 15 Strongly Regular graphs with 25 vertices, known to be indistinguishable by 3-WL. With SR25, we empirically validate Theorem 1. [26] showed that no models in our benchmark distinguish any of the 105 non-isomorphic graph pairs in SR25. As expected from Theorem 1, introducing IGEL does not improve distinguishability.
+
+### C.2 Graphlet Counting.
+
+We evaluate IGEL on a (regression) graphlet ${}^{3}$ counting task. We minimize Mean Squared Error (MSE) on normalized graphlet counts ${}^{4}$ . Table 4 shows the results of introducing IGEL in 5 graphlet counting tasks on the RandomGraph data set [33]. Stat sig. differences $\left( {p < {0.0001}}\right)$ shown in bold green, with best (lowest MSE) per-graphlet results underlined.
+
+Introducing IGEL improves counting performance on triangles, tailed triangles and the custom 1-WL graphlets proposed by [26]. Star graphlets can be identified by all baselines, and IGEL only produces statistically significant improvements for the Linear baseline.
+
+Table 4: Graphlet counting results. Cells contain mean test set MSE error (lower is better), stat. sig highlighted.
+
+| Model | + IGEL | Star | Triangle | Tailed Tri. | 4-Cycle | $\mathbf{{Custom}}$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| Linear | No | ${1.60}\mathrm{E} - {01}$ | ${3.41}\mathrm{E} - {01}$ | ${2.82}\mathrm{E} - {01}$ | ${2.03}\mathrm{E} - {01}$ | ${5.11}\mathrm{E} - {01}$ |
+| | $\mathbf{{Yes}}$ | ${4.23}\mathrm{E} - {03}$ | 4.38E-03 | ${1.85}\mathrm{E} - {02}$ | ${1.36}\mathrm{E} - {01}$ | ${5.25}\mathrm{E} - {02}$ |
+| MLP | No | ${2.66}\mathrm{E} - {06}$ | ${2.56}\mathrm{E} - {01}$ | ${1.60}\mathrm{E} - {01}$ | ${1.18}\mathrm{E} - {01}$ | ${4.54}\mathrm{E} - {01}$ |
+| | $\mathbf{{Yes}}$ | ${8.31}\mathrm{E} - {05}$ | ${5.69}\mathrm{E} - {05}$ | 5.57E-05 | ${7.64}\mathrm{E} - {02}$ | ${2.34}\mathrm{E} - {04}$ |
+| GCN | No | 4.72E-04 | ${2.42}\mathrm{E} - {01}$ | ${1.35}\mathrm{E} - {01}$ | ${1.11}\mathrm{E} - {01}$ | ${1.54}\mathrm{E} - {03}$ |
+| | $\mathbf{{Yes}}$ | ${8.26}\mathrm{E} - {04}$ | ${1.25}\mathrm{E} - {03}$ | 4.15E-03 | 7.32E-02 | 1.17E-03 |
+| $\mathbf{{GAT}}$ | No | ${4.15}\mathrm{E} - {04}$ | ${2.35}\mathrm{E} - {01}$ | ${1.28}\mathrm{E} - {01}$ | ${1.11}\mathrm{E} - {01}$ | ${2.85}\mathrm{E} - {03}$ |
+| | Yes | 4.52E-04 | 6.22E-04 | 7.77E-04 | 7.33E-02 | ${6.66}\mathrm{E} - {04}$ |
+| GIN | No | ${3.17}\mathrm{E} - {04}$ | ${2.26}\mathrm{E} - {01}$ | ${1.22}\mathrm{E} - {01}$ | ${1.11}\mathrm{E} - {01}$ | ${2.69}\mathrm{E} - {03}$ |
+| | $\mathbf{{Yes}}$ | ${6.09}\mathrm{E} - {04}$ | ${1.03}\mathrm{E} - {03}$ | 2.72E-03 | ${6.98}\mathrm{E} - {02}$ | ${2.18}\mathrm{E} - {03}$ |
+| Chebnet | No | ${5.79}\mathrm{E} - {04}$ | ${1.71}\mathrm{E} - {01}$ | ${1.12}\mathrm{E} - {01}$ | ${8.95}\mathrm{E} - {02}$ | ${2.06}\mathrm{E} - {03}$ |
+| | $\mathbf{{Yes}}$ | ${3.81}\mathrm{E} - {03}$ | ${7.88}\mathrm{E} - {04}$ | ${2.10}\mathrm{E} - {03}$ | 7.90E-02 | ${2.05}\mathrm{E} - {03}$ |
+| GNNML3 | No | ${8.90}\mathrm{E} - {05}$ | ${2.36}\mathrm{E} - {04}$ | ${2.91}\mathrm{E} - {04}$ | ${6.82}\mathrm{E} - {04}$ | ${9.86}\mathrm{E} - {04}$ |
+| | Yes | ${9.29}\mathrm{E} - {04}$ | ${2.19}\mathrm{E} - {04}$ | ${4.23}\mathrm{E} - {04}$ | ${6.98}\mathrm{E} - {04}$ | 4.17E-04 |
+
+Notably, the Linear baseline plus IGEL outperforms MP-GNNs without IGEL for star, triangle, tailed triangle and custom 1-WL graphlets. By introducing IGEL on the MLP baseline, it outperforms all other models including GNNML3 on the triangle, tailed-triangle and custom 1-WL graphlets.
+
+Since Linear and MLP baselines do not use message passing, we believe raw IGEL encodings may be sufficient to identify certain graph structures even with simple linear models. For all graphlets except 4-cycles, introducing IGEL yields performance similar to GNNML3 at lower pre-processing and model training/inference costs, as IGEL obviates the need for costly eigen-decomposition and can be used in simple models only performing graph-level readouts without message passing.
+
+---
+
+${}^{1}$ Simple 8 vertices graphs from: http://users.cecs.anu.edu.au/~bdm/data/graphs.html
+
+${}^{2}$ SRG(25, 12, 5, 6) graphs from: http://users.cecs.anu.edu.au/~bdm/data/graphs.html
+
+${}^{3}$ 3-stars, triangles, tailed triangles and 4-cycles, plus a custom 1-WL graphlet proposed in [26]
+
+${}^{4}$ Counts are stddev-normalized so that MSE values are comparable across graphlet types, following [26].
+
+---
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..108073b2fd50a55600dd0333e17d042e387f690b
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/5Zxh3fQ8F-h/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,184 @@
+§ BEYOND 1-WL WITH LOCAL EGO-NETWORK ENCODINGS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Identifying similar network structures is key to capture graph isomorphisms and learn representations that exploit structural information encoded in graph data. This work shows that ego-networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler-Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego-networks into sparse vectors that enrich Message Passing (MP) Graph Neural Networks (GNNs) beyond 1-WL expressivity. We describe formally the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on seven GNN architectures.
+
+§ 1 INTRODUCTION
+
+Novel approaches for learning on graphs have appeared in recent years within the machine learning community [1]. Notably, the introduction of Graph Convolutional Networks [2, 3] led to a broad body of research aiming to efficiently capture network interactions, leveraging spectral information [4], scaling beyond seen nodes [5], or generalizing attention to Graph Neural Networks (GNNs) [6]. Underlying this family of GNN models is the Message Passing (MP) mechanism [7]. In MP-GNNs, a node is represented by iteratively aggregating feature 'messages' from its neighbours based on edge connectivity, with successful applications on several domains [7-11]. However, recently it has been shown that message passing limits the representational power of GNNs, which are bound by the Weisfeiler-Lehman (1-WL) test [12]. As such, MP-GNNs cannot reach the expressivity of $k$ -dimensional WL generalizations [13,14] ( $k$ -WL) or analogous MATLANG [15,16] languages. Casting MP-GNN representations in these terms [17] has driven recent research towards expressivity.
+
+To improve expressivity, recent approaches extend message-passing, leveraging topological information from cell complexes [18], extending the message-passing mechanism with sub-graph information [19-22], propagating messages through $k$ network hops [23], introducing relative positioning information for network vertices [24], or using higher order $k$ -vertex tuples to reach $k$ -WL expressivity [14]. In another direction, methods such as Provably Powerful Graph Networks (PPGN) [25] are guaranteed to be as expressive as the 3-WL test at cubic time and quadratic memory costs. More recently, GNNML3 [26] introduced a network architecture with equal memory and time costs to MP-GNNs, but experimentally capable of 3-WL expressivity, introducing spectral information through a preprocessing step with cubic worst-case time complexity.
+
+The aforementioned approaches improve expressivity by extending MP-GNN architectures, often evaluating on standardized benchmarks [27-29]. However, identifying the optimal approach for novel domains remains unclear and requires costly architecture search. In this work, we present IGEL, an Inductive Graph Encoding of Local information allowing MP-GNN and Deep Neural Network (DNN) models to go beyond 1-WL expressivity without modifying model architectures. IGEL is closely related to the Weisfeiler-Lehman isomorphism test, and produces inductive representations of vertex structures that can be introduced into MP-GNN models. IGEL reframes capturing 1-WL information irrespective of model architecture as a pre-processing step that simply extends node attributes.
+
+§ 2 IGEL: EGO-NETWORKS AS SPARSE INDUCTIVE REPRESENTATIONS.
+
+Given a graph $G = \left( {V,E}\right)$ , we define $n = \left| V\right|$ and $m = \left| E\right| ,{d}_{G}\left( v\right)$ is the degree of a node $v$ in $G$ and ${d}_{\max }$ is the maximum degree. For $u,v \in V,{l}_{G}\left( {u,v}\right)$ is their shortest distance, and $\operatorname{diam}\left( G\right) = \max \left( {{l}_{G}\left( {u,v}\right) \mid u,v \in V}\right)$ is the diameter of $G.{\mathcal{N}}_{G}^{\alpha }\left( v\right)$ is the set of neighbours of $v$ in $G$ up to distance $\alpha$ (Equation 1), and ${\mathcal{E}}_{v}^{\alpha }$ is the $\alpha$ -depth ego-network centered on $v$ (Equation 2):
+
+$$
+{\mathcal{N}}_{G}^{\alpha }\left( v\right) = \left\{ {u \mid u \in V \land {l}_{G}\left( {u,v}\right) \leq \alpha }\right\} , \tag{1}
+$$
+
+$$
+{\mathcal{E}}_{v}^{\alpha } = \left( {{V}^{\prime },{E}^{\prime }}\right) \subseteq G\text{ , s.t. }u \in {\mathcal{N}}_{G}^{\alpha }\left( v\right) \Leftrightarrow u \in {V}^{\prime },\left( {u,v}\right) \in {E}^{\prime } \subseteq E \Leftrightarrow u,v \in {V}^{\prime }\text{ . } \tag{2}
+$$
+
+Let $\{ \{ \cdot \} \}$ denote a lexicographically-ordered multi-set. Algorithm 1 shows the 1-WL test, where hash maps a multi-set to an equivalence class shared by all nodes with matching multi-set encodings after a 1-WL iteration. The output of 1-WL is in ${\mathbb{N}}^{n}$ , mapping each node to a color, bounded by $n$ distinct colors if each node is uniquely colored. Higher-order variants of the WL test (denoted $k$ -WL) operate on $k$ -tuples of vertices, such that colors are assigned to $k$ -vertex tuples. If two graphs ${G}_{1},{G}_{2}$ are not distinguishable by the $k$ -WL test (that is, their coloring histograms match), they are $k$ -WL equivalent, denoted ${G}_{1}{ \equiv }_{k - \mathrm{{WL}}}{G}_{2}$ . Due to the hashing step, 1-WL does not preserve distance information in the encoding, and perturbations (e.g. a different color in a neighbour) produce different node-level representations. IGEL addresses both limitations, improving expressivity in the process.
+
+§ 2.1 THE IGEL ALGORITHM
+
+Intuitively, IGEL encodes a vertex $v$ with the multi-set of ordered degree sequences at each distance within ${\mathcal{E}}_{v}^{\alpha }$ . As such, IGEL is a variant of the 1-WL algorithm shown in Algorithm 1, executed for $\alpha$ steps with two modifications. First, the hashing step is removed and replaced by computing the union of multi-sets across steps $\left( \cup \right)$ ; second, the iteration number is explicitly introduced in the representation, with the output multi-set ${e}_{v}^{\alpha }$ shown in Algorithm 2.
+
+In order to be used as vertex features, the multi-set can be represented as a sparse vector ${\operatorname{IGEL}}_{\text{ vec }}^{\alpha }\left( v\right)$ , where the $i$ -th index contains the frequency of path-length and degree pairs $\left( {\lambda ,\delta }\right)$ . Degrees greater than ${d}_{\max }$ are capped to ${d}_{\max }$ , and vector indices are given by a bijective function $f : \left( {\mathbb{N},\mathbb{N}}\right) \mapsto \mathbb{N}$ , as illustrated in Figure 1:
+
+${\operatorname{IGEL}}_{\text{ vec }}^{\alpha }{\left( v\right) }_{i} = \left| \left\{ {\left( {\lambda ,\delta }\right) \in {e}_{v}^{\alpha }\text{ s.t. }f\left( {\lambda ,\delta }\right) = i}\right\} \right| .$
+
+${G}_{1} = \left( {{V}_{1},{E}_{1}}\right)$ and ${G}_{2} = \left( {{V}_{2},{E}_{2}}\right)$ are IGEL-equivalent for $\alpha$ if the sorted multi-set containing node representations is the same for ${G}_{1}$ and ${G}_{2}$ :
+
+$$
+{G}_{1}{ \equiv }_{\text{ IGEL }}^{\alpha }{G}_{2} \Leftrightarrow \left\{ \left\{ {{e}_{{v}_{1}}^{\alpha } : \forall {v}_{1} \in {V}_{1}}\right\} \right\} = \left\{ \left\{ {{e}_{{v}_{2}}^{\alpha } : \forall {v}_{2} \in {V}_{2}}\right\} \right\} . \tag{3}
+$$
+
+Figure 1: IGEL encoding of the green vertex. Dashed region denotes ${\mathcal{E}}_{v}^{\alpha }\left( {\alpha = 2}\right)$ . The green vertex is at distance 0, blue vertices at 1 and red vertices at 2. Labels show degrees in ${\mathcal{E}}_{v}^{\alpha }$ . The frequency of $\left( {\lambda ,\delta }\right)$ tuples forming ${\operatorname{IGEL}}_{\text{ vec }}^{\alpha }\left( v\right)$ is: $\{ \left( {0,2}\right) : 1,\left( {1,2}\right) : 1,\left( {1,4}\right) : 1,\left( {2,3}\right) : 2,\left( {2,4}\right) : 1\}$ .
+
+Algorithm 1 1-WL (Color refinement).
+
+Input: $G = \left( {V,E}\right)$
+
+ 1: ${c}_{v}^{0} \mathrel{\text{ := }} \operatorname{hash}\left( \left\{ \left\{ {{d}_{G}\left( v\right) }\right\} \right\} \right) \forall v \in V$
+
+ 2: do
+
+ 3: $\quad {c}_{v}^{i + 1} \mathrel{\text{ := }} \operatorname{hash}\left( \left\{ \left\{ {{c}_{u}^{i} : \mathop{\forall }\limits_{{u \neq v}}u \in {\mathcal{N}}_{G}^{1}\left( v\right) }\right\} \right\} \right)$
+
+ 4: while ${c}_{v}^{i} \neq {c}_{v}^{i - 1}$
+
+Output: ${c}_{v}^{i} : V \mapsto \mathbb{N}$
+
+Algorithm 2 IGEL Encoding.
+
+Input: $G = \left( {V,E}\right) ,\alpha : \mathbb{N}$
+
+ 1: ${e}_{v}^{0} \mathrel{\text{ := }} \left\{ \left\{ \left( {0,{d}_{G}\left( v\right) }\right) \right\} \right\} \;\forall v \in V$
+
+ 2: for $i \mathrel{\text{ := }} 1$ to $\alpha$ do
+
+ 3: $\quad {e}_{v}^{i} \mathrel{\text{ := }} {e}_{v}^{i - 1} \cup \left\{ \left\{ \left( {i,{d}_{{\mathcal{E}}_{v}^{\alpha }}\left( u\right) }\right) : \forall u \in {\mathcal{N}}_{G}^{\alpha }\left( v\right) \mid {l}_{G}\left( {u,v}\right) = i\right\} \right\}$
+
+ 4: end for
+
+Output: ${e}_{v}^{\alpha } : V \mapsto \{ \{ \left( {\mathbb{N},\mathbb{N}}\right) \} \}$
+
+Space complexity. IGEL's worst-case space complexity is $\mathcal{O}\left( {\alpha \cdot n \cdot {d}_{\max }}\right)$ , conservatively assuming that every node requires ${d}_{\max }$ entries at each of the $\alpha$ depths from the center of the ego-network.
+
+Time complexity. For IGEL, each vertex has at most ${d}_{\max }$ neighbours, and the $\alpha$ iterations imply traversing geometrically larger ego-networks with up to ${\left( {d}_{\max }\right) }^{\alpha }$ vertices, upper bounded by $m$ . Thus IGEL's time complexity follows $\mathcal{O}\left( {n \cdot \min \left( {m,{\left( {d}_{\max }\right) }^{\alpha }}\right) }\right)$ , with $\mathcal{O}\left( {n \cdot m}\right)$ when $\alpha \geq \operatorname{diam}\left( G\right)$ .
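+
+As a concrete illustration (not the authors' implementation), a minimal Python sketch of Algorithm 2 and of ${\operatorname{IGEL}}_{\text{ vec }}^{\alpha }\left( v\right)$ could look as follows; it assumes networkx, and the pair-to-index function below is one possible choice of the bijection $f$ :
+
+```python
+# Minimal, illustrative sketch of the IGEL encoding (Algorithm 2); assumes networkx.
+from collections import Counter
+import networkx as nx
+
+def igel_multiset(G: nx.Graph, v, alpha: int) -> Counter:
+    """Multi-set e_v^alpha of (distance, degree-within-ego-network) pairs."""
+    ego = nx.ego_graph(G, v, radius=alpha)                  # E_v^alpha
+    dist = nx.single_source_shortest_path_length(ego, v)    # l_G(u, v) <= alpha
+    return Counter((dist[u], ego.degree(u)) for u in ego.nodes)
+
+def igel_vector(G: nx.Graph, v, alpha: int, d_max: int) -> dict:
+    """Sparse IGEL_vec^alpha(v): index f(lambda, delta) -> frequency."""
+    f = lambda lam, delta: lam * (d_max + 1) + min(delta, d_max)  # one possible bijection
+    return {f(lam, delta): c for (lam, delta), c in igel_multiset(G, v, alpha).items()}
+
+# Toy usage: encode vertex 0 of a 6-cycle.
+G = nx.cycle_graph(6)
+print(igel_vector(G, 0, alpha=2, d_max=2))
+```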
+
+§ 3 THEORETICAL AND EXPERIMENTAL FINDINGS
+
+First, we analyze IGEL's expressive power with respect to 1-WL and recent improvements. Second, we measure the impact of IGEL as an additional input to enrich existing MP-GNN architectures.
+
+§ 3.1 EXPRESSIVITY: WHICH GRAPHS ARE IGEL-DISTINGUISHABLE?
+
+In this section, we discuss the increased expressivity of IGEL with respect to 1-WL, and identify expressivity upper-bounds for graphs that are indistinguishable under MATLANG and the 3-WL test.
+
+ * Relationship to 1-WL. IGEL is capable of distinguishing graphs that are indistinguishable by the 1-WL test, e.g., $d$ -regular graphs. A graph is $d$ -regular if all nodes have degree $d$ . $d$ -regular graphs with equal cardinality are indistinguishable by 1-WL: specifically, for any pair of $d$ -regular graphs ${G}_{1}$ and ${G}_{2}$ such that $\left| {V}_{1}\right| = \left| {V}_{2}\right|$ , ${G}_{1}{ \equiv }_{1 - \mathrm{{WL}}}{G}_{2}$ (see Appendix A for details).
+
+
+Figure 2: IGEL encodings for two cospectral 4-regular graphs from [30]. IGEL identifies 4 kinds of structures within the graphs (each node is labelled a, b, c, or d). The two graphs can be distinguished since the encoded structures and their frequencies do not match.
+
+However, there exist $d$ -regular graphs that can be distinguished by IGEL, as shown in Figure 2. Since the graphs are $d$ -regular, tracing Algorithm 1 shows that the 1-WL test assigns the same color to all nodes and stabilizes after one iteration. In contrast, IGEL with $\alpha = 1$ identifies 4 kinds of structures with different frequencies between the graphs, and is thus able to distinguish them.
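+
+As a small self-contained check (using a different pair of graphs than Figure 2, chosen only for brevity), the triangular prism and ${K}_{3,3}$ are both 3-regular on six vertices, hence 1-WL equivalent, yet their IGEL encodings at $\alpha = 1$ already differ:
+
+```python
+# Illustrative only: two non-isomorphic 3-regular graphs that 1-WL cannot separate,
+# but whose IGEL (alpha = 1) graph-level encodings differ.
+from collections import Counter
+import networkx as nx
+
+def igel(G, v, alpha):
+    ego = nx.ego_graph(G, v, radius=alpha)
+    dist = nx.single_source_shortest_path_length(ego, v)
+    return frozenset(Counter((dist[u], ego.degree(u)) for u in ego.nodes).items())
+
+prism = nx.circular_ladder_graph(3)        # triangular prism, 3-regular on 6 nodes
+k33 = nx.complete_bipartite_graph(3, 3)    # K_{3,3}, also 3-regular on 6 nodes
+encode = lambda G: Counter(igel(G, v, alpha=1) for v in G.nodes)
+print(encode(prism) == encode(k33))        # False: the IGEL encodings differ
+```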
+
+ * Expressivity upper bounds. We identify an upper expressivity bound for IGEL, where the method fails to distinguish graphs, e.g. Strongly Regular Graphs (Definition 1) with equal parameters (Theorem 1, see Appendix B for details):
+
+Definition 1. An $n$ -vertex $d$ -regular graph is strongly regular, denoted $\operatorname{SRG}\left( {n,d,\beta ,\gamma }\right)$ , if adjacent vertices have $\beta$ neighbours in common, and non-adjacent vertices have $\gamma$ neighbours in common.
+
+Theorem 1. IGEL cannot distinguish SRGs when $n$ , $d$ , and $\beta$ are the same, for any values of $\gamma$ (equal or otherwise). IGEL with $\alpha = 1$ can only distinguish SRGs with different values of $n$ , $d$ , and $\beta$ , while IGEL with $\alpha = 2$ can only distinguish SRGs with different values of $n$ and $d$ .
+
+Our findings show that IGEL is a powerful representation, capable of distinguishing 1-WL equivalent graphs such as those in Figure 2, which, as cospectral graphs, are known to be expressible only in strictly more powerful MATLANG sub-languages than 1-WL [16]. Additionally, the upper bound on Strongly Regular Graphs is a hard ceiling on expressivity, since SRGs are known to be indistinguishable by 3-WL [31]. IGEL thus shares the experimental expressivity upper bound of recent methods such as GNNML3 [26]. Furthermore, IGEL provably reaches comparable expressivity on SRGs with respect to sub-graph methods implemented within MP-GNN architectures (see Appendix B, subsection B.2), such as Nested GNNs [21] and GNN-AK [22], which are known to be no less powerful than 3-WL, and the ESAN framework when leveraging ego-networks with root-node flags as a subgraph sampling policy (EGO+) [19], which is as powerful as 3-WL.
+
+§ 3.2 EXPERIMENTAL EVALUATION
+
+We evaluate ${\operatorname{IGEL}}_{\text{ vec }}^{\alpha }\left( v\right)$ as a method of producing architecture-agnostic vertex features on five tasks: graph classification, isomorphism detection, graphlet counting, link prediction, and node classification.
+
+Experimental Setup. We reproduce results from [26], introducing IGEL as features on graph classification, isomorphism detection, and graphlet counting, comparing the performance with and without IGEL on six GNN architectures. We also evaluate IGEL on link prediction against transductive baselines, and on node classification as an additional feature used in MLPs without message-passing.
+
+Notation. In the tables below, $>$ and $<$ denote significantly positive and negative differences after introducing IGEL (as per paired t-tests), ~ denotes insignificant differences, and the best results per task / dataset are underlined.
+
+Table 1: Per-model graph classification accuracy metrics on TU data sets. Each cell shows the average accuracy of the model and data set in that row and column, with IGEL (left) and without IGEL (right).
+
+| Model | Enzymes | Mutag | Proteins | PTC |
+| MLP | 41.10 > 26.18° | 87.61 > 84.61° | 75.43 ~ 75.01 | 64.59 > 62.79° |
+| GCN | 54.48 > 48.60° | 89.61 > 85.42° | 75.67 > 74.50° | 65.76 ~ 65.21 |
+| GAT | 54.88 ~ 54.95 | 90.00 > 86.14° | 73.44 > 70.51° | 66.29 ~ 66.29 |
+| GIN | 54.77 > 53.44* | 89.56 ~ 88.33 | 73.32 > 72.05° | 61.44 ~ 60.21 |
+| Chebnet | 61.88 ~ 62.23 | 91.44 > 88.33° | 74.30 > 66.94° | 64.79 ~ 63.87 |
+| GNNML3 | 61.42 < 62.79° | 92.50 > 91.47* | 75.54 > 62.32° | 64.26 < 66.10° |
+
+*: $p < {0.01}$ , °: $p < {0.0001}$
+
+Table 2: Mean $\pm$ stddev of the best IGEL-augmented graph classification model and reported results on $k$ -hop, GSN, and ESAN from $\left\lbrack {{19},{20},{23}}\right\rbrack$ . Best performing baselines underlined.
+
+| Model | Mutag | Proteins | PTC |
+| IGEL (best) | ${92.5} \pm {1.2}$ | ${75.7} \pm {0.3}$ | ${66.3} \pm {1.3}$ |
+| $k$ -hop [23] ${}^{ \dagger }$ | ${87.9} \pm {1.2}^{\diamond }$ | ${75.3} \pm {0.4}$ | - |
+| GSN [20] ${}^{ \dagger }$ | ${92.2} \pm {7.5}$ | ${76.6} \pm {5.0}$ | ${68.2} \pm {7.2}$ |
+| ESAN [19] ${}^{ \dagger }$ | ${91.1} \pm {7.0}$ | ${76.7} \pm {4.1}$ | ${69.2} \pm {6.5}$ |
+
+$\dagger$ : Results as reported by $\left\lbrack {{19},{20},{23}}\right\rbrack$ .
+
+— Graph Classification. Table 1 shows graph classification results on the TU molecule data sets [28]. We evaluate differences in mean accuracy between 10 runs with (left) / without (right) IGEL. We do not tune network hyper-parameters and establish statistical significance through paired t-tests, with $p < {0.01}$ (*) and $p < {0.0001}$ (°). Our results show that on the Mutag and Proteins data sets, IGEL improves the performance of all MP-GNN models. On the Enzymes and PTC data sets, results are mixed: for all models other than GNNML3, IGEL either significantly improves accuracy (for MLPNet, GCN, and GIN on Enzymes), or does not have a negative impact on performance.
+
+In Table 2, we compare the best IGEL results from Table 1 with reported results for expressive baselines: $k$ -hop GNNs [23], GSNs [20], and ESAN [19]. All results are comparable to IGEL except Mutag, where IGEL significantly outperforms $k$ -hop with $p < {0.0001}$ . When comparing IGEL and best performing baselines for every data set, no differences are statistically significant $\left( {p > {0.01}}\right)$ .
+
+ * Isomorphism Detection & Graphlet Counting. Adding IGEL to the six models in Table 1 on the EXP [32] graph isomorphism task produces significant improvements: all GNN models distinguish all non-isomorphic yet 1-WL equivalent EXP graph pairs with IGEL, vs. 50% accuracy (i.e. random guessing) without IGEL. Likewise, IGEL significantly improves GNN performance on the RandomGraph data set [33] when counting triangles, tailed triangles and the custom 1-WL graphlets proposed by [26] (see detailed results in Appendix C).
+
+ * Link Prediction & Node Classification. We test IGEL on edge / node level tasks to assess its use as a baseline in non-GNN settings. On a transductive link prediction task, we train DeepWalk [34] style embeddings of IGEL encodings rather than node identities on the Facebook and CA-AstroPh graphs [35]. IGEL-derived embeddings outperform transductive baselines modelling link prediction as an edge-level binary classification task, measuring 0.976 vs. 0.968 (Facebook) and 0.984 vs. 0.937 (CA-AstroPh) AUC comparing IGEL vs. node2vec [36]. On multi-label node classification on PPI [5], we train an MLP (i.e. no message passing) with node features and IGEL encodings. Our MLP shows better micro-F1 (0.850) when $\alpha = 1$ than MP-GNN architectures such as GraphSAGE (0.768, as reported in [6]), but underperforms compared to a 3-layer GAT (0.973 micro-F1 from [6]).
+
+ * Experimental Summary. Introducing IGEL yields comparable performance to state-of-the-art methods without architectural modifications, including when compared to strong baseline models focused on WL expressivity such as GNNML3, $k$ -hop, GSN or ESAN. Furthermore, IGEL achieves this at a lower computational cost than, for instance, GNNML3, which requires an $\mathcal{O}\left( {n}^{3}\right)$ eigen-decomposition step to introduce spectral channels. Finally, IGEL can also be used in transductive settings (link prediction) as well as node-level tasks (node classification), outperforming strong transductive baselines and enhancing models without message-passing, such as MLPs. As such, we believe IGEL is an attractive baseline with a clear relationship to the 1-WL test that can be used to improve MP-GNN expressivity without the need for costly architecture search.
+
+§ 4 CONCLUSIONS
+
+We presented IGEL, a novel vertex representation algorithm on unattributed graphs that allows MP-GNN architectures to go beyond 1-WL expressivity. We showed that IGEL is related to and more expressive than the 1-WL test, and formally proved an expressivity upper bound on certain families of Strongly Regular Graphs. Finally, our experimental results indicate that introducing IGEL into existing MP-GNN architectures yields comparable performance to state-of-the-art methods, without architectural modifications and at lower computational costs than other approaches.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..cce1bbf76ceaeee6414e1722b10a74726b322091
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,307 @@
+# Continuous Neural Algorithmic Planners
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Neural algorithmic reasoning studies the problem of learning classical algorithms with neural networks, especially with a focus on graph architectures. A recent proposal, XLVIN, reaps the benefits of using a graph neural network that simulates the value iteration algorithm in deep reinforcement learning agents. It allows model-free planning without access to privileged information about the environment, which is usually unavailable. However, XLVIN only supports discrete action spaces, and is hence not trivially applicable to most tasks of real-world interest. We expand XLVIN to continuous action spaces by discretization, and evaluate several selective expansion policies to deal with the large planning graphs. Our proposal, CNAP, demonstrates how neural algorithmic reasoning can make a measurable impact in higher-dimensional continuous control settings, such as MuJoCo, bringing gains in low-data settings and outperforming model-free baselines.
+
+## 1 Introduction
+
+Neural networks are capable of learning directly from high-dimensional unstructured input data, tackling the input constraints that limit classical algorithms from solving more complex problems. However, neural networks often require large amounts of data to train and suffer from poor generalization and interpretability. On the other hand, algorithms intrinsically generalize and provide mathematical provability with performance guarantees. The complementary relationship motivates the topic of neural algorithmic reasoning to study the problem of learning classical algorithms with neural networks [1].
+
+Recent works focus on utilizing Graph Neural Networks (GNNs) [2-4] for algorithmic reasoning tasks due to the close algorithmic alignment that was proven to bring better sample efficiency and generalization ability $\left\lbrack {5,6}\right\rbrack$ . Besides shortest-path and spanning-tree algorithms, there have been a number of successful applications by aligning GNNs with classical algorithms, covering a range of problems such as bipartite matching [7], min-cut problem [8], and Travelling Salesman Problem [9].
+
+We look at the use of a GNN that simulates the value iteration algorithm [10] in deep reinforcement learning agents. Value iteration [11] is a dynamic programming algorithm that is guaranteed to solve a reinforcement learning problem but is traditionally inhibited by its requirement of tabulated inputs. Earlier works [12-16] introduced value iteration as an inductive bias that helps agents perform implicit planning without explicitly invoking a planning algorithm, but these were found to suffer from an algorithmic bottleneck [17]. Conversely, eXecuted Latent Value Iteration Net (XLVIN) [17] was proposed to leverage a value-iteration-behaving GNN [10] by adopting the neural algorithmic framework [1]. XLVIN is able to learn under a low-data regime, tackling the algorithmic bottleneck suffered by other implicit planners.
+
+One particular difficulty of implicit planners is handling a continuous action space. XLVIN uses a transition model to build a planning graph, over which the pre-trained GNN can execute value iteration in a latent space. So far, it only applies to environments with small and discrete action spaces. The limitation is that the construction of the planning graph requires an enumeration of all possible actions - starting from the current state and expanding for a number of hops equal to the planning horizon. The graph size quickly explodes as the dimensionality of the action space increases. Moreover, a continuous action space results in an infinite pool of action choices, making the construction of a planning graph infeasible.
+
+Nevertheless, continuous control is of significant importance, as most simulation or robotics control tasks [18] have continuous action spaces by design. High complexity also naturally arises as the problem moves towards more powerful real-world domains. To extend such an agent powered by neural algorithmic reasoning to complex continuous control problems, we propose Continuous Neural Algorithmic Planner (CNAP). It generalizes XLVIN to continuous action spaces by discretizing them through binning. Moreover, CNAP handles the large planning graph by following a sampling policy that carefully selects actions during the neighbor expansion stage. Choosing which actions to sample is critical as the graph built determines where the GNN would simulate value iteration computation, and ultimately influences the planning performance.
+
+In addition, the discreteness of the graph neural network simulating the value iteration update rule contrasts with the continuous action space, which corresponds to a continuum of edges between states. CNAP therefore also presents a novel setup for neural algorithmic reasoning, where the downstream task does not fully align with the algorithm studied. This opens a new path for the field, going beyond the current standard of precisely applying learned classical graph algorithms.
+
+We confirm the feasibility of CNAP on a continuous relaxation of a classical low-dimensional control task, where we can still fully expand all of the binned actions after discretization. Then, we apply CNAP to general MuJoCo [19] environments with complex continuous dynamics, where expanding the planning graph by taking all actions is impossible. By expanding the application scope from simple discrete control to complex continuous control, we show that such an intelligent agent with algorithmic reasoning power can be applied to tasks with more real-world interests.
+
+## 2 Background
+
+### 2.1 Markov Decision Process (MDP)
+
+A reinforcement learning problem can be formally described using the MDP framework. At each time step $t \in \{ 0,1,\ldots , T\}$ , the agent performs an action ${a}_{t} \in \mathcal{A}$ given the current state ${s}_{t} \in \mathcal{S}$ . This spawns a transition into a new state ${s}_{t + 1} \in \mathcal{S}$ according to the transition probability $p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$ , and produces a reward ${r}_{t} = r\left( {{s}_{t},{a}_{t}}\right)$ . A policy $\pi \left( {{a}_{t} \mid {s}_{t}}\right)$ guides an agent by specifying the probability of choosing an action ${a}_{t}$ given a state ${s}_{t}$ . The trajectory $\tau$ is the sequence of actions and states the agents took $\left( {{s}_{0},{a}_{0},\ldots ,{s}_{T},{a}_{T}}\right)$ . We define the infinite horizon discounted return as $R\left( \tau \right) = \mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t}$ , where $\gamma \in \left\lbrack {0,1}\right\rbrack$ is the discount factor. The goal of an agent is to maximize the overall return by finding the optimal policy ${\pi }^{ * } = {\operatorname{argmax}}_{\pi }{\mathbb{E}}_{\tau \sim \pi }\left\lbrack {R\left( \tau \right) }\right\rbrack$ . We can measure the desirability of a state $s$ using the state-value function ${V}^{ * }\left( s\right) = {\mathbb{E}}_{\tau \sim {\pi }^{ * }}\left\lbrack {R\left( \tau \right) \mid {s}_{t} = s}\right\rbrack$ .
+
+### 2.2 Value Iteration
+
+Value iteration is a dynamic programming algorithm that computes the optimal policy and its value function given a tabulated MDP that perfectly describes the environment. It randomly initializes ${V}^{ * }\left( s\right)$ and iteratively updates the value function of each state $s$ using the Bellman optimality equation [11]:
+
+$$
+{V}_{i + 1}^{ * }\left( s\right) = \mathop{\max }\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s, a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s, a}\right) {V}_{i}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{1}
+$$
+
+and we can extract the optimal policy using:
+
+$$
+{\pi }^{ * }\left( s\right) = \mathop{\operatorname{argmax}}\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s, a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s, a}\right) {V}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{2}
+$$
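+
+As a point of reference, a minimal tabular value iteration sketch implementing Equations (1) and (2) could look as follows; the array layout and the toy MDP are illustrative and not tied to any specific environment:
+
+```python
+# Illustrative tabular value iteration; P[s, a, s'] are transition probabilities, R[s, a] rewards.
+import numpy as np
+
+def value_iteration(P, R, gamma=0.9, tol=1e-8):
+    n_states, n_actions, _ = P.shape
+    V = np.zeros(n_states)
+    while True:
+        Q = R + gamma * (P @ V)              # Bellman optimality backup (Equation 1)
+        V_new = Q.max(axis=1)
+        if np.max(np.abs(V_new - V)) < tol:
+            return V_new, Q.argmax(axis=1)   # optimal values and greedy policy (Equation 2)
+        V = V_new
+
+# Toy 2-state, 2-action MDP, purely to exercise the code.
+P = np.array([[[1.0, 0.0], [0.0, 1.0]],
+              [[0.0, 1.0], [1.0, 0.0]]])
+R = np.array([[0.0, 1.0],
+              [1.0, 0.0]])
+print(value_iteration(P, R))
+```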
+
+### 2.3 Message-Passing GNN
+
+Graph Neural Networks (GNNs) generalize traditional deep learning techniques onto graph-structured data [20][21]. A message-passing GNN [3] iteratively updates its node feature ${\overrightarrow{h}}_{s}$ by aggregating messages from its neighboring nodes. At each timestep $t$ , a message can be computed between each connected pair of nodes via a message function $M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right)$ , where ${\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}$ is the edge feature. A node receives messages from all its connected neighbors $\mathcal{N}\left( s\right)$ and aggregates them via a permutation-invariant operator $\oplus$ that produces the same output regardless of the spatial permutation of the inputs. The aggregated message ${\overrightarrow{m}}_{s}^{t}$ of a node $s$ can be formulated as:
+
+$$
+{\overrightarrow{m}}_{s}^{t} = {\bigoplus }_{{s}^{\prime } \in \mathcal{N}\left( s\right) }M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right) \tag{3}
+$$
+
+The node feature ${\overrightarrow{h}}_{s}^{t}$ is then transformed via an update function $U$ :
+
+$$
+{\overrightarrow{h}}_{s}^{t + 1} = U\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{m}}_{s}^{t}}\right) \tag{4}
+$$
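+
+A minimal sketch of one such message-passing step, in the sense of Equations (3) and (4), is shown below; the sum aggregator and the linear maps standing in for $M$ and $U$ are illustrative simplifications:
+
+```python
+# Illustrative message-passing step with sum aggregation and linear stand-ins for M and U.
+import numpy as np
+
+def mp_step(H, edges, W_msg, W_upd):
+    """H: (n, k) node features; edges: iterable of (src, dst) pairs; returns updated (n, k)."""
+    M = np.zeros_like(H)
+    for s_prime, s in edges:             # message from s' to s
+        M[s] += H[s_prime] @ W_msg       # simplified message function M
+    return np.tanh(H @ W_upd + M)        # simplified update function U
+
+rng = np.random.default_rng(0)
+H = rng.normal(size=(4, 8))
+edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]   # a path graph, both directions
+print(mp_step(H, edges, rng.normal(size=(8, 8)), rng.normal(size=(8, 8))).shape)
+```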
+
+### 2.4 Neural Algorithmic Reasoning
+
+A dynamic programming (DP) algorithm breaks down the problem into smaller sub-problems, and recursively computes the optimal solutions. DP algorithm has a general form:
+
+$$
+\text{Answer}\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack = \text{DP-Update}\left( {\{ \text{Answer}\left\lbrack {k - 1}\right\rbrack \left\lbrack j\right\rbrack \} , j = 1\ldots n}\right) \tag{5}
+$$
+
+The alignment between GNNs and DP can be seen by mapping the node representation ${\overrightarrow{h}}_{s}$ to Answer $\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack$ , and the aggregation step of the GNN to DP-Update.
+
+An algorithmic alignment framework was proposed by [5], where they proved that GNNs could simulate dynamic programming algorithms efficiently with good sample complexity. Furthermore, [6] showed that imitating the individual steps and intermediate outputs of graph algorithms using GNNs can generalize well into out-of-distribution data.
+
+## 3 Related Work
+
+### 3.1 Continuous action space
+
+A common technique for dealing with continuous control problems is to discretize the action space, converting them into discrete control problems. However, discretization leads to an explosion in action space. [22] proposed to use a policy with factorized distribution across action dimensions, and proved it effective on high-dimensional complex tasks with on-policy optimization algorithms. Moreover, we can sample a subset of actions during node expansion when constructing a planning graph. Sampled MuZero [23] extended MuZero [24] with a sample-based policy based on parameter reuse for policy iteration algorithms. Instead, our work constructs a graph for a neural algorithmic reasoner to execute value iteration algorithm, where the actions sampled would directly participate in the Bellman optimality equation (1).
+
+### 3.2 Large-scale graphs
+
+Sampling modules [25] are introduced into GNN architectures to deal with large-scale graphs as a result of neighbor explosion from stacking multiple layers. The unrolling process to construct a planning graph requires node-level sampling. The earlier GraphSAGE [26] introduces a fixed-size node expansion procedure into GCN [2]. This is followed by PinSage [27], which uses a random-walk-based GCN to perform importance-based sampling. However, our work looks at sampling under an implicit planning context, where the importance of each node in sampling is more difficult to understand due to the lack of an exact description of the environment dynamics. Furthermore, sampling in a multi-dimensional action space also requires more careful thinking in the decision-making process.
+
+## 4 Architecture
+
+Our architecture uses XLVIN as a starting point, which we introduce first. This is followed by a discussion of the challenges that arise from extending neural algorithmic implicit planners to the continuous action space and the approaches we proposed to address them.
+
+### 4.1 XLVIN modules
+
+Given the observation space $\mathbf{S}$ and the action space $\mathcal{A}$ , we let the dimension of state embeddings in the latent space be $k$ . The XLVIN architecture can be broken down into four modules:
+
+
+
+Figure 1: XLVIN modules
+
+Encoder $\left( {z : S \rightarrow {\mathbb{R}}^{k}}\right)$ : A 3-layer MLP which encodes the raw observation from the environment $s \in \mathbf{S}$ , to a state embedding ${\overrightarrow{h}}_{s} = z\left( s\right)$ in the latent space.
+
+Transition $\left( {T : {\mathbb{R}}^{k} \times \mathcal{A} \rightarrow {\mathbb{R}}^{k}}\right)$ : A 3-layer MLP with layer norm taken before the last layer that takes two inputs: the state embedding of an observation $z\left( s\right) \in {\mathbb{R}}^{k}$ , and an action $a \in \mathcal{A}$ . It predicts the next state embedding $z\left( {s}^{\prime }\right) \in {\mathbb{R}}^{k}$ , where ${s}^{\prime }$ is the next state transitioned into when the agent performs action $a$ in the current state $s$ .
+
+Executor $\left( {X : {\mathbb{R}}^{k} \times {\mathbb{R}}^{\left| \mathcal{A}\right| \times k} \rightarrow {\mathbb{R}}^{k}}\right)$ : A message-passing GNN pre-trained to simulate each individual step of the value iteration algorithm following the set-up in [10]. Given the current state embedding ${\overrightarrow{h}}_{s}$ , a graph is constructed by enumerating all possible actions $a \in \mathcal{A}$ as edges to expand, and then using the Transition module to predict the next state embeddings as neighbors $\mathcal{N}\left( {\bar{h}}_{s}\right)$ . Finally, the Executor output is an updated state embedding ${\mathcal{X}}_{s} = X\left( {{\overrightarrow{h}}_{s},\mathcal{N}\left( {\overrightarrow{h}}_{s}\right) }\right)$ .
+
+Policy and Value $\left( {P : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow {\left\lbrack 0,1\right\rbrack }^{\left| \mathcal{A}\right| }\text{and}V : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow \mathbb{R}}\right)$ : The Policy module is a linear layer that takes the outputs from the Encoder and Executor, i.e. the state embedding ${\overrightarrow{h}}_{s}$ and the updated state embedding ${\overrightarrow{\mathcal{X}}}_{s}$ , and produces a categorical distribution corresponding to the estimated policy, $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ . The Tail module is also a linear layer that takes the same inputs and produces the estimated state-value function, $V\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ .
+
+The training procedure follows the XLVIN paper [17], and Proximal Policy Optimization (PPO) [28] is used to train the model, apart from the Executor. We use the PPO implementation and hyperparameters by [29]. The Executor is pre-trained as shown in [10] and directly plugged in.
+
+### 4.2 Discretization of the continuous action space
+
+Assume the continuous action space $\mathcal{A}$ has $D$ dimensions. Given the number of action bins $N$ , $\mathcal{A}$ is discretized into evenly spaced discrete action bins. That is, in each dimension $i \in \{ 1,\ldots , D\}$ , ${\mathcal{A}}_{i} = \left\lbrack {{v}_{1},{v}_{2}}\right\rbrack$ is converted to $\left\{ {{a}_{i}^{1},{a}_{i}^{2},\ldots ,{a}_{i}^{N}}\right\}$ where ${a}_{i}^{k} = \left\lbrack {{v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}\left( {k - 1}\right) ,\;{v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}k}\right)$ , and the upper bound is taken inclusively when $k = N$ . For each action bin ${a}_{i}^{k}$ , the median value of the bin is chosen as the action to take.
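+
+A small sketch of this binning scheme is given below; the function and variable names are illustrative rather than taken from the implementation:
+
+```python
+# Illustrative discretization of a D-dimensional continuous action space into N bins per dimension.
+import numpy as np
+
+def bin_medians(low, high, n_bins):
+    """Return a (D, N) array with the median (midpoint) action value of each bin."""
+    low, high = np.asarray(low, float), np.asarray(high, float)
+    edges = low[:, None] + (high - low)[:, None] / n_bins * np.arange(n_bins + 1)  # (D, N+1)
+    return (edges[:, :-1] + edges[:, 1:]) / 2.0
+
+def to_continuous(bin_indices, medians):
+    """Map one chosen bin index per dimension back to a continuous action vector."""
+    return medians[np.arange(len(bin_indices)), bin_indices]
+
+medians = bin_medians(low=[-1.0, -1.0], high=[1.0, 1.0], n_bins=11)   # a 2-D action space
+print(to_continuous(np.array([5, 0]), medians))
+```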
+
+Challenge: The discretization of a multi-dimensional continuous action space leads to a combinatorial explosion in action space size. The explosion results in two bottlenecks in the architecture: (i) the Policy module that produces the action probabilities and (ii) the construction of the GNN graph, which requires an enumeration of all possible actions. Below, we address the two bottlenecks respectively.
+
+### 4.3 Factorized joint policy
+
+Assume each action $\overrightarrow{a} \in \mathcal{A}$ has $D$ dimensions, and each dimension has $N$ discrete action bins. A naive policy ${\pi }^{ * } = p\left( {\overrightarrow{a} \mid s}\right)$ produces a categorical distribution with ${N}^{D}$ possible actions. To tackle this challenge, we follow a factorized joint policy proposed in [22]:
+
+$$
+{\pi }^{ * }\left( {\overrightarrow{a} \mid s}\right) = \mathop{\prod }\limits_{{i = 1}}^{D}{\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right) \tag{6}
+$$
+
+
+
+Figure 2: (a) Factorized joint policy on an action space with dimension of two. (b) Sampling methods when constructing the graph in Executor.
+
+As illustrated in Figure 2(a), a factorized joint policy $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ is a linear layer with an output dimension of $N * D$ . It approximates $D$ policies simultaneously. Each policy ${\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right)$ indicates the probability of choosing an action ${a}_{i} \in {\mathcal{A}}_{i}$ in the ${i}^{\text{th }}$ dimension, where $\left| {\mathcal{A}}_{i}\right| = N$ . This deals with the exponential explosion of action bins due to discretization; the output size now grows only linearly. Note there is a trade-off in the choice of $N$ , as a larger number of action bins retains more information from the continuous action space, but it also implies larger graphs and hence computation costs. We provide an ablation study on the impact of this choice in our evaluation.
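+
+A minimal PyTorch sketch of such a factorized policy head is shown below; it is simplified to take a single embedding input, and all names are illustrative:
+
+```python
+# Illustrative factorized joint policy head: one linear layer producing N * D logits,
+# read as D independent categorical distributions over the N bins of each dimension.
+import torch
+import torch.nn as nn
+
+class FactorizedPolicy(nn.Module):
+    def __init__(self, in_dim: int, n_bins: int, n_dims: int):
+        super().__init__()
+        self.n_bins, self.n_dims = n_bins, n_dims
+        self.head = nn.Linear(in_dim, n_bins * n_dims)
+
+    def forward(self, h):                                   # h: (batch, in_dim) state embedding
+        logits = self.head(h).view(-1, self.n_dims, self.n_bins)
+        return torch.distributions.Categorical(logits=logits)
+
+policy = FactorizedPolicy(in_dim=64, n_bins=11, n_dims=6)  # e.g. a 6-dimensional action space
+dist = policy(torch.randn(1, 64))
+action_bins = dist.sample()                                # (1, 6): one bin index per dimension
+joint_log_prob = dist.log_prob(action_bins).sum(-1)        # joint log-prob is the sum over dimensions
+print(action_bins, joint_log_prob)
+```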
+
+### 4.4 Neighbor sampling methods
+
+As shown in Figure 2(b), the second bottleneck occurs when constructing a graph to execute the pre-trained GNN. It treats each state as a node, then enumerates all possible actions ${\overrightarrow{a}}_{i} \in \mathcal{A}$ to connect neighbors via approximating $\overrightarrow{h}\left( s\right) \overset{{\overrightarrow{a}}_{i}}{ \rightarrow }\overrightarrow{h}\left( {s}_{i}^{\prime }\right)$ . Therefore, each node has degree $\left| \mathcal{A}\right|$ , and the graph size grows even faster as it expands deeper. To tackle this challenge, instead of using all possible actions, we propose a neighbor sampling method that chooses a subset of actions to expand. The important question is which actions to select. The pre-trained GNN uses the constructed graph to simulate value iteration behavior and predict the state-value function. Hence, it is critical that the sampled actions include those that produce a good approximation of the state-value function.
+
+Below, we propose four possible methods to sample $K$ actions from $\mathcal{A}$ , where $K \ll \left| \mathcal{A}\right|$ is a fixed number, under the context of value-iteration-based planning.
+
+#### 4.4.1 Gaussian methods
+
+Gaussian distribution is a common baseline policy distribution for continuous action spaces, and it is straightforward to interpret. Furthermore, it discourages extreme actions while encouraging neutral ones with some level of continuity, which suits the requirement of many planning problems. We propose two variants of sampling policy based on Gaussian distribution.
+
+(1) Manual-Gaussian: A Gaussian distribution is used to randomly sample action values in each dimension ${a}_{i} \in {\mathcal{A}}_{i}$ , which are stacked together as a final action vector $\overrightarrow{a} = {\left\lbrack {a}_{0},\ldots ,{a}_{D - 1}\right\rbrack }^{T} \in \mathcal{A}$ . We repeat this $K$ times to sample a subset of $K$ action vectors. We set the mean $\mu = N/2$ and standard deviation $\sigma = N/4$ , where $N$ is the number of discrete action bins. These two parameters are chosen to spread a reasonable distribution over $\left\lbrack {0, N - 1}\right\rbrack$ . Outliers and non-integers are rounded to the nearest whole number within the range of $\left\lbrack {0, N - 1}\right\rbrack$ (a short illustrative sketch of this step appears at the end of this subsection).
+
+(2) Learned-Gaussian: The two parameters manually chosen in the previous method constrain the median action in each dimension to be the most likely one. Here, instead, two fully-connected linear layers are used to separately estimate the mean $\mu$ and standard deviation $\sigma$ . They take the state embedding ${\overrightarrow{h}}_{s}$ from the Encoder and output parameter estimates for each dimension. We use the reparameterization trick [30] to make the sampling differentiable.
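+
+As referenced above, a minimal sketch of the Manual-Gaussian sampling step (with illustrative names, not taken from the implementation) is:
+
+```python
+# Illustrative Manual-Gaussian neighbour sampling: draw K action-bin vectors per state
+# from a fixed Gaussian over bin indices with mu = N/2 and sigma = N/4.
+import numpy as np
+
+def manual_gaussian_sample(n_bins: int, n_dims: int, k: int, rng=None):
+    rng = rng if rng is not None else np.random.default_rng()
+    mu, sigma = n_bins / 2.0, n_bins / 4.0
+    samples = rng.normal(mu, sigma, size=(k, n_dims))              # K candidate action vectors
+    return np.clip(np.rint(samples), 0, n_bins - 1).astype(int)    # round and clamp outliers
+
+print(manual_gaussian_sample(n_bins=11, n_dims=6, k=4, rng=np.random.default_rng(0)))
+```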
+
+#### 4.4.2 Parameter reuse
+
+Gaussian methods still impose a fixed form on the sampling distribution, which may not necessarily fit. Previous work [23] studied a similar action sampling problem. They reasoned that since the actions selected by the policy are expected to be more valuable, the policy itself can be used directly for sampling.
+
+(3) Reuse-Policy: We can reuse the Policy layer $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ to sample the actions when we expand the graph in the Executor. This is equivalent to using the policy distribution ${\pi }^{ * } = p\left( {\overrightarrow{a} \mid s}\right)$ as the neighbor sampling distribution. However, the second input ${\overrightarrow{\mathcal{X}}}_{s}$ for the Policy layer comes from the Executor, which is not available at the time of constructing the graph. It is filled in by setting ${\overrightarrow{\mathcal{X}}}_{s} = \overrightarrow{0}$ as a placeholder.
+
+#### 4.4.3 Learn to expand
+
+Lastly, we can also use a separate layer to learn the neighbor sampling distribution.
+
+(4) Learned-Sampling: This uses a fully-connected linear layer that consumes ${\overrightarrow{h}}_{s}$ and produces an output of dimension $N \cdot D$ . It is expected to learn the optimal neighbor sampling distribution in a factorized joint manner, as in Figure 2(a). The outputs are logits for $D$ categorical distributions, where we use Gumbel-Softmax [31] for differentiable sampling of actions in each dimension, together producing $\overrightarrow{a} = {\left\lbrack {a}_{1},\ldots ,{a}_{D}\right\rbrack }^{T}$ .
+
+## 5 Results
+
+### 5.1 Classic Control
+
+To evaluate the performance of CNAP agents, we first ran the experiments on a relatively simple MountainCarContinuous-v0 environment from OpenAI Gym Classic Control suite [32], where the action space was one-dimensional. The training of the agent used PPO under 20 rollouts with 5 training episodes each, so the training consumed 100 episodes in total.
+
+We compared two variants of CNAP agents: "CNAP-B" had its Executor pre-trained on a type of binary graph that aimed to simulate the bi-directional control of the car, and "CNAP-R" had its Executor pre-trained on random synthetic Erdős-Rényi graphs. In Table 1, we compared both CNAP agents against a "PPO Baseline" agent that consisted of only the Encoder and Policy/Tail modules. Both the CNAP agents outperformed the baseline agent for this environment, indicating the success of extending XLVIN onto continuous settings via binning.
+
+Table 1: Mean rewards for MountainCarContinuous-v0 using PPO Baseline and two variants of CNAP agents. All three agents ran on 10 action bins, and were trained on 100 episodes in total. Both CNAP agents executed one step of value iteration. The reward was averaged over 100 episodes and 10 seeds.
+
+| Model | MountainCarContinuous-v0 |
+| PPO Baseline | $- {4.96} \pm {1.24}$ |
+| CNAP-B | ${55.73} \pm {45.10}$ |
+| CNAP-R | $\mathbf{{63.41}} \pm {37.89}$ |
+
+#### 5.1.1 Effect of GNN width and depth
+
+In Tables 2 and 3, we varied two hyperparameters of the CNAP agents. In Table 2, we varied the number of action bins into which the continuous action space was discretized. In Table 3, we varied the number of GNN steps, corresponding to the number of steps we simulated in the value iteration algorithm. The two hyperparameters controlled the width and depth of the GNN graphs constructed, respectively. The two agents performed best with 10 action bins and one GNN step. We note that the number of training samples might not be sufficient for larger graph widths and depths. Also, a deeper graph requires repeatedly applying the Transition module, where imprecision might accumulate, leading to inappropriate state embeddings and hence less desirable results.
+
+Table 2: Mean rewards for MountainCarContinuous-v0 using Baseline and CNAP agents by varying number of action bins, i.e., width of graph. The results were averaged over 100 episodes and 10 seeds.
+
+| Model | Action Bins | MountainCar-Continuous |
+| PPO | 5 | $- {2.16} \pm {1.25}$ |
+| PPO | 10 | $- {4.96} \pm {1.24}$ |
+| PPO | 15 | $- {3.95} \pm {0.77}$ |
+| CNAP-B | 5 | ${29.46} \pm {57.57}$ |
+| CNAP-B | 10 | ${55.73} \pm {45.10}$ |
+| CNAP-B | 15 | ${22.79} \pm {41.24}$ |
+| CNAP-R | 5 | ${20.32} \pm {53.13}$ |
+| CNAP-R | 10 | $\mathbf{{63.41}} \pm {37.89}$ |
+| CNAP-R | 15 | ${26.21} \pm {46.44}$ |
+
+Table 3: Mean rewards for MountainCarContinuous-v0 using CNAP agents by varying number of GNN steps, i.e., depth of graph. The results were averaged over 100 episodes and 10 seeds.
+
+| Model | GNN Steps | MountainCar-Continuous |
+| CNAP-B | 1 | ${55.73} \pm {45.10}$ |
+| CNAP-B | 2 | ${46.93} \pm {44.13}$ |
+| CNAP-B | 3 | ${40.58} \pm {48.20}$ |
+| CNAP-R | 1 | $\mathbf{{63.41}} \pm {37.89}$ |
+| CNAP-R | 2 | ${34.49} \pm {47.77}$ |
+| CNAP-R | 3 | ${43.61} \pm {46.16}$ |
+
+### 5.2 MuJoCo
+
+We then ran experiments on more complex environments from OpenAI Gym's MuJoCo suite [19, 32] to evaluate how CNAPs handle the large increase in scale. Unlike the Classic Control suite, the MuJoCo environments have higher dimensions in both their observation and action spaces. We started by evaluating CNAP agents in two environments with relatively lower action dimensions, and then we moved on to two more environments with much higher dimensions. The discretization of the continuous action space also implied a combinatorial explosion in the action space, resulting in a large graph constructed for the GNN. We used the proposed factorized joint policy from Section 4.3 and the neighbor sampling methods from Section 4.4 to address the limitations.
+
+#### 5.2.1 On low-dimensional environments
+
+In Figure 3, we experimented with the four sampling methods discussed in Section 4.4 on Swimmer-v2 (action space dimension of 2) and HalfCheetah-v2 (action space dimension of 6). We chose to take the number of action bins $N = {11}$ for all the experiments following [22], where the best performance on MuJoCo environments was obtained when $7 \leq N \leq {15}$ . In all cases, CNAP outperformed the baseline in the final performances. Moreover, Manual-Gaussian and Reuse-Policy were the most promising sampling strategies as they also demonstrated faster learning, hence better sample efficiency. This pointed to the benefits of parameter reuse and the synergistic improvement between learning to act and learning to sample relevant neighbors, as well as the power of a well-chosen manual distribution. We also note that choosing a manual distribution can become non-trivial when the task becomes more complex, especially if choosing the average values for each dimension is not the most desirable. Our work acts as a proof-of-concept of sampling strategies and leaves the choice of parameters for future studies.
+
+#### 5.2.2 On high-dimensional environments
+
+We then further evaluated the scalability of CNAP agents in more complex environments where the dimensionality of the action space was significantly larger, while retaining a relatively low-data regime ( ${10}^{6}$ actor steps). In Figure 4, we compared all the previously proposed CNAP methods on two environments with highly complex dynamics, both having an action space dimension of 17. In the Humanoid task, all variants of CNAPs outperformed PPO, acquiring knowledge significantly faster.
+
+
+Figure 3: Average rewards over time for CNAP (red) and PPO baseline (blue), in Swimmer (action dimension=2) and HalfCheetah (action dimension=6), using different sampling methods. In Swimmer, CNAP with sampling methods was also compared with the original version that expands all actions (green). In (a), the actions were sampled using a Gaussian distribution with mean $= N/2$ and std $= N/4$ , where $N$ was the number of action bins used to discretize the continuous action space. In (b), two linear layers were used to learn the mean and std, respectively. In (c), the Policy layer was reused in sampling actions to expand. In (d), a separate linear layer was used to learn the optimal neighbor sampling distribution. The mean rewards were averaged over 100 episodes, and the learning curve was aggregated from 5 seeds.
+
+Particularly, we found that nonparametric approaches to sampling the graph in CNAP (e.g. manual Gaussian and policy reuse) acquired this knowledge significantly faster than any other CNAP approach tested. This supplements our previous results well, and further testifies to the improved learning stability when the sampling process does not contain additional parameters to optimise.
+
+We also evaluated all of the methods considered against PPO on the HumanoidStandup task, with all methods learning to sit up, and no apparent distinction in the rate of acquisition. However, we provide some qualitative evidence that the solution found by CNAP appears to be more robust in the way this knowledge is acquired (see Appendix A).
+
+
+Figure 4: Average rewards over time for CNAP (red) and PPO baseline (blue), in Humanoid (action dimension=17) and HumanoidStandup (action dimension=17), using Manual-Gaussian and Reuse-Policy sampling methods.
+
+
+#### 5.2.3 Qualitative interpretation
+
+We captured video recordings of the interactions between the agents and the environments to provide a qualitative interpretation of the results above. We chose to look at selected frames at equal time intervals from one episode after the last training iteration by CNAP (Manual-Gaussian) and PPO Baseline, respectively.
+
+
+Figure 5: Selected frames of two agents in HalfCheetah
+
+From Figure 5's HalfCheetah task, we can see that the agent trained by the PPO Baseline fell over quickly and never managed to turn back over. However, CNAP's agent could balance well and kept running forward. This observation supports the higher average episodic rewards gained by CNAP agents over the PPO Baseline in Figure 3.
+
+
+Figure 6: Selected frames of two agents in Humanoid
+
+Similarly, in Figure 6's Humanoid task, PPO Baseline's humanoid stayed stationary and lost balance quickly, while CNAP's humanoid could walk forward in small steps. This observation aligned with the results in Figure 4 where the gain from CNAP was significant.
+
+The selected frames for the Swimmer and HumanoidStandup tasks are attached in Appendix A. We note that, although quantitatively the CNAP agent did not differ from the PPO Baseline in the HumanoidStandup task as shown in Figure 4, for the trajectories we observed, it successfully remained in a sitting position, while the PPO Baseline fell quickly.
+
+## 6 Conclusion
+
+We present CNAP, a method that generalizes implicit planners to continuous action spaces for the first time. In particular, we study implicit planners based on neural algorithmic reasoners and the unstudied implications of not having precise alignment between the learned graph algorithm and the setup where the executor is applied. To deal with the challenges in building the planning tree, as a result of the continuous, high-dimensional nature of the action space, we combine previous advancements in XLVIN with binning, as well as parametric and non-parametric neighbor sampling strategies. We evaluate the agent against its model-free variant, observing its efficiency in low-data settings and consistently better performance than the baseline. Moreover, this paves the way for extending other implicit planners to continuous action spaces and studying neural algorithmic reasoning beyond strict applications of graph algorithms.
+
+References
+
+[1] Petar Veličković and Charles Blundell. Neural algorithmic reasoning. arXiv preprint arXiv:2105.02761, 2021. 1
+
+[2] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, 2017. 1, 3
+
+[3] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1263-1272. PMLR, 2017. URL http://proceedings.mlr.press/v70/gilmer17a.html. 2
+
+[4] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In ICLR, 2018. 1
+
+[5] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? In 8th International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rJxbJeHFPS.1, 3
+
+[6] Petar Velickovic, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In 8th International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkgKOOEtvS.1, 3
+
+[7] Dobrik Georgiev and Pietro Lió. Neural bipartite matching. CoRR, abs/2005.11304, 2020. URL https://arxiv.org/abs/2005.11304.1
+
+[8] Pranjal Awasthi, Abhimanyu Das, and Sreenivas Gollapudi. Beyond GNNs: A sample efficient architecture for graph problems, 2021. URL https://openreview.net/forum?id=Px7xIKHjmMS. 1
+
+[9] Chaitanya K. Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent, and Xavier Bresson. Learning TSP requires rethinking generalization. CoRR, abs/2006.07054, 2020. URL https://arxiv.org/abs/2006.07054.1
+
+[10] Andreea Deac, Pierre-Luc Bacon, and Jian Tang. Graph neural induction of value iteration. arXiv preprint arXiv:2009.12604, 2020. 1, 4
+
+[11] Richard Bellman. Dynamic Programming. Dover Publications, 1957. ISBN 9780486428093. 1,2
+
+[12] Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration networks. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, NIPS, pages 2146-2154, 2016. 1
+
+[13] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. In NIPS, pages 6118-6128, 2017.
+
+[14] Sufeng Niu, Siheng Chen, Hanyu Guo, Colin Targonski, Melissa C. Smith, and Jelena Kovacevic. Generalized value iteration networks: Life beyond lattices. In AAAI, pages 6246-6253. AAAI Press, 2018.
+
+[15] Gregory Farquhar, Tim Rocktäschel, Maximilian Igl, and Shimon Whiteson. Treeqn and atreec: Differentiable tree-structured models for deep reinforcement learning. In ICLR, 2018.
+
+[16] Lisa Lee, Emilio Parisotto, Devendra Singh Chaplot, Eric Xing, and Ruslan Salakhutdinov. Gated path planning networks. In International Conference on Machine Learning, pages 2947-2955. PMLR, 2018. 1
+
+[17] Andreea Deac, Petar Velickovic, Ognjen Milinkovic, Pierre-Luc Bacon, Jian Tang, and Mladen Nikolic. Neural algorithmic reasoners are implicit planners. In Advances in Neural Information Processing Systems 34, pages 15529-15542, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/82e9e7a12665240d13d0b928be28f230-Abstract.html.1,4
+
+[18] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. J. Mach. Learn. Res., 17:39:1-39:40, 2016. URL http://jmlr.org/papers/v17/15-522.html. 2
+
+[19] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. IEEE, 2012. doi: 10.1109/IROS.2012.6386109. URL https://doi.org/10.1109/IROS.2012.6386109. 2, 7
+
+[20] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. doi: 10.1109/TNN.2008.2005605. 2
+
+[21] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Velickovic. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. CoRR, abs/2104.13478, 2021. URL https://arxiv.org/abs/2104.13478. 2
+
+[22] Yunhao Tang and Shipra Agrawal. Discretizing continuous action space for on-policy optimization. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 5981-5988. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6059. 3, 4, 7
+
+[23] Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Mohammadamin Barekatain, Simon Schmitt, and David Silver. Learning and planning in complex action spaces. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pages 4476-4486. PMLR, 2021. URL http://proceedings.mlr.press/v139/hubert21a.html.3, 6
+
+[24] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020. 3
+
+[25] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. CoRR, abs/1812.08434, 2018. URL http://arxiv.org/abs/1812.08434.3
+
+[26] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. CoRR, abs/1706.02216, 2017. URL http://arxiv.org/abs/1706.02216.3
+
+[27] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. CoRR, abs/1806.01973, 2018. URL http://arxiv.org/abs/1806.01973.3
+
+[28] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347. 4
+
+[29] Ilya Kostrikov. Pytorch implementations of reinforcement learning algorithms. https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail, 2018. 4
+
+[30] Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2013. URL https://arxiv.org/abs/1312.6114. 5
+
+[31] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax, 2016. URL https://arxiv.org/abs/1611.01144.6
+
+[32] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016. 6, 7
+
+## A Appendix
+
+### A.1 Selected frames for Swimmer and HumanoidStandup tasks
+
+
+Figure 7: Selected frames of two agents in Swimmer
+
+As seen in Figure 7, CNAP could fold itself slightly faster than PPO Baseline in this episode and swam more quickly.
+
+
+Figure 8: Selected frames of two agents in HumanoidStandup
+
+Then we noticed that although in HumanoidStandup task, the quantitative performances between PPO Baseline and CNAP were similar, Figure 8 revealed some different results. Both agents did not manage to stand up, explaining why the episodic rewards were similar numerically. However, the PPO Baseline agent lost balance and fell back to the ground while the CNAP agent remained sitting, trying to get up. Therefore, the CNAP qualitatively performed better in this example.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5ed4ec711c9f54941431a7cc723b2df4dbd8d2cb
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/60avttW0Mv/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,289 @@
+§ CONTINUOUS NEURAL ALGORITHMIC PLANNERS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Neural algorithmic reasoning studies the problem of learning classical algorithms with neural networks, especially with a focus on graph architectures. A recent proposal, XLVIN, reaps the benefits of using a graph neural network that simulates the value iteration algorithm in deep reinforcement learning agents. It allows model-free planning without access to privileged information about the environment, which is usually unavailable. However, XLVIN only supports discrete action spaces, and is hence not trivially applicable to most tasks of real-world interest. We expand XLVIN to continuous action spaces by discretization, and evaluate several selective expansion policies to deal with the large planning graphs. Our proposal, CNAP, demonstrates how neural algorithmic reasoning can make a measurable impact in higher-dimensional continuous control settings, such as MuJoCo, bringing gains in low-data settings and outperforming model-free baselines.
+
+§ 14 1 INTRODUCTION
+
+Neural networks are capable of learning directly from high-dimensional unstructured input data, tackling the input constraints that limit classical algorithms from solving more complex problems. However, neural networks often require large amounts of data to train and suffer from poor generalization and interpretability. On the other hand, algorithms intrinsically generalize and provide mathematical provability with performance guarantees. The complementary relationship motivates the topic of neural algorithmic reasoning to study the problem of learning classical algorithms with neural networks [1].
+
+Recent works focus on utilizing Graph Neural Networks (GNNs) [2-4] for algorithmic reasoning tasks due to the close algorithmic alignment that was proven to bring better sample efficiency and generalization ability $\left\lbrack {5,6}\right\rbrack$ . Besides shortest-path and spanning-tree algorithms, there have been a number of successful applications by aligning GNNs with classical algorithms, covering a range of problems such as bipartite matching [7], min-cut problem [8], and Travelling Salesman Problem [9].
+
+We look at the application of using a GNN that simulates the value iteration algorithm [10] in deep reinforcement learning agents. Value iteration [11] is a dynamic programming algorithm that guarantees to solve a reinforcement learning problem but is traditionally inhibited by its requirement of tabulated inputs. Earlier works [12-16] introduced value iteration as an inductive bias to facilitate the agents to perform implicit planning, without the need of explicitly invoking a planning algorithm, but were found to suffer from an algorithmic bottleneck [17]. Conversely, eXecuted Latent Value Iteration Net (XLVIN) [17] was proposed to leverage a value-iteration-behaving GNN [10] by adopting the neural algorithmic framework [1]. XLVIN is able to learn under a low-data regime, tackling the algorithmic bottleneck suffered by other implicit planners.
+
+One particular difficulty of implicit planners is handling a continuous action space. XLVIN uses a transition model to build a planning graph, over which the pre-trained GNN can execute value iteration in a latent space. So far, it only applies to environments with small and discrete action spaces. The limitation is that the construction of the planning graph requires an enumeration of all possible actions - starting from the current state and expanding for a number of hops equal to the planning horizon. The graph size quickly explodes as the dimensionality of the action space increases. Moreover, a continuous action space results in an infinite pool of action choices, making the construction of a planning graph infeasible.
+
+Nevertheless, continuous control is of significant importance, as most simulation or robotics control tasks [18] have continuous action spaces by design. High complexity also naturally arises as the problem moves towards more powerful real-world domains. To extend such an agent powered by neural algorithmic reasoning to complex continuous control problems, we propose Continuous Neural Algorithmic Planner (CNAP). It generalizes XLVIN to continuous action spaces by discretizing them through binning. Moreover, CNAP handles the large planning graph by following a sampling policy that carefully selects actions during the neighbor expansion stage. Choosing which actions to sample is critical as the graph built determines where the GNN would simulate value iteration computation, and ultimately influences the planning performance.
+
+In addition, the discreteness of the graph neural network simulating the value iteration update rule contrasts with the continuous action space, corresponding to continuous edges between states. CNAP also presents a novel setup for neural algorithmic reasoning, where the downstream task does not fully align with the algorithm studied. This opens a new path for the direction, going beyond the current standard of precise application of learned classical graph algorithms.
+
+We confirm the feasibility of CNAP on a continuous relaxation of a classical low-dimensional control task, where we can still fully expand all of the binned actions after discretization. Then, we apply CNAP to general MuJoCo [19] environments with complex continuous dynamics, where expanding the planning graph by taking all actions is impossible. By expanding the application scope from simple discrete control to complex continuous control, we show that such an intelligent agent with algorithmic reasoning power can be applied to tasks with more real-world interests.
+
+§ 2 BACKGROUND
+
+§ 2.1 MARKOV DECISION PROCESS (MDP)
+
+A reinforcement learning problem can be formally described using the MDP framework. At each time step $t \in \{ 0,1,\ldots ,T\}$ , the agent performs an action ${a}_{t} \in \mathcal{A}$ given the current state ${s}_{t} \in \mathcal{S}$ . This spawns a transition into a new state ${s}_{t + 1} \in \mathcal{S}$ according to the transition probability $p\left( {{s}_{t + 1} \mid {s}_{t},{a}_{t}}\right)$ , and produces a reward ${r}_{t} = r\left( {{s}_{t},{a}_{t}}\right)$ . A policy $\pi \left( {{a}_{t} \mid {s}_{t}}\right)$ guides an agent by specifying the probability of choosing an action ${a}_{t}$ given a state ${s}_{t}$ . The trajectory $\tau$ is the sequence of actions and states the agents took $\left( {{s}_{0},{a}_{0},\ldots ,{s}_{T},{a}_{T}}\right)$ . We define the infinite horizon discounted return as $R\left( \tau \right) = \mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t}$ , where $\gamma \in \left\lbrack {0,1}\right\rbrack$ is the discount factor. The goal of an agent is to maximize the overall return by finding the optimal policy ${\pi }^{ * } = {\operatorname{argmax}}_{\pi }{\mathbb{E}}_{\tau \sim \pi }\left\lbrack {R\left( \tau \right) }\right\rbrack$ . We can measure the desirability of a state $s$ using the state-value function ${V}^{ * }\left( s\right) = {\mathbb{E}}_{\tau \sim {\pi }^{ * }}\left\lbrack {R\left( \tau \right) \mid {s}_{t} = s}\right\rbrack$ .
+
+§ 2.2 VALUE ITERATION
+
+Value iteration is a dynamic programming algorithm that computes the optimal policy and its value function given a tabulated MDP that perfectly describes the environment. It randomly initializes ${V}_{0}^{ * }\left( s\right)$ and iteratively updates the value function of each state $s$ using the Bellman optimality equation [11]:
+
+$$
+{V}_{i + 1}^{ * }\left( s\right) = \mathop{\max }\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s,a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s,a}\right) {V}_{i}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{1}
+$$
+
+and we can extract the optimal policy using:
+
+$$
+{\pi }^{ * }\left( s\right) = \mathop{\operatorname{argmax}}\limits_{{a \in \mathcal{A}}}\left\{ {r\left( {s,a}\right) + \gamma \mathop{\sum }\limits_{{{s}^{\prime } \in \mathcal{S}}}p\left( {{s}^{\prime } \mid s,a}\right) {V}^{ * }\left( {s}^{\prime }\right) }\right\} \tag{2}
+$$
+
+§ 2.3 MESSAGE-PASSING GNN
+
+Graph Neural Networks (GNNs) generalize traditional deep learning techniques onto graph-structured data [20][21]. A message-passing GNN [3] iteratively updates its node feature ${\overrightarrow{h}}_{s}$ by aggregating messages from its neighboring nodes. At each timestep $t$ , a message can be computed between each connected pair of nodes via a message function $M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right)$ , where ${\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}$ is the edge feature. A node receives messages from all its connected neighbors $\mathcal{N}\left( s\right)$ and aggregates them via a permutation-invariant operator $\oplus$ that produces the same output regardless of the spatial permutation of the inputs. The aggregated message ${\overrightarrow{m}}_{s}^{t}$ of a node $s$ can be formulated as:
+
+$$
+{\overrightarrow{m}}_{s}^{t} = {\bigoplus }_{{s}^{\prime } \in \mathcal{N}\left( s\right) }M\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{h}}_{{s}^{\prime }}^{t},{\overrightarrow{e}}_{{s}^{\prime } \rightarrow s}}\right) \tag{3}
+$$
+
+The node feature ${\overrightarrow{h}}_{s}^{t}$ is then transformed via an update function $U$ :
+
+$$
+{\overrightarrow{h}}_{s}^{t + 1} = U\left( {{\overrightarrow{h}}_{s}^{t},{\overrightarrow{m}}_{s}^{t}}\right) \tag{4}
+$$
+
+§ 2.4 NEURAL ALGORITHMIC REASONING
+
+A dynamic programming (DP) algorithm breaks down the problem into smaller sub-problems and recursively computes their optimal solutions. A DP algorithm has the general form:
+
+$$
+\text{ Answer }\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack = \text{ DP-Update }\left( {\{ \text{ Answer }\left\lbrack {k - 1}\right\rbrack \left\lbrack j\right\rbrack \} ,j = 1\ldots n}\right) \tag{5}
+$$
+
+The alignment between GNNs and DP can be seen by mapping the node representations ${\overrightarrow{h}}_{s}$ to Answer $\left\lbrack k\right\rbrack \left\lbrack i\right\rbrack$ and the aggregation step of the GNN to DP-Update.
+
+An algorithmic alignment framework was proposed by [5], where they proved that GNNs could simulate dynamic programming algorithms efficiently with good sample complexity. Furthermore, [6] showed that imitating the individual steps and intermediate outputs of graph algorithms using GNNs can generalize well into out-of-distribution data.
+
+§ 3 RELATED WORK
+
+§ 3.1 CONTINUOUS ACTION SPACE
+
+A common technique for dealing with continuous control problems is to discretize the action space, converting them into discrete control problems. However, discretization leads to an explosion in the size of the action space. [22] proposed a policy with a factorized distribution across action dimensions and showed it to be effective on high-dimensional complex tasks with on-policy optimization algorithms. Moreover, we can sample a subset of actions during node expansion when constructing a planning graph. Sampled MuZero [23] extended MuZero [24] with a sample-based policy based on parameter reuse for policy iteration algorithms. In contrast, our work constructs a graph for a neural algorithmic reasoner to execute the value iteration algorithm, where the sampled actions directly participate in the Bellman optimality equation (1).
+
+§ 3.2 LARGE-SCALE GRAPHS
+
+Sampling modules [25] are introduced into GNN architectures to deal with large-scale graphs that result from the neighbor explosion caused by stacking multiple layers. The unrolling process to construct a planning graph requires node-level sampling. The earlier GraphSAGE [26] introduces a fixed-size node expansion procedure into GCN [2]. This is followed by PinSage [27], which uses a random-walk-based GCN to perform importance-based sampling. However, our work looks at sampling in an implicit planning context, where the importance of each node for sampling is harder to assess due to the lack of an exact description of the environment dynamics. Furthermore, sampling in a multi-dimensional action space also requires more careful decision-making.
+
+§ 4 ARCHITECTURE
+
+Our architecture uses XLVIN as a starting point, which we introduce first. This is followed by a discussion of the challenges that arise from extending neural algorithmic implicit planners to the continuous action space and the approaches we proposed to address them.
+
+§ 4.1 XLVIN MODULES
+
+Given the observation space $\mathbf{S}$ and the action space $\mathcal{A}$ , we let the dimension of state embeddings in the latent space be $k$ . The XLVIN architecture can be broken down into four modules:
+
+
+Figure 1: XLVIN modules
+
+Encoder $\left( {z : S \rightarrow {\mathbb{R}}^{k}}\right)$ : A 3-layer MLP which encodes the raw observation from the environment $s \in \mathbf{S}$ , to a state embedding ${\overrightarrow{h}}_{s} = z\left( s\right)$ in the latent space.
+
+Transition $\left( {T : {\mathbb{R}}^{k} \times \mathcal{A} \rightarrow {\mathbb{R}}^{k}}\right)$ : A 3-layer MLP with layer norm applied before the last layer that takes two inputs: the state embedding of an observation $z\left( s\right) \in {\mathbb{R}}^{k}$ , and an action $a \in \mathcal{A}$ . It predicts the next state embedding $z\left( {s}^{\prime }\right) \in {\mathbb{R}}^{k}$ , where ${s}^{\prime }$ is the next state transitioned into when the agent performs action $a$ in the current state $s$ .
+
+Executor $\left( {X : {\mathbb{R}}^{k} \times {\mathbb{R}}^{\left| \mathcal{A}\right| \times k} \rightarrow {\mathbb{R}}^{k}}\right)$ : A message-passing GNN pre-trained to simulate each individual step of the value iteration algorithm following the set-up in [10]. Given the current state embedding ${\overrightarrow{h}}_{s}$ , a graph is constructed by enumerating all possible actions $a \in \mathcal{A}$ as edges to expand, and then using the Transition module to predict the next state embeddings as neighbors $\mathcal{N}\left( {\overrightarrow{h}}_{s}\right)$ . Finally, the Executor output is an updated state embedding ${\overrightarrow{\mathcal{X}}}_{s} = X\left( {{\overrightarrow{h}}_{s},\mathcal{N}\left( {\overrightarrow{h}}_{s}\right) }\right)$ .
+
+Policy and Value $\left( {P : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow {\left\lbrack 0,1\right\rbrack }^{\left| \mathcal{A}\right| }\text{ and }V : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow \mathbb{R}}\right)$ : The Policy module is a linear layer that takes the outputs from the Encoder and Executor, i.e. the state embedding ${\overrightarrow{h}}_{s}$ and the updated state embedding ${\overrightarrow{\mathcal{X}}}_{s}$ , and produces a categorical distribution corresponding to the estimated policy, $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ . The Tail module is also a linear layer that takes the same inputs and produces the estimated state-value function, $V\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ .
+
+The training procedure follows the XLVIN paper [17], and Proximal Policy Optimization (PPO) [28] is used to train the model, apart from the Executor. We use the PPO implementation and hyperparameters by [29]. The Executor is pre-trained as shown in [10] and directly plugged in.
+
+§ 4.2 DISCRETIZATION OF THE CONTINUOUS ACTION SPACE
+
+Assume the continuous action space $\mathcal{A}$ has $D$ dimensions. Given the number of action bins $N$ , $\mathcal{A}$ is discretized into evenly spaced discrete action bins. That is, in each dimension $i \in \{ 1,\ldots ,D\}$ , ${\mathcal{A}}_{i} = \left\lbrack {{v}_{1},{v}_{2}}\right\rbrack$ is converted to $\left\{ {{a}_{i}^{1},{a}_{i}^{2},\ldots ,{a}_{i}^{N}}\right\}$ where ${a}_{i}^{k} = \left\lbrack {{v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}\left( {k - 1}\right) ,{v}_{1} + \frac{{v}_{2} - {v}_{1}}{N}k}\right)$ , and the upper bound is taken inclusively when $k = N$ . For each action bin ${a}_{i}^{k}$ , the median (midpoint) value of the bin is chosen as the action to take.
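+
+The binning step itself is straightforward; the following NumPy sketch (with illustrative names) discretizes each dimension into $N$ evenly spaced bins and maps a vector of per-dimension bin indices back to a continuous action via the bin midpoints.
+
+```python
+import numpy as np
+
+def bin_midpoints(low, high, N):
+    """Split each continuous dimension [low_i, high_i] into N evenly spaced
+    bins and return the midpoint of every bin.
+
+    low, high: arrays of shape (D,). Returns an array of shape (D, N).
+    """
+    low, high = np.asarray(low, dtype=float), np.asarray(high, dtype=float)
+    edges = np.linspace(low, high, N + 1, axis=1)   # (D, N + 1) bin edges
+    return 0.5 * (edges[:, :-1] + edges[:, 1:])     # (D, N) bin midpoints
+
+def bins_to_action(bin_idx, midpoints):
+    """Map per-dimension bin indices (D,) to a continuous action vector (D,)."""
+    return midpoints[np.arange(len(bin_idx)), bin_idx]
+```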
+
+Challenge: The discretization of a multi-dimensional continuous action space leads to a combinatorial explosion in action space size. The explosion results in two bottlenecks in the architecture: (i) the Policy module that produces the action probabilities and (ii) the construction of the GNN graph, which requires an enumeration of all possible actions. Below, we address the two bottlenecks respectively.
+
+§ 4.3 FACTORIZED JOINT POLICY
+
+Assume each action $\overrightarrow{a} \in \mathcal{A}$ has $D$ dimensions, and each dimension has $N$ discrete action bins. A naive policy ${\pi }^{ * } = p\left( {\overrightarrow{a} \mid s}\right)$ produces a categorical distribution with ${N}^{D}$ possible actions. To tackle this challenge, we follow a factorized joint policy proposed in [22]:
+
+$$
+{\pi }^{ * }\left( {\overrightarrow{a} \mid s}\right) = \mathop{\prod }\limits_{{i = 1}}^{D}{\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right) \tag{6}
+$$
+
+
+Figure 2: (a) Factorized joint policy on an action space with dimension of two. (b) Sampling methods when constructing the graph in Executor.
+
+As illustrated in Figure 2(a), a factorized joint policy $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ is a linear layer with an output dimension of $N \cdot D$ . It approximates $D$ policies simultaneously. Each policy ${\pi }_{i}^{ * }\left( {{a}_{i} \mid s}\right)$ indicates the probability of choosing an action ${a}_{i} \in {\mathcal{A}}_{i}$ in the ${i}^{\text{th}}$ dimension, where $\left| {\mathcal{A}}_{i}\right| = N$ . This turns the exponential growth in the number of action bins caused by discretization into a linear one. Note that there is a trade-off in the choice of $N$ : a larger number of action bins retains more information from the continuous action space, but it also implies larger graphs and hence higher computation costs. We provide an ablation study on the impact of this choice in the evaluation.
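+
+A minimal PyTorch sketch of such a factorized policy head follows; the module name, the decision to concatenate ${\overrightarrow{h}}_{s}$ and ${\overrightarrow{\mathcal{X}}}_{s}$ , and the dimensions are illustrative assumptions rather than the paper's exact implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class FactorizedPolicy(nn.Module):
+    """Factorized joint policy head (Equation 6): one linear layer produces
+    N logits per action dimension instead of N**D joint logits."""
+    def __init__(self, embed_dim, num_dims, num_bins):
+        super().__init__()
+        self.num_dims, self.num_bins = num_dims, num_bins
+        self.head = nn.Linear(2 * embed_dim, num_dims * num_bins)
+
+    def forward(self, h_s, x_s):
+        logits = self.head(torch.cat([h_s, x_s], dim=-1))
+        logits = logits.view(-1, self.num_dims, self.num_bins)
+        dist = torch.distributions.Categorical(logits=logits)
+        a = dist.sample()                    # (batch, D) bin indices
+        log_prob = dist.log_prob(a).sum(-1)  # product over dims -> sum of log-probs
+        return a, log_prob
+```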
+
+§ 4.4 NEIGHBOR SAMPLING METHODS
+
+As shown in Figure 2(b), the second bottleneck occurs when constructing a graph to execute the pre-trained GNN. It treats each state as a node, then enumerates all possible actions ${\overrightarrow{a}}_{i} \in \mathcal{A}$ to connect neighbors by approximating $\overrightarrow{h}\left( s\right) \overset{{\overrightarrow{a}}_{i}}{ \rightarrow }\overrightarrow{h}\left( {s}_{i}^{\prime }\right)$ . Therefore, each node has degree $\left| \mathcal{A}\right|$ , and the graph size grows even faster as it expands deeper. To tackle this challenge, instead of using all possible actions, we propose a neighbor sampling method that chooses a subset of actions to expand. The important question is which actions to select. The pre-trained GNN uses the constructed graph to simulate value iteration and predict the state-value function. Hence, it is critical that our sampling includes the actions that yield a good approximation of the state-value function.
+
+Below, we propose four possible methods to sample $K$ actions from $\mathcal{A}$ , where $K \ll \left| \mathcal{A}\right|$ is a fixed number, under the context of value-iteration-based planning.
+
+§ 4.4.1 GAUSSIAN METHODS
+
+Gaussian distribution is a common baseline policy distribution for continuous action spaces, and it is straightforward to interpret. Furthermore, it discourages extreme actions while encouraging neutral ones with some level of continuity, which suits the requirement of many planning problems. We propose two variants of sampling policy based on Gaussian distribution.
+
+(1) Manual-Gaussian: A Gaussian distribution is used to randomly sample action values in each dimension ${a}_{i} \in {\mathcal{A}}_{i}$ , which are stacked together into a final action vector $\overrightarrow{a} = {\left\lbrack {a}_{0},\ldots ,{a}_{D - 1}\right\rbrack }^{T} \in \mathcal{A}$ . We repeat this $K$ times to sample a subset of $K$ action vectors. We set the mean $\mu = N/2$ and standard deviation $\sigma = N/4$ , where $N$ is the number of discrete action bins. These two parameters are chosen to spread a reasonable distribution over $\left\lbrack {0,N - 1}\right\rbrack$ . Outliers and non-integers are rounded to the nearest whole number within the range $\left\lbrack {0,N - 1}\right\rbrack$ (see the sketch at the end of this subsection).
+
+(2) Learned-Gaussian: The two parameters manually chosen in the previous method constrain the distribution so that the median action in each dimension is always the most likely. Here, instead, two fully-connected linear layers are used to separately estimate the mean $\mu$ and standard deviation $\sigma$ . They take the state embedding ${\overrightarrow{h}}_{s}$ from the Encoder and output parameter estimations for each dimension. We use the reparameterization trick [30] to make the sampling differentiable.
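+
+A minimal PyTorch sketch of the Manual-Gaussian variant is given below (the names are illustrative); the Learned-Gaussian variant would replace the fixed $\mu$ and $\sigma$ with the outputs of two linear layers applied to ${\overrightarrow{h}}_{s}$ and draw samples with the reparameterization trick.
+
+```python
+import torch
+
+def manual_gaussian_sample(K, D, N):
+    """Sample K action vectors of bin indices with a fixed Gaussian per
+    dimension: mean N/2 and std N/4, rounded and clipped to {0, ..., N-1}."""
+    raw = torch.normal(mean=N / 2.0, std=N / 4.0, size=(K, D))
+    return raw.round().clamp(0, N - 1).long()
+```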
+
+§ 4.4.2 PARAMETER REUSE
+
+Gaussian methods still restrain a fixed distribution on the sampling distribution, which may not necessarily fit. Previous work [23] studied a similar action sampling problem. They reasoned that since the actions selected by the policy are expected to be more valuable, we can directly use the policy for sampling.
+
+(3) Reuse-Policy: We can reuse Policy layer $P\left( {{\overrightarrow{h}}_{s},{\overrightarrow{\mathcal{X}}}_{s}}\right)$ to sample the actions when we expand the graph in Executor. This is equivalent to using the policy distribution ${\pi }^{ * } = p\left( {\overrightarrow{a} \mid s}\right)$ as the neighbor sampling distribution. However, the second input ${\overrightarrow{\mathcal{X}}}_{s}$ for Policy layer comes from Executor, which is not available at the time of constructing the graph. It is filled up by setting ${\overrightarrow{\mathcal{X}}}_{s} = \overrightarrow{0}$ as placeholders.
+
+§ 4.4.3 LEARN TO EXPAND
+
+Lastly, we can also use a separate layer to learn the neighbor sampling distribution.
+
+(4) Learned-Sampling: This uses a fully-connected linear layer that consumes ${\overrightarrow{h}}_{s}$ and produces an output of dimension $N \cdot D$ . It is expected to learn the optimal neighbor sampling distribution in a factorized joint manner, as in Figure 2(a). The outputs are logits for $D$ categorical distributions, where we use Gumbel-Softmax [31] for differentiable sampling of actions in each dimension, together producing $\overrightarrow{a} = {\left\lbrack {a}_{1},\ldots ,{a}_{D}\right\rbrack }^{T}$ .
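+
+The sketch below illustrates such a Learned-Sampling head with PyTorch's built-in Gumbel-Softmax; the layer sizes and return values are assumptions for illustration, not the exact implementation.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class LearnedSampling(nn.Module):
+    """Linear layer producing N logits per action dimension; Gumbel-Softmax
+    gives approximately one-hot, differentiable bin choices per dimension."""
+    def __init__(self, embed_dim, num_dims, num_bins, tau=1.0):
+        super().__init__()
+        self.num_dims, self.num_bins, self.tau = num_dims, num_bins, tau
+        self.head = nn.Linear(embed_dim, num_dims * num_bins)
+
+    def forward(self, h_s):
+        logits = self.head(h_s).view(-1, self.num_dims, self.num_bins)
+        one_hot = F.gumbel_softmax(logits, tau=self.tau, hard=True)  # straight-through
+        bin_idx = one_hot.argmax(dim=-1)   # (batch, D) bin indices (indices carry no gradient)
+        return one_hot, bin_idx
+```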
+
+§ 5 RESULTS
+
+§ 5.1 CLASSIC CONTROL
+
+To evaluate the performance of CNAP agents, we first ran the experiments on a relatively simple MountainCarContinuous-v0 environment from OpenAI Gym Classic Control suite [32], where the action space was one-dimensional. The training of the agent used PPO under 20 rollouts with 5 training episodes each, so the training consumed 100 episodes in total.
+
+We compared two variants of CNAP agents: "CNAP-B" had its Executor pre-trained on a type of binary graph that aimed to simulate the bi-directional control of the car, and "CNAP-R" had its Executor pre-trained on random synthetic Erdős-Rényi graphs. In Table 1, we compared both CNAP agents against a "PPO Baseline" agent that consisted of only the Encoder and Policy/Tail modules. Both the CNAP agents outperformed the baseline agent for this environment, indicating the success of extending XLVIN onto continuous settings via binning.
+
+Table 1: Mean rewards for MountainCarContinuous-v0 using PPO Baseline and two variants of CNAP agents. All three agents ran on 10 action bins, and were trained on 100 episodes in total. Both CNAP agents executed one step of value iteration. The reward was averaged over 100 episodes and 10 seeds.
+
+| Model | MountainCarContinuous-v0 |
+| --- | --- |
+| PPO Baseline | $-{4.96} \pm {1.24}$ |
+| CNAP-B | ${55.73} \pm {45.10}$ |
+| CNAP-R | $\mathbf{{63.41}} \pm {37.89}$ |
+
+§ 5.1.1 EFFECT OF GNN WIDTH AND DEPTH
+
+In Tables 2 and 3, we varied two hyperparameters of the CNAP agents. In Table 2, we varied the number of action bins into which the continuous action space was discretized. In Table 3, we varied the number of GNN steps, corresponding to the number of steps of the value iteration algorithm we simulated. The two hyperparameters controlled the width and depth of the constructed GNN graphs, respectively. The two agents performed best with 10 action bins and one GNN step. We note that the number of training samples might not be sufficient for larger graph width and depth. Also, a deeper graph required repeatedly applying the Transition module, where imprecision might accumulate, leading to inappropriate state embeddings and hence less desirable results.
+
+Table 2: Mean rewards for MountainCarContinuous-v0 using Baseline and CNAP agents by varying number of action bins, i.e., width of graph. The results were averaged over 100 episodes and 10 seeds.
+
+| Model | Action Bins | MountainCar-Continuous |
+| --- | --- | --- |
+| PPO | 5 | $-{2.16} \pm {1.25}$ |
+| PPO | 10 | $-{4.96} \pm {1.24}$ |
+| PPO | 15 | $-{3.95} \pm {0.77}$ |
+| CNAP-B | 5 | ${29.46} \pm {57.57}$ |
+| CNAP-B | 10 | ${55.73} \pm {45.10}$ |
+| CNAP-B | 15 | ${22.79} \pm {41.24}$ |
+| CNAP-R | 5 | ${20.32} \pm {53.13}$ |
+| CNAP-R | 10 | $\mathbf{{63.41}} \pm {37.89}$ |
+| CNAP-R | 15 | ${26.21} \pm {46.44}$ |
+
+Table 3: Mean rewards for MountainCarContinuous-v0 using CNAP agents by varying number of GNN steps, i.e., depth of graph. The results were averaged over 100 episodes and 10 seeds.
+
+| Model | GNN Steps | MountainCar-Continuous |
+| --- | --- | --- |
+| CNAP-B | 1 | ${55.73} \pm {45.10}$ |
+| CNAP-B | 2 | ${46.93} \pm {44.13}$ |
+| CNAP-B | 3 | ${40.58} \pm {48.20}$ |
+| CNAP-R | 1 | $\mathbf{{63.41}} \pm {37.89}$ |
+| CNAP-R | 2 | ${34.49} \pm {47.77}$ |
+| CNAP-R | 3 | ${43.61} \pm {46.16}$ |
+
+§ 5.2 MUJOCO
+
+We then ran experiments on more complex environments from OpenAI Gym's MuJoCo suite [19, 32] to evaluate how CNAP handles the increase in scale. Unlike the Classic Control suite, the MuJoCo environments have higher dimensions in both their observation and action spaces. We started by evaluating CNAP agents in two environments with relatively low action dimensions, and then we moved on to two more environments with much higher dimensions. The discretization of the continuous action space also implied a combinatorial explosion in the action space, resulting in a large graph constructed for the GNN. We used the proposed factorized joint policy from Section 4.3 and the neighbor sampling methods from Section 4.4 to address these limitations.
+
+§ 5.2.1 ON LOW-DIMENSIONAL ENVIRONMENTS
+
+In Figure 3, we experimented with the four sampling methods discussed in Section 4.4 on Swimmer-v2 (action space dimension of 2) and HalfCheetah-v2 (action space dimension of 6). We chose to take the number of action bins $N = {11}$ for all the experiments following [22], where the best performance on MuJoCo environments was obtained when $7 \leq N \leq {15}$ . In all cases, CNAP outperformed the baseline in the final performances. Moreover, Manual-Gaussian and Reuse-Policy were the most promising sampling strategies as they also demonstrated faster learning, hence better sample efficiency. This pointed to the benefits of parameter reuse and the synergistic improvement between learning to act and learning to sample relevant neighbors, as well as the power of a well-chosen manual distribution. We also note that choosing a manual distribution can become non-trivial when the task becomes more complex, especially if choosing the average values for each dimension is not the most desirable. Our work acts as a proof-of-concept of sampling strategies and leaves the choice of parameters for future studies.
+
+§ 5.2.2 ON HIGH-DIMENSIONAL ENVIRONMENTS
+
+We then further evaluated the scalability of CNAP agents in more complex environments where the dimensionality of the action space was significantly larger, while retaining a relatively low-data regime (${10}^{6}$ actor steps). In Figure 4, we compared all the previously proposed CNAP methods on two environments with highly complex dynamics, both having an action space dimension of 17. In the Humanoid task, all variants of CNAPs outperformed PPO, acquiring knowledge significantly faster.
+
+
+Figure 3: Average rewards over time for CNAP (red) and PPO baseline (blue), in Swimmer (action dimension=2) and HalfCheetah (action dimension=6), using different sampling methods. In Swimmer, CNAP with each sampling method was also compared with the original version that expands all actions (green). In (a), the actions were sampled using a Gaussian distribution with mean $= N/2$ and std $= N/4$ , where $N$ was the number of action bins used to discretize the continuous action space. In (b), two linear layers were used to learn the mean and std, respectively. In (c), the Policy layer was reused to sample the actions to expand. In (d), a separate linear layer was used to learn the optimal neighbor sampling distribution. The mean rewards were averaged over 100 episodes, and the learning curve was aggregated from 5 seeds.
+
+Particularly, we found that nonparametric approaches to sampling the graph in CNAP (e.g. manual Gaussian and policy reuse) acquired this knowledge significantly faster than any other CNAP approach tested. This supplements our previous results well, and further testifies to the improved learning stability when the sampling process does not contain additional parameters to optimise.
+
+We also evaluated all of the methods considered against PPO on the HumanoidStandup task, with all methods learning to sit up and no apparent distinction in the rate of acquisition. However, we provide some qualitative evidence that the solution found by CNAP appears to be more robust in the way this knowledge is acquired; see Appendix A.
+
+
+Figure 4: Average rewards over time for CNAP (red) and PPO baseline (blue), in Humanoid (action dimension=17) and HumanoidStandup (action dimension=17), using Manual-Gaussian and Reuse-Policy sampling methods.
+
+268
+
+§ 5.2.3 QUALITATIVE INTERPRETATION
+
+We captured video recordings of the interactions between the agents and the environments to provide a qualitative interpretation of the results above. We selected frames at equal time intervals from one episode after the last training iteration of CNAP (Manual-Gaussian) and PPO Baseline, respectively.
+
+
+Figure 5: Selected frames of two agents in HalfCheetah
+
+In Figure 5's HalfCheetah task, we can see that the agent trained with the PPO Baseline fell over quickly and never managed to turn back over. In contrast, CNAP's agent balanced well and kept running forward. This observation supports the higher average episodic rewards achieved by the CNAP agents compared to the PPO Baseline in Figure 3.
+
+
+Figure 6: Selected frames of two agents in Humanoid
+
+Similarly, in Figure 6's Humanoid task, PPO Baseline's humanoid stayed stationary and lost balance quickly, while CNAP's humanoid could walk forward in small steps. This observation aligned with the results in Figure 4 where the gain from CNAP was significant.
+
+The selected frames for the Swimmer and HumanoidStandup tasks are attached in Appendix A. We note that, although the CNAP agent did not differ quantitatively from the PPO Baseline in the HumanoidStandup task (Figure 4), for the trajectories we observed it successfully remained in a sitting position, while the PPO Baseline fell quickly.
+
+§ 6 CONCLUSION
+
+We present CNAP, a method that generalizes implicit planners to continuous action spaces for the first time. In particular, we study implicit planners based on neural algorithmic reasoners and the unstudied implications of not having precise alignment between the learned graph algorithm and the setup where the executor is applied. To deal with the challenges in building the planning tree, as a result of the continuous, high-dimensional nature of the action space, we combine previous advancements in XLVIN with binning, as well as parametric and non-parametric neighbor sampling strategies. We evaluate the agent against its model-free variant, observing its efficiency in low-data settings and consistently better performance than the baseline. Moreover, this paves the way for extending other implicit planners to continuous action spaces and studying neural algorithmic reasoning beyond strict applications of graph algorithms.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..14828042c5abbe965eda94a514718abce5043bce
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,324 @@
+# Transfer Learning using Spectral Convolutional Autoencoders on Semi-Regular Surface Meshes
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+The underlying dynamics and patterns of 3D surface meshes deforming over time can be discovered by unsupervised learning, especially autoencoders, which calculate low-dimensional embeddings of the surfaces. To study the deformation patterns of unseen shapes by transfer learning, we want to train an autoencoder that can analyze new surface meshes without training a new network. Here, most state-of-the-art autoencoders cannot handle meshes of different connectivity and therefore have limited to no transfer learning capacities to new meshes. Also, reconstruction errors strongly increase in comparison to the errors for the training shapes. To address this, we propose a novel spectral CoSMA (Convolutional SemiRegular Mesh Autoencoder) network. This patch-based approach is combined with a surface-aware training. It reconstructs surfaces not presented during training and generalizes the deformation behavior of the surfaces' patches. The novel approach reconstructs unseen meshes from different datasets in superior quality compared to state-of-the-art autoencoders that have been trained on these shapes. Our transfer learning reconstruction errors are ${40}\%$ lower than those from models learned directly on the data. Furthermore, baseline autoencoders detect deformation patterns of unseen mesh sequences only for the whole shape. In contrast, due to the employed regional patches and stable reconstruction quality, we can localize where on the surfaces these deformation patterns manifest.
+
+## 21 1 Introduction
+
+We study the deformation of surfaces in 3D, which discretize human bodies, animals, or workpieces from computer-aided engineering. Using autoencoders as a method for unsupervised learning, we analyze and detect patterns in the deformation behavior by calculating low-dimensional features. Since surface deformation is locally described by the same physical rules, we want to study the deformation patterns of unseen shapes by transfer learning. That means an autoencoder should be able to analyze new surface meshes without being trained again.
+
+While two-dimensional surfaces embedded in ${\mathbb{R}}^{3}$ are locally homeomorphic to the two-dimensional space, they are of non-Euclidean nature. Their representation by surface meshes lacks the regularity of pixels describing images, which is so convenient for 2D CNNs [1]. This is why existing methods for unsupervised learning for irregularly meshed surface meshes depend on the mesh connectivity when defining pooling or convolutional operators. For this reason, a trained mesh autoencoder cannot be applied to a surface that is represented by a different mesh, although the local deformation behavior might be similar.
+
+The authors of [2] presented a mesh autoencoder for semi-regular meshes of different sizes. The semi-regular surface representations enforce some local mesh regularity and are made up of regularly meshed patches as illustrated in Figure 1, which allows the application of their patch-wise approach to meshes of different sizes. However, the reconstruction quality decreases by a factor of 4 when applying their mesh autoencoder to new meshes and shapes that have not been used during training. This limits the method's application for unseen shapes.
+
+
+
+Figure 1: Remeshing of the horse template mesh. In the semi-regular mesh, the boundaries of the regularly meshed patches are highlighted in gray.
+
+Additionally, baseline mesh autoencoders for deforming shapes do not provide an understanding or explanation about which surface areas lead to the patterns in the embedding space. The embeddings represent the entire shape. Nevertheless, when identifying and analyzing deformation patterns, it is of particular relevance where on the surfaces these patterns manifest.
+
+Our work remedies these gaps by adopting the patch-based framework for semi-regular meshes and choosing a spectral graph convolutional filter [3] projecting vertex features to the Laplacian eigenvector basis in combination with a surface-aware training. Since the spectral filters consider the entire patch, the network generalizes better in comparison to a spatial approach, whose filters consider smaller $n$ -ring neighborhoods. This improves the quality and smoothness of the reconstruction results when being applied to unknown meshes and the errors are ${40}\%$ lower than errors from models learned directly on the data. Although spectral graph neural network methods require fixed mesh connectivity, our mesh-independent approach is not limited by this constraint. This is because the filters are applied to the regular substructures of semi-regular mesh representations of the surfaces. Furthermore, our patch-based approach allows us to correlate patch-wise embeddings with the embedding of the entire shape. This way we localize and understand where on the surfaces the deformation patterns, which are visible in the low-dimensional representation, manifest.
+
+The research objectives can be summarized as a) the definition of a spectral convolutional autoencoder for semi-regular meshes (spectral CoSMA) and a surface-aware training loss, by this means b) improving the generalization capability, transfer learning and runtime of baseline mesh autoencoders, and c) localizing the deformation patterns visible in the low-dimensional embedding on the surfaces.
+
+Further on, in section 2, we discuss work related to learning features from meshed geometry. In section 3, we present relevant characteristics of surface meshes for CNNs and introduce the semi-regular remeshing, followed by the definition of our spectral CoSMA in section 4. Results for different datasets containing meshes with different connectivity are presented in section 5.
+
+## 2 Related Work
+
+### 2.1 Convolutional Networks for Surfaces
+
+Surfaces are generally represented either in form of point clouds or by a surface mesh, which is defined by faces connecting vertices to each other. We only consider the representation via meshes, because their faces describe the underlying surface $\left\lbrack {4,5}\right\rbrack$ . Surface meshes can be viewed as graphs, and hence graph-based convolutional methods are often applied to meshes.
+
+Generally, convolutional networks for graphs can be separated into spectral and spatial ones, of which $\left\lbrack {1,6,7}\right\rbrack$ give an overview. Spatial convolutional methods for graphs aggregate features based on a node's spatial relations, which allows generalization across different mesh connectivities [7, 8]. Spectral approaches, on the other hand, interpret information on the vertices as a signal propagating along the vertices. They exploit the connection between the graph Laplacian and the Fourier basis, and vertex features are projected onto the Laplacian eigenvector basis, where filters are applied [9]. Instead of explicitly computing Laplacian eigenvectors, the authors of [3] use truncated Chebyshev polynomials, and [10] uses only first-order Chebyshev polynomials. These spectral methods require a fixed graph connectivity; otherwise, the adjacency matrix and consequently the Laplacian eigenvector basis change.
+
+Furthermore, there are network architectures only for surface meshes, e.g. DiffusionNet [11] and HodgeNet [12], which are applied for classification, mesh segmentation, and shape correspondence. Nevertheless, these architectures cannot be implemented directly into autoencoders, because of missing mesh pooling operators.
+
+### 2.2 Neural Networks for Semi-Regular Surface Meshes
+
+Semi-regular triangular surface meshes, also known as meshes with subdivision connectivity, come with a regular local structure and a hierarchical multi-resolution structure. In section 3.2, we provide a more detailed definition. The Spatial CoSMA [2] and SubdivNet [13] take advantage of the local regularity of the patches by defining efficient mesh-independent pooling operators and using 2D convolution. By inputting the patches separately into the network, [2] can define an autoencoder pipeline that is independent of the mesh size. [13] apply self-parametrization using the MAPS algorithm [14] to remesh watertight manifold meshes without boundaries. [2] on the other hand, apply a remeshing algorithm that works for meshes with boundaries and coarser base meshes.
+
+### 2.3 Mesh Convolutional Autoencoders
+
+The first convolutional mesh autoencoder (CoMA) was introduced in [15]. The authors introduced mesh downsampling and mesh upsampling layers for pooling and unpooling, which are combined with spectral convolutional filters using truncated Chebyshev polynomials as in [3]. The Neural 3D Morphable Models (Neural3DMM) network presented in [4] improves those results using spiral convolutional layers. The authors of [16] apply the CoMA to different datasets and slightly improve the down- and upsampling layers. By manually choosing latent vertices for the embedding space, [17] define an autoencoder that allows interpolation in the latent space. All the above-mentioned mesh convolutional autoencoders work only for meshes of the same size and connectivity because the pooling and/or convolutional layers depend on the adjacency matrix. The authors of [2] showed that the latter methods are not able to learn data with greater global variations in comparison to their patch-based approach, which generalizes and reconstructs the deformed meshes with superior quality. Additionally, their architecture can be applied to unseen meshes of different sizes. The MeshCNN architecture [5] can be implemented as an encoder and decoder. Nevertheless, its pooling is feature-dependent and therefore the embeddings can be of different significance.
+
+## 3 Handling Surface Meshes by Neural Networks
+
+The irregularity of surface meshes gives rise to difficulties when handling them with a neural network. These are explained in this section, followed by the motivation and definition of semi-regular meshes.
+
+### 3.1 Irregularity of Surface Meshes
+
+CNNs in 2D [18, 19] apply the same local filters to local neighborhoods of selected pixels of the image. Because of the global grid structure of the image (defined by the x- and y-axes), the filters of constant shape can be shifted horizontally and vertically, and the local neighborhoods are of regular connectivity. CNNs work so efficiently for images because they are translation equivariant and therefore equivariant to the global symmetry of images [20].
+
+The intrinsic dimension of surface meshes is also 2 because they represent a two-dimensional surface. Nevertheless, surface meshes lack global regularities: they are not defined along a global grid, local neighborhoods can have any size and arrangement as long as they are locally Euclidean, and the distance between neighbors is not fixed.
+
+One cannot enforce a regular mesh discretization for every surface in ${\mathbb{R}}^{3}$ , which would lead to an underlying global grid [21]. This is why [2, 22] proposed to enforce a similar structure in the local neighborhoods by choosing a semi-regular representation of the surface. In this way, an efficient application of convolution on surface meshes becomes possible. Note that remeshing the polygonal mesh only changes the representation of the objects. The considered surface embedded in ${\mathbb{R}}^{3}$ is the same, but now represented by a different discrete approximation.
+
+
+
+Figure 2: Resolution of the regularly meshed patches inside the spectral CoSMA. The encoder pools the patches twice by undoing subdivision. In the decoder, the unpooling increases the resolution again by subdivision. The orange vertices are the vertices from the irregular base mesh. Red and purple vertices have been created during the ${1}^{\text{st }}$ and ${2}^{\text{nd }}$ refinement steps.
+
+### 3.2 Definition of Semi-Regular Meshes
+
+We consider semi-regular meshes in order to mitigate the problems caused by the irregularity of surface meshes while still allowing a flexible surface representation (see Figure 1). Following the definitions in [23], we call a surface mesh semi-regular if we can convert it to a low-resolution mesh by iteratively merging four triangular faces into one. Consequently, all vertices of the semi-regular mesh except for the ones remaining in the low-resolution mesh are regular (i.e. have six neighbors). Vice versa, the regular subdivision of a possibly irregular low-resolution mesh yields a semi-regular mesh. Such a regular subdivision can be achieved by inserting a vertex on each edge and splitting each original triangle face into 4 sub-triangles. $\left\lbrack {{13},{24}}\right\rbrack$ refer to this property as Loop subdivision connectivity of the semi-regular mesh. The subdivision connectivity makes semi-regular meshes particularly useful for multiresolution analysis and directly implies a suitable local pooling operator on semi-regular meshes (see section 4).
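+
+A minimal Python sketch of one such regular subdivision step follows (the function and variable names are illustrative); each refinement step multiplies the number of faces by 4, and every inserted midpoint vertex becomes a regular vertex with six neighbors in the interior.
+
+```python
+import numpy as np
+
+def subdivide(vertices, faces):
+    """One regular subdivision step: insert a midpoint vertex on every edge
+    and split each triangle into 4 sub-triangles (Loop subdivision
+    connectivity). The number of faces grows by a factor of 4."""
+    verts = [np.asarray(v, dtype=float) for v in vertices]
+    midpoint = {}                                  # edge (i, j) -> new vertex index
+
+    def mid(i, j):
+        key = (min(i, j), max(i, j))
+        if key not in midpoint:
+            midpoint[key] = len(verts)
+            verts.append(0.5 * (verts[i] + verts[j]))
+        return midpoint[key]
+
+    new_faces = []
+    for a, b, c in faces:
+        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
+        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
+    return np.array(verts), np.array(new_faces)
+```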
+
+### 3.3 Remeshing
+
+We apply the remeshing from [2], because other algorithms, e.g. Neural Subdivision [22] or MAPS [14], only work for closed surfaces without boundaries and fail for base meshes as coarse as ours. The algorithm iteratively subdivides a coarse approximation of the original irregular mesh (see Figure 1). The resulting semi-regular mesh is fitted to the original mesh using gradient descent on a loss function based on the chamfer distance. The refinement level ${rl}$ states the number of times each face of the coarse base mesh is iteratively subdivided. The number of faces in the final semi-regular mesh is ${n}_{F}^{\text{semireg }} = {4}^{rl} * {n}_{F}^{c}$ , with ${n}_{F}^{c}$ being the number of faces describing the coarse base mesh. We choose the refinement level ${rl} = 4$ , which leads to finer meshes compared to [2], who chose ${rl} = 3$ .
+
+After the remeshing, all vertices that are newly created during the subdivision have six neighbors. Therefore, the resulting mesh is semi-regular or has subdivision connectivity.
+
+## 4 Spectral CoSMA
+
+The network handles the regional patches separately, which allows us to handle meshes of different sizes. We describe how the graph convolution is combined with the padding, the surface-aware loss, and the pooling of the patches, and how we take advantage of the semi-regular meshing. The building blocks are then put together to define the spectral CoSMA (Spectral Convolutional Semi-Regular Mesh Autoencoder).
+
+### 4.1 Spectral Chebyshev Convolutional Filters
+
+We apply fast Chebyshev filters [3], as in [15], with the distinction that we are using them to perform spectral convolutions on the regional patches instead of the entire mesh. We justify this different convolution on the patches, compared to [2], by the intuition that spectral filters encode information of a whole patch and the general characteristics of its deformations, whereas in comparison spatial convolution considers just the local neighborhood around a vertex.
+
+We use the formulation of [3] for convolving over our regularly meshed patches. We perform spectral decomposition using spectral filters and apply convolutions directly in the frequency space.
+
+The spectral filters are approximated by truncated Chebyshev polynomials, which avoid explicitly computing the Laplacian eigenvectors and, by this means, reduce the computational complexity.
+
+The decomposition using spectral filters is dependent on the adjacency matrix, which restricts the transfer learning of spectral graph convolution to meshes of the same connectivity. Nevertheless, the adjacency matrix of the patches of our semi-regular meshes is always the same for one refinement level. This allows us to train the filters for all patches together and to apply them to unseen meshes.
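+
+As an illustration (not the authors' exact code), the sketch below applies a PyTorch Geometric Chebyshev convolution with $K = 6$ to a batch of patches that share one connectivity; a toy four-vertex patch stands in for the actual regular ${rl} = 4$ patches.
+
+```python
+import torch
+from torch_geometric.data import Batch, Data
+from torch_geometric.nn import ChebConv
+
+# Toy 4-vertex "patch"; because every patch shares the same connectivity,
+# one edge_index is reused for all of them.
+edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
+                           [1, 0, 2, 1, 3, 2]])
+patches = [Data(x=torch.randn(4, 3), edge_index=edge_index) for _ in range(8)]
+batch = Batch.from_data_list(patches)       # all patches processed jointly
+
+conv = ChebConv(in_channels=3, out_channels=16, K=6)
+out = conv(batch.x, batch.edge_index)       # (8 * 4, 16) per-vertex features
+```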
+
+### 4.2 Pooling and Padding of the Regular Patches
+
+We apply the patch-wise average pooling and unpooling from [2] that takes advantage of the multi-scale structure of the semi-regular meshes. The subdivision connectivity guarantees that every 4 faces can be uniformly pooled to 1. The remaining vertices take the average of their own value and the values of the neighboring vertices that are removed. The unpooling operator subdivides the faces and the newly created vertices are assigned the average value of neighboring vertices from the lower-resolution mesh patch. A similar pooling and unpooling operator is also applied by [13], where the information is saved on the faces.
+
+The padding is crucial for the network to consider the regional patches in a larger context. Since the network handles the patches separately, we consider the features of the neighboring patches in a padding of size 2 as in [2]. If the vertices are boundary vertices, we decide to pad the patch with the boundary vertices' features.
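+
+The sketch below illustrates the patch-wise average pooling described above, assuming a precomputed mapping from each coarse vertex to the fine vertices it absorbs (the kept vertex plus its removed neighbors); thanks to the subdivision connectivity this mapping is fixed per refinement level, but the function itself is only an illustrative sketch.
+
+```python
+import torch
+
+def patch_average_pool(x, coarse_to_fine):
+    """Average pooling that undoes one subdivision step on a regular patch.
+
+    x: (num_fine_vertices, F) features on the fine patch.
+    coarse_to_fine: list where entry i holds the indices of the fine vertices
+    averaged into coarse vertex i (the kept vertex itself plus the inserted
+    neighboring vertices removed by the pooling).
+    """
+    return torch.stack([x[idx].mean(dim=0) for idx in coarse_to_fine])
+```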
+
+### 4.3 Network Architecture
+
+While using specialized pooling and convolution techniques for the regular patches, the general structure of our network architecture is inspired by $\left\lbrack {2,{15}}\right\rbrack$ . Our autoencoder architecture combines spectral Chebyshev convolutional filters with the described pooling technique to process the padded regular patches of a semi-regular mesh. The autoencoder compresses every padded patch, which corresponds to one face of the low-resolution mesh, from ${\mathbb{R}}^{{276} \times 3}\left( {{rl} = 4}\right)$ to an ${hr} = {10}$ dimensional latent vector and reconstructs the original padded patch from the latent vector.
+
+The encoder consists of two blocks containing a Chebyshev convolutional layer followed by an average pooling layer and an exponential linear unit (ELU) as an activation function [25]. The output of the second encoding block is mapped to the latent space by a fully connected layer.
+
+The decoder mirrors the structure of the encoder by first applying a fully connected layer, which transforms the latent space vector back to a regular triangle representation with refinement level ${rl} = 2$ . Afterward, two decoding blocks consisting of an unpooling layer followed by a convolutional layer transform the coarse triangle representation back to the original padded patch representation. Finally, another Chebyshev convolutional layer is applied without activation function to reconstruct the original patch coordinates by reducing the number of features to three dimensions.
+
+All Chebyshev convolutional layers use $K = 6$ Chebyshev polynomials. Table 3 in the supplementary material gives a detailed view of the structure of the network together with the parameter numbers per layer which sum up to 23,053. Figure 2 illustrates the patch sizes inside the autoencoder. Note that we are able to handle non-manifold edges of the coarse base mesh because the patches, whose interiors by construction have only manifold-edges, are fed separately. The code will be provided as supplementary material.
+
+This spectral CoSMA architecture can handle all surface meshes that have been remeshed into a semi-regular representation of the same refinement level. Because the regional padded patches are handled separately, the workflow is independent of the original irregular mesh connectivity.
+
+### 4.4 Surface-Aware Loss Calculation
+
+The authors of the patch-based spatial CoSMA [2] employ a patch-wise mean squared error as the training loss. However, that loss does not keep track of the multiple appearances of vertices on the patch boundaries. Therefore, it is not surface-aware, and during training the error on the patch boundaries is weighted higher than in the interior of the patches. By weighting the vertex-wise error in the training loss according to the vertices' number of appearances in the different patches, we employ a surface-aware error for training. This reduces the P2S error, as shown in the ablation study.
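+
+A minimal sketch of such a surface-aware loss follows; here each vertex's squared error is divided by the number of patches in which the vertex appears, which is one way to read the weighting above, so that every surface vertex effectively contributes once.
+
+```python
+import torch
+
+def surface_aware_mse(pred, target, counts):
+    """Vertex-wise squared error weighted by the inverse appearance count.
+
+    pred, target: (V, 3) stacked vertices of all patches (boundary vertices
+    appear once per patch they belong to).
+    counts: (V,) number of patches in which each vertex appears.
+    """
+    per_vertex = ((pred - target) ** 2).sum(dim=-1)
+    return (per_vertex / counts).mean()
+```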
+
+Table 1: Point to surface (P2S) errors $\left( {\times {10}^{-2}}\right)$ between reconstructed unseen semi-regular meshes $\left( {{rl} = 4}\right)$ and original irregular mesh and their standard deviations for three different training runs. $\left\lbrack {4,{13},{15}}\right\rbrack$ have to be trained per mesh; we and $\left\lbrack 2\right\rbrack$ train one network for all three animals in the GALLOP dataset. *: the elephant has not been seen by the network during training.
+
+| Mesh Class | CoMA [15] | Neural3DMM [4] | SubdivNet [13] | Spatial CoSMA [2] | Ours |
+| --- | --- | --- | --- | --- | --- |
+| FAUST | ${0.7073} \pm {1.751}$ | ${0.4064} \pm {0.921}$ | ${2.8190} \pm {4.699}$ | ${0.0224} \pm {0.045}$ | $\mathbf{{0.0031}} \pm {0.006}$ |
+| Horse | ${0.0053} \pm {0.017}$ | ${0.0096} \pm {0.045}$ | ${0.0113} \pm {0.025}$ | ${0.0078} \pm {0.012}$ | $\mathbf{{0.0022}} \pm {0.005}$ |
+| Camel | ${0.0075} \pm {0.023}$ | ${0.0145} \pm {0.056}$ | ${0.0113} \pm {0.024}$ | ${0.0091} \pm {0.014}$ | $\mathbf{{0.0030}} \pm {0.006}$ |
+| Elephant | ${0.0101} \pm {0.031}$ | ${0.0147} \pm {0.057}$ | ${0.0145} \pm {0.032}$ | ${0.0316} \pm {0.068}$* | ${0.0054} \pm {0.012}$* |
+
+Table 2: P2S errors $\left( {\times {10}^{-2}}\right)$ for three different training runs. Additionally, the Euclidean P2S error (in cm) is given. *: the entire YARIS dataset has not been seen by the network during training.
+
+| Dataset | Component Lengths | Spatial CoSMA [2] Test P2S | Spatial CoSMA [2] Eucl. E. | Ours Test P2S | Ours Eucl. E. |
+| --- | --- | --- | --- | --- | --- |
+| TRUCK | 135-370 cm | ${0.0660} \pm {0.117}$ | 2.76 cm | ${0.0013} \pm {0.003}$ | 0.26 cm |
+| YARIS* | 21-91 cm | ${0.2061} \pm {0.438}$ | 0.84 cm | ${0.0375} \pm {0.088}$ | 0.31 cm |
+
+## 216 5 Experiments
+
+We test our spectral CoSMA for semi-regular meshes using a setup similar to [2] on four different datasets and compare our achieved reconstruction errors to state-of-the-art surface mesh autoencoders.
+
+### 5.1 Datasets
+
+GALLOP: The dataset contains triangular meshes representing a motion sequence with 48 timesteps from a galloping horse, elephant, and camel [26]. The galloping movement is similar but the meshes representing the surfaces of the three animals are different in connectivity and the number of vertices. This is why the baseline autoencoders have to be trained three times. The surface approximations are remeshed to semi-regular meshes with refinement level ${rl} = 4$ for each animal. The new meshes are still of different connectivity, but all are made up of regional regular patches. Table 6 lists the resulting numbers of vertices. We normalize the semi-regular meshes to $\left\lbrack {-1,1}\right\rbrack$ as in [2]. Before inputting the data to the CoSMAs, every patch is translated to zero mean. We use the first 70% of the galloping sequence of the horse and camel for training. The architecture is tested on the remaining ${30}\%$ and the whole sequence of the elephant, which is never seen during the training for the CoSMAs.
+
+FAUST: The dataset contains 100 meshes [27], which are in correspondence to each other. The irregular surface meshes represent 10 different bodies in 10 different poses. For the experiments, we consider two unknown poses of all bodies (20% of the data) in the testing set. The meshes are remeshed and normalized in the same way as for the GALLOP dataset.
+
+TRUCK and YARIS: In a car crash simulation the car components, which are generally represented by surface meshes, often deform in different patterns. Every component is discretized by a surface mesh, while the local deformation is described by the same physical rules. Following [2], the TRUCK dataset contains 32 completed frontal crash simulations and 6 components, the YARIS dataset contains 10 simulations and 10 components. 30 simulations and 70% of the timesteps of the TRUCK dataset are included in the training set. The remaining samples from the TRUCK dataset and the entire YARIS dataset, representing a different car, are considered for testing. For this setup, the authors of $\left\lbrack {2,{28}}\right\rbrack$ detect patterns in the deformation of the TRUCK and YARIS components. We normalize the meshes that discretize car components to zero mean and range $\left\lbrack {-1,1}\right\rbrack$ relative to the coordinates' ratio. Every patch is translated to zero mean.
+
+
+
+Figure 3: Reconstructed unknown FAUST pose and elephant test sample at $t = {43}$ by CoMA, Neural3DMM, SubdivNet Autoencoder, spatial CoSMA, and our network. P2S error of the reconstructed faces is highlighted. More reconstruction examples are given in the supplementary material. * The elephant's mesh has not been presented during training to spatial CoSMA and our network.
+
+### 5.2 Training Details
+
+We train the network (implemented in PyTorch [29] and PyTorch Geometric [30]) with the adaptive learning rate optimization algorithm [31]. For the GALLOP and the FAUST dataset, we use a learning rate of 0.0001 and train for 150 epochs using a batch size of 100. For the TRUCK data, we choose a batch size of 100 combined with a learning rate of 0.001 for 300 epochs, since the variation inside the dataset is higher. We minimize the surface-aware loss between the original and reconstructed regional patches of the surface mesh without considering the padding. To augment the data in the case of the GALLOP and the FAUST dataset, we rotate the regional patches by ${0}^{ \circ }$, ${120}^{ \circ }$, and ${240}^{ \circ }$.
+
+Our architecture requires at least ${50}\%$ fewer parameters than the CoMA, Neural3DMM, and SubdivNet networks, because for increasing ${rl}$ and consequently finer meshes, the CoSMAs require only a few more parameters in the linear layers (compare Tables 6 and 7 in the supplementary material). This is because all patches share the same convolutional filters and their parameters. The spectral CoSMA approach requires 15% fewer parameters than the spatial CoSMA approach. The runtime analysis and the ablation study justifying the parameter choices are provided in the supplementary material.
+
+### 5.3 Reconstructions of the Meshes
+
+The mean squared error between true and reconstructed vertices of the semi-regular mesh allows a comparison of different methods only if the same remeshing result is used. In contrast to [2], we compare the reconstructed semi-regular mesh directly to the original irregular surface mesh by calculating a point-to-surface (P2S) error. We average the mean squared errors between the vertices of the semi-regular mesh and their orthogonal projections onto the surface described by the irregular mesh. This allows us to compare the reconstruction errors when using different remeshing results or refinements.
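+
+A hedged sketch of such a P2S computation is shown below. It uses the `trimesh` library for the closest-point projection onto the irregular reference mesh; the function name, the toy sphere example, and the exact normalization are illustrative assumptions rather than the paper's implementation.
+
+```python
+import numpy as np
+import trimesh
+
+def p2s_error(reconstructed_vertices: np.ndarray, irregular_mesh: trimesh.Trimesh) -> float:
+    """Mean squared distance from reconstructed semi-regular vertices to the surface
+    described by the original irregular mesh."""
+    _, distance, _ = trimesh.proximity.closest_point(irregular_mesh, reconstructed_vertices)
+    return float(np.mean(distance ** 2))
+
+# toy example: a sphere as the "irregular" surface, perturbed vertices as the reconstruction
+sphere = trimesh.creation.icosphere(subdivisions=3)
+points = sphere.vertices + 0.01 * np.random.randn(*sphere.vertices.shape)
+print(p2s_error(points, sphere))
+```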
+
+Besides CoMA [15] and Neural3DMM [4], we use an additional baseline semi-regular mesh autoencoder that uses our network's architecture with the pooling and convolutional layers from SubdivNet [13] to process the entire meshes. In Table 1 we compare the autoencoders for the GALLOP and FAUST datasets in terms of the P2S errors of reconstructed test samples, whose 3D coordinates lie in the range $\left\lbrack {-1,1}\right\rbrack$. Our network reduces the test reconstruction error for the GALLOP and FAUST datasets by more than ${50}\%$ and ${80}\%$, respectively, if the shape is presented to the autoencoder during the training. For unknown poses from the FAUST dataset, the limbs' positions are reconstructed inaccurately by the CoMA, Neural3DMM, and SubdivNet autoencoders. Especially if the pose is not similar to the training poses, their reconstruction fails, as Figure 3 illustrates.
+
+
+
+Figure 4: Reconstructed front beams from the TRUCK (length of ${150}\mathrm{\;{cm}}$ ) at time $t = {24}$ (test sample) from two crash simulations representing different deformation behavior and from the YARIS (length of ${65}\mathrm{\;{cm}}$ ) at $t = {15}$ . The average Euclidean P2S error (in $\mathrm{{cm}}$ ) of the faces is highlighted.
+
+The spectral CoSMA's reconstructions are generally smoother than the ones from the spatial CoSMA, which reduces the reconstruction errors. Figure 7 in the supplementary material shows that the reconstructed patch using spectral filters, which encode the connectivity of the whole patch in the Chebyshev polynomials, is smoother than the spatial reconstruction, where the convolutional kernels only consider the close neighborhood. Because the spatial CoSMA uses ${hr} = 8$ and no surface-aware loss, we also list our reconstruction errors using these parameters in the ablation study for a complete comparison.
+
+Transfer Learning to Meshes with New Connectivity: Our spectral CoSMA and the spatial CoSMA are the only networks that can reconstruct an unseen shape of different connectivity. The elephant's mesh has never been presented to our network; nevertheless, our reconstruction error is lower. Even though they are trained on the elephant, the baselines' reconstructions are worse and unstable in the legs, as Figure 3 illustrates. The spatial CoSMA's reconstructions of the unseen elephant are inferior to those of all the other networks, although its reconstructions of the known camel and horse are of similar quality to the other baselines. This highlights the improved transfer learning and generalization capability of the new spectral approach.
+
+Since the TRUCK and YARIS datasets contain 16 different meshes, the reconstruction results are compared between the CoSMA architectures. In Table 2 we present the average P2S errors for the TRUCK and YARIS dataset between the components scaled to range $\left\lbrack {-1,1}\right\rbrack$ and in cm. The entire YARIS dataset has never been presented to the network during training. The results on the YARIS in Figure 4 also show that our network not only reconstructs smoother surfaces in comparison to the spatial CoSMA but also has higher transfer learning capacities.
+
+A comparison of the results for refinement levels ${rl} = 3$ and ${rl} = 4$ for the TRUCK and YARIS datasets (see Table 8 in the supplementary material) shows the stability of the results from our spectral CoSMA. For the spatial CoSMA, on the other hand, the reconstruction quality decreases when increasing the refinement level. This is due to its fixed kernel size of 2: since the mesh is finer, the neighborhoods considered by a spatial filter with kernel size 2 cover smaller areas of the surface. The spectral CoSMA considers the entire patches in the spectral representation. Therefore, an increase in the refinement level does not impair the reconstruction quality.
+
+### 5.4 Low-dimensional Embedding
+
+We project the patch-wise hidden representations of size ${hr}$ into the two-dimensional space using the linear dimensionality reduction method Principal Component Analysis (PCA) [32]. Then we compare these patch-wise results to the $2\mathrm{D}$ embedding over time of the whole shape, by concatenating the hidden patch-wise representations and then applying PCA.
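+
+The following sketch illustrates the two embedding variants with scikit-learn's PCA. The hidden representations are random placeholders and the tensor shapes are only assumptions matching the description above (one ${hr}$-dimensional code per patch and timestep):
+
+```python
+import numpy as np
+from sklearn.decomposition import PCA
+
+n_timesteps, n_patches, hr = 48, 120, 10
+hidden = np.random.rand(n_timesteps, n_patches, hr)   # placeholder hidden representations
+
+# patch-wise 2D embeddings over time (one PCA per patch)
+patch_embeddings = [PCA(n_components=2).fit_transform(hidden[:, p, :]) for p in range(n_patches)]
+
+# embedding of the whole shape: concatenate the patch codes per timestep, then apply PCA
+shape_embedding = PCA(n_components=2).fit_transform(hidden.reshape(n_timesteps, -1))
+```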
+
+The time-dependent embedding for the unseen elephant from the GALLOP dataset exhibits a periodic galloping sequence, visualized in Figure 5 (a). We compare how similar the 2D patch-wise embeddings are to the $2\mathrm{D}$ embedding for the entire shape, to determine how important the deformation of the patch is for the general deformation behavior of the whole shape. The patch-wise distance is visualized in Figure 5 (b) and its calculation is detailed in the supplementary material. We notice that this distance is the lowest for the body and legs, which define the elephant's gallop, whereas the movement of the head does not follow the periodic pattern.
+
+
+
+Figure 5: (a) 2D Embedding of the low-dimensional representation of the whole elephant over time. (b) Highlighting the distance of the patch-wise embeddings to the embedding of the whole shape. (c) Patch-wise score for the TRUCK’s front beam from Figure 4 at $t = {24}$ . Only the patch with the high score manifests the deformation in two patterns. This is visible in the example patches with high and low scores. The embedding's colors encode timestep and branch.
+
+For the TRUCK and YARIS datasets, the goal is the detection of clusters corresponding to different deformation patterns in the components' embeddings. This speeds up the analysis of car crash simulations since relations between model parameters and the deformation behavior are discovered more easily [28, 33]. In the 2D visualizations for the TRUCK components, we detect two clusters corresponding to different deformation behaviors, and our patch-based approach allows us to identify the patches that contribute most to this. For each patch, we define a score, which equals the accuracy of an SVM (between 0.5 and 1) that classifies the two observed deformation patterns of the entire component from the patch's embedding, see Figure 5 (c). The highlighted patches correspond to the left part of the beam, where the deformation is visibly different for two different TRUCK simulations in Figure 4. Note that this comparison of patch- and shape-embeddings does not lead to significant results for the spatial CoSMA [2] because of the instability of its results.
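+
+A minimal sketch of such a patch score is given below, using scikit-learn. The embeddings and the pattern labels are random placeholders, and the use of cross-validation (rather than, e.g., training accuracy) is an assumption, since the text does not specify how the SVM accuracy is estimated:
+
+```python
+import numpy as np
+from sklearn.model_selection import cross_val_score
+from sklearn.svm import SVC
+
+rng = np.random.default_rng(0)
+patch_embedding = rng.normal(size=(32, 2))     # one 2D embedding point per simulation
+pattern_label = rng.integers(0, 2, size=32)    # observed deformation pattern of the whole component
+
+score = cross_val_score(SVC(), patch_embedding, pattern_label, cv=5).mean()
+print(f"patch score (0.5 = uninformative, 1.0 = separates both patterns): {score:.2f}")
+```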
+
+For the YARIS, which has never been seen by the network during training, we also visualize the low-dimensional representation for different components in 2D using PCA. We detect a deformation pattern in the front beams that splits up the simulation set into two clusters, see Figure 9 in the supplementary material, which is a result similar to [2] who used a nonlinear dimensionality reduction.
+
+## 6 Conclusion
+
+We have introduced a novel spectral mesh autoencoder pipeline for the analysis of deforming $3\mathrm{D}$ semi-regular surface meshes with different connectivity. This allows us to generate high-quality reconstructions of unseen meshes that have not been presented during training. In fact, the reconstruction quality for unknown meshes with our spectral CoSMA is higher than with baseline autoencoders that have seen the meshes during training. This improved transfer learning capability and reconstruction quality motivate the future analysis of generative models for the patch-based approach. For high-quality generative results, we also plan to improve the remeshing procedure to focus more on detailed structures. Right now, the loss of smaller detailed geometric structures in the remeshing has little effect on the results, since we want to detect the behavioral patterns in the low-dimensional representations of the global deformation.
+
+Additionally, we provide an understanding and interpretation of which surface areas lead to the patterns in the embedding space. We speculate that this information per patch could be used in further analysis. We also plan to apply the architecture to other tasks such as shape matching and segmentation.
+
+## References
+
+[1] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. 1, 2
+
+[2] Sara Hahner and Jochen Garcke. Mesh convolutional autoencoder for semi-regular meshes of different sizes. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 885-894, 2022. 1, 3, 4, 5, 6, 7, 9, 12, 14, 15
+
+[3] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, volume 29, pages 3844-3852, 2016. 2, 3, 4
+
+[4] Giorgos Bouritsas, Sergiy Bokhnyak, Stylianos Ploumpis, Stefanos Zafeiriou, and Michael Bronstein. Neural 3D morphable models: Spiral convolutional networks for 3D shape representation learning and generation. Proceedings of the IEEE International Conference on Computer Vision, 2019-Octob:7212-7221, 2019. doi: 10.1109/ICCV.2019.00731. 2, 3, 6, 7, 14, 15
+
+[5] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. MeshCNN: A network with an edge. ACM Transactions on Graphics, 38(4):1-12, jul 2019. doi: 10.1145/3306346.3322959. 2, 3
+
+[6] Michael M. Bronstein, Joan Bruna, Yann Lecun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34 (4):18-42, 2017. doi: 10.1109/MSP.2017.2693418. 2
+
+[7] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2020. doi: 10.1109/tnnls.2020.2978386. 2
+
+[8] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. 34th International Conference on Machine Learning, 3:2053-2070, 2017. 2
+
+[9] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and deep locally connected networks on graphs. 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings, pages 1-14, 2014. 2
+
+[10] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, pages 1-14, 2016. 2, 12
+
+[11] Nicholas Sharp, Souhaib Attaiki, Keenan Crane, and Maks Ovsjanikov. DiffusionNet: Discretization agnostic learning on surfaces. ACM Transactions on Graphics, 41(3):1-16, 2022. 3
+
+[12] Dmitriy Smirnov and Justin Solomon. HodgeNet: Learning spectral geometry on triangle meshes. ACM Transactions on Graphics, 40(4):1-11, jul 2021. doi: 10.1145/3450626.3459797. 3
+
+[13] Shi-Min Hu, Zheng-Ning Liu, Meng-Hao Guo, Jun-Xiong Cai, Jiahui Huang, Tai-Jiang Mu, and Ralph R. Martin. Subdivision-based mesh convolution networks. ACM Transactions on Graphics, 41(3):1-16, 2022. 3, 4, 5, 6, 7, 14, 15
+
+[14] Aaron W.F. Lee, Wim Sweldens, Peter Schröder, Lawrence Cowsar, and David Dobkin. MAPS: Multiresolution adaptive parameterization of surfaces. Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1998, pages 95-104, 1998. doi: 10.1145/280814.280828. 3, 4
+
+[15] Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, and Michael J. Black. Generating 3D faces using convolutional mesh autoencoders. Proceedings of the European Conference on Computer Vision, pages 704-720, 2018. doi: 10.1007/978-3-030-01219-9_43. 3, 4, 5, 6, 7, 14, 15
+
+[16] Yu Jie Yuan, Yu Kun Lai, Jie Yang, Qi Duan, Hongbo Fu, and Lin Gao. Mesh variational autoencoders with edge contraction pooling. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, volume 2020-June, pages 1105-1112. IEEE Computer Society, jun 2020. doi: 10.1109/CVPRW50498.2020.00145. 3
+
+[17] Yi Zhou, Chenglei Wu, Zimo Li, Chen Cao, Yuting Ye, Jason Saragih, Hao Li, and Yaser Sheikh. Fully convolutional mesh autoencoder using efficient spatially varying kernels. In Advances in Neural Information Processing Systems, volume 33, pages 9251-9262, 2020. 3
+
+[18] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. 3
+
+[19] Yann LeCun, Lionel D. Jackel, Brian Boser, John S. Denker, Henry P. Graf, Isabelle Guyon, Don Henderson, Richard E. Howard, and William Hubbard. Handwritten digit recognition: applications of neural network chips and automatic learning. IEEE Communications Magazine, 27(11):41-46, nov 1989. doi: 10.1109/35.41400. 3
+
+[20] Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral CNN. 36th International Conference on Machine Learning, 2019-June:2357-2371, 2019. 3
+
+[21] Luitzen Egbertus Jan Brouwer. Über Abbildung von Mannigfaltigkeiten. Mathematische Annalen, 71(4), dec 1912. doi: 10.1007/BF01456812. 3
+
+[22] Hsueh Ti Derek Liu, Vladimir G. Kim, Siddhartha Chaudhuri, Noam Aigerman, and Alec Jacobson. Neural subdivision. ACM Transactions on Graphics, 39(4):1-16, jul 2020. doi: 10.1145/3386569.3392418. 3, 4
+
+[23] Frédéric Payan, Céline Roudet, and Basile Sauvage. Semi-regular triangle remeshing: A comprehensive study. Computer Graphics Forum, 34(1):86-102, 2015. doi: 10.1111/cgf.12461. 4
+
+[24] Charles Loop. Smooth subdivision surfaces based on triangles. Master's thesis, The University of Utah, jan 1987. 4
+
+[25] Djork Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, pages 1-14, 2016. 5
+
+[26] Robert W. Sumner and Jovan Popović. Deformation transfer for triangle meshes. ACM Transactions on Graphics, 23(3):399-405, 2004. doi: 10.1145/1186562.1015736. 6
+
+[27] Federica Bogo, Javier Romero, Matthew Loper, and Michael J. Black. FAUST: Dataset and evaluation for 3D mesh registration. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3794-3801, 2014. doi: 10.1109/CVPR.2014.491. 6
+
+[28] Sara Hahner, Rodrigo Iza-Teran, and Jochen Garcke. Analysis and prediction of deforming 3D shapes using oriented bounding boxes and LSTM autoencoders. In Artificial Neural Networks and Machine Learning, pages 284-296. Springer International Publishing, 2020. 6, 9
+
+[29] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32, pages 8026-8037, 2019. 7
+
+[30] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, mar 2019. 7
+
+[31] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, pages 1-15, 2015. 7
+
+[32] Karl Pearson. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559-572, nov 1901. doi: 10.1080/14786440109462720. 8
+
+[33] Bastian Bohn, Jochen Garcke, Rodrigo Iza-Teran, Alexander Paprotny, Benjamin Peherstorfer, Ulf Schepsmeier, and Clemens August Thole. Analysis of car crash simulation data with nonlinear machine learning methods. Procedia Computer Science, 18:621-630, 2013. doi: 10.1016/j.procs.2013.05.226. 9
+
+[34] Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3D point clouds. 35th International Conference on Machine Learning, ICML 2018, 1:67-85, 2018. 13
+
+## A Supplementary Material
+
+## Code and Detailed Network Architecture
+
+As an addition to the architecture's description in section 4 and its visualization in Figure 2, we give a detailed distribution of the parameters over the convolutional, fully connected, and pooling layers in Table 3. We provide the code through an anonymized repository: https://anonymous.4open.science/r/spectralCoSMA-6156/README.md.
+
+Table 3: Structure of the autoencoder for refinement level ${rl} = 4$ , number of Chebyshev polynomials $K = 6$ and hidden representation of size ${hr} = {10}$ . The bullets $\bullet$ reference the corresponding batch size. The data's last dimension is the number of vertices considered for each padded patch.
+
+| Encoder Layer | Output Shape | Param. | Decoder Layer | Output Shape | Param. |
+| --- | --- | --- | --- | --- | --- |
+| Input | $(\bullet, 3, 267)$ | 0 | Fully Connected | $(\bullet, 2^5, 15)$ | 5280 |
+| ChebConv | $(\bullet, 2^4, 267)$ | 304 | Unpooling | $(\bullet, 2^5, 78)$ | 0 |
+| Pooling | $(\bullet, 2^4, 78)$ | 0 | ChebConv | $(\bullet, 2^5, 78)$ | 6176 |
+| ChebConv | $(\bullet, 2^5, 78)$ | 3104 | Unpooling | $(\bullet, 2^4, 267)$ | 0 |
+| Pooling | $(\bullet, 2^5, 15)$ | 0 | ChebConv | $(\bullet, 2^4, 267)$ | 3088 |
+| Fully Connected | $(\bullet, 10)$ | 4810 | ChebConv | $(\bullet, 3, 267)$ | 291 |
+
+## Ablation Study
+
+We perform an ablation study to justify some of the design and parameter choices in our spectral CoSMA architecture. In Table 4, we report the P2S errors on the FAUST dataset and the elephant from the GALLOP dataset after 50 epochs of training. The accuracy degrades for at least one of the two datasets when we reduce the degree $K$ of the Chebyshev polynomials, reduce the size of the hidden representation ${hr}$ , reduce the number of output channels of the convolutional layers, or change the Chebyshev Graph Convolution to the Graph Convolution from [10], who use only first-order Chebyshev polynomials. For the latter change, the networks are trained for 100 epochs.
+
+We also list the P2S errors for a training that uses the patch-wise mean squared error instead of the surface-aware training loss and a hidden representation of size ${hr} = 8$ as in [2]. These networks are trained for 150 epochs, as in the main experiments.
+
+Table 4: Ablation study of our parameter choices based on P2S errors $\left( {\times {10}^{-2}}\right)$ for 2 training runs.
+
+| Model | P2S Error (FAUST) | P2S Error (Elephant) |
+| --- | --- | --- |
+| full | $\mathbf{0.0031 \pm 0.006}$ | $0.0054 \pm 0.012$ |
+| ${hr} = 8$ | $0.0053 \pm 0.010$ | $0.0083 \pm 0.016$ |
+| $K = 4$ | $0.0031 \pm 0.006$ | $0.0055 \pm 0.012$ |
+| ${2}^{3}$ and ${2}^{4}$ channels | $0.0031 \pm 0.006$ | $0.0060 \pm 0.013$ |
+| GCN [10] | $0.0032 \pm 0.006$ | $0.0056 \pm 0.012$ |
+| Patch-wise train MSE | $0.0033 \pm 0.006$ | $0.0074 \pm 0.015$ |
+| ${hr} = 8$ and patch-wise train MSE as in [2] | $0.0041 \pm 0.007$ | $0.0085 \pm 0.016$ |
+
+## Runtime Analysis
+
+Our spectral CoSMA has a similar runtime per epoch for ${rl} = 4$ when comparing it to the spatial CoSMA, see Table 5 for GALLOP and FAUST datasets. For ${rl} = 3$ the runtime is reduced by ${50}\%$ because the spectral CoSMA's runtime scales with the refinement level.
+
+For a more detailed comparison, we illustrate the validation error per epoch in Figure 6 when training both networks with the patch-wise training error. It shows that the spectral CoSMA converges in six times fewer epochs than the spatial CoSMA. This means that the total training time of a spectral CoSMA on the GALLOP and FAUST datasets is reduced by more than 75% for ${rl} = 4$. The training has been conducted on an Nvidia Tesla V100.
+
+
+
+Figure 6: Training error (Vertex-to-Vertex mean squared error measured for each patch) per Epoch for the GALLOP dataset and ${rl} = 4$ for the training of the CoSMA networks.
+
+Table 5: Runtime of different CoSMAs per epoch when training on GALLOP and FAUST datasets using a batch size of 100.
+
+| Mesh Class | Spatial CoSMA, ${rl} = 3$ | Spatial CoSMA, ${rl} = 4$ | Ours, ${rl} = 3$ | Ours, ${rl} = 4$ |
+| --- | --- | --- | --- | --- |
+| FAUST | 17.3 sec | 18.7 sec | 6.9 sec | 11.8 sec |
+| GALLOP | 16.7 sec | 17.8 sec | 10.1 sec | 17.2 sec |
+
+## Additional Reconstructed Samples
+
+We provide additional reconstructed samples from the GALLOP and FAUST dataset in Figure 8. Additionally, Figure 7 compares reconstructed patches from the two CoSMA approaches. It is visible that the reconstruction from the novel spectral CoSMA is smoother.
+
+## 2D Visualizations of the Embeddings
+
+Figure 9 shows the embeddings in the low-dimensional space for two YARIS front beams. The beams deform in two different branches, which manifests in the embedding.
+
+For the GALLOP dataset, we calculate a distance between the patch-wise embeddings and the embedding of the entire shape, to determine how important the patch's deformation is for the general deformation behavior of the whole shape. We interpolate and densely subsample the lines connecting the embedding points of consecutive timesteps. Between the sampled points ${p}_{i}^{s}$ describing the deformation of the entire shape over time and the sampled points ${p}_{j}^{p}$ from the patch’s embedding, we calculate a chamfer distance, since the embedding shape is cyclic. The chamfer distance [34] measures the average squared distance between each point ${p}_{i}^{s}$ to its nearest neighbor from all points ${p}_{j}^{p}$ and vice versa. Therefore the distance is the lowest for circle-like patch-wise embeddings.
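+
+A sketch of this distance computation is given below. The use of scipy's cKDTree, the toy curves standing in for the resampled embeddings, and the choice to sum the two directional means are implementation assumptions for illustration:
+
+```python
+import numpy as np
+from scipy.spatial import cKDTree
+
+def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
+    """Average squared nearest-neighbor distance from A to B plus the one from B to A."""
+    d_ab, _ = cKDTree(points_b).query(points_a)
+    d_ba, _ = cKDTree(points_a).query(points_b)
+    return float(np.mean(d_ab ** 2) + np.mean(d_ba ** 2))
+
+# toy curves standing in for the densely resampled 2D embeddings over time
+t = np.linspace(0.0, 2.0 * np.pi, 500)
+shape_curve = np.stack([np.cos(t), np.sin(t)], axis=1)   # cyclic embedding of the whole shape
+patch_curve = 0.9 * shape_curve + 0.01                   # a circle-like patch embedding
+print(chamfer_distance(shape_curve, patch_curve))
+```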
+
+
+
+Figure 7: Comparison of reconstructed patches of the CoSMA networks.
+
+
+
+Figure 8: More reconstructed unknown FAUST pose and reconstructed horse test sample at $t = {39}$ by CoMA, Neural3DMM, SubdivNet Autoencoder, spatial CoSMA, and our network with highlighted P2S error.
+
+
+
+Figure 9: Spectral CoSMA embeddings of the YARIS front beams for 10 simulations, which deform in two branches. Color encodes timestep and branch.
+
+## Model Parameters and Reconstruction Errors for Refinement Level 3
+
+For the baselines and our spectral CoSMA, we list the number of trainable parameters of the models for the different meshes at refinement levels ${rl} = 3$ and ${rl} = 4$. Increasing the refinement level by one increases the number of faces by a factor of four.
+
+Table 6: Number of vertices per mesh and trainable parameters for the reconstruction of semi-regular meshes using refinement level 4.
+
+| Mesh Class | #Vertices (irregular) | #Vertices (semi-regular) | CoMA [15] | Neural3DMM [4] | SubdivNet [13] | Spatial CoSMA [2] | Ours |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| FAUST | 6,890 | 12,772 | 46,379 | 426,195 | 879,857 | 26,888 | 23,053 |
+| Horse | 8,431 | 14,745 | 50,731 | 459,987 | 1,010,417 | | |
+| Camel | 21,887 | 12,802 | 46,923 | 430,419 | 879,857 | 26,888 | 23,053 |
+| Elephant | 42,321 | 15,362 | 52,363 | 472,659 | 1,053,937 | | |
+
+Table 7: Comparison of the number of parameters for meshes of refinement level 3 from [2].
+
+| Mesh Class | CoMA [15] | Neural3DMM [4] | Spatial CoSMA [2] | Ours |
+| --- | --- | --- | --- | --- |
+| FAUST | 26,795 | 276,275 | 18,184 | 16,235 |
+| Horse | 27,339 | 280,499 | 18,184 | 16,235 |
+| Camel | 26,795 | 292,659 | | |
+| Elephant | 27,339 | 296,883 | | |
+
+Table 8: Point to surface (P2S) errors $\left( {\times {10}^{-2}}\right)$ between reconstructed unseen semi-regular meshes $\left( {{rl} = 3}\right)$ and original irregular mesh and their standard deviations for three different training runs. Additionally, the average Euclidean vertex-wise error (in $\mathrm{{cm}}$ ) is given.
+
+*: the entire YARIS dataset has not been seen by the network during training.
+
+| Dataset | Component Lengths | Spatial CoSMA [2] Test P2S | Spatial CoSMA [2] Eucl. E. | Ours Test P2S | Ours Eucl. E. |
+| --- | --- | --- | --- | --- | --- |
+| TRUCK | 135-370 cm | $0.0443 \pm 0.071$ | 2.23 cm | $0.0043 \pm 0.009$ | 0.43 cm |
+| YARIS* | 21-91 cm | $0.1784 \pm 0.380$ | 0.80 cm | $0.0458 \pm 0.090$ | 0.37 cm |
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2ab08d774489472cd6e2e5cc66f27f6488c989fb
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/7B_qc3tDyD/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,208 @@
+§ TRANSFER LEARNING USING SPECTRAL CONVOLUTIONAL AUTOENCODERS ON SEMI-REGULAR SURFACE MESHES
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+The underlying dynamics and patterns of 3D surface meshes deforming over time can be discovered by unsupervised learning, especially autoencoders, which calculate low-dimensional embeddings of the surfaces. To study the deformation patterns of unseen shapes by transfer learning, we want to train an autoencoder that can analyze new surface meshes without training a new network. Here, most state-of-the-art autoencoders cannot handle meshes of different connectivity and therefore have limited to no transfer learning capacities to new meshes. Also, reconstruction errors strongly increase in comparison to the errors for the training shapes. To address this, we propose a novel spectral CoSMA (Convolutional SemiRegular Mesh Autoencoder) network. This patch-based approach is combined with a surface-aware training. It reconstructs surfaces not presented during training and generalizes the deformation behavior of the surfaces' patches. The novel approach reconstructs unseen meshes from different datasets in superior quality compared to state-of-the-art autoencoders that have been trained on these shapes. Our transfer learning reconstruction errors are ${40}\%$ lower than those from models learned directly on the data. Furthermore, baseline autoencoders detect deformation patterns of unseen mesh sequences only for the whole shape. In contrast, due to the employed regional patches and stable reconstruction quality, we can localize where on the surfaces these deformation patterns manifest.
+
+§ 1 INTRODUCTION
+
+We study the deformation of surfaces in $3\mathrm{D}$ , which discretize human bodies, animals, or work pieces from computer aided engineering. Using autoencoders as a method for unsupervised learning, we analyze and detect patterns in the deformation behavior by calculating low-dimensional features. Since surface deformation is locally described by the same physical rules, we want to study the deformation patterns of unseen shapes by transfer learning. That means an autoencoder should be able to analyze new surface meshes without being trained again.
+
+While two-dimensional surfaces embedded in ${\mathbb{R}}^{3}$ are locally homeomorphic to the two-dimensional space, they are of non-Euclidean nature. Their representation by surface meshes lacks the regularity of pixels describing images, which is so convenient for 2D CNNs [1]. This is why existing methods for unsupervised learning for irregularly meshed surface meshes depend on the mesh connectivity when defining pooling or convolutional operators. For this reason, a trained mesh autoencoder cannot be applied to a surface that is represented by a different mesh, although the local deformation behavior might be similar.
+
+The authors of [2] presented a mesh autoencoder for semi-regular meshes of different sizes. The semi-regular surface representations enforce some local mesh regularity and are made up of regularly meshed patches as illustrated in Figure 1, which allows the application of their patch-wise approach to meshes of different sizes. However, the reconstruction quality decreases by a factor of 4 when applying their mesh autoencoder to new meshes and shapes that have not been used during training. This limits the method's application for unseen shapes.
+
+Figure 1: Remeshing of the horse template mesh. In the semi-regular mesh, the boundaries of the regularly meshed patches are highlighted in gray.
+
+Additionally, baseline mesh autoencoders for deforming shapes do not provide an understanding or explanation about which surface areas lead to the patterns in the embedding space. The embeddings represent the entire shape. Nevertheless, when identifying and analyzing deformation patterns, it is of particular relevance where on the surfaces these patterns manifest.
+
+Our work remedies these gaps by adopting the patch-based framework for semi-regular meshes and choosing a spectral graph convolutional filter [3] projecting vertex features to the Laplacian eigenvector basis in combination with a surface-aware training. Since the spectral filters consider the entire patch, the network generalizes better in comparison to a spatial approach, whose filters consider smaller $n$ -ring neighborhoods. This improves the quality and smoothness of the reconstruction results when being applied to unknown meshes and the errors are ${40}\%$ lower than errors from models learned directly on the data. Although spectral graph neural network methods require fixed mesh connectivity, our mesh-independent approach is not limited by this constraint. This is because the filters are applied to the regular substructures of semi-regular mesh representations of the surfaces. Furthermore, our patch-based approach allows us to correlate patch-wise embeddings with the embedding of the entire shape. This way we localize and understand where on the surfaces the deformation patterns, which are visible in the low-dimensional representation, manifest.
+
+The research objectives can be summarized as a) the definition of a spectral convolutional autoencoder for semi-regular meshes (spectral CoSMA) and a surface-aware training loss, by this means b) improving the generalization capability, transfer learning and runtime of baseline mesh autoencoders, and c) localizing the deformation patterns visible in the low-dimensional embedding on the surfaces.
+
+In section 2, we discuss work related to learning features from meshed geometry. In section 3, we present relevant characteristics of surface meshes for CNNs and introduce the semi-regular remeshing, followed by the definition of our spectral CoSMA in section 4. Results for different datasets containing meshes with different connectivity are presented in section 5.
+
+§ 2 RELATED WORK
+
+§ 2.1 CONVOLUTIONAL NETWORKS FOR SURFACES
+
+Surfaces are generally represented either in form of point clouds or by a surface mesh, which is defined by faces connecting vertices to each other. We only consider the representation via meshes, because their faces describe the underlying surface $\left\lbrack {4,5}\right\rbrack$ . Surface meshes can be viewed as graphs, and hence graph-based convolutional methods are often applied to meshes.
+
+Generally, convolutional networks for graphs can be separated into spectral and spatial ones, of which $\left\lbrack {1,6,7}\right\rbrack$ give an overview. Spatial convolutional methods for graphs aggregate features based on a node's spatial relations, which allows generalization across different mesh connectivities [7, 8]. Spectral approaches, on the other hand, interpret the information on the vertices as a signal propagating along the vertices. They exploit the connection between the graph Laplacian and the Fourier basis: vertex features are projected to the Laplacian eigenvector basis, where the filters are applied [9]. Instead of explicitly computing the Laplacian eigenvectors, the authors of [3] use truncated Chebyshev polynomials, and [10] uses only first-order Chebyshev polynomials. These spectral methods require a fixed connectivity of the graph; otherwise, the adjacency matrix and consequently the Laplacian eigenvector basis change.
+
+Furthermore, there are network architectures only for surface meshes, e.g. DiffusionNet [11] and HodgeNet [12], which are applied for classification, mesh segmentation, and shape correspondence. Nevertheless, these architectures cannot be implemented directly into autoencoders, because of missing mesh pooling operators.
+
+§ 2.2 NEURAL NETWORKS FOR SEMI-REGULAR SURFACE MESHES
+
+Semi-regular triangular surface meshes, also known as meshes with subdivision connectivity, come with a regular local structure and a hierarchical multi-resolution structure. In section 3.2, we provide a more detailed definition. The Spatial CoSMA [2] and SubdivNet [13] take advantage of the local regularity of the patches by defining efficient mesh-independent pooling operators and using 2D convolution. By inputting the patches separately into the network, [2] can define an autoencoder pipeline that is independent of the mesh size. [13] apply self-parametrization using the MAPS algorithm [14] to remesh watertight manifold meshes without boundaries. [2] on the other hand, apply a remeshing algorithm that works for meshes with boundaries and coarser base meshes.
+
+§ 2.3 MESH CONVOLUTIONAL AUTOENCODERS
+
+The first convolutional mesh autoencoder (CoMA) has been introduced in [15]. The authors introduced mesh downsampling and mesh upsampling layers for pooling and unpooling, which are combined with spectral convolutional filters using truncated Chebyshev polynomials as in [3]. The Neural 3D Morphable Models (Neural3DMM) network presented in [4] improves those results using spiral convolutional layers. The authors of [16] apply the CoMA to different datasets and improve the down and upsampling layers slightly. By manually choosing latent vertices for the embedding space, [17] define an autoencoder that allows interpolating in the latent space. All the above-mentioned mesh convolutional autoencoders work only for meshes of the same size and connectivity because the pooling and/or convolutional layers depend on the adjacency matrix. The authors of [2] showed that the latter methods are not able to learn data with greater global variations in comparison to their patch-based approach, which generalizes and reconstructs the deformed meshes to superior quality. Additionally, their architecture can be applied to unseen meshes of different sizes. The MeshCNN architecture [5] can be implemented as an encoder and decoder. Nevertheless, the pooling is feature dependent and therefore the embeddings can be of different significance.
+
+§ 3 HANDLING SURFACE MESHES BY NEURAL NETWORKS
+
+The irregularity of surface meshes gives rise to difficulties when handling them with a neural network. These are explained in this section, followed by the motivation and definition of semi-regular meshes.
+
+§ 3.1 IRREGULARITY OF SURFACE MESHES
+
+CNNs in 2D [18, 19] apply the same local filters to local neighborhoods of selected pixels of the image. Because of the global grid structure of the image (defined by the x- and y-axes), the filters of constant shape can be shifted horizontally and vertically, and the local neighborhoods are of regular connectivity. CNNs work so efficiently for images because they are translation equivariant and therefore equivariant to the global symmetry of images [20].
+
+The intrinsic dimension of surface meshes is also 2 because they represent a flat surface. Nevertheless, surface meshes lack global regularities, because they are not defined along a global grid, local neighborhoods can have any size and arrangement as long as they are locally Euclidean, and the distance between neighbors is not fixed.
+
+One cannot enforce a regular mesh discretization for every surface in ${\mathbb{R}}^{3}$ , which would lead to an underlying global grid [21]. This is why [2, 22] proposed to enforce a similar structure in the local neighborhoods by choosing a semi-regular representation of the surface. In this way, an efficient application of convolution on surface meshes becomes possible. Note that remeshing the polygonal mesh only changes the representation of the objects. The considered surface embedded in ${\mathbb{R}}^{3}$ is the same, but now represented by a different discrete approximation.
+
+Figure 2: Resolution of the regularly meshed patches inside the spectral CoSMA. The encoder pools the patches twice by undoing subdivision. In the decoder, the unpooling increases the resolution again by subdivision. The orange vertices are the vertices from the irregular base mesh. Red and purple vertices have been created during the ${1}^{\text{ st }}$ and ${2}^{\text{ nd }}$ refinement steps.
+
+§ 3.2 DEFINITION OF SEMI-REGULAR MESHES
+
+We consider semi-regular meshes in order to mitigate the problems caused by the irregularity of surface meshes while still allowing a flexible surface representation (see Figure 1). Following the definitions in [23], we call a surface mesh semi-regular if we can convert it to a low-resolution mesh by iteratively merging four triangular faces into one. Consequently, all vertices of the semi-regular mesh except for the ones remaining in the low-resolution mesh are regular (i.e. have six neighbors). Vice versa, the regular subdivision of a possibly irregular low-resolution mesh yields a semi-regular mesh. Such a regular subdivision can be achieved by inserting a vertex on each edge and splitting each original triangle face into 4 sub-triangles. $\left\lbrack {{13},{24}}\right\rbrack$ refer to this property as Loop subdivision connectivity of the semi-regular mesh. The subdivision connectivity makes semi-regular meshes particularly useful for multiresolution analysis and directly implies a suitable local pooling operator on semi-regular meshes (see section 4).
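+
+The regular subdivision step described above can be sketched as follows; this is a generic midpoint subdivision for illustration, not the paper's remeshing code:
+
+```python
+import numpy as np
+
+def subdivide(vertices: np.ndarray, faces: np.ndarray):
+    """One regular subdivision step: vertices (V, 3), faces (F, 3) integer indices."""
+    verts = list(map(tuple, vertices))
+    midpoint_index = {}
+
+    def midpoint(i, j):
+        key = (min(i, j), max(i, j))
+        if key not in midpoint_index:                       # insert a vertex on every edge once
+            midpoint_index[key] = len(verts)
+            verts.append(tuple((vertices[i] + vertices[j]) / 2.0))
+        return midpoint_index[key]
+
+    new_faces = []
+    for a, b, c in faces:                                   # split each triangle into four
+        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
+        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
+    return np.array(verts), np.array(new_faces)
+
+# one triangle becomes four; repeatedly subdivided, all newly inserted vertices become regular
+v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
+f = np.array([[0, 1, 2]])
+v2, f2 = subdivide(v, f)
+print(len(f2))   # 4
+```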
+
+§ 3.3 REMESHING
+
+We apply the remeshing from [2], because other algorithms, e.g. Neural Subdivision [22] or MAPS [14], only work for closed surfaces without boundaries and fail for base meshes as coarse as ours. The algorithm iteratively subdivides a coarse approximation of the original irregular mesh (see Figure 1). The resulting semi-regular mesh is fitted to the original mesh using gradient descent on a loss function based on the chamfer distance. The refinement level ${rl}$ states the number of times each face of the coarse base mesh is iteratively subdivided. The number of faces in the final semi-regular mesh is ${n}_{F}^{\text{ semireg }} = {4}^{rl} * {n}_{F}^{c}$ , with ${n}_{F}^{c}$ being the number of faces describing the coarse base mesh. We choose the refinement level ${rl} = 4$ , which leads to finer meshes compared to [2], who chose ${rl} = 3$ .
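+
+As a small worked example of this face count (the base-mesh size ${n}_{F}^{c} = 100$ is hypothetical):
+
+$$ n_F^{\text{semireg}} = 4^{rl} \cdot n_F^{c} = 4^{4} \cdot 100 = 25{,}600, $$
+
+i.e. the semi-regular mesh has 256 times as many faces as the coarse base mesh for ${rl} = 4$.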
+
+After the remeshing, all vertices that are newly created during the subdivision have six neighbors. Therefore, the resulting mesh is semi-regular or has subdivision connectivity.
+
+§ 4 SPECTRAL COSMA
+
+The network handles the regional patches separately, which allows us to handle meshes of different sizes. We describe how the graph convolution is combined with the padding and surface-aware loss as well as the pooling of the patches, and how one takes advantage of the semi-regular meshing. The building blocks are set together to define the spectral CoSMA (Spectral Convolutional Semi-Regular Mesh Autoencoder).
+
+§ 4.1 SPECTRAL CHEBYSHEV CONVOLUTIONAL FILTERS
+
+We apply fast Chebyshev filters [3], as in [15], with the distinction that we are using them to perform spectral convolutions on the regional patches instead of the entire mesh. We justify this different convolution on the patches, compared to [2], by the intuition that spectral filters encode information of a whole patch and the general characteristics of its deformations, whereas in comparison spatial convolution considers just the local neighborhood around a vertex.
+
+We use the formulation of [3] for convolving over our regularly meshed patches. We perform spectral decomposition using spectral filters and apply convolutions directly in the frequency space.
+
+The spectral filters are approximated by truncated Chebyshev polynomials, which avoid explicitly computing the Laplacian eigenvectors and, by this means, reduce the computational complexity.
+
+The decomposition using spectral filters is dependent on the adjacency matrix, which restricts the transfer learning of spectral graph convolution to meshes of the same connectivity. Nevertheless, the adjacency matrix of the patches of our semi-regular meshes is always the same for one refinement level. This allows us to train the filters for all patches together and to apply them to unseen meshes.
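+
+As an illustration of this property, the sketch below applies a single PyTorch Geometric `ChebConv` filter to a whole batch of patches that share one fixed connectivity. The toy triangle graph, the channel width of 16, and the per-patch loop are illustrative assumptions; only the idea of one shared spectral filter per refinement level comes from the text:
+
+```python
+import torch
+from torch_geometric.nn import ChebConv
+
+# toy stand-in for the fixed intra-patch connectivity that all patches of one
+# refinement level share (here: a single undirected triangle)
+edge_index = torch.tensor([[0, 1, 1, 2, 2, 0],
+                           [1, 0, 2, 1, 0, 2]])
+
+conv = ChebConv(in_channels=3, out_channels=16, K=6)        # K = 6 as in the paper
+
+patches = torch.rand(100, 3, 3)                             # 100 patches, 3 vertices, xyz features
+out = torch.stack([conv(p, edge_index) for p in patches])   # the same filter serves every patch
+print(out.shape)                                            # torch.Size([100, 3, 16])
+```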
+
+§ 4.2 POOLING AND PADDING OF THE REGULAR PATCHES
+
+We apply the patch-wise average pooling and unpooling from [2] that takes advantage of the multi-scale structure of the semi-regular meshes. The subdivision connectivity guarantees that every 4 faces can be uniformly pooled to 1 . The remaining vertices take the average of their own value and the values of the neighboring vertices that are removed. The unpooling operator subdivides the faces and the newly created vertices are assigned the average value of neighboring vertices from the lower-resolution mesh patch. A similar pooling and unpooling operator is also applied by [13], where the information is saved on the faces.
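+
+A hedged sketch of this pooling and unpooling on vertex features is shown below. The index lists `pool_groups` and `unpool_parents` would be precomputed once from the subdivision hierarchy; their names and the tiny toy example are illustrative assumptions:
+
+```python
+import torch
+
+def average_pool(x: torch.Tensor, pool_groups) -> torch.Tensor:
+    """x: (V_fine, C) vertex features; pool_groups[i] lists the fine vertex kept as coarse
+    vertex i together with its removed neighbors. Returns (V_coarse, C)."""
+    return torch.stack([x[g].mean(dim=0) for g in pool_groups])
+
+def average_unpool(x: torch.Tensor, unpool_parents) -> torch.Tensor:
+    """x: (V_coarse, C); unpool_parents[j] lists the coarse vertices a fine vertex j is
+    created from (one for kept vertices, two for newly inserted edge midpoints)."""
+    return torch.stack([x[p].mean(dim=0) for p in unpool_parents])
+
+# toy example: 5 fine vertices pooled to 2 coarse vertices and expanded back
+x_fine = torch.rand(5, 3)
+x_coarse = average_pool(x_fine, pool_groups=[[0, 2, 3], [1, 3, 4]])
+x_up = average_unpool(x_coarse, unpool_parents=[[0], [1], [0, 1], [0, 1], [1]])
+```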
+
+The padding is crucial for the network to consider the regional patches in a larger context. Since the network handles the patches separately, we consider the features of the neighboring patches in a padding of size 2 as in [2]. If the vertices are boundary vertices, we decide to pad the patch with the boundary vertices' features.
+
+§ 4.3 NETWORK ARCHITECTURE
+
+While using specialized pooling and convolution techniques for the regular patches, the general structure of our network architecture is inspired by $\left\lbrack {2,{15}}\right\rbrack$ . Our autoencoder architecture combines spectral Chebyshev convolutional filters with the described pooling technique to process the padded regular patches of a semi-regular mesh. The autoencoder compresses every padded patch, which corresponds to one face of the low-resolution mesh, from ${\mathbb{R}}^{{276} \times 3}\left( {{rl} = 4}\right)$ to an ${hr} = {10}$ dimensional latent vector and reconstructs the original padded patch from the latent vector.
+
+The encoder consists of two blocks containing a Chebyshev convolutional layer followed by an average pooling layer and an exponential linear unit (ELU) as an activation function [25]. The output of the second encoding block is mapped to the latent space by a fully connected layer.
+
+The decoder mirrors the structure of the encoder by first applying a fully connected layer, which transforms the latent space vector back to a regular triangle representation with refinement level ${rl} = 2$ . Afterward, two decoding blocks consisting of an unpooling layer followed by a convolutional layer transform the coarse triangle representation back to the original padded patch representation. Finally, another Chebyshev convolutional layer is applied without activation function to reconstruct the original patch coordinates by reducing the number of features to three dimensions.
+
+All Chebyshev convolutional layers use $K = 6$ Chebyshev polynomials. Table 3 in the supplementary material gives a detailed view of the structure of the network together with the parameter numbers per layer which sum up to 23,053. Figure 2 illustrates the patch sizes inside the autoencoder. Note that we are able to handle non-manifold edges of the coarse base mesh because the patches, whose interiors by construction have only manifold-edges, are fed separately. The code will be provided as supplementary material.
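+
+The overall structure can be sketched compactly as below. The channel and vertex counts follow Table 3; the per-level `edges`, `pool`, and `unpool` operators are placeholders that would come from the fixed patch hierarchy, so this is a structural sketch rather than the full implementation:
+
+```python
+import torch
+import torch.nn as nn
+from torch_geometric.nn import ChebConv
+
+class SpectralCoSMASketch(nn.Module):
+    """Encoder: two blocks of ChebConv, pooling, and ELU, then a fully connected layer to
+    the latent code; the decoder mirrors this structure (vertex counts 267 -> 78 -> 15)."""
+
+    def __init__(self, hr: int = 10, K: int = 6):
+        super().__init__()
+        self.enc1, self.enc2 = ChebConv(3, 16, K), ChebConv(16, 32, K)
+        self.to_latent, self.from_latent = nn.Linear(32 * 15, hr), nn.Linear(hr, 32 * 15)
+        self.dec1, self.dec2 = ChebConv(32, 32, K), ChebConv(32, 16, K)
+        self.out = ChebConv(16, 3, K)
+        self.act = nn.ELU()
+
+    def forward(self, x, edges, pool, unpool):
+        # x: (267, 3) padded patch; edges[i]: connectivity at the 267-/78-/15-vertex levels
+        h = self.act(pool[0](self.enc1(x, edges[0])))        # (78, 16)
+        h = self.act(pool[1](self.enc2(h, edges[1])))        # (15, 32)
+        z = self.to_latent(h.reshape(-1))                    # latent code of size hr
+        h = self.from_latent(z).reshape(15, 32)
+        h = self.act(self.dec1(unpool[0](h), edges[1]))      # (78, 32)
+        h = self.act(self.dec2(unpool[1](h), edges[0]))      # (267, 16)
+        return self.out(h, edges[0]), z                      # reconstruction and embedding
+```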
+
+This spectral CoSMA architecture can handle all surface meshes that have been remeshed into a semi-regular mesh representation of the same refinement level. Thanks to the remeshing and the separate handling of the regional padded patches, the workflow is independent of the original irregular mesh connectivity.
+
+§ 4.4 SURFACE-AWARE LOSS CALCULATION
+
+The authors of the patch-based spatial CoSMA [2] employ a patch-wise mean squared error as the training loss. However, that loss calculation does not keep track of the multiple appearances of vertices on the patch boundaries. Therefore, it is not surface-aware, and during training the error on the patch boundaries is weighted higher than in the interior of the patches. By weighting the vertex-wise error in the training loss by the vertices' number of appearances in the different patches, we employ a surface-aware error for training. This reduces the P2S error, as visible in the ablation study.
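+
+A hedged sketch of such a surface-aware loss is given below. The tensor layout, the exact normalization, and the toy appearance counts are assumptions; the key idea taken from the text is only the down-weighting of vertices by how often they appear across patches:
+
+```python
+import torch
+
+def surface_aware_loss(recon: torch.Tensor, target: torch.Tensor,
+                       appearance_count: torch.Tensor) -> torch.Tensor:
+    """recon, target: (P, N, 3) patch-wise vertex coordinates; appearance_count: (P, N)
+    number of patches each vertex belongs to (1 for interior vertices)."""
+    sq_err = ((recon - target) ** 2).sum(dim=-1)           # squared error per vertex
+    weights = 1.0 / appearance_count                       # shared boundary vertices count once
+    return (weights * sq_err).sum() / weights.sum()
+
+# toy example: two patches of 4 vertices; one vertex is shared by both patches
+recon = torch.rand(2, 4, 3)
+target = torch.rand(2, 4, 3)
+counts = torch.tensor([[1.0, 1.0, 1.0, 2.0],
+                       [2.0, 1.0, 1.0, 1.0]])
+print(surface_aware_loss(recon, target, counts))
+```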
+
+Table 1: Point to surface (P2S) errors $\left( {\times {10}^{-2}}\right)$ between reconstructed unseen semi-regular meshes $\left( {{rl} = 4}\right)$ and original irregular mesh and their standard deviations for three different training runs. $\left\lbrack {4,{13},{15}}\right\rbrack$ have to be trained per mesh; we and $\left\lbrack 2\right\rbrack$ train one network for all three animals in the GALLOP dataset. *: the elephant has not been seen by the network during training.
+
+| Mesh Class | CoMA [15] | Neural3DMM [4] | SubdivNet [13] | Spatial CoSMA [2] | Ours |
+| --- | --- | --- | --- | --- | --- |
+| FAUST | $0.7073 \pm 1.751$ | $0.4064 \pm 0.921$ | $2.8190 \pm 4.699$ | $0.0224 \pm 0.045$ | $\mathbf{0.0031 \pm 0.006}$ |
+| Horse | $0.0053 \pm 0.017$ | $0.0096 \pm 0.045$ | $0.0113 \pm 0.025$ | $0.0078 \pm 0.012$ | $\mathbf{0.0022 \pm 0.005}$ |
+| Camel | $0.0075 \pm 0.023$ | $0.0145 \pm 0.056$ | $0.0113 \pm 0.024$ | $0.0091 \pm 0.014$ | $\mathbf{0.0030 \pm 0.006}$ |
+| Elephant | $0.0101 \pm 0.031$ | $0.0147 \pm 0.057$ | $0.0145 \pm 0.032$ | $0.0316 \pm 0.068$* | $0.0054 \pm 0.012$* |
+
+Table 2: P2S errors $\left( {\times {10}^{-2}}\right)$ for three different training runs. Additionally, the Euclidean P2S error (in cm) is given. *: the entire YARIS dataset has not been seen by the network during training.
+
+| Dataset | Component Lengths | Spatial CoSMA [2] Test P2S | Spatial CoSMA [2] Eucl. E. | Ours Test P2S | Ours Eucl. E. |
+| --- | --- | --- | --- | --- | --- |
+| TRUCK | 135-370 cm | $0.0660 \pm 0.117$ | 2.76 cm | $0.0013 \pm 0.003$ | 0.26 cm |
+| YARIS* | 21-91 cm | $0.2061 \pm 0.438$ | 0.84 cm | $0.0375 \pm 0.088$ | 0.31 cm |
+
+§ 5 EXPERIMENTS
+
+We test our spectral CoSMA for semi-regular meshes using a setup similar to [2] on four different datasets and compare our achieved reconstruction errors to state-of-the-art surface mesh autoencoders.
+
+§ 5.1 DATASETS
+
+GALLOP: The dataset contains triangular meshes representing a motion sequence with 48 timesteps from a galloping horse, elephant, and camel [26]. The galloping movement is similar but the meshes representing the surfaces of the three animals are different in connectivity and the number of vertices. This is why the baseline autoencoders have to be trained three times. The surface approximations are remeshed to semi-regular meshes with refinement level ${rl} = 4$ for each animal. The new meshes are still of different connectivity, but all are made up of regional regular patches. Table 6 lists the resulting numbers of vertices. We normalize the semi-regular meshes to $\left\lbrack {-1,1}\right\rbrack$ as in [2]. Before inputting the data to the CoSMAs, every patch is translated to zero mean. We use the first 70% of the galloping sequence of the horse and camel for training. The architecture is tested on the remaining ${30}\%$ and the whole sequence of the elephant, which is never seen during the training for the CoSMAs.
+
+FAUST: The dataset contains 100 meshes [27], which are in correspondence to each other. The irregular surface meshes represent 10 different bodies in 10 different poses. For the experiments, we consider two unknown poses of all bodies (20% of the data) in the testing set. The meshes are remeshed and normalized in the same way as for the GALLOP dataset.
+
+TRUCK and YARIS: In a car crash simulation the car components, which are generally represented by surface meshes, often deform in different patterns. Every component is discretized by a surface mesh, while the local deformation is described by the same physical rules. Following [2], the TRUCK dataset contains 32 completed frontal crash simulations and 6 components, while the YARIS dataset contains 10 simulations and 10 components. 30 simulations and 70% of the timesteps of the TRUCK dataset are included in the training set. The remaining samples from the TRUCK dataset and the entire YARIS dataset, representing a different car, are considered for testing. For this setup, the authors of $\left\lbrack {2,{28}}\right\rbrack$ detect patterns in the deformation of the TRUCK and YARIS components. We normalize the meshes that discretize car components to zero mean and to the range $\left\lbrack {-1,1}\right\rbrack$ while preserving the ratio between the coordinates. Every patch is translated to zero mean.
+
+Figure 3: Reconstructed unknown FAUST pose and elephant test sample at $t = {43}$ by CoMA, Neural3DMM, SubdivNet Autoencoder, spatial CoSMA, and our network. P2S error of the reconstructed faces is highlighted. More reconstruction examples are given in the supplementary material. * The elephant's mesh has not been presented during training to spatial CoSMA and our network.
+
+§ 5.2 TRAINING DETAILS
+
+We train the network (implemented in PyTorch [29] and PyTorch Geometric [30]) with the adaptive learning rate optimization algorithm Adam [31]. For the GALLOP and the FAUST dataset, we use a learning rate of 0.0001 and train for 150 epochs using a batch size of 100. For the TRUCK data, we choose a batch size of 100 combined with a learning rate of 0.001 for 300 epochs, since the variation inside the dataset is higher. We minimize the surface-aware loss between the original and reconstructed regional patches of the surface mesh without considering the padding. To augment the data in the case of the GALLOP and the FAUST dataset, we rotate the regional patches by ${0}^{ \circ },{120}^{ \circ }$, and ${240}^{ \circ }$.
+
+Our architecture requires at least ${50}\%$ fewer parameters than the CoMA, Neural3DMM, and SubdivNet networks, because for increasing ${rl}$ and consequently finer meshes, the CoSMAs require only a few more parameters in the linear layers (compare Tables 6 and 7 in the supplementary material). This is because all patches share the same convolutional filters and their parameters. The spectral CoSMA approach requires 15% fewer parameters than the spatial CoSMA approach. The runtime analysis and the ablation study justifying the parameter choices are provided in the supplementary material.
+
+§ 5.3 RECONSTRUCTIONS OF THE MESHES
+
+The mean squared error between true and reconstructed vertices of the semi-regular mesh allows a comparison of different methods only if the same remeshing result is used. In contrast to [2], we compare the reconstructed semi-regular mesh directly to the original irregular surface mesh by calculating a point-to-surface (P2S) error. We average the mean squared errors between the vertices of the semi-regular mesh and their orthogonal projections onto the surface described by the irregular mesh. This allows us to compare the reconstruction errors when using different remeshing results or refinements.
+
+Besides CoMA [15] and Neural3DMM [4], we use an additional baseline semi-regular mesh autoencoder that uses our network's architecture with the pooling and convolutional layers from SubdivNet [13] to process the entire meshes. In Table 1 we compare the autoencoders for the GALLOP and FAUST datasets in terms of the P2S errors of reconstructed test samples, whose 3D coordinates lie in the range $\left\lbrack {-1,1}\right\rbrack$. Our network reduces the test reconstruction error for the GALLOP and FAUST datasets by more than ${50}\%$ and ${80}\%$, respectively, if the shape is presented to the autoencoder during the training. For unknown poses from the FAUST dataset, the limbs' positions are reconstructed inaccurately by the CoMA, Neural3DMM, and SubdivNet autoencoders. Especially if the pose is not similar to the training poses, their reconstruction fails, as Figure 3 illustrates.
+
+Figure 4: Reconstructed front beams from the TRUCK (length of ${150}\mathrm{\;{cm}}$ ) at time $t = {24}$ (test sample) from two crash simulations representing different deformation behavior and from the YARIS (length of ${65}\mathrm{\;{cm}}$ ) at $t = {15}$ . The average Euclidean P2S error (in $\mathrm{{cm}}$ ) of the faces is highlighted.
+
+The spectral CoSMA's reconstructions are generally smoother than the ones from the spatial CoSMA, which reduces the reconstruction errors. Figure 7 in the supplementary material shows that the reconstructed patch using spectral filters, which encode the connectivity of the whole patch in the Chebyshev polynomials, is smoother than the spatial reconstruction, where the convolutional kernels only consider the close neighborhood. Because the spatial CoSMA uses ${hr} = 8$ and no surface-aware loss, we also list our reconstruction errors using these parameters in the ablation study for a complete comparison.
+
+Transfer Learning to Meshes with New Connectivity: Our spectral CoSMA and the spatial CoSMA are the only networks that can reconstruct an unseen shape of different connectivity. The elephant's mesh has never been presented to our network; nevertheless, our reconstruction error is lower. Even though they are trained on the elephant, the baselines' reconstructions are worse and unstable in the legs, as Figure 3 illustrates. The spatial CoSMA's reconstructions of the unseen elephant are inferior to those of all the other networks, although its reconstructions of the known camel and horse are of similar quality to the other baselines. This highlights the improved transfer learning and generalization capability of the new spectral approach.
+
+Since the TRUCK and YARIS datasets contain 16 different meshes, the reconstruction results are compared between the CoSMA architectures. In Table 2 we present the average P2S errors for the TRUCK and YARIS dataset between the components scaled to range $\left\lbrack {-1,1}\right\rbrack$ and in cm. The entire YARIS dataset has never been presented to the network during training. The results on the YARIS in Figure 4 also show that our network not only reconstructs smoother surfaces in comparison to the spatial CoSMA but also has higher transfer learning capacities.
+
+A comparison of the results for refinement levels ${rl} = 3$ and ${rl} = 4$ for the TRUCK and YARIS datasets (see Table 8 in the supplementary material) shows the stability of the results from our spectral CoSMA. For the spatial CoSMA, on the other hand, the reconstruction quality decreases when increasing the refinement level. This is due to its fixed kernel size of 2: since the mesh is finer, the neighborhoods considered by a spatial filter with kernel size 2 cover smaller areas of the surface. The spectral CoSMA considers the entire patches in the spectral representation. Therefore, an increase in the refinement level does not impair the reconstruction quality.
+
+§ 5.4 LOW-DIMENSIONAL EMBEDDING
+
+We project the patch-wise hidden representations of size ${hr}$ into the two-dimensional space using the linear dimensionality reduction method Principal Component Analysis (PCA) [32]. Then we compare these patch-wise results to the $2\mathrm{D}$ embedding over time of the whole shape, by concatenating the hidden patch-wise representations and then applying PCA.
+
+The time-dependent embedding for the unseen elephant from the GALLOP dataset exhibits a periodic galloping sequence, visualized in Figure 5 (a). We compare how similar the 2D patch-wise embeddings are to the $2\mathrm{D}$ embedding for the entire shape, to determine how important the deformation of the patch is for the general deformation behavior of the whole shape. The patch-wise distance is visualized in Figure 5 (b) and its calculation is detailed in the supplementary material. We notice that this distance is the lowest for the body and legs, which define the elephant's gallop, whereas the movement of the head does not follow the periodic pattern.
+
+Figure 5: (a) 2D Embedding of the low-dimensional representation of the whole elephant over time. (b) Highlighting the distance of the patch-wise embeddings to the embedding of the whole shape. (c) Patch-wise score for the TRUCK’s front beam from Figure 4 at $t = {24}$ . Only the patch with the high score manifests the deformation in two patterns. This is visible in the example patches with high and low scores. The embedding's colors encode timestep and branch.
+
+For the TRUCK and YARIS datasets, the goal is the detection of clusters corresponding to different deformation patterns in the components' embeddings. This speeds up the analysis of car crash simulations since relations between model parameters and the deformation behavior are discovered more easily [28, 33]. In the 2D visualizations for the TRUCK components, we detect two clusters corresponding to different deformation behaviors, and our patch-based approach allows us to identify the patches that contribute most to this. For each patch, we define a score, which equals the accuracy of an SVM (between 0.5 and 1) that classifies the two observed deformation patterns of the entire component from the patch's embedding, see Figure 5 (c). The highlighted patches correlate to the left part of the beam, where the deformation is visibly different for two different TRUCK simulations in Figure 4. Note that this comparison of patch- and shape-embeddings does not lead to significant results for the spatial CoSMA [2] because of the instability of its results.
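+
+A minimal sketch of this per-patch score follows; the data shapes, the number of simulations, the random placeholder values, and the cross-validation setup are assumptions for illustration, not the exact protocol used in our experiments.
+
+```python
+import numpy as np
+from sklearn.svm import SVC
+from sklearn.model_selection import cross_val_score
+
+# emb[s, p, :] is assumed to hold the 2D embedding of patch p in simulation s, and
+# labels[s] in {0, 1} the deformation pattern observed for the whole component
+S, P = 32, 120                                   # assumed numbers of simulations and patches
+emb = np.random.randn(S, P, 2)                   # placeholder patch embeddings
+labels = np.random.randint(0, 2, size=S)         # placeholder pattern labels
+
+# score of a patch: accuracy of an SVM predicting the component's deformation pattern
+# from that patch's embedding alone (0.5 ~ uninformative, 1.0 ~ fully decisive patch)
+scores = np.array(
+    [cross_val_score(SVC(), emb[:, p, :], labels, cv=4).mean() for p in range(P)]
+)
+```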
+
+For the YARIS, which has never been seen by the network during training, we also visualize the low-dimensional representation for different components in 2D using PCA. We detect a deformation pattern in the front beams that splits the simulation set into two clusters, see Figure 9 in the supplementary material, a result similar to that of [2], who used a nonlinear dimensionality reduction.
+
+§ 6 CONCLUSION
+
+We have introduced a novel spectral mesh autoencoder pipeline for the analysis of deforming $3\mathrm{D}$ semi-regular surface meshes with different connectivity. This allows us to generate high-quality reconstructions of unseen meshes that have not been presented during training. In fact, the reconstruction quality for unknown meshes with our spectral CoSMA is higher than with baseline autoencoders that have seen the meshes during training. This improved transfer learning capability and reconstruction quality motivate the future analysis of generative models for the patch-based approach. For high-quality generative results, we also plan to improve the remeshing procedure to focus more on detailed structures. Currently, the loss of smaller detailed geometric structures in the remeshing has little effect on the results, since we aim to detect behavioral patterns in the low-dimensional representations of the global deformation.
+
+Additionally, we provide an understanding and interpretation of which surface areas lead to the patterns in the embedding space. We speculate that this information per patch could be used in further analysis. We also plan to apply the architecture to other tasks such as shape matching and segmentation.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..60a1c868294caa570a9a332efa95189a7311d44b
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,269 @@
+# Expectation Complete Graph Representations Using Graph Homomorphisms
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+We propose and study a practical graph embedding that in expectation is able to distinguish all non-isomorphic graphs and can be computed in polynomial time. The embedding is based on Lovász' characterization of graph isomorphism through an infinite dimensional vector of homomorphism counts. Recent work has studied the expressiveness of graph embeddings by comparing their ability to distinguish graphs to that of the Weisfeiler-Leman hierarchy. While previous methods have either limited expressiveness or are computationally impractical, we devise efficient sampling-based alternatives that are maximally expressive in expectation. We empirically evaluate our proposed embeddings and show competitive results on several benchmark graph learning tasks.
+
+## 1 Introduction
+
+We study novel efficient and expressive graph embeddings based on Lovász' characterisation of graph isomorphism through homomorphism counts. While most practical graph embeddings drop the property of completeness, that is, the ability to distinguish all non-isomorphic graphs, in favour of runtime, we devise efficient embeddings that retain completeness in expectation. To achieve that, we sample pattern graphs in a particular way, simultaneously guaranteeing completeness and polynomial runtime in expectation. We discuss related work, in particular the relationship to the $k$ -dimensional Weisfeiler-Leman isomorphism test, and show first results on benchmark datasets.
+
+While subgraph counts are also a reasonable choice for expectation complete graph embeddings, they have multiple drawbacks compared to homomorphism counts. Most importantly, from a computational perspective, computing subgraph counts even for simple pattern graphs such as trees or paths is NP-hard [Alon et al., 1995; Marx and Pilipczuk, 2014], while we can compute homomorphism counts efficiently [Díaz et al., 2002] as long as the pattern graphs have small treewidth, a measure of 'tree-likeness'. In particular, all known exact algorithms for subgraph isomorphism have a runtime exponential in the pattern size or the maximum degree of the pattern even for small treewidth, which is one of the main reasons why the graphlet kernel [Shervashidze et al., 2009] and similar fixed-pattern-based approaches [Bouritsas et al., 2022] only count subgraphs up to size around 5.
+
+Probably most important from a conceptual perspective is the relationship of homomorphism counts to the cut distance [Borgs et al., 2006; Lovász, 2012]. The cut distance is a well-studied and important distance on graphs that captures global structural but also sampling-based local information. It is well known that the distance given by (potentially approximated and sampled) homomorphism counts is close to the cut distance and hence has similar favourable properties. The cut distance, and hence homomorphism counts, capture the behaviour of all permutation-invariant functions on graphs. For an ongoing discussion about the importance of the cut distance and homomorphism counts in the context of graph learning, see Dell et al. [2018], Grohe [2020], and Hoang and Maehara [2020].
+
+Completeness in expectation essentially implies one powerful fact which no deterministic embedding with bounded expressiveness can guarantee: repetition will make the embedding more expressive eventually. If the graph embedding is complete in expectation it is guaranteed that sampling more patterns will eventually increase its expressiveness.
+
+## 2 Complete Graph Embeddings
+
+The graph isomorphism problem is a classical problem in graph theory and its computational complexity is a major open problem [Babai, 2016]. Following the classical result of Lovász [1967], two graphs are isomorphic if and only if they have the same infinite dimensional homomorphism count vectors. This provides a strong graph embedding for graph classification tasks [Barceló et al., 2021; Dell et al., 2018; Hoang and Maehara, 2020].
+
+A graph $G = \left( {V\left( G\right) , E\left( G\right) }\right)$ consists of a set $V\left( G\right)$ of vertices and a set $E\left( G\right) \subseteq \{ e \subseteq V\left( G\right) : \left| e\right| = 2\}$ of edges. The size of a graph is the number of its vertices. In the following $F$ and $G$ denote graphs, where $F$ represents a pattern graph and $G$ a graph in our training set. A homomorphism $\varphi : V\left( F\right) \rightarrow V\left( G\right)$ is a map that respects edges, i.e., $\{ v, w\} \in E\left( F\right) \Rightarrow \{ \varphi \left( v\right) ,\varphi \left( w\right) \} \in E\left( G\right)$ . An isomorphism is a bijective homomorphism whose inverse is also a homomorphism. We say that a distribution $\mathcal{D}$ over a countable domain $\mathcal{X}$ has full support if each $x \in \mathcal{X}$ has nonzero probability.
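+
+To make the definition concrete, the following brute-force sketch counts homomorphisms directly from the definition; it is exponential in $\left| {V\left( F\right) }\right|$ and is only meant as an illustration, not as the algorithm we use (function and variable names are ours).
+
+```python
+from itertools import product
+
+def hom(F_nodes, F_edges, G_nodes, G_edges):
+    """Brute-force hom(F, G): count all maps V(F) -> V(G) that send every edge of F
+    to an edge of G (exponential in |V(F)|, illustration only)."""
+    adj = {v: set() for v in G_nodes}
+    for u, v in G_edges:
+        adj[u].add(v)
+        adj[v].add(u)
+    count = 0
+    for image in product(G_nodes, repeat=len(F_nodes)):
+        phi = dict(zip(F_nodes, image))
+        if all(phi[v] in adj[phi[u]] for u, v in F_edges):
+            count += 1
+    return count
+
+# example: a path with two edges mapped into a triangle has 3 * 2 * 2 = 12 homomorphisms
+print(hom([0, 1, 2], [(0, 1), (1, 2)], [0, 1, 2], [(0, 1), (1, 2), (0, 2)]))
+```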
+
+Let ${\mathcal{G}}_{n}$ be the set of all finite graphs of size at most $n$ , let $\hom \left( {F, G}\right)$ denote the number of homomorphisms of $F$ to $G$ for arbitrary graphs $F$ and $G$ , and let ${\varphi }_{n}\left( G\right) = \hom \left( {{\mathcal{G}}_{n}, G}\right) = {\left( \hom \left( F, G\right) \right) }_{F \in {\mathcal{G}}_{n}}$ denote the Lovász vector of $G$ for ${\mathcal{G}}_{n}$ . Lovász [1967] proved the following classical theorem.
+
+Theorem 1 (Lovász [1967]). Two arbitrary graphs $G, H \in {\mathcal{G}}_{n}$ are isomorphic iff ${\varphi }_{n}\left( G\right) = {\varphi }_{n}\left( H\right)$ .
+
+We can define a simple kernel on ${\mathcal{G}}_{n}$ with the canonical inner product using ${\varphi }_{n}$ .
+
+Definition 2 (Complete Lovász kernel). Let ${k}_{{\varphi }_{n}}\left( {G, H}\right) = \left\langle {{\varphi }_{n}\left( G\right) ,{\varphi }_{n}\left( H\right) }\right\rangle$ .
+
+Note that ${k}_{{\varphi }_{n}}$ is a complete graph kernel [Gärtner et al.,2003] on ${\mathcal{G}}_{n}$ , i.e., ${k}_{{\varphi }_{n}}$ can be used to distinguish non-isomorphic graphs of size $n$ . Similarly, we define complete graph embeddings.
+
+Definition 3. Let $\varphi : \mathcal{G} \rightarrow X$ be a permutation-invariant graph embedding from a family of graphs $\mathcal{G}$ to a vector space $X$ . We call $\varphi$ complete (on $\mathcal{G}$ ) if $\varphi \left( G\right) \neq \varphi \left( H\right)$ for all non-isomorphic $G, H \in \mathcal{G}$ .
+
+When studying graph embeddings and graph kernels we face the tradeoff between efficiency and expressiveness: complete graph representations are unlikely to be computable in polynomial-time [Gärtner et al., 2003] and hence most practical graph representations drop completeness in favour of polynomial runtime. In our work, we study random graph representations. While dropping completeness and being efficiently computable, this allows us to keep a slightly weaker yet desirable property: completeness in expectation.
+
+Definition 4. A graph embedding ${\varphi }_{X}$ , which depends on a random variable $X$ , is complete in expectation if the graph embedding given by the expectation, ${\mathbb{E}}_{X}\left\lbrack {{\varphi }_{X}\left( \cdot \right) }\right\rbrack$ , is complete.
+
+Similarly, we say that the corresponding kernel ${k}_{X}\left( {G, H}\right) = \left\langle {{\varphi }_{X}\left( G\right) ,{\varphi }_{X}\left( H\right) }\right\rangle$ is complete in expectation. We can use Lovász' isomorphism theorem to devise graph embeddings that are complete in expectation. For that, let ${e}_{F} \in {\mathbb{R}}^{{\mathcal{G}}_{n}}$ be the '$F$th' standard basis unit-vector of ${\mathbb{R}}^{{\mathcal{G}}_{n}}$ .
+
+Theorem 5. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{n}$ with full support and $G \in {\mathcal{G}}_{n}$ . Then the graph embedding ${\varphi }_{F}\left( G\right) = \hom \left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel $k$ are complete in expectation.
+
+### 2.1 Expectation Complete Embeddings and Kernels on ${\mathcal{G}}_{\infty }$
+
+In this section, we generalise the previous result to the set of all finite graphs ${\mathcal{G}}_{\infty }$ . Theorem 1 holds for $G, H \in {\mathcal{G}}_{\infty }$ and the mapping ${\varphi }_{\infty }$ that maps each $G \in {\mathcal{G}}_{\infty }$ to an infinite-dimensional vector. The resulting vector space, however, is not a Hilbert space with the usual inner product. To see this, consider any graph $G$ that has at least one edge. Then $\hom \left( {{P}_{n}, G}\right) \geq 2$ for every path ${P}_{n}$ of length $n \in \mathbb{N}$ . Thus, the inner product $\left\langle {{\varphi }_{\infty }\left( G\right) ,{\varphi }_{\infty }\left( G\right) }\right\rangle$ is not finite.
+
+To define a kernel on ${\mathcal{G}}_{\infty }$ without fixing a maximum size of graphs, i.e., restricting to ${\mathcal{G}}_{n}$ for some $n \in \mathbb{N}$ , we define the countable-dimensional vector ${\bar{\varphi }}_{\infty }\left( G\right) = {\left( {\hom }_{\left| V\left( G\right) \right| }\left( F, G\right) \right) }_{F \in {\mathcal{G}}_{\infty }}$ where
+
+$$
+{\hom }_{\left| V\left( G\right) \right| }\left( {F, G}\right) = \left\{ \begin{array}{ll} \hom \left( {F, G}\right) & \text{ if }\left| {V\left( F\right) }\right| \leq \left| {V\left( G\right) }\right| , \\ 0 & \text{ if }\left| {V\left( F\right) }\right| > \left| {V\left( G\right) }\right| . \end{array}\right.
+$$
+
+That is, ${\bar{\varphi }}_{\infty }\left( G\right)$ is the projection of ${\varphi }_{\infty }\left( G\right)$ to the subspace that gives us the homomorphism counts for all graphs of size at most that of $G$ . Note that this is a well-defined map of graphs to a subspace of the ${\ell }^{2}$ space, i.e., sequences ${\left( {x}_{i}\right) }_{i}$ over $\mathbb{R}$ with $\mathop{\sum }\limits_{i}{\left| {x}_{i}\right| }^{2} < \infty$ . Hence, the kernel given by the canonical inner product ${\bar{k}}_{\infty }\left( {G, H}\right) = \left\langle {{\bar{\varphi }}_{\infty }\left( G\right) ,{\bar{\varphi }}_{\infty }\left( H\right) }\right\rangle$ is finite and positive semi-definite. Note that we can rewrite
+
+${\bar{k}}_{\infty }\left( {G, H}\right) = {k}_{\min }\left( {G, H}\right) = \left\langle {{\varphi }_{{n}^{\prime }}\left( G\right) ,{\varphi }_{{n}^{\prime }}\left( H\right) }\right\rangle$ where ${n}^{\prime } = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ . While the first hunch might be to count patterns up to size $\max \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ , this is not necessary to guarantee completeness. Moreover, the corresponding map ${k}_{\max }$ is not even positive semi-definite.
+
+Lemma 6. ${k}_{\min }$ is a complete kernel on ${\mathcal{G}}_{\infty }$ .
+
+Given a sample of graphs $S$ , we note that for $n = \mathop{\max }\limits_{{G \in S}}\left| {V\left( G\right) }\right|$ we only need to consider patterns up to size $n$ . ${}^{1}$ As the number of graphs of a given size $n$ is superexponential in $n$ , it is impractical to compute all such counts. Hence, we propose to resort to sampling.
+
+Theorem 7. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{\infty }$ with full support and $G \in {\mathcal{G}}_{\infty }$ . Then ${\bar{\varphi }}_{F}\left( G\right) =$ ${\hom }_{\left| V\left( G\right) \right| }\left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel are complete in expectation.
+
+### 2.2 Sampling multiple patterns
+
+Sampling just one pattern $F$ will not result in a practical graph embedding. Thus, we propose to sample $\ell$ patterns ${F}_{1},\ldots ,{F}_{\ell } \sim \mathcal{D}$ i.i.d. and construct the embedding ${\varphi }^{\ell }\left( G\right) \in {\mathbb{N}}_{0}^{\ell }$ with ${\left( {\varphi }^{\ell }\left( G\right) \right) }_{i} =$ $\hom \left( {{F}_{i}, G}\right)$ if $\left| {V\left( {F}_{i}\right) }\right| \leq \left| {V\left( G\right) }\right|$ and 0 otherwise for all $i \in \left\lbrack \ell \right\rbrack$ . Note that for the dot product it holds that ${\varphi }^{\ell }{\left( G\right) }^{T}{\varphi }^{\ell }\left( H\right) = \mathop{\sum }\limits_{{i = 1}}^{\ell }\left\langle {{\bar{\varphi }}_{{F}_{i}}\left( G\right) ,{\bar{\varphi }}_{{F}_{i}}\left( H\right) }\right\rangle$ as long as we do not sample patterns twice. ${}^{2}$
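+
+A minimal sketch of this construction follows; it reuses a brute-force homomorphism counter (as in the sketch of Section 2) and a hypothetical `sample_pattern` routine, so the function names, the pattern representation as (nodes, edges) pairs, and the value $\ell = 30$ in the usage comment are assumptions for illustration only.
+
+```python
+from itertools import product
+
+def hom(F_nodes, F_edges, G_nodes, G_adj):
+    """Brute-force homomorphism count, as in the Section 2 sketch."""
+    return sum(
+        all(phi[v] in G_adj[phi[u]] for u, v in F_edges)
+        for image in product(G_nodes, repeat=len(F_nodes))
+        for phi in [dict(zip(F_nodes, image))]
+    )
+
+def embedding(G_nodes, G_adj, patterns):
+    """phi^l(G): hom(F_i, G) if |V(F_i)| <= |V(G)|, and 0 otherwise, for i = 1..l."""
+    return [
+        hom(F_nodes, F_edges, G_nodes, G_adj) if len(F_nodes) <= len(G_nodes) else 0
+        for F_nodes, F_edges in patterns
+    ]
+
+# hypothetical usage: draw l = 30 patterns i.i.d. from a sampler such as the
+# k-tree based one sketched in Appendix A, then embed every graph in the sample
+# patterns = [sample_pattern() for _ in range(30)]
+# x_G = embedding(G_nodes, G_adj, patterns)
+```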
+
+## 3 Computing Embeddings in Expected Polynomial Time
+
+A graph embedding that is complete in expectation must be efficiently computable to be practical. In this section, we describe our main result achieving polynomial runtime in expectation. The best known algorithms [Díaz et al.,2002] to exactly compute $\hom \left( {F, G}\right)$ take time
+
+$$
+\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right) \tag{1}
+$$
+
+where $\operatorname{tw}\left( F\right)$ is the treewidth of the pattern graph $F$ . Thus, a straightforward sampling strategy to achieve polynomial runtime in expectation is to give decreasing probability mass to patterns with higher treewidth. Unfortunately, in the case of ${\mathcal{G}}_{\infty }$ this is not possible.
+
+Lemma 8. There exists no distribution $\mathcal{D}$ with full support on ${\mathcal{G}}_{\infty }$ such that the expected runtime of Eq. (1) becomes polynomial in $\left| {V\left( G\right) }\right|$ for all $G \in {\mathcal{G}}_{\infty }$ .
+
+To resolve this issue we have to take the size of the largest graph in our sample into account. For a given sample $S \subseteq {\mathcal{G}}_{n}$ of graphs, where $n$ is the maximum number of vertices in $S$ , we can construct simple distributions achieving polynomial time in expectation.
+
+Theorem 9. There exists a distribution $\mathcal{D}$ such that computing the expectation complete graph embedding ${\bar{\varphi }}_{X}\left( G\right)$ takes polynomial time in $\left| {V\left( G\right) }\right|$ in expectation for all $G \in {\mathcal{G}}_{n}$ .
+
+Proof sketch. We first draw a treewidth upper bound $k$ from an appropriate distribution. For example, a Poisson distribution with parameter $\lambda = \mathcal{O}\left( \frac{\log n}{n}\right)$ is sufficient. We have to ensure that each possible graph with treewidth up to $k$ gets a nonzero probability of being drawn. For that we first draw a $k$ -tree, a maximal graph of treewidth $k$ , and then take a random subgraph of it.
+
+Note that we do not require that the patterns are sampled uniformly at random. It merely suffices that each pattern has a nonzero probability of being drawn. To satisfy a runtime of $\mathcal{O}\left( {\left| V\left( G\right) \right| }^{d + 1}\right)$ in expectation, for example, a Poisson distribution with $\lambda \leq \frac{1 + d\log n}{n}$ is sufficient.
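+
+For illustration, drawing the treewidth bound could look as follows; the concrete values of $n$ and $d$ are assumptions, and only the form of $\lambda$ follows the bound stated above.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, d = 500, 3                          # assumed sample bound |V(G)| <= n and target degree d
+lam = (1 + d * np.log(n)) / n          # lambda <= (1 + d log n) / n from the text
+k = rng.poisson(lam)                   # treewidth upper bound; small values are drawn with
+                                       # high probability, larger ones exponentially rarely
+```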
+
+## 4 Related Work
+
+The $k$ -dimensional Weisfeiler-Leman (WL) test and the Lovász vector restricted to patterns up to treewidth $k$ are equally expressive [Dell et al.,2018; Dvořák,2010]. We propose an efficiently computable embedding matching the expressiveness of $k$ -WL, and hence also MPNNs and $k$ -GNNs [Morris et al., 2019; Xu et al., 2019], in expectation, see Appendix D.
+
+Dell et al. [2018] proposed a complete graph kernel based on homomorphism counts related to our ${k}_{\min }$ kernel. Instead of implicitly restricting the embedding to only a finite number of patterns, as we do, they weigh the homomorphism counts such that the inner product defined on the whole Lovász vectors converges. However, Dell et al. [2018] do not discuss runtime aspects and so, our approach can be seen as an efficient sampling-based alternative to their weighted kernel.
+
+---
+
+${}^{1}$ Actually, it is sufficient to go up to the size of the second largest graph.
+
+${}^{2}$ Note that it does not affect the expressiveness results if we sample a pattern multiple times.
+
+---
+
+Table 1: Cross-validation accuracies on benchmark datasets
+
+| method | MUTAG | IMDB-BIN | IMDB-MULTI | PAULUS25 | CSL |
+| --- | --- | --- | --- | --- | --- |
+| GHC-tree | ${89.28} \pm {8.26}$ | ${72.10} \pm {2.62}$ | ${48.60} \pm {4.40}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
+| GHC-cycle | ${87.81} \pm {7.46}$ | ${70.93} \pm {4.54}$ | ${47.41} \pm {3.67}$ | ${7.14} \pm {0.00}$ | ${100.00} \pm {0.00}$ |
+| GNTK | ${89.46} \pm {7.03}$ | ${75.61} \pm {3.98}$ | ${51.91} \pm {3.56}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
+| GIN | ${89.40} \pm {5.60}$ | ${70.70} \pm {1.10}$ | ${43.20} \pm {2.00}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
+| ours (SVM) | ${86.85} \pm {1.28}$ | ${69.83} \pm {0.15}$ | ${47.31} \pm {0.46}$ | ${100.00} \pm {0.00}$ | ${38.89} \pm {11.18}$ |
+| ours (MLP) | ${88.33} \pm {1.11}$ | ${70.37} \pm {0.85}$ | ${48.75} \pm {0.20}$ | ${49.84} \pm {6.74}$ | ${11.78} \pm {1.54}$ |
+
+Using graph homomorphism counts as a feature embedding for graph learning tasks was proposed before by Hoang and Maehara [2020]. They discuss various aspects of homomorphism counts important for learning tasks, in particular, universality aspects and their power to capture certain properties of the graph, such as bipartiteness. Instead of relying on sampled patterns, which we use to guarantee completeness in expectation, they propose to use a fixed number of small pattern graphs. This limits the practical usage of their approach for computational complexity reasons. In their experiments the authors only use tree and cycle patterns up to size 6 and 8, respectively, whereas we allow patterns of arbitrary size and treewidth, guaranteeing polynomial runtime in expectation. Similarly to Hoang and Maehara [2020], we use the computed embeddings as features for a kernel SVM (with RBF kernel) and an MLP.
+
+Instead of embedding the whole graph into a vector of homomorphism counts, Barceló et al. [2021] proposed to use rooted homomorphism counts as node features in conjunction with a graph neural network (GNN). They discuss which patterns are required to be at least as expressive as the $k$ -WL test. We achieve this in expectation when selecting an appropriate sampling distribution.
+
+Wu et al. [2019] adapted random Fourier features [Rahimi and Recht, 2007] to graphs and proposed a sampling-based variant of the global alignment graph kernel. Similar sampling-based ideas were discussed before for the graphlet kernel [Shervashidze et al., 2009] and frequent-subtree kernels [Welke et al., 2015]. None of these three papers discusses expressiveness aspects, however.
+
+## 5 Experiments
+
+We performed preliminary experiments on several benchmark datasets. To this end, we sample a fixed number $\ell = {30}$ of patterns as described in Appendix A and compute the sampled min kernel as described in Section 3. Table 1 shows averaged accuracies of SVM and MLP classifiers trained on our feature sets. We follow the experimental design of Hoang and Maehara [2020] and compare to their published results. Even with as few as 30 features, the results of our approach are comparable to the competitors on real-world datasets. Furthermore, it is interesting to note that an SVM with RBF kernel and our features performs perfectly on the PAULUS25 dataset, i.e., it is able to decide isomorphism for the strongly regular graphs in this dataset. It also shows good performance, although with high deviation, on the CSL dataset, where only the method specifically designed for this dataset, GHC-cycle, performs well. We also included GNTK [Du et al., 2019] and GIN [Xu et al., 2019] in the comparison.
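+
+The sketch below shows the kind of evaluation we run on the resulting feature matrices; the random stand-in features and labels, the feature scaling, and the 10-fold setup are assumptions for illustration and do not reproduce the exact protocol of Hoang and Maehara [2020].
+
+```python
+import numpy as np
+from sklearn.pipeline import make_pipeline
+from sklearn.preprocessing import StandardScaler
+from sklearn.svm import SVC
+from sklearn.model_selection import cross_val_score
+
+# X holds one row per graph with l = 30 sampled homomorphism-count features
+X = np.random.rand(188, 30)              # stand-in; e.g. MUTAG contains 188 graphs
+y = np.random.randint(0, 2, size=188)    # stand-in class labels
+
+clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
+scores = cross_val_score(clf, X, y, cv=10)
+print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
+```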
+
+## 6 Conclusion
+
+As future work, we will investigate approximate counts to make our implementation more efficient [Beaujean et al., 2021]. It is unclear how this affects expressiveness, as we lose permutation-invariance. Going beyond expressiveness results, our goal is to further study graph similarities suitable for graph learning, such as the cut distance as proposed by Grohe [2020]. Finally, instead of sampling patterns from a fixed distribution, a more promising variant is to adapt the sampling process in a sample-dependent manner. One could, for example, draw new patterns until each graph in the sample has a unique embedding (up to isomorphism) or at least until we can distinguish 1-WL classes. Alternatively, we could pre-compute frequent or interesting patterns and use them to adapt the distribution. Such approaches would employ the power of randomisation to select a fitting graph representation in a data-driven manner, instead of relying on a finite set of fixed and pre-determined patterns as in previous work [Barceló et al., 2021; Bouritsas et al., 2022].
+
+## References
+
+Noga Alon, Raphael Yuster, and Uri Zwick. Color-coding. J. ACM, 42(4):844-856, 1995.
+
+László Babai. Graph isomorphism in quasipolynomial time. In STOC, 2016.
+
+Pablo Barceló, Floris Geerts, Juan Reutter, and Maksimilian Ryschkov. Graph Neural Networks with Local Graph Parameters. In NeurIPS, 2021.
+
+Paul Beaujean, Florian Sikora, and Florian Yger. Graph homomorphism features: Why not sample? In Graph Embedding and Mining (GEM) Workshop at ECMLPKDD, 2021.
+
+Christian Borgs, Jennifer Chayes, László Lovász, Vera T Sós, Balázs Szegedy, and Katalin Vesztergombi. Graph limits and parameter testing. In STOC, 2006.
+
+Giorgos Bouritsas, Fabrizio Frasca, Stefanos P Zafeiriou, and Michael Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
+
+Saverio Caminiti, Emanuele G Fusco, and Rossella Petreschi. Bijective linear time coding and decoding for k-trees. Theory of Computing Systems, 46(2):284-300, 2010.
+
+Radu Curticapean, Holger Dell, and Dániel Marx. Homomorphisms are a good basis for counting small subgraphs. In STOC, 2017.
+
+Holger Dell, Martin Grohe, and Gaurav Rattan. Lovász meets Weisfeiler and Leman. In ICALP, 2018.
+
+Simon S Du, Kangcheng Hou, Russ R Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. In NeurIPS, 2019.
+
+Zdeněk Dvořák. On recognizing graphs by numbers of homomorphisms. J. Graph Theory, 64(4):330-342, 2010.
+
+Josep Díaz, Maria Serna, and Dimitrios M. Thilikos. Counting H-colorings of partial k-trees. Theoretical Computer Science, 281(1):291-309, 2002.
+
+Thomas Gärtner, Peter A. Flach, and Stefan Wrobel. On graph kernels: Hardness results and efficient alternatives. In COLT, 2003.
+
+Martin Grohe. Word2vec, node2vec, graph2vec, x2vec: Towards a theory of vector embeddings of structured data. In PODS, 2020.
+
+NT Hoang and Takanori Maehara. Graph homomorphism convolution. In ICML, 2020.
+
+László Lovász. Operations with structures. Acta Mathematica Hungarica, 18:321-328, 1967.
+
+László Lovász. Large Networks and Graph Limits, volume 60 of Colloquium Publications. American Mathematical Society, 2012.
+
+Dániel Marx and Michal Pilipczuk. Everything you always wanted to know about the parameterized complexity of subgraph isomorphism (but were afraid to ask). In International Symposium on Theoretical Aspects of Computer Science, 2014.
+
+Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In AAAI, 2019.
+
+Siqi Nie, Cassio P de Campos, and Qiang Ji. Learning bounded tree-width Bayesian networks via sampling. In European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, 2015.
+
+Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS, 2007.
+
+Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten Borgwardt. Efficient graphlet kernels for large graph comparison. In AISTATS, 2009.
+
+Pascal Welke, Tamás Horváth, and Stefan Wrobel. Probabilistic frequent subtree kernels. In International Workshop on New Frontiers in Mining Complex Patterns, 2015.
+
+Lingfei Wu, Ian En-Hsu Yen, Zhen Zhang, Kun Xu, Liang Zhao, Xi Peng, Yinglong Xia, and Charu Aggarwal. Scalable global alignment graph kernel using random features: From node embedding to graph embedding. In KDD, 2019.
+
+Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
+
+Jaemin Yoo, U Kang, Mauro Scanagatta, Giorgio Corani, and Marco Zaffalon. Sampling subgraphs with guaranteed treewidth for accurate and efficient graphical inference. In WSDM, 2020.
+
+## A Sampling details
+
+Given a pattern size $N \in \mathbb{N}$ , we first draw a treewidth upper bound $k < N$ from some distribution. Then we want to sample any graph with treewidth at most $k$ with a nonzero probability. A natural strategy is to first sample a $k$ -tree, which is a maximal graph with treewidth $k$ , and then take a random subgraph of it. Uniform sampling of $k$ -trees is described by Nie et al. [2015] and Caminiti et al. [2010]. Alternatively, the strategy of Yoo et al. [2020] is also possible. Note that we only have to guarantee that each pattern has a nonzero probability of being sampled; it does not have to be uniform. While guaranteed uniform sampling would be preferable, we resort to a simple sampling scheme that is easy to implement. We achieve a nonzero probability for each pattern of at most a given treewidth $k$ by first constructing a random $k$ -tree $P$ through its tree decomposition: we uniformly draw a tree $T$ on $N - k$ vertices and choose a root, and then create $P$ as the (unique up to isomorphism) $k$ -tree that has $T$ as its tree decomposition. We then randomly remove edges from that $k$ -tree i.i.d. with fixed probability (currently set to 0.1). This ensures that each subgraph of $P$ will be created with nonzero probability.
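+
+The following is a minimal, simplified sketch of such a sampler; it uses the standard incremental $k$ -tree construction (grow from a $(k+1)$ -clique by attaching each new vertex to a random existing $k$ -clique) rather than the tree-decomposition-based construction described above, keeps the 0.1 edge-removal probability from the text, and all function names and parameters are ours.
+
+```python
+import random
+from itertools import combinations
+
+def random_ktree(n, k, rng):
+    """Random k-tree on n >= k + 1 vertices via the incremental construction."""
+    edges = set(combinations(range(k + 1), 2))                 # start with a (k+1)-clique
+    cliques = [frozenset(c) for c in combinations(range(k + 1), k)]
+    for v in range(k + 1, n):
+        base = rng.choice(cliques)                             # attach v to a random k-clique
+        edges |= {tuple(sorted((u, v))) for u in base}
+        cliques += [(base - {u}) | {v} for u in base]          # new k-cliques containing v
+    return edges
+
+def sample_pattern(n, k, p_drop=0.1, rng=None):
+    """Pattern of treewidth <= k: build a random k-tree, drop edges i.i.d. with prob. p_drop."""
+    rng = rng or random.Random()
+    ktree = random_ktree(n, k, rng)
+    return list(range(n)), [e for e in ktree if rng.random() > p_drop]
+```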
+
+## B Implementation details
+
+The Python code and information to reproduce our experiments can be found online ${}^{3}$ . These sources will be made accessible on Github. We rely on the C++ code of Curticapean et al. [2017] ${}^{4}$ to efficiently compute homomorphism counts. While the code can compute a tree decomposition itself, we decided to simply provide it with our tree decomposition of the $k$ -tree, which we compute anyway, to make the computation more efficient. Additionally, we use the cross-validation-based evaluation with SVM and MLP of Hoang and Maehara [2020] ${}^{5}$ .
+
+## C Proofs
+
+Theorem 5. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{n}$ with full support and $G \in {\mathcal{G}}_{n}$ . Then the graph embedding ${\varphi }_{F}\left( G\right) = \hom \left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel $k$ are complete in expectation.
+
+Proof. Let $\mathcal{D}$ and ${\varphi }_{F}$ with $F \sim \mathcal{D}$ as stated and $G \in {\mathcal{G}}_{n}$ . Then
+
+$$
+g = {\mathbb{E}}_{F}\left\lbrack {{\varphi }_{F}\left( G\right) }\right\rbrack = \mathop{\sum }\limits_{{{F}^{\prime } \in {\mathcal{G}}_{n}}}\Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime }, G}\right) {e}_{{F}^{\prime }}.
+$$
+
+The vector $g$ has the entries ${\left( g\right) }_{{F}^{\prime }} = \Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime }, G}\right)$ . Let ${G}^{\prime }$ be a graph that is non-isomorphic to $G$ and let ${g}^{\prime } = {\mathbb{E}}_{F}\left\lbrack {{\varphi }_{F}\left( {G}^{\prime }\right) }\right\rbrack$ accordingly. By Theorem 1 we know that $\hom \left( {{\mathcal{G}}_{n}, G}\right) \neq$ $\hom \left( {{\mathcal{G}}_{n},{G}^{\prime }}\right)$ . Thus, there is an ${F}^{\prime }$ such that $\hom \left( {{F}^{\prime }, G}\right) \neq \hom \left( {{F}^{\prime },{G}^{\prime }}\right)$ . By definition of $\mathcal{D}$ we have that $\Pr \left( {F = {F}^{\prime }}\right) > 0$ and hence $\Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime }, G}\right) \neq \Pr \left( {F = {F}^{\prime }}\right) \hom \left( {{F}^{\prime },{G}^{\prime }}\right)$ which implies $g \neq {g}^{\prime }$ . That shows that ${\mathbb{E}}_{F}\left\lbrack {{\varphi }_{F}\left( \cdot \right) }\right\rbrack$ is complete and concludes the proof.
+
+Lemma 6. ${k}_{\min }$ is a complete kernel on ${\mathcal{G}}_{\infty }$ .
+
+Proof. Let $G, H \in {\mathcal{G}}_{\infty }$ . We have to show that
+
+$$
+{\bar{\varphi }}_{\infty }\left( G\right) = {\bar{\varphi }}_{\infty }\left( H\right) \Leftrightarrow G \cong H,
+$$
+
+where $G \cong H$ indicates that $G$ and $H$ are isomorphic. There are two cases:
+
+$\left| {V\left( G\right) }\right| = \left| {V\left( H\right) }\right|$ : Then, by Theorem 1 we have ${\varphi }_{N}\left( G\right) = {\varphi }_{N}\left( H\right)$ iff $G \cong H$ for $N = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \} = \left| {V\left( G\right) }\right| = \left| {V\left( H\right) }\right| .$
+
+$\left| {V\left( G\right) }\right| \neq \left| {V\left( H\right) }\right|$ : Let w.l.o.g. $0 < \left| {V\left( G\right) }\right| < \left| {V\left( H\right) }\right|$ . Let $P$ be the graph on exactly one vertex. Then $\hom \left( {P, G}\right) < \hom \left( {P, H}\right)$ , i.e., we can distinguish graphs on different numbers of vertices using homomorphism counts. As $\min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \} \geq 1$ , we have $P \in {\mathcal{G}}_{\left| V\left( G\right) \right| }$ and hence ${\varphi }_{\left| V\left( G\right) \right| }\left( G\right) \neq {\varphi }_{\left| V\left( G\right) \right| }\left( H\right)$ . The other direction follows directly from the fact that homomorphism counts are invariant under isomorphism.
+
+Theorem 7. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{\infty }$ with full support and $G \in {\mathcal{G}}_{\infty }$ . Then ${\bar{\varphi }}_{F}\left( G\right) =$ ${\hom }_{\left| V\left( G\right) \right| }\left( {F, G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel are complete in expectation.
+
+---
+
+${}^{3}$ https://drive.google.com/file/d/1kCDSORcLgpDWNdfJz2xIShWENTLVPgSe/view
+
+${}^{4}$ https://github.com/ChristianLebeda/HomSub
+
+${}^{5}$ https://github.com/gear/graph-homomorphism-network
+
+---
+
+Proof. We can apply the same arguments as in Theorem 5 to show that the expected embeddings of two graphs $G, H$ with ${n}^{\prime } = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ are equal iff their Lovász vectors restricted to size ${n}^{\prime }$ are equal. By Lemma 6 we know that the latter can only happen if the two graphs are isomorphic.
+
+Lemma 8. There exists no distribution $\mathcal{D}$ with full support on ${\mathcal{G}}_{\infty }$ such that the expected runtime of Eq. (1) becomes polynomial in $\left| {V\left( G\right) }\right|$ for all $G \in {\mathcal{G}}_{\infty }$ .
+
+Proof. Let $\mathcal{D}$ be such a distribution and let ${\mathcal{D}}^{\prime }$ be the marginal distribution on the treewidths of the graphs given by ${p}_{k} = \mathop{\Pr }\limits_{{F \sim \mathcal{D}}}\left( {\operatorname{tw}\left( F\right) = k}\right) > 0$ . Let $G$ be a given input graph in the sample with $n = \left| {V\left( G\right) }\right|$ . Díaz et al. [2002] have shown that computing $\hom \left( {F, G}\right)$ takes time $\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right)$ . Assume for the purpose of contradiction that we can guarantee an expected polynomial runtime (ignoring the $\left| {V\left( F\right) }\right|$ and constant factors for simplicity):
+
+$$
+{\mathbb{E}}_{F \sim \mathcal{D}}\left\lbrack {n}^{\operatorname{tw}\left( F\right) + 1}\right\rbrack = \mathop{\sum }\limits_{{k = 1}}^{\infty }{p}_{k}{n}^{k + 1} \leq C{n}^{c}
+$$
+
+for some constants $C, c \in \mathbb{N}$ . Then for all $k \geq c$ , it must hold that ${p}_{k}{n}^{k + 1} \leq C{n}^{c}$ , as all summands are positive. However, since ${p}_{k} > 0$ , for large enough $n$ the left-hand side is larger than the right-hand side, a contradiction.
+
+Theorem 9. There exists a distribution $\mathcal{D}$ such that computing the expectation complete graph embedding ${\bar{\varphi }}_{X}\left( G\right)$ takes polynomial time in $\left| {V\left( G\right) }\right|$ in expectation for all $G \in {\mathcal{G}}_{n}$ .
+
+Proof. Let $G \in {\mathcal{G}}_{n}$ . Draw a treewidth upper bound $k$ from a Poisson distribution with parameter $\lambda$ to be determined later. Select a distribution ${\mathcal{D}}_{n, k}$ which has full support on all graphs with treewidth up to $k$ and size up to $n$ , for example, the one described in Appendix A. Using the algorithm of [Díaz et al.,2002] this gives, for some constant $C \in \mathbb{N}$ , an expected runtime of
+
+${\mathbb{E}}_{k \sim \operatorname{Poi}\left( \lambda \right) , F \sim {\mathcal{D}}_{n, k}}\left\lbrack {C\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right\rbrack \leq {\mathbb{E}}_{k \sim \operatorname{Poi}\left( \lambda \right) }\left\lbrack {C{n}^{k + 2}}\right\rbrack = \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{\lambda }^{k}{e}^{-\lambda }}{k!}C{n}^{k + 2} = \frac{C{n}^{2}}{{e}^{\lambda }}{e}^{\lambda n}.$
+
+We need to bound the right hand side by some polynomial $D{n}^{d}$ for some constants $D, d \in \mathbb{N}$ . By rearranging terms we see that
+
+$$
+\lambda \leq \frac{\ln \frac{D}{C} + \left( {d - 2}\right) \ln n}{n - 1} = \mathcal{O}\left( \frac{\log n}{n}\right)
+$$
+
+is sufficient.
+
+## D Matching the expressiveness of $k$ -WL in expectation
+
+We devise a graph embedding matching the expressiveness of the $k$ -WL test in expectation.
+
+Theorem 10. Let $\mathcal{D}$ be a distribution with full support on the set of graphs with treewidth up to $k$ . The resulting graph embedding ${\varphi }_{F}^{k\text{-}\mathrm{{WL}}}\left( \cdot \right)$ with $F \sim \mathcal{D}$ has the same expressiveness as the $k$ -WL test in expectation. Furthermore, there is a specific such distribution such that we can compute ${\varphi }_{F}^{k\text{-}\mathrm{{WL}}}\left( G\right)$ in expected polynomial time $\mathcal{O}\left( {\left| V\left( G\right) \right| }^{k + 1}\right)$ for all $G \in {\mathcal{G}}_{\infty }$ .
+
+Proof. Let ${\mathcal{T}}_{k}$ be the set of graphs with treewidth up to $k$ and $\mathcal{D}$ be a distribution with full support on ${\mathcal{T}}_{k}$ . Then by the same arguments as before in Theorem 5, the expected embeddings of two graphs $G$ and $H$ are equal iff their Lovász vectors restricted to patterns in ${\mathcal{T}}_{k}$ are equal. By Dvořák [2010] and Dell et al. [2018] the latter happens iff $k$ -WL returns the same color histogram for both graphs. This proves the first claim.
+
+For the second claim note that the worst-case runtime for any pattern $F \in {\mathcal{T}}_{k}$ is $\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{k + 1}}\right)$ by Díaz et al. [2002]. However, the equivalence between homomorphism counts on ${\mathcal{T}}_{k}$ and $k$ -WL requires inspecting patterns $F$ of all sizes, in particular also patterns larger than the size $n$ of the input graph. To remedy this, we can draw the pattern size $m$ from some distribution with bounded expectation and full support on $\mathbb{N}$ . For example, the geometric distribution $m \sim \operatorname{Geom}\left( p\right)$ with any parameter $p \in \left( {0,1}\right)$ and expectation $\mathbb{E}\left\lbrack m\right\rbrack = \frac{1}{1 - p}$ is sufficient. By linearity of expectation we then obtain
+
+$$
+\mathbb{E}\left\lbrack {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right\rbrack \leq \mathbb{E}\left\lbrack m\right\rbrack {\left| V\left( G\right) \right| }^{k + 1} = \mathcal{O}\left( {\left| V\left( G\right) \right| }^{k + 1}\right) .
+$$
+
+Note that for the embedding ${\varphi }_{F}^{k\text{-}\mathrm{{WL}}}\left( \cdot \right)$ Lemma 8 does not apply. In particular, the distribution used to guarantee polynomial expected runtime is independent of $n$ and can be used for all of ${\mathcal{G}}_{\infty }$ .
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..20fd866cb97ca11bf40cd70cd17beead1c7a51cc
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/8GJyW4i2oST/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,141 @@
+§ EXPECTATION COMPLETE GRAPH REPRESENTATIONS USING GRAPH HOMOMORPHISMS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+We propose and study a practical graph embedding that in expectation is able to distinguish all non-isomorphic graphs and can be computed in polynomial time. The embedding is based on Lovász' characterization of graph isomorphism through an infinite dimensional vector of homomorphism counts. Recent work has studied the expressiveness of graph embeddings by comparing their ability to distinguish graphs to that of the Weisfeiler-Leman hierarchy. While previous methods have either limited expressiveness or are computationally impractical, we devise efficient sampling-based alternatives that are maximally expressive in expectation. We empirically evaluate our proposed embeddings and show competitive results on several benchmark graph learning tasks.
+
+§ 1 INTRODUCTION
+
+We study novel efficient and expressive graph embeddings based on Lovász' characterisation of graph isomorphism through homomorphism counts. While most practical graph embeddings drop the property of completeness, that is, the ability to distinguish all non-isomorphic graphs, in favour of runtime, we devise efficient embeddings that retain completeness in expectation. To achieve that, we sample pattern graphs in a particular way, simultaneously guaranteeing completeness and polynomial runtime in expectation. We discuss related work, in particular the relationship to the $k$ -dimensional Weisfeiler Leman isomorphism test, and show first results on benchmarks datasets.
+
+While subgraph counts are also a reasonable choice for expectation complete graph embeddings, they have multiple drawbacks compared to homomorphism counts. Most importantly, from a computational perspective, computing subgraph counts even for simple graphs such as trees or paths is NP-hard [Alon et al., 1995; Marx and Pilipczuk, 2014], while we can compute homomorphism counts efficiently [Díaz et al., 2002] as long as the pattern graphs have small treewidth, a measure of 'tree-likeness'. In particular, all known exact algorithms for subgraph isomorphism have a runtime exponentially in the pattern size or the maximum degree of the pattern even for small treewidth-one of the main reasons why the graphlet kernel [Shervashidze et al., 2009] and similar fixed pattern based approaches [Bouritsas et al., 2022] only count subgraphs up to size around 5.
+
+Probably most important from a conceptual perspective, is the relationship of homomorphism counts to the cut distance [Borgs et al., 2006; Lovász, 2012]. The cut distance is a well studied and important distance on graphs that captures global structural but also sampling-based local information. It is well known that the distance given by (potentially approximated and sampled) homomorphism counts is close to the cut distance and hence has similar favourable properties. The cut distance, and hence, homomorphism counts, capture the behaviour of all permutation-invariant functions on graphs. For an ongoing discussion about the importance of the cut distance and homomorphism counts in the context of graph learning, see Dell et al. [2018], Grohe [2020], and Hoang and Maehara [2020].
+
+Completeness in expectation essentially implies one powerful fact which no deterministic embedding with bounded expressiveness can guarantee: repetition will make the embedding more expressive eventually. If the graph embedding is complete in expectation it is guaranteed that sampling more patterns will eventually increase its expressiveness.
+
+§ 2 COMPLETE GRAPH EMBEDDINGS
+
+The graph isomorphism problem is a classical problem in graph theory and its computational complexity is a major open problem [Babai, 2016]. Following the classical result of Lovász [1967], two graphs are isomorphic if and only if they have the same infinite dimensional homomorphism count vectors. This provides a strong graph embedding for graph classification tasks [Barceló et al., 2021; Dell et al., 2018; Hoang and Maehara, 2020].
+
+A graph $G = \left( {V\left( G\right) ,E\left( G\right) }\right)$ consists of a set $V\left( G\right)$ of vertices and a set $E\left( G\right) = \{ e \subseteq V\left| \right| e \mid = 2\}$ of edges. The size of a graph is the number of its vertices. In the following $F$ and $G$ denote graphs, where $F$ represents a pattern graph and $G$ a graph in our training set. A homomorphism $\varphi : V\left( F\right) \rightarrow V\left( G\right)$ is a map that respects edges, i.e. $\{ v,w\} \in E\left( F\right) \Rightarrow \{ \varphi \left( v\right) ,\varphi \left( w\right) \} \in E\left( G\right)$ . An isomorphism is a bijective homomorphism whose inverse is also a homomorphism. We say that a distribution $\mathcal{D}$ over a countable domain $\mathcal{X}$ has full support if each $x \in X$ has nonzero probability.
+
+Let ${\mathcal{G}}_{n}$ be the set of all finite graphs of size at most $n$ and let $\hom \left( {F,G}\right)$ denote the number of homomorphisms of $F$ to $G$ for arbitrarily graphs and ${\varphi }_{n}\left( G\right) = \hom \left( {{\mathcal{G}}_{n},G}\right) = {\left( \hom \left( F,G\right) \right) }_{F \in {\mathcal{G}}_{n}}$ denote the Lovász vector of $G$ for ${\mathcal{G}}_{n}$ . Lovász [1967] proved the following classical theorem.
+
+Theorem 1 (Lovász [1967]). Two arbitrary graphs $G,H \in {\mathcal{G}}_{n}$ are isomorphic iff ${\varphi }_{n}\left( G\right) = {\varphi }_{n}\left( H\right)$ .
+
+We can define a simple kernel on ${\mathcal{G}}_{n}$ with the canonical inner product using ${\varphi }_{n}$ .
+
+Definition 2 (Complete Lovász kernel). Let ${k}_{{\varphi }_{n}}\left( {G,H}\right) = \left\langle {{\varphi }_{n}\left( G\right) ,{\varphi }_{n}\left( H\right) }\right\rangle$ .
+
+Note that ${k}_{{\varphi }_{n}}$ is a complete graph kernel [Gärtner et al.,2003] on ${\mathcal{G}}_{n}$ , i.e., ${k}_{{\varphi }_{n}}$ can be used to distinguish non-isomorphic graphs of size $n$ . Similarly, we define complete graph embeddings.
+
+Definition 3. Let $\varphi : \mathcal{G} \rightarrow X$ be a permutation-invariant graph embedding from a family of graphs $\mathcal{G}$ to a vector space $X$ . We call $\varphi$ complete (on $\mathcal{G}$ ) if $\varphi \left( G\right) \neq \varphi \left( H\right)$ for all non-isomorphic $G,H \in \mathcal{G}$ .
+
+When studying graph embeddings and graph kernels we face the tradeoff between efficiency and expressiveness: complete graph representations are unlikely to be computable in polynomial-time [Gärtner et al., 2003] and hence most practical graph representations drop completeness in favour of polynomial runtime. In our work, we study random graph representations. While dropping completeness and being efficiently computable, this allows us to keep a slightly weaker yet desirable property: completeness in expectation.
+
+Definition 4. A graph embedding ${\varphi }_{X}$ , which depends on a random variable $X$ , is complete in expectation if the graph embedding given by the expectation, ${\mathbb{E}}_{X}\left\lbrack {{\varphi }_{X}\left( \cdot \right) }\right\rbrack$ , is complete.
+
+Similarly, we say that the corresponding kernel ${k}_{X}\left( {G,H}\right) = \left\langle {{\varphi }_{X}\left( G\right) ,{\varphi }_{X}\left( H\right) }\right\rangle$ is complete in expectation. We can use Lovász' isomorphism theorem to devise graph embeddings that are complete in expectation. For that let ${e}_{F} \in {\mathbb{R}}^{{\mathcal{G}}_{n}}$ be the ’ $F$ th’ standard basis unit-vector of ${\mathcal{G}}_{n}$
+
+Theorem 5. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{n}$ with full support and $G \in {\mathcal{G}}_{n}$ . Then the graph embedding ${\varphi }_{F}\left( G\right) = \hom \left( {F,G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel $k$ are complete in expectation.
+
+§ 2.1 EXPECTATION COMPLETE EMBEDDINGS AND KERNELS ON ${\MATHCAL{G}}_{\INFTY }$
+
+In this section, we generalise the previous result to the set of all finite graphs ${\mathcal{G}}_{\infty }$ . Theorem 1 holds for $G,H \in {\mathcal{G}}_{\infty }$ and the mapping ${\varphi }_{\infty }$ that maps each $G \in {\mathcal{G}}_{\infty }$ to an infinite-dimensional vector. The resulting vector space, however, is not a Hilbert space with the usual inner product. To see this, consider any graph $G$ that has at least one edge. Then $\hom \left( {{P}_{n},G}\right) \geq 2$ for every path ${P}_{n}$ of length $n \in \mathbb{N}$ . Thus, the inner product $\left\langle {{\varphi }_{\infty }\left( G\right) ,{\varphi }_{\infty }\left( G\right) }\right\rangle$ is not finite.
+
+To define a kernel on ${\mathcal{G}}_{\infty }$ without fixing a maximum size of graphs, i.e., restricting to ${\mathcal{G}}_{n}$ for some $n \in \mathbb{N}$ , we define the countable-dimensional vector ${\bar{\varphi }}_{\infty }\left( G\right) = {\left( {\hom }_{\left| V\left( G\right) \right| }\left( F,G\right) \right) }_{F \in {\mathcal{G}}_{\infty }}$ where
+
+$$
+{\hom }_{\left| V\left( G\right) \right| }\left( {F,G}\right) = \left\{ \begin{array}{ll} \hom \left( {F,G}\right) & \text{ if }\left| {V\left( F\right) }\right| \leq \left| {V\left( G\right) }\right| , \\ 0 & \text{ if }\left| {V\left( F\right) }\right| > \left| {V\left( G\right) }\right| . \end{array}\right.
+$$
+
+That is, ${\bar{\varphi }}_{\infty }\left( G\right)$ is the projection of ${\varphi }_{\infty }\left( G\right)$ to the subspace that gives us the homomorphism counts for all graphs of size at most of $G$ . Note that this is a well-defined map of graphs to a subspace of the ${\ell }^{2}$ space, i.e., sequences ${\left( {x}_{i}\right) }_{i}$ over $\mathbb{R}$ with $\mathop{\sum }\limits_{i}{\left| {x}_{i}\right| }^{2} < \infty$ . Hence, the kernel given by the canonical inner product ${\bar{k}}_{\infty }\left( {G,H}\right) = \left\langle {{\bar{\varphi }}_{\infty }\left( G\right) ,{\bar{\varphi }}_{\infty }\left( H\right) }\right\rangle$ is finite and positive semi-definite. Note that we can rewrite
+
+${\bar{k}}_{\infty }\left( {G,H}\right) = {k}_{\min }\left( {G,H}\right) = \left\langle {{\varphi }_{{n}^{\prime }}\left( G\right) ,{\varphi }_{{n}^{\prime }}\left( H\right) }\right\rangle$ where ${n}^{\prime } = \min \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ . While the first hunch might be to count patterns up to $\max \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ , it is thus not necessary to guarantee completeness. In addition to it, the corresponding map ${k}_{\max }$ is not even positive semi-definite.
+
+Lemma 6. ${k}_{\min }$ is a complete kernel on ${\mathcal{G}}_{\infty }$ .
+
+Given a sample of graphs $S$ , we note that for $n = \mathop{\max }\limits_{{G \in S}}\left| {V\left( G\right) }\right|$ we only need to consider patterns up to size $n{.}^{1}$ As the number of graphs of a given size $n$ are superexponential it is impractical to compute all such counts. Hence, we propose to resort to sampling.
+
+Theorem 7. Let $\mathcal{D}$ be a distribution on ${\mathcal{G}}_{\infty }$ with full support and $G \in {\mathcal{G}}_{\infty }$ . Then ${\bar{\varphi }}_{F}\left( G\right) =$ ${\hom }_{\left| V\left( G\right) \right| }\left( {F,G}\right) {e}_{F}$ with $F \sim \mathcal{D}$ and the corresponding kernel are complete in expectation.
+
+§ 2.2 SAMPLING MULTIPLE PATTERNS
+
+Sampling just a one pattern $F$ will not result in a practical graph embedding. Thus, we propose to sample $\ell$ patterns ${F}_{1},\ldots ,{F}_{\ell } \sim \mathcal{D}$ i.i.d. and construct the embedding ${\varphi }^{\ell }\left( G\right) \in {\mathbb{N}}_{0}^{\ell }$ with ${\left( {\varphi }^{\ell }\left( G\right) \right) }_{i} =$ $\hom \left( {{F}_{i},G}\right)$ if $\left| {V\left( {F}_{i}\right) }\right| \leq \left| {V\left( G\right) }\right|$ and 0 otherwise for all $i \in \left\lbrack \ell \right\rbrack$ . Note that, for the dot product it holds that ${\varphi }^{\ell }{\left( G\right) }^{T}{\varphi }^{\ell }\left( H\right) = \mathop{\sum }\limits_{{i = 1}}^{\ell }\left\langle {{\bar{\varphi }}_{{F}_{i}}\left( G\right) ,{\bar{\varphi }}_{{F}_{i}}\left( H\right) }\right\rangle$ as long as we do not sample patterns twice. ${}^{2}$
+
+§ 3 COMPUTING EMBEDDINGS IN EXPECTED POLYNOMIAL TIME
+
+A graph embedding that is complete in expectation must be efficiently computable to be practical. In this section, we describe our main result achieving polynomial runtime in expectation. The best known algorithms [Díaz et al.,2002] to exactly compute $\hom \left( {F,G}\right)$ take time
+
+$$
+\mathcal{O}\left( {\left| {V\left( F\right) }\right| {\left| V\left( G\right) \right| }^{\operatorname{tw}\left( F\right) + 1}}\right) \tag{1}
+$$
+
+where $\operatorname{tw}\left( F\right)$ is the treewidth of the pattern graph $H$ . Thus, a straightforward sampling strategy to achieve polynomial runtime in expectation is to give decreasing probability mass to patterns with higher treewidth. Unfortunately, in the case of ${\mathcal{G}}_{\infty }$ this is not possible.
+
+Lemma 8. There exists no distribution $\mathcal{D}$ with full support on ${\mathcal{G}}_{\infty }$ such that the expected runtime of Eq. (1) becomes polynomial in $\left| {V\left( G\right) }\right|$ for all $G \in {\mathcal{G}}_{\infty }$ .
+
+To resolve this issue we have to take the size of the largest graph in our sample into account. For a given sample $S \subseteq {\mathcal{G}}_{n}$ of graphs, where $n$ is the maximum number of vertices in $S$ , we can construct simple distributions achieving polynomial time in expectation.
+
+Theorem 9. There exists a distribution $\mathcal{D}$ such that computing the expectation complete graph embedding ${\bar{\varphi }}_{X}\left( G\right)$ takes polynomial time in $\left| {V\left( G\right) }\right|$ in expectation for all $G \in {\mathcal{G}}_{n}$ .
+
+Proof. Sketch. We first draw a treewidth upper bound $k$ from an appropriate distribution. For example, a Poisson distribution with parameter $\lambda = \mathcal{O}\left( {{}^{logn}//n}\right)$ is sufficient. We have to ensure that each possible graph with treewidth up to $k$ gets a nonzero probability of being drawn. For that we first draw a $k$ -tree, a maximal graph of treewidth $k$ , and then take a random subgraph of it.
+
+Note that we do not require that the patterns are sampled uniformly at random. It merely suffices that each pattern has a nonzero probability of being drawn. To satisfy a runtime of $\mathcal{O}\left( {\left| V\left( G\right) \right| }^{d + 1}\right)$ in expectation, for example, a Poisson distribution with $\lambda \leq \frac{1 + d\log n}{n}$ is sufficient.
+
+§ 4 RELATED WORK
+
+The $k$ -dimensional Weisfeiler-Leman (WL) test and the Lovász vector restricted to patterns up to treewidth $k$ are equally expressive [Dell et al.,2018; Dvořák,2010]. We propose an efficiently computable embedding matching the expressiveness of $k$ -WL, and hence also MPNNs and $k$ -GNNs [Morris et al., 2019; Xu et al., 2019], in expectation, see Appendix D.
+
+Dell et al. [2018] proposed a complete graph kernel based on homomorphism counts related to our ${k}_{\min }$ kernel. Instead of implicitly restricting the embedding to only a finite number of patterns, as we do, they weigh the homomorphism counts such that the inner product defined on the whole Lovász vectors converges. However, Dell et al. [2018] do not discuss runtime aspects and so, our approach can be seen as an efficient sampling-based alternative to their weighted kernel.
+
+${}^{1}$ Actually, it is sufficient to go up to the size of the second largest graph.
+
+${}^{2}$ Note that it does not affect the expressiveness results if we sample a pattern multiple times.
+
+Table 1: Cross-validation accuracies on benchmark datasets
+
+| Method | MUTAG | IMDB-BIN | IMDB-MULTI | PAULUS25 | CSL |
+| --- | --- | --- | --- | --- | --- |
+| GHC-tree | ${89.28} \pm {8.26}$ | ${72.10} \pm {2.62}$ | ${48.60} \pm {4.40}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
+| GHC-cycle | ${87.81} \pm {7.46}$ | ${70.93} \pm {4.54}$ | ${47.41} \pm {3.67}$ | ${7.14} \pm {0.00}$ | ${100.00} \pm {0.00}$ |
+| GNTK | ${89.46} \pm {7.03}$ | ${75.61} \pm {3.98}$ | ${51.91} \pm {3.56}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
+| GIN | ${89.40} \pm {5.60}$ | ${70.70} \pm {1.10}$ | ${43.20} \pm {2.00}$ | ${7.14} \pm {0.00}$ | ${10.00} \pm {0.00}$ |
+| ours (SVM) | ${86.85} \pm {1.28}$ | ${69.83} \pm {0.15}$ | ${47.31} \pm {0.46}$ | ${100.00} \pm {0.00}$ | ${38.89} \pm {11.18}$ |
+| ours (MLP) | ${88.33} \pm {1.11}$ | ${70.37} \pm {0.85}$ | ${48.75} \pm {0.20}$ | ${49.84} \pm {6.74}$ | ${11.78} \pm {1.54}$ |
+
+Using graph homomorphism counts as a feature embedding for graph learning tasks was proposed before by Hoang and Maehara [2020]. They discuss various aspects of homomorphism counts important for learning tasks, in particular, universality aspects and their power to capture certain properties of the graph, such as bipartiteness. Instead of relying on sampling patterns, which we use to guarantee completeness in expectation, they propose to use a fixed number of small pattern graphs. This limits the practical usage of their approach due to computational complexity reasons. In their experiments the authors only use tree and cycle patterns up to size 6 and 8, respectively, whereas we allow patterns of arbitrary size and treewidth, guaranteeing polynomial runtime in expectation. Similarly to Hoang and Maehara [2020], we use the computed embeddings as features for a kernel SVM (with RBF kernel) and an MLP.
+
+Instead of embedding the whole graph into a vector of homomorphism counts, Barceló et al. [2021] proposed to use rooted homomorphism counts as node features in conjunction with a graph neural network (GNN). They discuss the required patterns to be as or more expressive than the $k$ -WL test. We achieve this in expectation when selecting an appropriate sampling distribution.
+
+Wu et al. [2019] adapted random Fourier features [Rahimi and Recht, 2007] to graphs and proposed a sampling-based variant of the global alignment graph kernel. Similar sampling-based ideas were discussed before for the graphlet kernel [Shervashidze et al., 2009] and frequent-subtree kernels [Welke et al., 2015]. None of these works, however, discusses expressiveness aspects.
+
+## 5 Experiments
+
+We performed preliminary experiments on several benchmark datasets. To this end, we sample a fixed number $\ell = {30}$ of patterns as described in Appendix A and compute the sampled min kernel as described in Section 3. Table 1 shows averaged accuracies of SVM and MLP classifiers trained on our feature sets. We follow the experimental design of Hoang and Maehara [2020] and compare to their published results. Even with as few as 30 features, the results of our approach are comparable to the competitors on real world datasets. Furthermore, it is interesting to note that an SVM with RBF kernel and our features performs perfectly on the PAULUS25 dataset, i.e., it is able to decide isomorphism for the strongly regular graphs in this dataset. It also shows good performance, although with high deviation, on the CSL dataset, where only the method specifically designed for this dataset, GHC-cycle, performs well. We also include GNTK [Du et al., 2019] and GIN [Xu et al., 2019] as baselines.
+
+## 6 Conclusion
+
+As future work, we will investigate approximate counts to make our implementation more efficient [Beaujean et al., 2021]. It is unclear how this affects expressiveness, as we lose permutation-invariance. Going beyond expressiveness results, our goal is to further study graph similarities suitable for graph learning, such as the cut distance as proposed by Grohe [2020]. Finally, instead of sampling patterns from a fixed distribution, a more promising variant is to adapt the sampling process in a sample-dependent manner. One could, for example, draw new patterns until each graph in the sample has a unique embedding (up to isomorphism) or at least until we can distinguish 1-WL classes. Alternatively, we could pre-compute frequent or interesting patterns and use them to adapt the distribution. Such approaches would employ the power of randomisation to select a fitting graph representation in a data-driven manner, instead of relying on a finite set of fixed and pre-determined patterns as in previous work [Barceló et al., 2021; Bouritsas et al., 2022].
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..963c7968ad14e9871cf6adf2130cc5829cbcf6da
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,532 @@
+# ScatterSample: Diversified Label Sampling for Data Efficient Graph Neural Network Learning
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+What target labels are most effective for graph neural network (GNN) training? In some applications where GNNs excel, like drug design or fraud detection, labeling new instances is expensive. We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting. ScatterSample employs a sampling module termed DiverseUncertainty to collect instances with large uncertainty from different regions of the sample space for labeling. To ensure diversification of the selected nodes, DiverseUncertainty clusters the high uncertainty nodes and selects the representative nodes from each cluster. Our ScatterSample algorithm is further supported by rigorous theoretical analysis demonstrating its advantage compared to standard active sampling methods that aim to simply maximize the uncertainty and not diversify the samples. In particular, we show that ScatterSample is able to efficiently reduce the model uncertainty over the whole sample space. Our experiments on five datasets show that ScatterSample significantly outperforms the other GNN active learning baselines; specifically, it reduces the sampling cost by up to $\mathbf{{50}}\%$ while achieving the same test accuracy.
+
+## 1 Introduction
+
+How can we spot the most effective labeled nodes for GNN training? Graph neural networks (GNNs) [KW16; Vel+17; Wu+19a], which employ non-linear and parameterized feature propagation [ZG02] to compute graph representations, have been widely employed in a broad range of learning tasks and achieved state-of-the-art performance in node classification, link prediction and graph classification. Training GNNs for node classification in the supervised learning setup typically requires a large number of labeled examples such that the GNN can learn from diverse node features and node connectivity patterns. However, labeling costs can be expensive, which limits the number of node labels that can be acquired. For example, GNNs can be used to assist drug design, but evaluating the properties of a molecule is time consuming: it usually takes one to two weeks with current simulation tools, not to mention the cost of laboratory experiments.
+
+Active learning (AL) aims at maximizing the generalization performance under a constrained labeling budget [Set09]. AL algorithms choose which training instances to use as labeled targets to maximize the performance of the learned model. Previous research in AL algorithms for GNN training can be categorized with respect to whether the AL methods take into account the model weights (model aware) or can be applied to any model (model agnostic). Model agnostic algorithms label a representative subset of the nodes such that the labeled nodes can cover the whole sample space [Wu+19b; Zha+21]. Model aware AL algorithms leverage the GNN model to compute the node uncertainty, which combines both the input features and graph structure [CZC17; Gao+18]. AL then picks the nodes with the highest uncertainty.
+
+However, maximizing the uncertainty of the labeled nodes may not balance the exploration and exploitation of the classification boundary [KVAG19]. For example, if a group of nodes close to the classification boundary is clustered in a small region of the graph, labeling only the most uncertain nodes explores that specific region of the classification boundary while the other regions are ignored, so the boundary as a whole is not well explored. Thus, our first main contribution is to simultaneously consider the node uncertainty and the diversification of the uncertain nodes over the sample space.
+
+Challenges of diversifying uncertain nodes. Graph data present additional challenges for diversifying the uncertain nodes. Diversification requires modeling the sample space using carefully selected representations for the nodes. However, there are two challenges in finding a suitable node representation.
+
+Challenge 1: The sample space for graph data requires a representation which takes both the graph structure and node features into account (see Sec. 4.2).
+
+Challenge 2: The representation should be robust to the model trained so far, and not be biased by the limited amount of available labels.
+
+Our approach. We develop ScatterSample for data-efficient GNN learning. ScatterSample allows us to explore the classification boundary while exploiting the nodes with the highest uncertainty. To diversify the uncertain samples on graph-structured data, ScatterSample includes a DiverseUncertainty module that addresses the two challenges above by clustering the uncertain nodes' representations over the whole sample space.
+
+Our Contributions. The contributions of our work are the following.
+
+- Insight: ScatterSample is the first method that proposes and implements diversification of the uncertain samples for data efficient GNN learning.
+
+
+
+Figure 1: ScatterSample wins: test accuracy vs. sampling ratio on the ogbn-products dataset (62M edges).
+
+- Effectiveness: We evaluate ScatterSample on five different graph datasets, where ScatterSample saves up to ${50}\%$ labeling cost while still achieving the same test accuracy as state-of-the-art baselines.
+
+- Theoretical Guarantees: Our theoretical analysis proves the superiority of ScatterSample over the standard, uncertainty-sampling method (see Theorem 5.1). Simulation results further confirm our theory.
+
+## 2 Related Work
+
+This section reviews uncertainty-based active learning research and the use of active learning for GNNs.
+
+Active Learning (AL): Active learning aims at selecting a subset of training data as labeling targets such that the model performance is optimized [Set09; Han+14]. Uncertainty sampling is one major approach of active learning, which labels a group of samples to maximally reduce the model uncertainty. To achieve this goal, uncertainty sampling selects samples around the decision boundary [THTS05]. Uncertainty sampling has also been applied in deep learning, and researchers have proposed different methods to measure the uncertainty of samples. For example, Ducoffe and Precioso [DP18] developed a margin based method which uses the distance from a sample to its smallest adversarial sample to approximate the distance to the decision boundary.
+
+AL and GNNs: AL with GNNs requires incorporating the graph structure information into the node selection. Wu et al. [Wu+19b] use the propagated features followed by K-Medoids clustering of nodes to select a group of representative instances. Zhang et al. [Zha+21] measure the importance of nodes by combining diversity and influence scores. However, the above approaches do not account for the learned GNN model, which may limit the generalization performance. Uncertainty sampling has also been implemented to select nodes. Cai et al. [CZC17] propose to use a weighted average of the node uncertainty, graph centrality and information density scores. Gao et al. [Gao+18] further propose a different approach to combine the three features with multi-armed bandit techniques. Although useful, these approaches aim to choose nodes with the highest uncertainty and may be challenged if the selected nodes are clustered in a small region of the graph, which does not provide good graph coverage. Our work addresses this limitation by diversifying the selected nodes based on the graph structure.
+
+## 3 Preliminaries
+
+Problem Statement. Given a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is the set of nodes with $N = \left| \mathcal{V}\right|$ and $\mathcal{E}$ is the set of edges. The set of nodes is divided into the training set ${\mathcal{V}}_{\text{train }}$ , validation set ${\mathcal{V}}_{\text{valid }}$ and testing set ${\mathcal{V}}_{\text{test }}$ . Each node ${v}_{n} \in \mathcal{V}$ is associated with a feature vector ${\mathbf{x}}_{n} \in {\mathbb{R}}^{d}$ and a label ${y}_{n} \in \{ 1,2,\ldots , C\}$ . Let $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ be the feature matrix of all the nodes in the graph, where the $n$ -th row of $\mathbf{X}$ corresponds to ${v}_{n}$ , and $\mathbf{y} = \left( {{y}_{1},{y}_{2},\ldots ,{y}_{N}}\right) \in {\mathbb{R}}^{N}$ is the vector containing all the labels. To learn the labels of the nodes, we train a GNN model $M$ which maps the graph $\mathcal{G}$ and $\mathbf{X}$ to the prediction of labels $\widehat{\mathbf{y}}$ .
+
+Active Learning: Active learning picks a subset of nodes $S \subset {\mathcal{V}}_{\text{train }}$ from the training set and queries their labels ${\mathbf{y}}_{S}$ . A GNN model ${M}_{S}$ is trained with respect to the feature matrix $\mathbf{X}$ and ${\mathbf{y}}_{S}$ . Given the sampling budget $B$ , the goal of active learning is to find a set $S\left( {\left| S\right| \leq B}\right)$ such that the generalization loss is minimized, i.e.
+
+$$
+\underset{S : \left| S\right| \leq B}{\arg \min }{\mathbb{E}}_{{v}_{n} \in {\mathcal{V}}_{\text{test }}}\left( {\ell \left( {{y}_{n}, f\left( {{\mathbf{x}}_{n} \mid \mathcal{G},{M}_{S}}\right) }\right) }\right) .
+$$
+
+### 3.1 Graph neural networks and message passing
+
+In this section we present the basic operation of a GNN at layer $l$ . Under the message passing paradigm, the layer updates of most GNN models can be interpreted as message vectors that are exchanged among neighboring nodes over the edges of the graph.
+
+In the following, let ${\mathbf{h}}_{v}^{\left( l\right) } \in {\mathbb{R}}^{{d}_{1}}$ be the hidden representation of node $v$ at layer $l$ , and let $\phi$ be a message function combining the hidden representations of nodes $v, u$ . Using the message vectors of the incident edges, the node representations are updated as follows
+
+$$
+{\mathbf{h}}_{v}^{\left( l + 1\right) } = \psi \left( {{\mathbf{h}}_{v}^{\left( l\right) },\rho \left( \left\{ {\phi \left( {{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{h}}_{u}^{\left( l\right) }}\right) : \left( {u, v}\right) \in \mathcal{E}}\right\} \right) }\right) \tag{1}
+$$
+
+where $\rho$ is a reduce function used to aggregate the messages coming from the neighbors of $v$ and $\psi$ is an update function defined on each node to update the hidden node representation for layer $l + 1$ . By defining $\phi ,\rho ,\psi$ different GNN models can be instantiated [KW16; DBV16; Bro+17; IMG20]. These functions are also parameterized by learnable matrices that are updated during training.
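+
+As an illustration, the following is a minimal numpy sketch of the update in Eq. (1), instantiating $\phi$ as a linear message, $\rho$ as a sum over incoming edges, and $\psi$ as a nonlinearity applied to the combined self and neighbor information. This GCN-like choice and the weight names `W_self` and `W_msg` are illustrative assumptions, not a specific model from the literature.
+
+```python
+# Minimal numpy sketch of the message-passing update in Eq. (1).
+# phi: linear message, rho: sum over incoming edges, psi: tanh of the combination.
+import numpy as np
+
+def message_passing_layer(H, edges, W_self, W_msg):
+    """H: (N, d) node representations; edges: iterable of (u, v) pairs meaning u -> v."""
+    agg = np.zeros((H.shape[0], W_msg.shape[1]))
+    for u, v in edges:
+        agg[v] += H[u] @ W_msg               # phi and rho: sum of linear messages into v
+    return np.tanh(H @ W_self + agg)         # psi: combine self and aggregated messages
+
+# toy usage on a directed 4-cycle with both edge directions
+rng = np.random.default_rng(0)
+H = rng.normal(size=(4, 8))
+edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 0), (2, 1), (3, 2), (0, 3)]
+H_next = message_passing_layer(H, edges, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
+```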
+
+## 4 Proposed method: ScatterSample
+
+We propose the ScatterSample algorithm, which dynamically samples a set of diverse nodes with large uncertainties in order to more efficiently explore the classification boundary during GNN training. At each round, our method calculates the uncertainty of all nodes with the GNN model trained so far. Then, ScatterSample clusters the top uncertain nodes and selects nodes from each cluster to obtain diverse samples. The labels of the selected nodes are queried and used as supervision to continue training the GNN model in the next round. This section explains our method in detail.
+
+### 4.1 Selecting the uncertain nodes
+
+The uncertainty of a node is measured by the information entropy. Given a trained GNN model at the $t$ -th sampling round, ScatterSample first computes the information entropy ${\phi }_{\text{entropy }}\left( {v}_{n}\right)$ of nodes in ${\mathcal{V}}_{\text{train }}$ based on the current GNN model, i.e.
+
+$$
+{\phi }_{\text{entropy }}\left( {v}_{n}\right) = - \mathop{\sum }\limits_{{j = 1}}^{C}\log \left( {\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X}, M}\right\rbrack }\right) \mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X}, M}\right\rbrack \tag{2}
+$$
+
+where $\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X}, M}\right\rbrack$ is the probability that node ${v}_{n}$ belongs to class $j$ given the GNN model $M$ . Then, ScatterSample ranks all the nodes in order of decreasing uncertainty and picks the ones with the largest information entropy into a candidate set ${\mathcal{C}}_{t} \subset {\mathcal{V}}_{\text{train }}$ . Different from traditional AL techniques that select training targets solely based on uncertainty, we then move on to pick a diverse subset of the uncertain nodes over the sampling space.
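+
+A minimal sketch of this step is shown below; `probs` is assumed to be the $(N, C)$ matrix of class probabilities produced by the current GNN model, and `train_idx` an integer index array over ${\mathcal{V}}_{\text{train }}$ . Both names are illustrative assumptions.
+
+```python
+# Minimal sketch of Eq. (2) and the candidate-set construction: compute the
+# entropy of every training node's predicted class distribution and keep the
+# r * B_t most uncertain nodes as the candidate set C_t.
+import numpy as np
+
+def uncertainty_candidates(probs, train_idx, r, B_t, eps=1e-12):
+    """probs: (N, C) class probabilities; train_idx: np.ndarray of training node indices."""
+    p = probs[train_idx]
+    entropy = -np.sum(p * np.log(p + eps), axis=1)         # Eq. (2)
+    order = np.argsort(-entropy)                           # decreasing uncertainty
+    return train_idx[order[: r * B_t]]                     # candidate set C_t
+```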
+
+### 4.2 Diversifying uncertain nodes
+
+Our goal is to ensure the diversity of the selected nodes for labeling, by exploring the node distribution over the sample space. At this point the question naturally arises: how do we model the sample space? We need a representation of the nodes to define the space, based on which we can measure the samples' distances. A straightforward approach is to use the GNN embedding space, since the classification boundary is directly depicted there. However, GNN embeddings fail to address the two challenges stated in the introduction.
+
+First, with active learning, only a limited number of labeled nodes are available in the initial stages. Hence, only the already labeled nodes may have reliable GNN embeddings, which biases the subsequent samples. Second, GNN embeddings for node classification may not carry enough information for diversification. GNNs usually do not have an MLP layer connecting to the output. The final GNN outputs of uncertain nodes are not diverse enough, since highly uncertain nodes may have similar class probabilities (class probabilities close to uniform). Conversely, embeddings of intermediate GNN layers may have an appropriate dimension but lack information about the expanded ego-network.
+
+These drawbacks are confirmed in Sec. 6.2, where we show that using GNN embeddings as proxy representations leads to a performance drop. Moreover, different from other machine learning problems, the nodes are correlated with each other, and we also need to take the graph structure into account when diversifying the samples. Hence, to address all these considerations we will employ a $k$ -step propagation of the original node features based on the graph structure as a proxy representation for the nodes. The $k$ -step propagation of nodes ${\mathbf{X}}^{\left( k\right) } = \left( {{\mathbf{x}}_{1}^{\left( k\right) },{\mathbf{x}}_{2}^{\left( k\right) },\ldots ,{\mathbf{x}}_{N}^{\left( k\right) }}\right)$ is defined as follows
+
+$$
+{\mathbf{X}}^{\left( k\right) } \mathrel{\text{:=}} {\mathbf{{SX}}}^{\left( k - 1\right) } \tag{3}
+$$
+
+where $\mathbf{S}$ is the normalized adjacency matrix, and ${\mathbf{X}}^{\left( 0\right) }$ are the initial node features. The operation in (3) is efficient and amenable to a mini-batch implementation. Such representations are well-known to succinctly encode the node feature distribution and graph structure. Next, we calculate the proxy representations for the candidate high uncertainty nodes in the set ${\mathcal{C}}_{t}$ . To maximize the diversity of the samples, we cluster the proxy representations in ${\mathcal{C}}_{t}$ using $k$ -means++ into ${B}_{t}$ clusters [AV06], and select the nodes closest to the cluster centers (in ${L}_{2}$ distance) for labeling. One node is selected from each cluster, which amounts to ${B}_{t}$ samples.
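+
+A minimal sketch of the propagation and diversification steps is given below; the dense matrix product and the use of scikit-learn's k-means++ are implementation assumptions made for illustration only.
+
+```python
+# Minimal sketch of Eq. (3) and the diversified selection: propagate the node
+# features k steps along the normalized adjacency, cluster the candidate nodes
+# with k-means++ and pick the node closest to each cluster center.
+import numpy as np
+from sklearn.cluster import KMeans
+
+def propagate(X, S, k):
+    """X: (N, d) node features; S: (N, N) normalized adjacency (dense for brevity)."""
+    Xk = X
+    for _ in range(k):
+        Xk = S @ Xk                                        # Eq. (3)
+    return Xk
+
+def diversify(Xk, candidates, B_t, seed=0):
+    """candidates: np.ndarray of node indices (the set C_t); returns B_t node indices."""
+    Z = Xk[candidates]
+    km = KMeans(n_clusters=B_t, init="k-means++", n_init=10, random_state=seed).fit(Z)
+    selected = []
+    for j in range(B_t):
+        members = np.where(km.labels_ == j)[0]
+        dists = np.linalg.norm(Z[members] - km.cluster_centers_[j], axis=1)  # L2 distance
+        selected.append(candidates[members[np.argmin(dists)]])
+    return np.array(selected)
+```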
+
+Algorithm 1 ScatterSample Algorithm
+
+---
+
+1: Input: ${\mathcal{V}}_{\text{train }}$ , GNN model $M$ , number of propagation layers $k$ , number of sampling round $T$ ,
+
+ sampling redundancy $r$ , initial sampling budget ${B}_{0}$ and total sampling budget $B$ .
+
+ Initialize $S = \varnothing$
+
+ Compute ${\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{V}}_{\text{train }}$ as in (3).
+
+ Initial Sampling:
+
+ Use $k$ -means++ to cluster $\left\{ {\mathbf{x}}_{n}^{\left( k\right) }\right\}$ into ${B}_{0}$ clusters.
+
+ Add a node closest to the cluster center per cluster to $S$ .
+
+ Query the labels of nodes ${v}_{n} \in S$ , denoted by ${\mathbf{y}}_{S}$ .
+
+ Train model $M$ using $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$ .
+
+ Dynamic Sampling:
+
+ Initialize sampling round $t = 1$
+
+ while $t < T$ do
+
+ Let ${B}_{t} = \min \left( {B - \left| S\right| ,\left( {B - {B}_{0}}\right) /T}\right)$
+
+ Use the DiverseUncertainty algorithm to select ${S}_{t}$
+
+ Query the labels of ${S}_{t}$ , and update $S = S \cup {S}_{t}$ .
+
+ Train model $M$ over $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$ . Update $t = t + 1$ .
+
+ end while
+
+---
+
+Clearly, the size of the candidate set satisfies $\left| {\mathcal{C}}_{t}\right| \geq {B}_{t}$ ; however, deciding how many candidate nodes to choose from is important. We parameterize the size as a multiple of the number of selected nodes, namely $\left| {\mathcal{C}}_{t}\right| = r{B}_{t}$ , where $r > 1$ is the sampling redundancy. If $r$ is too small, the selected nodes are closer to the classification boundary (have larger information entropy) but may not be diverse enough. On the other hand, if $r$ is too large, the set will be diverse, but the selected nodes may be far away from the classification boundary. Therefore, it is critical to pick a suitable $r$ to achieve a sweet spot between diversity and uncertainty. We leave the discussion of choosing $r$ to Sec. 6.2. Besides empirical validation with experiments on five real datasets (see Sec. 6), our diversification approach is theoretically motivated (see Sec. 5).
+
+The pseudo code of ScatterSample is shown in Algorithm 1. ScatterSample is a multi-round sampling scheme, which includes an initial sampling step and dynamic sampling steps. ScatterSample
+
+Algorithm 2 DiverseUncertainty Algorithm
+
+---
+
+Input: ${\mathcal{V}}_{\text{train }},\left\{ {{\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{C}}_{t}}\right\} , r,{B}_{t}$
+
+Compute ${\phi }_{\text{entropy }}\left( v\right) \forall v \in {\mathcal{V}}_{\text{train }}$ ; see (2).
+
+${\mathcal{C}}_{t} \leftarrow \left\{ {r{B}_{t}\text{nodes with largest}{\phi }_{\text{entropy }}\left( v\right) }\right\}$ .
+
+Use $k$ -means++ to cluster the ${\mathbf{x}}_{n}^{\left( k\right) }$ (for all $n \in {\mathcal{C}}_{t}$ ) into ${B}_{t}$ clusters.
+
+${S}_{t} \leftarrow \varnothing$
+
+for $j = 1,2,\ldots ,{B}_{t}$ do
+
+ Compute the cluster center ${\mathbf{v}}_{j}$ of cluster $j$
+
+ Pick node $x \leftarrow \arg \mathop{\min }\limits_{{n \in {\mathcal{C}}_{t}}}\begin{Vmatrix}{{\mathbf{x}}_{n}^{\left( k\right) } - {\mathbf{v}}_{j}}\end{Vmatrix}$
+
+ ${S}_{t} \leftarrow {S}_{t} \cup \{ x\}$
+
+end for
+
+Return ${S}_{t}$
+
+---
+
+first computes the $k$ -step feature propagation of all the nodes in the training set using (3), and clusters them into ${B}_{0}$ clusters, where ${B}_{0}$ is the initial sampling budget. Then, ScatterSample picks the nodes closest to the cluster centers as the initial training samples and queries their labels. The purpose of clustering the $k$ -step feature propagations is to force the initial training set to spread out over the whole sample space. This also helps explore the classification boundary: if the initial sampled nodes are not diverse enough, we cannot picture the classification boundary in regions that are far away from the initial training samples. ScatterSample repeats the dynamic sampling described in Algorithm 2 until the sampling budget $B$ is exhausted. The next section fortifies our diversification method with theoretical guarantees.
+
+## 5 Theoretical analysis
+
+In Sec. 6.2, we show that DiverseUncertainty is significantly better than the MaxUncertainty algorithm. In this section, we provide theoretical analysis and simulation results to demonstrate the benefits of DiverseUncertainty and to explain why the MaxUncertainty algorithm may fail. The results presented here give a theoretical basis for the superiority of our method as established by the experiments in Section 6.
+
+### 5.1 Analysis setup
+
+For the analysis, we employ the Gaussian Process (GP) model [O'H78]. GP models offer a flexible approach to model complex functions and are robust to small sample sizes [See04]. Moreover, the uncertainty of the prediction can be easily computed with a GP model. Neural network models and GNNs interpolate the observed samples, and GPs provide a robust framework for interpolating samples that is amenable to analysis.
+
+Assume the label ${y}_{i} \in \mathbb{R}$ depends on the propagated features ${\mathbf{x}}_{i}^{\left( k\right) }$ through a GP model. The label ${y}_{i}$ is modeled by a Gaussian Process, where $\left( {\mathbf{y} \mid {\mathbf{X}}^{\left( k\right) }}\right) \sim N\left( {\mathbf{1}\mu ,\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right) }\right)$ and $\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right)$ is the Gaussian kernel matrix. The kernel is parameterized by ${\mathbf{K}}_{ij}\left( {\mathbf{X}}^{\left( k\right) }\right) = K\left( {{\mathbf{x}}_{i}^{\left( k\right) },{\mathbf{x}}_{j}^{\left( k\right) }}\right) = \exp \left( {-\frac{1}{2}{\left( {\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }\right) }^{T}{\mathbf{\Sigma }}^{-1}\left( {{\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }}\right) }\right)$ , where $\mathbf{\Sigma } = \operatorname{diag}\left( {{\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{d}}\right)$ . Consider that the sample space of ${\mathbf{x}}^{\left( k\right) }$ can be clustered into $m$ clusters ${\mathcal{S}}_{1},{\mathcal{S}}_{2},\ldots ,{\mathcal{S}}_{m}$ , and denote the cluster centers by ${\mathbf{c}}_{1},{\mathbf{c}}_{2},\ldots ,{\mathbf{c}}_{m}$ . Without loss of generality, denote the cluster radii by ${d}_{1} \leq {d}_{2} \leq {d}_{3} \leq \cdots \leq {d}_{m}$ . The clusters are well separated and the distance between cluster centers is larger than $\delta$ , i.e. $\mathop{\min }\limits_{{i \neq j}}{\begin{Vmatrix}{\mathbf{c}}_{i} - {\mathbf{c}}_{j}\end{Vmatrix}}_{2} \geq \delta \left( {\delta > 2{d}_{m}}\right)$ . Moreover, we assume that no single cluster dominates the sample space, ${d}_{m}^{2} \leq \tau \mathop{\sum }\limits_{{j = 1}}^{{m - 1}}{d}_{j}^{2}$ , and that the samples are uniformly distributed over the clusters.
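+
+For concreteness, a minimal numpy sketch of this kernel with a diagonal $\mathbf{\Sigma }$ is shown below; the function name and array layout are illustrative assumptions.
+
+```python
+# Minimal numpy sketch of the Gaussian kernel matrix used in the analysis:
+# K_ij = exp(-0.5 (x_i - x_j)^T Sigma^{-1} (x_i - x_j)) with Sigma = diag(theta).
+import numpy as np
+
+def gp_kernel_matrix(X, theta):
+    """X: (n, d) propagated features; theta: length-d array of kernel scales."""
+    diff = X[:, None, :] - X[None, :, :]           # (n, n, d) pairwise differences
+    quad = np.sum(diff ** 2 / theta, axis=-1)      # Mahalanobis-type quadratic form
+    return np.exp(-0.5 * quad)
+```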
+
+### 5.2 MaxUncertainty vs DiverseUncertainty
+
+Here, we show that DiverseUncertainty achieves a significantly smaller mean squared error (MSE) than MaxUncertainty. Without loss of generality we consider $m$ clusters and the following definitions.
+
+- MaxUncertainty: Select the ${2m}$ most uncertain samples.
+
+- DiverseUncertainty: Select the 2 most uncertain samples from each cluster.
+
+
+
+Figure 2: The area enclosed by the blue circles is the sample space of propagated features (2D case). The green stars are sampled nodes during initial sampling (cluster center). The red stars are the sampled nodes during uncertainty sampling. (a) MaxUncertainty picks the nodes with largest uncertainty, which is equivalent to sampling the boundary of cluster 2. (b) DiverseUncertainty diversifies the clustered nodes, and samples the boundary of both clusters.
+
+Before presenting the theory, we illustrate the operation of our method and of MaxUncertainty in Figure 2. ScatterSample first clusters the samples in the propagated feature space (blue circles in Figure 2) and selects the nodes closest to the cluster centers for initial training (green stars in Figure 2). Then, during the dynamic sampling steps, we compute the uncertainty using equation 4. The MaxUncertainty approach selects the nodes with the largest uncertainty; under our setup, this is equivalent to sampling nodes at the boundary of the largest cluster, since the distance to the cluster center is the most important factor of the uncertainty (Figure 2(a)). DiverseUncertainty instead diversifies the high uncertainty nodes, which is equivalent to sampling from the boundary of each cluster (Figure 2(b)). The red stars of Figure 2 show the nodes labeled during the uncertainty sampling stage. Since the MaxUncertainty algorithm only labels nodes in cluster 2, cluster 1 is ignored and its prediction uncertainty cannot be reduced. On the contrary, DiverseUncertainty samples nodes from both clusters 1 and 2, and thus can reduce the prediction uncertainty in both clusters.
+
+Then, the following theorem quantifies the relationship of the MSEs of both algorithms under the setup of Sec. 5.1.
+
+Theorem 5.1. Consider a case where feature dimension $d = 1$ . With the above notation and assumptions, let ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{2\theta }}\right\rbrack$ . If we satisfy ${d}_{m}^{2} \geq {d}_{m - 1}^{2} + 4\log \theta$ and $\delta \geq {d}_{m} +$ $\max \left( {\sqrt{{d}_{m}^{2} + \theta \log \left( {9m}\right) },{2\theta }\log \left( \frac{3\sqrt{m}}{1 - {r}_{m}}\right) }\right)$ , we have
+
+$$
+\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ Diverse Uncertainty }}\right) } \geq \frac{1}{2\left( {1 + \tau }\right) }\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3} = \frac{1}{\tau + 1}O\left( \theta \right) .
+$$
+
+Proof: The complete proof is included in Appendix B.
+
+Theorem 5.1 suggests that when the GP function is smooth enough (large $\theta$ ), MaxUncertainty has a larger MSE than the DiverseUncertainty algorithm (the proof is in Appendix B). A large $\theta$ implies a close correlation between the labels of nodes that are close to each other. This is common for most graph datasets, where samples clustered together usually have similar labels. Thus, DiverseUncertainty achieves a smaller MSE in this case.
+
+Table 1: Statistics of graph datasets used in experiments.
+
+| Data | #Nodes | #Train Nodes | #Edges | #Classes |
+| --- | --- | --- | --- | --- |
+| Cora | 2,708 | 1,208 | 5,429 | 7 |
+| Citeseer | 3,327 | 1,827 | 4,732 | 6 |
+| Pubmed | 19,717 | 18,217 | 44,328 | 3 |
+| Corafull | 19,793 | 18,293 | 126,842 | 70 |
+| ogbn-products | 2,449,029 | 196,615 | 61,859,149 | 47 |
+
+## 6 Experiments
+
+We evaluate the performance of ScatterSample on five different datasets.
+
+Datasets. We evaluate the different methods on the Cora, Citeseer, Pubmed, Corafull [KW16], and ogbn-products [Hu+20] datasets (Table 1). Except for ogbn-products, we do not keep the original split of the training and testing sets. Nodes that are not in the validation or testing sets (the validation and testing sets follow the split in the dgl package "dgl.data" [Wan+19]) are added to the training set. Labels can only be queried from the training set.
+
+Baselines. For different sampling budget $B$ , we compare the test accuracy of ScatterSample with the following graph active learning baselines:
+
+- Random sampling. Select $B$ nodes uniformly at random from ${\mathcal{V}}_{\text{train }}$ .
+
+- AGE [CZC17]: AGE computes a score which combines the node centrality, information density, and uncertainty, to select $B$ nodes with the highest scores.
+
+- ANRMAB [Gao+18]: ANRMAB learns the combination weights of the three metrics used by AGE with a multi-armed bandit method.
+
+- FeatProp [Wu+19b]: FeatProp clusters the feature propagations into $B$ clusters and picks the nodes closest to the cluster centers.
+
+- Grain [Zha+21]: Grain scores each node by the weighted average of an influence score and a diversity score, and selects the top $B$ nodes with the largest scores. Grain includes two different approaches for selecting nodes, Grain (ball-D) and Grain (NN-D).
+
+- ScatterSample: For the small scale graph datasets (Cora, Citeseer), we set the initial sampling budget to $3\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ and sample $1\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ each round during the dynamic sampling period. For the medium scale datasets (Pubmed and Corafull), we set the initial sampling budget to $1\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ and sample ${0.5}\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ each dynamic sampling round. For the large scale dataset (ogbn-products), the initial sampling budget is ${0.2}\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ , and each dynamic sampling round selects ${0.05}\% \cdot \left| {\mathcal{V}}_{\text{train }}\right|$ nodes.
+
+GNN setup. We train a 2-layer GCN network with hidden layer dimension $= {64}$ for Cora, Citeseer and Pubmed, and $= {128}$ for Corafull and ogbn-products. To train the GNN, we follow the standard random neighbor sampling [HYL17], where for each node we randomly sample 5 neighbors for the convolution operation in each layer. We use the functions in the "dgl" package to train the GNNs [Wan+19].
+
+### 6.1 Performance Results
+
+We compare the performance of different active graph neural network learning algorithms under different labeling budgets ($B$). We set the labeling budget $B$ equal to a certain proportion of the nodes in the training set $\left( {B = r\left| {\mathcal{V}}_{\text{train }}\right| }\right)$ . For Cora and Citeseer, we vary $r$ from 5% to ${15}\%$ in increments of $2\%$ ; for Pubmed and Corafull, $r$ is varied from $3\%$ to ${10}\%$ ; for the ogbn-products dataset, we vary $r$ from 0.3% to 1%. The performance of the active learning algorithms is measured by the test accuracy.
+
+Accuracy. Figure 3 shows the test accuracy of baselines trained on different proportions of the selected nodes. ScatterSample improves the test accuracy and consistently outperforms other baselines in all the datasets. In Citeseer, ScatterSample requires 9% of the node labels to achieve test accuracy 74.2%, while the best alternative baselines "Grain (ball-D)" and "Grain (NN-D)" need to label 15% of nodes to achieve similar accuracy, which corresponds to a ${40}\%$ savings of the labeling cost. Similarly, in PubMed and ogbn-products, ScatterSample achieves a 50% labeling cost reduction compared to the best alternative baseline.
+
+Efficiency. Here, we compare the computation time among the methods that use the graph structure and node features to select the samples, namely ScatterSample, "Grain (ball-D)" and "Grain (NN-D)". We use the ogbn-products dataset to perform the comparison. ScatterSample takes less than 8 hours to determine the labeling nodes and train the GNN, while the Grain algorithm requires more than 240 hours. Grain requires $\mathcal{O}\left( {n}^{2}\right)$ complexity to calculate the scores of all nodes, which is prohibitive for large graphs.
+
+Complexity analysis. The computation complexity of DiverseUncertainty is $O\left( {\left| E\right| + r{B}_{t}^{2}}\right)$ . This is because it includes two parts: 1) computing the node representations with complexity $O\left( \left| E\right| \right)$ , where $\left| E\right|$ is the number of edges, and 2) clustering the uncertain nodes with complexity $O\left( {r{B}_{t}^{2}}\right)$ . Since both $r$ and ${B}_{t}$ are small, $r{B}_{t}^{2} < \left| E\right|$ , so our method does not add much extra burden compared to the model training time.
+
+
+
+Figure 3: ScatterSample (blue), wins consistently: Comparison of the test accuracy of active GNN learning algorithms at different labeling budget. The $x$ -axis shows # labeled nodes/# nodes in training set.
+
+### 6.2 Ablation Study
+
+The DiverseUncertainty module of ScatterSample needs to determine the size of the candidate set ${\mathcal{C}}_{t}$ before selecting a subset ${S}_{t}$ from ${\mathcal{C}}_{t}$ for labeling. Hence, the sampling redundancy $r$ and the clustering algorithm used to cluster the nodes in ${\mathcal{C}}_{t}$ will affect the performance of ScatterSample. In this section, we evaluate the effect of both factors.
+
+
+
+Figure 4: Compare the performance under different sampling redundancy $r$ . When $r = 1$ , Diverse-Uncertainty reduces to MaxUncertainty method.
+
+Sampling redundancy $r$ : Recall from Algorithm 1 that the sampling redundancy $r$ controls the size of the candidate set ${\mathcal{C}}_{t}$ relative to the size of the sampled set ${S}_{t}$ . When $r = 1$ , ScatterSample reduces to the standard MaxUncertainty algorithm. Figure 4 shows that sampling only the most uncertain nodes is significantly worse than DiverseUncertainty. For the Citeseer dataset, DiverseUncertainty outperforms MaxUncertainty by over 7% when the sampling ratio is 5%. Therefore, to achieve a good test accuracy, $r$ should be carefully selected. Figure 4 suggests that as $r$ increases, the test accuracy first increases quickly and then decreases slowly.
+
+Sensitivity to initial sampling ratio: During the initial sampling stage, DiverseUncertainty samples ${B}_{0}$ nodes to train the initial model, and the initially trained model affects the nodes sampled during the dynamic sampling period. We test the effect of different initial sampling ratios on the Cora and Citeseer datasets. We vary the initial sampling ratio from 2% to 4%, and Figure A5 shows that DiverseUncertainty is robust to the choice of initial sampling ratio.
+
+Diverse uncertainty algorithms: Besides the sampling algorithm used by DiverseUncertainty, there are other algorithms to pick the representative nodes from the candidate set ${\mathcal{C}}_{t}$ . First, we evaluate three algorithms to cluster and select the propagated features.
+
+- Random select: randomly pick nodes ${S}_{t}$ from ${\mathcal{C}}_{t}$ .
+
+- DiverseUncertainty: use $k$ -means++ to cluster the nodes in ${\mathcal{C}}_{t}$ and select the node closest to each cluster center.
+
+- Random round-robin algorithm [Cit+21]: use the cluster labels from the initial sampling period (the initial sampling period clusters all the nodes in ${\mathcal{V}}_{\text{train }}$ ). Then, follow Algorithm 3 (see Appendix) to select ${S}_{t}$ from ${\mathcal{C}}_{t}$ .
+
+Figure A6 suggests that the $k$ -means++ clustering algorithm achieves a better test accuracy in most cases compared to random selection or the random round-robin algorithm (see Appendix). Moreover, compared to random selection, $k$ -means++ clustering is more robust when the sampling ratio increases. As the sampling ratio increases, the test accuracy of $k$ -means++ keeps increasing in most cases, while the test accuracy of random selection fluctuates more.
+
+Another factor that affects the test performance is the representation used for clustering. Besides the propagated features (which are used by DiverseUncertainty), we can also cluster the input features or the embedding vectors. Since the GNN models typically used do not have a fully connected layer connecting to the output, we cannot use the output of the second-to-last layer as the embedding. Hence, we use the GNN output as the embedding vector for clustering. Figure A7 shows that clustering the propagated features consistently outperforms clustering the other two targets. Especially for the Citeseer dataset, clustering the propagated features outperforms the alternatives by up to 5%. To conclude, the $k$ -means++ clustering algorithm achieves the best performance compared to the other selection methods, and clustering the propagated features is better than clustering other targets. Thus, DiverseUncertainty uses $k$ -means++ to cluster the propagated features to pick ${S}_{t}$ from ${\mathcal{C}}_{t}$ .
+
+## 7 Empirical validation of the theorem
+
+In this section, we perform a simulation analysis to demonstrate that ScatterSample reduces the MSE compared to the greedy uncertainty sampling approach.
+
+Graph Simulation Setup. Let the dimension of the input feature be $d = 1$ . Simulate $\mathbf{X}$ from two different clusters, where $\left( {X \mid {C}_{1}}\right) \sim$ Uniform(-15, -5) and $\left( {X \mid {C}_{2}}\right) \sim$ Uniform(8, 12). In our simulation, we randomly generated 100 nodes for each cluster. Each node is randomly connected to two other nodes in the same cluster. Moreover, for the edges between clusters, we set a probability threshold $r$ such that $\mathrm{P}\left\lbrack {{V}_{i} \in {C}_{1}\text{ connects to a node} \in {C}_{2}}\right\rbrack = r$ (see Appendix D for details).
+
+Label of nodes. The label of a node depends on its propagated features. First compute the 1-layer feature propagation of each node, ${\mathbf{X}}^{\left( 1\right) }$ . Then, the label of the $i$ -th node is ${y}_{i} = {\left| {X}_{i}^{\left( 1\right) }\right| }^{2}$ . Because the two cluster centers are equidistant from 0, the label function is also symmetric around 0.
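+
+A minimal sketch of this data-generating process is shown below; the row-normalisation of the adjacency and the random seed handling are modelling assumptions for illustration, not part of the exact simulation code.
+
+```python
+# Minimal sketch of the simulation: two 1-dimensional clusters of 100 nodes,
+# two intra-cluster neighbours per node, inter-cluster edges with probability r,
+# and labels y_i = |x_i^(1)|^2 from the 1-step propagated features.
+import numpy as np
+
+def simulate_graph(r, n_per_cluster=100, seed=0):
+    rng = np.random.default_rng(seed)
+    x = np.concatenate([rng.uniform(-15, -5, n_per_cluster),
+                        rng.uniform(8, 12, n_per_cluster)])
+    n = 2 * n_per_cluster
+    A = np.zeros((n, n))
+    for i in range(n):                      # two random neighbours in the same cluster
+        lo = 0 if i < n_per_cluster else n_per_cluster
+        others = [j for j in range(lo, lo + n_per_cluster) if j != i]
+        for j in rng.choice(others, size=2, replace=False):
+            A[i, j] = A[j, i] = 1
+    for i in range(n_per_cluster):          # inter-cluster edges with probability r
+        for j in range(n_per_cluster, n):
+            if rng.random() < r:
+                A[i, j] = A[j, i] = 1
+    S = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalised adjacency (assumption)
+    x1 = S @ x                              # 1-step feature propagation, Eq. (3) with k = 1
+    y = np.abs(x1) ** 2                     # node labels
+    return x1, y
+```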
+
+Node sampling. During the initial sampling step, label the nodes closest to the cluster centers and train the GP function. To sample uncertain nodes,
+
+- MaxUncertainty: Label the 8 nodes with largest uncertainty.
+
+- DiverseUncertainty: Collect the top 80 nodes with largest uncertainty into the candidate set. Then, use $k$ -means++ to cluster the nodes in the candidate set into 8 clusters. Label the 8 nodes closest to the cluster centers.
+
+MaxUncertainty and DiverseUncertainty use the newly labeled nodes to update the GP function respectively. Finally, the trained GP function predicts the node labels, and we compute the corresponding MSE.
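+
+A minimal sketch of the two sampling rules, given the per-node predictive variances of the fitted GP and the propagated features, is shown below; the budgets (8 labels, 80 candidates) follow the text, while the use of scikit-learn's k-means++ is an implementation assumption.
+
+```python
+# Minimal sketch of the two dynamic-sampling rules compared in the simulation,
+# given the GP predictive variance `var` and the 1-D propagated features `x1`
+# of all unlabeled nodes.
+import numpy as np
+from sklearn.cluster import KMeans
+
+def max_uncertainty(var, budget=8):
+    return np.argsort(-var)[:budget]                       # label the most uncertain nodes
+
+def diverse_uncertainty(var, x1, budget=8, cand_size=80, seed=0):
+    cand = np.argsort(-var)[:cand_size]                    # top-80 uncertain candidates
+    Z = x1[cand].reshape(-1, 1)
+    km = KMeans(n_clusters=budget, init="k-means++", n_init=10, random_state=seed).fit(Z)
+    picks = []
+    for j in range(budget):
+        members = np.where(km.labels_ == j)[0]
+        d = np.abs(Z[members, 0] - km.cluster_centers_[j, 0])
+        picks.append(cand[members[np.argmin(d)]])          # node closest to each cluster centre
+    return np.array(picks)
+```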
+
+Figure A8 in the Appendix shows that MaxUncertainty has a larger MSE than the DiverseUncertainty algorithm. For the MaxUncertainty algorithm, since most of the labeled nodes come from cluster 1, the MSE of cluster 1 is significantly smaller than that of cluster 2. For the DiverseUncertainty algorithm, the MSEs of clusters 1 and 2 are comparable. As $r$ increases, there are more and more edges between the clusters, and the propagated features are less separated. Hence, some high uncertainty nodes from cluster 1 are very close to cluster 2, which is beneficial for MaxUncertainty to learn the labels of nodes from cluster 2. Thus, we observe that $\frac{\text{ MSE of MaxUncertainty }}{\text{ MSE of DiverseUncertainty }}$ keeps decreasing as $r$ increases. When $r$ is very large, clusters 1 and 2 merge into one cluster, and the MSEs of both methods no longer differ significantly.
+
+## 8 Conclusion
+
+Learning a GNN model with a limited labeling budget is an important but challenging problem. In this paper:
+
+- We propose a novel data efficient GNN learning algorithm, ScatterSample, which efficiently diversifies the uncertain nodes and achieves better test accuracy than recent baselines.
+
+- We provide theoretical guarantees: Theorem 5.1 proves the advantage of ScatterSample over MaxUncertainty sampling.
+
+- Experiments on real data show that ScatterSample can save up to ${50}\%$ of the labeling budget for the same test accuracy.
+
+We envision that ScatterSample will inspire future research on combining uncertainty sampling and representation-based sampling (diversification).
+
+## References
+
+[AV06] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. Tech. rep. Stanford, 2006
+
+[Bro+17] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. "Geometric deep learning: going beyond euclidean data". In: 34.4 (2017), pp. 18-42
+
+[CZC17] H. Cai, V. W. Zheng, and K. C.-C. Chang. "Active learning for graph embedding". In: arXiv preprint arXiv:1705.05085 (2017)
+
+[Cit+21] G. Citovsky, G. DeSalvo, C. Gentile, L. Karydas, A. Rajagopalan, A. Rostamizadeh, and S. Kumar. "Batch Active Learning at Scale". In: Advances in Neural Information Processing Systems 34 (2021)
+
+[DBV16] M. Defferrard, X. Bresson, and P. Vandergheynst. "Convolutional neural networks on graphs with fast localized spectral filtering". In: Barcelona, Spain, 2016, pp. 3844-3852
+
+[DP18] M. Ducoffe and F. Precioso. "Adversarial active learning for deep networks: a margin based approach". In: arXiv preprint arXiv:1802.09841 (2018)
+
+[Gao+18] L. Gao, H. Yang, C. Zhou, J. Wu, S. Pan, and Y. Hu. "Active discriminative network representation learning". In: IJCAI International Joint Conference on Artificial Intelligence. 2018
+
+[HYL17] W. L. Hamilton, R. Ying, and J. Leskovec. "Inductive representation learning on large graphs". In: Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017, pp. 1025-1035
+
+[Han+14] S. Hanneke et al. "Theory of disagreement-based active learning". In: Foundations and Trends® in Machine Learning 7.2-3 (2014), pp. 131-309
+
+[Hu+20] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. "Open Graph Benchmark: Datasets for Machine Learning on Graphs". In: arXiv preprint arXiv:2005.00687 (2020)
+
+[IMG20] V. N. Ioannidis, A. G. Marques, and G. B. Giannakis. "Tensor Graph Convolutional Networks for Multi-Relational and Robust Learning". In: IEEE Transactions on Signal Processing 68 (2020), pp. 6535-6546
+
+[KW16] T. N. Kipf and M. Welling. "Semi-supervised classification with graph convolutional networks". In: arXiv preprint arXiv:1609.02907 (2016)
+
+[KVAG19] A. Kirsch, J. Van Amersfoort, and Y. Gal. "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning". In: Advances in neural information processing systems 32 (2019), pp. 7026-7037
+
+[O'H78] A. O'Hagan. "Curve fitting and optimal design for prediction". In: Journal of the Royal Statistical Society: Series B (Methodological) 40.1 (1978), pp. 1-24
+
+[See04] M. Seeger. "Gaussian processes for machine learning". In: International journal of neural systems 14.02 (2004), pp. 69-106
+
+[Set09] B. Settles. "Active learning literature survey". In: (2009)
+
+[THTS05] G. Tur, D. Hakkani-Tür, and R. E. Schapire. "Combining active and semi-supervised learning for spoken language understanding". In: Speech Communication 45.2 (2005), pp. 171-186
+
+[Vel+17] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. "Graph attention networks". In: arXiv preprint arXiv:1710.10903 (2017)
+
+[Wan+19] M. Wang et al. "Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks". In: arXiv preprint arXiv:1909.01315 (2019)
+
+[Wu+19a] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. "Simplifying graph convolutional networks". In: International conference on machine learning. PMLR. 2019, pp. 6861-6871
+
+[Wu+19b] Y. Wu, Y. Xu, A. Singh, Y. Yang, and A. Dubrawski. "Active learning for graph neural networks via node feature propagation". In: arXiv preprint arXiv:1910.07567 (2019)
+
+[Zha+21] W. Zhang, Z. Yang, Y. Wang, Y. Shen, Y. Li, L. Wang, and B. Cui. "Grain: Improving data efficiency of graph neural networks via diversified influence maximization". In: arXiv preprint arXiv:2108.00219 (2021)
+
+[ZG02] X. Zhu and Z. Ghahramani. "Learning from labeled and unlabeled data with label propagation". In: (2002)
+
+## A Estimation and prediction of the GP model
+
+Given the assumptions and notations above, the likelihood of GP model can be written as:
+
+$$
+f\left( {\mathbf{y} \mid \mu ,{\sigma }^{2},\mathbf{\theta }}\right) \propto \exp \left\lbrack {-\frac{1}{2{\sigma }^{2}}{\left( \mathbf{y} - \mathbf{1}\mu \right) }^{T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}\left( {\mathbf{y} - \mathbf{1}\mu }\right) }\right\rbrack .
+$$
+
+Here, we assume $\mathbf{\theta } = \left( {{\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{d}}\right)$ is a known parameter, and only $\mu$ and ${\sigma }^{2}$ are left to fit. The MLEs of $\mu$ and ${\sigma }^{2}$ are $\widehat{\mu } = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{y}_{i}$ and ${\widehat{\sigma }}^{2} = \frac{1}{n}{\left( \mathbf{y} - \mathbf{1}\widehat{\mu }\right) }^{T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}\left( {\mathbf{y} - \mathbf{1}\widehat{\mu }}\right)$ .
+
+Given a testing point ${\mathbf{x}}_{ * }^{\left( k\right) }$ , the GP model fitted on the labeled data $D$ gives the prediction of the response $f\left( {\mathbf{x}}_{ * }^{\left( k\right) }\right) \sim N\left( {{\mu }^{ * },{\sigma }^{*2}}\right)$ , where
+
+$$
+{\mu }^{ * } = \widehat{\mu } + {k}^{*T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}\left( {\mathbf{y} - \mathbf{1}\widehat{\mu }}\right) \;\text{ and }\;{\sigma }^{*2} = {\widehat{\sigma }}^{2}\left( {1 - {k}^{*T}\mathbf{K}{\left( {\mathbf{X}}^{\left( k\right) }\right) }^{-1}{k}^{ * }}\right) \tag{4}
+$$
+
+$$
+{k}^{ * } = {\left\lbrack K\left( {\mathbf{x}}_{1},{\mathbf{x}}^{ * }\right) , K\left( {\mathbf{x}}_{2},{\mathbf{x}}^{ * }\right) ,\ldots , K\left( {\mathbf{x}}_{n},{\mathbf{x}}^{ * }\right) \right\rbrack }^{T} \in {\mathbb{R}}^{n \times 1}
+$$
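+
+The following is a minimal numpy sketch of these fit/predict equations for the 1-dimensional case used in the analysis; the jitter term added to the kernel matrix is a numerical assumption, not part of the model.
+
+```python
+# Minimal numpy sketch of the GP equations of this appendix: Gaussian kernel on
+# the (1-dimensional) propagated features, plug-in estimates of mu and sigma^2,
+# and the predictive mean / variance of Eq. (4).
+import numpy as np
+
+def gauss_kernel(a, b, theta):
+    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / theta)
+
+def gp_fit_predict(x_train, y_train, x_test, theta, jitter=1e-9):
+    K = gauss_kernel(x_train, x_train, theta) + jitter * np.eye(len(x_train))
+    Kinv = np.linalg.inv(K)
+    mu_hat = y_train.mean()                                # estimate of mu
+    resid = y_train - mu_hat
+    sigma2_hat = resid @ Kinv @ resid / len(y_train)       # estimate of sigma^2
+    ks = gauss_kernel(x_train, x_test, theta)              # (n_train, n_test)
+    mean = mu_hat + ks.T @ Kinv @ resid                    # predictive mean, Eq. (4)
+    var = sigma2_hat * (1.0 - np.sum(ks * (Kinv @ ks), axis=0))  # predictive variance, Eq. (4)
+    return mean, var
+```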
+
+## B Proof of Theorem 5.1
+
+Before proving theorem 5.1, we first provide some preliminary results of Gaussian kernel matrix.
+
+### B.1 Preliminary of Gaussian kernel matrix
+
+Lemma B.1. Let $\mathbf{K}$ be the Gaussian kernel matrix of vector $\left( {{\mathbf{c}}_{1},{\mathbf{c}}_{2},\ldots ,{\mathbf{c}}_{m}}\right)$ . Since $\mathop{\min }\limits_{{i \neq j}}\begin{Vmatrix}{{\mathbf{c}}_{i} - {\mathbf{c}}_{j}}\end{Vmatrix} > \delta$ , we have ${\mathbf{K}}_{ij} < \exp \left\lbrack {-\frac{{\delta }^{2}}{\theta }}\right\rbrack$ . Denote $\epsilon = \exp \left\lbrack {-\frac{{\delta }^{2}}{\theta }}\right\rbrack$ . Then, ${K}_{ij}^{-1} > - \epsilon$ if $i \neq j$ , and $1 < {K}_{ii}^{-1} < 1 + \left( {m - 1}\right) {\epsilon }^{2}$ .
+
+Proof. Let $\mathbf{K} = \mathbf{I} + \mathbf{A}$ . By the Neumann series, ${\mathbf{K}}^{-1} = \mathbf{I} + \mathop{\sum }\limits_{{t = 1}}^{\infty }{\left( -1\right) }^{t}{\mathbf{A}}^{t}$ . Thus, ${\mathbf{K}}_{ij}^{-1} > - {\mathbf{A}}_{ij} > - \epsilon$ for $i \neq j$ , and $1 < {\mathbf{K}}_{ii}^{-1} < 1 + \mathop{\sum }\limits_{{j \neq i}}{\mathbf{A}}_{ij}^{2} < 1 + \left( {m - 1}\right) {\epsilon }^{2}$
+
+### B.2 Proof that MaxUncertainty samples ${2m}$ nodes from cluster $m$
+
+During the initial sampling stage, the nodes at the cluster centers are sampled. Then, the variance of a sample $x$ is,
+
+$$
+\operatorname{Var}\left( {f\left( x\right) }\right) = {\sigma }^{2}\left( {1 - {\mathbf{k}}^{T}{\mathbf{K}}^{-1}\mathbf{k}}\right) , \tag{5}
+$$
+
+where $\mathbf{k} = \left( {K\left( {x,{c}_{1}}\right) , K\left( {x,{c}_{2}}\right) ,\ldots , K\left( {x,{c}_{m}}\right) }\right)$ and $\mathbf{K} = \mathbf{K}\left( \mathbf{c}\right)$ is the Gaussian kernel matrix of $\mathbf{c} = \left( {{c}_{1},{c}_{2},\ldots ,{c}_{m}}\right) .$
+
+For a node $x$ from cluster $i$ , $\operatorname{Var}\left( {f\left( x\right) }\right)$ is monotonically increasing as $x$ moves from the cluster center to the boundary. Let $\omega = \exp \left\lbrack {-\frac{{\left( \delta - {d}_{m}\right) }^{2}}{2\theta }}\right\rbrack$ . Since $\left| {x - {c}_{j}}\right| \geq \delta - {d}_{i} \geq \delta - {d}_{m}$ for $j \neq i$ , we naturally have ${\mathbf{k}}_{j} \leq \omega$ . Then, following Lemma B.1,
+
+$$
+\mathbf{k}{\left( x,\mathbf{c}\right) }^{T}{\mathbf{K}}^{-1}\mathbf{k}\left( {x,\mathbf{c}}\right) \geq \exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack {\mathbf{K}}_{ii}^{-1} > \exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack \tag{6}
+$$
+
+With equation (6), we can upper bound the variance of a node $x$ from cluster $i$ .
+
+In the next step, we lower bound the variance of $x$ at the boundary of cluster $m$ (largest cluster), and show that its variance is strictly larger than nodes from other clusters.
+
+$$
+\mathbf{k}{\left( x,\mathbf{c}\right) }^{T}{\mathbf{K}}^{-1}\mathbf{k}\left( {x,\mathbf{c}}\right) < \left( {\exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack + \left( {m - 1}\right) {\omega }^{2}}\right) \left\lbrack {1 + \left( {m - 1}\right) {\epsilon }^{2}}\right\rbrack , \tag{7}
+$$
+
+Since $\delta \geq {d}_{m} + \sqrt{{d}_{m}^{2} + \theta \log \left( {9m}\right) }$ , we have $\left( {m - 1}\right) {\epsilon }^{2} < \left( {m - 1}\right) {\omega }^{2} \leq \frac{1}{9}\exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack <$ $\frac{1}{9}\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack$ .
+
+$$
+\text{RHS of equation}7 \leq 2\left( {\exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack + \left( {m - 1}\right) {\omega }^{2}}\right)
+$$
+
+$$
+\leq \frac{1}{2}\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack + 2\left( {m - 1}\right) {\omega }^{2} < \frac{13}{18}\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack \tag{8}
+$$
+
+The RHS of equation 7 is strictly smaller than the LHS of equation 6. Therefore, the uncertainty of nodes at the boundary of cluster $m$ is larger than the uncertainty of nodes from other clusters. In our case, the feature dimension is 1 and there exist only 2 points at the boundary of cluster $m$ . However, since the nodes are continuously distributed, MaxUncertainty will pick the other $2\left( {m - 1}\right)$ nodes close to the boundary of cluster $m$ .
+
+### B.3 Bound the MSE of MaxUncertainty and DiverseUncertainty
+
+From the previous section, we have seen that the boundary nodes of cluster $m$ have the largest uncertainty. Thus, MaxUncertainty will sample ${2m}$ nodes from cluster $m$ . To lower bound the MSE of MaxUncertainty, we consider the other $\left( {m - 1}\right)$ clusters. Since the Gaussian Process model does not have noise, the MSE of the prediction is equal to its variance.
+
+Let $\mathbf{h} = \left( {\mathbf{c},\mathbf{s}}\right) \in {\mathbb{R}}^{3m}$ , where $\mathbf{h}$ are the sampled nodes and $\mathbf{s} \in {\mathbb{R}}^{2m}$ are the ${2m}$ nodes sampled during the dynamic sampling stage. Denote $\mathbf{K}\left( \mathbf{h}\right)$ to be Gaussian kernel matrix of $\mathbf{h}$ . Let $t = \left| {x - {c}_{i}}\right|$ be the distance from node $x$ to its cluster center.
+
+$$
+\mathbf{k}{\left( x,\mathbf{h}\right) }^{T}{\mathbf{K}}^{-1}\left( \mathbf{h}\right) \mathbf{k}\left( {x,\mathbf{h}}\right) \leq \left\lbrack {1 + m{\epsilon }^{2} + {2m}{\omega }^{2}}\right\rbrack \left( {\exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack + {3m}{\omega }^{2}}\right)
+$$
+
+$$
+\leq \left( {1 + {3m}{\omega }^{2}}\right) \left( {\exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack + {3m}{\omega }^{2}}\right) \tag{9}
+$$
+
+Moreover, we have ${\mathbb{E}}_{t}\left( {\exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack }\right) \leq \frac{1}{2}\left( {1 + \exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack }\right)$ . Let ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{4\theta }}\right\rbrack$ and $a = \exp \left\lbrack {-\frac{{d}_{m}^{2}}{\theta }}\right\rbrack$ ; then we have
+
+$$
+\operatorname{MSE}\left( {f\left( x\right) \mid \text{ Max Uncertainty }, x \in {\mathcal{S}}_{i}}\right) > {\sigma }^{2}\left\lbrack {\left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}}\right\rbrack . \tag{10}
+$$
+
+Hence, ${MSE}\left( {f\left( x\right) \mid \text{MaxUncertainty}}\right) > {\sigma }^{2}\mathop{\sum }\limits_{{i = 1}}^{{m - 1}}\frac{{d}_{i}^{2}}{\parallel \mathbf{d}{\parallel }^{2}}\left\lbrack {\left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}}\right\rbrack$ . Let $h\left( {r}_{i}^{2}\right) = \left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}$ ; $h$ is concave in ${d}_{i}^{2}$ . Thus,
+
+$$
+h\left( {r}_{m}^{2}\right) = \left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4} \leq \tau \mathop{\sum }\limits_{{i = 1}}^{{m - 1}}\frac{h\left( {r}_{i}^{2}\right) }{{r}_{m}^{2}}h\left( {r}_{m}^{2}\right) \leq \tau \mathop{\sum }\limits_{{i = 1}}^{{m - 1}}h\left( {r}_{i}^{2}\right) . \tag{11}
+$$
+
+Hence, ${MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) > \frac{{\sigma }^{2}}{1 + \tau }\mathop{\sum }\limits_{{i = 1}}^{m}\frac{{d}_{i}^{2}}{\parallel \mathbf{d}{\parallel }^{2}}\left\lbrack {\left( {\frac{1}{2} - \frac{a}{2} - \frac{{a}^{2}}{9}}\right) - \left( {\frac{1}{2} + \frac{a}{6}}\right) {r}_{i}^{4}}\right\rbrack$ .
+
+Then, we upper bound the MSE of DiverseUncertainty. Each cluster has 2 labeled nodes at its boundary, so for a node $x$ from cluster $i$ , the distance between $x$ and the closest labeled point is smaller than $\frac{{d}_{i}}{2}$ . Hence,
+
+$$
+\mathbf{k}{\left( x,\mathbf{h}\right) }^{T}{\mathbf{K}}^{-1}\left( \mathbf{h}\right) \mathbf{k}\left( {x,\mathbf{h}}\right) \geq \exp \left\lbrack {-\frac{{t}^{2}}{\theta }}\right\rbrack + \exp \left\lbrack {-\frac{{\left( {d}_{i} - t\right) }^{2}}{\theta }}\right\rbrack - 2\exp \left\lbrack {-\frac{{d}_{i}^{2}}{\theta }}\right\rbrack \exp \left\lbrack {-\frac{{t}^{2} + {\left( {d}_{i} - t\right) }^{2}}{2\theta }}\right\rbrack \geq \frac{2{r}_{i}}{1 + {r}_{i}^{2}}. \tag{12}
+$$
+
+Thus, we have $\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty,}x \in {\mathcal{S}}_{i}}\right) \leq {\sigma }^{2}\frac{{\left( 1 - {r}_{i}\right) }^{2}}{1 + {r}_{i}^{2}}$ and $\operatorname{MSE}(f\left( x\right) \mid$ DiverseUncertainty $) \leq {\sigma }^{2}\mathop{\sum }\limits_{{i = 1}}^{m}\frac{{d}_{i}^{2}}{\parallel \mathbf{d}{\parallel }^{2}}\frac{{\left( 1 - {r}_{i}\right) }^{2}}{1 + {r}_{i}^{2}}$ .
+
+Moreover, $\frac{{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty,}x \in {\mathcal{S}}_{i}}\right) }{{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty,}x \in {\mathcal{S}}_{i}}\right) } \geq \frac{1 + {r}_{i}^{2}}{1 - {r}_{i}}\left( {\frac{1}{2} + \frac{a}{6}}\right) - \frac{\left( 1 + {r}_{i}^{2}\right) }{{\left( 1 - {r}_{i}\right) }^{2}}\left( {\frac{2a}{3} + \frac{{a}^{2}}{9}}\right)$ . Since $\delta \geq {d}_{m} + {2\theta }\log \left( \frac{3\sqrt{m}}{1 - {r}_{m}}\right)$ , we have $a \leq {\left( 1 - {r}_{i}\right) }^{2}$ for all $i = 1,2,\ldots , m$ . Thus, $\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty,}x \in {\mathcal{S}}_{i}}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty,}x \in {\mathcal{S}}_{i}}\right) } \geq \frac{1}{2}\frac{1 + {r}_{i}^{2}}{1 - {r}_{i}} - \frac{8}{3} \geq \frac{1}{2}\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3}.$
+
+Now, we can lower bound $\frac{{MSE}\left( {f\left( x\right) \mid {MaxUncertainty}}\right) }{{MSE}\left( {f\left( x\right) \mid {DiverseUncertainty}}\right) }$ over the whole sample space, where
+
+$$
+\frac{{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty }}\right) } \geq \frac{1}{2\left( {1 + \tau }\right) }\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3}
+$$
+
+Moreover, when $\theta$ is large, ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{4\theta }}\right\rbrack \approx 1 - \frac{{d}_{i}^{2}}{4\theta }$. Thus, $\frac{{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{{MSE}\left( {f\left( x\right) \mid \text{ DiverseUncertainty }}\right) } \geq$ $\frac{1}{1 + \tau }O\left( \theta \right)$ .
+
+## C Ablation Experiments
+
+### C.1 Detail of round-robin algorithm
+
+Algorithm 3 Random Round-robin Algorithm
+
+---
+
+1: Input: cluster labels ${cl}_{n}$ of nodes $n \in {\mathcal{V}}_{\text{train}}$, where ${cl}_{n} \in \{1,2,\ldots , m\}$; candidate set ${\mathcal{C}}_{t}$; number of nodes to label ${B}_{t}$.
+
+2: Use the cluster labels to split ${\mathcal{C}}_{t}$ into clusters ${A}_{1},{A}_{2},\ldots ,{A}_{m}$. Without loss of generality, $\left| {A}_{1}\right| \leq \left| {A}_{2}\right| \leq \ldots \leq \left| {A}_{m}\right|$.
+
+3: ${S}_{t} = \varnothing$
+
+4: for $i = 1,2,\ldots ,{B}_{t}$ do
+
+5:   for $j = 1,2,\ldots , m$ do
+
+6:     if ${A}_{j} \neq \varnothing$ then
+
+7:       Uniformly select $x$ from ${A}_{j}$ at random
+
+8:       ${A}_{j} \leftarrow {A}_{j} \smallsetminus \{ x\} ,{S}_{t} \leftarrow {S}_{t} \cup \{ x\}$
+
+9:       break
+
+10:     end if
+
+11:   end for
+
+12: end for
+
+13: return ${S}_{t}$
+
+---
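+
+The following is a minimal NumPy sketch of Algorithm 3 as stated above: the clusters are ordered by increasing size once, and each of the ${B}_{t}$ draws takes a uniformly random node from the first non-empty cluster in that fixed order. The function name and the dictionary-based cluster representation are illustrative choices, not part of the original implementation.
+
+```python
+import numpy as np
+
+def random_round_robin(cluster_labels, candidate_set, budget, rng=None):
+    """Literal sketch of Algorithm 3: split the candidate set by cluster,
+    order the clusters by increasing size, and repeatedly draw one node
+    uniformly at random from the first non-empty cluster in that order."""
+    rng = np.random.default_rng() if rng is None else rng
+    clusters = {}
+    for node in candidate_set:
+        clusters.setdefault(cluster_labels[node], []).append(node)
+    ordered = sorted(clusters.values(), key=len)  # |A_1| <= ... <= |A_m|
+    selected = []
+    for _ in range(budget):
+        for bucket in ordered:
+            if bucket:  # first non-empty cluster in the fixed order
+                x = bucket.pop(rng.integers(len(bucket)))
+                selected.append(x)
+                break
+    return selected
+
+# Example usage with hypothetical cluster labels for 10 candidate nodes.
+labels = {n: n % 3 for n in range(10)}
+print(random_round_robin(labels, list(range(10)), budget=4))
+```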
+
+### C.2 Sensitivity to initial sampling ratio
+
+
+
+Figure 5: Compare different initial sampling ratios for Cora (left) and Citeseer (Right)
+
+### C.3 Compare sampling algorithms
+
+
+
+Figure 6: Compare different sampling algorithms to collect ${S}_{t}$ from the candidate set ${\mathcal{C}}_{t}$ .
+
+### C.4 Compare clustering algorithms
+
+
+
+Figure 7: Compare clustering different targets to select ${S}_{t}$ from the candidate set ${\mathcal{C}}_{t}$ .
+
+## D Empirical validation of theory
+
+Graph Simulation Setup. Let the dimension of the input feature be $d = 1$. We simulate $\mathbf{X}$ from two different clusters, where $\left( {X \mid {C}_{1}}\right) \sim$ Uniform(-15, -5) and $\left( {X \mid {C}_{2}}\right) \sim$ Uniform(8, 12). In our simulation, we randomly generate 100 nodes for each cluster. Then, we simulate the edges between nodes. The edges can be divided into two categories: edges within clusters and edges between clusters. To simulate the edges within clusters, for each node we randomly select two other nodes from the same cluster as its neighbors. For the edges between clusters, we set a probability threshold $r$ such that $\mathrm{P}\left\lbrack {{V}_{i} \in {C}_{1}\text{ connects to a node} \in {C}_{2}}\right\rbrack = r$. For each node ${V}_{i} \in {C}_{1}$, we generate an indicator variable ${I}_{i} \sim$ Bernoulli(r) to determine whether ${V}_{i}$ is connected to cluster 2 (${V}_{i}$ is connected to cluster 2 if ${I}_{i} = 1$). If ${V}_{i}$ is connected to cluster 2, we randomly pick a node from cluster 2 and connect it to ${V}_{i}$.
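+
+A small NumPy sketch of this simulation setup is given below; the value of $r$ and the random seed are arbitrary example choices.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n_per_cluster, r = 100, 0.05  # r: cross-cluster connection probability (example value)
+
+# Node features: cluster 1 ~ Uniform(-15, -5), cluster 2 ~ Uniform(8, 12).
+x = np.concatenate([rng.uniform(-15, -5, n_per_cluster),
+                    rng.uniform(8, 12, n_per_cluster)])
+cluster = np.repeat([0, 1], n_per_cluster)
+n = 2 * n_per_cluster
+
+edges = set()
+# Within-cluster edges: each node picks two random neighbors from its own cluster.
+for i in range(n):
+    same = np.flatnonzero((cluster == cluster[i]) & (np.arange(n) != i))
+    for j in rng.choice(same, size=2, replace=False):
+        edges.add((min(i, j), max(i, j)))
+
+# Cross-cluster edges: each cluster-1 node connects to a random cluster-2 node w.p. r.
+for i in np.flatnonzero(cluster == 0):
+    if rng.random() < r:
+        j = rng.choice(np.flatnonzero(cluster == 1))
+        edges.add((min(i, j), max(i, j)))
+
+print(len(edges), "undirected edges over", n, "nodes")
+```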
+
+
+
+Figure 8: Compare the MSEs of Uncertainty and DiverseUncertainty algorithms under different correlation levels between clusters.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5547f2f5859cb44cdc1502ae8b619e3b2c526a5f
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/BCg0P57qU96/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,316 @@
+§ SCATTERSAMPLE: DIVERSIFIED LABEL SAMPLING FOR DATA EFFICIENT GRAPH NEURAL NETWORK LEARNING
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+What target labels are most effective for graph neural network (GNN) training? In some applications where GNNs excel, like drug design or fraud detection, labeling new instances is expensive. We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting. ScatterSample employs a sampling module termed DiverseUncertainty to collect instances with large uncertainty from different regions of the sample space for labeling. To ensure diversification of the selected nodes, DiverseUncertainty clusters the high-uncertainty nodes and selects the representative nodes from each cluster. Our ScatterSample algorithm is further supported by rigorous theoretical analysis demonstrating its advantage compared to standard active sampling methods that aim to simply maximize the uncertainty and not diversify the samples. In particular, we show that ScatterSample is able to efficiently reduce the model uncertainty over the whole sample space. Our experiments on five datasets show that ScatterSample significantly outperforms the other GNN active learning baselines; specifically, it reduces the sampling cost by up to $\mathbf{{50}}\%$ while achieving the same test accuracy.
+
+§ 1 INTRODUCTION
+
+How to spot the most effective labeled nodes for GNN training? Graph neural networks (GNNs) [KW16; Vel+17; Wu+19a], which employ non-linear and parameterized feature propagation [ZG02] to compute graph representations, have been widely employed in a broad range of learning tasks and achieve state-of-the-art performance in node classification, link prediction and graph classification. Training GNNs for node classification in the supervised learning setup typically requires a large number of labeled examples, such that the GNN can learn from diverse node features and node connectivity patterns. However, labeling costs can be high, which inhibits the possibility of acquiring a large number of node labels. For example, GNNs can be used to assist drug design, but evaluating the properties of a molecule is time consuming: it usually takes one to two weeks using current simulation tools, not to mention the cost of laboratory experiments.
+
+Active learning (AL) aims at maximizing the generalization performance under a constrained labeling budget [Set09]. AL algorithms choose which training instances to use as labeled targets to maximize the performance of the learned model. Previous research in AL algorithms for GNN training can be categorized with respect to whether the AL methods take into account the model weights (model aware) or can be applied to any model (model agnostic). Model agnostic algorithms label a representative subset of the nodes such that the labeled nodes can cover the whole sample space [Wu+19b; Zha+21]. Model aware AL algorithms leverage the GNN model to compute the node uncertainty, which combines both the input features and graph structure [CZC17; Gao+18]. AL then picks the nodes with the highest uncertainty.
+
+However, maximizing the uncertainty of the labeled nodes may not balance the exploration and exploitation of the classification boundary [KVAG19]. For example, if there exists a group of nodes that are close to the classification boundary but clustered in a small region of the graph, labeling only the most uncertain nodes explores that specific region of the classification boundary, while other regions are ignored and the classification boundary is not well explored. Thus, our first main contribution is to simultaneously consider the node uncertainty and the diversification of the uncertain nodes over the sample space.
+
+Challenges of diversifying uncertain nodes. Graph data present additional challenges to diversifying the uncertain nodes. Diversification requires modeling the sample space using carefully selected representations for the nodes. However, there are two challenges in finding a suitable node representation.
+
+Challenge 1: The sample space for graph data requires a representation that takes both the graph structure and the node features into account (see Sec. 4.2).
+
+Challenge 2: The representation should be robust to the model trained so far, and not be biased by the limited amount of available labels.
+
+Our approach. We develop ScatterSample for data-efficient GNN learning. ScatterSample allows us to explore the classification boundary while exploiting the nodes with the highest uncertainty. To diversify the uncertain samples on graph-structured data, ScatterSample includes a DiverseUncertainty module that addresses the two challenges above by clustering the representations of the uncertain nodes over the whole sample space.
+
+Our Contributions. The contributions of our work are the following.
+
+ * Insight: ScatterSample is the first method that proposes and implements diversification of the uncertain samples for data efficient GNN learning.
+
+
+Figure 1: ScatterSample wins: test accuracy vs. sampling ratio on the ogbn-products dataset (62M edges).
+
+ * Effectiveness: We evaluate ScatterSample on five different graph datasets, where ScatterSample saves up to ${50}\%$ labeling cost, while still achieving the same test accuracy with state-of-the-art baselines.
+
+ * Theoretical Guarantees: Our theoretical analysis proves the superiority of ScatterSample over the standard, uncertainty-sampling method (see Theorem 5.1). Simulation results further confirm our theory.
+
+§ 2 RELATED WORK
+
+This section reviews uncertainty-based active learning research and the implementation of active learning in GNNs.
+
+Active Learning (AL): Active learning aims at selecting a subset of the training data as labeling targets such that the model performance is optimized [Set09; Han+14]. Uncertainty sampling is one major approach to active learning, which labels a group of samples to maximally reduce the model uncertainty. To achieve this goal, uncertainty sampling selects samples around the decision boundary [THTS05]. Uncertainty sampling has also been applied in deep learning, and researchers have proposed different methods to measure the uncertainty of samples. For example, Ducoffe and Precioso [DP18] developed a margin-based method which uses the distance from a sample to its smallest adversarial sample to approximate its distance to the decision boundary.
+
+AL and GNNs: AL with GNNs requires incorporating the graph structure information into the node selection. Wu et al. [Wu+19b] use the propagated features followed by K-Medoids clustering of nodes to select a group of representative instances. Zhang et al. [Zha+21] measure the importance of nodes by combining diversity and influence scores. However, the above approaches do not account for the learned GNN model, which may limit the generalization performance. Uncertainty sampling has also been used to select nodes. Cai et al. [CZC17] propose to use a weighted average of the node uncertainty, graph centrality and information density scores. Gao et al. [Gao+18] further propose a different approach that combines the three features with multi-armed bandit techniques. Although useful, these approaches aim to choose nodes with the highest uncertainty and may be challenged if the selected nodes are clustered in a small region of the graph, which does not provide good graph coverage. Our work addresses this limitation by diversifying the selected nodes based on the graph structure.
+
+§ 3 PRELIMINARIES
+
+Problem Statement. We are given a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$, where $\mathcal{V}$ is the set of $N = \left| \mathcal{V}\right|$ nodes and $\mathcal{E}$ is the set of edges. The set of nodes is divided into the training set ${\mathcal{V}}_{\text{train}}$, validation set ${\mathcal{V}}_{\text{valid}}$ and testing set ${\mathcal{V}}_{\text{test}}$. Each node ${v}_{n} \in \mathcal{V}$ is associated with a feature vector ${\mathbf{x}}_{n} \in {\mathbb{R}}^{d}$ and a label ${y}_{n} \in \{ 1,2,\ldots ,C\}$. Let $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ be the feature matrix of all the nodes in the graph, where the $n$-th row of $\mathbf{X}$ corresponds to ${v}_{n}$, and $\mathbf{y} = \left( {{y}_{1},{y}_{2},\ldots ,{y}_{N}}\right) \in {\mathbb{R}}^{N}$ is the vector containing all the labels. To learn the labels of the nodes, we train a GNN model $M$ which maps the graph $\mathcal{G}$ and $\mathbf{X}$ to the prediction of labels $\widehat{\mathbf{y}}$.
+
+Active Learning: Active learning picks a subset of nodes $S \subset {\mathcal{V}}_{\text{train}}$ from the training set and queries their labels ${\mathbf{y}}_{S}$. A GNN model ${M}_{S}$ is trained with respect to the feature matrix $\mathbf{X}$ and ${\mathbf{y}}_{S}$. Given the sampling budget $B$, the goal of active learning is to find a set $S\left( {\left| S\right| \leq B}\right)$ such that the generalization loss is minimized, i.e.
+
+$$
+\underset{S : \left| S\right| \leq B}{\arg \min }{\mathbb{E}}_{{v}_{n} \in {\mathcal{V}}_{\text{ test }}}\left( {\ell \left( {{y}_{n},f\left( {{\mathbf{x}}_{n} \mid \mathcal{G},{M}_{S}}\right) }\right) }\right) .
+$$
+
+§ 3.1 GRAPH NEURAL NETWORKS AND MESSAGE PASSING
+
+In this section we present the basic operation of a GNN at layer $l$. In the message passing paradigm, the layer updates of most GNN models can be interpreted as message vectors that are exchanged among neighbors over the edges of the graph and aggregated at the nodes.
+
+In the following, let ${\mathbf{h}}_{v}^{\left( l\right) } \in {\mathbb{R}}^{{d}_{1}}$ be the hidden representation of node $v$ at layer $l$, and let $\phi$ be a message function that combines the hidden representations of nodes $v$ and $u$. Using the message vectors of the neighboring edges, the node representations are then updated as follows
+
+$$
+{\mathbf{h}}_{v}^{\left( l + 1\right) } = \psi \left( {{\mathbf{h}}_{v}^{\left( l\right) },\rho \left( \left\{ {\phi \left( {{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{h}}_{u}^{\left( l\right) }}\right) : \left( {u,v}\right) \in \mathcal{E}}\right\} \right) }\right) \tag{1}
+$$
+
+where $\rho$ is a reduce function used to aggregate the messages coming from the neighbors of $v$ and $\psi$ is an update function defined on each node to update the hidden node representation for layer $l + 1$ . By defining $\phi ,\rho ,\psi$ different GNN models can be instantiated [KW16; DBV16; Bro+17; IMG20]. These functions are also parameterized by learnable matrices that are updated during training.
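+
+As a concrete illustration of (1), the sketch below instantiates one message-passing layer with simple example choices: $\phi$ returns the neighbor's representation, $\rho$ averages the incoming messages, and $\psi$ is a linear update followed by a ReLU. These choices, the function name and the toy data are illustrative and do not correspond to the specific GNN used in our experiments.
+
+```python
+import numpy as np
+
+def message_passing_layer(h, edges, W_self, W_neigh):
+    """One layer of (1) with phi(h_v, h_u) = h_u, rho = mean over incoming
+    messages, and psi(h_v, m_v) = ReLU(h_v W_self + m_v W_neigh)."""
+    agg = np.zeros_like(h)
+    deg = np.zeros(h.shape[0])
+    for u, v in edges:                 # message from u to v along edge (u, v)
+        agg[v] += h[u]
+        deg[v] += 1
+    agg = agg / np.maximum(deg, 1)[:, None]              # rho: mean aggregation
+    return np.maximum(h @ W_self + agg @ W_neigh, 0.0)   # psi: update + ReLU
+
+# Tiny example: 3 nodes with 4-dimensional features and two directed edges.
+rng = np.random.default_rng(0)
+h = rng.normal(size=(3, 4))
+W_self, W_neigh = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
+print(message_passing_layer(h, [(0, 1), (2, 1)], W_self, W_neigh).shape)
+```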
+
+§ 4 PROPOSED METHOD: SCATTERSAMPLE
+
+We propose the ScatterSample algorithm, which dynamically samples a set of diverse nodes with large uncertainties in order to more efficiently explore the classification boundary during GNN training. At each round, our method calculates the uncertainty for all nodes with the GNN model trained so far. Then, ScatterSample clusters the top uncertain nodes and selects nodes from each cluster to obtain diverse samples. The labels of the selected nodes are queried and used as supervision to continue training the GNN model for the next round. This section explains our method in detail.
+
+§ 4.1 SELECTING THE UNCERTAIN NODES
+
+The uncertainty of a node is measured by the information entropy. Given a trained GNN model at the $t$ -th sampling round, ScatterSample first computes the information entropy ${\phi }_{\text{ entropy }}\left( {v}_{n}\right)$ of nodes in ${\mathcal{V}}_{\text{ train }}$ based on the current GNN model, i.e.
+
+$$
+{\phi }_{\text{ entropy }}\left( {v}_{n}\right) = - \mathop{\sum }\limits_{{j = 1}}^{C}\log \left( {\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X},M}\right\rbrack }\right) \mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X},M}\right\rbrack \tag{2}
+$$
+
+where $\mathrm{P}\left\lbrack {{Y}_{n} = j \mid \mathcal{G},\mathbf{X},M}\right\rbrack$ is the probability that node ${v}_{n}$ belongs to class $j$ given the GNN model $M$. Then, ScatterSample ranks all the nodes in order of decreasing uncertainty, and picks the ones with the largest information entropy into a candidate set ${\mathcal{C}}_{t} \subset {\mathcal{V}}_{\text{train}}$. Different from traditional AL techniques that select training targets solely based on uncertainty, we then move on to pick a diverse subset of the uncertain nodes over the sampling space.
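+
+A minimal sketch of this step is shown below: given the class probabilities predicted by the current GNN, compute the entropy in (2) for every training node and keep the most uncertain ones as candidates. The helper name and the toy probabilities are hypothetical.
+
+```python
+import numpy as np
+
+def entropy_candidates(probs, num_candidates):
+    """probs: (N, C) class probabilities P[Y_n = j | G, X, M] from the current GNN.
+    Returns the indices of the num_candidates nodes with the largest entropy (2)."""
+    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # small eps avoids log(0)
+    return np.argsort(-entropy)[:num_candidates]
+
+# Example: 5 training nodes, 3 classes; node 2 is closest to uniform (most uncertain).
+probs = np.array([[0.90, 0.05, 0.05],
+                  [0.60, 0.30, 0.10],
+                  [0.34, 0.33, 0.33],
+                  [0.10, 0.80, 0.10],
+                  [0.50, 0.40, 0.10]])
+print(entropy_candidates(probs, num_candidates=2))
+```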
+
+§ 4.2 DIVERSIFYING UNCERTAIN NODES
+
+Our goal is to ensure the diversity of the nodes selected for labeling by exploring the node distribution over the sample space. At this point, the question naturally arises: how do we model the sample space? We need a representation of the nodes to define the space, based on which we can measure distances between samples. A straightforward approach is to use the GNN embedding space, since the classification boundary is directly depicted there. However, GNN embeddings fail to address the two challenges raised in the introduction.
+
+First, with active learning, only a limited number of labeled nodes are available in the initial stages. Hence, only the already labeled nodes may have reliable GNN embeddings, which biases the subsequent samples. Second, GNN embeddings for node classification may not carry enough information for diversification. GNNs usually do not have an MLP layer connecting to the output. The final GNN outputs of uncertain nodes are not diverse enough, since highly uncertain nodes may have similar class probabilities (class probabilities close to uniform). Conversely, embeddings of intermediate GNN layers may have an appropriate dimension but lack information about the expanded ego-network.
+
+These drawbacks are confirmed in Sec. 6.2, where we show that using GNN embeddings as proxy representations leads to a performance drop. Moreover, different from other machine learning problems, the nodes are correlated with each other, and we also need to take the graph structure into account when diversifying the samples. Hence, to address all these considerations we will employ a $k$ -step propagation of the original node features based on the graph structure as a proxy representation for the nodes. The $k$ -step propagation of nodes ${\mathbf{X}}^{\left( k\right) } = \left( {{\mathbf{x}}_{1}^{\left( k\right) },{\mathbf{x}}_{2}^{\left( k\right) },\ldots ,{\mathbf{x}}_{N}^{\left( k\right) }}\right)$ is defined as follows
+
+$$
+{\mathbf{X}}^{\left( k\right) } \mathrel{\text{ := }} {\mathbf{{SX}}}^{\left( k - 1\right) } \tag{3}
+$$
+
+where $\mathbf{S}$ is the normalized adjacency matrix, and ${\mathbf{X}}^{\left( 0\right) }$ are the initial node features. The operation in (3) is efficient and amenable to a mini-batch implementation. Such representations are well-known to succinctly encode the node feature distribution and graph structure. Next, we calculate the proxy representations for the candidate high-uncertainty nodes in the set ${\mathcal{C}}_{t}$. To maximize the diversity of the samples, we cluster the proxy representations in ${\mathcal{C}}_{t}$ using $k$-means++ into ${B}_{t}$ clusters [AV06], and select the nodes closest to the cluster centers for labeling, using the ${L}_{2}$ distance metric. One node from each cluster is selected, which amounts to ${B}_{t}$ samples.
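+
+The propagation in (3) can be sketched with SciPy sparse matrices as follows. The sketch assumes a symmetrically normalized adjacency with added self-loops, $\mathbf{S} = \mathbf{D}^{-1/2}(\mathbf{A} + \mathbf{I})\mathbf{D}^{-1/2}$; the exact normalization is not fixed by the text, so this particular choice is an assumption.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def propagate_features(adj, X, k):
+    """k-step feature propagation X^(k) = S X^(k-1) as in (3), with
+    S = D^{-1/2} (A + I) D^{-1/2} (assumed normalization with self-loops)."""
+    n = adj.shape[0]
+    a_hat = adj + sp.identity(n, format="csr")
+    deg = np.asarray(a_hat.sum(axis=1)).ravel()
+    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
+    S = d_inv_sqrt @ a_hat @ d_inv_sqrt
+    for _ in range(k):
+        X = S @ X
+    return X
+
+# Example: a 4-node path graph with 2-dimensional node features.
+adj = sp.csr_matrix(np.array([[0, 1, 0, 0],
+                              [1, 0, 1, 0],
+                              [0, 1, 0, 1],
+                              [0, 0, 1, 0]], dtype=float))
+X = np.arange(8, dtype=float).reshape(4, 2)
+print(propagate_features(adj, X, k=2))
+```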
+
+Algorithm 1 ScatterSample Algorithm
+
+1: Input: ${\mathcal{V}}_{\text{train}}$, GNN model $M$, number of propagation layers $k$, number of sampling rounds $T$, sampling redundancy $r$, initial sampling budget ${B}_{0}$ and total sampling budget $B$.
+
+2: Initialize $S = \varnothing$
+
+3: Compute ${\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{V}}_{\text{train}}$ as in (3).
+
+Initial Sampling:
+
+4: Use $k$-means++ to cluster $\left\{ {\mathbf{x}}_{n}^{\left( k\right) }\right\}$ into ${B}_{0}$ clusters.
+
+5: Add the node closest to the cluster center of each cluster to $S$.
+
+6: Query the labels of nodes ${v}_{n} \in S$, denoted by ${\mathbf{y}}_{S}$.
+
+7: Train model $M$ using $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$.
+
+Dynamic Sampling:
+
+8: Initialize sampling round $t = 1$
+
+9: while $t < T$ do
+
+10:   Let ${B}_{t} = \min \left( {B - \left| S\right| ,\left( {B - {B}_{0}}\right) /T}\right)$
+
+11:   Use the DiverseUncertainty algorithm to select ${S}_{t}$
+
+12:   Query the labels of ${S}_{t}$, and update $S = S \cup {S}_{t}$.
+
+13:   Train model $M$ over $\left( {{\mathbf{y}}_{S},\mathbf{X},\mathcal{G}}\right)$. Update $t = t + 1$.
+
+14: end while
+
+Clearly, the size of the candidate set $\left| {\mathcal{C}}_{t}\right| \geq {B}_{t}$; however, deciding how many candidate nodes to choose from is important. We parameterize the size as a multiple of the number of selected nodes, namely $\left| {\mathcal{C}}_{t}\right| = r{B}_{t}$, where $r > 1$ is the sampling redundancy. If $r$ is too small, the selected nodes are closer to the classification boundary (have larger information entropy) but may not be diverse enough. On the other hand, if $r$ is too large, the set will be diverse, but the selected nodes may be far away from the classification boundary. Therefore, it is critical to pick a suitable $r$ to achieve a sweet spot between diversity and uncertainty. We leave the discussion of choosing $r$ to Sec. 6.2. Besides empirical validation with experiments on five real datasets (see Sec. 6), our diversification approach is theoretically motivated (see Sec. 5).
+
+The pseudo-code of ScatterSample is shown in Algorithm 1. ScatterSample is a multi-round sampling scheme, which includes an initial sampling step and dynamic sampling steps. ScatterSample
+
+Algorithm 2 DiverseUncertainty Algorithm
+
+1: Input: ${\mathcal{V}}_{\text{train}},\left\{ {{\mathbf{x}}_{n}^{\left( k\right) }\forall n \in {\mathcal{C}}_{t}}\right\} ,r,{B}_{t}$
+
+2: Compute ${\phi }_{\text{entropy}}\left( v\right) \forall v \in {\mathcal{V}}_{\text{train}}$; see (2).
+
+3: ${\mathcal{C}}_{t} \leftarrow \left\{ {r{B}_{t}\text{ nodes with largest }{\phi }_{\text{entropy}}\left( v\right) }\right\}$.
+
+4: Use $k$-means++ to cluster the ${\mathbf{x}}_{n}^{\left( k\right) }$ (for all $n \in {\mathcal{C}}_{t}$) into ${B}_{t}$ clusters.
+
+5: ${S}_{t} \leftarrow \varnothing$
+
+6: for $j = 1,2,\ldots ,{B}_{t}$ do
+
+7:   Compute the cluster center ${\mathbf{v}}_{j}$ of cluster $j$
+
+8:   Pick node $x \leftarrow \arg \mathop{\min }\limits_{{n \in {\mathcal{C}}_{t}}}\begin{Vmatrix}{{\mathbf{x}}_{n}^{\left( k\right) } - {\mathbf{v}}_{j}}\end{Vmatrix}$
+
+9:   ${S}_{t} \leftarrow {S}_{t} \cup \{ x\}$
+
+10: end for
+
+11: Return ${S}_{t}$
+
+first computes the $k$-step feature propagation of all the nodes in the training set using (3), and clusters them into ${B}_{0}$ clusters, where ${B}_{0}$ is the initial sampling budget. Then, ScatterSample picks the nodes closest to the cluster centers as the initial training samples and queries their labels. The purpose of clustering the $k$-step feature propagations is to force the initial training set to spread out over the whole sample space. It also helps to explore the classification boundary: if the initially sampled nodes are not diverse enough, we cannot picture the classification boundary of the regions that are far away from the initial training samples. ScatterSample repeats the dynamic sampling described in Algorithm 2 until the sampling budget $B$ is exhausted. The next section fortifies our diversification method with theoretical guarantees.
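+
+For completeness, a compact sketch of Algorithm 2 using NumPy and scikit-learn's $k$-means++ is given below. The entropy computation, candidate selection and per-cluster center matching follow the steps above; the function signature and the random toy data at the end are illustrative only.
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+def diverse_uncertainty(x_prop, probs, candidate_budget, label_budget, seed=0):
+    """Sketch of Algorithm 2: take the candidate_budget most uncertain nodes,
+    cluster their propagated features into label_budget groups with k-means++,
+    and select the node closest to each cluster center for labeling."""
+    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
+    candidates = np.argsort(-entropy)[:candidate_budget]      # C_t, |C_t| = r * B_t
+    km = KMeans(n_clusters=label_budget, init="k-means++", n_init=10,
+                random_state=seed).fit(x_prop[candidates])
+    selected = []
+    for center in km.cluster_centers_:                        # one node per cluster
+        dists = np.linalg.norm(x_prop[candidates] - center, axis=1)
+        selected.append(candidates[np.argmin(dists)])
+    return np.array(selected)                                 # S_t, |S_t| = B_t
+
+# Example with random data: 50 nodes, 8-dim propagated features, 3 classes.
+rng = np.random.default_rng(0)
+x_prop = rng.normal(size=(50, 8))
+probs = rng.dirichlet(np.ones(3), size=50)
+print(diverse_uncertainty(x_prop, probs, candidate_budget=20, label_budget=5))
+```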
+
+§ 5 THEORETICAL ANALYSIS
+
+In Sec. 6.2, we show that DiverseUncertainty is significantly better than the MaxUncertainty algorithm. In this section, we provide theoretical analysis and simulation results to demonstrate the benefits of DiverseUncertainty and explain why the MaxUncertainty algorithm may fail. The results presented here give a theoretical basis for the superiority of our method as established in the experiments in Section 6.
+
+§ 5.1 ANALYSIS SETUP
+
+For the analysis, we employ the Gaussian Process (GP) model [O'H78]. GP models offer a flexible approach to modeling complex functions and are robust to small sample sizes [See04]. Moreover, the uncertainty of the prediction can be easily computed with a GP model. Neural network models and GNNs interpolate the observed samples, while GPs provide a robust framework for interpolating samples that is amenable to analysis.
+
+Assume the label ${y}_{i} \in \mathbb{R}$ depends on the propagated features ${\mathbf{x}}_{i}^{\left( k\right) }$ through a GP model. The label ${y}_{i}$ is modeled by a Gaussian Process, where $\left( {\mathbf{y} \mid {\mathbf{X}}^{\left( k\right) }}\right) \sim N\left( {\mathbf{1}\mu ,\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right) }\right)$ and $\mathbf{K}\left( {\mathbf{X}}^{\left( k\right) }\right)$ is the Gaussian kernel matrix. The kernel is parameterized by ${\mathbf{K}}_{ij}\left( {\mathbf{X}}^{\left( k\right) }\right) = K\left( {{\mathbf{x}}_{i}^{\left( k\right) },{\mathbf{x}}_{j}^{\left( k\right) }}\right) = \exp \left( {-\frac{1}{2}{\left( {\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }\right) }^{T}{\mathbf{\Sigma}}^{-1}\left( {{\mathbf{x}}_{i}^{\left( k\right) } - {\mathbf{x}}_{j}^{\left( k\right) }}\right) }\right)$, where $\mathbf{\Sigma} = \operatorname{diag}\left( {{\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{d}}\right)$. Consider that the sample space of ${\mathbf{x}}^{\left( k\right) }$ can be clustered into $m$ clusters ${\mathcal{S}}_{1},{\mathcal{S}}_{2},\ldots ,{\mathcal{S}}_{m}$, and denote the cluster centers by ${\mathbf{c}}_{1},{\mathbf{c}}_{2},\ldots ,{\mathbf{c}}_{m}$ and the cluster radii, without loss of generality, by ${d}_{1} \leq {d}_{2} \leq {d}_{3} \leq \cdots < {d}_{m}$. The clusters are well separated: the distance between cluster centers is larger than $\delta$, i.e. $\mathop{\min }\limits_{{i \neq j}}{\begin{Vmatrix}{\mathbf{c}}_{i} - {\mathbf{c}}_{j}\end{Vmatrix}}_{2} \geq \delta \left( {\delta > 2{d}_{m}}\right)$. Moreover, we assume that no cluster dominates the sample space, ${d}_{m}^{2} \leq \tau \mathop{\sum }\limits_{{j = 1}}^{{m - 1}}{d}_{j}^{2}$, and that the samples are uniformly distributed over the clusters.
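+
+For intuition, the snippet below computes the posterior variance of such a noise-free GP at a query point in the one-dimensional case, which is the prediction uncertainty used throughout the analysis and in Appendix B. The prior variance scale `sigma2` and the example inputs are illustrative assumptions.
+
+```python
+import numpy as np
+
+def gp_posterior_variance(x_star, x_labeled, theta, sigma2=1.0):
+    """Posterior variance of a noise-free GP with Gaussian kernel
+    k(x, x') = exp(-(x - x')^2 / (2 * theta)) in 1-D:
+    Var[f(x*)] = sigma2 * (1 - k(x*, h)^T K(h)^{-1} k(x*, h))."""
+    def kernel(a, b):
+        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * theta))
+    K = kernel(x_labeled, x_labeled)
+    k_star = kernel(np.atleast_1d(float(x_star)), x_labeled)  # shape (1, n_labeled)
+    return sigma2 * (1.0 - (k_star @ np.linalg.solve(K, k_star.T)).item())
+
+# Example: two labeled points; the variance is larger away from both of them.
+labeled = np.array([-1.0, 2.0])
+print(gp_posterior_variance(0.0, labeled, theta=4.0))
+print(gp_posterior_variance(10.0, labeled, theta=4.0))
+```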
+
+§ 5.2 MAXUNCERTAINTY VS DIVERSEUNCERTAINTY
+
+Here, we show that DiverseUncertainty achieves a significantly smaller mean squared error (MSE) than MaxUncertainty. Without loss of generality, we consider $m$ clusters and the following definitions.
+
+ * MaxUncertainty: select the ${2m}$ most uncertain samples.
+
+ * DiverseUncertainty: select the 2 most uncertain samples from each cluster.
+
+
+Figure 2: The area enclosed by the blue circles is the sample space of propagated features (2D case). The green stars are sampled nodes during initial sampling (cluster center). The red stars are the sampled nodes during uncertainty sampling. (a) MaxUncertainty picks the nodes with largest uncertainty, which is equivalent to sampling the boundary of cluster 2. (b) DiverseUncertainty diversifies the clustered nodes, and samples the boundary of both clusters.
+
+Before presenting the theory, we illustrate the operation of our method and of MaxUncertainty in Figure 2. ScatterSample first clusters the samples in the propagated feature space (blue circles in Figure 2) and selects the nodes closest to the cluster centers for initial training (green stars in Figure 2). Then, during the dynamic sampling steps, we compute the uncertainty using equation 4. The MaxUncertainty approach selects the nodes with the largest uncertainty. Under our setup, this is equivalent to sampling nodes at the boundary of the largest cluster, since the distance to the cluster center is the most important factor of uncertainty (Figure 2(a)). DiverseUncertainty instead diversifies the high-uncertainty nodes, which is equivalent to sampling from the boundary of each cluster (Figure 2(b)). The red stars of Figure 2 show the nodes labeled during the uncertainty sampling stage. Since the MaxUncertainty algorithm only labels nodes in cluster 2, cluster 1 is ignored and its prediction uncertainty cannot be reduced. On the contrary, DiverseUncertainty samples nodes from both clusters 1 and 2 and can thus reduce the prediction uncertainty in both clusters.
+
+Then, the following theorem quantifies the relationship of the MSEs of both algorithms under the setup of Sec. 5.1.
+
+Theorem 5.1. Consider a case where feature dimension $d = 1$ . With the above notation and assumptions, let ${r}_{i} = \exp \left\lbrack {-\frac{{d}_{i}^{2}}{2\theta }}\right\rbrack$ . If we satisfy ${d}_{m}^{2} \geq {d}_{m - 1}^{2} + 4\log \theta$ and $\delta \geq {d}_{m} +$ $\max \left( {\sqrt{{d}_{m}^{2} + \theta \log \left( {9m}\right) },{2\theta }\log \left( \frac{3\sqrt{m}}{1 - {r}_{m}}\right) }\right)$ , we have
+
+$$
+\frac{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ MaxUncertainty }}\right) }{\operatorname{MSE}\left( {f\left( x\right) \mid \text{ Diverse Uncertainty }}\right) } \geq \frac{1}{2\left( {1 + \tau }\right) }\frac{1 + {r}_{m}^{2}}{1 - {r}_{m}} - \frac{8}{3} = \frac{1}{\tau + 1}O\left( \theta \right) .
+$$
+
+Proof: The complete proof is included in Appendix B.
+
+Theorem 5.1 suggests that when the GP function is smooth enough (large $\theta$), MaxUncertainty will have a larger MSE than the DiverseUncertainty algorithm (the proof is in Appendix B). A large $\theta$ implies a strong correlation between the labels of nodes that are close to each other. This is common for most graph datasets, where samples clustered together usually have similar labels. Thus, DiverseUncertainty achieves a smaller MSE in this case.
+
+Table 1: Statistics of graph datasets used in experiments.
+
+| Data | #Nodes | #Train Nodes | #Edges | #Classes |
+| --- | --- | --- | --- | --- |
+| Cora | 2,708 | 1,208 | 5,429 | 7 |
+| Citeseer | 3,327 | 1,827 | 4,732 | 6 |
+| Pubmed | 19,717 | 18,217 | 44,328 | 3 |
+| Corafull | 19,793 | 18,293 | 126,842 | 70 |
+| ogbn-products | 2,449,029 | 196,615 | 61,859,149 | 47 |
+§ 6 EXPERIMENTS
+
+We evaluate the performance of ScatterSample on five different datasets.
+
+Datasets. We evaluate the different methods on the Cora, Citeseer, Pubmed, Corafull [KW16], and ogbn-products [Hu+20] datasets (Table 1). Except for ogbn-products, we do not keep the original split of the training and testing sets. The nodes that are not in the validation or testing sets (the validation and testing sets follow the split in the dgl package "dgl.data" [Wan+19]) are added to the training set. Labels can only be queried from the training set.
+
+Baselines. For different sampling budget $B$ , we compare the test accuracy of ScatterSample with the following graph active learning baselines:
+
+ * Random sampling. Select $B$ nodes uniformly at random from ${\mathcal{V}}_{\text{ train }}$ .
+
+ * AGE [CZC17]: AGE computes a score which combines the node centrality, information density, and uncertainty, to select $B$ nodes with the highest scores.
+
+ * ANRMAB [Gao+18]: ANRMAB learns the combination weights of the three metrics used by AGE with multi-armed bandit method.
+
+ * FeatProp: FeatProp [Wu+19b] clusters the feature propagations into $B$ clusters and picks the nodes closest to the cluster centers.
+
+ * Grain [Zha+21]: Grain scores each node by a weighted average of its influence score and diversity score, and selects the top $B$ nodes with the largest scores. Grain includes two different approaches for selecting nodes, Grain (ball-D) and Grain (NN-D).
+
+ * ScatterSample: For the small-scale graph datasets (Cora, Citeseer), we set the initial sampling budget to $3\% \cdot \left| {\mathcal{V}}_{\text{train}}\right|$ and sample $1\% \cdot \left| {\mathcal{V}}_{\text{train}}\right|$ each round during the dynamic sampling period. For the medium-scale datasets (Pubmed and Corafull), we set the initial sampling budget to $1\% \cdot \left| {\mathcal{V}}_{\text{train}}\right|$ and sample $0.5\% \cdot \left| {\mathcal{V}}_{\text{train}}\right|$ each dynamic sampling round. For the large-scale dataset (ogbn-products), the initial sampling budget is $0.2\% \cdot \left| {\mathcal{V}}_{\text{train}}\right|$, and each dynamic sampling round selects $0.05\% \cdot \left| {\mathcal{V}}_{\text{train}}\right|$ nodes.
+
+GNN setup. We train a 2-layer GCN network with hidden layer dimension 64 for Cora, Citeseer and Pubmed, and 128 for Corafull and ogbn-products. To train the GNN, we follow the standard random neighbor sampling [HYL17], where for each node we randomly sample 5 neighbors for the convolution operation in each layer. We use the functions in the "dgl" package to train the GNNs [Wan+19].
+
+§ 6.1 PERFORMANCE RESULTS
+
+We compare the performance of different active graph neural network learning algorithms under different labeling budgets ($B$). We parameterize the labeling budget $B$ as a proportion of the nodes in the training set $\left( {B = r\left| {\mathcal{V}}_{\text{train}}\right| }\right)$. For Cora and Citeseer, we vary $r$ from 5% to 15% in increments of 2%; for Pubmed and Corafull, $r$ is varied from 3% to 10%; for the ogbn-products dataset, we vary $r$ from 0.3% to 1%. The performance of the active learning algorithms is measured by the test accuracy.
+
+Accuracy. Figure 3 shows the test accuracy of the baselines trained on different proportions of selected nodes. ScatterSample improves the test accuracy and consistently outperforms the other baselines on all datasets. On Citeseer, ScatterSample requires 9% of the node labels to achieve a test accuracy of 74.2%, while the best alternative baselines, "Grain (ball-D)" and "Grain (NN-D)", need to label 15% of the nodes to achieve similar accuracy, which corresponds to a ${40}\%$ saving in labeling cost. Similarly, on Pubmed and ogbn-products, ScatterSample achieves a 50% labeling cost reduction compared to the best alternative baseline.
+
+Efficiency. Here, we compare the computation time among the methods that use both the graph structure and node features to select the samples, namely ScatterSample, "Grain (ball-D)" and "Grain (NN-D)". We use the ogbn-products dataset for the comparison. ScatterSample takes less than 8 hours to determine the nodes to label and train the GNN, while the Grain algorithm requires more than 240 hours. Grain requires $\mathcal{O}\left( {n}^{2}\right)$ operations to calculate the scores of all nodes, which is prohibitive for large graphs.
+
+Complexity analysis. The computational complexity of DiverseUncertainty is $O\left( {\left| E\right| + r{B}_{t}^{2}}\right)$. ScatterSample includes two parts: 1) computing the node representations, with complexity $O\left( \left| E\right| \right)$ where $\left| E\right|$ is the number of edges, and 2) clustering the uncertain nodes, with complexity $O\left( {r{B}_{t}^{2}}\right)$. Since both $r$ and ${B}_{t}$ are small, $r{B}_{t}^{2} < \left| E\right|$, so our method does not add much extra burden compared to the model training time.
+
+
+Figure 3: ScatterSample (blue) wins consistently: comparison of the test accuracy of active GNN learning algorithms at different labeling budgets. The $x$-axis shows # labeled nodes / # nodes in the training set.
+
+§ 6.2 ABLATION STUDY
+
+The DiverseUncertainty module of ScatterSample needs to determine the size of the candidate set ${\mathcal{C}}_{t}$ before selecting a subset ${S}_{t}$ from ${\mathcal{C}}_{t}$ for labeling. Hence, the sampling redundancy $r$ and the clustering algorithm used to cluster the nodes in ${\mathcal{C}}_{t}$ will affect the performance of ScatterSample. In this section, we evaluate the effect of both factors.
+
+
+Figure 4: Performance under different sampling redundancies $r$. When $r = 1$, DiverseUncertainty reduces to the MaxUncertainty method.
+
+Sampling redundancy $r$: Recall from Algorithm 1 that the sampling redundancy $r$ controls the size of the candidate set ${\mathcal{C}}_{t}$ relative to the size of the sampled set ${S}_{t}$. When $r = 1$, ScatterSample reduces to the standard MaxUncertainty algorithm. Figure 4 shows that sampling only the most uncertain nodes is significantly worse than DiverseUncertainty. For the Citeseer dataset, DiverseUncertainty outperforms MaxUncertainty by over 7% when the sampling ratio is 5%. Therefore, to achieve a good test accuracy, $r$ should be carefully selected. Figure 4 suggests that as $r$ increases, the test accuracy first increases quickly and then decreases slowly.
+
+Sensitivity to the initial sampling ratio: During the initial sampling stage, DiverseUncertainty samples ${B}_{0}$ nodes to train the initial model, and the initially trained model affects the nodes sampled during the dynamic sampling period. We test the effect of different initial sampling ratios on the Cora and Citeseer datasets. We vary the initial sampling ratio from 2% to 4%, and Figure A5 shows that DiverseUncertainty is robust to the choice of the initial sampling ratio.
+
+Diverse uncertainty algorithms: Besides the sampling algorithm used by DiverseUncertainty, there are other algorithms to pick the representative nodes from the candidate set ${\mathcal{C}}_{t}$. First, we evaluate three algorithms to cluster and select from the propagated features.
+
+ * Random select: randomly pick nodes ${S}_{t}$ from ${\mathcal{C}}_{t}$ .
+
+ * DiverseUncertainty: use $k$-means++ to cluster the nodes in ${\mathcal{C}}_{t}$ and select the node closest to each cluster center.
+
+ * Random round-robin algorithm [Cit+21]: use the cluster labels from the initial sampling period (the initial sampling period clusters all the nodes in ${\mathcal{V}}_{\text{train}}$). Then, follow Algorithm A3 (see Appendix) to select ${S}_{t}$ from ${\mathcal{C}}_{t}$.
+
+Figure A6 suggests that the $k$-means++ clustering algorithm achieves a better test accuracy in most cases compared to random selection or the random round-robin algorithm (see Appendix). Moreover, compared to the random sampling algorithm, the $k$-means++ clustering algorithm is more robust as the sampling ratio increases: the test accuracy of $k$-means++ keeps increasing in most cases, while the test accuracy of the random sampling algorithm fluctuates more.
+
+Another factor that affects the test performance is the representation used for clustering. Besides the propagated features (which are used by DiverseUncertainty), we can also cluster the input features or the embedding vectors. Since the GNN models typically used do not have a fully connected layer connecting to the output, we cannot use the output of the second-to-last layer as the embedding. Hence, we use the GNN output as the embedding vector for clustering. Figure A7 shows that clustering the propagated features consistently outperforms clustering the other two targets. Especially for the Citeseer dataset, clustering the propagated features outperforms the alternatives by up to 5%. To conclude, the $k$-means++ clustering algorithm achieves the best performance compared to the other selection methods, and clustering the propagated features is better than clustering the other targets. Thus, DiverseUncertainty uses $k$-means++ to cluster the propagated features to pick ${S}_{t}$ from ${\mathcal{C}}_{t}$.
+
+§ 7 EMPIRICAL VALIDATION OF THEOREM
+
+In this section, we perform a simulation analysis to demonstrate that ScatterSample can reduce the MSE compared to the greedy uncertainty sampling approach.
+
+Graph Simulation Setup. Let the dimension of the input feature be $d = 1$. We simulate $\mathbf{X}$ from two different clusters, where $\left( {X \mid {C}_{1}}\right) \sim$ Uniform(-15, -5) and $\left( {X \mid {C}_{2}}\right) \sim$ Uniform(8, 12). In our simulation, we randomly generate 100 nodes for each cluster. Each node is randomly connected to two other nodes in the same cluster. Moreover, for the edges between clusters, we set a probability threshold $r$ such that $\mathrm{P}\left\lbrack {{V}_{i} \in {C}_{1}\text{ connects to a node } \in {C}_{2}}\right\rbrack = r$ (see Appendix D for details).
+
+Label of nodes. The label of a node depends on its propagated features. We first compute the 1-layer feature propagation of each node, ${\mathbf{X}}^{\left( 1\right) }$. Then, the label of the $i$-th node is ${y}_{i} = {\left| {X}_{i}^{\left( 1\right) }\right| }^{2}$. Because the two cluster centers are equally distant from 0, the label function is symmetric around 0.
+
+Node sampling. During the initial sampling step, label the nodes closest to the cluster centers and train the GP function. To sample uncertain nodes,
+
+ * MaxUncertainty: Label the 8 nodes with largest uncertainty.
+
+ * DiverseUncertainty: Collect the top 80 nodes with largest uncertainty into the candidate set. Then, use $k$ -means++ to cluster the nodes in the candidate set into 8 clusters. Label the 8 nodes closest to the cluster centers.
+
+MaxUncertainty and DiverseUncertainty use the newly labeled nodes to update the GP function respectively. Finally, the trained GP function predicts the node labels, and we compute the corresponding MSE.
+
+Figure A8 in the Appendix suggests that MaxUncertainty has a larger MSE than the DiverseUncertainty algorithm. For the MaxUncertainty algorithm, since most of the labeled nodes come from cluster 1, the MSE of cluster 1 is significantly smaller than that of cluster 2. For the DiverseUncertainty algorithm, the MSEs of clusters 1 and 2 are comparable. As $r$ increases, there are more and more edges between clusters, and the propagated features are less separated. Hence, some high-uncertainty nodes from cluster 1 are very close to cluster 2, which helps MaxUncertainty learn the labels of nodes from cluster 2. Thus, we observe that $\frac{\text{ MSE of MaxUncertainty }}{\text{ MSE of DiverseUncertainty }}$ keeps decreasing as $r$ increases. When $r$ is very large, clusters 1 and 2 merge into one cluster, and the MSEs of both methods no longer differ significantly.
+
+§ 8 CONCLUSION
+
+Learning a GNN model with a limited labeling budget is an important but challenging problem. In this paper:
+
+ * We propose a novel data efficient GNN learning algorithm, ScatterSample, which efficiently diversifies the uncertain nodes and achieves better test accuracy than recent baselines.
+
+ * We provide theoretical guarantees: Theorem 5.1 proves the advantage of ScatterSample over MaxUncertainty sampling.
+
+ * Experiments on real data show that ScatterSample can save up to ${50}\%$ of the labeling cost for the same test accuracy.
+
+We envision that ScatterSample will inspire future research on combining uncertainty sampling and representation (diversity) sampling.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..54ef422c56a8f4168e2de945cc6e7f2a0d025b2d
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,498 @@
+# Diffusion Models for Graphs Benefit From Discrete State Spaces
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Denoising diffusion probabilistic models and score matching models have proven to be very powerful for generative tasks. While these approaches have also been applied to the generation of discrete graphs, they have, so far, relied on continuous Gaussian perturbations. Instead, in this work, we suggest using discrete noise for the forward Markov process. This ensures that in every intermediate step the graph remains discrete. Compared to the previous approach, our experimental results on four datasets and multiple architectures show that using a discrete noising process results in higher-quality generated samples, with average MMDs reduced by a factor of 1.5. Furthermore, the number of denoising steps is reduced from 1000 to 32, leading to a 30-times faster sampling procedure.
+
+## 1 Introduction
+
+Score-based [1] and denoising diffusion probabilistic models (DDPMs) [2, 3] have recently achieved striking results in generative modeling and in particular in image generation. Instead of learning a complex model that generates samples in a single pass (like a Generative Adversarial Network [4] (GAN) or a Variational Auto-Encoder [5] (VAE)), a diffusion model is a parameterized Markov Chain trained to reverse an iterative predefined process that gradually transforms a sample into pure noise. Although diffusion processes have been proposed for both continuous [6] and discrete [7] state spaces, their use for graph generation has only focused on Gaussian diffusion processes which operate in the continuous state space $\left\lbrack {8,9}\right\rbrack$ .
+
+This contribution suggests adapting the denoising procedure to an actual graph distribution and using discrete noise, leading to a random graph model. We describe this procedure based on the Discrete DDPM framework proposed by Austin et al. [7], Hoogeboom et al. [10]. Our experiments show that using discrete noise greatly reduces the number of denoising steps that are needed and improves the sample quality. We also suggest the use of a simple expressive graph neural network architecture [11] for denoising, which, while bringing expressivity benefits, contrasts with more complicated architectures currently used for graph denoising [8].
+
+## 2 Related Work
+
+Traditionally, graph generation has been studied through the lens of random graph models [12-14]. While this approach is insufficient to model many real-world graph distributions, it is useful to create synthetic datasets and provides a useful abstraction. In fact, we will use Erdős-Rényi graphs [12] to model the prior distribution of our diffusion process.
+
+Due to their larger number of parameters and expressive power, deep generative models have achieved better results in modeling complex graph distributions. The most successful graph generative models can be divided into two different techniques: a) auto-regressive graph generative models, which generate the graph sequentially node-by-node [15, 16], and b) one-shot generative models, which generate the whole graph in a single forward pass [8, 9, 17-21]. While auto-regressive models can generate graphs with hundreds or even thousands of nodes, they can suffer from mode collapse $\left\lbrack {{20},{21}}\right\rbrack$. One-shot graph generative models are more resilient to mode collapse but are more challenging to train, while still not scaling easily beyond tens of nodes. Recently, one-shot generation has been scaled up to graphs of hundreds of nodes thanks to spectral conditioning [21], suggesting that good conditioning can largely benefit graph generation. Still, the suggested training procedure is cumbersome as it involves 3 different intertwined Generative Adversarial Networks (GANs). Finally, Variational Auto-Encoders (VAEs) have also been studied to generate graphs but remain difficult to train, as the loss function needs to be permutation invariant [22], which can necessitate an expensive graph matching step [17].
+
+In contrast, the score-based models $\left\lbrack {8,9}\right\rbrack$ have the potential to provide both a simple, stable training objective, similar to that of the auto-regressive models, and the good graph distribution coverage provided by the one-shot models. Niu et al. [8] provided the first score-based model for graph generation by directly using the score-based model formulation by Song and Ermon [1] and additionally accounting for the permutation equivariance of graphs. Jo et al. [9] extended this to featured graph generation by formulating the problem as a system of two stochastic differential equations, one for feature generation and one for adjacency generation. The graph and the features are then generated in parallel. This approach provided promising results for molecule generation. Results on slightly larger graphs were also improved but remained imperfect. Importantly, both contributions rely on a continuous Gaussian noise process and use a thousand denoising steps to achieve good results, which makes graph generation slow.
+
+As shown by Song et al. [6], score matching is tightly related to denoising diffusion probabilistic models [3], which provide a more flexible formulation that is more easily amenable to graph generation. In particular, for the noisy samples to remain discrete graphs, the perturbations need to be discrete. Such discrete diffusion has been successfully used for quantized image generation [23, 24] and text generation [25]. Diffusion using the multinomial distribution was proposed in Hoogeboom et al. [10]. Then, Austin et al. [7] extended the previous work by Hoogeboom et al. [10], Song et al. [26] and provided a general recipe for denoising diffusion models in discrete state spaces, which mainly requires the specification of a doubly stochastic Markov transition matrix $\mathbf{Q}$ that ensures the Markov process conserves probability mass and converges to a stationary distribution. In the next section, we describe a formulation of this perturbation matrix $\mathbf{Q}$ leading to Erdős-Rényi random graphs.
+
+## 3 Discrete Diffusion for Simple Graphs
+
+Diffusion models [2] are generative models based on a forward and a reverse Markov process. The forward process, denoted $q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right)$, generates a sequence of increasingly noisy latent variables ${\mathbf{A}}_{t}$ from the initial sample ${\mathbf{A}}_{0}$ to white noise ${\mathbf{A}}_{T}$. Here the sample ${\mathbf{A}}_{0}$ and the latent variables ${\mathbf{A}}_{t}$ are adjacency matrices. The learned reverse process ${p}_{\theta }\left( {\mathbf{A}}_{0 : T}\right) = p\left( {\mathbf{A}}_{T}\right) \mathop{\prod }\limits_{{t = 1}}^{T}{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ attempts to progressively denoise the latent variable ${\mathbf{A}}_{t}$ in order to produce samples from the desired distribution. Here we will focus on simple graphs, but the approach can be extended in a straightforward manner to account for different edge types. We use the model from [10] and, for convenience, adopt the representation of [7] for our discrete process.
+
+### 3.1 Forward Process
+
+Let the row vector ${\mathbf{a}}_{t}^{ij} \in \{ 0,1{\} }^{2}$ be the one-hot encoding of the $(i, j)$ element of the adjacency matrix ${\mathbf{A}}_{t}$. Here $t \in \left\lbrack {0, T}\right\rbrack$ denotes the timestep of the process, where ${\mathbf{A}}_{0}$ is a sample from the data distribution and ${\mathbf{A}}_{T}$ is an Erdős-Rényi random graph. The forward process is described by repeated multiplication of each element's row vector, ${\mathbf{a}}_{t}^{ij} = {\mathbf{a}}_{t - 1}^{ij}{\mathbf{Q}}_{t}$, with a doubly stochastic matrix ${\mathbf{Q}}_{t}$. Note that the forward process is independent for each edge/non-edge $i \neq j$. The matrix ${\mathbf{Q}}_{t} \in {\mathbb{R}}^{2 \times 2}$ is
+
+modeled as
+
+$$
+{\mathbf{Q}}_{t} = \left\lbrack \begin{matrix} 1 - {\beta }_{t} & {\beta }_{t} \\ {\beta }_{t} & 1 - {\beta }_{t} \end{matrix}\right\rbrack , \tag{1}
+$$
+
+where ${\beta }_{t}$ is the probability of the edge state being flipped ${}^{1}$. This formulation ${}^{2}$ has the advantage of allowing direct sampling at any timestep of the diffusion process without computing any previous timesteps. Indeed, the matrix ${\overline{\mathbf{Q}}}_{t} = \mathop{\prod }\limits_{{i < t}}{\mathbf{Q}}_{i}$ can be expressed in the form of (1) with ${\beta }_{t}$ replaced by ${\bar{\beta }}_{t} = \frac{1}{2} - \frac{1}{2}\mathop{\prod }\limits_{{i < t}}\left( {1 - 2{\beta }_{i}}\right)$. Eventually, we want the probability ${\bar{\beta }}_{t} \in \left\lbrack {0,{0.5}}\right\rbrack$ to vary from 0 (unperturbed sample) to 0.5 (pure noise). In this contribution, we limit ourselves to symmetric graphs and therefore only need to model the upper triangular part of the adjacency matrix. The noise is sampled i.i.d. over all of the edges.
+
+---
+
+${}^{1}$ Note that two different $\beta$ ’s could be used for edges and non-edges. This case is left for future work.
+
+${}^{2}$ Note that we use a different parametrization for (1) than [10]. To recover the original formulation, one can simply divide all ${\beta }_{t}$ by 2 .
+
+---
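+
+To illustrate the forward process, the following NumPy sketch draws ${\mathbf{A}}_{t} \sim q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right)$ by flipping every upper-triangular entry independently with the cumulative flip probability ${\bar{\beta }}_{t}$, which is what repeated multiplication with ${\overline{\mathbf{Q}}}_{t}$ amounts to for a binary state. The function name and the example graph are illustrative.
+
+```python
+import numpy as np
+
+def sample_forward(A0, beta_bar, rng=None):
+    """Sample A_t ~ q(A_t | A_0) for a simple undirected graph: each
+    upper-triangular entry of A_0 is flipped independently with probability
+    beta_bar (the cumulative flip probability)."""
+    rng = np.random.default_rng() if rng is None else rng
+    iu = np.triu_indices(A0.shape[0], k=1)
+    flips = rng.random(len(iu[0])) < beta_bar
+    At = A0.copy()
+    At[iu] = np.where(flips, 1 - A0[iu], A0[iu])
+    At[iu[1], iu[0]] = At[iu]          # keep the adjacency matrix symmetric
+    return At
+
+# Example: a 4-node cycle perturbed with cumulative flip probability 0.25.
+A0 = np.array([[0, 1, 0, 1],
+               [1, 0, 1, 0],
+               [0, 1, 0, 1],
+               [1, 0, 1, 0]])
+print(sample_forward(A0, beta_bar=0.25, rng=np.random.default_rng(0)))
+```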
+
+### 3.2 Reverse Process
+
+To sample from the data distribution, the forward process needs to be reversed. Therefore, we need to estimate $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ . In our case, using the Markov property of the forward process this can be rewritten as (see Appendix A for derivation):
+
+$$
+q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }. \tag{2}
+$$
+
+Note that (2) is entirely defined by ${\beta }_{t}$ and ${\bar{\beta }}_{t}$ and ${\mathbf{A}}_{0}$ (see Appendix A, Equation 4).
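+
+For a single edge slot, the posterior in (2) reduces to a two-state computation; a minimal sketch (with a hypothetical helper name) is:
+
+```python
+import numpy as np
+
+def posterior_edge_prob(a_t, a_0, beta_t, beta_bar_prev):
+    """q(a_{t-1} = 1 | a_t, a_0) for one edge slot, following (2): proportional
+    to q(a_t | a_{t-1}) * q(a_{t-1} | a_0), where each factor is a two-state
+    flip distribution with flip probability beta_t and beta_bar_{t-1}."""
+    def flip_prob(target, source, beta):
+        return beta if target != source else 1.0 - beta
+    unnorm = np.array([flip_prob(a_t, k, beta_t) * flip_prob(k, a_0, beta_bar_prev)
+                       for k in (0, 1)])
+    return unnorm[1] / unnorm.sum()
+
+# Example: the edge is present at step t (a_t = 1) but was absent in A_0.
+print(posterior_edge_prob(a_t=1, a_0=0, beta_t=0.05, beta_bar_prev=0.3))
+```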
+
+### 3.3 Loss
+
+Diffusion models are typically trained to minimize a variational upper bound on the negative log-likelihood. This bound can be expressed as (see Appendix C or [3, Equation 5]):
+
+$$
+{L}_{\mathrm{vb}}\left( {\mathbf{A}}_{0}\right) \mathrel{\text{:=}} {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\Big\lbrack \underset{{L}_{T}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) }} + \mathop{\sum }\limits_{{t = 1}}^{T}{\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\underset{{L}_{t}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) }} \underset{{L}_{0}}{\underbrace{- {\mathbb{E}}_{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }}\Big\rbrack
+$$
+
+Practically, the model is trained to directly minimize the losses ${L}_{t}$ , i.e. the KL divergence ${D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right)$ by using the tractable parametrization of $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ from (2). Note that the discrete setting of the selected noise distribution prevents training the model to approximate the gradient of the distribution as done by score-matching graph generative models [8, 9].
+
+Parametrization of the reverse process. While it is possible to predict the logits of ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ directly in order to minimize ${L}_{\mathrm{vb}}$, we follow $\left\lbrack {3,7,{10}}\right\rbrack$ and use a network ${\mathrm{nn}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ that predicts the logits of the distribution ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$. This parametrization is known to stabilize the training procedure. To minimize ${L}_{\mathrm{vb}}$, (2) can then be used to recover ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ from ${\mathbf{A}}_{0}$ and ${\mathbf{A}}_{t}$.
+
+Alternate loss. Many implementations of DDPMs have found it beneficial to use alternative losses. For instance, [3] derived a simplified loss function that reweights the ELBO. Hybrid losses have been used in [27] and [7]. As shown in Appendix D, using the parametrization ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ , one can express the term ${L}_{t}$ as ${L}_{t} = - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right)$ . Empirically, we found that minimizing
+
+$$
+{L}_{\text{simple }} \mathrel{\text{:=}} - {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\mathop{\sum }\limits_{{t = 1}}^{T}\left( {1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}}\right) \cdot {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\log {p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) \tag{3}
+$$
+
+leads to stable training and better results. Note that this loss equals the cross-entropy loss between ${\mathbf{A}}_{0}$ and ${\operatorname{nn}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ . The re-weighting $1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}$ , which assigns linearly more importance to the less noisy samples, has been proposed in [23, Equation 7].
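+
+For concreteness, a minimal PyTorch sketch of one stochastic estimate of (3) could look as follows; the interfaces of `nn_theta` and `q_sample` and the batching conventions are illustrative assumptions rather than our exact implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def l_simple_term(nn_theta, A0, q_sample, bar_beta, t, T):
+    """One stochastic term of L_simple in (3) for a batch of graphs.
+
+    nn_theta : model mapping a noisy adjacency A_t to per-edge logits of p(A_0 | A_t)
+    A0       : (B, n, n) clean 0/1 adjacency matrices
+    q_sample : samples A_t ~ q(A_t | A_0) with flip probability bar_beta[t - 1]
+    """
+    At = q_sample(A0, bar_beta[t - 1])
+    logits = nn_theta(At, t)                               # (B, n, n) edge logits
+    weight = 1.0 - 2.0 * bar_beta[t - 1] + 1.0 / T         # linear re-weighting from (3)
+    # Negative log-likelihood of the clean graph = cross-entropy between A0 and nn_theta(A_t).
+    nll = F.binary_cross_entropy_with_logits(logits, A0.float(), reduction="mean")
+    return weight * nll
+```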
+
+### 3.4 Sampling
+
+For each loss, we use a specific sampling algorithm. For both approaches, we start by sampling each edge independently from a Bernoulli distribution with probability $p = 1/2$ (Erdős-Rényi random graph). Then, for the ${L}_{\mathrm{{vb}}}$ loss we follow Ho et al. [3] and iteratively reverse the chain by Bernoulli sampling from ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ until we obtain our sample from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right)$ . For the loss function ${L}_{\text{simple }}$ , we sample ${\mathbf{A}}_{0}$ directly from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ at each step $t$ and obtain ${\mathbf{A}}_{t - 1}$ by sampling again from $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right)$ . The two approaches are described algorithmically in Appendix E.
+
+The values of ${\bar{\beta }}_{t}$ are selected following a simple linear schedule for our reverse process [2]. We found that this works similarly well to other options such as the cosine schedule [27]. Note that in this case ${\beta }_{t}$ can be obtained from ${\bar{\beta }}_{t}$ in a straightforward manner (see Appendix B).
+
+## 4 Experiments
+
+We compare our graph discrete diffusion approach to the original score-based approach proposed by Niu et al. [8]. Models using this original formulation are denoted by score. We follow the training and evaluation setup used by previous contributions $\left\lbrack {8,9,{15},{19}}\right\rbrack$ . More details can be found in Appendix G. For evaluation, we compute MMD metrics from [15] between the generated graphs and the test set, namely, the degree distribution, the clustering coefficient, and the 4-node orbit counts. To demonstrate the efficiency of the discrete parameterization, the discrete models only use 32 denoising steps, while the score-based models use 1000 denoising steps, as originally proposed. We compare two architectures: 1. EDP-GNN as introduced by Niu et al. [8], and 2. a simpler and more expressive provably powerful graph network (PPGN) [11]. See Appendix F for a more detailed description of the architectures.
+
+| Model | Community Deg. | Community Clus. | Community Orb. | Community Avg. | Ego Deg. | Ego Clus. | Ego Orb. | Ego Avg. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| GraphRNN ${}^{ \dagger }$ | 0.030 | 0.030 | 0.010 | 0.017 | 0.040 | 0.050 | 0.060 | 0.050 | 0.033 |
+| ${\mathrm{{GNF}}}^{ \dagger }$ | 0.120 | 0.150 | 0.020 | 0.097 | 0.010 | 0.030 | 0.001 | 0.014 | 0.055 |
+| EDP-Score ${}^{ \dagger }$ | 0.006 | 0.127 | 0.018 | 0.050 | 0.010 | 0.025 | 0.003 | 0.013 | 0.031 |
+| SDE-Score ${}^{ \dagger }$ | 0.045 | 0.086 | 0.007 | 0.046 | 0.021 | 0.024 | 0.007 | 0.017 | 0.032 |
+| EDP-Score ${}^{3}$ | 0.016 | 0.810 | 0.110 | 0.320 | 0.04 | 0.064 | 0.005 | 0.037 | 0.178 |
+| PPGN-Score | 0.081 | 0.237 | 0.284 | 0.200 | 0.019 | 0.049 | 0.005 | 0.025 | 0.113 |
+| PPGN ${L}_{\mathrm{{vb}}}$ | 0.023 | 0.061 | 0.015 | 0.033 | 0.025 | 0.039 | 0.019 | 0.027 | 0.03 |
+| PPGN ${L}_{\text{simple }}$ | 0.019 | 0.044 | 0.005 | 0.023 | 0.018 | 0.026 | 0.003 | 0.016 | 0.019 |
+| EDP ${L}_{\text{simple }}$ | 0.024 | 0.04 | 0.012 | 0.026 | 0.019 | 0.031 | 0.017 | 0.022 | 0.024 |
+
+Table 1: MMD results for the Community and the Ego datasets. All values are averaged over 5 runs with 1024 generated samples without any sub-selection. The "Total" column denotes the average MMD over all of the 6 measurements. The best results of the "Avg." and "Total" columns are shown in bold. $\dagger$ marks the results taken from the original papers.
+
+| Model | SBM-27 Deg. | SBM-27 Clus. | SBM-27 Orb. | SBM-27 Avg. | Planar-60 Deg. | Planar-60 Clus. | Planar-60 Orb. | Planar-60 Avg. | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| EDP-Score | 0.014 | 0.800 | 0.190 | 0.334 | 1.360 | 1.904 | 0.534 | 1.266 | 0.8 |
+| PPGN ${L}_{\text{simple }}$ | 0.007 | 0.035 | 0.072 | 0.038 | 0.029 | 0.039 | 0.036 | 0.035 | 0.036 |
+| EDP ${L}_{\text{simple }}$ | 0.046 | 0.184 | 0.064 | 0.098 | 0.017 | 1.928 | 0.785 | 0.910 | 0.504 |
+
+Table 2: MMD results for the SBM-27 and the Planar-60 datasets.
+
+Table 1 shows the results for two datasets, Community-small $\left( {{12} \leq n \leq {20}}\right)$ and Ego-small $\left( {4 \leq n \leq {18}}\right)$ , used by Niu et al. [8]. To better compare our approach to traditional score-based graph generation, in Table 2 we additionally perform experiments on slightly more challenging datasets with larger graphs: a stochastic-block-model (SBM) dataset with three communities and ${24} \leq n \leq {27}$ nodes in total, and a planar dataset with $n = {60}$ nodes. Detailed information on the datasets can be found in Appendix H. Additional details concerning the evaluation setup are provided in Appendix G.4.
+
+Results. In Table 1, we observe that the proposed discrete diffusion process using the ${L}_{\mathrm{{vb}}}$ loss and the PPGN model leads to slightly improved average MMDs over the competitors. The ${L}_{\text{simple }}$ loss further improves the results over ${L}_{\mathrm{{vb}}}$ . The fact that the EDP- ${L}_{\text{simple }}$ model has significantly lower MMD values than the EDP-Score model is a strong indication that the proposed loss and the discrete formulation, rather than the PPGN architecture, are the cause of the improvement. This improvement comes with the additional benefit that sampling is greatly accelerated (30 times), as the number of timesteps is reduced from 1000 to 32. Table 2 shows that the proposed discrete formulation is even more beneficial when graph size and complexity increase. The PPGN-Score model even becomes infeasible to run in this setting due to its prohibitively expensive sampling procedure. A qualitative evaluation of the generated graphs is performed in Appendix I. Visually, the ${L}_{\text{simple }}$ loss leads to the best samples.
+
+## 5 Conclusion
+
+In this work, we demonstrated that discrete diffusion can increase sample quality and greatly improve the efficiency of denoising diffusion for graph generation. While the approach was presented for simple graphs with non-attributed edges, it could also be extended to cover graphs with edge attributes.
+
+---
+
+${}^{3}$ The discrepancy with the SDE-Score ${}^{ \dagger }$ results comes from the fact that we were unable to reproduce those results using the code provided by the authors. Strangely, their code leads to good results when used with our discrete formulation and the ${L}_{\text{simple }}$ loss, improving over the result reported in their contribution.
+
+---
+
+## References
+
+[1] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.
+
+[2] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015.
+
+[3] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
+
+[4] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020.
+
+[5] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 2014.
+
+[6] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
+
+[7] Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. Structured denoising diffusion models in discrete state-spaces. Advances in Neural Information Processing Systems, 34:17981-17993, 2021.
+
+[8] Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, and Stefano Ermon. Permutation invariant graph generation via score-based generative modeling. In International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 4474-4484, Online, 26-28 Aug 2020. PMLR.
+
+[9] Jaehyeong Jo, Seul Lee, and Sung Ju Hwang. Score-based generative modeling of graphs via the system of stochastic differential equations. In Proceedings of the International Conference on Machine Learning (ICML), 2022.
+
+[10] Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems, 34:12454-12465, 2021.
+
+[11] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In Advances in Neural Information Processing Systems, pages 2156-2167, 2019.
+
+[12] Paul Erdős, Alfréd Rényi, et al. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17-60, 1960.
+
+[13] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
+
+[14] Justin Eldridge, Mikhail Belkin, and Yusu Wang. Graphons, mergeons, and so on! In Advances in Neural Information Processing Systems, pages 2307-2315, 2016.
+
+[15] Jiaxuan You, Rex Ying, Xiang Ren, William L Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models. In ICML, 2018.
+
+[16] Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Will Hamilton, David K Duvenaud, Raquel Urtasun, and Richard Zemel. Efficient graph generation with graph recurrent attention networks. In Advances in Neural Information Processing Systems, pages 4255-4265, 2019.
+
+[17] Martin Simonovsky and Nikos Komodakis. GraphVAE: Towards generation of small graphs using variational autoencoders. In International Conference on Artificial Neural Networks, pages 412-422. Springer, 2018.
+
+[18] Nicola De Cao and Thomas Kipf. MolGAN: An implicit generative model for small molecular graphs. ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.
+
+[19] Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, and Kevin Swersky. Graph normalizing flows. Advances in Neural Information Processing Systems, 32:13578-13588, 2019.
+
+[20] Igor Krawczuk, Pedro Abranches, Andreas Loukas, and Volkan Cevher. GG-GAN: A geometric graph generative adversarial network. 2020.
+
+[21] Karolis Martinkus, Andreas Loukas, Nathanaël Perraudin, and Roger Wattenhofer. SPECTRE: Spectral conditioning helps to overcome the expressivity limits of one-shot graph generators. In Proceedings of the International Conference on Machine Learning (ICML), 2022.
+
+[22] Clement Vignac and Pascal Frossard. Top-N: Equivariant set and graph generation without exchangeability. In International Conference on Learning Representations, 2022.
+
+[23] Sam Bond-Taylor, Peter Hessey, Hiroshi Sasaki, Toby P. Breckon, and Chris G. Willcocks. Unleashing transformers: Parallel token prediction with discrete absorbing diffusion for fast high-resolution image generation from vector-quantized codes. In European Conference on Computer Vision (ECCV), 2022.
+
+[24] Patrick Esser, Robin Rombach, Andreas Blattmann, and Bjorn Ommer. ImageBART: Bidirectional context with multinomial diffusion for autoregressive image synthesis. Advances in Neural Information Processing Systems, 34:3518-3532, 2021.
+
+[25] Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. Autoregressive diffusion models. In International Conference on Learning Representations, 2022.
+
+[26] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2020.
+
+[27] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162-8171. PMLR, 2021.
+
+[28] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
+
+[29] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. Collective classification in network data articles. AI Magazine, 29:93-106, 09 2008. doi: 10.1609/aimag.v29i3.2157.
+
+[30] Der-Tsai Lee and Bruce J Schachter. Two algorithms for constructing a Delaunay triangulation. International Journal of Computer & Information Sciences, 9(3):219-242, 1980.
+
+## A Reverse Process Derivations
+
+In this appendix, we provide the derivation of the reverse probability $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ . Using Bayes' rule, we obtain
+
+$$
+q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) \cdot q\left( {{\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }
+$$
+
+$$
+= \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \cdot q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) q\left( {\mathbf{A}}_{0}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) \cdot q\left( {\mathbf{A}}_{0}\right) }
+$$
+
+$$
+= q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \cdot \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) },
+$$
+
+where we use the fact that $q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) = q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right)$ since ${\mathbf{A}}_{t}$ is independent of ${\mathbf{A}}_{0}$ given ${\mathbf{A}}_{t - 1}$ .
+
+This reverse probability is entirely defined with ${\beta }_{t}$ and ${\bar{\beta }}_{t}$ . For the $i, j$ element of $\mathbf{A}$ (denoted ${\mathbf{A}}^{ij}$ ), we obtain:
+
+$$
+q\left( {{\mathbf{A}}_{t - 1}^{ij} = 1 \mid {\mathbf{A}}_{t}^{ij},{\mathbf{A}}_{0}^{ij}}\right) = \left\{ \begin{array}{ll} \left( {1 - {\beta }_{t}}\right) \cdot \frac{\left( 1 - {\bar{\beta }}_{t - 1}\right) }{1 - {\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 1,{\mathbf{A}}_{0}^{ij} = 1 \\ \left( {1 - {\beta }_{t}}\right) \cdot \frac{{\bar{\beta }}_{t - 1}}{{\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 1,{\mathbf{A}}_{0}^{ij} = 0 \\ {\beta }_{t} \cdot \frac{\left( 1 - {\bar{\beta }}_{t - 1}\right) }{{\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 0,{\mathbf{A}}_{0}^{ij} = 1 \\ {\beta }_{t} \cdot \frac{{\bar{\beta }}_{t - 1}}{1 - {\bar{\beta }}_{t}}, & \text{ if }{\mathbf{A}}_{t}^{ij} = 0,{\mathbf{A}}_{0}^{ij} = 0 \end{array}\right. \tag{4}
+$$
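+
+For illustration, (4) translates directly into the following per-edge helper; this is a sketch with hypothetical argument names, not our exact implementation.
+
+```python
+def posterior_edge_prob(a_t, a_0, beta_t, bar_beta_t, bar_beta_tm1):
+    # q(A_{t-1}^{ij} = 1 | A_t^{ij} = a_t, A_0^{ij} = a_0) following (4);
+    # beta_t is the single-step flip probability, bar_beta_t / bar_beta_tm1
+    # the cumulative flip probabilities at steps t and t - 1.
+    if a_t == 1 and a_0 == 1:
+        return (1 - beta_t) * (1 - bar_beta_tm1) / (1 - bar_beta_t)
+    if a_t == 1 and a_0 == 0:
+        return (1 - beta_t) * bar_beta_tm1 / bar_beta_t
+    if a_t == 0 and a_0 == 1:
+        return beta_t * (1 - bar_beta_tm1) / bar_beta_t
+    return beta_t * bar_beta_tm1 / (1 - bar_beta_t)
+```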
+
+## B Conversion of ${\bar{\beta }}_{t}$ to ${\beta }_{t}$
+
+The selected linear schedule provides us with the values of ${\bar{\beta }}_{t}$ . In this appendix, we compute an expression for ${\beta }_{t}$ from ${\bar{\beta }}_{t}$ , which allows us easy computation of (2). By definition, we have ${\overline{\mathbf{Q}}}_{t} = {\overline{\mathbf{Q}}}_{t - 1}{\mathbf{Q}}_{t}$ which is equivalent to
+
+$$
+\left( \begin{matrix} 1 - {\bar{\beta }}_{t - 1} & {\bar{\beta }}_{t - 1} \\ {\bar{\beta }}_{t - 1} & 1 - {\bar{\beta }}_{t - 1} \end{matrix}\right) \left( \begin{matrix} 1 - {\beta }_{t} & {\beta }_{t} \\ {\beta }_{t} & 1 - {\beta }_{t} \end{matrix}\right) = \left( \begin{matrix} 1 - {\bar{\beta }}_{t} & {\bar{\beta }}_{t} \\ {\bar{\beta }}_{t} & 1 - {\bar{\beta }}_{t} \end{matrix}\right)
+$$
+
+Let us select the first row and first column equality. We obtain the following equation
+
+$$
+\left( {1 - {\bar{\beta }}_{t - 1}}\right) \left( {1 - {\beta }_{t}}\right) + {\bar{\beta }}_{t - 1}{\beta }_{t} = 1 - {\bar{\beta }}_{t},
+$$
+
+which, after some arithmetic, provides us with the desired answer
+
+$$
+{\beta }_{t} = \frac{{\bar{\beta }}_{t - 1} - {\bar{\beta }}_{t}}{2{\bar{\beta }}_{t - 1} - 1}.
+$$
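+
+A minimal NumPy sketch of this conversion, assuming a linear schedule for ${\bar{\beta }}_{t}$ with endpoints 0 and 0.5 (the exact endpoints are an assumption of the sketch), could look as follows:
+
+```python
+import numpy as np
+
+def linear_schedule(T):
+    # bar_beta_t rises linearly from ~0 (clean data) to 0.5 (pure noise);
+    # beta_t is recovered via beta_t = (bar_beta_{t-1} - bar_beta_t) / (2 * bar_beta_{t-1} - 1).
+    bar_beta = np.linspace(0.0, 0.5, T + 1)       # bar_beta[0] = 0 plays the role of bar_beta_0
+    beta = (bar_beta[:-1] - bar_beta[1:]) / (2.0 * bar_beta[:-1] - 1.0)
+    return bar_beta[1:], beta                      # entries for t = 1, ..., T
+```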
+
+## C ELBO derivation
+
+The general Evidence Lower Bound (ELBO) formula states that
+
+$$
+\log \left( {{p}_{\theta }\left( x\right) }\right) \geq {\mathbb{E}}_{z \sim q}\left\lbrack {\log \left( \frac{p\left( {x, z}\right) }{q\left( z\right) }\right) }\right\rbrack
+$$
+
+for any distribution $q$ and latent $z$ . In our case, we use ${\mathbf{A}}_{1 : T}$ as a latent variable and obtain
+
+$$
+- \log \left( {{p}_{\theta }\left( {\mathbf{A}}_{0}\right) }\right) \leq {\mathbb{E}}_{{\mathbf{A}}_{1 : T} \sim q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) }\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {\mathbf{A}}_{0 : T}\right) }{q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack \mathrel{\text{:=}} {L}_{\mathrm{{vb}}}\left( {\mathbf{A}}_{0}\right)
+$$
+
+We use ${L}_{\mathrm{{vb}}} = \mathbb{E}\left\lbrack {{L}_{\mathrm{{vb}}}\left( {\mathbf{A}}_{0}\right) }\right\rbrack$ and obtain
+
+$$
+{L}_{\mathrm{{vb}}} = {\mathbb{E}}_{q\left( {\mathbf{A}}_{0 : T}\right) }\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {\mathbf{A}}_{0 : T}\right) }{q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack
+$$
+
+$$
+= {\mathbb{E}}_{q}\left\lbrack {-\log \left( {{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) - \mathop{\sum }\limits_{{t = 1}}^{T}\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) }\right) }\right\rbrack
+$$
+
+$$
+= {\mathbb{E}}_{q}\left\lbrack {-\log \left( {{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) - \mathop{\sum }\limits_{{t = 2}}^{T}\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) }\right) - \log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack
+$$
+
+$$
+= {\mathbb{E}}_{q}\left\lbrack {-\log \left( {{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) - \mathop{\sum }\limits_{{t = 2}}^{T}\log \left( {\frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) } \cdot \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }}\right) - \log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\right) }\right\rbrack
+$$
+
+(5)
+
+$$
+= {\mathbb{E}}_{q}\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {\mathbf{A}}_{T}\right) }{q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) }\right) - \mathop{\sum }\limits_{{t = 2}}^{T}\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\right) - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }\right\rbrack
+$$
+
+$$
+= {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\left\lbrack {{D}_{KL}\left( {q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) + \mathop{\sum }\limits_{{t = 2}}^{T}{\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) }\right.
+$$
+
+$$
+\left. {-{\mathbb{E}}_{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }\right\rbrack
+$$
+
+where (5) follows from
+
+$$
+q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) q\left( {{\mathbf{A}}_{t - 1},{\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }
+$$
+
+$$
+= \frac{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }.
+$$
+
+## D Simple Loss
+
+Using the parametrization ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ , we can simplify the KL divergence of the term ${L}_{t}$ :
+
+$$
+{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) = {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\left\lbrack {-\log \left( \frac{{p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\right) }\right\rbrack
+$$
+
+$$
+= {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) }\left\lbrack {-\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right) }\right\rbrack
+$$
+
+$$
+= - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right)
+$$
+
+We note that this term corresponds to the cross-entropy between the distribution ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ and the ground truth ${\mathbf{A}}_{0}$ .
+
+## E Sampling Algorithms
+
+Here in Algorithms 1 and 2 we provide an algorithmic description of the two sampling approaches described in Section 3.4. Here ${\mathcal{B}}_{p = 1/2}$ denotes the Bernoulli distribution with parameter $p = 1/2$ , which corresponds to the Erdős-Rényi random graph model.
+
+Algorithm 1 Sampling for ${L}_{\mathrm{{vb}}}$
+
+---
+
+$\forall i, j \mid i > j : {\mathbf{A}}_{T}^{ij} \sim {\mathcal{B}}_{p = 1/2}$
+
+for $t = T,\ldots ,1$ do
+
+ Compute ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$
+
+ ${\mathbf{A}}_{t - 1} \sim {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$
+
+end for
+
+---
+
+Algorithm 2 Sampling for ${L}_{\text{simple }}$
+
+---
+
+$\forall i, j \mid i > j : {\mathbf{A}}_{T}^{ij} \sim {\mathcal{B}}_{p = 1/2}$
+
+for $t = T,\ldots ,1$ do
+
+ ${\widetilde{\mathbf{A}}}_{0} \sim {p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$
+
+ ${\mathbf{A}}_{t - 1} \sim q\left( {{\mathbf{A}}_{t - 1} \mid {\widetilde{\mathbf{A}}}_{0}}\right)$
+
+end for
+
+---
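+
+For illustration, a NumPy sketch of Algorithm 2 is given below; the interface of the denoising network, which is assumed to return per-edge probabilities ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ , is a hypothetical assumption of this sketch.
+
+```python
+import numpy as np
+
+def symmetrize(upper):
+    # Keep only the strict upper triangle and mirror it into a symmetric 0/1 matrix.
+    A = np.triu(np.asarray(upper, dtype=int), k=1)
+    return A + A.T
+
+def sample_graph_l_simple(nn_theta, n, bar_beta, rng):
+    # Algorithm 2: start from an Erdos-Renyi(1/2) graph, then alternate between
+    # predicting A_0 with the network and re-noising the prediction to level t - 1.
+    T = len(bar_beta)                                    # bar_beta[t - 1] stores bar_beta_t
+    A = symmetrize(rng.random((n, n)) < 0.5)             # A_T
+    for t in range(T, 0, -1):
+        p0 = nn_theta(A, t)                              # per-edge p_theta(A_0 = 1 | A_t)
+        A0_hat = symmetrize(rng.random((n, n)) < p0)
+        if t == 1:
+            return A0_hat                                # final sample from p_theta(A_0 | A_1)
+        flip = rng.random((n, n)) < bar_beta[t - 2]      # A_{t-1} ~ q(A_{t-1} | A_0)
+        A = symmetrize(np.where(flip, 1 - A0_hat, A0_hat))
+    return A
+```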
+
+## F Models
+
+### F.1 Edgewise Dense Prediction Graph Neural Network (EDP-GNN)
+
+The EDP-GNN model introduced by Niu et al. [8] extends GIN [28] to work with multi-channel adjacency matrices. This means that a GIN graph neural network is run on multiple different adjacency matrices (channels) and the different outputs are concatenated to produce new node embeddings:
+
+$$
+{\mathbf{X}}_{c}^{{\left( k + 1\right) }^{\prime }} = {\widetilde{\mathbf{A}}}_{c}^{\left( k\right) }{\mathbf{X}}^{\left( k\right) } + \left( {1 + \epsilon }\right) {\mathbf{X}}^{\left( k\right) },
+$$
+
+$$
+{\mathbf{X}}^{\left( k + 1\right) } = \operatorname{Concat}\left( {\mathbf{X}}_{c}^{{\left( k + 1\right) }^{\prime }}\right. \text{for}\left. {c \in \left\{ {1,\ldots ,{C}^{\left( k + 1\right) }}\right\} }\right) \text{,}
+$$
+
+where $\mathbf{X} \in {\mathbb{R}}^{n \times h}$ is the node embedding matrix with hidden dimension $h$ and ${C}^{\left( k\right) }$ is the number of channels in the input multi-channel adjacency matrix ${\widetilde{\mathbf{A}}}^{\left( k\right) } \in {\mathbb{R}}^{{C}^{\left( k\right) } \times n \times n}$ , at layer $k$ . The adjacency matrices for the next layer are produced using the node embeddings:
+
+$$
+{\widetilde{\mathbf{A}}}_{\cdot , i, j}^{\left( k + 1\right) } = \operatorname{MLP}\left( {{\widetilde{\mathbf{A}}}_{\cdot , i, j}^{\left( k\right) },{\mathbf{X}}_{i},{\mathbf{X}}_{j}}\right) .
+$$
+
+For the first layer, EDP-GNN computes two channels of the adjacency matrix ${\widetilde{\mathbf{A}}}^{\left( 0\right) }$ : the original input adjacency $\mathbf{A}$ and its inversion ${\mathbf{{11}}}^{T} - \mathbf{A}$ . As node features, the node degrees are used: ${\mathbf{X}}^{\left( 0\right) } = \mathop{\sum }\limits_{i}{\mathbf{A}}_{i}$ .
+
+To produce the final outputs, outputs of all intermediary layers are concatenated:
+
+$$
+\widetilde{\mathbf{A}} = {\operatorname{MLP}}_{\text{out }}\left( {\operatorname{Concat}\left( {{\widetilde{\mathbf{A}}}^{\left( k\right) }\text{ for }k \in \{ 1,\ldots , K\} }\right) }\right) .
+$$
+
+The final layer always has only one output channel, such that ${\mathbf{A}}_{\left( t\right) } = \operatorname{EDP-GNN}\left( {\mathbf{A}}_{\left( t - 1\right) }\right)$ .
+
+To condition the model on the given noise level ${\bar{\beta }}_{t}$ , noise-level-dependent scale and bias parameters ${\mathbf{\alpha }}_{t}$ and ${\gamma }_{t}$ are introduced to each layer $f$ of every MLP:
+
+$$
+f\left( {\widetilde{\mathbf{A}}}_{\cdot , i, j}\right) = \operatorname{activation}\left( {\left( {\mathbf{W}{\widetilde{\mathbf{A}}}_{\cdot , i, j} + \mathbf{b}}\right) {\mathbf{\alpha }}_{t} + {\mathbf{\gamma }}_{t}}\right) .
+$$
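+
+The following PyTorch sketch summarizes one such layer in simplified form; the hyper-parameters, the channel-mixing linear layer, and the omission of the noise conditioning are our own simplifications rather than the exact EDP-GNN implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class EDPGNNLayerSketch(nn.Module):
+    # One simplified EDP-GNN layer: a GIN-style update per adjacency channel,
+    # followed by a per-edge MLP producing the next multi-channel adjacency.
+    def __init__(self, c_in, c_out, h_in, h_out, eps=0.0):
+        super().__init__()
+        self.eps = eps
+        self.node_mix = nn.Linear(c_in * h_in, h_out)      # mixes the concatenated channels
+        self.edge_mlp = nn.Sequential(                      # MLP(A_tilde_ij, X_i, X_j)
+            nn.Linear(c_in + 2 * h_out, c_out), nn.ReLU(), nn.Linear(c_out, c_out)
+        )
+
+    def forward(self, A_tilde, X):
+        # A_tilde: (C_in, n, n) multi-channel adjacency, X: (n, h_in) node embeddings
+        per_channel = [A_c @ X + (1.0 + self.eps) * X for A_c in A_tilde]
+        X_new = self.node_mix(torch.cat(per_channel, dim=-1))          # (n, h_out)
+        n = X_new.shape[0]
+        Xi = X_new.unsqueeze(1).expand(n, n, -1)                        # features of node i
+        Xj = X_new.unsqueeze(0).expand(n, n, -1)                        # features of node j
+        edge_in = torch.cat([A_tilde.permute(1, 2, 0), Xi, Xj], dim=-1)
+        A_next = self.edge_mlp(edge_in).permute(2, 0, 1)                # (C_out, n, n)
+        return A_next, X_new
+```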
+
+### F.2 Provably Powerful Graph Network (PPGN)
+
+The input to the PPGN model used is the adjacency matrix ${\mathbf{A}}_{t}$ concatenated with the diagonal matrix ${\overline{\mathbf{\beta }}}_{t} \cdot \mathbf{I}$ , resulting in an input tensor ${\mathbf{X}}_{in} \in {\mathbb{R}}^{n \times n \times 2}$ . The output tensor is ${\mathbf{X}}_{\text{out }} \in {\mathbb{R}}^{n \times n \times 1}$ , where each ${\left\lbrack {\mathbf{X}}_{\text{out }}\right\rbrack }_{ij}$ represents $p\left( {{\left\lbrack {\mathbf{A}}_{0}\right\rbrack }_{ij} \mid {\left\lbrack {\mathbf{A}}_{t}\right\rbrack }_{ij}}\right)$ .
+
+Our PPGN implementation, which closely follows Maron et al. [11], is structured as follows: let $\mathbf{P}$ denote the PPGN model, then
+
+$$
+\mathbf{P}\left( {\mathbf{X}}_{\text{in }}\right) \mathrel{\text{:=}} \left( {{l}_{\text{out }} \circ C}\right) \left( {\mathbf{X}}_{\text{in }}\right) \tag{6}
+$$
+
+$$
+C : {\mathbb{R}}^{n \times n \times 2} \rightarrow {\mathbb{R}}^{n \times n \times \left( {d \cdot h}\right) } \tag{7}
+$$
+
+$$
+C\left( {\mathbf{X}}_{in}\right) \mathrel{\text{:=}} \operatorname{Concat}\left( {\left( {{B}_{d} \circ \ldots \circ {B}_{1}}\right) \left( {\mathbf{X}}_{in}\right) ,\left( {{B}_{d - 1} \circ \ldots \circ {B}_{1}}\right) \left( {\mathbf{X}}_{in}\right) ,\ldots ,{B}_{1}\left( {\mathbf{X}}_{in}\right) }\right) \tag{8}
+$$
+
+The set $\left\{ {{B}_{1},\ldots ,{B}_{d}}\right\}$ is a set of $d$ different powerful layers implemented as proposed by Maron et al. [11]. We let the input run through different numbers of these powerful layers and concatenate the respective outputs into one tensor of size $n \times n \times \left( {d \cdot h}\right)$ . These powerful layers are functions with the following signatures:
+
+$$
+\forall {B}_{i} \in \left\{ {{B}_{2},\ldots ,{B}_{d}}\right\} ,{B}_{i} : {\mathbb{R}}^{n \times n \times h} \rightarrow {\mathbb{R}}^{n \times n \times h} \tag{9}
+$$
+
+$$
+{B}_{1} : {\mathbb{R}}^{n \times n \times 1} \rightarrow {\mathbb{R}}^{n \times n \times h}. \tag{10}
+$$
+
+Finally, we use an MLP ${l}_{\text{out }}$ to reduce the dimensionality of each matrix element down to 1, so that we can treat the output as an adjacency matrix:
+
+$$
+{l}_{\text{out }} : {\mathbb{R}}^{d \cdot h} \rightarrow {\mathbb{R}}^{1}, \tag{11}
+$$
+
+where ${l}_{\text{out }}$ is applied to each element ${\left\lbrack C\left( {\mathbf{X}}_{in}\right) \right\rbrack }_{i,j}$ of the tensor $C\left( {\mathbf{X}}_{in}\right)$ over all of its $d \cdot h$ channels. It reduces the number of channels to a single one, which represents $p\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ .
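+
+A simplified PyTorch sketch of one powerful layer and of the prefix concatenation in (8) is shown below; the widths, activations, and normalization are illustrative assumptions rather than the exact implementation of Maron et al. [11].
+
+```python
+import torch
+import torch.nn as nn
+
+class PowerfulLayerSketch(nn.Module):
+    # Simplified 'powerful' layer: two element-wise MLPs, a channel-wise matrix
+    # product over the n x n dimensions, and a linear skip/mixing step.
+    def __init__(self, h_in, h_out):
+        super().__init__()
+        self.mlp1 = nn.Sequential(nn.Linear(h_in, h_out), nn.ReLU(), nn.Linear(h_out, h_out))
+        self.mlp2 = nn.Sequential(nn.Linear(h_in, h_out), nn.ReLU(), nn.Linear(h_out, h_out))
+        self.mix = nn.Linear(h_in + h_out, h_out)
+
+    def forward(self, X):                                   # X: (n, n, h_in)
+        m1 = self.mlp1(X).permute(2, 0, 1)                  # (h_out, n, n)
+        m2 = self.mlp2(X).permute(2, 0, 1)
+        prod = torch.matmul(m1, m2).permute(1, 2, 0)        # channel-wise n x n matrix product
+        return self.mix(torch.cat([X, prod], dim=-1))       # (n, n, h_out)
+
+def ppgn_concat(layers, X_in):
+    # Concatenate the outputs of all prefix compositions B_1, B_2(B_1(.)), ..., as in (8).
+    outs, X = [], X_in
+    for B in layers:
+        X = B(X)
+        outs.append(X)
+    return torch.cat(outs, dim=-1)                          # (n, n, d * h)
+```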
+
+## G Training Setup
+
+### G.1 EDP-GNN
+
+The model training setup and hyperparameters used for the EDP-GNN were directly taken from [8]. We used 4 message-passing steps for each GIN and stacked 5 EDP-GNN layers, for which the maximum number of channels is always set to 4 and the maximum number of node features to 16. We use 32 denoising steps for all datasets besides Planar-60, where we used 256, as opposed to the 6 noise levels with 1000 sampling steps per level used in the score-based approach.
+
+### G.2 PPGN
+
+The PPGN model we used for the Ego-small, Community-small, and SBM-27 datasets consists of 6 layers $\left\{ {{B}_{1},\ldots ,{B}_{6}}\right\}$ . After each powerful layer, we apply instance normalization. The hidden dimension was set to 16. For the Planar-60 dataset, we used 8 layers and a hidden dimension of 128. We used a batch size of 64 for all datasets and the Adam optimizer with the following parameters: learning rate 0.001, betas (0.9, 0.999), and weight decay 0.999.
+
+### G.3 Model Selection
+
+We performed a simple model selection where the model which achieves the best training loss is saved and used to generate graphs for testing. We also investigated the use of a validation split and computation of MMD scores versus this validation split for model selection, but we did not find this to produce better results while adding considerable computational overhead.
+
+### G.4 Additional Details on Experimental Setup
+
+Here we provide some details concerning the experimental setup for the results in Tables 1 and 2.
+
+Details for the MMD results in Table 1: From the original paper by Niu et al. [8], we are unsure whether model selection was used for the GNF, GraphRNN, and EDP-Score results. The SDE-Score results in the first section are sampled after training for 5000 epochs, and no model selection was used. Due to compute limitations on the PPGN model, the results for PPGN ${L}_{\mathrm{{vb}}}$ are taken after epoch 900 instead of epoch 5000, which is when the results for SDE-Score and EDP-Score were taken. The PPGN ${L}_{\text{simple }}$ and EDP ${L}_{\text{simple }}$ models were trained for 2500 epochs.
+
+Details for MMD results in Table 2: All results using the EDP-GNN model are trained until epoch 5000 and the PPGN implementation was trained until epoch 2500.
+
+## H Datasets
+
+In this appendix, we describe the 4 datasets used in our experiments.
+
+Ego-small: This dataset is composed of 200 graphs of 4-18 nodes from the Citeseer network (Sen et al. [29]). The dataset is available in the repository ${}^{4}$ of Niu et al. [8].
+
+Community-small: This dataset consists of 100 graphs with 12 to 20 nodes. The graphs are generated in two steps. First, two communities of equal size are generated using the Erdős-Rényi model [12] with parameter $p = {0.7}$ . Then, edges are randomly added between the nodes of the two communities with probability $p = {0.05}$ . The dataset is taken directly from the repository of Niu et al. [8].
+
+SBM-27: This dataset consists of 200 graphs with 24 to 27 nodes generated using the Stochastic-Block-Model (SBM) with three communities. We use the implementation provided by Martinkus et al. [21]. The parameters used are ${p}_{\text{intra }} = {0.85},{p}_{\text{inter }} = {0.046875}$ , where ${p}_{\text{intra }}$ denotes the intra-community edge probability (i.e. for nodes within the same community) and ${p}_{\text{inter }}$ denotes the inter-community edge probability (i.e. for nodes from different communities). The number of nodes for the 3 communities is randomly drawn from $\{ 7,8,9\}$ . In expectation, these parameters generate 3 edges between each pair of communities.
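+
+A minimal sketch of how such graphs can be generated with NetworkX (our own illustrative code, not the implementation of Martinkus et al. [21]):
+
+```python
+import networkx as nx
+import numpy as np
+
+def sample_sbm27(rng):
+    # One SBM-27 graph: three communities with 7-9 nodes each,
+    # p_intra = 0.85 and p_inter = 0.046875.
+    sizes = [int(s) for s in rng.choice([7, 8, 9], size=3)]
+    probs = [[0.85 if i == j else 0.046875 for j in range(3)] for i in range(3)]
+    return nx.stochastic_block_model(sizes, probs, seed=int(rng.integers(2**31)))
+
+rng = np.random.default_rng(0)
+dataset = [sample_sbm27(rng) for _ in range(200)]
+```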
+
+---
+
+${}^{4}$ https://github.com/ermongroup/GraphScoreMatching
+
+---
+
+Planar-60: This dataset consists of 200 randomly generated planar graphs of 60 nodes. We use the implementation provided by Martinkus et al. [21]. To generate a graph, 60 points are first sampled uniformly at random on the ${\left\lbrack 0,1\right\rbrack }^{2}$ plane. The graph is then obtained by applying a Delaunay triangulation to these points [30].
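+
+A minimal sketch of this construction using SciPy and NetworkX (our own illustrative code, not the implementation of Martinkus et al. [21]):
+
+```python
+import networkx as nx
+import numpy as np
+from scipy.spatial import Delaunay
+
+def sample_planar60(rng, n=60):
+    # One Planar-60 graph: n uniform points on [0, 1]^2 connected by the
+    # edges of their Delaunay triangulation.
+    points = rng.random((n, 2))
+    tri = Delaunay(points)
+    G = nx.Graph()
+    G.add_nodes_from(range(n))
+    for simplex in tri.simplices:           # each triangle contributes its three edges
+        for i in range(3):
+            G.add_edge(int(simplex[i]), int(simplex[(i + 1) % 3]))
+    return G
+```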
+
+## I Visualization of Sampled Graphs
+
+In the following pages, we provide a visual comparison of graphs generated by the different models.
+
+
+
+Figure 1: Sample graphs from the training set of Ego-small dataset.
+
+Figure 2: Sample graphs generated with the model EDP-Score [8] for the Ego-small dataset.
+
+
+
+Figure 3: Sample graphs generated with the PPGN ${L}_{\mathrm{{vb}}}$ model for the Ego-small dataset.
+
+Figure 4: Sample graphs generated with the EDP ${L}_{\text{simple }}$ model for the Ego-small dataset.
+
+
+
+Figure 6: Sample graphs generated with the model EDP-Score [8] for the Community dataset.
+
+Figure 5: Sample graphs from the training set of the Community dataset
+
+
+
+Figure 7: Sample graphs generated with the PPGN ${L}_{\mathrm{{vb}}}$ model for the Community dataset.
+
+Figure 8: Sample graphs generated with the EDP ${L}_{\text{simple }}$ model for the Community dataset.
+
+
+
+Figure 9: Sample graphs from the training set of the Planar-60 dataset.
+
+
+
+Figure 10: Sample graphs generated with the model EDP-Score [8] for the Planar-60 dataset.
+
+Figure 11: Sample graphs generated with the PPGN ${L}_{\text{simple }}$ model for the Planar-60 dataset.
+
+
+
+Figure 12: Sample graphs from the training set of the SBM-27 dataset.
+
+
+
+Figure 13: Sample graphs generated with the model EDP-Score [8] for the SBM-27 dataset.
+
+
+
+Figure 14: Sample graphs generated with the PPGN ${L}_{\text{simple }}$ model for the SBM-27 dataset.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..bcc8c51b76790c6c592989a776f41250cf93e2ec
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/CtsKBwhTMKg/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,159 @@
+§ DIFFUSION MODELS FOR GRAPHS BENEFIT FROM DISCRETE STATE SPACES
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Denoising diffusion probabilistic models and score matching models have proven to be very powerful for generative tasks. While these approaches have also been applied to the generation of discrete graphs, they have, so far, relied on continuous Gaussian perturbations. Instead, in this work, we suggest using discrete noise for the forward Markov process. This ensures that in every intermediate step the graph remains discrete. Compared to the previous approach, our experimental results on four datasets and multiple architectures show that using a discrete noising process results in higher quality generated samples indicated with an average MMDs reduced by a factor of 1.5. Furthermore, the number of denoising steps is reduced from 1000 to 32 steps leading to a 30 times faster sampling procedure.
+
+§ 1 INTRODUCTION
+
+Score-based [1] and denoising diffusion probabilistic models (DDPMs) [2, 3] have recently achieved striking results in generative modeling and in particular in image generation. Instead of learning a complex model that generates samples in a single pass (like a Generative Adversarial Network [4] (GAN) or a Variational Auto-Encoder [5] (VAE)), a diffusion model is a parameterized Markov Chain trained to reverse an iterative predefined process that gradually transforms a sample into pure noise. Although diffusion processes have been proposed for both continuous [6] and discrete [7] state spaces, their use for graph generation has only focused on Gaussian diffusion processes which operate in the continuous state space $\left\lbrack {8,9}\right\rbrack$ .
+
+This contribution suggests adapting the denoising procedure to an actual graph distribution and using discrete noise, leading to a random graph model. We describe this procedure based on the Discrete DDPM framework proposed by Austin et al. [7], Hoogeboom et al. [10]. Our experiments show that using discrete noise greatly reduces the number of denoising steps that are needed and improves the sample quality. We also suggest the use of a simple expressive graph neural network architecture [11] for denoising, which, while bringing expressivity benefits, contrasts with more complicated architectures currently used for graph denoising [8].
+
+§ 2 RELATED WORK
+
+Traditionally, graph generation has been studied through the lens of random graph models [12-14]. While this approach is insufficient to model many real-world graph distributions, it is useful to create synthetic datasets and provides a useful abstraction. In fact, we will use Erdős-Rényi graphs [12] to model the prior distribution of our diffusion process.
+
+Due to their larger number of parameters and expressive power, deep generative models have achieved better results in modeling complex graph distributions. The most successful graph generative models can be devised into two different techniques: a) auto-regressive graph generative models, which generate the graph sequentially node-by-node [15, 16], and b) one-shot generative models which generate the whole graph in a single forward pass [8, 9, 17-21]. While auto-regressive models can generate graphs with hundreds or even thousands of nodes, they can suffer from mode collapse $\left\lbrack {{20},{21}}\right\rbrack$ . One-shot graph generative models are more resilient to mode collapse but are more challenging to train while still not scaling easily beyond tens of nodes. Recently, one-shot generation has been scaled up to graphs of hundreds of nodes thanks to spectral conditioning [21], suggesting that good conditioning can largely benefit graph generation. Still, the suggested training procedure is cumbersome as it involves 3 different intertwined Generative Adversarial Networks (GANs). Finally, Variational Auto Encoders (VAE) have also been studied to generate graphs but remain difficult to train, as the loss function needs to be permutation invariant [22] which can necessitate an expensive graph matching step [17].
+
+In contrast, the score-based models $\left\lbrack {8,9}\right\rbrack$ have the potential to provide both, a simple, stable training objective similar to the auto-regressive models and good graph distribution coverage provided by the one-shot models. Niu et al. [8] provided the first score-based model for graph generation by directly using the score-based model formulation by Song and Ermon [1] and additionally accounting for the permutation equivariance of graphs. Jo et al. [9] extended this to featured graph generation, by formulating the problem as a system of two stochastic differential equations, one for feature generation and one for adjacency generation. The graph and the features are then generated in parallel. This approach provided promising results for molecule generation. Results on slightly larger graphs were also improved but remained imperfect. Importantly, both contributions rely on a continuous Gaussian noise process and use a thousand denoising steps to achieve good results, which makes for a slow graph generation.
+
+As shown by Song et al. [6], score matching is tightly related to denoising diffusion probabilistic models [3] which provide a more flexible formulation, more easily amendable for the graph generation. In particular, for the noisy samples to remain discrete graphs, the perturbations need to be discrete. Such discrete diffusion has been successfully used for quantized image generation [23, 24] and text generation [25]. Diffusion using the multinomial distribution was proposed in Hoogeboom et al. [10]. Then, Austin et al. [7] extended the previous work by Hoogeboom et al. [10], Song et al. [26] and provided a general recipe for denoising diffusion models in discrete state-spaces which mainly requires the specification of a doubly-stochastic Markov transition matrix $\mathbf{Q}$ which ensures the Markov process conserves probability mass and converges to a stationary distribution. In the next section, we describe a formulation of this perturbation matrix $\mathbf{Q}$ leading to the Erdős-Rényi random graphs.
+
+§ 3 DISCRETE DIFFUSION FOR SIMPLE GRAPHS
+
+Diffusion models [2] are generative models based on a forward and a reverse Markov process. The forward process, denoted $q\left( {{\mathbf{A}}_{1 : T} \mid {\mathbf{A}}_{0}}\right) = \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right)$ generates a sequence of increasingly noisier latent variables ${\mathbf{A}}_{t}$ from the initial sample ${\mathbf{A}}_{0}$ , to white noise ${\mathbf{A}}_{T}$ . Here the sample ${\mathbf{A}}_{0}$ and the latent variables ${\mathbf{A}}_{t}$ are adjacency matrices. The learned reverse process ${p}_{\theta }\left( {\mathbf{A}}_{1 : T}\right) = p\left( {\mathbf{A}}_{T}\right) \mathop{\prod }\limits_{{t = 1}}^{T}q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ attempts to progressively denoise the latent variable ${\mathbf{A}}_{t}$ in order to produce samples from the desired distribution. Here we will focus on simple graphs, but the approach can be extended in a straightforward manner to account for different edge types. We use the model from [10] and, for convenience, adopt the representation of [7] for our discrete process.
+
+§ 3.1 FORWARD PROCESS
+
+Let the row vector ${\mathbf{a}}_{t}^{ij} \in \{ 0,1{\} }^{2}$ be the one-hot encoding of $i,j$ element of the adjacency matrix ${\mathbf{A}}_{t}$ . Here $t \in \left\lbrack {0,T}\right\rbrack$ denotes the timestep of the process, where ${\mathbf{A}}_{0}$ is a sample from the data distribution and ${\mathbf{A}}_{T}$ is an Erdős-Rényi random graph. The forward process is described as repeated multiplication of each adjacency element type row vector ${\mathbf{a}}_{t}^{ij} = {\mathbf{a}}_{t - 1}^{ij}{\mathbf{Q}}_{t}$ with a double stochastic matrix ${\mathbf{Q}}_{t}$ . Note that the forward process is independent for each edge/non-edge $i \neq j$ . The matrix ${\mathbf{Q}}_{t} \in {\mathbb{R}}^{2 \times 2}$ is
+
+modeled as
+
+$$
+{\mathbf{Q}}_{t} = \left\lbrack \begin{matrix} 1 - {\beta }_{t} & {\beta }_{t} \\ {\beta }_{t} & 1 - {\beta }_{t} \end{matrix}\right\rbrack , \tag{1}
+$$
+
+where ${\beta }_{t}$ is the probability of flipping the edge state ${}^{1}$. This formulation ${}^{2}$ has the advantage of allowing direct sampling at any timestep of the diffusion process without computing any of the previous timesteps. Indeed, the matrix ${\overline{\mathbf{Q}}}_{t} = \mathop{\prod }\limits_{{i < t}}{\mathbf{Q}}_{i}$ can be expressed in the form of (1) with ${\beta }_{t}$ replaced by ${\bar{\beta }}_{t} = \frac{1}{2} - \frac{1}{2}\mathop{\prod }\limits_{{i < t}}\left( {1 - 2{\beta }_{i}}\right)$ . Ultimately, we want the probability ${\bar{\beta }}_{t} \in \left\lbrack {0,{0.5}}\right\rbrack$ to vary from 0 (unperturbed sample) to 0.5 (pure noise). In this contribution, we limit ourselves to symmetric graphs and therefore only need to model the upper triangular part of the adjacency matrix. The noise is sampled i.i.d. over all of the edges.
+
+${}^{1}$ Note that two different $\beta$ ’s could be used for edges and non-edges. This case is left for future work.
+
+${}^{2}$ Note that we use a different parametrization for (1) than [10]. To recover the original formulation, one can simply divide all ${\beta }_{t}$ by 2 .
+
+§ 3.2 REVERSE PROCESS
+
+To sample from the data distribution, the forward process needs to be reversed. Therefore, we need to estimate $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ . In our case, using the Markov property of the forward process this can be rewritten as (see Appendix A for derivation):
+
+$$
+q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) = q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{t - 1}}\right) \frac{q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right) }{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }. \tag{2}
+$$
+
+Note that (2) is entirely defined by ${\beta }_{t}$ and ${\bar{\beta }}_{t}$ and ${\mathbf{A}}_{0}$ (see Appendix A, Equation 4).
+
+§ 3.3 LOSS
+
+Diffusion models are typically trained to minimize a variational upper bound on the negative log-likelihood. This bound can be expressed as (see Appendix C or [3, Equation 5]):
+
+$$
+{L}_{\mathrm{{vb}}}\left( {\mathbf{A}}_{0}\right) \mathrel{\text{ := }} {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\left\lbrack \underset{{L}_{T}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{T} \mid {\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {\mathbf{A}}_{T}\right) }\right) }} + \mathop{\sum }\limits_{{t = 1}}^{T}{\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\underset{{L}_{t}}{\underbrace{{D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right) }}\underset{{L}_{0}}{\underbrace{-{\mathbb{E}}_{q\left( {{\mathbf{A}}_{1} \mid {\mathbf{A}}_{0}}\right) }\log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right) }\right) }}\right\rbrack
+$$
+
+Practically, the model is trained to directly minimize the losses ${L}_{t}$ , i.e. the KL divergence ${D}_{KL}\left( {q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right) \parallel {p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right) }\right)$ by using the tractable parametrization of $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t},{\mathbf{A}}_{0}}\right)$ from (2). Note that the discrete setting of the selected noise distribution prevents training the model to approximate the gradient of the distribution as done by score-matching graph generative models [8, 9].
+
+Parametrization of the reverse process. While it is possible to predict the logits of ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ in order to minimize ${L}_{\mathrm{{vb}}}$ , we follow $\left\lbrack {3,7,{10}}\right\rbrack$ and use a network ${\mathrm{{nn}}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ that predicts the logits of the distribution ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ . This parametrization is known to stabilize the training procedure. To minimize ${L}_{\mathrm{{vb}}}$ , (2) can then be used to recover ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ from ${\mathbf{A}}_{0}$ and ${\mathbf{A}}_{t}$ .
+
+Alternate loss. Many implementations of DDPMs have found it beneficial to use alternative losses. For instance, [3] derived a simplified loss function that reweights the ELBO. Hybrid losses have been used in [27] and [7]. As shown in Appendix D, using the parametrization ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ , one can express the term ${L}_{t}$ as ${L}_{t} = - \log \left( {{p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) }\right)$ . Empirically, we found that minimizing
+
+$$
+{L}_{\text{ simple }} \mathrel{\text{ := }} - {\mathbb{E}}_{q\left( {\mathbf{A}}_{0}\right) }\mathop{\sum }\limits_{{t = 1}}^{T}\left( {1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}}\right) \cdot {\mathbb{E}}_{q\left( {{\mathbf{A}}_{t} \mid {\mathbf{A}}_{0}}\right) }\log {p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right) \tag{3}
+$$
+
+leads to stable training and better results. Note that this loss equals the cross-entropy loss between ${\mathbf{A}}_{0}$ and ${\operatorname{nn}}_{\theta }\left( {\mathbf{A}}_{t}\right)$ . The re-weighting $1 - 2 \cdot {\bar{\beta }}_{t} + \frac{1}{T}$ , which assigns linearly more importance to the less noisy samples, has been proposed in [23, Equation 7].
+
+§ 3.4 SAMPLING
+
+For each loss, we use a specific sampling algorithm. For both approaches, we start by sampling each edge independently from a Bernoulli distribution with probability $p = 1/2$ (Erdős-Rényi random graph). Then, for the ${L}_{\mathrm{{vb}}}$ loss we follow Ho et al. [3] and iteratively reverse the chain by Bernoulli sampling from ${p}_{\theta }\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{t}}\right)$ until we obtain our sample from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{1}}\right)$ . For the loss function ${L}_{\text{ simple }}$ , we sample ${\mathbf{A}}_{0}$ directly from ${p}_{\theta }\left( {{\mathbf{A}}_{0} \mid {\mathbf{A}}_{t}}\right)$ at each step $t$ and obtain ${\mathbf{A}}_{t - 1}$ by sampling again from $q\left( {{\mathbf{A}}_{t - 1} \mid {\mathbf{A}}_{0}}\right)$ . The two approaches are described algorithmically in Appendix E.
+
+The values of ${\bar{\beta }}_{t}$ are selected following a simple linear schedule for our reverse process [2]. We found that this works similarly well to other options such as the cosine schedule [27]. Note that in this case ${\beta }_{t}$ can be obtained from ${\bar{\beta }}_{t}$ in a straightforward manner (see Appendix B).
+
+§ 4 EXPERIMENTS
+
+We compare our graph discrete diffusion approach to the original score-based approach proposed by Niu et al. [8]. Models using this original formulation are denoted by score. We follow the training and evaluation setup used by previous contributions $\left\lbrack {8,9,{15},{19}}\right\rbrack$ . More details can be found in Appendix G. For evaluation, we compute MMD metrics from [15] between the generated graphs and the test set, namely, the degree distribution, the clustering coefficient, and the 4-node orbit counts. To demonstrate the efficiency of the discrete parameterization, the discrete models only use 32 denoising steps, while the score-based models use 1000 denoising steps, as originally proposed. We compare two architectures: 1. EDP-GNN as introduced by Niu et al. [8], and 2. a simpler and more expressive provably powerful graph network (PPGN) [11]. See Appendix F for a more detailed description of the architectures.
+
+\begin{tabular}{l|cccc|cccc|c}
+\hline
+\multirow{2}{*}{Model} & \multicolumn{4}{c|}{Community} & \multicolumn{4}{c|}{Ego} & \multirow{2}{*}{Total} \\
+\cline{2-9}
+ & Deg. & Clus. & Orb. & Avg. & Deg. & Clus. & Orb. & Avg. & \\
+\hline
+GraphRNN ${}^{ \dagger }$ & 0.030 & 0.030 & 0.010 & 0.017 & 0.040 & 0.050 & 0.060 & 0.050 & 0.033 \\
+\hline
+${\mathrm{{GNF}}}^{ \dagger }$ & 0.120 & 0.150 & 0.020 & 0.097 & 0.010 & 0.030 & 0.001 & 0.014 & 0.055 \\
+\hline
+EDP-Score ${}^{ \dagger }$ & 0.006 & 0.127 & 0.018 & 0.050 & 0.010 & 0.025 & 0.003 & 0.013 & 0.031 \\
+\hline
+SDE-Score ${}^{ \dagger }$ & 0.045 & 0.086 & 0.007 & 0.046 & 0.021 & 0.024 & 0.007 & 0.017 & 0.032 \\
+\hline
+EDP-Score ${}^{3}$ & 0.016 & 0.810 & 0.110 & 0.320 & 0.04 & 0.064 & 0.005 & 0.037 & 0.178 \\
+\hline
+PPGN-Score & 0.081 & 0.237 & 0.284 & 0.200 & 0.019 & 0.049 & 0.005 & 0.025 & 0.113 \\
+\hline
+PPGN ${L}_{\mathrm{{vb}}}$ & 0.023 & 0.061 & 0.015 & 0.033 & 0.025 & 0.039 & 0.019 & 0.027 & 0.03 \\
+\hline
+PPGN ${L}_{\text{ simple }}$ & 0.019 & 0.044 & 0.005 & 0.023 & 0.018 & 0.026 & 0.003 & 0.016 & 0.019 \\
+\hline
+EDP ${L}_{\text{ simple }}$ & 0.024 & 0.04 & 0.012 & 0.026 & 0.019 & 0.031 & 0.017 & 0.022 & 0.024 \\
+\hline
+\end{tabular}
+
+Table 1: MMD results for the Community and the Ego datasets. All values are averaged over 5 runs with 1024 generated samples without any sub-selection. The "Total" column denotes the average MMD over all of the 6 measurements. The best results of the "Avg." and "Total" columns are shown in bold. $\dagger$ marks the results taken from the original papers.
+
+\begin{tabular}{l|cccc|cccc|c}
+\hline
+\multirow{2}{*}{Model} & \multicolumn{4}{c|}{SBM-27} & \multicolumn{4}{c|}{Planar-60} & \multirow{2}{*}{Total} \\
+\cline{2-9}
+ & Deg. & Clus. & Orb. & Avg. & Deg. & Clus. & Orb. & Avg. & \\
+\hline
+EDP-Score & 0.014 & 0.800 & 0.190 & 0.334 & 1.360 & 1.904 & 0.534 & 1.266 & 0.8 \\
+\hline
+PPGN ${L}_{\text{ simple }}$ & 0.007 & 0.035 & 0.072 & 0.038 & 0.029 & 0.039 & 0.036 & 0.035 & 0.036 \\
+\hline
+EDP ${L}_{\text{ simple }}$ & 0.046 & 0.184 & 0.064 & 0.098 & 0.017 & 1.928 & 0.785 & 0.910 & 0.504 \\
+\hline
+\end{tabular}
+
+Table 2: MMD results for the SBM-27 and the Planar-60 datasets.
+
+Table 1 shows the results for two datasets, Community-small $\left( {{12} \leq n \leq {20}}\right)$ and Ego-small $\left( {4 \leq n \leq {18}}\right)$ , used by Niu et al. [8]. To better compare our approach to traditional score-based graph generation, in Table 2, we additionally perform experiments on slightly more challenging datasets with larger graphs. Namely, a stochastic-block-model (SBM) dataset with three communities, which in total consists of $\left( {{24} \leq n \leq {27}}\right)$ nodes and a planar dataset with $\left( {n = {60}}\right)$ nodes. Detailed information on the datasets can be found in Appendix H. Additional details concerning the evaluation setup are provided in Appendix G.4.
+
+Results. In Table 1, we observe that the proposed discrete diffusion process using the ${L}_{\mathrm{{vb}}}$ loss and the PPGN model leads to slightly improved average MMDs over the competitors. The ${L}_{\text{ simple }}$ loss further improves the results over ${L}_{\mathrm{{vb}}}$ . The fact that the EDP- ${L}_{\text{ simple }}$ model has significantly lower MMD values than the EDP-Score model is a strong indication that the proposed loss and the discrete formulation, rather than the PPGN architecture, are the cause of the improvement. This improvement comes with the additional benefit that sampling is greatly accelerated (30 times), as the number of timesteps is reduced from 1000 to 32. Table 2 shows that the proposed discrete formulation is even more beneficial when graph size and complexity increase. The PPGN-Score model even becomes infeasible to run in this setting due to its prohibitively expensive sampling procedure. A qualitative evaluation of the generated graphs is performed in Appendix I. Visually, the ${L}_{\text{ simple }}$ loss leads to the best samples.
+
+§ 5 CONCLUSION
+
+In this work, we demonstrated that discrete diffusion can increase sample quality and greatly improve the efficiency of denoising diffusion for graph generation. While the approach was presented for simple graphs with non-attributed edges, it could also be extended to cover graphs with edge attributes.
+
+${}^{3}$ The discrepancy with the SDE-Score ${}^{ \dagger }$ results comes from the fact that we were unable to reproduce those results using the code provided by the authors. Strangely, their code leads to good results when used with our discrete formulation and the ${L}_{\text{ simple }}$ loss, improving over the result reported in their contribution.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..119557c3d01003bea68e98cd738236929d55c348
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,244 @@
+# De Bruijn goes Neural: Causality-Aware Graph Neural Networks for Time Series Data on Dynamic Graphs
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+We introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold in the causal topology of dynamic graphs, which is determined by causal walks, i.e. temporally ordered sequences of links by which nodes can influence each other over time. Our architecture builds on multiple layers of higher-order De Bruijn graphs, an iterative line graph construction where nodes in a De Bruijn graph of order $k$ represent walks of length $k - 1$ , while edges represent walks of length $k$ . We develop a graph neural network architecture that utilizes De Bruijn graphs to implement a message passing scheme that follows a non-Markovian dynamics, which enables us to learn patterns in the causal topology of a dynamic graph. Addressing the issue that De Bruijn graphs with different orders $k$ can be used to model the same data set, we further apply statistical model selection to determine the optimal graph topology to be used for message passing. An evaluation in synthetic and empirical data sets suggests that DBGNNs can leverage temporal patterns in dynamic graphs, which substantially improves the performance in a supervised node classification task.
+
+## 1 Introduction
+
+Graph Neural Networks (GNNs) [1, 2] have become a cornerstone for the application of deep learning to data with a non-Euclidean, relational structure. Different flavors of GNNs have been shown to be highly efficient for tasks like node classification, representation learning, link prediction, cluster detection, or graph classification. The popularity of GNNs is largely due to the abundance of data that can be represented as graphs, i.e. as a set of nodes with pairwise connections represented as links. However, we increasingly have access to time-resolved data that not only capture which nodes are connected to each other, but also when and in which temporal order those connections occur. A number of works in computer science, network science, and interdisciplinary physics have highlighted how the temporal dimension of dynamic graphs, i.e. the timing and ordering of links, influences the causal topology of networked systems, i.e. which nodes can possibly influence each other over time [3-5]. In a nutshell, if an undirected link (a, b) between two nodes $a$ and $b$ occurs before an undirected link (b, c), node $a$ can causally influence node $c$ via node $b$ . If the temporal ordering of those two links is reversed, node $a$ cannot influence node $c$ via $b$ due to the directionality of the arrow of time. This simple example shows that the arrow of time in dynamic graphs limits possible causal influences between nodes beyond what we would expect based on the mere topology of links.
+
+Beyond such toy examples, a number of recent studies in network science, computer science, and interdisciplinary physics have shown that the temporal ordering of links in real time series data on graphs has non-trivial consequences for the properties of networked systems, e.g. for reachability and percolation [6, 7], diffusion and epidemic spreading [8, 9], node rankings and community structures [10]. It has further been shown that this interesting aspect of dynamic graphs can be understood using a variant of De Bruijn graphs [11], i.e. static higher-order graphical models [9, 12, 13] of causal paths that capture both the temporal and the topological dimension of time series data on graphs.
+
+While the generalization of network analysis techniques like node centrality measures and community detection [10, 12], or graph embedding [14] to such higher-order models has been successful, to the best of our knowledge no generalizations of Graph Neural Networks to higher-order De Bruijn graphs have been proposed $\left\lbrack {{15},{16}}\right\rbrack$ . Such a generalization bears several promises: First, it could enable us to apply well-known and efficient gradient-based learning techniques in a static neural network architecture that is able to learn patterns in the causal topology of dynamic graphs that are due to the temporal ordering of links. Second, by making the temporal ordering of links in time-stamped data a first-class citizen of graph neural networks, this generalization could also be an interesting approach to incorporate a necessary condition for causality into state-of-the-art geometric deep learning techniques, which often lack meaningful ways to represent time. Finally, a combination of higher-order De Bruijn graph models with graph neural networks enables us to apply frequentist and Bayesian techniques to learn the "optimal" order of a De Bruijn graph model for a given time series, providing new ways to combine statistical learning and model selection with graph neural networks.
+
+Addressing this gap, our work generalizes graph neural networks to high-dimensional De Bruijn graph models for causal paths in time-stamped data on dynamic graphs. We obtain a novel causality-aware graph neural network architecture for time series data that makes the following contributions:
+
+- We develop a graph neural network architecture that generalizes message passing to multiple layers of higher-order De Bruijn graphs. The resulting De Bruijn Graph Neural Network (DBGNN) architecture leads to a non-Markovian message passing, whose dynamics matches correlations in the temporal ordering of links, thus enabling us to learn patterns that shape the causal topology of dynamic graphs.
+
+- We evaluate our proposed architecture both in empirical and synthetically generated dynamic graphs and compare its performance to graph neural networks as well as (time-aware) graph representation learning techniques. We find that our method yields superior node classification performance.
+
+- We combine this architecture with statistical model selection to infer the optimal higher order of a De Bruijn graph. This yields a two-step learning process, where (i) we first learn a parsimonious De Bruijn graph model that neither under- nor overfits patterns in a dynamic graph, and (ii) we apply message passing and gradient-based optimization to the inferred graph in order to address graph learning tasks like node classification or representation learning.
+
+Our work builds on the, to the best of our knowledge, novel combination of (i) statistical model selection to infer optimal higher-order graphical models for causal paths in dynamic graphs, and (ii) gradient-based learning in a neural network architecture that uses the inferred higher-order graphical models as message passing layers. Thanks to this approach, our architecture performs message passing in an optimal graph model for the causal paths in a given dynamic graph. The results of our evaluation confirm that this explicit regularization of the message passing layers enables us to considerably improve performance in a node classification task. The remainder of this paper is structured as follows: In section 2 we introduce the background of our work and formally state the problem that we address, in section 3 we introduce the De Bruijn graph neural network architecture, in section 4 we experimentally validate our method in synthetic and empirical data on dynamic graphs, and in section 5 we summarize our contributions and highlight opportunities for future research. We have implemented our architecture based on the graph learning library PyTorch Geometric [17] and release the code of our experiments as an Open Source package ${}^{1}$ .
+
+## 2 Background and Problem Statement
+
+Basic definitions We consider a dynamic graph ${G}^{\mathcal{T}} = \left( {V,{E}^{\mathcal{T}}}\right)$ with a (static) set of nodes $V$ and time-stamped (directed) edges $\left( {v, w;t}\right) \in {E}^{\mathcal{T}} \subseteq V \times V \times \mathbb{N}$ where, without loss of generality, integer timestamps $t$ represent the instantaneous time at which a pair of nodes $v, w$ is connected [4]. While many real-world network data exhibit such timestamps, for the application of graph neural networks we often consider a time-aggregated projection $G\left( {V, E}\right)$ along the time axis, where a (static) edge $\left( {v, w}\right) \in E$ exists iff $\exists t \in \mathbb{N} : \left( {v, w;t}\right) \in {E}^{\mathcal{T}}$ . We can further consider edge weights $w : E \rightarrow \mathbb{N}$ defined as $w\left( {v, w}\right) \mathrel{\text{:=}} \left| \left\{ {t \in \mathbb{N} : \left( {v, w;t}\right) \in {E}^{\mathcal{T}}}\right\} \right|$ , i.e. we use $w\left( {v, w}\right)$ to count the number of temporal activations of $\left( {v, w}\right)$ .
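+
+As a small illustration of these definitions, the following Python sketch (ours, not part of the released implementation; all names are made up) builds the weighted time-aggregated projection $G$ and the edge weights $w$ from a toy list of time-stamped edges:
+
+```python
+# Minimal sketch: weighted time-aggregated projection of a dynamic graph.
+from collections import Counter
+
+temporal_edges = [("a", "b", 1), ("b", "c", 2), ("a", "b", 5)]  # toy (v, w, t) triples
+
+# w(v, w) counts the number of temporal activations of the pair (v, w)
+weights = Counter((v, w) for v, w, t in temporal_edges)
+
+# static edge set E and node set V of the time-aggregated graph G = (V, E)
+E = set(weights)
+V = {x for v, w in E for x in (v, w)}
+
+print(V, E, dict(weights))   # {'a','b','c'}  {('a','b'), ('b','c')}  {('a','b'): 2, ('b','c'): 1}
+```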
+
+A key motivation for the study of graphs as models for complex systems is that, apart from direct interactions captured by edges $\left( {v, w}\right)$ , they facilitate the study of indirect interactions between nodes via paths or walks in a graph. Formally, we define a walk ${v}_{0},{v}_{1},\ldots ,{v}_{l - 1}$ of length $l$ in a graph $G = \left( {V, E}\right)$ as any sequence of nodes ${v}_{i} \in V$ such that $\left( {{v}_{i - 1},{v}_{i}}\right) \in E$ for $i = 1,\ldots , l - 1$ . The length $l$ of a walk captures the number of traversed edges, i.e. each node $v \in V$ is a walk of length zero, while each edge $\left( {v, w}\right)$ is a walk of length one. We further call a walk ${v}_{0},{v}_{1},\ldots ,{v}_{l - 1}$ a path of length $l$ from ${v}_{0}$ to ${v}_{l - 1}$ iff ${v}_{i} \neq {v}_{j}$ for $i \neq j$ , i.e. a path is a walk between a set of distinct nodes.
+
+---
+
+${}^{1}$ link blinded in review version
+
+---
+
+Causal walks and paths in dynamic graphs In a static graph $G = \left( {V, E}\right)$ , the topology, i.e. which nodes can directly and indirectly influence each other via edges, walks, or paths, is completely determined by the edges $E$ . This is different for dynamic graphs, which can be understood by extending the definition of walks and paths to causal concepts that respect the arrow of time:
+
+Definition 1. For a dynamic graph ${G}^{\mathcal{T}} = \left( {V,{E}^{\mathcal{T}}}\right)$ , we call a node sequence ${v}_{0},{v}_{1},\ldots ,{v}_{l - 1}$ a causal walk iff the following two conditions hold: (i) $\left( {{v}_{i - 1},{v}_{i};{t}_{i}}\right) \in {E}^{\mathcal{T}}$ for $i = 1,\ldots , l - 1$ and (ii) $0 < {t}_{j} - {t}_{i} \leq \delta$ for $i < j$ and some $\delta > 0$ .
+
+The first condition ensures that nodes in a dynamic graph can only indirectly influence each other via a causal walk iff a corresponding walk exists in the time-aggregated graph. Due to $0 < {t}_{j} - {t}_{i}$ for $i < j$ , the second condition ensures that time-stamped edges in a causal walk occur in the correct chronological order, i.e. timestamps are monotonically increasing [3, 4]. As an example, two time-stamped edges $\left( {a, b;1}\right) ,\left( {b, c;2}\right)$ constitute a causal walk by which information from node $a$ starting at time ${t}_{1} = 1$ can reach node $c$ at time ${t}_{2} = 2$ via node $b$ , while the same edges in reverse temporal order $\left( {a, b;2}\right) ,\left( {b, c;1}\right)$ do not constitute a causal walk. While this definition of a causal walk does not impose an upper bound on the time difference between consecutive time-stamped edges constituting a causal walk, it is often reasonable to define a time limit $\delta > 0$ , i.e. a time difference beyond which consecutive edges are not considered to contribute to a causal walk. As an example, two time-stamped edges $\left( {a, b;1}\right) ,\left( {b, c;{100}}\right)$ constitute a causal walk by which information from node $a$ starting at time ${t}_{1} = 1$ can reach node $c$ at time ${t}_{2} = {100}$ via node $b$ for $\delta = {150}$ , while they do not constitute a causal walk for $\delta = 5$ . This time-limited notion of causal or time-respecting walks is characteristic for many real networked systems in which processes or agents have a finite time scale or "memory", which rules out infinitely long gaps between consecutive causal interactions [4, 5]. Analogous to the definition in a static network, we finally define a causal path ${v}_{0},{v}_{1},\ldots ,{v}_{l - 1}$ of length $l$ from node ${v}_{0}$ to node ${v}_{l - 1}$ as a causal walk with ${v}_{i} \neq {v}_{j}$ for $i \neq j$ .
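+
+The following minimal Python sketch (illustrative only; function and variable names are our own) checks the two conditions of Definition 1 for a sequence of time-stamped edges and a maximum time difference $\delta$ :
+
+```python
+# Minimal illustration of Definition 1: is a sequence of time-stamped edges a causal walk?
+def is_causal_walk(edges, delta):
+    """edges: list of (v, w, t) tuples in the order in which they are traversed."""
+    # (i) consecutive edges must share a node, so a corresponding walk exists in G
+    for (_, w1, _), (v2, _, _) in zip(edges, edges[1:]):
+        if w1 != v2:
+            return False
+    # (ii) timestamps must increase and differ by at most delta
+    times = [t for _, _, t in edges]
+    return all(0 < tj - ti <= delta
+               for i, ti in enumerate(times)
+               for tj in times[i + 1:])
+
+print(is_causal_walk([("a", "b", 1), ("b", "c", 2)], delta=5))    # True
+print(is_causal_walk([("a", "b", 2), ("b", "c", 1)], delta=5))    # False: wrong temporal order
+print(is_causal_walk([("a", "b", 1), ("b", "c", 100)], delta=5))  # False: gap exceeds delta
+```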
+
+Non-Markovian characteristics of dynamic graphs The above definition of causal walks and paths in dynamic graphs has important consequences for our understanding of the topology of dynamic graphs, i.e. which nodes can directly and indirectly influence each other via walks or paths. Moreover, it has important consequences for graph learning and network analysis tasks such as node ranking, cluster detection, or embedding $\left\lbrack {9,{10},{12},{13},{18}}\right\rbrack$ . This additional complexity of dynamic graphs is due to the fact that the topology of a static graph $G = \left( {V, E}\right)$ can be fully understood based on the transitive hull of edges, i.e. the presence of two edges $\left( {u, v}\right) \in E$ and $\left( {v, w}\right) \in E$ implies that nodes $u$ and $w$ can indirectly influence each other via a walk or path, which we denote as $u{ \rightarrow }^{ * }w$ . This not only enables us to use standard algorithms, e.g. to calculate (shortest) paths, it also implies that we can use matrix powers, eigenvalues and eigenvectors to analyze topological properties of a graph. In contrast, in dynamic graphs the chronological order of time-stamped edges can break transitivity, i.e. $\left( {u, v;t}\right) \in {E}^{\mathcal{T}}$ and $\left( {v, w;{t}^{\prime }}\right) \in {E}^{\mathcal{T}}$ does not necessarily imply $u{ \rightarrow }^{ * }w$ , which invalidates graph analytic approaches [13].
+
+To study the question of how correlations in the temporal ordering of time-stamped edges influence the causal topology of a dynamic graph, we can take a statistical modelling perspective. We can, for instance, consider causal walks as sequences of random variables that can be modelled via a Markov chain of order $k$ over a discrete state space $V$ [12]. In other words, we model the sequence of nodes ${v}_{0},\ldots ,{v}_{l - 1}$ on causal walks as $P\left( {{v}_{i} \mid {v}_{i - k},\ldots ,{v}_{i - 1}}\right)$ where $k - 1$ is the length of the "memory" of the Markov chain. For $k = 1$ we have a memoryless, first-order Markov chain model $P\left( {{v}_{i} \mid {v}_{i - 1}}\right)$ , where the next node on the walk exclusively depends on the current node. From the perspective of dynamic graphs with time-stamped link sequences, this corresponds to a case where the causal walks of the dynamic graph are exclusively determined by the topology (and possibly frequency) of edges, i.e. there are no correlations in the temporal ordering of time-stamped edges and the causal topology of the dynamic graph matches the topology of the corresponding time-aggregated graph. If we need a Markov order $k > 1$ , the sequence of nodes traversed by causal walks exhibits memory, i.e. the next node on a walk not only depends on the current one but also on the history of past interactions. The presence of such higher-order correlations in dynamic graphs is associated with more complex causal topologies that (i) cannot be reduced to the topology of the associated time-aggregated network, and (ii) have interesting implications for spreading and diffusion processes and spectral properties [9], node centralities [12], and community structures [10].
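+
+To make this modelling perspective concrete, the sketch below (our illustration, not the authors' code) fits the transition probabilities $P\left( {{v}_{i} \mid {v}_{i - k},\ldots ,{v}_{i - 1}}\right)$ of a $k$ -th order Markov chain to a toy set of causal walks; the $k = 2$ model reveals memory that the $k = 1$ model cannot express:
+
+```python
+# Maximum-likelihood estimate of k-th order Markov transition probabilities on causal walks.
+from collections import Counter, defaultdict
+
+def fit_markov(causal_walks, k):
+    counts = defaultdict(Counter)
+    for walk in causal_walks:                     # walk: tuple of visited nodes
+        for i in range(k, len(walk)):
+            history = tuple(walk[i - k:i])        # the k most recent nodes
+            counts[history][walk[i]] += 1
+    return {h: {v: n / sum(c.values()) for v, n in c.items()}
+            for h, c in counts.items()}
+
+walks = [("a", "b", "c"), ("a", "b", "d"), ("x", "b", "c")]
+print(fit_markov(walks, k=1))  # memoryless model:      P(c | b) = 2/3
+print(fit_markov(walks, k=2))  # model with memory: P(c | a, b) = 1/2, P(c | x, b) = 1
+```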
+
+Higher-order De Bruijn graph models of causal topologies The use of higher-order Markov chain models for causal paths leads to an interesting novel view on the relationship between graph models and time series data on dynamic graphs. In this view, the common (weighted) time-aggregated graph representation of time-stamped edges corresponds to a first-order graphical model, where edge weights capture the statistics of edges, i.e. causal paths of length one. A normalization of edge weights in this graph yields a first-order Markov model of causal walks in a dynamic graph. Similarly, a graphical representation of higher-order Markov chain model of causal walks can be used to capture non-Markovian patterns in the temporal sequence of time-stamped edges. However, different from higher-order Markov chain models of general categorical sequences, a higher-order model of causal paths in dynamic graphs must account for the fact that the set of possible causal paths is constrained by the topology of the corresponding static graph (i.e. condition (i) in Definition 1). To account for this we define a higher-order De Bruijn graph model of causal walks [11]:
+
+Definition 2 ( $k$ -th order De Bruijn graph model). For a dynamic graph ${G}^{\mathcal{T}} = \left( {V,{E}^{\mathcal{T}}}\right)$ and $k \in \mathbb{N}$ , a $k$ -th order De Bruijn graph model of causal paths in ${G}^{\mathcal{T}}$ is a graph ${G}^{\left( k\right) } = \left( {{V}^{\left( k\right) },{E}^{\left( k\right) }}\right)$ , with $u \mathrel{\text{:=}} \left( {{u}_{0},{u}_{1},\ldots ,{u}_{k - 1}}\right) \in {V}^{\left( k\right) }$ a causal walk of length $k - 1$ in ${G}^{\mathcal{T}}$ and $\left( {u, v}\right) \in {E}^{\left( k\right) }$ iff (i) $v = \left( {{v}_{1},\ldots ,{v}_{k}}\right)$ with ${u}_{i} = {v}_{i}$ for $i = 1,\ldots , k - 1$ and (ii) $u \oplus v = \left( {{u}_{0},\ldots ,{u}_{k - 1},{v}_{k}}\right)$ a causal walk of length $k$ in ${G}^{\mathcal{T}}$ .
+
+We note that any two adjacent nodes $u, v \in {V}^{\left( k\right) }$ in a $k$ -th order De Bruijn graph ${G}^{\left( k\right) }$ represent two causal walks of length $k - 1$ that overlap in exactly $k - 1$ nodes, i.e. each edge $\left( {u, v}\right) \in {E}^{\left( k\right) }$ represents a causal walk of length $k$ . We can further use edge weights $w : {E}^{\left( k\right) } \rightarrow \mathbb{N}$ to capture the frequencies of causal paths of length $k$ . The (weighted) time-aggregated graph $G$ of a dynamic graph trivially corresponds to a first-order De Bruijn graph, where (i) nodes are causal walks of length zero and (ii) edges $E = {E}^{\left( 1\right) }$ capture causal walks of length one (i.e. edges) in ${G}^{\mathcal{T}}$ . To construct a second-order De Bruijn graph ${G}^{\left( 2\right) }$ we can perform a line graph transformation of a static graph $G = {G}^{\left( 1\right) }$ , where each edge $\left( {{u}_{0},{u}_{1}}\right) ,\left( {{u}_{1},{u}_{2}}\right) \in {E}^{\left( 2\right) }$ captures a causally ordered sequence of two edges $\left( {{u}_{0},{u}_{1};t}\right)$ and $\left( {{u}_{1},{u}_{2};{t}^{\prime }}\right)$ . A $k$ -th order De Bruijn graph can be constructed by a repeated line graph transformation of a static graph $G$ . Hence, De Bruijn graphs can be viewed as a generalization of common graph models to a higher-order, static graphical model of causal walks of length $k$ , where walks of length $l$ in ${G}^{\left( k\right) }$ model causal walks of length $k + l - 1$ in ${G}^{\mathcal{T}}\left\lbrack {9,{13}}\right\rbrack$ .
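+
+A weighted $k$ -th order De Bruijn graph as in Definition 2 can be constructed directly from the observed causal walks; the following sketch (illustrative only, assuming walks are given as node tuples) does this by sliding a window of length $k$ over each walk:
+
+```python
+# Build a weighted k-th order De Bruijn graph from observed causal walks.
+from collections import Counter
+
+def de_bruijn_graph(causal_walks, k):
+    edges = Counter()
+    for walk in causal_walks:
+        for i in range(len(walk) - k):
+            u = tuple(walk[i:i + k])              # higher-order node: walk of length k-1
+            v = tuple(walk[i + 1:i + k + 1])      # same walk shifted by one step
+            edges[(u, v)] += 1                    # edge = causal walk of length k
+    nodes = {x for u, v in edges for x in (u, v)}
+    return nodes, edges
+
+walks = [("a", "b", "c"), ("a", "b", "c"), ("c", "b", "a")]
+nodes, edges = de_bruijn_graph(walks, k=2)
+print(edges)   # Counter({(('a','b'), ('b','c')): 2, (('c','b'), ('b','a')): 1})
+```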
+
+De Bruijn graphs have interesting mathematical properties that connect them to trajectories of subshifts of finite type as well as to dynamical systems and ergodic theory [19]. For the purpose of our work, they provide the advantage that we can use $k$ -th order De Bruijn graphs to model the causal topology in dynamic graphs. We illustrate this in Figure 1, which shows two dynamic graphs with four nodes and 33 time-stamped links. These dynamic graphs only differ in terms of the temporal ordering of edges, i.e. they have the same (first-order) weighted time-aggregated graph representation (center). Moreover, this first-order representation wrongly suggests that node $A$ can influence node $C$ by a path via node $B$ . While this is true in the dynamic graph on the right (see red causal paths), no corresponding causal path from $A$ via $B$ to $C$ exists in the dynamic graph on the left. A second-order De Bruijn graph model (bottom left and right) captures the fact that the causal path from $A$ via $B$ to $C$ is absent in the left example. This shows that, different from commonly used static graph representations, the edges of a $k$ -th order De Bruijn graph with $k > 1$ are sensitive to the temporal ordering of time-stamped edges. Hence, static higher-order De Bruijn graphs can be used to model the causal topology in a dynamic graph. We can view a $k$ -th order De Bruijn graph in analogy to a $k$ -th order Markov model, where a directed link from node $\left( {{u}_{0},\ldots ,{u}_{k - 1}}\right)$ to node $u = \left( {{u}_{1},\ldots ,{u}_{k}}\right)$ captures a transition to node ${u}_{k}$ in the underlying graph, given a memory of the $k$ previously visited nodes ${u}_{0},\ldots ,{u}_{k - 1}$ . This approach has been used to analyze how the causal topology of dynamic graphs influences node ranking $\left\lbrack {{10},{12}}\right\rbrack$ , the modelling of random walks and diffusion [9], community detection [10, 18], and time-aware static graph embedding [14, 20]. Moreover, several works have proposed heuristic, frequentist and Bayesian methods to infer the optimal order of higher-order graph models of causal paths given time series data on dynamic graphs [10, 12, 21, 22].
+
+Problem Statement and Research Gap The works above provide the background for the generalization of graph neural networks to higher-order De Bruijn graph models of causal walks in dynamic graphs, which we propose in the following section. Following the terminology in the network science community, higher-order De Bruijn graph models can be seen as one particular type of higher-order network models [13, 23, 24], which capture (causally-ordered) sequences of interactions between more than two nodes, rather than dyadic edges. They complement other types of popular higher-order network models (like, e.g. hypergraphs, simplicial complexes, or motif-based adjacency matrices)
+
+
+
+Figure 1: Simple example for two dynamic graphs with four nodes and 33 directed time-stamped edges (top left and right). The two graphs only differ in terms of the temporal ordering of edges. Frequency and topology of edges are identical, i.e. they have the same first-order time-aggregated weighted graph representation (center). Due to the arrow of time, causal walks and paths differ in the two dynamic graphs: Assuming $\delta = 1$ , in the left dynamic graph node $A$ cannot causally influence $C$ via $B$ , while such a causal path is possible in the right graph. A second-order De Bruijn graph representation of causal walks in the two graphs (bottom left and right) captures this difference in the causal topology. Building on such causality-aware graphical models, in our work we define a graph neural network architecture that is able to learn patterns in the causal topology of dynamic graphs.
+
+that consider (unordered) non-dyadic interactions in static networks, and which have been used to generalize graph neural networks to non-dyadic interactions [25, 26].
+
+To the best of our knowledge, De Bruijn graph models have not been combined with recent advances in graph neural networks. Closing this gap, we propose a causality-aware graph convolutional network architecture that uses an augmented message passing scheme [27] in higher-order De Bruijn graphs to capture patterns in the causal topology of dynamic graphs.
+
+## 3 De Bruijn Graph Neural Network Architecture
+
+We now introduce the De Bruijn Graph Neural Network (DBGNN) architecture with an augmented message passing [27] scheme whose dynamics matches the non-Markovian characteristics of dynamic graphs, which is the key contribution of our work. While we build on the message passing proposed for Graph Convolutional Networks (GCN) [28], it is easy to generalize our architecture to other message passing schemes. Our approach is based on the following three steps, which yield an easy-to-implement and scalable class of graph neural networks for time series and sequential data on graphs: We first use time series data on dynamic graphs to calculate statistics of causal walks of different lengths $k$ . We use these statistics to select a higher-order De Bruijn graph model for the causal topology of a dynamic graph. This step is parameter-free, i.e. we can use statistical learning techniques to infer an optimal graph model for the causal topology directly from time series data, without the need for hyperparameter tuning or cross-validation. We then define a graph convolutional network that builds on neural message passing in the higher-order De Bruijn graphs inferred in step one. The hidden layers of the resulting graph convolutional network yield meaningful latent representations of patterns in the causal topology of a dynamic graph. Since the nodes in a $k$ -th order De Bruijn graph model correspond to walks (i.e. sequences) of nodes of length $k - 1$ , we implement an additional bipartite layer that maps the latent space representations of sequences to nodes in the original graph. In the following, we provide a detailed description of the three steps outlined above:
+
+Inference of Optimal Higher-Order De Bruijn Graph Model The first step in the DBGNN architecture is the inference of the higher-order De Bruijn graph model for the causal topology in a given dynamic graph data set. For this, we use Definition 1 to calculate the statistics of causal walks of different lengths $k$ for a given maximum time difference $\delta$ . We note that this can be achieved using efficient window-based algorithms $\left\lbrack {{29},{30}}\right\rbrack$ . The statistics of causal walks in the dynamic graph allow us to apply the model selection technique proposed in [12], which yields the optimal order of a De Bruijn graph model given the statistics of causal walks (or paths). The resulting (static) higher-order De Bruijn graph model is the basis for our extension of the message passing scheme to dynamic graphs with non-Markovian characteristics.
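+
+The sketch below is a heavily simplified stand-in for this model selection step, not the method of [12] (which uses a multi-order model and derives degrees of freedom from the underlying graph); it merely illustrates the idea of comparing first- and second-order Markov models of causal walks with a likelihood-ratio test:
+
+```python
+# Simplified likelihood-ratio test: is a second-order model of causal walks justified?
+import math
+from collections import Counter, defaultdict
+from scipy.stats import chi2
+
+def fit(walks, k):
+    counts = defaultdict(Counter)
+    for w in walks:
+        for i in range(k, len(w)):
+            counts[tuple(w[i - k:i])][w[i]] += 1
+    probs = {h: {v: n / sum(c.values()) for v, n in c.items()}
+             for h, c in counts.items()}
+    return probs, counts
+
+def loglik(probs, walks, k, start):
+    # score only transitions i >= start, so models of different order
+    # are evaluated on the same set of events
+    return sum(math.log(probs[tuple(w[i - k:i])][w[i]])
+               for w in walks for i in range(start, len(w)))
+
+walks = [("a", "b", "c")] * 20 + [("x", "b", "y")] * 20
+
+# out-degrees of the time-aggregated graph constrain the free parameters per history
+succ = defaultdict(set)
+for w in walks:
+    for i in range(len(w) - 1):
+        succ[w[i]].add(w[i + 1])
+
+p1, c1 = fit(walks, 1)
+p2, c2 = fit(walks, 2)
+d1 = sum(len(succ[h[-1]]) - 1 for h in c1)
+d2 = sum(len(succ[h[-1]]) - 1 for h in c2)
+x = 2 * (loglik(p2, walks, 2, 2) - loglik(p1, walks, 1, 2))
+print(chi2.sf(x, df=max(d2 - d1, 1)))   # tiny p-value: a second-order model is justified
+```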
+
+Message passing in higher-order De Bruijn graphs Standard message passing algorithms in graph neural networks use the topology of a graph to propagate (and smooth) features across nodes, thus generating hidden features that incorporate patterns in the topology of a graph. To additionally incorporate patterns in the causal topology of a dynamic graph we perform message passing in multiple layers of higher-order De Bruijn graphs. Assuming a $k$ -th order De Bruijn graph model ${G}^{\left( k\right) } = \left( {{V}^{\left( k\right) },{E}^{\left( k\right) }}\right)$ as defined in Definition 2, the input to the first layer $l = 0$ is a set of $k$ -th order node features ${\mathbf{h}}^{\mathbf{k},\mathbf{0}} = \left\{ {{\overrightarrow{h}}_{1}^{k,0},{\overrightarrow{h}}_{2}^{k,0},\ldots ,{\overrightarrow{h}}_{N}^{k,0}}\right\}$ , for ${\overrightarrow{h}}_{i}^{k,0} \in {\mathbb{R}}^{{H}^{0}}$ , where $N = \left| {V}^{\left( k\right) }\right|$ and ${H}^{0}$ is the dimensionality of initial node features. The De Bruijn graph message passing layer uses the causal topology to learn a new set of hidden representations for higher-order nodes ${\mathbf{h}}^{\mathbf{k},\mathbf{1}} = \left\{ {{\overrightarrow{h}}_{1}^{k,1},{\overrightarrow{h}}_{2}^{k,1},\ldots ,{\overrightarrow{h}}_{N}^{k,1}}\right\}$ , with ${\overrightarrow{h}}_{i}^{k,1} \in {\mathbb{R}}^{{H}^{1}}$ for each $k$ -th order node $i$ (corresponding to a causal walk of length $k - 1$ ). For layer $l$ , we define the update rule of the message passing as:
+
+$$
+{\overrightarrow{h}}_{v}^{k, l} = \sigma \left( {{\mathbf{W}}^{k, l}\mathop{\sum }\limits_{{\left\{ {u \in {V}^{\left( k\right) } : \left( {u, v}\right) \in {E}^{\left( k\right) }}\right\} \cup \{ v\} }}\frac{w\left( {u, v}\right) \cdot {\overrightarrow{h}}_{u}^{k, l - 1}}{\sqrt{S\left( v\right) \cdot S\left( u\right) }}}\right) , \tag{1}
+$$
+
+where ${\overrightarrow{h}}_{u}^{k, l - 1}$ is the previous hidden representation of node $u \in {V}^{\left( k\right) }$ , $w\left( {u, v}\right)$ is the weight of edge $\left( {u, v}\right) \in {E}^{\left( k\right) }$ (capturing the frequency of the corresponding causal walk as explained in section 2), ${\mathbf{W}}^{k, l} \in {\mathbb{R}}^{{H}^{l} \times {H}^{l - 1}}$ are trainable weight matrices, $S\left( v\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{u \in {V}^{\left( k\right) }}}w\left( {u, v}\right)$ is the sum of weights of incoming edges of node $v$ , and $\sigma$ is a non-linear activation function. Since the message passing is performed on a higher-order De Bruijn graph, we obtain a non-Markovian (or rather higher-order Markovian) message passing dynamics, i.e. we perform a Laplacian smoothing that follows the non-Markovian patterns in the causal walks in the underlying dynamic graph. Different from standard, static graph neural networks that ignore the temporal dimension of dynamic graphs, this enables our architecture to incorporate temporal patterns that shape the causal topology, i.e. which nodes in a dynamic graph can influence each other directly and indirectly based on the temporal ordering of time-stamped edges (and the arrow of time).
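+
+A minimal PyTorch sketch of this update rule is given below; it is our own illustration (class and variable names are made up, this is not the released implementation), it realises the $\cup \{ v\}$ term by adding weight-one self-loops (the self-loop weight is an assumption, as it is not specified above), and it uses ELU as the activation $\sigma$ :
+
+```python
+# GCN-style message passing on a k-th order De Bruijn graph, cf. Eq. (1).
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class DeBruijnConv(nn.Module):
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        self.lin = nn.Linear(in_dim, out_dim, bias=False)    # W^{k,l}
+
+    def forward(self, h, edge_index, edge_weight):
+        # h:           [N, in_dim]  features of the N higher-order nodes
+        # edge_index:  [2, M]       source/target indices of edges in E^(k)
+        # edge_weight: [M]          causal-walk frequencies w(u, v)
+        n = h.size(0)
+        src, dst = edge_index
+        loops = torch.arange(n)                               # weight-1 self-loops
+        src = torch.cat([src, loops])
+        dst = torch.cat([dst, loops])
+        w = torch.cat([edge_weight, torch.ones(n)])
+        s = torch.zeros(n).index_add_(0, dst, w)              # S(v): sums of incoming weights
+        norm = w / torch.sqrt(s[src] * s[dst])
+        msg = norm.unsqueeze(1) * h[src]                      # weighted neighbour features
+        agg = torch.zeros(n, h.size(1)).index_add_(0, dst, msg)
+        return F.elu(self.lin(agg))                           # sigma = ELU
+
+# toy usage: three higher-order nodes, edges 0->1 and 1->2 with walk frequencies 2 and 1
+layer = DeBruijnConv(in_dim=3, out_dim=4)
+h = torch.eye(3)                                              # one-hot features
+edge_index = torch.tensor([[0, 1], [1, 2]])
+edge_weight = torch.tensor([2.0, 1.0])
+print(layer(h, edge_index, edge_weight).shape)                # torch.Size([3, 4])
+```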
+
+First-order message passing and bipartite projection layer While the (static) topology of edges influences the (possible) causal walks and thus the edges in the $k$ -th order De Bruijn graph, it is important to note that, because it operates on nodes ${V}^{\left( k\right) }$ in the higher-order graph, the message passing outlined above does not allow us to incorporate information on the first-order topology. To address this issue, we additionally include message passing in the (static) time-aggregated weighted graph $G$ , which can be done in parallel to the message passing in the higher-order De Bruijn graph. The $g$ layers of this first-order message passing (whose formal definition we omit as it simply uses the GCN update rule [28]) generate hidden representations ${\overrightarrow{h}}_{v}^{1, g}$ of nodes $v \in V$ . This approach enables us to incorporate optional node features ${\overrightarrow{h}}_{v}^{0, g}$ (or alternatively use a one-hot-encoding of nodes).
+
+Since the message passing in a higher-order De Bruijn graph generates hidden features for higher-order nodes ${V}^{\left( k\right) }$ (i.e. sequences of $k$ nodes) rather than nodes $V$ in the original dynamic graph, we finally define a bipartite graph ${G}^{b} = \left( {{V}^{\left( k\right) } \cup V,{E}^{b} \subseteq {V}^{\left( k\right) } \times V}\right)$ that maps node features of higher-order nodes to the first-order node space. For a given node $v \in V$ , this bipartite layer adds the hidden representations ${\overrightarrow{h}}_{u}^{k, l}$ of each higher-order node $u = \left( {{u}_{0},\ldots ,{u}_{k - 1}}\right) \in {V}^{\left( k\right) }$ with ${u}_{k - 1} = v$ to the representation ${h}_{v}^{1, g} \in {\mathbb{R}}^{{F}^{g}}$ generated by the last layer of the first-order message passing. Notice that the dimensions of representations in the last layers of the $k$ -th and first-order message passing should satisfy ${F}^{g} = {H}^{l}$ to enable the summing of the representations. We obtain representations $\left\{ {{\overrightarrow{h}}_{u}^{k, l} + {\overrightarrow{h}}_{v}^{1, g} : \text{ for }u \in {V}^{\left( k\right) }\text{ with }\left( {u, v}\right) \in {E}^{b}}\right\}$ that are the higher-order node representations augmented by the corresponding first-order representations. We then use a function $\mathcal{F}$ to aggregate the augmented higher-order representations at the level of first-order nodes. In our experiments, we learn first-order node representations ${h}^{1, g}$ using GCN message passing with $g$ layers, allowing us to integrate information on the static and the causal topology of a dynamic graph. Formally, we define the bipartite layer as
+
+$$
+{\overrightarrow{h}}_{v}^{b} = \sigma \left( {{\mathbf{W}}^{b}\mathcal{F}\left( \left\{ {{\overrightarrow{h}}_{u}^{k, l} + {\overrightarrow{h}}_{v}^{1, g} : \text{ for }u \in {V}^{\left( k\right) }\text{ with }\left( {u, v}\right) \in {E}^{b}}\right\} \right) }\right) , \tag{2}
+$$
+
+where ${\overrightarrow{h}}_{v}^{b}$ is the output of the bipartite layer for node $v \in V$ , and ${\mathbf{W}}^{b} \in {\mathbb{R}}^{{F}^{g} \times {H}^{l}}$ is a learnable weight matrix. The function $\mathcal{F}$ can be SUM, MEAN, MAX, or MIN.
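+
+The following PyTorch sketch illustrates Eq. (2) with $\mathcal{F} = \mathrm{{SUM}}$ ; again, this is an illustration under our own naming conventions, not the released code:
+
+```python
+# Bipartite projection from higher-order to first-order nodes, cf. Eq. (2).
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class BipartiteLayer(nn.Module):
+    def __init__(self, dim_in, dim_out):
+        super().__init__()
+        self.lin = nn.Linear(dim_in, dim_out, bias=False)       # W^b
+
+    def forward(self, h_ho, h_fo, bipartite_index):
+        # h_ho: [N_k, H^l] higher-order representations; h_fo: [N, F^g] with F^g = H^l
+        # bipartite_index: [2, M] pairs (u, v); higher-order node u ends in node v
+        u, v = bipartite_index
+        msgs = h_ho[u] + h_fo[v]                                 # augment by first-order repr.
+        agg = torch.zeros_like(h_fo).index_add_(0, v, msgs)      # aggregation F = SUM
+        return F.elu(self.lin(agg))
+
+# toy usage: higher-order nodes (a,b) and (c,b) both map to first-order node b (index 1)
+h_ho = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
+h_fo = torch.zeros(3, 2)                                         # first-order nodes a, b, c
+bipartite_index = torch.tensor([[0, 1], [1, 1]])
+layer = BipartiteLayer(2, 2)
+print(layer(h_ho, h_fo, bipartite_index).shape)                  # torch.Size([3, 2])
+```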
+
+Figure 2 gives an overview of the proposed neural network architecture for the dynamic graph (and associated second-order De Bruijn graph model) shown in Figure 1 (left). The higher-order message
+
+
+
+Figure 2: Illustration of DBGNN architecture with two message passing layers in first- (left, gray) and second-order De Bruijn graph (right, orange) corresponding to the dynamic graph in Figure 1 (left). Red edges indicate the bipartite mapping ${G}^{b}$ of higher-order node representations to first-order representations. An additional linear layer (not shown) is used for node classification.
+
+passing layers on the right use the topology of the second-order De Bruijn graph in Figure 1 (left), while the first-order message passing layers (left) use the topology of the first-order graph. Note that the first-order and higher-order message passing can be performed in parallel, and that the number of message passing layers does not necessarily need to be the same. Red edges indicate the propagation of higher-order node representations to first-order nodes performed in the final bipartite layer. Due to space constraints, in Figure 2 we omit the final linear layer used for classification.
+
+## 4 Experimental Evaluation
+
+In the following, we experimentally evaluate our proposed causality-aware graph neural network architecture both in synthetic and empirical time series data on dynamic graphs. With our evaluation, we want to answer the following questions:
+
+Q1 How does the performance of De Bruijn Graph Neural Networks compare to temporal and non-temporal graph learning techniques?
+
+Q2 Can we use De Bruijn Graph Neural Networks to learn interpretable static latent space representations of nodes in dynamic graphs?
+
+To address those questions, we use six time series data sets on dynamic graphs that provide meta-information on node classes. The overall statistics of the data sets can be found in Table 1. temp-clusters is a synthetically generated dynamic graph with three clusters in the causal topology, but no pattern in the static topology. To generate this data set, we first constructed a random graph and generated random sequences of time-stamped edges. We then selectively swap the time stamps of edges such that causal walks of length two within three clusters of nodes are overrepresented, while causal walks between clusters are underrepresented. We include a more detailed description in the appendix (code and data will be provided in a companion repository). Apart from this synthetic data set, we use five empirical time series data sets: student-sms captures time-stamped SMS exchanged over four weeks between freshmen at the Technical University of Denmark [31]. We use the gender of participants as ground truth classes and use a maximum time difference of $\delta = {40}$ . Since the time granularity of this data set is five minutes, this corresponds to a maximum time difference of 200 minutes. high-school-2011 and high-school-2012 capture time-stamped proximities between high-school students in two consecutive years [32] (4 days in 2011, 7 days in 2012). We use the gender of students as ground truth classes. workplace captures time-stamped proximity interactions between employees recorded in an office building for multiple days in different years [33]. We use the department of employees as ground truth classes. hospital captures time-stamped proximities between patients and healthcare workers in a hospital ward. We use employees' roles (patient, nurse, administrative, doctor) as ground truth node classes. All the proximity datasets were collected with a resolution of 20 seconds. To mitigate the computational complexity of the causal walk extraction in the (undirected) proximity data sets, we coarsen the resolution by aggregating interactions to a resolution of fifteen minutes and use $\delta = 4$ , which corresponds to a maximum time difference of one hour. Based on the resulting statistics of causal walks, we use the method (and code) provided in [12] to select a higher-order De Bruijn graph model. In Table 1 we report the $p$ -value of the resulting likelihood ratio test, which is used to test the hypothesis that a first-order graph model is sufficient to explain the observed causal walk statistics, against the alternative hypothesis that a second-order De Bruijn graph model is needed. Since all $p$ -values are numerically zero, we find strong evidence for patterns that justify a second-order De Bruijn graph model for all data sets.
+
+| Data set | Ref | $\left| V\right|$ | $\left| E\right|$ | $\left| {E}^{\mathcal{T}}\right|$ | $p\left( {k = 2}\right)$ | $\left| {V}^{\left( 2\right) }\right|$ | $\left| {E}^{\left( 2\right) }\right|$ | $\delta$ | Classes |
| temp-clusters | [blinded] | 30 | 560 | 60000 | 0.0 | 560 | 6,789 | 1 | 3 |
| high-school-2011 | [32] | 126 | 3042 | 28561 | 0.0 | 3042 | 17141 | 4 | 2 |
| high-school-2012 | [32] | 180 | 3965 | 45047 | 0.0 | 3965 | 20614 | 4 | 2 |
| hospital | [34] | 75 | 2028 | 32424 | 0.0 | 2028 | 15500 | 4 | 4 |
| student-sms | [31] | 429 | 733 | 46138 | 0.0 | 733 | 846 | 40 | 2 |
| workplace | [33] | 92 | 1431 | 9827 | 0.0 | 1431 | 7121 | 4 | 5 |
+
+Table 1: Overview of time series data and ground truth node classes used in the experiments.
+
+Using a second-order De Bruijn graph, we compare the node classification performance of the DBGNN architecture against the following five baselines. The first three are standard (static) graph learning techniques, namely Graph Convolutional Networks (GCN) [28], DeepWalk [35] and node2vec [36]. We further use two recently proposed temporal graph embedding techniques: Embedding Variable Orders (EVO) [14] is a node representation learning framework that captures non-Markovian characteristics in dynamic graphs. Similar to our approach, EVO uses a higher-order network to generate time-aware node representations that can be used for downstream node classification. HONEM [20] is a higher-order network embedding approach that captures non-Markovian dependencies in time series data on graphs. This framework uses truncated SVD on a higher-order neighborhood matrix that considers the temporal order of interactions.
+
+Addressing Q1, the results of our experiments on node classification are shown in Table 2. Since the classes of the empirical data sets are imbalanced, we use balanced accuracy and additionally report macro-averaged precision, recall and f1-score for a 70-30 training-test split. We report the average performance across multiple splits. For DBGNN, GCN, DeepWalk, node2vec, and HONEM we performed 50 runs. Due to its larger computational complexity (and time constraints) we could only perform 10 runs on EVO. The standard deviations are included in the appendix. We trained node2vec, EVO, and DeepWalk with 80 walks of length 40 per node and a window of 10. We obtained the embeddings using the word2vec implementation in [37]. For EVO, we use the average as an aggregator for the higher-order representations. To ensure the comparability of the results from GCN and DBGNN, we train both with the same number of convolutional layers with a learning rate of 0.001 for 5000 epochs, ELU [38] as activation function, and Adam [39] optimiser. For DBGNN, we use SUM as aggregation function $\mathcal{F}$ . Since the data sets had no node features, we used one-hot encoding of nodes as a feature matrix (and a one-hot encoding of higher-order nodes in the initial layer of the DBGNN). For all methods, we fix the dimensionality of the learned representations to $d = {16}$ , which is justified by the size of the graphs. We manually tuned the number of hidden dimensions of the first hidden layers for GCN and DBGNN, as well as the $p$ and $q$ parameters of EVO and node2vec. We report the results for the best combination of hyperparameters.
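+
+The following sketch summarises this training configuration (values taken from the text); the model is a plain feed-forward stand-in, since the point here is only the optimiser, loss, split and feature setup, not the DBGNN layers themselves:
+
+```python
+# Hedged sketch of the training setup: 70-30 split, one-hot features, Adam, lr=0.001, 5000 epochs.
+import torch
+import torch.nn as nn
+
+n_nodes, n_classes, hidden, dim = 126, 2, 32, 16    # e.g. high-school-2011
+features = torch.eye(n_nodes)                       # one-hot node features
+labels = torch.randint(0, n_classes, (n_nodes,))    # placeholder ground truth
+train_mask = torch.rand(n_nodes) < 0.7              # 70-30 training-test split
+
+model = nn.Sequential(                              # stand-in, not the actual DBGNN/GCN
+    nn.Linear(n_nodes, hidden), nn.ELU(),
+    nn.Linear(hidden, dim), nn.ELU(),
+    nn.Linear(dim, n_classes),
+)
+optimiser = torch.optim.Adam(model.parameters(), lr=0.001)
+loss_fn = nn.CrossEntropyLoss()
+
+for epoch in range(5000):
+    optimiser.zero_grad()
+    out = model(features)
+    loss = loss_fn(out[train_mask], labels[train_mask])
+    loss.backward()
+    optimiser.step()
+```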
+
+As expected, the results in Table 2 for the synthetic temporal clusters data set show that the three time-aware methods (EVO, HONEM, and DBGNN) perform considerably better than the static counterparts, which only "see" a random graph topology that does not allow a meaningful assignment of node classes. Both EVO and our proposed DBGNN architecture are able to perfectly classify nodes in this data set. Interestingly, despite their good performance in the synthetic data set, the three time-aware methods show much higher variability in the empirical data sets. We find that DBGNN shows superior performance in terms of balanced accuracy, f1-macro, and recall-macro, for all of the five empirical data sets, with a relative performance increase compared to the second best method ranging from 1.55% to 28.16%. For precision-macro, DBGNN performs best in four of the five data sets. We attribute these results to the ability of our architecture to consider both patterns in the (static) graph topology and the causal topology, as well as to the underlying supervised approach enabled by the GCN-based message passing.
+
+To address Q2, we study visualizations of the hidden representations of higher- and first-order nodes generated by the DBGNN architecture for the synthetic temporal cluster data set, which exhibits three clear clusters in the causal topology. We use the hidden representations $\overrightarrow{{h}_{v}^{b}}$ generated by the bipartite layer of our DBGNN architecture, as defined in Section 3. We compare this to the representation generated in the last message passing layer of a GCN. Figure 3 in the appendix confirms that the DBGNN architecture learns meaningful latent space representations of nodes that incorporate temporal patterns.
+
+| dataset | method | Balanced Accuracy | F1-score-macro | Precision-macro | Recall-macro |
| temp-clusters | DeepWalk | 32.47 | 30.39 | 32.25 | 32.47 |
| Node2Vec p=1 q=4 | 35.48 | 33.02 | 34.92 | 35.48 |
| GCN (8,32) | 33.52 | 12.5 | 8.61 | 33.52 |
| EVO p=1 q=1 | 100.0 | 100.0 | 100.0 | 100.0 |
| HONEM | 54.94 | 53.5 | 58.16 | 54.94 |
| DBGNN (16,16) | 100.0 | 100.0 | 100.0 | 100.0 |
| gain | | 0% | 0% | 0% | 0% |
| high-school-2011 | DeepWalk | 55.25 | 54.02 | 60.45 | 55.25 |
| Node2Vec p=1 q=4 | 56.89 | 56.29 | 60.05 | 56.89 |
| GCN (32,4) | 50.06 | 40.27 | 33.99 | 50.06 |
| EVO p=1 q=4 | 57.21 | 56.28 | 62.09 | 57.21 |
| HONEM | 54.24 | 53.08 | 56.44 | 54.24 |
| DBGNN (32,8) | 64.4 | 63.7 | 65.14 | 64.4 |
| gain | | 12.57% | 13.16% | 4.91% | 12.57% |
| high-school-2012 | DeepWalk | 59.46 | 59.6 | 71.71 | 59.46 |
| Node2Vec p=1 q=4 | 60.75 | 61.23 | 72.44 | 60.75 |
| GCN (8,32) | 58.03 | 56.39 | 59.16 | 58.03 |
| EVO p=4 q=1 | 57.98 | 57.5 | 69.42 | 57.98 |
| HONEM | 53.16 | 51.7 | 56.59 | 53.16 |
| DBGNN (4,8) | 65.8 | 65.89 | 67.27 | 65.8 |
| gain | | 8.31% | 7.61% | -7.14% | 8.31% |
| hospital | DeepWalk | 47.18 | 44.18 | 43.91 | 47.18 |
| Node2Vec p=1 q=4 | 50.6 | 47.14 | 45.81 | 50.6 |
| GCN (32,32) | 49.48 | 44.62 | 43.55 | 49.48 |
| EVO p=1 q=4 | 36.34 | 36.44 | 42.1 | 36.34 |
| HONEM | 46.17 | 43.13 | 44.45 | 46.17 |
| DBGNN (32,16) | 59.04 | 55.26 | 58.71 | 57.71 |
| gain | | 16.68% | 17.23% | 28.16% | 14.05% |
| student-sms | DeepWalk | 53.22 | 50.57 | 60.57 | 53.22 |
| Node2Vec p=1 q=4 | 53.22 | 50.97 | 58.56 | 53.22 |
| GCN (4,32) | 57.33 | 57.25 | 57.72 | 57.33 |
| EVO p=4 q=1 | 52.93 | 50.66 | 57.14 | 52.93 |
| HONEM | 50.43 | 44.44 | 52.91 | 50.43 |
| DBGNN (4,4) | 60.6 | 60.89 | 62.55 | 60.6 |
| gain | | 5.7% | 6.36% | 3.27% | 5.7% |
| workplace | DeepWalk | 77.81 | 76.74 | 76.06 | 77.81 |
| Node2Vec p=1 q=4 | 78.0 | 77.01 | 76.38 | 78.0 |
| GCN (32,16) | 81.86 | 78.72 | 78.58 | 79.93 |
| EVO p=1 q=4 | 77.0 | 75.68 | 75.03 | 77.0 |
| HONEM | 73.26 | 72.82 | 73.73 | 73.26 |
| DBGNN (32,8) | 83.13 | 81.06 | 81.52 | 81.75 |
| gain | | 1.55% | 2.97% | 3.74% | 2.28% |
+
+Table 2: Results of node classification in six dynamic graphs for static graph learning techniques (DeepWalk, node2vec, GCN) and time-aware methods (HONEM, EVO) as well as the DBGNN architecture proposed in this work.
+
+## 5 Conclusion
+
+In summary, we propose an approach to apply graph neural networks to high-resolution time series data that captures the temporal ordering of time-stamped edges in dynamic graphs. Our method is based on a novel combination of (i) a statistical approach to infer an optimal static higher-order De Bruijn graph model for the causal topology that is due to the temporal ordering of edges, (ii) gradient-based learning in a neural network architecture that performs neural message passing in the inferred higher-order De Bruijn graph, and (iii) an additional bipartite mapping layer that maps the learnt hidden representation of higher-order nodes to the original node space. Thanks to this approach, our architecture is able to generalize neural message passing to a static higher-order graph model that captures the causal topology of a dynamic graph, which can considerably deviate from what we would expect based on the mere (static) topology of edges. The results of our experiments demonstrate that the resulting architecture can considerably improve the performance of node classification in time series data, despite using message passing in a relatively simple static (augmented) graph. Bridging recent research on higher-order graph models in network science and deep learning in graphs $\left\lbrack {{13},{15},{23},{24}}\right\rbrack$ , our work contributes to the ongoing discussion about the need for augmented message passing schemes in data on graphs with complex characteristics [27].
+
+References
+
+[1] Hamilton, W. L. Graph representation learning. Synthesis Lectures on Artifical Intelligence and Machine Learning 14, 1-159 (2020). 1
+
+[2] Wu, Z. et al. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems 32, 4-24 (2021). 1
+
+[3] Kempe, D., Kleinberg, J. & Kumar, A. Connectivity and inference problems for temporal networks. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, STOC '00, 504-513 (Association for Computing Machinery, New York, NY, USA, 2000). URL https://doi.org/10.1145/ 335305.335364.1,3
+
+[4] Holme, P. & Saramäki, J. Temporal networks. Phys. Rep. 519, 97 - 125 (2012). URL http://www.sciencedirect.com/science/article/pii/S0370157312000841.2, 3
+
+[5] Badie-Modiri, A., Karsai, M. & Kivelä, M. Efficient limited-time reachability estimation in temporal networks. Phys. Rev. E 101, 052303 (2020). URL https://link.aps.org/doi/10.1103/PhysRevE.101.052303.1,3
+
+[6] Lentz, H. H. K., Selhorst, T. & Sokolov, I. M. Unfolding accessibility provides a macroscopic approach to temporal networks. Phys. Rev. Lett. 110, 118701 (2013). URL http://link.aps.org/doi/10.1103/ PhysRevLett.110.118701. 1
+
+[7] Badie-Modiri, A., Rizi, A. K., Karsai, M. & Kivelă, M. Directed percolation in temporal networks. Phys. Rev. Research 4, L022047 (2022). URL https://link.aps.org/doi/10.1103/PhysRevResearch.4.L022047.1
+
+[8] Pfitzner, R., Scholtes, I., Garas, A., Tessone, C. J. & Schweitzer, F. Betweenness preference: Quantifying correlations in the topological dynamics of temporal networks. Phys. Rev. Lett. 110, 198701 (2013). URL http://link.aps.org/doi/10.1103/PhysRevLett.110.198701.https://doi.org/10.1103/ PhysRevLett.110.198701. 1
+
+[9] Scholtes, I. et al. Causality-driven slow-down and speed-up of diffusion in non-markovian temporal networks. Nature Communications 5, 5024 (2014). URL http://www.nature.com/ncomms/ 2014/140924/ncomms6024/full/ncomms6024.html. https://doi.org/10.1038/ncomms6024, 1307.4030.1,3,4
+
+[10] Rosvall, M., Esquivel, A. V., Lancichinetti, A., West, J. D. & Lambiotte, R. Memory in network flows and its effects on spreading dynamics and community detection. Nature communications 5 (2014). 1, 3, 4
+
+[11] de Bruijn, N. G. A combinatorial problem. In Nederl. Akad. Wetensch., Proc. 49, 461-467 (1946). 1, 4
+
+[12] Scholtes, I. When is a network a network?: Multi-order graphical model selection in pathways and temporal networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, CA, August 2017, KDD '17, 1037-1046 (ACM, New York, NY, USA, 2017). URL http://doi.acm.org/10.1145/3097983.3098145.http://doi.acm.org/10.1145/3097983.3098145.1,3,4,5,7
+
+[13] Lambiotte, R., Rosvall, M. & Scholtes, I. From networks to optimal higher-order models of complex systems. Nature physics 15, 313-320 (2019). 1, 3, 4, 9
+
+[14] Belth, C., Kamran, F., Tjandra, D. & Koutra, D. When to remember where you came from: Node representation learning in higher-order networks. In 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 222-225 (2019). 1, 4, 8
+
+[15] Eliassi-Rad, T., Latora, V., Rosvall, M. & Scholtes, I. Higher-Order Graph Models: From Theoretical Foundations to Machine Learning (Dagstuhl Seminar 21352). Dagstuhl Reports 11, 139-178 (2021). URL https://drops.dagstuhl.de/opus/volltexte/2021/15592.2, 9
+
+[16] Krieg, S. J., Burgis, W. C., Soga, P. M. & Chawla, N. V. Deep ensembles for graphs with higher-order dependencies. CoRR abs/2205.13988 (2022). URL https://doi.org/10.48550/arXiv.2205.13988.2205.13988.2
+
+[17] Fey, M. & Lenssen, J. E. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428 (2019). 2
+
+[18] Salnikov, V., Schaub, M. T. & Lambiotte, R. Using higher-order markov models to reveal flow-based communities in networks. Scientific Reports 6, 1-13 (2016). 3, 4
+
+[19] Chung, F., Diaconis, P. & Graham, R. Universal cycles for combinatorial structures. Discrete Mathematics 110, 43-59 (1992). URL https://www.sciencedirect.com/science/article/pii/ 0012365X9290699G. 4
+
+[20] Saebi, M., Ciampaglia, G. L., Kaplan, L. M. & Chawla, N. V. HONEM: learning embedding for higher order networks. Big Data 8, 255-269 (2020). URL https://doi.org/10.1089/big.2019.0169.4, 8
+
+[21] Xu, J., Wickramarathne, T. L. & Chawla, N. V. Representing higher-order dependencies in networks. Science Advances 2 (2016). URL http://advances.sciencemag.org/content/2/5/e1600028.http://advances.sciencemag.org/content/2/5/e1600028.full.pdf.4
+
+[22] Petrovic, L. V. & Scholtes, I. Learning the markov order of paths in graphs. In Laforest, F. et al. (eds.) WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, 1559-1569 (ACM, 2022). URL https://doi.org/10.1145/3485447.3512091.4
+
+[23] Torres, L., Blevins, A. S., Bassett, D. & Eliassi-Rad, T. The why, how, and when of representations for complex systems. SIAM Review 63, 435-485 (2021). URL https://doi.org/10.1137/20M1355896.https://doi.org/10.1137/20M1355896.4,9
+
+[24] Benson, A. R., Gleich, D. F. & Higham, D. J. Higher-order network analysis takes off, fueled by classical ideas and new data. arXiv preprint arXiv:2103.05031 (2021). 4, 9
+
+[25] Feng, Y., You, H., Zhang, Z., Ji, R. & Gao, Y. Hypergraph neural networks. CoRR abs/1809.09401 (2018). URL http://arxiv.org/abs/1809.09401.1809.09401.5
+
+[26] Huang, J. & Yang, J. Unignn: a unified framework for graph and hypergraph neural networks. In Zhou, Z. (ed.) Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, 2563-2569 (ijcai.org, 2021). URL https://doi.org/10.24963/ijcai.2021/353.5
+
+[27] Veličković, P. Message passing all the way up. arXiv preprint arXiv:2202.11097 (2022). 5, 9
+
+[28] Kipf, T. N. & Welling, M. Semi-supervised classification with graph convolutional networks. CoRR abs/1609.02907 (2016). URL http://arxiv.org/abs/1609.02907.1609.02907.5, 6, 8
+
+[29] Badie-Modiri, A., Karsai, M. & Kivelä, M. Efficient limited-time reachability estimation in temporal networks. Physical Review E 101, 052303 (2020). 5
+
+[30] Petrovic, L. V. & Scholtes, I. Paco: Fast counting of causal paths in temporal network data. In Leskovec, J., Grobelnik, M., Najork, M., Tang, J. & Zia, L. (eds.) Companion of The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, 521-526 (ACM / IW3C2, 2021). URL https: //doi.org/10.1145/3442442.3452050.5
+
+[31] Sapiezynski, P., Stopczynski, A., Lassen, D. D. & Lehmann, S. Interaction data from the copenhagen networks study. Scientific Data 6, 1-10 (2019). 7, 8
+
+[32] Fournet, J. & Barrat, A. Contact patterns among high school students. PloS one 9, e107878 (2014). 7, 8
+
+[33] Génois, M. et al. Data on face-to-face contacts in an office building suggest a low-cost vaccination strategy based on community linkers. Network Science 3, 326-347 (2015). URL http://www.sociopatterns.org/datasets/contacts-in-a-workplace/.7, 8
+
+[34] Vanhems, P. et al. Estimating potential infection transmission routes in hospital wards using wearable proximity sensors. PloS one 8, e73970 (2013). URL http://www.sociopatterns.org/datasets/ hospital-ward-dynamic-contact-network/. 8
+
+[35] Perozzi, B., Al-Rfou, R. & Skiena, S. Deepwalk: online learning of social representations. In Macskassy, S. A., Perlich, C., Leskovec, J., Wang, W. & Ghani, R. (eds.) The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, New York, NY, USA - August 24 - 27, 2014, 701-710 (ACM, 2014). URL https://doi.org/10.1145/2623330.2623732.8
+
+[36] Grover, A. & Leskovec, J. node2vec: Scalable feature learning for networks. In Krishnapuram, B. et al. (eds.) Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, 855-864 (ACM, 2016). URL https: //doi.org/10.1145/2939672.2939754.8
+
+[37] Rehurek, R. & Sojka, P. Gensim-python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic 3 (2011). 8
+
+[38] Clevert, D.-A., Unterthiner, T. & Hochreiter, S. Fast and accurate deep network learning by exponential linear units (elus). arXiv: Learning (2016). 8
+
+[39] Kingma, D. & Ba, J. Adam: A method for stochastic optimization. International Conference on Learning Representations (2014). 8
+
+## A Generation of Synthetic data with temporal clusters
+
+temp-clusters is a synthetically generated dynamic graph with a random static topology but a strong cluster structure in the causal topology. To generate the dynamic graph, we first generate a static directed random graph with $n$ vertices and $m$ edges. For our experiment we chose $n = {30}$ and $m = {560}$ . We randomly assign vertices to three equally-sized, non-overlapping clusters, where $C\left( v\right)$ denotes the cluster of vertex $v$ . We then generate $N$ sequences of two randomly chosen time-stamped edges $\left( {{v}_{0},{v}_{1};t}\right)$ and $\left( {{v}_{1},{v}_{2};t + 1}\right)$ that contribute to a causal walk of length two in the resulting dynamic graph. For each vertex ${v}_{1}$ of such a causal path of length two, we randomly pick:
+
+- two time-stamped edges $\left( {u,{v}_{1};{t}_{1}}\right)$ and $\left( {{v}_{1}, w;{t}_{1} + 1}\right)$ such that $C\left( u\right) = C\left( {v}_{1}\right) \neq C\left( w\right)$
+
+- two time-stamped edges $\left( {x,{v}_{1};{t}_{2}}\right)$ and $\left( {{v}_{1}, z;{t}_{2} + 1}\right)$ with $C\left( {v}_{1}\right) = C\left( z\right) \neq C\left( x\right)$
+
+Finally, we swap the time stamps of the four time-stamped edges to $\left( {u,{v}_{1};{t}_{1}}\right)$ and $\left( {{v}_{1}, z;{t}_{1} + 1}\right)$ , $\left( {x,{v}_{1};{t}_{2}}\right)$ , and $\left( {{v}_{1}, w;{t}_{2} + 1}\right)$ . This swapping procedure is repeated for each vertex ${v}_{1}$ of a causal path of length two. This simple process changes the temporal ordering of time-stamped edges, affecting neither the topology nor the frequency of time-stamped edges. The model changes time stamps of edges (and thus causal paths) such that vertices are preferentially connected, via causal paths of length two, to other vertices in the same cluster. This leads to a strong cluster structure in the causal topology of the dynamic graph, which (i) is neither present in the time-aggregated topology nor in the temporal activation patterns of edges, and (ii) can nevertheless be detected by higher-order methods. A random reshuffling of timestamps destroys the cluster pattern, which confirms that it is only due to the temporal order of time-stamped edges.
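+
+The core of this procedure is the timestamp swap between the two outgoing edges; the toy sketch below (with made-up node names and timestamps, not the full generator) shows that the swap leaves topology and edge frequencies untouched while creating a within-cluster causal walk through ${v}_{1}$ :
+
+```python
+# Illustration of the timestamp swap: incoming edges keep their timestamps,
+# the two outgoing edges of v1 exchange theirs.
+def swap(edge_pair_1, edge_pair_2):
+    (u, v1, t1), (v1a, w, t1b) = edge_pair_1      # C(u) == C(v1) != C(w)
+    (x, v1b, t2), (v1c, z, t2b) = edge_pair_2     # C(x) != C(v1) == C(z)
+    assert v1 == v1a == v1b == v1c and t1b == t1 + 1 and t2b == t2 + 1
+    return [(u, v1, t1), (v1, z, t1 + 1)], [(x, v1, t2), (v1, w, t2 + 1)]
+
+before_1 = [("u", "v1", 3), ("v1", "w", 4)]       # causal walk leaving the cluster of v1
+before_2 = [("x", "v1", 8), ("v1", "z", 9)]       # causal walk entering from outside
+after_1, after_2 = swap(before_1, before_2)
+print(after_1)   # [('u', 'v1', 3), ('v1', 'z', 4)]  -> within-cluster causal walk
+print(after_2)   # [('x', 'v1', 8), ('v1', 'w', 9)]
+```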
+
+## B Latent Space Embeddings of Synthetic Example
+
+Figure 3 shows a latent representation of nodes in the synthetic data set temp-clusters generated by the DBGNN (a) and GCN (b) architecture. This synthetically generated dynamic graph contains no pattern whatsoever in the (static) graph topology, which corresponds to a random graph, i.e. the topology of edges is random and all nodes have similar degrees (cf. Figure 3(b)). However, correlations in the temporal ordering of edges lead to three strong clusters in the causal topology, i.e. there are three groups of nodes where, due to the arrow of time and the temporal ordering of edges, pairs of nodes within the same cluster can influence each other via causal walks more frequently than pairs of nodes in different clusters. We emphasize that the resulting pattern in the causal topology is exclusively due to the temporal ordering of edges. The latent space embedding in Figure 3(a) highlights the DBGNN architecture's ability to learn this pattern in the causal topology of the underlying dynamic graph, which is absent in Figure 3(b). As expected, the different node degrees of the static graph (visible as clusters in Figure 3(b)) are the only pattern captured in the hidden node representations of the GCN architecture, which is insensitive to the temporal ordering of edges. This synthetic example confirms that DBGNNs provide a simple, static causality-aware approach for deep learning in dynamic graphs.
+
+
+
+(a) Latent space representation of nodes generated by the De Bruijn Graph Neural Network (DBGNN) using a higher-order De Bruijn graph with order $k = 2$ .
+
+
+
+(b) Latent space representation of nodes generated by Graph Convolutional Network (GCN).
+
+Figure 3: Latent space representations of nodes in a synthetically generated dynamic graph (temp-clusters) with three clusters in the causal topology, where colours indicate cluster memberships. The hidden node representations learned by the DBGNN architecture capture the cluster structure in the causal topology, which is exclusively due to the temporal ordering, and not the topology or frequency, of time-stamped edges.
+
+## C Standard Deviation of Classification Results
+
+In Table 3 we present the standard deviations of the classification results reported in Table 2 across all runs for all models.
+
+| dataset | method | Balanced Accuracy | F1-score-macro | Precision-macro | Recall-macro |
+| --- | --- | --- | --- | --- | --- |
+| temp-clusters | DeepWalk | 15.38 | 15.04 | 18.03 | 15.38 |
+| | Node2Vec p=1 q=4 | 17.12 | 16.88 | 20.24 | 17.12 |
+| | GCN (8,32) | 7.3 | 7.69 | 8.04 | 7.3 |
+| | EVO p=1 q=1 | 0.0 | 0.0 | 0.0 | 0.0 |
+| | HONEM | 16.27 | 16.71 | 19.61 | 16.27 |
+| | DBGNN (16,16) | 0.0 | 0.0 | 0.0 | 0.0 |
+| high-school-2011 | DeepWalk | 5.83 | 7.22 | 12.79 | 5.83 |
+| | Node2Vec p=1 q=4 | 6.34 | 7.58 | 9.44 | 6.34 |
+| | GCN (32,4) | 0.89 | 3.1 | 4.83 | 0.89 |
+| | EVO p=1 q=4 | 5.72 | 7.65 | 9.33 | 5.72 |
+| | HONEM | 5.72 | 6.93 | 10.07 | 5.72 |
+| | DBGNN (32,8) | 7.0 | 7.42 | 7.8 | 7.0 |
+| high-school-2012 | DeepWalk | 4.97 | 6.52 | 11.0 | 4.97 |
+| | Node2Vec p=1 q=4 | 5.27 | 6.8 | 11.29 | 5.27 |
+| | GCN (8,32) | 6.87 | 9.49 | 13.58 | 6.87 |
+| | EVO p=4 q=1 | 4.14 | 6.07 | 9.96 | 4.14 |
+| | HONEM | 4.59 | 5.89 | 9.12 | 4.59 |
+| | DBGNN (4,8) | 6.59 | 6.62 | 7.07 | 6.59 |
+| hospital | DeepWalk | 7.64 | 6.9 | 7.51 | 7.64 |
+| | Node2Vec p=1 q=4 | 6.79 | 6.46 | 6.95 | 6.79 |
+| | GCN (32,32) | 11.06 | 12.0 | 13.58 | 11.06 |
+| | EVO p=1 q=4 | 9.31 | 11.34 | 16.31 | 9.31 |
+| | HONEM | 8.51 | 7.78 | 8.25 | 8.51 |
+| | DBGNN (32,16) | 13.09 | 12.54 | 15.02 | 12.65 |
+| student-sms | DeepWalk | 2.72 | 4.45 | 10.05 | 2.72 |
+| | Node2Vec p=1 q=4 | 3.29 | 4.93 | 9.13 | 3.29 |
+| | GCN (4,32) | 3.59 | 3.65 | 3.91 | 3.59 |
+| | EVO p=4 q=1 | 3.38 | 5.14 | 7.89 | 3.38 |
+| | HONEM | 1.29 | 2.31 | 15.0 | 1.29 |
+| | DBGNN (4,4) | 4.28 | 4.47 | 4.56 | 4.28 |
+| workplace | DeepWalk | 2.23 | 1.85 | 1.48 | 2.23 |
+| | Node2Vec p=1 q=4 | 3.3 | 3.11 | 2.95 | 3.3 |
+| | GCN (32,16) | 8.67 | 8.6 | 9.61 | 8.26 |
+| | EVO p=1 q=4 | 3.12 | 2.36 | 1.65 | 3.12 |
+| | HONEM | 6.27 | 5.17 | 4.34 | 6.27 |
+| | DBGNN (32,8) | 9.67 | 9.76 | 10.26 | 9.65 |
+
+Table 3: Standard deviations of the node classification results in six dynamic graphs for static graph learning techniques (DeepWalk, node2vec, GCN), time-aware methods (HONEM, EVO), and the DBGNN architecture proposed in this work.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1fd2b52f9d20b31dc9d9d504f155f4015fe532b4
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/Dbkqs1EhTr/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,279 @@
+§ DE BRUIJN GOES NEURAL: CAUSALITY-AWARE GRAPH NEURAL NETWORKS FOR TIME SERIES DATA ON DYNAMIC GRAPHS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+We introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold in the causal topology of dynamic graphs, which is determined by causal walks, i.e. temporally ordered sequences of links by which nodes can influence each other over time. Our architecture builds on multiple layers of higher-order De Bruijn graphs, an iterative line graph construction where nodes in a De Bruijn graph of order $k$ represent walks of length $k - 1$ , while edges represent walks of length $k$ . We develop a graph neural network architecture that utilizes De Bruijn graphs to implement a message passing scheme that follows a non-Markovian dynamics, which enables us to learn patterns in the causal topology of a dynamic graph. Addressing the issue that De Bruijn graphs with different orders $k$ can be used to model the same data set, we further apply statistical model selection to determine the optimal graph topology to be used for message passing. An evaluation in synthetic and empirical data sets suggests that DBGNNs can leverage temporal patterns in dynamic graphs, which substantially improves the performance in a supervised node classification task.
+
+§ 1 INTRODUCTION
+
+Graph Neural Networks (GNNs) [1, 2] have become a cornerstone for the application of deep learning to data with a non-Euclidean, relational structure. Different flavors of GNNs have been shown to be highly efficient for tasks like node classification, representation learning, link prediction, cluster detection, or graph classification. The popularity of GNNs is largely due to the abundance of data that can be represented as graphs, i.e. as a set of nodes with pairwise connections represented as links. However, we increasingly have access to time-resolved data that not only capture which nodes are connected to each other, but also when and in which temporal order those connections occur. A number of works in computer science, network science, and interdisciplinary physics have highlighted how the temporal dimension of dynamic graphs, i.e. the timing and ordering of links, influences the causal topology of networked systems, i.e. which nodes can possibly influence each other over time [3-5]. In a nutshell, if an undirected link (a, b) between two nodes $a$ and $b$ occurs before an undirected link (b, c), node $a$ can causally influence node $c$ via node $b$ . If the temporal ordering of those two links is reversed, node $a$ cannot influence node $c$ via $b$ due to the directionality of the arrow of time. This simple example shows that the arrow of time in dynamic graphs limits possible causal influences between nodes beyond what we would expect based on the mere topology of links.
+
+Beyond such toy examples, a number of recent studies in network science, computer science, and interdisciplinary physics have shown that the temporal ordering of links in real time series data on graphs has non-trivial consequences for the properties of networked systems, e.g. for reachability and percolation [6, 7], diffusion and epidemic spreading [8, 9], node rankings and community structures [10]. It has further been shown that this interesting aspect of dynamic graphs can be understood using a variant of De Bruijn graphs [11], i.e. static higher-order graphical models [9, 12, 13] of causal paths that capture both the temporal and the topological dimension of time series data on graphs.
+
+While the generalization of network analysis techniques like node centrality measures and community detection [10, 12], or graph embedding [14] to such higher-order models has been successful, to the best of our knowledge no generalizations of Graph Neural Networks to higher-order De Bruijn graphs have been proposed [15, 16]. Such a generalization bears several promises: First, it could enable us to apply well-known and efficient gradient-based learning techniques in a static neural network architecture that is able to learn patterns in the causal topology of dynamic graphs that are due to the temporal ordering of links. Second, by making the temporal ordering of links in time-stamped data a first-class citizen of graph neural networks, this generalization could also be an interesting approach to incorporate a necessary condition for causality into state-of-the-art geometric deep learning techniques, which often lack meaningful ways to represent time. Finally, a combination of higher-order De Bruijn graph models with graph neural networks enables us to apply frequentist and Bayesian techniques to learn the "optimal" order of a De Bruijn graph model for a given time series, providing new ways to combine statistical learning and model selection with graph neural networks.
+
+Addressing this gap, our work generalizes graph neural networks to high-dimensional De Bruijn graph models for causal paths in time-stamped data on dynamic graphs. We obtain a novel causality-aware graph neural network architecture for time series data that makes the following contributions:
+
+ * We develop a graph neural network architecture that generalizes message passing to multiple layers of higher-order De Bruijn graphs. The resulting De Bruijn Graph Neural Network (DBGNN) architecture leads to a non-Markovian message passing, whose dynamics matches correlations in the temporal ordering of links, thus enabling us to learn patterns that shape the causal topology of dynamic graphs.
+
+ * We evaluate our proposed architecture both in empirical and synthetically generated dynamic graphs and compare its performance to graph neural networks as well as (time-aware) graph representation learning techniques. We find that our method yields superior node classification performance.
+
+ * We combine this architecture with statistical model selection to infer the optimal higher order of a De Bruijn graph. This yields a two-step learning process, where (i) we first learn a parsimonious De Bruijn graph model that neither under- nor overfits patterns in a dynamic graph, and (ii) we apply message passing and gradient-based optimization to the inferred graph in order to address graph learning tasks like node classification or representation learning.
+
+Our work builds on a, to the best of our knowledge, novel combination of (i) statistical model selection to infer optimal higher-order graphical models for causal paths in dynamic graphs, and (ii) gradient-based learning in a neural network architecture that uses the inferred higher-order graphical models as message passing layers. Thanks to this approach, our architecture performs message passing in an optimal graph model for the causal paths in a given dynamic graph. The results of our evaluation confirm that this explicit regularization of the message passing layers enables us to considerably improve performance in a node classification task. The remainder of this paper is structured as follows: In section 2 we introduce the background of our work and formally state the problem that we address, in section 3 we introduce the De Bruijn graph neural network architecture, in section 4 we experimentally validate our method in synthetic and empirical data on dynamic graphs, and in section 5 we summarize our contributions and highlight opportunities for future research. We have implemented our architecture based on the graph learning library pyTorch Geometric [17] and release the code of our experiments as an Open Source package ${}^{1}$ .
+
+§ 2 BACKGROUND AND PROBLEM STATEMENT
+
+Basic definitions We consider a dynamic graph ${G}^{\mathcal{T}} = \left( {V,{E}^{\mathcal{T}}}\right)$ with a (static) set of nodes $V$ and time-stamped (directed) edges $\left( {v,w;t}\right) \in {E}^{\mathcal{T}} \subseteq V \times V \times \mathbb{N}$ where, without loss of generality, integer timestamps $t$ represent the instantaneous time at which a pair of nodes $v,w$ is connected [4]. While many real-world network data exhibit such timestamps, for the application of graph neural networks we often consider a time-aggregated projection $G\left( {V,E}\right)$ along the time axis, where a (static) edge $\left( {v,w}\right) \in E$ exists iff $\exists t \in \mathbb{N} : \left( {v,w;t}\right) \in {E}^{\mathcal{T}}$ . We can further consider edge weights $w : E \rightarrow \mathbb{N}$ defined as $w\left( {v,w}\right) \mathrel{\text{ := }} \left| \left\{ {t \in \mathbb{N} : \left( {v,w;t}\right) \in {E}^{\mathcal{T}}}\right\} \right|$ , i.e. we use $w\left( {v,w}\right)$ to count the number of temporal activations of $\left( {v,w}\right)$ .
+
+A key motivation for the study of graphs as models for complex systems is that, apart from direct interactions captured by edges $\left( {v,w}\right)$ , they facilitate the study of indirect interactions between nodes via paths or walks in a graph. Formally, we define a walk ${v}_{0},{v}_{1},\ldots ,{v}_{l}$ of length $l$ in a graph $G = \left( {V,E}\right)$ as any sequence of nodes ${v}_{i} \in V$ such that $\left( {{v}_{i - 1},{v}_{i}}\right) \in E$ for $i = 1,\ldots ,l$ . The length $l$ of a walk captures the number of traversed edges, i.e. each node $v \in V$ is a walk of length zero, while each edge $\left( {v,w}\right)$ is a walk of length one. We further call a walk ${v}_{0},{v}_{1},\ldots ,{v}_{l}$ a path of length $l$ from ${v}_{0}$ to ${v}_{l}$ iff ${v}_{i} \neq {v}_{j}$ for $i \neq j$ , i.e. a path is a walk whose nodes are pairwise distinct.
+
+${}^{1}$ link blinded in review version
+
+Causal walks and paths in dynamic graphs In a static graph $G = \left( {V,E}\right)$ , the topology, i.e. which nodes can directly and indirectly influence each other via edges, walks, or paths, is completely determined by the edges $E$ . This is different for dynamic graphs, which can be understood by extending the definition of walks and paths to causal concepts that respect the arrow of time:
+
+Definition 1. For a dynamic graph ${G}^{\mathcal{T}} = \left( {V,{E}^{\mathcal{T}}}\right)$ , we call a node sequence ${v}_{0},{v}_{1},\ldots ,{v}_{l}$ a causal walk iff the following two conditions hold: (i) $\left( {{v}_{i - 1},{v}_{i};{t}_{i}}\right) \in {E}^{\mathcal{T}}$ for $i = 1,\ldots ,l$ and (ii) $0 < {t}_{j} - {t}_{i} \leq \delta$ for $i < j$ and some $\delta > 0$ .
+
+The first condition ensures that nodes in a dynamic graph can only indirectly influence each other via a causal walk if a corresponding walk exists in the time-aggregated graph. Due to $0 < {t}_{j} - {t}_{i}$ for $i < j$ , the second condition ensures that time-stamped edges in a causal walk occur in the correct chronological order, i.e. timestamps are monotonically increasing [3, 4]. As an example, two time-stamped edges $\left( {a,b;1}\right) ,\left( {b,c;2}\right)$ constitute a causal walk by which information from node $a$ starting at time ${t}_{1} = 1$ can reach node $c$ at time ${t}_{2} = 2$ via node $b$ , while the same edges in reverse temporal order $\left( {a,b;2}\right) ,\left( {b,c;1}\right)$ do not constitute a causal walk. While this definition of a causal walk does not impose an upper bound on the time difference between consecutive time-stamped edges constituting a causal walk, it is often reasonable to define a time limit $\delta > 0$ , i.e. a time difference beyond which consecutive edges are not considered to contribute to a causal walk. As an example, two time-stamped edges $\left( {a,b;1}\right) ,\left( {b,c;{100}}\right)$ constitute a causal walk by which information from node $a$ starting at time ${t}_{1} = 1$ can reach node $c$ at time ${t}_{2} = {100}$ via node $b$ for $\delta = {150}$ , while they do not constitute a causal walk for $\delta = 5$ . This time-limited notion of causal or time-respecting walks is characteristic of many real networked systems in which processes or agents have a finite time scale or "memory", which rules out infinitely long gaps between consecutive causal interactions [4, 5]. Analogous to the definition in a static network, we finally define a causal path ${v}_{0},{v}_{1},\ldots ,{v}_{l}$ of length $l$ from node ${v}_{0}$ to node ${v}_{l}$ as a causal walk with ${v}_{i} \neq {v}_{j}$ for $i \neq j$ .
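+
+As an illustration, the following Python sketch (ours, not the authors' implementation) checks both conditions of Definition 1 for a candidate walk given as a list of time-stamped edges, and enumerates all causal walks of length two for a maximum time difference $\delta$. The function names and the tuple-based edge format are assumptions made for this example.
+
+```python
+# Sketch: check Definition 1 and extract causal walks of length two.
+from collections import defaultdict
+
+def is_causal_walk(tstamped_edges, delta):
+    """tstamped_edges: [(v_{i-1}, v_i, t_i), ...] in walk order."""
+    for (u1, v1, _), (u2, v2, _) in zip(tstamped_edges, tstamped_edges[1:]):
+        if v1 != u2:               # condition (i): edges must chain into a walk
+            return False
+    times = [t for (_, _, t) in tstamped_edges]
+    # condition (ii): 0 < t_j - t_i <= delta for all i < j
+    return all(0 < times[j] - times[i] <= delta
+               for i in range(len(times)) for j in range(i + 1, len(times)))
+
+def causal_walks_length_two(tedges, delta):
+    """All node triples (a, b, c) realised by (a,b;t1), (b,c;t2) with 0 < t2-t1 <= delta."""
+    out_by_source = defaultdict(list)
+    for u, v, t in tedges:
+        out_by_source[u].append((v, t))
+    return [(a, b, c) for a, b, t1 in tedges
+            for c, t2 in out_by_source[b] if 0 < t2 - t1 <= delta]
+
+# the examples from the text
+print(is_causal_walk([("a", "b", 1), ("b", "c", 2)], delta=5))      # True
+print(is_causal_walk([("a", "b", 2), ("b", "c", 1)], delta=5))      # False (wrong order)
+print(is_causal_walk([("a", "b", 1), ("b", "c", 100)], delta=150))  # True
+print(is_causal_walk([("a", "b", 1), ("b", "c", 100)], delta=5))    # False (gap too large)
+```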
+
+Non-Markovian characteristics of dynamic graphs The above definition of causal walks and paths in dynamic graphs has important consequences for our understanding of the topology of dynamic graphs, i.e. which nodes can directly and indirectly influence each other via walks or paths. Moreover, it has important consequences for graph learning and network analysis tasks such as node ranking, cluster detection, or embedding [9, 10, 12, 13, 18]. This additional complexity of dynamic graphs is due to the fact that the topology of a static graph $G = \left( {V,E}\right)$ can be fully understood based on the transitive hull of edges, i.e. the presence of two edges $\left( {u,v}\right) \in E$ and $\left( {v,w}\right) \in E$ implies that nodes $u$ and $w$ can indirectly influence each other via a walk or path, which we denote as $u{ \rightarrow }^{ * }w$ . This not only enables us to use standard algorithms, e.g. to calculate (shortest) paths, it also implies that we can use matrix powers, eigenvalues and eigenvectors to analyze topological properties of a graph. In contrast, in dynamic graphs the chronological order of time-stamped edges can break transitivity, i.e. $\left( {u,v;t}\right) \in {E}^{\mathcal{T}}$ and $\left( {v,w;{t}^{\prime }}\right) \in {E}^{\mathcal{T}}$ does not necessarily imply $u{ \rightarrow }^{ * }w$ , which invalidates standard graph-analytic approaches [13].
+
+To study how correlations in the temporal ordering of time-stamped edges influence the causal topology of a dynamic graph, we can take a statistical modelling perspective. We can, for instance, consider causal walks as sequences of random variables that can be modelled via a Markov chain of order $k$ over a discrete state space $V$ [12]. In other words, we model the sequence of nodes ${v}_{0},\ldots ,{v}_{l}$ on causal walks as $P\left( {{v}_{i} \mid {v}_{i - k},\ldots ,{v}_{i - 1}}\right)$ where $k - 1$ is the length of the "memory" of the Markov chain. For $k = 1$ we have a memoryless, first-order Markov chain model $P\left( {{v}_{i} \mid {v}_{i - 1}}\right)$ , where the next node on the walk exclusively depends on the current node. From the perspective of dynamic graphs with time-stamped link sequences, this corresponds to a case where the causal walks of the dynamic graph are exclusively determined by the topology (and possibly frequency) of edges, i.e. there are no correlations in the temporal ordering of time-stamped edges and the causal topology of the dynamic graph matches the topology of the corresponding time-aggregated graph. If we need a Markov order $k > 1$ , the sequence of nodes traversed by causal walks exhibits memory, i.e. the next node on a walk not only depends on the current one but also on the history of past interactions. The presence of such higher-order correlations in dynamic graphs is associated with more complex causal topologies that (i) cannot be reduced to the topology of the associated time-aggregated network, and (ii) have interesting implications for spreading and diffusion processes and spectral properties [9], node centralities [12], and community structures [10].
+
+Higher-order De Bruijn graph models of causal topologies The use of higher-order Markov chain models for causal paths leads to an interesting novel view on the relationship between graph models and time series data on dynamic graphs. In this view, the common (weighted) time-aggregated graph representation of time-stamped edges corresponds to a first-order graphical model, where edge weights capture the statistics of edges, i.e. causal paths of length one. A normalization of edge weights in this graph yields a first-order Markov model of causal walks in a dynamic graph. Similarly, a graphical representation of a higher-order Markov chain model of causal walks can be used to capture non-Markovian patterns in the temporal sequence of time-stamped edges. However, different from higher-order Markov chain models of general categorical sequences, a higher-order model of causal paths in dynamic graphs must account for the fact that the set of possible causal paths is constrained by the topology of the corresponding static graph (i.e. condition (i) in Definition 1). To account for this, we define a higher-order De Bruijn graph model of causal walks [11]:
+
+Definition 2 ( $k$ -th order De Bruijn graph model). For a dynamic graph ${G}^{\mathcal{T}} = \left( {V,{E}^{\mathcal{T}}}\right)$ and $k \in \mathbb{N}$ , a $k$ -th order De Bruijn graph model of causal paths in ${G}^{\mathcal{T}}$ is a graph ${G}^{\left( k\right) } = \left( {{V}^{\left( k\right) },{E}^{\left( k\right) }}\right)$ , with $u \mathrel{\text{ := }} \left( {{u}_{0},{u}_{1},\ldots ,{u}_{k - 1}}\right) \in {V}^{\left( k\right) }$ a causal walk of length $k - 1$ in ${G}^{\mathcal{T}}$ and $\left( {u,v}\right) \in {E}^{\left( k\right) }$ iff (i) $v = \left( {{v}_{1},\ldots ,{v}_{k}}\right)$ with ${u}_{i} = {v}_{i}$ for $i = 1,\ldots ,k - 1$ and (ii) $u \oplus v = \left( {{u}_{0},\ldots ,{u}_{k - 1},{v}_{k}}\right)$ a causal walk of length $k$ in ${G}^{\mathcal{T}}$ .
+
+We note that any two adjacent nodes $u,v \in {V}^{\left( k\right) }$ in a $k$ -th order De Bruijn graph ${G}^{\left( k\right) }$ represent two causal walks of length $k - 1$ that overlap in exactly $k - 1$ nodes, i.e. each edge $\left( {u,v}\right) \in {E}^{\left( k\right) }$ represents a causal walk of length $k$ . We can further use edge weights $w : {E}^{\left( k\right) } \rightarrow \mathbb{N}$ to capture the frequencies of causal paths of length $k$ . The (weighted) time-aggregated graph $G$ of a dynamic graph trivially corresponds to a first-order De Bruijn graph, where (i) nodes are causal walks of length zero and (ii) edges $E = {E}^{\left( 1\right) }$ capture causal walks of length one (i.e. edges) in ${G}^{\mathcal{T}}$ . To construct a second-order De Bruijn graph ${G}^{\left( 2\right) }$ we can perform a line graph transformation of a static graph $G = {G}^{\left( 1\right) }$ , where each edge $\left( \left( {{u}_{0},{u}_{1}}\right) ,\left( {{u}_{1},{u}_{2}}\right) \right) \in {E}^{\left( 2\right) }$ captures a causally ordered sequence of two edges $\left( {{u}_{0},{u}_{1};t}\right)$ and $\left( {{u}_{1},{u}_{2};{t}^{\prime }}\right)$ . A $k$ -th order De Bruijn graph can be constructed by a repeated line graph transformation of a static graph $G$ . Hence, De Bruijn graphs can be viewed as a generalization of common graph models to a higher-order, static graphical model of causal walks of length $k$ , where walks of length $l$ in ${G}^{\left( k\right) }$ model causal walks of length $k + l - 1$ in ${G}^{\mathcal{T}}$ [9, 13].
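+
+A weighted second-order De Bruijn graph can be built directly from observed causal walks of length two, as in the following sketch (ours, not the authors' code); the helper name and its input format are illustrative assumptions.
+
+```python
+# Sketch: weighted second-order De Bruijn graph from causal walks of length two.
+from collections import Counter
+
+def second_order_de_bruijn(causal_walks_len2):
+    """causal_walks_len2: iterable of node triples (a, b, c)."""
+    weights = Counter()
+    for a, b, c in causal_walks_len2:
+        # higher-order nodes are edges (walks of length one) that overlap in b
+        weights[((a, b), (b, c))] += 1
+    nodes = {u for u, _ in weights} | {v for _, v in weights}
+    return nodes, dict(weights)
+
+# the edge ((A,B),(B,C)) only exists if the causal walk A -> B -> C is observed
+nodes, w = second_order_de_bruijn([("A", "B", "C"), ("A", "B", "C"), ("D", "B", "C")])
+print(w[(("A", "B"), ("B", "C"))])   # 2
+```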
+
+De Bruijn graphs have interesting mathematical properties that connect them to trajectories of subshifts of finite type as well as to dynamical systems and ergodic theory [19]. For the purpose of our work, they provide the advantage that we can use $k$ -th order De Bruijn graphs to model the causal topology in dynamic graphs. We illustrate this in Figure 1, which shows two dynamic graphs with four nodes and 33 time-stamped links. These dynamic graphs only differ in terms of the temporal ordering of edges, i.e. they have the same (first-order) weighted time-aggregated graph representation (center). Moreover, this first-order representation wrongly suggests that node $A$ can influence node $C$ by a path via node $B$ . While this is true in the dynamic graph on the right (see red causal paths), no corresponding causal path from $A$ via $B$ to $C$ exists in the dynamic graph on the left. A second-order De Bruijn graph model (bottom left and right) captures the fact that the causal path from $A$ via $B$ to $C$ is absent in the left example. This shows that, different from commonly used static graph representations, the edges of a $k$ -th order De Bruijn graph with $k > 1$ are sensitive to the temporal ordering of time-stamped edges. Hence, static higher-order De Bruijn graphs can be used to model the causal topology in a dynamic graph. We can view a $k$ -th order De Bruijn graph in analogy to a $k$ -th order Markov model, where a directed link from node $\left( {{u}_{0},\ldots ,{u}_{k - 1}}\right)$ to node $\left( {{u}_{1},\ldots ,{u}_{k}}\right)$ captures a step from node ${u}_{k - 1}$ to node ${u}_{k}$ in the underlying graph, with a memory of the $k$ previously visited nodes ${u}_{0},\ldots ,{u}_{k - 1}$ . This approach has been used to analyze how the causal topology of dynamic graphs influences node rankings [10, 12], to model random walks and diffusion [9], for community detection [10, 18], and for time-aware static graph embedding [14, 20]. Moreover, several works have proposed heuristic, frequentist and Bayesian methods to infer the optimal order of higher-order graph models of causal paths given time series data on dynamic graphs [10, 12, 21, 22].
+
+Problem Statement and Research Gap The works above provide the background for the generalization of graph neural networks to higher-order De Bruijn graph models of causal walks in dynamic graphs, which we propose in the following section. Following the terminology in the network science community, higher-order De Bruijn graph models can be seen as one particular type of higher-order network models [13, 23, 24], which capture (causally-ordered) sequences of interactions between more than two nodes, rather than dyadic edges. They complement other types of popular higher-order network models (like, e.g. hypergraphs, simplicial complexes, or motif-based adjacency matrices)
+
+
+Figure 1: Simple example for two dynamic graphs with four nodes and 33 directed time-stamped edges (top left and right). The two graphs only differ in terms of the temporal ordering of edges. Frequency and topology of edges are identical, i.e. they have the same first-order time-aggregated weighted graph representation (center). Due to the arrow of time, causal walks and paths differ in the two dynamic graphs: Assuming $\delta = 1$ , in the left dynamic graph node $A$ cannot causally influence $C$ via $B$ , while such a causal path is possible in the right graph. A second-order De Bruijn graph representation of causal walks in the two graphs (bottom left and right) captures this difference in the causal topology. Building on such causality-aware graphical models, in our work we define a graph neural network architecture that is able to learn patterns in the causal topology of dynamic graphs.
+
+that consider (unordered) non-dyadic interactions in static networks, and which have been used to generalize graph neural networks to non-dyadic interactions [25, 26].
+
+To the best of our knowledge, De Bruijn graph models have not been combined with recent advances in graph neural networks. Closing this gap, we propose a causality-aware graph convolutional network architecture that uses an augmented message passing scheme [27] in higher-order De Bruijn graphs to capture patterns in the causal topology of dynamic graphs.
+
+§ 3 DE BRUIJN GRAPH NEURAL NETWORK ARCHITECTURE
+
+We now introduce the De Bruijn Graph Neural Network (DBGNN) architecture with an augmented message passing [27] scheme whose dynamics matches the non-Markovian characteristics of dynamic graphs, which is the key contribution of our work. While we build on the message passing proposed for Graph Convolutional Networks (GCN) [28], it is easy to generalize our architecture to other message passing schemes. Our approach is based on the following three steps, which yield an easy-to-implement and scalable class of graph neural networks for time series and sequential data on graphs: We first use time series data on dynamic graphs to calculate statistics of causal walks of different lengths $k$ . We use these statistics to select a higher-order De Bruijn graph model for the causal topology of a dynamic graph. This step is parameter-free, i.e. we can use statistical learning techniques to infer an optimal graph model for the causal topology directly from time series data, without need for hyperparameter tuning or cross-validation. Second, we define a graph convolutional network that builds on neural message passing in the higher-order De Bruijn graphs inferred in step one. The hidden layers of the resulting graph convolutional network yield meaningful latent representations of patterns in the causal topology of a dynamic graph. Third, since the nodes in a $k$ -th order De Bruijn graph model correspond to walks (i.e. sequences) of nodes of length $k - 1$ , we implement an additional bipartite layer that maps the latent space representations of sequences to nodes in the original graph. In the following, we provide a detailed description of the three steps outlined above:
+
+Inference of Optimal Higher-Order De Bruijn Graph Model The first step in the DBGNN architecture is the inference of the higher-order De Bruijn graph model for the causal topology in a given dynamic graph data set. For this, we use Definition 1 to calculate the statistics of causal walks of different lengths $k$ for a given maximum time difference $\delta$ . We note that this can be achieved using efficient window-based algorithms [29, 30]. The statistics of causal walks in the dynamic graph allow us to apply the model selection technique proposed in [12], which yields the optimal order of a De Bruijn graph model given the statistics of causal walks (or paths). The resulting (static) higher-order De Bruijn graph model is the basis for our extension of the message passing scheme to dynamic graphs with non-Markovian characteristics.
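+
+To give an intuition for this step, the sketch below (ours) performs a simplified likelihood-ratio comparison between a first-order and a second-order model of observed causal walks of length two. It is not the multi-order model selection procedure of [12]; in particular, the degrees of freedom are a naive count rather than the topology-aware correction used there, and scipy is an assumed dependency of this illustration only.
+
+```python
+# Simplified illustration of order selection via a likelihood-ratio test
+# (NOT the exact procedure of [12]; naive degrees of freedom).
+from collections import Counter
+from math import log
+from scipy.stats import chi2
+
+def second_order_p_value(walks2):
+    """walks2: list of observed causal walks (a, b, c) of length two."""
+    trip = Counter(walks2)                            # counts of (a, b, c)
+    pair = Counter((a, b) for a, b, _ in walks2)      # counts of first steps
+    step = Counter((b, c) for _, b, c in walks2)      # counts of second steps
+    mid = Counter(b for _, b, _ in walks2)            # counts of middle nodes
+    # log-likelihood of second steps under a first-order model P(c | b)
+    ll1 = sum(n * log(step[b, c] / mid[b]) for (a, b, c), n in trip.items())
+    # log-likelihood under a second-order model P(c | a, b)
+    ll2 = sum(n * log(n / pair[a, b]) for (a, b, c), n in trip.items())
+    dof = max(len(trip) - len(step), 1)               # naive, for illustration only
+    return chi2.sf(2 * (ll2 - ll1), dof)              # small p: second order justified
+```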
+
+Message passing in higher-order De Bruijn graphs Standard message passing algorithms in graph neural networks use the topology of a graph to propagate (and smooth) features across nodes, thus generating hidden features that incorporate patterns in the topology of a graph. To additionally incorporate patterns in the causal topology of a dynamic graph we perform message passing in multiple layers of higher-order De Bruijn graphs. Assuming a $k$ -th order De Bruijn graph model ${G}^{\left( k\right) } = \left( {{V}^{\left( k\right) },{E}^{\left( k\right) }}\right)$ as defined in Definition 2, the input to the first layer $l = 0$ is a set of $k$ -th order node features ${\mathbf{h}}^{\mathbf{k},\mathbf{0}} = \left\{ {{\overrightarrow{h}}_{1}^{k,0},{\overrightarrow{h}}_{2}^{k,0},\ldots ,{\overrightarrow{h}}_{N}^{k,0}}\right\}$ , for ${\overrightarrow{h}}_{i}^{k,0} \in {\mathbb{R}}^{{H}^{0}}$ , where $N = \left| {V}^{\left( k\right) }\right|$ and ${H}^{0}$ is the dimensionality of initial node features. The De Bruijn graph message passing layer uses the causal topology to learn a new set of hidden representations for higher-order nodes ${\mathbf{h}}^{\mathbf{k},\mathbf{1}} = \left\{ {{\overrightarrow{h}}_{1}^{k,1},{\overrightarrow{h}}_{2}^{k,1},\ldots ,{\overrightarrow{h}}_{N}^{k,1}}\right\}$ , with ${\overrightarrow{h}}_{i}^{k,1} \in {\mathbb{R}}^{{H}^{1}}$ for each $k$ -th order node $i$ (corresponding to a causal walk of length $k - 1$ ). For layer $l$ , we define the update rule of the message passing as:
+
+$$
+{\overrightarrow{h}}_{v}^{k,l} = \sigma \left( {\mathbf{W}}^{k,l}\sum_{\{ u \in {V}^{(k)} : (u,v) \in {E}^{(k)}\} \cup \{ v\} }\frac{w(u,v) \cdot {\overrightarrow{h}}_{u}^{k,l - 1}}{\sqrt{S(v) \cdot S(u)}}\right), \tag{1}
+$$
+
+where ${\overrightarrow{h}}_{u}^{k,l - 1}$ is the previous hidden representation of node $u \in {V}^{\left( k\right) }$ , $w\left( {u,v}\right)$ is the weight of edge $\left( {u,v}\right) \in {E}^{\left( k\right) }$ (capturing the frequency of the corresponding causal walk as explained in section 2), ${\mathbf{W}}^{k,l} \in {\mathbb{R}}^{{H}^{l} \times {H}^{l - 1}}$ are trainable weight matrices, $S\left( v\right) \mathrel{\text{ := }} \mathop{\sum }\limits_{{u \in {V}^{\left( k\right) }}}w\left( {u,v}\right)$ is the sum of weights of incoming edges of node $v$ , and $\sigma$ is a non-linear activation function. Since the message passing is performed on a higher-order De Bruijn graph, we obtain a non-Markovian (or rather higher-order Markovian) message passing dynamics, i.e. we perform a Laplacian smoothing that follows the non-Markovian patterns in the causal walks in the underlying dynamic graph. Different from standard, static graph neural networks that ignore the temporal dimension of dynamic graphs, this enables our architecture to incorporate temporal patterns that shape the causal topology, i.e. which nodes in a dynamic graph can influence each other directly and indirectly based on the temporal ordering of time-stamped edges (and the arrow of time).
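+
+The update rule in Eq. (1) can be sketched with dense tensors as follows (our own PyTorch sketch, not the released pyTorch Geometric implementation); the class name and the toy weighted adjacency matrix are illustrative assumptions.
+
+```python
+# Dense PyTorch sketch of the higher-order message passing rule in Eq. (1).
+# w_adj[u, v] holds the weight w(u, v) of the k-th order De Bruijn graph.
+import torch
+import torch.nn as nn
+
+class DeBruijnMessagePassing(nn.Module):
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        self.lin = nn.Linear(in_dim, out_dim, bias=False)   # trainable W^{k,l}
+        self.act = nn.ELU()
+
+    def forward(self, h, w_adj):
+        # add self-loops: the sum in Eq. (1) runs over in-neighbours and v itself
+        a = w_adj + torch.eye(w_adj.size(0), device=w_adj.device)
+        s = a.sum(dim=0)                                        # S(v): weighted in-degree
+        norm = a / torch.sqrt(s.unsqueeze(1) * s.unsqueeze(0))  # w(u,v)/sqrt(S(u) S(v))
+        agg = norm.t() @ h                                      # aggregate h_u over in-neighbours of v
+        return self.act(self.lin(agg))
+
+# usage on a toy second-order De Bruijn graph with 4 higher-order nodes
+w_adj = torch.tensor([[0., 2., 0., 0.],
+                      [0., 0., 1., 0.],
+                      [0., 0., 0., 3.],
+                      [1., 0., 0., 0.]])
+layer = DeBruijnMessagePassing(in_dim=4, out_dim=16)
+h = torch.eye(4)                                             # one-hot higher-order node features
+print(layer(h, w_adj).shape)                                 # torch.Size([4, 16])
+```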
+
+First-order message passing and bipartite projection layer While the (static) topology of edges influences the (possible) causal walks and thus the edges in the $k$ -th order De Bruijn graph, it is important to note that, since it operates on nodes ${V}^{\left( k\right) }$ of the higher-order graph, the message passing outlined above does not allow us to incorporate information on the first-order topology. To address this issue, we additionally include message passing in the (static) time-aggregated weighted graph $G$ , which can be done in parallel to the message passing in the higher-order De Bruijn graph. The $g$ layers of this first-order message passing (whose formal definition we omit as it simply uses the GCN update rule [28]) generate hidden representations ${\overrightarrow{h}}_{v}^{1,g}$ of nodes $v \in V$ . This approach enables us to incorporate optional node features ${\overrightarrow{h}}_{v}^{1,0}$ (or alternatively use a one-hot-encoding of nodes).
+
+Since the message passing in a higher-order De Bruijn graph generates hidden features for higher-order nodes ${V}^{\left( k\right) }$ (i.e. sequences of $k$ nodes) rather than nodes $V$ in the original dynamic graph, we finally define a bipartite graph ${G}^{b} = \left( {{V}^{\left( k\right) } \cup V,{E}^{b} \subseteq {V}^{\left( k\right) } \times V}\right)$ that maps node features of higher-order nodes to the first-order node space. For a given node $v \in V$ , this bipartite layer adds the hidden representations ${\overrightarrow{h}}_{u}^{k,l}$ of each higher-order node $u = \left( {{u}_{0},\ldots ,{u}_{k - 1}}\right) \in {V}^{\left( k\right) }$ with ${u}_{k - 1} = v$ to the representation ${h}_{v}^{1,g} \in {\mathbb{R}}^{{F}^{g}}$ generated by the last layer of the first-order message passing. Notice that the dimensions of representations in the last layers of the $k$ -th and first-order message passing should satisfy ${F}^{g} = {H}^{l}$ to enable the summing of the representations. We obtain representations $\left\{ {{\overrightarrow{h}}_{u}^{k,l} + {\overrightarrow{h}}_{v}^{1,g} : \text{ for }u \in {V}^{\left( k\right) }\text{ with }\left( {u,v}\right) \in {E}^{b}}\right\}$ that are the higher-order node representations augmented by the corresponding first-order representations. We then use a function $\mathcal{F}$ to aggregate the augmented higher-order representations at the level of first-order nodes. In our experiments, we learn first-order node representations ${h}^{1,g}$ using GCN message passing with $g$ layers, which allows us to integrate information on the static and the causal topology of a dynamic graph. Formally, we define the bipartite layer as
+
+$$
+{\overrightarrow{h}}_{v}^{b} = \sigma \left( {{\mathbf{W}}^{b}\mathcal{F}\left( \left\{ {{\overrightarrow{h}}_{u}^{k,l} + {\overrightarrow{h}}_{v}^{1,g} : \text{ for }u \in {V}^{\left( k\right) }\text{ with }\left( {u,v}\right) \in {E}^{b}}\right\} \right) }\right) , \tag{2}
+$$
+
+where ${\overrightarrow{h}}_{v}^{b}$ is the output of the bipartite layer for node $v \in V$ , and ${\mathbf{W}}^{b} \in {\mathbb{R}}^{{F}^{g} \times {H}^{l}}$ is a learnable weight matrix. The function $\mathcal{F}$ can be SUM, MEAN, MAX, or MIN.
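+
+A corresponding sketch (ours, not the released code) of the bipartite layer in Eq. (2) with $\mathcal{F} =$ SUM follows; the index vector last_node, which maps each higher-order node to its last first-order node, is an assumption of this illustration.
+
+```python
+# Sketch of the bipartite layer of Eq. (2) with F = SUM: each higher-order node
+# (u_0, ..., u_{k-1}) contributes its representation, augmented by the
+# first-order representation of its last node v = u_{k-1}, to that node v.
+import torch
+import torch.nn as nn
+
+class BipartiteProjection(nn.Module):
+    def __init__(self, dim_in, dim_out):
+        super().__init__()
+        self.lin = nn.Linear(dim_in, dim_out, bias=False)    # W^b
+        self.act = nn.ELU()
+
+    def forward(self, h_ho, h_fo, last_node):
+        # h_ho: (N_ho, H) higher-order representations, h_fo: (N, H) first-order ones,
+        # last_node[i] = index of the first-order node u_{k-1} of higher-order node i
+        augmented = h_ho + h_fo[last_node]                   # h_u^{k,l} + h_v^{1,g}
+        out = torch.zeros_like(h_fo)
+        out.index_add_(0, last_node, augmented)              # SUM aggregation per node v
+        return self.act(self.lin(out))
+
+# usage: 3 higher-order nodes projected onto 2 first-order nodes
+h_ho = torch.randn(3, 8)
+h_fo = torch.randn(2, 8)
+last_node = torch.tensor([0, 1, 1])
+print(BipartiteProjection(8, 8)(h_ho, h_fo, last_node).shape)   # torch.Size([2, 8])
+```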
+
+Figure 2 gives an overview of the proposed neural network architecture for the dynamic graph (and associated second-order De Bruijn graph model) shown in Figure 1 (left). The higher-order message
+
+
+Figure 2: Illustration of DBGNN architecture with two message passing layers in first- (left, gray) and second-order De Bruijn graph (right, orange) corresponding to the dynamic graph in Figure 1 (left). Red edges indicate the bipartite mapping ${G}^{b}$ of higher-order node representations to first-order representations. An additional linear layer (not shown) is used for node classification.
+
+passing layers on the right use the topology of the second-order De Bruijn graph in Figure 1 (left), while the first-order message passing layers (left) use the topology of the first-order graph. Note that the first-order and higher-order message passing can be performed in parallel, and that the numbers of message passing layers do not necessarily need to be the same. Red edges indicate the propagation of higher-order node representations to first-order nodes performed in the final bipartite layer. Due to space constraints, in Figure 2 we omit the final linear layer used for classification.
+
+§ 4 EXPERIMENTAL EVALUATION
+
+In the following, we experimentally evaluate our proposed causality-aware graph neural network architecture both in synthetic and empirical time series data on dynamic graphs. With our evaluation, we want to answer the following questions:
+
+Q1 How does the performance of De Bruijn Graph Neural Networks compare to temporal and non-temporal graph learning techniques?
+
+Q2 Can we use De Bruijn Graph Neural Networks to learn interpretable static latent space representations of nodes in dynamic graphs?
+
+To address those questions, we use six time series data sets on dynamic graphs that provide meta-information on node classes. The overall statistics of the data sets can be found in Table 1. temp-clusters is a synthetically generated dynamic graph with three clusters in the causal topology, but no pattern in the static topology. To generate this data set, we first construct a random graph and generate random sequences of time-stamped edges. We then selectively swap the time stamps of edges such that causal walks of length two within three clusters of nodes are overrepresented, while causal walks between clusters are underrepresented. We include a more detailed description in the appendix (code and data will be provided in a companion repository). Apart from this synthetic data set, we use five empirical time series data sets: student-sms captures time-stamped SMS exchanged over four weeks between freshmen at the Technical University of Denmark [31]. We use the gender of participants as ground truth classes and use a maximum time difference of $\delta = 40$ . Since the time granularity of this data set is five minutes, this corresponds to a maximum time difference of 200 minutes. high-school-2011 and high-school-2012 capture time-stamped proximities between high-school students in two consecutive years [32] (4 days in 2011, 7 days in 2012). We use the gender of students as ground truth classes. workplace captures time-stamped proximity interactions between employees recorded in an office building for multiple days in different years [33]. We use the department of employees as ground truth classes. hospital captures time-stamped proximities between patients and healthcare workers in a hospital ward [34]. We use individuals' roles (patient, nurse, administrative, doctor) as ground truth node classes. All the proximity datasets were collected with a resolution of 20 seconds. To mitigate the computational complexity of the causal walk extraction in the (undirected) proximity data sets, we coarsen the resolution by aggregating interactions to a resolution of fifteen minutes and use $\delta = 4$ , which corresponds to a maximum time difference of one hour. Based on the resulting statistics of causal walks, we use the method (and code) provided in [12] to select a higher-order De Bruijn graph model. In Table 1 we report the $p$ -value of the resulting likelihood ratio test, which is used to test the hypothesis that a first-order graph model is sufficient to explain the observed causal walk statistics, against the alternative hypothesis that a second-order De Bruijn graph model is needed. Since all $p$ -values are numerically zero, we find strong evidence for patterns that justify a second-order De Bruijn graph model for all data sets.
+
+| Data set | Ref | $|V|$ | $|E|$ | $|E^{\mathcal{T}}|$ | $p(k=2)$ | $|V^{(2)}|$ | $|E^{(2)}|$ | $\delta$ | Classes |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| temp-clusters | [blinded] | 30 | 560 | 60000 | 0.0 | 560 | 6,789 | 1 | 3 |
+| high-school-2011 | [32] | 126 | 3042 | 28561 | 0.0 | 3042 | 17141 | 4 | 2 |
+| high-school-2012 | [32] | 180 | 3965 | 45047 | 0.0 | 3965 | 20614 | 4 | 2 |
+| hospital | [34] | 75 | 2028 | 32424 | 0.0 | 2028 | 15500 | 4 | 4 |
+| student-sms | [31] | 429 | 733 | 46138 | 0.0 | 733 | 846 | 40 | 2 |
+| workplace | [33] | 92 | 1431 | 9827 | 0.0 | 1431 | 7121 | 4 | 5 |
+
+Table 1: Overview of time series data and ground truth node classes used in the experiments.
+
+Using a second-order De Bruijn graph, we compare the node classification performance of the DBGNN architecture against the following five baselines. The first three are standard (static) graph learning techniques, namely Graph Convolutional Networks (GCN) [28], DeepWalk [35], and node2vec [36]. We further use two recently proposed temporal graph embedding techniques: Embedding Variable Orders (EVO) [14] is a node representation learning framework that captures non-Markovian characteristics in dynamic graphs. Similar to our approach, EVO uses a higher-order network to generate time-aware node representations that can be used for downstream node classification. HONEM [20] is a higher-order network embedding approach that captures non-Markovian dependencies in time series data on graphs. This framework uses truncated SVD on a higher-order neighborhood matrix that considers the temporal order of interactions.
+
+Addressing Q1, the results of our experiments on node classification are shown in Table 2. Since the classes of the empirical data sets are imbalanced, we use balanced accuracy and additionally report macro-averaged precision, recall, and F1-score for a 70-30 training-test split. We report the average performance across multiple splits. For DBGNN, GCN, DeepWalk, node2vec, and HONEM we performed 50 runs. Due to its larger computational complexity (and time constraints) we could only perform 10 runs on EVO. The standard deviations are included in the appendix. We trained node2vec, EVO, and DeepWalk with 80 walks of length 40 per node and a window size of 10. We obtained the embeddings using the word2vec implementation in [37]. For EVO, we use the average as an aggregator for the higher-order representations. To ensure the comparability of the results from GCN and DBGNN, we train both with the same number of convolutional layers with a learning rate of 0.001 for 5000 epochs, ELU [38] as activation function, and Adam [39] optimiser. For DBGNN, we use SUM as aggregation function $\mathcal{F}$ . Since the data sets had no node features, we used one-hot encoding of nodes as a feature matrix (and a one-hot encoding of higher-order nodes in the initial layer of the DBGNN). For all methods, we fix the dimensionality of the learned representations to $d = 16$ , which is justified by the size of the graphs. We manually tuned the number of hidden dimensions of the first hidden layers for GCN and DBGNN, as well as the $p$ and $q$ parameters of EVO and node2vec. We report the results for the best combination of hyperparameters.
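+
+For concreteness, a minimal training loop consistent with this setup might look as follows (our sketch, not the experiment code); the model is assumed to map a feature matrix to class logits, e.g. by stacking the message passing and bipartite sketches shown earlier with a final linear layer.
+
+```python
+# Illustrative training loop matching the setup above: Adam, lr=0.001,
+# 5000 epochs, cross-entropy on the training split of node labels.
+import torch
+import torch.nn as nn
+
+def train_node_classifier(model, features, labels, train_mask, epochs=5000, lr=0.001):
+    opt = torch.optim.Adam(model.parameters(), lr=lr)
+    loss_fn = nn.CrossEntropyLoss()
+    for _ in range(epochs):
+        model.train()
+        opt.zero_grad()
+        logits = model(features)                   # (num_nodes, num_classes)
+        loss = loss_fn(logits[train_mask], labels[train_mask])
+        loss.backward()
+        opt.step()
+    return model
+```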
+
+As expected, the results in Table 2 for the synthetic temporal clusters data set show that the three time-aware methods (EVO, HONEM, and DBGNN) perform considerably better than their static counterparts, which only "see" a random graph topology that does not allow node classes to be assigned meaningfully. Both EVO and our proposed DBGNN architecture are able to perfectly classify nodes in this data set. Interestingly, despite their good performance in the synthetic data set, the three time-aware methods show much higher variability in the empirical data sets. We find that DBGNN shows superior performance in terms of balanced accuracy, F1-macro, and recall-macro for all of the five empirical data sets, with a relative performance increase compared to the second-best method ranging from 1.55% to 28.16%. For precision-macro, DBGNN performs best in four of the five data sets. We attribute these results to the ability of our architecture to consider both patterns in the (static) graph topology and the causal topology, as well as to the supervised learning enabled by the GCN-based message passing.
+
+To address Q2, we study visualizations of the hidden representations of higher- and first-order nodes generated by the DBGNN architecture for the synthetic temporal cluster data set, which exhibits three clear clusters in the causal topology. We use the hidden representations $\overrightarrow{{h}_{v}^{b}}$ generated by the bipartite layer of our DBGNN architecture, as defined in Section 3. We compare this to the representation generated in the last message passing layer of a GCN. Figure 3 in the appendix confirms that the DBGNN architecture learns meaningful latent space representations of nodes that incorporate temporal patterns.
+
+| dataset | method | Balanced Accuracy | F1-score-macro | Precision-macro | Recall-macro |
+| --- | --- | --- | --- | --- | --- |
+| temp-clusters | DeepWalk | 32.47 | 30.39 | 32.25 | 32.47 |
+| | Node2Vec p=1 q=4 | 35.48 | 33.02 | 34.92 | 35.48 |
+| | GCN (8,32) | 33.52 | 12.5 | 8.61 | 33.52 |
+| | EVO p=1 q=1 | 100.0 | 100.0 | 100.0 | 100.0 |
+| | HONEM | 54.94 | 53.5 | 58.16 | 54.94 |
+| | DBGNN (16,16) | 100.0 | 100.0 | 100.0 | 100.0 |
+| | gain | 0% | 0% | 0% | 0% |
+| high-school-2011 | DeepWalk | 55.25 | 54.02 | 60.45 | 55.25 |
+| | Node2Vec p=1 q=4 | 56.89 | 56.29 | 60.05 | 56.89 |
+| | GCN (32,4) | 50.06 | 40.27 | 33.99 | 50.06 |
+| | EVO p=1 q=4 | 57.21 | 56.28 | 62.09 | 57.21 |
+| | HONEM | 54.24 | 53.08 | 56.44 | 54.24 |
+| | DBGNN (32,8) | 64.4 | 63.7 | 65.14 | 64.4 |
+| | gain | 12.57% | 13.16% | 4.91% | 12.57% |
+| high-school-2012 | DeepWalk | 59.46 | 59.6 | 71.71 | 59.46 |
+| | Node2Vec p=1 q=4 | 60.75 | 61.23 | 72.44 | 60.75 |
+| | GCN (8,32) | 58.03 | 56.39 | 59.16 | 58.03 |
+| | EVO p=4 q=1 | 57.98 | 57.5 | 69.42 | 57.98 |
+| | HONEM | 53.16 | 51.7 | 56.59 | 53.16 |
+| | DBGNN (4,8) | 65.8 | 65.89 | 67.27 | 65.8 |
+| | gain | 8.31% | 7.61% | -7.14% | 8.31% |
+| hospital | DeepWalk | 47.18 | 44.18 | 43.91 | 47.18 |
+| | Node2Vec p=1 q=4 | 50.6 | 47.14 | 45.81 | 50.6 |
+| | GCN (32,32) | 49.48 | 44.62 | 43.55 | 49.48 |
+| | EVO p=1 q=4 | 36.34 | 36.44 | 42.1 | 36.34 |
+| | HONEM | 46.17 | 43.13 | 44.45 | 46.17 |
+| | DBGNN (32,16) | 59.04 | 55.26 | 58.71 | 57.71 |
+| | gain | 16.68% | 17.23% | 28.16% | 14.05% |
+| student-sms | DeepWalk | 53.22 | 50.57 | 60.57 | 53.22 |
+| | Node2Vec p=1 q=4 | 53.22 | 50.97 | 58.56 | 53.22 |
+| | GCN (4,32) | 57.33 | 57.25 | 57.72 | 57.33 |
+| | EVO p=4 q=1 | 52.93 | 50.66 | 57.14 | 52.93 |
+| | HONEM | 50.43 | 44.44 | 52.91 | 50.43 |
+| | DBGNN (4,4) | 60.6 | 60.89 | 62.55 | 60.6 |
+| | gain | 5.7% | 6.36% | 3.27% | 5.7% |
+| workplace | DeepWalk | 77.81 | 76.74 | 76.06 | 77.81 |
+| | Node2Vec p=1 q=4 | 78.0 | 77.01 | 76.38 | 78.0 |
+| | GCN (32,16) | 81.86 | 78.72 | 78.58 | 79.93 |
+| | EVO p=1 q=4 | 77.0 | 75.68 | 75.03 | 77.0 |
+| | HONEM | 73.26 | 72.82 | 73.73 | 73.26 |
+| | DBGNN (32,8) | 83.13 | 81.06 | 81.52 | 81.75 |
+| | gain | 1.55% | 2.97% | 3.74% | 2.28% |
+
+Table 2: Results of node classification in six dynamic graphs for static graph learning techniques (DeepWalk, node2vec, GCN) and time-aware methods (HONEM, EVO) as well as the DBGNN architecture proposed in this work.
+
+§ 5 CONCLUSION
+
+In summary, we propose an approach to apply graph neural networks to high-resolution time series data that captures the temporal ordering of time-stamped edges in dynamic graphs. Our method is based on a novel combination of (i) a statistical approach to infer an optimal static higher-order De Bruijn graph model for the causal topology that is due to the temporal ordering of edges, (ii) gradient-based learning in a neural network architecture that performs neural message passing in the inferred higher-order De Bruijn graph, and (iii) an additional bipartite mapping layer that maps the learnt hidden representation of higher-order nodes to the original node space. Thanks to this approach, our architecture is able to generalize neural message passing to a static higher-order graph model that captures the causal topology of a dynamic graph, which can considerably deviate from what we would expect based on the mere (static) topology of edges. The results of our experiments demonstrate that the resulting architecture can considerably improve the performance of node classification in time series data, despite using message passing in a relatively simple static (augmented) graph. Bridging recent research on higher-order graph models in network science and deep learning in graphs [13, 15, 23, 24], our work contributes to the ongoing discussion about the need for augmented message passing schemes for data on graphs with complex characteristics [27].
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e159c28bfea870c78e23fbd33c12f67e9ad0d6a
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,449 @@
+# Taxonomy of Benchmarks in Graph Representation Learning
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in the selection and development of adequate graph benchmarks and enable a better informed evaluation of future GNN methods. Finally, our approach and implementation in the GTaxoGym package ${}^{1}$ are extendable to multiple graph prediction task types and future datasets.
+
+## 1 Introduction
+
+Machine learning for graph representation learning (GRL) has seen rapid development in recent years [27]. Originally inspired by the success of convolutional neural networks in regular Euclidean domains, thanks to their ability to leverage data-intrinsic geometries, classical graph neural network (GNN) models [15, 35, 56] extend those principles to the irregular graph domain. Further advances in the field have led to a wide selection of complex and powerful GNN architectures. Some models are provably more expressive than others [63, 43], can leverage multi-resolution views of graphs [41], or can account for implicit symmetries in graph data [9]. Comprehensive surveys of graph neural networks can be found in Bronstein et al. [8], Wu et al. [61], and Zhou et al. [67].
+
+Most graph-structured data encode information in graph structures and node features. The structure of each graph represents relationships (i.e., edges) between different nodes, while the node features represent quantities of interest at each individual node. For example, in citation networks, nodes represent papers and edges represent citations between the papers. On such networks, node features often capture the presence or absence of certain keywords in each paper, encoded in binary feature vectors. In graphs modeling social networks, each node represents a user, and the corresponding node features often include user statistics like gender, age, or binary encodings of personal interests.
+
+Intuitively, the power of GNNs lies in relating local node-feature information to global graph structure information, typically achieved by applying a cascade of feature aggregation and transformation steps. In aggregation steps, information is exchanged between neighboring nodes, while transformation steps apply a (multi-layer) perceptron to feature vectors of each node individually. Such architectures are commonly referred to as Message Passing Neural Networks (MPNN) [24].
+
+Historically, GNN methods have been evaluated on a small collection of datasets [44], many of which originated from the development of graph kernels. The limited quantity, size and variety of these datasets have rendered them insufficient to serve as distinguishing benchmarks [17, 46]. Therefore, recent work has focused on compiling a set of large(r) benchmarking datasets across diverse graph domains [17, 31]. Despite these efforts and the introduction of new datasets, it is still not well understood what aspects of a dataset most influence the performance of GNNs. Which is more important, the geometric structure of the graph or node features? Are long-range interactions crucial, or are short-range interactions sufficient for most tasks? This lack of understanding of the dataset properties and of their similarities makes it difficult to select a benchmarking suite that would enable comprehensive evaluation of GNN models. Even when an array of seemingly different datasets is used, they may be probing similar aspects of graph representation learning.
+
+---
+
+${}^{1}$ https://github.com/G-Taxonomy-Workgroup/GTaxoGym
+
+---
+
+
+
+Figure 1: Overview of our pipeline to taxonomize graph learning datasets.
+
+Leveraging symmetries and other geometric priors in graph data is crucial for generalizable learning [9]. While invariance or equivariance to some transformations is inherent, invariance to others may only be empirically or partially apparent. Motivated by this observation, we propose to use the lens of empirical transformation sensitivity to gauge how task-related information is encoded in graph datasets and subsequently to taxonomize their use as benchmarks in graph representation learning. Our approach is illustrated in Figure 1. Our contributions in this study are as follows:
+
+1. We develop a graph dataset taxonomization framework that is extendable to both new datasets and evaluation of additional graph/task properties,
+
+2. Using this framework, we provide the first taxonomization of GNN (and GRL) benchmarking datasets, collected from TUDatasets [44], OGB [31] and other sources,
+
+3. Through the resulting taxonomy, we provide insights about existing datasets and guide better dataset selection in future benchmarking of GNN models.
+
+## 2 Methods
+
+As a proxy for invariance or sensitivity to graph perturbations, we study the changes in GNN performance on perturbed versions of each dataset. These perturbations are designed to eliminate or emphasize particular types of information embedded in the graphs. We define an empirical sensitivity profile of a dataset as a vector where each element is the performance of a GNN after a given perturbation, reported as a percentage of the network's performance on the original dataset. In particular, we use a set of 13 perturbations, visualized in Figure 2. Of these perturbations, 6 are designed to perturb node features, while keeping the graph structure intact, whereas the remaining 7 keep the node attributes the same, but manipulate the graph structure.
+
+For the purpose of these perturbations, we consider all graphs to be undirected and unweighted, and assume they all have node features, but not edge features. These assumptions hold for most datasets we use in this study. However, if necessary, we preprocess the data by symmetrizing each graph's adjacency matrix and dropping any edge attributes. Formally, let $G = \left( {V, E,\mathbf{X}}\right)$ be an undirected, unweighted, attributed graph with node set $V$ of cardinality $\left| V\right| = n$ , edge set $E \subset V \times V$ , and a matrix of $d$ -dimensional node features $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ . We let $\mathbf{M} \in {\mathbb{R}}^{n \times n}$ denote the adjacency matrix of each graph, where $\mathbf{M}\left( {u, v}\right) = 1$ if $\left( {u, v}\right) \in E$ and zero otherwise.
+
+Several of our perturbations are based on spectral graph theory, which represents graph signals in a spectral domain analogous to classical Fourier analysis. We define the graph Laplacian $\mathbf{L} \mathrel{\text{:=}} \mathbf{D} - \mathbf{M}$ and the symmetric normalized graph Laplacian $\mathbf{N} \mathrel{\text{:=}} {\mathbf{D}}^{-\frac{1}{2}}\mathbf{L}{\mathbf{D}}^{-\frac{1}{2}} = \mathbf{I} - {\mathbf{D}}^{-\frac{1}{2}}\mathbf{M}{\mathbf{D}}^{-\frac{1}{2}}$ , where $\mathbf{D}$ is the diagonal degree matrix. Both $\mathbf{L}$ and $\mathbf{N}$ are positive semi-definite and admit orthonormal eigendecompositions $\mathbf{L} = \mathbf{\Phi }\mathbf{\Lambda }{\mathbf{\Phi }}^{\top }$ and $\mathbf{N} = \widetilde{\mathbf{\Phi }}\widetilde{\mathbf{\Lambda }}{\widetilde{\mathbf{\Phi }}}^{\top }$ . By convention, we order the eigenvalues and corresponding eigenvectors ${\left\{ \left( {\lambda }_{i},{\phi }_{i}\right) \right\} }_{0 \leq i \leq n - 1}$ of $\mathbf{L}$ (and similarly for $\mathbf{N}$ ) in ascending order $0 = {\lambda }_{0} \leq {\lambda }_{1} \leq \cdots \leq {\lambda }_{n - 1}$ . The eigenvectors ${\left\{ {\phi }_{i}\right\} }_{0 \leq i \leq n - 1}$ constitute a basis of the space of graph signals and can be considered as generalized Fourier modes. The eigenvalues ${\left\{ {\lambda }_{i}\right\} }_{0 \leq i \leq n - 1}$ characterize the variation of these Fourier modes over the graph and can be interpreted as (squared) frequencies.
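+
+The following numpy sketch (ours, not the GTaxoGym implementation) computes the quantities defined above for an undirected, unweighted adjacency matrix; the function name and the toy path graph are illustrative.
+
+```python
+# Sketch: normalized graph Laplacian N and its eigendecomposition.
+import numpy as np
+
+def normalized_laplacian_eig(M):
+    deg = M.sum(axis=1)
+    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
+    N = np.eye(M.shape[0]) - d_inv_sqrt @ M @ d_inv_sqrt    # N = I - D^{-1/2} M D^{-1/2}
+    lam, phi = np.linalg.eigh(N)                             # ascending eigenvalues, orthonormal eigenvectors
+    return lam, phi
+
+# usage on a path graph with 3 nodes
+M = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
+lam, phi = normalized_laplacian_eig(M)
+print(np.round(lam, 3))   # [0. 1. 2.]
+```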
+
+
+
+Figure 2: Node feature and graph structure perturbations of the first graph in ENZYMES. The color coding of nodes illustrates their feature values, except (k-n) where the fragment assignment is shown.
+
+### 2.1 Node Feature Perturbations
+
+We first consider two perturbations that alter local node features, setting them either to a fixed constant (w.l.o.g., one) for all nodes, or to a one-hot encoding of the degree of the node. We refer to these perturbations as NoNodeFtrs (since constant node features carry no additional information) and NodeDeg, respectively. In addition, we consider a random node feature perturbation (RandFtrs) by sampling a one-dimensional feature for each node uniformly at random within $\left\lbrack {-1,1}\right\rbrack$ . Sensitivity to these perturbations, exhibited by a large decrease in predictive performance, may indicate that a dataset (or task) is dominated by highly informative node features.
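+
+For concreteness, a minimal NumPy sketch of these three feature perturbations follows. The function names (`no_node_ftrs`, `node_deg`, `rand_ftrs`) and the degree cap used in the one-hot encoding are our own illustrative choices, not taken from the paper's code.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+rng = np.random.default_rng(0)
+
+def no_node_ftrs(X: np.ndarray, adj: sp.spmatrix) -> np.ndarray:
+    # NoNodeFtrs: a constant feature (one) for every node
+    return np.ones((X.shape[0], 1))
+
+def node_deg(X: np.ndarray, adj: sp.spmatrix, max_deg: int = 64) -> np.ndarray:
+    # NodeDeg: one-hot encoding of each node's degree (capped at max_deg)
+    deg = np.asarray(adj.sum(axis=1)).ravel().astype(int)
+    onehot = np.zeros((len(deg), max_deg + 1))
+    onehot[np.arange(len(deg)), np.minimum(deg, max_deg)] = 1.0
+    return onehot
+
+def rand_ftrs(X: np.ndarray, adj: sp.spmatrix) -> np.ndarray:
+    # RandFtrs: one feature per node, drawn uniformly at random from [-1, 1]
+    return rng.uniform(-1.0, 1.0, size=(X.shape[0], 1))
+```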
+
+We also develop spectral node feature perturbations. As in Euclidean settings, the Fourier decomposition can be used to decompose graph signals into a set of canonical signals, called Fourier modes, which are organized according to increasing variation (or frequency). In Euclidean Fourier analysis, these modes are sinusoidal waves oscillating at different frequencies. A standard practice in audio signal processing is to remove noise from a signal by identifying and removing certain Fourier modes or frequency bands. We generalize this technique to graph datasets and systematically remove certain graph Fourier modes to probe the importance of the corresponding frequency bands.
+
+In this perturbation, we use the frequencies derived from the symmetric normalized graph Laplacian $\mathbf{N}$ and split them into three roughly equal-sized frequency bands (low, mid, high), i.e., bins of subsequent eigenvalues. To assess the importance of each of the frequency bands, we then apply hard band-pass filtering to the graph signals (node feature vectors), i.e., we project the signals on the span of the selected Fourier modes. More specifically, for each band, we let ${\mathbf{I}}_{\text{band }}$ be a diagonal matrix with diagonal elements equal to one if the corresponding eigenvalue is in the band, and zero otherwise. Then, the hard band-pass filtered signal is computed as
+
+$$
+{\mathbf{X}}_{\text{band }} = \widetilde{\mathbf{\Phi }}{\mathbf{I}}_{\text{band }}{\widetilde{\mathbf{\Phi }}}^{\top }\mathbf{X}. \tag{1}
+$$
+
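+A small dense-linear-algebra sketch of Eq. (1) is given below. Splitting the spectrum into thirds by eigenvalue index is our reading of "roughly equal-sized bins of subsequent eigenvalues", and the helper name `hard_band_pass` is ours.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def hard_band_pass(X: np.ndarray, adj: sp.spmatrix, band: str = "low") -> np.ndarray:
+    """Project node features onto one third of the graph Fourier modes (Eq. 1).
+    Needs a full eigendecomposition of N, so it only suits small graphs."""
+    n = adj.shape[0]
+    deg = np.asarray(adj.sum(axis=1)).ravel()
+    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))    # guards isolated nodes
+    N = np.eye(n) - d_inv_sqrt[:, None] * adj.toarray() * d_inv_sqrt[None, :]
+    _, phi = np.linalg.eigh(N)                             # eigenvalues ascend
+    third = n // 3
+    idx = {"low": slice(0, third),
+           "mid": slice(third, 2 * third),
+           "high": slice(2 * third, n)}[band]
+    mask = np.zeros(n)
+    mask[idx] = 1.0                                        # diagonal of I_band
+    return phi @ (mask[:, None] * (phi.T @ X))             # Phi I_band Phi^T X
+```
+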
+The above band-pass filtering perturbation enables a precise selection of the frequency bands. However, it requires a full eigendecomposition of the normalized graph Laplacian, which is impractical for large graphs. We therefore provide an alternative approach based on wavelet bank filtering [13]. This leverages the fact that polynomial filters $h$ of the normalized graph Laplacian directly transform the spectrum via $h\left( \mathbf{N}\right) = \widetilde{\mathbf{\Phi }}h\left( \widetilde{\mathbf{\Lambda }}\right) {\widetilde{\mathbf{\Phi }}}^{\top }$ , yielding the frequency response $h\left( \lambda \right)$ for any eigenvalue $\lambda$ of N. This is usually done by taking the symmetrized diffusion matrix
+
+$$
+\mathbf{T} = \frac{1}{2}\left( {\mathbf{I} + {\mathbf{D}}^{-\frac{1}{2}}{\mathbf{{MD}}}^{-\frac{1}{2}}}\right) = \frac{1}{2}\left( {2\mathbf{I} - \mathbf{N}}\right) . \tag{2}
+$$
+
+By construction, $\mathbf{T}$ admits the same eigenbasis as $\mathbf{N}$ but its eigenvalues are mapped from $\left\lbrack {0,2}\right\rbrack$ to $\left\lbrack {0,1}\right\rbrack$ via the frequency response $h\left( \lambda \right) = 1 - \lambda /2$. As a result, large eigenvalues are mapped to small values (and vice versa). Next, we construct diffusion wavelets [15] that consist of differences of dyadic powers ${2}^{k}, k \in {\mathbb{N}}_{0}$ of $\mathbf{T}$, i.e., ${\Psi }_{k} = {\mathbf{T}}^{{2}^{k - 1}} - {\mathbf{T}}^{{2}^{k}}$, which act as bandpass filters on the signal. Intuitively, this operator "compares" two neighborhoods of different sizes (radius ${2}^{k - 1}$ and ${2}^{k}$) at each node. Diffusion wavelets are usually maintained in a wavelet bank ${\mathcal{W}}_{K} = {\left\{ {\mathbf{\Psi }}_{k},{\mathbf{\Phi }}_{K}\right\} }_{k = 0}^{K}$, which contains additional highpass ${\mathbf{\Psi }}_{0} = \mathbf{I} - \mathbf{T}$ and lowpass ${\mathbf{\Phi }}_{K} = {\mathbf{T}}^{{2}^{K}}$ filters. In our experiments, we choose $K = 1$, resulting in the following low, mid, and highpass filtered node features:
+
+$$
+{\mathbf{X}}_{\text{high }} = \left( {\mathbf{I} - \mathbf{T}}\right) \mathbf{X},\;{\mathbf{X}}_{\text{mid }} = \left( {\mathbf{T} - {\mathbf{T}}^{2}}\right) \mathbf{X},\;{\mathbf{X}}_{\text{low }} = {\mathbf{T}}^{2}\mathbf{X}. \tag{3}
+$$
+
+These filters correspond to frequency responses ${h}_{\text{high }}\left( \lambda \right) = \lambda /2$, ${h}_{\text{mid }}\left( \lambda \right) = \left( {1 - \lambda /2}\right) - {\left( 1 - \lambda /2\right) }^{2}$ and ${h}_{\text{low }}\left( \lambda \right) = {\left( 1 - \lambda /2\right) }^{2}$. Therefore, the low-pass filtering preserves low-frequency information while suppressing high-frequency information, whereas high-pass filtering does the opposite. The mid-pass filtering attenuates all frequencies to some degree, but it preserves considerably more middle-frequency information than high- or low-frequency information.
+
+This filtering may therefore be interpreted as an approximation of the hard band-pass filtering discussed above. From the spatial message passing perspective, low-pass filtering is equivalent to local averaging of the node features, which has profound implications for the homophilic and heterophilic characteristics of the datasets (Sec. 3.2). Finally, since the computations needed in (3) can be carried out via sparse matrix multiplications, they have the advantage of scaling well to large graphs. We therefore utilize the wavelet bank filtering for the datasets with larger graphs considered in Sec. 3.2, while for the smaller graphs, considered in Sec. 3.1, we employ the direct band-pass filtering approach.
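+
+The wavelet-bank variant (Eqs. 2-3) only needs sparse matrix products; a minimal sketch, with the helper name `wavelet_filters` being ours, might look as follows.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def wavelet_filters(X: np.ndarray, adj: sp.spmatrix) -> dict:
+    """Low/mid/high-pass filtered node features from the K=1 wavelet bank (Eq. 3),
+    using only sparse matrix products so it scales to large graphs."""
+    n = adj.shape[0]
+    deg = np.asarray(adj.sum(axis=1)).ravel()
+    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
+    # Symmetrized diffusion operator T = (I + D^{-1/2} M D^{-1/2}) / 2, Eq. (2)
+    T = 0.5 * (sp.eye(n) + d_inv_sqrt @ adj @ d_inv_sqrt)
+    TX = T @ X
+    TTX = T @ TX
+    return {"high": X - TX,    # (I - T) X
+            "mid": TX - TTX,   # (T - T^2) X
+            "low": TTX}        # T^2 X
+```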
+
+### 2.2 Graph Structure Perturbations
+
+The following perturbations act on the graph structure by altering the adjacency matrix. By removing all edges (NoEdges) or making the graph fully connected (FullyConn), we can eliminate the structural information completely and essentially turn the graph into a set. The difference between the two perturbations lies in whether each node is processed independently (NoEdges) or all nodes are processed together (FullyConn). However, FullyConn is only applied to inductive datasets in Sec. 3.1 due to computational limitations. Furthermore, we consider a degree-preserving random edge rewiring perturbation (RandRewire). In each step, we randomly sample a pair of edges and randomly exchange their end nodes. We then repeat this process without replacement until 50% of the edges have been randomly rewired.
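+
+As an illustration of RandRewire, here is a simplified NetworkX sketch of degree-preserving edge swaps; the exact without-replacement bookkeeping of the paper is simplified to counting rewired edges, and `rand_rewire` is a hypothetical name.
+
+```python
+import random
+import networkx as nx
+
+def rand_rewire(G: nx.Graph, frac: float = 0.5, seed: int = 0) -> nx.Graph:
+    """Degree-preserving rewiring: swap the endpoints of randomly chosen edge
+    pairs until roughly `frac` of the edges have been rewired."""
+    rng = random.Random(seed)
+    H = G.copy()
+    target = int(frac * H.number_of_edges())
+    rewired, attempts = 0, 0
+    while rewired < target and attempts < 100 * (target + 1):
+        attempts += 1
+        (u, v), (x, y) = rng.sample(list(H.edges()), 2)
+        # Proposed swap (u,v),(x,y) -> (u,y),(x,v); skip self-loops/multi-edges
+        if len({u, v, x, y}) < 4 or H.has_edge(u, y) or H.has_edge(x, v):
+            continue
+        H.remove_edges_from([(u, v), (x, y)])
+        H.add_edges_from([(u, y), (x, v)])
+        rewired += 2
+    return H
+```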
+
+To inspect the importance of local vs. global graph structure, we designed the Frag- $k$ perturbations, which randomly partition the graph into connected components consisting of nodes whose distance to a seed node is less than $k$ . Specifically, we randomly draw one seed node at a time and extract its $k$ -hop neighborhood by eliminating all edges between this new fragment and the rest of the graph; we repeat this process on the remaining graph until the whole graph is processed. A smaller $k$ implies smaller components, and hence discards the global structure and long-range interactions.
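+
+A NetworkX sketch of Frag-$k$ follows; the boundary condition is read here as "within at most $k$ hops of the seed node", and the function name `frag_k` is ours.
+
+```python
+import random
+import networkx as nx
+
+def frag_k(G: nx.Graph, k: int, seed: int = 0) -> nx.Graph:
+    """Frag-k: repeatedly pick a random seed node, cut its k-hop neighborhood
+    out of the remaining graph, and continue until every node is assigned."""
+    rng = random.Random(seed)
+    H = G.copy()
+    remaining = set(H.nodes())
+    while remaining:
+        seed_node = rng.choice(list(remaining))
+        # Nodes of the remaining graph within k hops of the seed node
+        ball = nx.single_source_shortest_path_length(
+            H.subgraph(remaining), seed_node, cutoff=k)
+        fragment = set(ball)
+        # Remove every edge connecting the new fragment to the rest of the graph
+        crossing = [(u, v) for u, v in H.edges(fragment)
+                    if (u in fragment) != (v in fragment)]
+        H.remove_edges_from(crossing)
+        remaining -= fragment
+    return H
+```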
+
+Graph fragmentations can also be constructed using spectral graph theory. In our taxonomization, we adopt one such method, which we refer to as Fiedler fragmentation (FiedlerFrag) (see [33] and the references therein). In the case when the graph $G$ is connected, ${\phi }_{0}$ , the eigenvector of the graph Laplacian $\mathbf{L}$ corresponding to ${\lambda }_{0} = 0$ , is constant. The eigenvector ${\phi }_{1}$ corresponding to the next smallest eigenvalue, ${\lambda }_{1}$ , is known as the Fiedler vector [21]. Since ${\phi }_{0}$ is constant, it follows that ${\phi }_{1}$ has zero average. This motivates partitioning the graph into two sets of vertices, one where ${\phi }_{1}$ is positive and the other where ${\phi }_{1}$ is negative. We refer to this process as binary Fiedler fragmentation. This heuristic is used to construct the ratio cut for a connected graph [26]. The ratio cut partitions a connected graph into two disjoint connected components $V = U \cup W$ , such that the objective $\left| {E\left( {U, W}\right) }\right| /\left( {\left| U\right| \cdot \left| W\right| }\right)$ is minimized, where $E\left( {U, W}\right) \mathrel{\text{:=}} \{ \left( {u, w}\right) \in E : u \in U, w \in W\}$ is the set of removed edges when fragmenting $G$ accordingly. This can be seen as a combination of the min cut objective (numerator), while encouraging a balanced partition (denominator).
+
+FiedlerFrag is based on iteratively applying binary Fiedler fragmentation. In each step, we separate the graph into its connected components and apply binary Fiedler fragmentation to the largest component. We repeat this process until either we reach 200 iterations, or the size of the largest connected component falls below 20. In contrast to the random fragmentation Frag-$k$, this perturbation preserves densely connected regions of the graph and eliminates connections between them. Thus, FiedlerFrag tests the importance of inter-community message flow. Due to computational limits, we only apply FiedlerFrag to inductive datasets in Sec. 3.1 for which this computation is feasible.
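+
+A sketch of one binary Fiedler split, plus the iterative loop (at most 200 iterations, stopping once the largest component has fewer than 20 nodes), using NetworkX's built-in Fiedler vector routine; both function names are ours.
+
+```python
+import networkx as nx
+
+def binary_fiedler_split(G: nx.Graph) -> nx.Graph:
+    """One binary Fiedler fragmentation step: split a connected graph by the
+    sign of its Fiedler vector and remove the crossing edges."""
+    nodes = list(G.nodes())
+    fiedler = nx.fiedler_vector(G)          # eigenvector of L for lambda_1
+    positive = {n for n, v in zip(nodes, fiedler) if v >= 0}
+    crossing = [(u, v) for u, v in G.edges()
+                if (u in positive) != (v in positive)]
+    H = G.copy()
+    H.remove_edges_from(crossing)
+    return H
+
+def fiedler_frag(G: nx.Graph, max_iter: int = 200, min_size: int = 20) -> nx.Graph:
+    """FiedlerFrag: repeatedly split the largest connected component until the
+    iteration cap is reached or the largest component becomes small enough."""
+    H = G.copy()
+    for _ in range(max_iter):
+        largest = max(nx.connected_components(H), key=len)
+        if len(largest) < min_size:
+            break
+        sub = H.subgraph(largest)
+        split = binary_fiedler_split(sub)
+        H.remove_edges_from(set(sub.edges()) - set(split.edges()))
+    return H
+```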
+
+### 2.3 Data-driven Taxonomization by Hierarchical Clustering
+
+To study a systematic classification of the graph datasets, we use Ward's method [58] for hierarchical clustering analysis of their sensitivity profiles. The sensitivity profiles are established empirically by contrasting the performance of a GNN model on a perturbed dataset and on the original dataset. To quantify this performance change, we use the ${\log }_{2}$-transformed ratio of test AUROC (area under the ROC curve). Thus, a sensitivity profile is a one-dimensional vector with as many elements as we have perturbation experiments. See Figure 1 and Appendix A for further details.
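+
+A compact sketch of how one sensitivity profile and the subsequent Ward clustering could be computed with SciPy follows; the dictionaries and function names are illustrative, not the paper's actual pipeline.
+
+```python
+import numpy as np
+from scipy.cluster.hierarchy import fcluster, linkage
+
+def sensitivity_profile(perturbed_auroc: dict, original_auroc: float) -> np.ndarray:
+    """log2 ratio of test AUROC after each perturbation vs. the original dataset."""
+    keys = sorted(perturbed_auroc)                 # fixed perturbation order
+    return np.array([np.log2(perturbed_auroc[k] / original_auroc) for k in keys])
+
+def taxonomize(profiles: np.ndarray, n_clusters: int = 3) -> np.ndarray:
+    """Ward hierarchical clustering of the dataset-by-perturbation profile matrix."""
+    Z = linkage(profiles, method="ward")           # rows are datasets
+    return fcluster(Z, t=n_clusters, criterion="maxclust")
+```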
+
+
+
+Figure 3: Visualization of (a) inductive and (b) transductive datasets based on PCA of their perturbation sensitivity profiles according to a GCN model. The datasets are labeled according to their taxonomization by hierarchical clustering, shown in Figures 4 and 6, which agrees with the clusters that emerge in the PCA plots. The bottom panels show the loadings of the first two principal components and (in parentheses) the percentage of variance explained by each.
+
+In order to generate sensitivity profiles, we must select suitable GNN models based on several practical considerations: (i) The model has to be expressive enough to efficiently leverage aspects of the node features and graph structure that we perturb. Otherwise, our analysis will not be able to uncover reliance on these properties. (ii) The model needs to be general enough to be applicable to a wide variety of datasets, avoiding dataset-specific adjustments that may lead to profiling that is not comparable between datasets. Therefore, we did not aim for specialized models that maximize performance, but rather models that (i) achieve at least baseline performance comparable to published works over all datasets, (ii) have manageable computational complexity to facilitate large-scale experimentation, and (iii) use well-established and theoretically well-understood architectures.
+
+With these criteria in mind, we focused on two popular MPNN models in our analysis: GCN [35] and GIN [63]. The original GCN serves as an ideal starting point as its abilities and limitations are well-understood. However, we also wanted to perform taxonomization through a more recent and provably more expressive method, which motivated our selection of GIN as the second architecture. We emphasize that the main focus here is not to provide a benchmarking of GNN models per se, but rather to address the taxonomization of graph datasets (and accompanying tasks) used in such benchmarks. Nevertheless, we have also generated sensitivity profiles with additional models in order to comparatively demonstrate the robustness of our approach: 2-Layer GIN, ChebNet [15], GatedGCN [7] and GCN II [11]; see Figure 5.
+
+## 3 Results
+
+Each of the 48 datasets we consider is equipped with either a node classification or graph classification task. In the case of node classification, we further differentiate between the inductive setting, in which learning is done on a set of graphs and the generalization occurs from a training set of graphs to a test set, and the transductive setting, in which learning is done in one (large) graph and the generalization occurs between subsets of nodes in this graph. Graph classification tasks, by contrast, always appear in an inductive setting. The only major difference between graph classification and inductive node classification is that prior to final prediction, the hidden representations of all nodes are pooled into a single graph-level representation. In the following two subsections, we provide an analysis of the sensitivity profiles for datasets with inductive and transductive tasks.
+
+
+
+Figure 4: Taxonomy of inductive graph learning datasets via graph perturbations. For each dataset and perturbation combination, we show the GCN model performance relative to its performance on the unmodified dataset.
+
+### 3.1 Taxonomy of Inductive Benchmarks
+
+Datasets. We examine a total of 23 datasets, 20 of which are equipped with a graph-classification task (inductive by nature) and the other three with an inductive node-classification task. Of these datasets, 17 are derived from real-world data, while the other six are synthetically generated.
+
+For real-world data, we consider several domains. Biochemistry tasks are the most ubiquitous, including compound classification based on effects on cancer or HIV inhibition (NCI1 & NCI109 [57], ogbg-molhiv [31]), protein-protein interaction (PPI [68, 28]), multilabel compound classification based on toxicity on biological targets (ogbg-moltox21 [31]), and multiclass classification of enzymes (ENZYMES [31]). We also consider superpixel-based graph classification as an extension of image classification (MNIST & CIFAR10 [17]), collaboration datasets (IMDB-BINARY & COLLAB [64]), and social graphs (REDDIT-BINARY & REDDIT-MULTI-5K [64]).
+
+For synthetic data, we have a concrete understanding of their graph domain properties and how these properties relate to their prediction task. This allows us to derive a deeper understanding of their sensitivity profiles. The six synthetic datasets in our study make use of a varied set of graph generation algorithms. Small-world [65] is based on graph generation with the Watts-Strogatz (WS) model; the task is to classify graphs based on average path length. Scale-free [65] retains the same task definition, but the graph generation algorithm is an extension of the Barabási-Albert (BA) model proposed by Holme and Kim [30]. PATTERN and CLUSTER are node-level classification tasks generated with stochastic block models (SBM) [29]. Synthie [42] graphs are derived by first sampling graphs from the well-known Erdős-Rényi (ER) model, then deriving each class of graphs by a specific graph surgery and by sampling node features from a distinct distribution for each class. Similarly, SYNTHETICnew [18] graphs are generated from a random graph, where different classes are formed by specific modifications to the original graph structure and node features. Further details of dataset definitions and synthetic graph generation algorithms are provided in Appendix C.
+
+Insights. Here we itemize the main insights into inductive datasets. Our full taxonomy is shown in Figures 4 and 3a, with a detailed analysis of individual clusters given in Appendix B.1.
+
+- Three distinct groups of datasets. We identify a categorization into three dataset clusters, I-1, I-2, and I-3, that emerge from both the hierarchical clustering and PCA. The datasets in I-1 and I-2 exhibit stronger node feature dependency and do not encode crucial information in the graph structure. The main differentiating factor between I-1 and I-2 is their relative sensitivity to node feature perturbations - in particular, how well NodeDeg can substitute the original node features. On the other hand, datasets in I-3 rely considerably more on graph structure for correct task prediction. This is also reflected by the first two principal components (Figure 3a), where PC1 approximately corresponds to structural perturbations and PC2 to node feature perturbations.
+
+- No clear clustering by dataset domain. While datasets that are derived in a similar fashion cluster together (e.g., REDDIT-* datasets), in general, each of the three clusters contains datasets from a variety of application domains. Not all molecular datasets behave alike; e.g., ogbg-mol* datasets in I-2 considerably differ from NCI* datasets in I-3.
+
+- Synthetic datasets do not fully represent real-world scenarios. CLUSTER, SYNTHETICnew, and PATTERN lie at the periphery of the PCA embeddings, suggesting that existing synthetic datasets do not resemble the type of complexity encountered in real-world data. Hence, one should use synthetic datasets in conjunction with real-world datasets to comprehensively evaluate GNN performance rather than solely relying on synthetic ones.
+
+- Representative set. One can now select a representative subset of all datasets to cover the observed heterogeneity among the datasets. Our recommendation: CIFAR10 from I-1; D&D, ogbg-molhiv from I-2; NCI1, COLLAB, REDDIT-MULTI-5K, CLUSTER from I-3.
+
+- Robustness w.r.t. GNN choice. In addition to GCN, we have performed our perturbation analysis w.r.t. GIN [63], 2-Layer GIN, ChebNet [15], GatedGCN [7] and GCN II [11]. These models were selected to cover a variety of model inductive biases: GIN is provably 1-WL expressive, ChebNet uses higher-order approximation of the Laplacian, GatedGCN employs gating akin to attention, and GCN II leverages skip connections and identity mapping to alleviate oversmoothing. We have also tested a 2-layer GIN to probe the robustness to the number of message-passing layers. The taxonomies w.r.t. other models (Figure B.1) are congruent with that of GCN. Given the differing inductive biases and representational capacity, some differences in the sensitivity profiles are not only expected but also desired, as they validate the models' distinct roles in benchmarking. The resulting profiles can be used for a detailed comparative analysis of these models, but the overall conclusions remain consistent. This consistency is further validated by our correlation analysis amongst these models, shown in Figure 5. The Pearson correlation coefficients of all pairs are above 90%, implying that our taxonomy is sufficiently robust w.r.t. different GNNs and the number of layers.
+
+
+
+Figure 5: Pearson correlation between profiles derived by six GNN models.
+
+### 3.2 Taxonomy of Transductive Benchmarks
+
+Datasets. We selected a wide variety of 25 transductive datasets with node classification tasks, including citation networks, social networks, and other networks derived from web pages (see Appendix C). In citation networks, such as CitationFull (CF) [5], nodes correspond to papers and edges to citation links between them. In networks derived from web pages, like WikiNet [48], Actor [48], and WikiCS [40], nodes correspond to pages and edges to hyperlinks between them. In social networks, like Deezer (DzEu) [50], LastFM (LFMA) [50], Twitch [49], Facebook (FBPP) [49], Github [49], and Coau [52], nodes and edges are based on a type of relationship, such as mutual friendship and co-authorship. Flickr [66] and Amazon [52] are constructed based on other notions of similarity between entities, such as co-purchasing and image property similarities. WebKB [48] contains networks of university web pages connected via hyperlinks. It is an example of a heterophilic dataset [45], since immediate neighbor nodes do not necessarily share the same labels (which correspond to a page's category, such as faculty or student). By contrast, Cora, CiteSeer, and PubMed are known to be homophilic datasets where nodes within a neighborhood are likely to share the same label. In fact, no less than 60% of nodes in these networks have neighborhoods that share the same node label as the central node [40].
+
+Insights. Below we list the main insights into transductive graph datasets and their taxonomy (Figures 6 and 3b). We refer the reader to Appendix B.2 for the analysis of individual clusters.
+
+
+
+Figure 6: Taxonomization of transductive datasets based on sensitivity profiles w.r.t. a GCN model.
+
+- Transductive datasets are uniformly insensitive to structural perturbations. Sensitivity profiles of all transductive datasets show high robustness to all graph structure perturbations. This is in stark contrast with the inductive datasets, where the largest cluster I-3 is defined by high sensitivity to structural perturbations. The graph connectivity may not be vital to every dataset/task, e.g., in WikiCS word embeddings of Wikipedia pages may be sufficient for categorization without hyperlinks. While the observation that no dataset significantly depends on structural information is startling, it is consistent with reports that MLPs and similar models augmented with label propagation outperform GNNs on several of these transductive datasets [23, 32].
+
+- Three distinct groups of datasets. The transductive datasets are also categorized into three clusters, T-1, T-2, and T-3. T-1 consists of heterophilic datasets, such as WebKB and Actor [45, 39]. These are well-separated from the others, as seen in the right half of the PCA plot (Figure 3b), primarily along PC1, and are characterized by a performance drop when the original node features are removed (NoNodeFtrs, RandFtrs) or replaced by node degrees (NodeDeg). T-3 is indifferent to both node feature and structure removal, implying redundancies between node features and graph structure for their tasks. T-2 datasets, on the other hand, experience significant performance degradation on NoNodeFtrs and RandFtrs, yet these drops are recovered under NodeDeg. This indicates that T-2 datasets have tasks for which structural summary information is sufficient, perhaps due to homophily.
+
+- Representative set. Many datasets have very similar sensitivity profiles; also factoring in graph size and original AUROC (to avoid saturated datasets), we make the following recommendation: WebKB-Wis, Actor from T-1; WikiNet-cham, WikiCS, Flickr from T-2; WikiNet-squir, Twitch-EN, GitHub from T-3.
+
+## 4 Discussion
+
+Our results quantify the extent to which graph features or structures are more important for the downstream tasks, a question already raised in classical works on graph kernels [37, 51]. We observed that more than half of the datasets contain rich node features. On average, excluding these features reduces GNN prediction performance more than excluding the entire graph structure, especially for transductive node-level tasks. Furthermore, low-frequency information in node features appears to be essential in most datasets that rely on node features. Historically, most graph data aimed to capture closeness among entities, which has prompted the development of local aggregation approaches, such as label propagation, personalized PageRank, and diffusion kernels [36, 14], all of which share a common principle of low-pass filtering. High-frequency information, on the other hand, may be important in recently emerging application areas, such as combinatorial optimization, logical reasoning or biochemical property prediction, which require complex non-local representations.
+
+Further, despite the recent interest in developing new methods that leverage long-range dependencies and heterophily, adequate benchmarking datasets remain scarce or not readily accessible. Meanwhile, some recent efforts such as GraphWorld [46] aim to comprehensively profile a GNN's performance using a collection of synthetic datasets that cover an entire parametric space. Notably, our analysis demonstrates that synthetic tasks do not fully resemble the complexity of real-world applications. Hence, benchmarking based purely on synthetic datasets should be taken with caution, as the behavior might not be representative of real-world scenarios.
+
+As a comprehensive benchmarking framework, our work provides several potential use cases beyond the taxonomy analysis presented here. One such use is understanding the characteristics of new datasets and how they relate to existing ones. For example, DeezerEurope (DzEu) is a relatively new dataset [50] that is less commonly benchmarked and studied than the other datasets we consider. The inclusion of DzEu in T-1 suggested its heterophilic nature, which has indeed been demonstrated recently [38]. On the other hand, since the sensitivity profiles naturally suggest the invariances that are important for different datasets from a practical standpoint, they could provide valuable guidance to the development of self-supervised learning and data augmentations for GNNs [62].
+
+Finally, we observed that overall patterns in sensitivity profiles remain similar regardless of whether we used GCN, GIN, or the other four models to derive them. Subtle differences in sensitivity profiles w.r.t. different GNN models are not only expected but also desired when comparing models that have distinct levels of expressivity. While we expect overall patterns to be similar, more expressive models should provide enhanced resolution. One could then contrast taxonomization w.r.t. first-order GNNs (such as those we used) with more expressive higher-order GNNs, Transformer-based models with global attention, and others. We hope our work will also inspire future work to empirically validate the expressivity of new graph learning methods in this vein, beyond classical benchmarking.
+
+Limitations and Future Work. Our perturbation-based approach is fundamentally limited in that we cannot test the significance of a property that we cannot perturb or that the reference GNN model cannot capture. Therefore, designing more sophisticated perturbation strategies to gauge specific relations could bring further insight into the datasets and GNN models alike. New perturbations may gauge the usefulness of geometric substructures such as cycles [3] or the effects of graph bottlenecks, e.g., by rewiring graphs to modify their "curvatures" [55]. Other perturbations could include graph sparsification (edge removal) [53] and graph coarsening (edge contraction) [10, 4].
+
+A number of OGB node-level datasets are not included in this study due to the memory cost of typical MPNNs. Conducting an analysis based on recent scalable GNN models [20] would be an interesting avenue of future research. Further, we only considered classification tasks, omitting regression tasks, as their evaluation metrics are not easily comparable. One way to circumvent this issue would be to quantize regression tasks into classification tasks by binning their continuous targets. Additionally, we disregarded edge features in the two OGB molecular datasets we used. In future work, edge features could be leveraged by an edge-feature-aware generalization of MPNNs. The importance of edge features can then be analyzed by introducing new edge-feature perturbations. We also limited our analysis to node-level and graph-level tasks, but this framework could be further extended to link-prediction or edge-level tasks. While our perturbations could be used in this new scenario as well, new perturbations, such as the above-mentioned graph sparsification, would need to be considered. Similarly, hallmark models for link and relation predictions, outside MPNNs, should be considered.
+
+## 5 Conclusion
+
+We provide a systematic data-driven approach for taxonomizing a large collection of graph datasets - the first study of its kind. The core principle of our approach is to gauge the essential characteristics of a given dataset with respect to its accompanying prediction task by inspecting the downstream effects caused by perturbing its graph data. The resulting sensitivities to the diverse set of perturbations serve as "fingerprints" that allow us to identify datasets with similar characteristics. We derive several insights into the current common benchmarks used in the field of graph representation learning, and make recommendations on the selection of representative benchmarking suites. Our analysis also puts forward a foundation for evaluating new benchmarking datasets that will likely emerge in the field.
+
+## References
+
+[1] A.K.Debnath, R.L. Lopez de Compadre, G. Debnath, A.J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. Journal of medicinal chemistry, 34(2): 786-797, 1991. 21, 22
+
+[2] U. Alon and E. Yahav. On the bottleneck of graph neural networks and its practical implications, 2021. 19
+
+[3] B. Bevilacqua, F. Frasca, D. Lim, B. Srinivasan, C. Cai, G. Balamurugan, M.M. Bronstein, and H. Maron. Equivariant subgraph aggregation networks, 2021. 9
+
+[4] C. Bodnar, C. Cangea, and P. Liò. Deep graph mapper: Seeing graphs through the neural lens. Frontiers in Big Data, 4, June 2021. 9
+
+[5] A. Bojchevski and S. Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. In Proc. of ICLR, 2018. 7, 22, 23
+
+[6] K.M. Borgwardt, C.S. Ong, S. Schönauer, SVN Vishwanathan, A.J. Smola, and H. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21:i47-i56, 2005. 20, 22
+
+[7] X. Bresson and T. Laurent. Residual gated graph convnets. ICLR, 2018. 5, 7
+
+[8] M.M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, Jul 2017. ISSN 1558-0792. 1
+
+[9] M.M. Bronstein, J. Bruna, T. Cohen, and P. Veličković. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, 2021. 1, 2
+
+[10] N. Brugnone, A. Gonopolskiy, M.W. Moyle, M. Kuchroo, D. Dijk, K.R. Moon, D. Colon-Ramos, G. Wolf, M.J. Hirn, and S. Krishnaswamy. Coarse graining of data via inhomogeneous diffusion condensation. IEEE Big Data, Dec 2019. 9
+
+[11] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks. In Proceedings of the 37th International Conference on Machine Learning, 2020. 5,7
+
+[12] W. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C. Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In Proc. of 25th SIGKDD, 2019. 19
+
+[13] R.R. Coifman and M. Maggioni. Diffusion wavelets. Applied and computational harmonic analysis, 21(1):53-94, 2006. 3
+
+[14] L. Cowen, T. Ideker, B.J. Raphael, and R. Sharan. Network propagation: a universal amplifier of genetic associations. Nat. Rev. Gene., 18(9):551-562, 2017. 8
+
+[15] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in NeurIPS, volume 29, pages 3844-3852, 2016. 1, 3, 5, 7
+
+[16] P.D. Dobson and A.J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. J. of Mol. Bio., 330(4):771-783, 2003. 20, 22
+
+[17] V.P. Dwivedi, C.K. Joshi, T. Laurent, Y. Bengio, and X. Bresson. Benchmarking Graph Neural Networks. arXiv:2003.00982, 2020. 1, 6, 20, 22
+
+[18] A. Feragen, N. Kasenburg, J. Petersen, M. de Bruijne, and K. Borgwardt. Scalable kernels for graphs with continuous attributes. In Adv. in NeurIPS, volume 26, 2013. 6, 21, 22
+
+[19] M. Fey and J.E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Repr. Learning on Graphs and Manifolds, 2019. 14
+
+[20] M. Fey, J. E. Lenssen, F. Weichert, and J. Leskovec. GNNAutoScale: Scalable and expressive graph neural networks via historical embeddings, 2021. 9, 19
+
+[21] M. Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czechoslovak mathematical journal, 25(4):619-633, 1975. 4
+
+[22] S. Freitas, Y. Dong, J. Neil, and D.H. Chau. A large-scale database for graph representation learning. In Adv. in NeurIPS, 2021. 21, 22
+
+[23] J. Gasteiger, A. Bojchevski, and S. Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In International Conference on Learning Representations, 2018. 8
+
+[24] J. Gilmer, S.S. Schoenholz, P.F. Riley, O. Vinyals, and G.E. Dahl. Neural message passing for quantum chemistry, 2017. 1
+
+[25] C.S. Greene, A. Krishnan, A.K. Wong, E. Ricciotti, R.A. Zelaya, D.S. Himmelstein, R. Zhang, B.M. Hartmann, E. Zaslavsky, S.C. Sealfon, et al. Understanding multicellular function and disease with human tissue-specific networks. Nature genetics, 47(6):569-576, 2015. 21
+
+[26] L. Hagen and A.B. Kahng. New spectral methods for ratio cut partitioning and clustering. IEEE transactions on computer-aided design of integrated circuits and systems, 11(9):1074-1085, 1992. 4
+
+[27] W.L. Hamilton. Graph Representation Learning. Morgan & Claypool, 2020. 1
+
+[28] W.L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1025-1035, 2017. 6, 21
+
+[29] P.W. Holland, K.B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983. ISSN 0378-8733. 6, 20
+
+[30] P. Holme and B.J. Kim. Growing scale-free networks with tunable clustering. Physical Review E, 65(2), Jan 2002. ISSN 1095-3787. doi: 10.1103/physreve.65.026107. 6, 21
+
+[31] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. Adv. in NeurIPS 33, 2020. 1, 2, 6, 21, 22
+
+[32] Q. Huang, H. He, A. Singh, S. Lim, and A. Benson. Combining label propagation and simple models out-performs graph neural networks. In International Conference on Learning Representations, 2020. 8
+
+[33] J. Irion and N. Saito. Efficient approximation and denoising of graph signals using the multiscale basis dictionaries. IEEE Transactions on Signal and Information Processing over Networks, 3 (3):607-616, 2016. 4
+
+[34] D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. 14
+
+[35] T.N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proc. of ICLR, 2017. 1, 5
+
+[36] S. Köhler, S. Bauer, D. Horn, and P.N. Robinson. Walking the interactome for prioritization of candidate disease genes. The American Journal of Human Genetics, 82(4):949-958, April 2008. 8
+
+[37] N.M. Kriege, F.D. Johansson, and C. Morris. A survey on graph kernels. Applied Network Science, 5(1):1-42, 2020. 8
+
+[38] D. Lim, F. Hohne, X. Li, S.L. Huang, V. Gupta, O. Bhalerao, and S. Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods, 2021. 9
+
+[39] Y. Ma, X. Liu, N. Shah, and J. Tang. Is homophily a necessity for graph neural networks?, 2021. 8, 19
+
+[40] P. Mernyei and C. Cangea. Wiki-CS: A Wikipedia-based benchmark for graph neural networks, 2020. 7, 22, 23
+
+[41] Y. Min, F. Wenkel, and G. Wolf. Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks. In Adv. in NeurIPS 33, pages 14498-14508, 2020. 1
+
+[42] C. Morris, N.M. Kriege, K. Kersting, and P. Mutzel. Faster kernels for graphs with continuous attributes via hashing. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pages 1095-1100, 2016. 6, 21, 22
+
+[43] C. Morris, M. Ritzert, M. Fey, W.L. Hamilton, J.E. Lenssen, G. Rattan, and M. Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on AI, volume 33, pages 4602-4609, 2019. 1
+
+[44] C. Morris, N.M. Kriege, F. Bause, K. Kersting, P. Mutzel, and M. Neumann. TUDataset: A collection of benchmark datasets for learning with graphs. In ICML 2020 GRL+ Workshop, 2020. 1, 2
+
+[45] H. Mostafa, M. Nassar, and S. Majumdar. On local aggregation in heterophilic graphs. arXiv:2106.03213, 2021. 7, 8, 19
+
+[46] J. Palowitch, A. Tsitsulin, B. Mayer, and B. Perozzi. GraphWorld: Fake graphs bring real insights for GNNs. ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022. 1, 9
+
+[47] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035, 2019. 14
+
+[48] H. Pei, B. Wei, K.C. Chang, Y. Lei, and B. Yang. Geom-GCN: Geometric graph convolutional networks. In Proc. of ICLR, 2020. 7, 22, 23
+
+[49] B. Rozemberczki and R. Sarkar. Characteristic functions on graphs: Birds of a feather, from statistical descriptors to parametric models. In Proc. of 29th ACM Int'l Conf. on Information & Knowledge Management, pages 1325-1334, 2020. 7, 23
+
+[50] B. Rozemberczki, C. Allen, and R. Sarkar. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2):cnab014, 2021. 7, 9, 22, 23
+
+[51] T. Schulz and P. Welke. On the necessity of graph kernel baselines. In ECML-PKDD, GEM workshop, volume 1, page 6, 2019. 8
+
+[52] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network evaluation. NeurIPS 2018 R2L workshop, 2018. 7, 22, 23
+
+[53] D.A. Spielman and S. Teng. Spectral sparsification of graphs, 2010. 9
+
+[54] D. Szklarczyk, A. Franceschini, S. Wyder, K. Forslund, D. Heller, J. Huerta-Cepas, M. Simonovic, A. Roth, A. Santos, K.P. Tsafou, et al. STRING v10: protein-protein interaction networks, integrated over the tree of life. Nucleic acids research, 43(D1):D447-D452, 2015. 21
+
+[55] J. Topping, F.D. Giovanni, B.P. Chamberlain, X. Dong, and M.M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature, 2021. 9, 19
+
+[56] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio. Graph attention networks. In The 6th ICLR, 2018. 1
+
+[57] N. Wale, I.A. Watson, and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347-375, 2008. 6, 21, 22
+
+[58] J.H. Ward Jr. Hierarchical grouping to optimize an objective function. Journal of the American statistical association, 58(301):236-244, 1963. 4, 14
+
+[59] D.J. Watts and S.H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393 (6684):440-442, 1998. 21
+
+[60] Z. Wu, B. Ramsundar, E.N. Feinberg, J. Gomes, C. Geniesse, A.S. Pappu, K. Leswing, and V. Pande. MoleculeNet: a benchmark for molecular machine learning. Chemical science, 9(2): 513-530, 2018. 21
+
+[61] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4-24, 2020. 1
+
+[62] Y. Xie, Z. Xu, J. Zhang, Z. Wang, and S. Ji. Self-supervised learning of graph neural networks: A unified review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 9
+
+[63] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In Proc. of ICLR, 2019. 1, 5, 7
+
+[64] P. Yanardag and S.V.N. Vishwanathan. Deep graph kernels. In Proc. of 21th SIGKDD, pages 1365-1374, 2015. 6, 20, 21, 22
+
+[65] J. You, R. Ying, and J. Leskovec. Design space for graph neural networks. In NeurIPS, 2020. 6, 14, 21, 22
+
+[66] H. Zeng, H. Zhou, A. Srivastava, R. Kannan, and V. Prasanna. GraphSAINT: Graph sampling based inductive learning method. In Proc. of ICLR, 2020. 7, 19, 22, 23
+
+[67] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57-81, 2020. 1
+
+[68] M. Zitnik and J. Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190-i198, 2017. 6, 21, 22
+
+## A Extended Methods
+
+
+
+Figure A.1: MPNN model blueprint used for all datasets.
+
+### A.1 Taxonomization by Hierarchical Clustering
+
+To study a systematic classification of the graph datasets, we use Ward's method [58] for hierarchical clustering analysis of their sensitivity profiles. Specifically, we first construct a perturbation sensitivity matrix where each row represents a dataset and each column represents a perturbation. An entry in this matrix is computed by taking the ratio between the test score achieved with the perturbed dataset and the test score achieved with the original dataset. As our performance metric we use the area under the receiver operating characteristic (AUROC) averaged over 10 random seed runs or 10 cross-validation folds, depending on whether a dataset has predefined data splits or not. Row-wise hierarchical clustering provides us a data-driven taxonomization of the datasets.
+
+Using AUROC as our metric, the values of the perturbation sensitivity matrix range from 0.5 to 1 when a perturbation causes a loss in predictive performance, and from 1 to 2 when it improves it. Therefore we element-wise ${\log }_{2}$ -transform the matrix to balance the two ranges and map the values onto $\left\lbrack {-1,1}\right\rbrack$ before hierarchical clustering. Yet, for a more intuitive presentation, we show the original ratio values as percentages throughout this paper.
+
+### A.2 MPNN Hyperparameter Selection
+
+We keep the model hyperparameters, illustrated in Figure A.1, identical for each dataset and perturbation combination. We use a linear node embedding layer, 5 graph convolutional layers with residual connections and batch normalization (only for inductive datasets), followed by global mean pooling (in case of graph-level prediction tasks), and finally a 2-layer MLP classifier. For training we use the Adam optimizer [34], reducing the learning rate by a factor of 0.5 upon reaching a validation loss plateau. Early stopping is based on validation split performance.
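+
+For illustration, a PyTorch Geometric sketch of this blueprint (GCN variant) is given below; the exact layer ordering, activation placement, and hyperparameters are assumptions on our part rather than the authors' implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+from torch import nn
+from torch_geometric.nn import GCNConv, global_mean_pool
+
+class GCNBlueprint(nn.Module):
+    """Linear embedding, 5 GCN layers with residual connections and batch norm,
+    mean pooling (graph-level tasks only), and a 2-layer MLP classifier."""
+    def __init__(self, in_dim, hidden, n_classes, n_layers=5, graph_level=True):
+        super().__init__()
+        self.embed = nn.Linear(in_dim, hidden)
+        self.convs = nn.ModuleList(GCNConv(hidden, hidden) for _ in range(n_layers))
+        self.norms = nn.ModuleList(nn.BatchNorm1d(hidden) for _ in range(n_layers))
+        self.graph_level = graph_level
+        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
+                                  nn.Linear(hidden, n_classes))
+
+    def forward(self, x, edge_index, batch=None):
+        h = self.embed(x)
+        for conv, norm in zip(self.convs, self.norms):
+            h = h + F.relu(norm(conv(h, edge_index)))   # residual connection
+        if self.graph_level:
+            h = global_mean_pool(h, batch)               # graph-level readout
+        return self.head(h)
+```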
+
+Implementation. Our pipeline is built using PyTorch [47] and PyG [19] with GraphGym [65] (provided under MIT License). Its modular and scalable design facilitated one of the most extensive experimental evaluations of graph datasets to date.
+
+Computing environment and used resources. All experiments were run in a shared computing cluster environment with varying CPU and GPU architectures. These involved a mix of NVidia V100 (32GB), RTX8000 (48GB), and A100 (40GB) GPUs. The resource budget for each experiment was 1 GPU, 4 CPUs, and up to 32GB system RAM.
+
+## B Extended Results
+
+### B.1 Taxonomy of Inductive Benchmarks
+
+I-1: Node-feature reliance. The top-most cluster I-1, while indifferent to structural perturbations, is highly sensitive to the node feature perturbations that comprise the left-hand-side columns in Figure 4. The presence of the image-based datasets MNIST and CIFAR10 in this cluster is not surprising, as for superpixel graphs the structure loosely follows a grid layout for all classes, meaning that determining the class solely based on structure is difficult. Additionally, the coordinate information of the superpixels is also encoded in the node features, together with average pixel intensities. A model with a powerful enough classifier component is then sufficient to achieve high accuracy using these node features alone. Furthermore, the sensitivity of these datasets to MidPass and HighPass indicates that the overall shape of the signals encoded by low frequencies is more informative for classifying the image content than the sharp superpixel transitions encoded by high frequencies. The presence of ENZYMES in I-1 is likely due to the fact that some of the node features are precomputed using graph kernels, and therefore are sufficient to distinguish the enzyme classes in the dataset when structural information is removed.
+
+I-2: Node features contain the majority of necessary structural information. For datasets in I-2, the graph structural information is again not necessary for achieving the baseline performance if the original node features are present, while the performance deteriorates noticeably if NoNodeFtrs is applied. However, unlike I-1, these datasets are much less affected overall by the perturbations on node features. Many of the node features in these datasets are themselves derived from the graph's geometry, and it seems MPNNs are able to use either the graph structure or the node features to compensate for the absence of the other when encountering perturbed graphs. It appears that the low/mid/high-pass filterings in particular are able to retain a significant amount of geometric information.
+
+The synthetic graphs of Scale-Free and Small-world (both I-2 datasets) are generated through different algorithms (BA and WS, respectively), but the node features and tasks are equivalent: The features are the local clustering coefficient and PageRank score of each node, and the task is to classify graphs based on average path length. Since the encoded features are derived from the graph structure itself, MPNNs are still able to exploit them when the original graph structure is perturbed. When the MPNNs are forced to rely on graph structure instead, they are still able to attain AUROCs above random despite some decrease.
+
+For many of the I-2 datasets, NodeDeg allows one to replace the geometric information of the original node features with new geometric information, the degree of each vertex, with great success: for some of them the original AUROC scores are recovered and even surpassed, possibly due to NodeDeg reinforcing the existing structural signal. This trend is not as pronounced when the GIN-based model is used, since GIN achieves a comparatively high level of performance even in the face of NoNodeFtrs, likely due to the higher expressiveness of GIN compared to GCN in distinguishing structural patterns.
+
+On the other hand, there are datasets of biochemical origin in this cluster, whose node features encode chemical and physical attributes, such as atom or amino acid type. Except for MUTAG, there appears to be some information encoded in these node features that is irreplaceable by graph structure or node degree information.
+
+I-3: Graph-structure reliance. The I-3 cluster is characterized by strong structural dependencies, and can be further divided into two subgroups based on their sensitivities to node feature perturbations.
+
+The first subgroup, which consists of PATTERN, COLLAB, IMDB-BINARY and the REDDIT datasets, is not affected by node feature perturbations. These datasets do not have any original informative node features and their tasks appear to be purely structure-based. Indeed, in the case of PATTERN the task is to detect structural patterns in graphs, rendering node features irrelevant for the task. On the other hand, structural perturbations such as NoEdges and FullyConn cause drastic performance drops in this group, since most of its task signals are sourced from the graph structure. This group also exhibits limited to no sensitivity towards the Frag-k2 and Frag-k3 perturbations, which test for reliance on longer-range interactions by limiting information propagation to 2 or 3 hops. We still see prominent sensitivity to Frag-k1, though, implying reliance on information from immediate neighbors. We can attribute the insensitivity for $k > 1$ to inherent graph properties for some of these datasets: For dense networks like PATTERN or ego-nets such as IMDB-BINARY and COLLAB, just 1 or 2 hops recover the original graph - for these graphs, the notion of long-range information does not exist.
+
+The second I-3 subgroup, formed by the NCI datasets and Synthie, comprises the datasets that are notably affected by all perturbations. For Synthie, this sensitivity stems from its construction. The four synthetic classes in Synthie are formed by combinations of two distributions of graph structures and two distributions of node features - elimination of either leads to a partial collapse in the distinguishability of two classes. The NCI classification tasks, similarly to related bioinformatics datasets in I-2, show a degree of reliance on the high-dimensional node features, but additionally, they are also dependent on non-local structure as they are among the datasets most adversely affected by Frag-k2 and Frag-k3.
+
+The synthetic datasets CLUSTER and SYNTHETICnew are also adversely affected by both structural and node feature perturbations. However, they stand out due to the magnitude of this effect. Many of the perturbations lead to a major decrease in AUROC and close-to-random performance. A closer inspection can provide an explanation. The task of CLUSTER is semi-supervised clustering of unlabeled nodes into six clusters, and the true cluster labels are given as node features in only a single node per cluster. NoEdges and FullyConn remove the cluster structure altogether, while NoNodeFtrs and NodeDeg remove the given cluster labels, rendering the task unsolvable in either case. In SYNTHETICnew, the two classes are derived from a "base" graph by a class-specific edge rewiring and node feature permutation, hence either graph structure or node features should differentiate the classes. Despite this expectation, we observe that the original node features alone are not sufficient, as structure perturbations have a detrimental impact on the prediction performance. On the other hand, GIN and GCN with NodeDeg can learn to distinguish the two classes even without the original node features. Thus, the original node features appear to be unnecessary and, after band-pass filtering, even provide a misleading signal.
+
+
+
+(c) Sensitivity profiles by 2-Layer GIN model; annotated by cluster assignment w.r.t. GCN model.
+
+
+
+Figure B.1: Taxonomy of inductive graph learning datasets via graph perturbations. The categorization into 3 dataset clusters is stable across the following models with only minor deviations: (a) GCN, (b) GIN, (c) 2-Layer GIN, (d) ChebNet, (e) GatedGCN, (f) GCNII. Panel (a) left and right is as shown in Figure 3a and 4, respectively, shown here for ease of comparison. Missing performance ratios (due to out-of-memory error) are shown in gray.
+
+### B.2 Taxonomy of Transductive Benchmarks
+
+All transductive datasets are relatively insensitive to structural perturbations. Unlike many of the inductive datasets that show significant reliance on the graph structure (I-3), the lowest performance achieved for a transductive dataset due to graph structure removal is still as high as 92% (Flickr), suggesting a weak dependence on the full graph structure. Furthermore, on average, considering only neighborhoods of up to 3 hops (Frag-k3) nearly retains the full potential of the model (99% ± 1.6%), revealing the lack of long-range dependencies in these node-level datasets. Such disregard for the full graph structure might be attributed to the limitations of GCN expressivity and issues such as oversquashing [55]. While these limitations are real, our observation of long-range dependencies on some graph-level tasks like NCI, coupled with our architecture being 5 layers deep with residual connections, indicates that our GCN model is capable of capturing non-local information in the 3-hop neighborhoods. Furthermore, our observed long-range independence in transductive node-level datasets is consistent with the promising results of recently developed scalable GNNs that operate on subgraphs [12, 66, 20], breaking or limiting long-range connections.
+
+T-3: Indifference to node and structure removal. The datasets in T-3 are relatively insensitive to perturbations of graph structure and also to the removal of node features (NoNodeFtrs and NodeDeg). For example, the Amazon datasets (Am-Phot and Am-Comp) always achieve near-perfect classification performance regardless of the perturbations applied, suggesting redundancy between node features and graph structure for the corresponding tasks. For these datasets, in particular GitHub, the Amazon datasets, and Twitch, more sophisticated perturbations, or combinations of perturbations, might be needed to gauge their essential characteristics.
+
+T-2: Rich node features but substitutable for structural (summary) information. T-2 contains a broad spectrum of datasets, ranging from citation networks (CF) and social networks (Coau, FBPP, LFMA) to web page networks (WikiNet, WikiCS). The considerable performance decrease due to node feature removal suggests the relevance of the node features for their tasks. For example, it is not surprising that the binary bag-of-words features of the CF datasets provide relevant information to classify papers into different fields of research, as one might expect some keywords to appear more often in one field than in another. Furthermore, using the one-hot encoded node degrees (NodeDeg) always results in better performance than NoNodeFtrs. In many cases, such as Facebook (FBPP), NodeDeg nearly retains the baseline performance, suggesting the relevance of node degree information, as a form of structural summary, for the respective tasks.
+
+WebKB-Tex, although clustered into T-2, is more of an outlier that does not clearly fit into any of the existing clusters. As we discuss further under T-1, WebKB-Tex benefits considerably from HighPass, while LowPass and MidPass severely decrease its performance.
+
+T-1: Heterophilic datasets. Three of the four datasets in T-1 (Actor, WebKB-Cor, and WebKB-Wis) are commonly referred to as heterophilic datasets [45, 39]. While WebKB-Tex (T-2) is also known to be heterophilic, it is isolated from T-1 mainly due to its insensitivity to node feature removal, suggesting the structure alone is sufficient for its prediction task.
+
+Our results show that in heterophilic datasets such as T-1 and WebKB-Tex, LowPass node feature filtering, realized by local aggregation (Eq. 3), significantly degrades the performance, unlike in the homophilic datasets. By contrast, HighPass results in better performance than LowPass. In the case of WebKB-Tex, HighPass significantly improves the performance over the baseline. This observation is related to recent findings [39] that in the case of extreme heterophily, local information, this time in the form of neighborhood patterns, may suffice to infer the correct node labels.
+
+Finally, despite heterophilic datasets [39, 2, 55, 45] attracting much recent attention, this type of dataset (T-1 and WebKB-Tex) remains scarce compared to the others (T-2 and T-3), which exhibit homophily but with different levels of reliance on node features. Thus, there is a need to collect and generate more real-world heterophilic datasets.
+
+### B.3 Correlations of Perturbations
+
+
+
+Figure B.2: Pearson correlation coefficients of the log2 performance fold change between different perturbations (w.r.t. a GCN model).
+
+We compute the Pearson correlation between all pairs of perturbations based on the log2 performance fold change. The results in Figure B.2 indicate that many perturbations correlate with each other to some extent. For both transductive and inductive benchmarks, the perturbations roughly cluster into two groups, separating node feature perturbations (see Section 2.1) and graph structure perturbations (see Section 2.2). In particular, perturbations that replace the original node features with other less informative features, including RandFtrs, NoNodeFtrs, and NodeDeg, highly correlate with one another (Pearson $r \geq 0.6$). Similarly, perturbations that severely break the graphs apart, including NoEdges, Frag-k1, and FiedlerFrag, are highly correlated (Pearson $r \geq 0.8$).
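+
+The correlation matrix itself reduces to a single NumPy call on the dataset-by-perturbation matrix of log2 fold changes; a minimal sketch, with an illustrative function name, follows.
+
+```python
+import numpy as np
+
+def perturbation_correlations(profiles: np.ndarray) -> np.ndarray:
+    """Pearson correlation between every pair of perturbations, computed across
+    datasets; `profiles` is the (datasets x perturbations) log2 fold-change matrix."""
+    return np.corrcoef(profiles, rowvar=False)
+```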
+
+## C Graph Learning Benchmarks
+
+### C.1 Inductive Datasets
+
+MNIST and CIFAR10 [17] are derived from the well-known image classification datasets. The images are converted to graphs by SLIC superpixelization; node features are the average pixel coordinates and intensities; edges are constructed based on kNN criterion.
+
+PATTERN and CLUSTER [17] are node-level inductive datasets generated from SBMs [29]. In PATTERN, the task is to identify nodes of a structurally specific subgraph; CLUSTER has a semi-supervised clustering task of predicting the true cluster assignment of nodes while observing only one labelled node per cluster.
+
+IMDB-BINARY [64] is a dataset of ego-networks, where nodes represent actors/actresses and an edge between two nodes means that the two artists played in a movie together. The task is to determine which genre (action or romance) each ego-network belongs to.
+
+D&D [16] is a protein dataset where each protein is represented by a graph with a rich node feature set. The task is to classify proteins as enzymes or non-enzymes.
+
+ENZYMES [6] is a dataset of tertiary structures from six enzymatic classes (determined by Enzyme Commission numbers). Each node represents a secondary structure element (SSE) and has edges to its three spatially closest nodes. Node features encode the type of SSE together with physical and chemical information.
+
+PROTEINS [6] is a modification of D&D [16]; the task is the same, but the protein graphs are generated as in ENZYMES. NCI1 and NCI109 [57] consist of graph representations of chemical compounds; each graph represents a molecule in which nodes represent atoms and edges represent atomic bonds. Atom types are one-hot encoded as node features. The tasks are to determine whether a given compound is active or inactive in inhibiting non-small cell lung cancer (NCI1) or ovarian cancer (NCI109).
+
+COLLAB [64] is an ego-network dataset of researchers in three different fields of physics. Each graph is a researcher's ego-network, where nodes are researchers and an edge between two nodes means the two researchers have collaborated on a paper. The task is to determine which field a given researcher ego-network belongs to.
+
+REDDIT-BINARY and REDDIT-MULTI-5K [64] graphs are derived from Reddit communities (subreddits), which are either Q&A-based or discussion-based. Each graph represents a set of interactions between users through posts and comments; nodes represent users, and an edge implies an interaction between two users. The task for REDDIT-BINARY is to determine whether a given interaction graph belongs to a Q&A or a discussion subreddit. In REDDIT-MULTI-5K, the graphs are drawn from 5 specific subreddits instead, and the task is to predict the subreddit a graph belongs to.
+
+MUTAG [1] is a dataset of Nitroaromatic compounds. Each compound is represented by a graph in which nodes represent atoms with their types one-hot encoded as node features, and edges represent atomic bonds. The task is to determine whether a given compound has mutagenic effects on Salmonella typhimurium bacteria.
+
+MalNet-Tiny [22] is a smaller version of the MalNet dataset, consisting of function call graphs of various malware on Android systems, with Local Degree Profiles as node features. In MalNet-Tiny, the task is constrained to classification into 5 different types of malware.
+
+ogbg-molhiv, ogbg-molpcba, and ogbg-moltox21 [31], adopted from MoleculeNet [60], are composed of molecular graphs, where nodes represent atoms and edges represent the atomic bonds between them. Node features include the atom type and physical/chemical information such as chirality and charge. The task is to classify molecules by whether they inhibit HIV replication (ogbg-molhiv) or by their toxicity on 12 different targets, such as receptors and stress response pathways, in a multilabel classification setting (ogbg-moltox21). In ogbg-molpcba, the task is 128-way multi-task binary classification derived from 128 bioassays from PubChem BioAssay.
+
+PPI [68, 28] contains a collection of 24 tissue-specific protein-protein interaction networks derived from the STRING database [54] using tissue-specific gold standards from [25]. 20 of the networks are used for training, 2 for validation, and 2 for testing. In each network, each protein (node) is associated with 50 different gene signatures as node features. The multi-label node classification task is to classify each gene (node) in a graph based on its gene ontology terms.
+
+SYNTHETICnew [18] is a dataset where each graph is based on a random graph $G$ with scalar node features drawn from the normal distribution. Two classes of graphs are generated from $G$ by randomly rewiring edges and permuting node attributes; the numbers of rewirings and permuted attributes are distinct for the two classes. Noise is added to the node features to make the task more difficult. The task is to determine which class a given graph belongs to.
+
+Synthie [42] is generated from two Erdös-Rényi graphs $G_{1,2}$: two sets of graphs $S_{1,2}$ are first derived by randomly adding and removing edges from $G_{1,2}$. Then, for each data point, 10 graphs are sampled from these sets and connected by randomly adding edges, resulting in a single graph. Two classes of these graphs, $C_{1,2}$, are generated by using distinct sampling probabilities for the two sets. Each of the two classes is then in turn split into two by generating two sets of feature vectors $A$ and $B$: for one class, nodes sampled from $S_1$ are assigned a vector from $A$ as node features and nodes sampled from $S_2$ a vector from $B$, and vice versa for the other class. The task is to classify which of these four classes a given graph belongs to.
+
+Small-world and Scale-free [65] datasets are generated by varying the generation parameters of the small-world [59] and scale-free [30] graph models, which mimic real-world networks. Graphs are generated over a range of average clustering coefficient and average path length parameters. In our experiments, clustering coefficients and PageRank scores constitute the node features, while the task is to classify graphs by their average path length, where the continuous path length variable is discretized by 10-way binning.
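+
+A hypothetical recreation of this recipe (parameter values are our own guesses, not those of [65]) can be written with networkx's Watts-Strogatz and Holme-Kim generators:
+
+```python
+import networkx as nx
+import numpy as np
+
+def make_graph(kind: str, seed: int) -> nx.Graph:
+    if kind == "small-world":
+        return nx.connected_watts_strogatz_graph(64, k=10, p=0.1, seed=seed)
+    # Holme-Kim model: Barabasi-Albert growth with a triad-formation step.
+    return nx.powerlaw_cluster_graph(64, m=4, p=0.5, seed=seed)
+
+graphs = [make_graph("small-world", s) for s in range(16)]
+apl = np.array([nx.average_shortest_path_length(g) for g in graphs])
+# 10-way binning of the continuous average path length into class labels.
+labels = np.digitize(apl, np.linspace(apl.min(), apl.max(), 11)[1:-1])
+```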
+
+Table C.1: Inductive benchmarks. All datasets are equipped with graph-level classification tasks, except PATTERN and CLUSTER that are equipped with inductive node-level classification tasks.
+
+| Dataset | #Graphs | Avg #Nodes | Avg #Edges | #Features | #Classes | Predef. split | Ref. |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| MNIST | 70,000 | 70.57 | 564.53 | 3 | 10 | Yes | [17] |
+| CIFAR10 | 60,000 | 117.63 | 941.07 | 5 | 10 | Yes | [17] |
+| PATTERN | 14,000 | 118.89 | 6,078.57 | 3 | 2 | Yes | [17] |
+| CLUSTER | 12,000 | 117.20 | 4,301.72 | 7 | 6 | Yes | [17] |
+| IMDB-BINARY | 1,000 | 19.77 | 96.53 | - | 2 | No | [64] |
+| D&D | 1,178 | 284.32 | 715.66 | 89 | 2 | No | [16] |
+| ENZYMES | 600 | 32.63 | 62.14 | 21 | 6 | No | [6] |
+| PROTEINS | 1,113 | 39.06 | 72.82 | 4 | 2 | No | [6] |
+| NCI1 | 4,110 | 29.87 | 32.3 | 37 | 2 | No | [57] |
+| NCI109 | 4,127 | 29.68 | 32.13 | 38 | 2 | No | [57] |
+| COLLAB | 5,000 | 74.49 | 2,457.78 | - | 3 | No | [64] |
+| REDDIT-BINARY | 2,000 | 429.63 | 497.75 | - | 2 | No | [64] |
+| REDDIT-MULTI-5K | 4,999 | 508.52 | 594.87 | - | 5 | No | [64] |
+| MUTAG | 188 | 17.93 | 19.79 | 7 | 2 | No | [1] |
+| MalNet-Tiny | 5,000 | 1,410.3 | 2,859.94 | 5 | 5 | No | [22] |
+| ogbg-molhiv | 41,127 | 25.5 | 27.5 | 9 sets | 2 | Yes | [31] |
+| ogbg-molpcba | 437,929 | 26.0 | 28.1 | 9 sets | 128x binary | Yes | [31] |
+| ogbg-moltox21 | 7,831 | 18.6 | 19.3 | 9 sets | 12x binary | Yes | [31] |
+| PPI | 24 | 2,372.67 | 66,136 | 50 | 121 | Yes | [68] |
+| SYNTHETICnew | 300 | 100 | 196 | 1 | 2 | No | [18] |
+| Synthie | 400 | 95 | 196.25 | 15 | 4 | No | [42] |
+| Small-world | 256 | 64 | 694 | 2 | 10 | No | [65] |
+| Scale-free | 256 | 64 | 501.56 | 2 | 10 | No | [65] |
+
+### C.2 Transductive Node-level Datasets
+
+WikiNet [48] contains two networks of Wikipedia pages, where edges indicate mutual links between pages, and node features are bag-of-words (BOW) of informative nouns. The task is to classify the web pages based on their average monthly traffic bins.
+
+WebKB [48] contains networks of web pages from different universities, where a (directed) edge is a hyperlink between two web pages, with BOW node features. The task is to classify the web pages into five categories: student, project, course, staff, and faculty.
+
+Actor [48] is a network of actors, where an edge indicates the co-occurrence of two actors on the same Wikipedia page, and node features are keywords about the actor on Wikipedia. The task is to classify each actor into one of five categories.
+
+WikiCS [40] is a network of Wikipedia articles related to Computer Science, where edges represent hyperlinks between articles and node features are 300-dimensional word embeddings of the articles. The task is to classify the articles into one of ten branches of the field.
+
+Flickr [66] is a network of images, where edges represent common properties between images, such as a shared location, gallery, or comments by the same users. The node features are BOW of image descriptions, and the task is to predict one of 7 tags for each image.
+
+CF (CitationFull) [5] contains citation networks where nodes are papers and edges represent citations, with BOW representations of the papers as node features. The task is to classify the papers based on their topics.
+
+DzEu (DeezerEurope) [50] is a network of Deezer users from European countries where nodes are the users and edges are mutual follower relationships. The task is to predict the gender of users.
+
+LFMA (LastFMAsia) [50] is a network of LastFM users from Asian countries where edges are mutual follower relationships between them. The task is to predict the location of users.
+
+Amazon [52] contains Amazon Computers and Amazon Photo. They are segments of the Amazon co-purchase graph, where nodes represent goods, edges indicate that two goods are frequently bought together, node features are bag-of-words encoded product reviews, and class labels are given by the product category.
+
+Table C.2: Transductive benchmarks with node-level classification tasks.
+
+| Dataset | #Nodes | #Edges | #Node feat. | #Pred. classes | Predef. split | Ref. |
+| --- | --- | --- | --- | --- | --- | --- |
+| WikiNet-cham | 2,277 | 72,202 | 128 | 5 | Yes | [48] |
+| WikiNet-squir | 5,201 | 434,146 | 128 | 5 | Yes | [48] |
+| WebKB-Cor | 183 | 298 | 1,703 | 10 | Yes | [48] |
+| WebKB-Wis | 251 | 515 | 1,703 | 10 | Yes | [48] |
+| WebKB-Tex | 183 | 325 | 1,703 | 10 | Yes | [48] |
+| Actor | 7,600 | 30,019 | 932 | 10 | Yes | [48] |
+| WikiCS | 11,701 | 297,110 | 300 | 10 | Yes | [40] |
+| Flickr | 89,250 | 899,756 | 500 | 7 | Yes | [66] |
+| CF-Cora | 19,793 | 126,842 | 8,710 | 70 | No | [5] |
+| CF-CoraML | 2,995 | 16,316 | 2,879 | 7 | No | [5] |
+| CF-CiteSeer | 4,230 | 10,674 | 602 | 6 | No | [5] |
+| CF-DBLP | 17,716 | 105,734 | 1,639 | 4 | No | [5] |
+| CF-PubMed | 19,717 | 88,648 | 500 | 3 | No | [5] |
+| DzEu | 28,281 | 185,504 | 128 | 2 | No | [50] |
+| LFMA | 7,624 | 55,612 | 128 | 18 | No | [50] |
+| Am-Comp | 13,752 | 491,722 | 767 | 10 | No | [52] |
+| Am-Phot | 7,650 | 238,162 | 745 | 8 | No | [52] |
+| Coau-CS | 18,333 | 163,788 | 6,805 | 15 | No | [52] |
+| Coau-Phy | 34,493 | 495,924 | 8,415 | 5 | No | [52] |
+| Twitch-EN | 7,126 | 77,774 | 128 | 2 | No | [49] |
+| Twitch-ES | 4,648 | 123,412 | 128 | 2 | No | [49] |
+| Twitch-DE | 9,498 | 315,774 | 128 | 2 | No | [49] |
+| Twitch-PT | 1,912 | 64,510 | 128 | 2 | No | [49] |
+| Github | 37,700 | 578,006 | 128 | 2 | No | [49] |
+| FBPP | 22,470 | 342,004 | 128 | 4 | No | [49] |
+
+Coau (Coauthor) [52] contains Coauthor CS and Coauthor Physics. They are co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. Nodes are authors and are connected by an edge if they co-authored a paper; node features represent keywords of each author's papers, and class labels indicate the most active field of study of each author.
+
+Twitch [49] contains Twitch user-user networks of gamers who stream in a certain language, where nodes are the users themselves and edges are mutual friendships between them. The task is to predict whether a streamer uses explicit language. Due to low baseline performance even after a thorough hyperparameter search, we excluded Twitch-RU and Twitch-FR from our main analysis.
+
+Github [49] is a network of GitHub developers where nodes are developers who have starred at least 10 repositories and edges are mutual follower relationships between them. The task is to predict whether the user is a web or a machine learning developer.
+
+FBPP (FacebookPagePage) [49] is a network of verified Facebook pages that liked each other, where nodes correspond to official Facebook pages, edges to mutual likes between sites. The task is multi-class classification of the site category.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..feb8063ba9a5cbf088855ef9fdacdcdad5cff149
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/EM-Z3QFj8n/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,163 @@
+§ TAXONOMY OF BENCHMARKS IN GRAPH REPRESENTATION LEARNING
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in the selection and development of adequate graph benchmarks and in better-informed evaluation of future GNN methods. Finally, our approach and implementation in the GTaxoGym package${}^{1}$ are extendable to multiple graph prediction task types and future datasets.
+
+§ 1 INTRODUCTION
+
+Machine learning for graph representation learning (GRL) has seen rapid development in recent years [27]. Originally inspired by the success of convolutional neural networks in regular Euclidean domains, thanks to their ability to leverage data-intrinsic geometries, classical graph neural network (GNN) models [15, 35, 56] extend those principles to the irregular graph domain. Further advances in the field have led to a wide selection of complex and powerful GNN architectures. Some models are provably more expressive than others [63, 43], can leverage multi-resolution views of graphs [41], or can account for implicit symmetries in graph data [9]. Comprehensive surveys of graph neural networks can be found in Bronstein et al. [8], Wu et al. [61], and Zhou et al. [67].
+
+Most graph-structured data encode information in graph structures and node features. The structure of each graph represents relationships (i.e., edges) between different nodes, while the node features represent quantities of interest at each individual node. For example, in citation networks, nodes represent papers and edges represent citations between the papers. On such networks, node features often capture the presence or absence of certain keywords in each paper, encoded in binary feature vectors. In graphs modeling social networks, each node represents a user, and the corresponding node features often include user statistics like gender, age, or binary encodings of personal interests.
+
+Intuitively, the power of GNNs lies in relating local node-feature information to global graph structure information, typically achieved by applying a cascade of feature aggregation and transformation steps. In aggregation steps, information is exchanged between neighboring nodes, while transformation steps apply a (multi-layer) perceptron to feature vectors of each node individually. Such architectures are commonly referred to as Message Passing Neural Networks (MPNN) [24].
+
+Historically, GNN methods have been evaluated on a small collection of datasets [44], many of which originated from the development of graph kernels. The limited quantity, size, and variety of these datasets have rendered them insufficient to serve as distinguishing benchmarks [17, 46]. Therefore, recent work has focused on compiling a set of large(r) benchmarking datasets across diverse graph domains [17, 31]. Despite these efforts and the introduction of new datasets, it is still not well understood what aspects of a dataset most influence the performance of GNNs. Which is more important, the geometric structure of the graph or the node features? Are long-range interactions crucial, or are short-range interactions sufficient for most tasks? This lack of understanding of dataset properties and of their similarities makes it difficult to select a benchmarking suite that would enable comprehensive evaluation of GNN models. Even when an array of seemingly different datasets is used, they may be probing similar aspects of graph representation learning.
+
+${}^{1}$ https://github.com/G-Taxonomy-Workgroup/GTaxoGym
+
+Figure 1: Overview of our pipeline to taxonomize graph learning datasets.
+
+Leveraging symmetries and other geometric priors in graph data is crucial for generalizable learning [9]. While invariance or equivariance to some transformations is inherent, invariance to others may only be empirically or partially apparent. Motivated by this observation, we propose to use the lens of empirical transformation sensitivity to gauge how task-related information is encoded in graph datasets and subsequently taxonomize their use as benchmarks in graph representation learning. Our approach is illustrated in Figure 1. Namely, we list our contributions in this study as:
+
+1. We develop a graph dataset taxonomization framework that is extendable to both new datasets and evaluation of additional graph/task properties,
+
+2. Using this framework, we provide the first taxonomization of GNN (and GRL) benchmarking datasets, collected from TUDatasets [44], OGB [31] and other sources,
+
+3. Through the resulting taxonomy, we provide insights about existing datasets and guide better dataset selection in future benchmarking of GNN models.
+
+§ 2 METHODS
+
+As a proxy for invariance or sensitivity to graph perturbations, we study the changes in GNN performance on perturbed versions of each dataset. These perturbations are designed to eliminate or emphasize particular types of information embedded in the graphs. We define an empirical sensitivity profile of a dataset as a vector where each element is the performance of a GNN after a given perturbation, reported as a percentage of the network's performance on the original dataset. In particular, we use a set of 13 perturbations, visualized in Figure 2. Of these perturbations, 6 are designed to perturb node features, while keeping the graph structure intact, whereas the remaining 7 keep the node attributes the same, but manipulate the graph structure.
+
+For the purpose of these perturbations, we consider all graphs to be undirected and unweighted, and assume they all have node features, but not edge features. These assumptions hold for most datasets we use in this study. However, if necessary, we preprocess the data by symmetrizing each graph's adjacency matrix and dropping any edge attributes. Formally, let $G = \left( {V,E,\mathbf{X}}\right)$ be an undirected, unweighted, attributed graph with node set $V$ of cardinality $\left| V\right| = n$ , edge set $E \subset V \times V$ , and a matrix of $d$ -dimensional node features $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ . We let $\mathbf{M} \in {\mathbb{R}}^{n \times n}$ denote the adjacency matrix of each graph, where $\mathbf{M}\left( {u,v}\right) = 1$ if $\left( {u,v}\right) \in E$ and zero otherwise.
+
+Several of our perturbations are based on spectral graph theory, which represents graph signals in a spectral domain analogous to classical Fourier analysis. We define the graph Laplacian $\mathbf{L} := \mathbf{D} - \mathbf{M}$ and the symmetric normalized graph Laplacian $\mathbf{N} := \mathbf{D}^{-\frac{1}{2}}\mathbf{L}\mathbf{D}^{-\frac{1}{2}} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{M}\mathbf{D}^{-\frac{1}{2}}$, where $\mathbf{D}$ is the diagonal degree matrix. Both $\mathbf{L}$ and $\mathbf{N}$ are positive semi-definite and admit orthonormal eigendecompositions $\mathbf{L} = \mathbf{\Phi}\mathbf{\Lambda}\mathbf{\Phi}^{\top}$ and $\mathbf{N} = \widetilde{\mathbf{\Phi}}\widetilde{\mathbf{\Lambda}}\widetilde{\mathbf{\Phi}}^{\top}$. By convention, we order the eigenvalues and corresponding eigenvectors $\{(\lambda_i, \phi_i)\}_{0 \leq i \leq n-1}$ of $\mathbf{L}$ (and similarly for $\mathbf{N}$) in ascending order $0 = \lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}$. The eigenvectors $\{\phi_i\}_{0 \leq i \leq n-1}$ constitute a basis of the space of graph signals and can be considered as generalized Fourier modes. The eigenvalues $\{\lambda_i\}_{0 \leq i \leq n-1}$ characterize the variation of these Fourier modes over the graph and can be interpreted as (squared) frequencies.
+
+Figure 2: Node feature and graph structure perturbations of the first graph in ENZYMES. The color coding of nodes illustrates their feature values, except (k-n) where the fragment assignment is shown.
+
+§ 2.1 NODE FEATURE PERTURBATIONS
+
+We first consider two perturbations that alter local node features, setting them either to a fixed constant (w.l.o.g., one) for all nodes, or to a one-hot encoding of the degree of the node. We refer to these perturbations as NoNodeFtrs (since constant node features carry no additional information) and NodeDeg, respectively. In addition, we consider a random node feature perturbation (RandFtrs) by sampling a one-dimensional feature for each node uniformly at random within $\left\lbrack {-1,1}\right\rbrack$ . Sensitivity to these perturbations, exhibited by a large decrease in predictive performance, may indicate that a dataset (or task) is dominated by highly informative node features.
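+
+A minimal sketch of these three perturbations, assuming dense PyTorch tensors (the helper names are ours, not part of any released code):
+
+```python
+import torch
+
+def no_node_ftrs(x: torch.Tensor) -> torch.Tensor:
+    """NoNodeFtrs: replace all features with a constant (w.l.o.g., one)."""
+    return torch.ones(x.size(0), 1)
+
+def node_deg(adj: torch.Tensor, max_degree: int) -> torch.Tensor:
+    """NodeDeg: one-hot encode each node's degree (clamped to max_degree)."""
+    deg = adj.sum(dim=1).long().clamp(max=max_degree)
+    return torch.nn.functional.one_hot(deg, num_classes=max_degree + 1).float()
+
+def rand_ftrs(x: torch.Tensor) -> torch.Tensor:
+    """RandFtrs: one feature per node, drawn uniformly from [-1, 1]."""
+    return 2 * torch.rand(x.size(0), 1) - 1
+```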
+
+We also develop spectral node feature perturbations. As in Euclidean settings, the Fourier decomposition can be used to decompose graph signals into a set of canonical signals, called Fourier modes, which are organized according to increasing variation (or frequency). In Euclidean Fourier analysis, these modes are sinusoidal waves oscillating at different frequencies. A standard practice in audio signal processing is to remove noise from a signal by identifying and removing certain Fourier modes or frequency bands. We generalize this technique to graph datasets and systematically remove certain graph Fourier modes to probe the importance of the corresponding frequency bands.
+
+In this perturbation, we use the frequencies derived from the symmetric normalized graph Laplacian $\mathbf{N}$ and split them into three roughly equal-sized frequency bands (low, mid, high), i.e., bins of subsequent eigenvalues. To assess the importance of each of the frequency bands, we then apply hard band-pass filtering to the graph signals (node feature vectors), i.e., we project the signals on the span of the selected Fourier modes. More specifically, for each band, we let ${\mathbf{I}}_{\text{ band }}$ be a diagonal matrix with diagonal elements equal to one if the corresponding eigenvalue is in the band, and zero otherwise. Then, the hard band-pass filtered signal is computed as
+
+$$
+{\mathbf{X}}_{\text{ band }} = \widetilde{\mathbf{\Phi }}{\mathbf{I}}_{\text{ band }}{\widetilde{\mathbf{\Phi }}}^{\top }\mathbf{X}. \tag{1}
+$$
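+
+For concreteness, a small-graph sketch of Eq. (1) via a dense eigendecomposition (assuming numpy and a precomputed normalized Laplacian):
+
+```python
+import numpy as np
+
+def band_pass(x: np.ndarray, n_lap: np.ndarray, band: str) -> np.ndarray:
+    """Project node features x onto one of three frequency bands of N."""
+    eigvals, phi = np.linalg.eigh(n_lap)  # eigenvalues in ascending order
+    n = len(eigvals)
+    lo, hi = n // 3, 2 * n // 3  # roughly equal-sized bins of eigenvalues
+    idx = {"low": slice(0, lo), "mid": slice(lo, hi), "high": slice(hi, n)}[band]
+    mask = np.zeros(n)
+    mask[idx] = 1.0  # diagonal of I_band
+    return phi @ (mask[:, None] * (phi.T @ x))  # Phi I_band Phi^T X
+```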
+
+The above band-pass filtering perturbation enables a precise selection of the frequency bands. However, it requires a full eigendecomposition of the normalized graph Laplacian, which is impractical for large graphs. We therefore provide an alternative approach based on wavelet bank filtering [13]. This leverages the fact that polynomial filters $h$ of the normalized graph Laplacian directly transform the spectrum via $h\left( \mathbf{N}\right) = \widetilde{\mathbf{\Phi }}h\left( \widetilde{\mathbf{\Lambda }}\right) {\widetilde{\mathbf{\Phi }}}^{\top }$ , yielding the frequency response $h\left( \lambda \right)$ for any eigenvalue $\lambda$ of N. This is usually done by taking the symmetrized diffusion matrix
+
+$$
+\mathbf{T} = \frac{1}{2}\left( {\mathbf{I} + {\mathbf{D}}^{-\frac{1}{2}}{\mathbf{{MD}}}^{-\frac{1}{2}}}\right) = \frac{1}{2}\left( {2\mathbf{I} - \mathbf{N}}\right) . \tag{2}
+$$
+
+By construction, $\mathbf{T}$ admits the same eigenbasis as $\mathbf{N}$, but its eigenvalues are mapped from $[0,2]$ to $[0,1]$ via the frequency response $h(\lambda) = 1 - \lambda/2$. As a result, large eigenvalues are mapped to small values (and vice versa). Next, we construct diffusion wavelets [15] that consist of differences of dyadic powers $2^k, k \in \mathbb{N}_0$ of $\mathbf{T}$, i.e., $\mathbf{\Psi}_k = \mathbf{T}^{2^{k-1}} - \mathbf{T}^{2^k}$, which act as bandpass filters on the signal. Intuitively, this operator "compares" two neighborhoods of different sizes (radius $2^{k-1}$ and $2^k$) at each node. Diffusion wavelets are usually maintained in a wavelet bank $\mathcal{W}_K = \{\mathbf{\Psi}_k\}_{k=0}^{K} \cup \{\mathbf{\Phi}_K\}$, which contains an additional highpass filter $\mathbf{\Psi}_0 = \mathbf{I} - \mathbf{T}$ and a lowpass filter $\mathbf{\Phi}_K = \mathbf{T}^{2^K}$. In our experiments, we choose $K = 1$, resulting in the following low-, mid-, and highpass filtered node features:
+
+$$
+{\mathbf{X}}_{\text{ high }} = \left( {\mathbf{I} - \mathbf{T}}\right) \mathbf{X},\;{\mathbf{X}}_{\text{ mid }} = \left( {\mathbf{T} - {\mathbf{T}}^{2}}\right) \mathbf{X},\;{\mathbf{X}}_{\text{ low }} = {\mathbf{T}}^{2}\mathbf{X}. \tag{3}
+$$
+
+These filters correspond to the frequency responses $h_{\text{high}}(\lambda) = \lambda/2$, $h_{\text{mid}}(\lambda) = (1 - \lambda/2) - (1 - \lambda/2)^2$, and $h_{\text{low}}(\lambda) = (1 - \lambda/2)^2$. The low-pass filter preserves low-frequency information while suppressing high-frequency information, whereas the high-pass filter does the opposite. The mid-pass filter attenuates all frequencies, but it preserves considerably more middle-frequency information than high- or low-frequency information.
+
+This filtering may therefore be interpreted as an approximation of the hard band-pass filtering discussed above. From the spatial message passing perspective, low-pass filtering is equivalent to local averaging of the node features, which has profound implications for the homophilic and heterophilic characteristics of the datasets (Sec. 3.2). Finally, since the computations in (3) can be carried out via sparse matrix multiplications, they scale well to large graphs. We therefore use wavelet bank filtering for the datasets with larger graphs considered in Sec. 3.2, while for the smaller graphs, considered in Sec. 3.1, we employ the direct band-pass filtering approach.
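+
+A sketch of the $K = 1$ filters in Eq. (3), using only sparse matrix products, which is what makes this variant viable for large graphs (scipy assumed; isolated nodes are guarded against):
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def wavelet_filtered_features(adj: sp.spmatrix, x: np.ndarray):
+    """Return (X_low, X_mid, X_high) of Eq. (3) for adjacency `adj`."""
+    n = adj.shape[0]
+    deg = np.asarray(adj.sum(axis=1)).ravel()
+    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1)))  # guard deg = 0
+    t = 0.5 * (sp.eye(n) + d_inv_sqrt @ adj @ d_inv_sqrt)  # Eq. (2)
+    tx = t @ x
+    ttx = t @ tx
+    return ttx, tx - ttx, x - tx  # T^2 X, (T - T^2) X, (I - T) X
+```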
+
+§ 2.2 GRAPH STRUCTURE PERTURBATIONS
+
+The following perturbations act on the graph structure by altering the adjacency matrix. By removing all edges (NoEdges) or making the graph fully connected (FullyConn), we can eliminate the structural information completely and essentially turn the graph into a set. The difference between the two perturbations lies in whether all nodes are processed independently or together. However, FullyConn is only applied to the inductive datasets in Sec. 3.1 due to computational limitations. Furthermore, we consider a degree-preserving random edge rewiring perturbation (RandRewire). In each step, we randomly sample a pair of edges and randomly exchange their end nodes. We repeat this process without replacement until $50\%$ of the edges have been rewired.
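+
+An approximate sketch of RandRewire built on networkx's degree-preserving double-edge swap; note that the paper samples without replacement, which this simplification does not enforce:
+
+```python
+import networkx as nx
+
+def rand_rewire(g: nx.Graph, frac: float = 0.5, seed: int = 0) -> nx.Graph:
+    """Randomly rewire ~frac of the edges while preserving all degrees."""
+    g = g.copy()
+    nswap = int(frac * g.number_of_edges()) // 2  # each swap rewires two edges
+    if nswap > 0:  # double_edge_swap also requires at least 4 nodes
+        nx.double_edge_swap(g, nswap=nswap, max_tries=100 * nswap, seed=seed)
+    return g
+```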
+
+To inspect the importance of local vs. global graph structure, we designed the Frag- $k$ perturbations, which randomly partition the graph into connected components consisting of nodes whose distance to a seed node is less than $k$ . Specifically, we randomly draw one seed node at a time and extract its $k$ -hop neighborhood by eliminating all edges between this new fragment and the rest of the graph; we repeat this process on the remaining graph until the whole graph is processed. A smaller $k$ implies smaller components, and hence discards the global structure and long-range interactions.
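+
+One possible reading of Frag-$k$ ("distance to a seed node is less than $k$") is sketched below with networkx; the fragment of a seed is its $(k-1)$-hop ball within the not-yet-processed part of the graph:
+
+```python
+import random
+import networkx as nx
+
+def frag_k(g: nx.Graph, k: int, seed: int = 0) -> nx.Graph:
+    rng = random.Random(seed)
+    g = g.copy()
+    remaining = set(g.nodes)
+    while remaining:
+        root = rng.choice(sorted(remaining))
+        # Nodes at distance < k from the seed form one fragment.
+        dists = nx.single_source_shortest_path_length(
+            g.subgraph(remaining), root, cutoff=k - 1)
+        frag = set(dists)
+        # Cut all edges between the fragment and the rest of the graph.
+        g.remove_edges_from([(u, v) for u in frag for v in g[u] if v not in frag])
+        remaining -= frag
+    return g
+```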
+
+Graph fragmentations can also be constructed using spectral graph theory. In our taxonomization, we adopt one such method, which we refer to as Fiedler fragmentation (FiedlerFrag) (see [33] and the references therein). When the graph $G$ is connected, $\phi_0$, the eigenvector of the graph Laplacian $\mathbf{L}$ corresponding to $\lambda_0 = 0$, is constant. The eigenvector $\phi_1$ corresponding to the next smallest eigenvalue, $\lambda_1$, is known as the Fiedler vector [21]. Since $\phi_0$ is constant, it follows that $\phi_1$ has zero average. This motivates partitioning the graph into two sets of vertices, one where $\phi_1$ is positive and the other where $\phi_1$ is negative. We refer to this process as binary Fiedler fragmentation. This heuristic is used to construct the ratio cut for a connected graph [26]. The ratio cut partitions a connected graph into two disjoint connected components $V = U \cup W$ such that the objective $|E(U,W)| / (|U| \cdot |W|)$ is minimized, where $E(U,W) := \{(u,w) \in E : u \in U, w \in W\}$ is the set of edges removed when fragmenting $G$ accordingly. This objective combines the min-cut criterion (numerator) with a preference for a balanced partition (denominator).
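+
+A minimal sketch of one binary Fiedler fragmentation step (scipy assumed; the graph is given by a sparse adjacency matrix and assumed connected):
+
+```python
+import numpy as np
+import scipy.sparse as sp
+from scipy.sparse.csgraph import laplacian
+from scipy.sparse.linalg import eigsh
+
+def binary_fiedler_split(adj: sp.spmatrix):
+    """Split nodes by the sign of the Fiedler vector of L = D - M."""
+    lap = laplacian(adj.astype(float))
+    vals, vecs = eigsh(lap, k=2, which="SM")  # two smallest eigenpairs
+    fiedler = vecs[:, np.argsort(vals)[1]]    # eigenvector for lambda_1
+    return np.flatnonzero(fiedler >= 0), np.flatnonzero(fiedler < 0)
+```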
+
+FiedlerFrag is based on iteratively applying binary Fiedler fragmentation. In each step, we separate the graph into its connected components and apply binary Fiedler fragmentation to the largest component. We repeat this process until either we reach 200 iterations or the size of the largest connected component falls below 20. In contrast to the random fragmentation Frag-$k$, this perturbation preserves densely connected regions of the graph and eliminates the connections between them. Thus, FiedlerFrag tests the importance of inter-community message flow. Due to computational limits, we only apply FiedlerFrag to the inductive datasets in Sec. 3.1, for which this computation is feasible.
+
+§ 2.3 DATA-DRIVEN TAXONOMIZATION BY HIERARCHICAL CLUSTERING
+
+To systematically classify the graph datasets, we use Ward's method [58] for hierarchical clustering analysis of their sensitivity profiles. The sensitivity profiles are established empirically by contrasting the performance of a GNN model on a perturbed dataset with its performance on the original dataset. To quantify this performance change, we use the ${\log }_{2}$-transformed ratio of test AUROC (area under the ROC curve). Thus, a sensitivity profile is a 1-D vector with as many elements as there are perturbation experiments. See Figure 1 and Appendix A for further details.
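+
+The clustering step itself is standard; a sketch with scipy (the profile matrix is a placeholder for the real log2 fold changes):
+
+```python
+import numpy as np
+from scipy.cluster.hierarchy import fcluster, linkage
+
+profiles = np.random.default_rng(0).normal(size=(48, 13))  # datasets x perturbations
+link = linkage(profiles, method="ward")  # agglomerative merge tree (Ward's method)
+clusters = fcluster(link, t=3, criterion="maxclust")  # e.g., cut into 3 clusters
+print(clusters)  # cluster label per dataset
+```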
+
+Figure 3: Visualization of (a) inductive and (b) transductive datasets based on PCA of their perturbation sensitivity profiles according to a GCN model. The datasets are labeled according to their taxonomization by hierarchical clustering, shown in Figures 4 and 6, which corroborates the clustering that emerges in the PCA plots. The bottom part shows the loadings of the first two principal components and (in parentheses) the percentage of variance explained by each of them.
+
+In order to generate sensitivity profiles, we must select suitable GNN models based on several practical considerations: (i) the model has to be expressive enough to efficiently leverage the aspects of the node features and graph structure that we perturb, otherwise our analysis will not be able to uncover reliance on these properties; (ii) the model needs to be general enough to be applicable to a wide variety of datasets, avoiding dataset-specific adjustments that may lead to profiles that are not comparable between datasets. Therefore, we did not aim for specialized models that maximize performance, but rather for models that (a) achieve at least baseline performance comparable to published works over all datasets, (b) have manageable computational complexity to facilitate large-scale experimentation, and (c) use well-established and theoretically well-understood architectures.
+
+With these criteria in mind, we focused on two popular MPNN models in our analysis: GCN [35] and GIN [63]. The original GCN serves as an ideal starting point as its abilities and limitations are well-understood. However, we also wanted to perform taxonomization through a provably more expressive and recent method, which motivated our selection of GIN as the second architecture. We emphasize that the main focus here is not to provide a benchmarking of GNN models per se, but rather to address the taxonomization of graph datasets (and accompanying tasks) used in such benchmarks. Nevertheless, we have also generated sensitivity profiles by additional models in order to comparatively demonstrate the robustness of our approach: 2-Layer GIN, ChebNet [15], GatedGCN [7] and GCN II [11]; see Figure 5.
+
+§ 3 RESULTS
+
+Each of the 48 datasets we consider is equipped with either a node classification or graph classification task. In the case of node classification, we further differentiate between the inductive setting, in which learning is done on a set of graphs and the generalization occurs from a training set of graphs to a test set, and the transductive setting, in which learning is done in one (large) graph and the generalization occurs between subsets of nodes in this graph. Graph classification tasks, by contrast, always appear in an inductive setting. The only major difference between graph classification and inductive node classification is that prior to final prediction, the hidden representations of all nodes are pooled into a single graph-level representation. In the following two subsections, we provide an analysis of the sensitivity profiles for datasets with inductive and transductive tasks.
+
+Figure 4: Taxonomy of inductive graph learning datasets via graph perturbations. For each dataset and perturbation combination, we show the GCN model performance relative to its performance on the unmodified dataset.
+
+§ 3.1 TAXONOMY OF INDUCTIVE BENCHMARKS
+
+Datasets. We examine a total of 23 datasets, 20 of which are equipped with a graph-classification task (inductive by nature) and the other three are equipped with an inductive node-classification task. Of these datasets, 17 are derived from real-world data, while the other six are synthetically generated.
+
+For real-world data, we consider several domains. Biochemistry tasks are the most ubiquitous, including compound classification based on effects on cancer or HIV inhibition (NCI1 & NCI109 [57], ogbg-molhiv [31]), protein-protein interaction prediction (PPI [68, 28]), multilabel compound classification based on toxicity on biological targets (ogbg-moltox21 [31]), and multiclass classification of enzymes (ENZYMES [6]). We also consider superpixel-based graph classification as an extension of image classification (MNIST & CIFAR10 [17]), collaboration datasets (IMDB-BINARY & COLLAB [64]), and social graphs (REDDIT-BINARY & REDDIT-MULTI-5K [64]).
+
+For synthetic data, we have a concrete understanding of their graph domain properties and of how these properties relate to their prediction task. This allows us to derive a deeper understanding of their sensitivity profiles. The six synthetic datasets in our study make use of a varied set of graph generation algorithms. Small-world [65] is based on graph generation with the Watts-Strogatz (WS) model; the task is to classify graphs based on average path length. Scale-free [65] retains the same task definition, but the graph generation algorithm is an extension of the Barabási-Albert (BA) model proposed by Holme and Kim [30]. PATTERN and CLUSTER are node-level classification tasks generated with stochastic block models (SBM) [29]. Synthie [42] graphs are derived by first sampling graphs from the well-known Erdös-Rényi (ER) model, then deriving each class of graphs by a specific graph surgery and by sampling node features from a distinct distribution for each class. Similarly, SYNTHETICnew [18] graphs are generated from a random graph, where different classes are formed by specific modifications of the original graph structure and node features. Further details of the dataset definitions and synthetic graph generation algorithms are provided in Appendix C.
+
+Insights. Here we itemize the main insights into inductive datasets. Our full taxonomy is shown in Figures 4 and 3a, with a detailed analysis of individual clusters given in Appendix B.1.
+
+ * Three distinct groups of datasets. We identify a categorization into three dataset clusters I-$\{1,2,3\}$ that emerge from both the hierarchical clustering and the PCA. The datasets in I-$\{1,2\}$ exhibit stronger node feature dependency and do not encode crucial information in the graph structure. The main differentiating factor between I-1 and I-2 is their relative sensitivity to node feature perturbations, in particular, how well NodeDeg can substitute for the original node features. On the other hand, datasets in I-3 rely considerably more on graph structure for correct task prediction. This is also reflected by the first two principal components (Figure 3a), where PC1 approximately corresponds to structural perturbations and PC2 to node feature perturbations.
+
+ * No clear clustering by dataset domain. While datasets that are derived in a similar fashion cluster together (e.g., REDDIT-* datasets), in general, each of the three clusters contains datasets from a variety of application domains. Not all molecular datasets behave alike; e.g., ogbg-mol* datasets in I-2 considerably differ from NCI* datasets in I-3.
+
+ * Synthetic datasets do not fully represent real-world scenarios. CLUSTER, SYNTHETICnew, and PATTERN lie at the periphery of the PCA embeddings, suggesting that existing synthetic datasets do not resemble the type of complexity encountered in real-world data. Hence, one should use synthetic datasets in conjunction with real-world datasets to comprehensively evaluate GNN performance rather than solely relying on synthetic ones.
+
+ * Representative set. One can now select a representative subset of all datasets to cover the observed heterogeneity among the datasets. Our recommendation: CIFAR10 from I-1; D&D, ogbg-molhiv from I-2; NCI1, COLLAB, REDDIT-MULTI-5K, CLUSTER from I-3.
+
 * Robustness w.r.t. GNN choice. In addition to GCN, we have performed our perturbation analysis w.r.t. GIN [63], 2-layer GIN, ChebNet [15], GatedGCN [7], and GCN II [11]. These models were selected to cover a variety of inductive model biases: GIN is provably 1-WL expressive, ChebNet uses a higher-order approximation of the Laplacian, GatedGCN employs gating akin to attention, and GCN II leverages skip connections and identity mapping to alleviate oversmoothing. We also tested a 2-layer GIN to probe robustness to the number of message-passing layers. The taxonomies w.r.t. the other models (Figure B.1) are congruent with that of GCN. Given the differing inductive biases and representational capacities, some differences in the sensitivity profiles are not only expected but desired to validate their roles in benchmarking. The resulting profiles can be used for a detailed comparative analysis of these models, but the overall conclusions remain consistent. This consistency is further validated by our correlation analysis among these models, shown in Figure 5. The Pearson correlation coefficients of all pairs are above 90%, implying that our taxonomy is robust w.r.t. different GNNs and numbers of layers.
+
+Figure 5: Pearson correlation between profiles derived by six GNN models.
+
+§ 3.2 TAXONOMY OF TRANSDUCTIVE BENCHMARKS
+
+Datasets. We selected a wide variety of 25 transductive datasets with node classification tasks, including citation networks, social networks, and other web-page-derived networks (see Appendix C). In citation networks, such as CitationFull (CF) [5], nodes correspond to papers and edges to the citations linking them. In web-page-derived networks, like WikiNet [48], Actor [48], and WikiCS [40], edges correspond to hyperlinks between pages. In social networks, like Deezer (DzEu) [50], LastFM (LFMA) [50], Twitch [49], Facebook (FBPP) [49], Github [49], and Coau [52], nodes and edges are based on a type of relationship, such as mutual friendship or co-authorship. Flickr [66] and Amazon [52] are constructed based on other notions of similarity between entities, such as co-purchasing and image property similarities. WebKB [48] contains networks of university web pages connected via hyperlinks. It is an example of a heterophilic dataset [45], since immediate neighbor nodes do not necessarily share the same labels (which correspond to a user's role, such as faculty or graduate student). By contrast, Cora, CiteSeer, and PubMed are known to be homophilic datasets, where nodes within a neighborhood are likely to share the same label. In fact, no less than 60% of the nodes in these networks have neighborhoods that share the same node label as the central node [40].
+
+Insights. Below we list the main insights into transductive graph datasets and their taxonomy (Figures 6 and 3b). We refer the reader to Appendix B.2 for the analysis of individual clusters.
+
+Figure 6: Taxonomization of transductive datasets based on sensitivity profiles w.r.t. a GCN model.
+
 * Transductive datasets are uniformly insensitive to structural perturbations. Sensitivity profiles of all transductive datasets show high robustness to all graph structure perturbations. This is in stark contrast with the inductive datasets, where the largest cluster, I-3, is defined by high sensitivity to structural perturbations. The graph connectivity may simply not be vital to every dataset/task; e.g., in WikiCS, word embeddings of Wikipedia pages may be sufficient for categorization without hyperlinks. While the observation that no dataset significantly depends on structural information is startling, it corroborates reports that MLPs or similar models augmented with label propagation outperform GNNs on several of these transductive datasets [23, 32].
+
 * Three distinct groups of datasets. The transductive datasets are also categorized into three clusters, T-$\{1,2,3\}$. T-1 consists of heterophilic datasets, such as WebKB and Actor [45, 39]. These are well-separated from the others, as seen in the right half of the PCA plot (Figure 3b), primarily via PC1, and are characterized by a performance drop due to the removal of the original node features (NoNodeFtrs, RandFtrs) and their replacement by node degrees (NodeDeg). T-3 is indifferent to both node feature and structure removal, implying redundancy between node features and graph structure for their tasks. T-2 datasets, on the other hand, experience significant performance degradation under NoNodeFtrs and RandFtrs, yet these drops are recovered under NodeDeg. This indicates that T-2 datasets have tasks for which structural summary information is sufficient, perhaps due to homophily.
+
 * Representative set. Many datasets have very similar sensitivity profiles; thus, factoring in also the graph size and the original AUROC (avoiding saturated datasets), we make the following recommendation: WebKB-Wis and Actor from T-1; WikiNet-cham, WikiCS, and Flickr from T-2; WikiNet-squir, Twitch-EN, and GitHub from T-3.
+
+§ 4 DISCUSSION
+
+Our results quantify the extent to which graph features or structures are more important for the downstream tasks, an important question brought up in classical works on graph kernels [37, 51]. We observed that more than half of the datasets contain rich node features. On average, excluding these features reduces GNN prediction performance more than excluding the entire graph structure, especially for transductive node-level tasks. Furthermore, low-frequency information in node features appears to be essential in most datasets that rely on node features. Historically, most graph data aimed to capture closeness among entities, which has prompted the development of local aggregation approaches, such as label propagation, personalized PageRank, and diffusion kernels [36, 14], all of which share the common principle of low-pass filtering. High-frequency information, on the other hand, may be important in recently emerging application areas, such as combinatorial optimization, logical reasoning, or biochemical property prediction, which require complex non-local representations.
+
+Further, despite the recent interest in developing new methods that can leverage long-range dependencies and heterophily, adequate benchmarking datasets remain scarce or not readily accessible. Meanwhile, some recent efforts such as GraphWorld [46] aim to comprehensively profile a GNN's performance using a collection of synthetic datasets that cover an entire parametric space. Notably, our analysis demonstrates that synthetic tasks do not fully resemble the complexity of real-world applications. Hence, benchmarking based purely on synthetic datasets should be treated with caution, as the behavior might not be representative of real-world scenarios.
+
+As a comprehensive benchmarking framework, our work provides several potential use cases beyond the taxonomy analysis presented here. One such use is understanding the characteristics of new datasets and how they relate to existing ones. For example, DeezerEurope (DzEu) is a relatively new dataset [50] that is less commonly benchmarked and studied than the other datasets we consider. The inclusion of DzEu in T-1 suggested its heterophilic nature, which has indeed been demonstrated recently [38]. On the other hand, since the sensitivity profiles naturally suggest which invariances are important for different datasets from a practical standpoint, they could provide valuable guidance for the development of self-supervised learning and data augmentation for GNNs [62].
+
+Finally, we observed that the overall patterns in sensitivity profiles remain similar regardless of whether we used GCN, GIN, or the other four models to derive them. Subtle differences in sensitivity profiles w.r.t. different GNN models are not only expected but also desired when comparing models that have distinct levels of expressivity. While we expect the overall patterns to be similar, more expressive models should provide enhanced resolution. One could then contrast taxonomization w.r.t. first-order GNNs (such as those we used) with more expressive higher-order GNNs, Transformer-based models with global attention, and others. We hope our work will also inspire future efforts to empirically validate the expressivity of new graph learning methods in this vein, beyond classical benchmarking.
+
+Limitations and Future Work. Our perturbation-based approach is fundamentally limited in that we cannot test the significance of a property that we cannot perturb or that the reference GNN model cannot capture. Therefore, designing more sophisticated perturbation strategies to gauge specific relations could bring further insight into the datasets and GNN models alike. New perturbations may gauge the usefulness of geometric substructures such as cycles [3] or the effects of graph bottlenecks, e.g., by rewiring graphs to modify their "curvatures" [55]. Other perturbations could include graph sparsification (edge removal) [53] and graph coarsening (edge contraction) [10, 4].
+
+A number of OGB node-level datasets are not included in this study due to the memory cost of typical MPNNs. Conducting an analysis based on recent scalable GNN models [20] would be an interesting avenue for future research. Further, we only considered classification tasks, omitting regression tasks, as their evaluation metrics are not easily comparable. One way to circumvent this issue would be to quantize regression tasks into classification tasks by binning their continuous targets. Additionally, we disregarded edge features in two OGB molecular datasets we used. In future work, edge features could be leveraged by an edge-feature-aware generalization of MPNNs; the importance of edge features could then be analyzed by introducing new edge-feature perturbations. We also limited our analysis to node-level and graph-level tasks, but this framework could be further extended to link-prediction or other edge-level tasks. While our perturbations could be used in this new scenario as well, new perturbations, such as the above-mentioned graph sparsification, would need to be considered. Similarly, hallmark models for link and relation prediction, outside MPNNs, should be considered.
+
+§ 5 CONCLUSION
+
+We provide a systematic data-driven approach for taxonomizing a large collection of graph datasets, the first study of its kind. The core principle of our approach is to gauge the essential characteristics of a given dataset, with respect to its accompanying prediction task, by inspecting the downstream effects of perturbing its graph data. The resulting sensitivities to a diverse set of perturbations serve as "fingerprints" that allow us to identify datasets with similar characteristics. We derive several insights into the common benchmarks currently used in the field of graph representation learning and make recommendations on the selection of representative benchmarking suites. Our analysis also lays a foundation for evaluating new benchmarking datasets that will likely emerge in the field.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8712aabdf28328d0a48b68cd6d645b0da234925e
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,469 @@
+# Neighborhood-aware Scalable Temporal Network Representation Learning
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Temporal networks have been widely used to model real-world complex systems such as financial systems and e-commerce systems. In a temporal network, the joint neighborhood of a set of nodes often provides crucial structural information useful for predicting whether they may interact at a certain time. However, recent representation learning methods for temporal networks often fail to extract such information or depend on online construction of structural features, which is time-consuming. To address this issue, this work proposes the Neighborhood-Aware Temporal network model (NAT). For each node in the network, NAT abandons the commonly used single-vector representation and instead adopts a novel dictionary-type neighborhood representation. Such a dictionary representation records a down-sampled set of the neighboring nodes as keys and allows fast construction of structural features for the joint neighborhood of multiple nodes. We also design a dedicated data structure, termed N-cache, to support parallel access and update of those dictionary representations on GPUs. We evaluate NAT over seven real-world large-scale temporal networks. NAT not only outperforms all cutting-edge baselines by 5.9% and 6.0% on average in transductive and inductive link prediction accuracy, respectively, but also remains scalable: it achieves a 4.1-76.7× speed-up against baselines that adopt joint structural features and a 1.6-4.0× speed-up against baselines that cannot adopt those features. The link to the code: https://anonymous.4open.science/r/NAT-617D.
+
+## 1 Introduction
+
+Temporal networks are widely used as abstractions of real-world complex systems [1]. They model interacting elements as nodes, interactions as links, and the times when those interactions happen as timestamps on those links. Temporal networks often evolve by following certain patterns: ranging from triadic closure [2] to higher-order motif closure [3-6], the interacting behaviors between multiple nodes have been shown to strongly depend on the network structure of their joint neighborhood. Researchers have leveraged this observation and built many practical systems to monitor and make predictions on temporal networks, such as anomaly detection in financial networks [7-9], friend recommendation in social networks [10], and collaborative filtering in e-commerce systems [11].
+
+Recently, graph neural networks (GNNs) have been widely used to encode network-structured data [12] and have achieved state-of-the-art (SOTA) performance in many tasks such as node/graph classification [13-15]. However, to predict how nodes interact with each other in temporal networks, a direct generalization of GNNs may not work well. Traditional GNNs often learn a vector representation for each node and predict whether two nodes may interact (i.e., form a link) based on a combination (e.g., the inner product) of the two vector representations. This link prediction strategy often fails to capture the structural features of the joint neighborhood of the two nodes [16-19]. Consider a toy example with a temporal network in Fig. 1: node $w$ and node $v$ share the same local structure before $t_3$, so GNNs, including their variants on temporal networks (e.g., TGN [20]), will associate $w$ and $v$ with the same vector representation. Hence, GNNs will fail to correctly predict whether $u$ will interact with $w$ or $v$ at $t_3$. Here, GNNs cannot capture the important joint structural feature that $u$ and $v$ have a common neighbor $a$ before $t_3$. This issue leaves almost all previous works that generalize GNNs to temporal networks with subpar performance [20-29].
+
+Some recent works have been proposed to address this issue on static networks [18, 19, 30]. Their key idea is to construct node structural features to learn two-node joint neighborhood representations. Specifically, for two nodes of interest, they either label one linked node and construct its distance to the other node [31, 32], or label all nodes in the neighborhood with their distances to the two linked nodes [18, 33]. Traditional GNNs can afterwards encode such a feature-augmented neighborhood to achieve better inference. Although these ideas are theoretically powerful [18, 19] and provide good empirical performance on small networks, the induced models do not scale to large networks. This is because constructing such structural features is time-consuming and must be done separately for each link to be predicted. The issue becomes even more severe on temporal networks, because two nodes may interact many times, and thus the number of links to be predicted is often much larger than the corresponding number in static networks.
+
+
+
+Figure 1: A toy example of predicting how a temporal network evolves. Given the historical temporal network shown on the left, the task is to predict whether $u$ prefers to interact with $v$ or $w$ at timestamp $t_3$. If this is a social network, $(u, v)$ is likely to happen, because $u, v$ share a common neighbor $a$ and follow the principle of triadic closure [2]. However, traditional GNNs, and even their generalizations to temporal networks, fail here, as they learn the same representations for node $v$ and node $w$ due to their common structural contexts, as shown in the middle. On the right, we show a high-level abstraction of joint neighborhood features based on the N-caches of $u$ and $v$: in the N-caches for the 1-hop neighborhoods of both node $u$ and node $v$, $a$ appears among the keys. Joining these keys provides a structural feature that encodes such common-neighbor information for the prediction.
+
+In this work, we propose the Neighborhood-Aware Temporal network model (NAT), which addresses the aforementioned modeling issue while keeping the model scalable. The key novelty of NAT is to incorporate dictionary-type neighborhood representations in place of a single-vector node representation, together with a computation-friendly neighborhood cache (N-cache) to maintain such dictionary-type representations. Specifically, the N-cache of a node stores several size-constrained dictionaries on GPUs. Each dictionary has a sampled collection of historical neighbors of the center node as keys, and aggregates the timestamps and the features on the links connected to these neighbors as values (vector representations). With N-caches, NAT can construct the joint neighborhood structural features for a batch of node pairs in parallel to achieve fast link prediction. NAT can also update the N-caches with newly interacted neighbors efficiently by adopting hash-based search functions that support GPU parallel computation.
+
+NAT provides a novel solution for scalable temporal network representation learning. We evaluate NAT over 7 real-world temporal networks, one of which contains more than 1M nodes and almost 10M temporal links, to evaluate the scalability of NAT. NAT outperforms cutting-edge baselines by 5.9% and 6.0% on average in transductive and inductive link prediction accuracy, respectively. NAT achieves a 4.1-76.7$\times$ speed-up compared to the baseline CAWN [34] that constructs joint neighborhood features based on random walk sampling. NAT also achieves a 1.6-4.0$\times$ speed-up over the fastest baselines that do not construct joint neighborhood features (and thus suffer from the issue in Fig. 1) on large networks.
+
+## 2 Related works
+
+Neighborhood structure often governs how temporal networks evolve over time. Early temporal network prediction models count motifs [35, 36] or subgraphs [37] in the historical neighborhood of two interacting objects as features to predict their future interactions. These models cannot use network attributes and often suffer from scalability issues, because counting combinatorial structures is complicated and hard to execute in parallel. Network-embedding approaches for temporal networks [38-42] suffer from a similar problem, because the optimization problem used to compute node embeddings is often too complex to be re-solved repeatedly as the network evolves.
+
+Recent works based on neural networks often provide more accurate and faster models, which benefit from parallel computation hardware and scalable system support [43, 44] for deep learning. Some of these works simply aggregate the link stream into snapshots and treat a temporal network as a sequence of static network snapshots [21-26]. These methods may offer low prediction accuracy as they cannot model interactions at different levels of time granularity.
+
+More advanced methods deal with link streams directly [20, 27-29, 45-47]. They generalize GNNs to encode temporal networks by associating each node with a vector representation and updating it based on the nodes it interacts with. Some works use the representation of the node that one is currently interacting with [27, 28, 45]. Other works use those of the nodes that one has interacted with in the history [20, 29, 46, 47]. However, either way, these methods suffer from the limited power of GNNs to capture the structural features from the joint neighborhood of multiple nodes [17, 19]. Recently, CAWN [34] and HIT [4], inspired by the theory for static networks [18, 19], have proposed to construct such structural features to improve representation learning on temporal networks, CAWN for link prediction and HIT for higher-order interaction prediction. However, their computational complexity is high: for every queried link, they need to sample a large group of random walks and construct the structural features on CPUs, which limits the level of parallelism. NAT addresses these problems via neighborhood representations and N-caches.
+
+## 3 Preliminaries: Notations and Problem Formulation
+
+In this section, we introduce some notations and the problem formulation. We consider a temporal network as a sequence of timestamped interactions between pairs of nodes.
+
+Definition 3.1 (Temporal network) A temporal network $\mathcal{E}$ can be represented as $\mathcal{E} = \left\{ \left( u_1, v_1, t_1 \right), \left( u_2, v_2, t_2 \right), \cdots \right\}$, $t_1 < t_2 < \cdots$, where $u_i, v_i$ denote the interacting node IDs of the $i$-th link and $t_i$ denotes its timestamp. Each temporal link $(u, v, t)$ may carry a link feature $e_{u, v}^{t}$. We also denote the entire node set as $\mathcal{V}$. Without loss of generality, we use integers as node IDs, i.e., $\mathcal{V} = \{1, 2, \ldots\}$.
+
+A good representation learning method for temporal networks should be able to efficiently and accurately predict how the network evolves over time. Hence, we formulate our problem as follows.
+
+Definition 3.2 (Problem formulation) Our problem is to learn a model that may use the historical information before $t$, i.e., $\left\{ \left( u^{\prime}, v^{\prime}, t^{\prime} \right) \in \mathcal{E} \mid t^{\prime} < t \right\}$, to accurately and efficiently predict whether there will be a temporal link between two nodes at time $t$, i.e., $(u, v, t)$.
+
+Next, we define neighborhood in temporal networks.
+
+Definition 3.3 ($k$-hop neighborhood in a temporal network) Given a timestamp $t$, denote the static network constructed from all the temporal links before $t$, with all timestamps removed, as ${\mathcal{G}}_{t}$. Given a node $v$, define the $k$-hop neighborhood of $v$ before time $t$, denoted by ${\mathcal{N}}_{v}^{t, k}$, as the set of all nodes $u$ such that there exists at least one walk of length $k$ from $u$ to $v$ over ${\mathcal{G}}_{t}$. For two nodes $u, v$, their joint neighborhood up to $K$ hops refers to ${\cup}_{k = 1}^{K}\left( {\mathcal{N}}_{v}^{t, k} \cup {\mathcal{N}}_{u}^{t, k} \right)$.
+
+## 4 Methodology
+
+In this section, we introduce NAT. NAT consists of two major components: (1) neighborhood representations maintained in N-caches, and (2) the construction of joint neighborhood features with neural-network-based encoding.
+
+### 4.1 Neighborhood Representations and N-caches
+
+In NAT, each node's representation is tracked over time by a fixed-size memory module, the N-cache, as the temporal network evolves. Fig. 2 Left gives an illustration. In contrast to all previous methods that adopt a single vector representation for each node $u$, NAT adopts neighborhood representations $\left( {Z}_{u}^{(0)}(t), {Z}_{u}^{(1)}(t), \ldots, {Z}_{u}^{(K)}(t) \right)$, where ${Z}_{u}^{(k)}(t)$ denotes the $k$-hop neighborhood representation, for $k = 0, 1, \ldots, K$. These representations may evolve over time; for notational simplicity, we omit the timestamps when they can be inferred from the context. The main goal of tracking these neighborhood representations is to enable efficient construction of structural features, which will be detailed in Sec. 4.2. Next, we explain these neighborhood representations from a modeling perspective and describe how they evolve over time; we then introduce the scalable implementation of N-caches.
+
+Modeling. For a node $u$, the 0-hop representation, termed the self-representation ${Z}_{u}^{(0)}$, simply works as the standard node representation of $u$. It gets updated via an RNN, ${Z}_{u}^{(0)} \leftarrow \mathbf{RNN}\left( {Z}_{u}^{(0)}, \left[ {Z}_{v}^{(0)}, {t}_{3}, {e}_{u, v} \right] \right)$, when node $u$ interacts with another node $v$, as shown in Fig. 2 Left. The remaining neighborhood representations are more complicated. To give some intuition, we first introduce the 1-hop representation ${Z}_{u}^{(1)}$. ${Z}_{u}^{(1)}$ is a dictionary whose keys, denoted by $\operatorname{key}\left( {Z}_{u}^{(1)} \right)$, correspond to a down-sampled set of the (IDs of) nodes in the 1-hop neighborhood of $u$. For a node $a$ in $\operatorname{key}\left( {Z}_{u}^{(1)} \right)$, the dictionary value, denoted by ${Z}_{u, a}^{(1)}$, is a vector representation that summarizes the previous interactions between $u$ and $a$. ${Z}_{u}^{(1)}$ will be updated as the temporal network evolves. For example, in Fig. 1, as $v$ interacts with $u$ at time ${t}_{3}$ with the link feature ${e}_{u, v}$, the entry in ${Z}_{u}^{(1)}$ that corresponds to $v$, i.e., ${Z}_{u, v}^{(1)}$, will get updated via an RNN: ${Z}_{u, v}^{(1)} \leftarrow \mathbf{RNN}\left( {Z}_{u, v}^{(1)}, \left[ {Z}_{v}^{(0)}, {t}_{3}, {e}_{u, v} \right] \right)$. If ${Z}_{u, v}^{(1)}$ does not exist in the current ${Z}_{u}^{(1)}$ (e.g., at the first $u, v$ interaction), a default initialization of ${Z}_{u, v}^{(1)}$ is used. Once updated, the new value ${Z}_{u, v}^{(1)}$ paired with the key (node ID) $v$ will be inserted into ${Z}_{u}^{(1)}$.
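+The following is a minimal sketch (not the authors' code) of the RNN-based updates just described, using a GRU cell as the RNN; the dimensions `D_SELF`, `D_TIME`, `D_EDGE`, `F` and the choice of GRU are illustrative assumptions.
+
+```python
+# Sketch: RNN updates of the self-representation and one 1-hop N-cache entry.
+import torch
+import torch.nn as nn
+
+D_SELF, D_TIME, D_EDGE, F = 32, 8, 16, 4  # assumed sizes
+
+# One cell updates the 0-hop (self) representation, another updates 1-hop entries.
+self_rnn = nn.GRUCell(input_size=D_SELF + D_TIME + D_EDGE, hidden_size=D_SELF)
+nbr_rnn = nn.GRUCell(input_size=D_SELF + D_TIME + D_EDGE, hidden_size=F)
+
+def update_on_interaction(z_u0, z_v0, z_uv1, t_enc, e_uv):
+    """Update Z_u^(0) and Z_{u,v}^(1) when u interacts with v.
+    z_u0: (B, D_SELF), z_v0: (B, D_SELF), z_uv1: (B, F) (zeros if the key is absent),
+    t_enc: (B, D_TIME) encoded timestamp, e_uv: (B, D_EDGE) link feature."""
+    msg = torch.cat([z_v0, t_enc, e_uv], dim=-1)   # [Z_v^(0), t, e]
+    new_z_u0 = self_rnn(msg, z_u0)                 # Z_u^(0) <- RNN(Z_u^(0), msg)
+    new_z_uv1 = nbr_rnn(msg, z_uv1)                # Z_{u,v}^(1) <- RNN(Z_{u,v}^(1), msg)
+    return new_z_u0, new_z_uv1
+```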
+
+| No. | Notation | Definition |
+| --- | --- | --- |
+| 1. | ${Z}_{u}^{(k)}$ | A dictionary (with values ${Z}_{u, a}^{(k)}$, of size ${M}_{k}$) denoting the $k$-hop neighborhood representation of node $u$. |
+| 2. | ${Z}_{u, a}^{(k)}$ | A vector (of length $F$ for $k \geq 1$) among the values of ${Z}_{u}^{(k)}$, representing node $a$ as a $k$-hop neighbor of $u$. |
+| 3. | ${s}_{u}^{(k)}$ | An auxiliary array recording the node IDs currently stored as the keys of ${Z}_{u}^{(k)}$. |
+| 4. | ${\mathrm{DE}}_{u}^{t}(a)$ | The distance encoding of node $a$ based on the keys of the N-caches of node $u$ at time $t$ (Eq. (1)). |
+| 5. | $\operatorname{hash}(a)$ | The hash function mapping a node ID $a$ to the position of ${Z}_{u, a}^{(k)}$ in the $k$-hop N-cache of any node $u$. |
+
+
+
+Figure 2: Neighborhood representations, and joining neighborhood features and representations to make predictions. Left: Neighborhood representations of a node. Node $u$ interacts with $v$ at ${t}_{3}$ in the example of Fig. 1. The 0-hop (self) representation and the 1-hop representations will be updated based on ${Z}_{v}^{(0)}$. The 2-hop representations will be updated by inserting ${Z}_{v}^{(1)}$. The ${Z}_{u}^{(k)}$'s are maintained in N-caches. Right: In the example of Fig. 1, to predict the link $\left( u, v, {t}_{3} \right)$, the neighborhood representations of node $u$ and node $v$ are joined: the structural feature DE is constructed according to Eq. (1), and the representations are sum-pooled according to Eq. (2). Then, an attention layer (Eq. (3)) is adopted to make the final prediction.
+
+One remark is that for the input timestamps ${t}_{i}$, we adopt Fourier features to encode them before feeding them into the RNNs, i.e., with learnable parameters ${\omega}_{i}$, $1 \leq i \leq d$, $\text{T-encoding}(t) = \left[ \cos(\omega_1 t), \sin(\omega_1 t), \ldots, \cos(\omega_d t), \sin(\omega_d t) \right]$, which has been shown to be useful for temporal network representation learning [4, 20, 29, 34, 48, 49].
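+A minimal sketch of this Fourier time encoding is given below; how the learnable frequencies $\omega_i$ are initialized is an assumption for illustration, not taken from the paper.
+
+```python
+# Sketch: Fourier feature encoding of timestamps, T-encoding(t) = [cos(w_i t), sin(w_i t)].
+import torch
+import torch.nn as nn
+
+class TimeEncoding(nn.Module):
+    def __init__(self, d: int = 8):
+        super().__init__()
+        self.omega = nn.Parameter(torch.randn(d))  # learnable frequencies omega_1..omega_d (assumed init)
+
+    def forward(self, t: torch.Tensor) -> torch.Tensor:
+        # t: (B,) timestamps -> (B, 2d) features
+        phase = t.unsqueeze(-1) * self.omega       # (B, d)
+        return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)
+
+# Example: encode three timestamps into 16-dimensional features.
+enc = TimeEncoding(d=8)
+feats = enc(torch.tensor([0.0, 1.5, 3.0]))         # shape (3, 16)
+```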
+
+The higher-hop ($k > 1$) neighborhood representation ${Z}_{u}^{(k)}$ is also a dictionary. Similarly, the keys of ${Z}_{u}^{(k)}$ correspond to nodes that lie in the $k$-hop neighborhood of $u$. The update of ${Z}_{u}^{(k)}$ is as follows: if $u$ interacts with $v$, then $v$'s $(k-1)$-hop neighborhood by definition becomes part of the $k$-hop neighborhood of $u$ after the interaction. Given this observation, ${Z}_{u}^{(k)}$ can be updated using ${Z}_{v}^{(k-1)}$. However, we avoid using an RNN for the higher-hop update to reduce complexity. Instead, we directly insert ${Z}_{v}^{(k-1)}$ into ${Z}_{u}^{(k)}$, i.e., setting ${Z}_{u, a}^{(k)} \leftarrow {Z}_{v, a}^{(k-1)}$ for all $a \in \operatorname{key}\left( {Z}_{v}^{(k-1)} \right)$. If ${Z}_{u, a}^{(k)}$ already exists before the insertion, we simply replace it.
+
+Next, we introduce the implementation of the above representations via N-caches. Readers who only care about the learning models can skip this part and go directly to Sec. 4.2. The maintenance of N-caches (i.e., of the neighborhood representations) as the network evolves is summarized in Alg. 1.
+
+Scalable Implementation. Neighborhood representations cannot be directly implemented via Python dictionaries if the maintenance is to scale. Instead, we adopt the following three design techniques: (a) setting a size limit; (b) parallelizing hash-maps; (c) addressing collisions.
+
+Algorithm 1: N-caches construction and update $(\mathcal{V}, \mathcal{E}, \alpha)$
+
+---
+
+1: for $k$ from 0 to 2 (consider only two hops) do
+
+2: &nbsp;&nbsp; for $u$ in $\mathcal{V}$, in parallel, do
+
+3: &nbsp;&nbsp;&nbsp;&nbsp; Initialize fixed-size dictionaries ${Z}_{u}^{(k)}$ in GPU with key spaces ${s}_{u}^{(k)}$ and value spaces;
+
+4: for $(u, v, t, e)$ in each mini-batch of $\mathcal{E}$, in parallel, do
+
+5: &nbsp;&nbsp; ${Z}_{u}^{(0)} \leftarrow \mathbf{RNN}\left( {Z}_{u}^{(0)}, \left[ {Z}_{v}^{(0)}, t, e \right] \right)$ // update 0-hop self-representation
+
+6: &nbsp;&nbsp; ${Z}_{\text{prev}} \leftarrow {Z}_{u, v}^{(1)}$ if ${s}_{u}^{(1)}[\operatorname{hash}(v)]$ equals $v$, else $0$ // check whether ${Z}_{u, v}^{(1)}$ is recorded in ${Z}_{u}^{(1)}$
+
+7: &nbsp;&nbsp; if ${s}_{u}^{(1)}[\operatorname{hash}(v)]$ equals ($v$ or $EMPTY$) or $\operatorname{rand}(0,1) < \alpha$ then
+
+8: &nbsp;&nbsp;&nbsp;&nbsp; ${s}_{u}^{(1)}[\operatorname{hash}(v)] \leftarrow v$, ${Z}_{u, v}^{(1)} \leftarrow \mathbf{RNN}\left( {Z}_{\text{prev}}, \left[ {Z}_{v}^{(0)}, t, e \right] \right)$; // update 1-hop nbr. representation
+
+9: &nbsp;&nbsp; for $w$ in ${s}_{v}^{(1)}$, in parallel, do
+
+10: &nbsp;&nbsp;&nbsp;&nbsp; if ${s}_{u}^{(2)}[\operatorname{hash}(w)]$ equals ($w$ or $EMPTY$) or $\operatorname{rand}(0,1) < \alpha$ then
+
+11: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ${s}_{u}^{(2)}[\operatorname{hash}(w)] \leftarrow w$, ${Z}_{u, w}^{(2)} \leftarrow {Z}_{v, w}^{(1)}$; // update 2-hop nbr. representations
+
+12: &nbsp;&nbsp; repeat lines 5-11 with $(v, u, t, e)$
+
+---
+
+(a) Limiting size: In a real-world network, the neighborhood size of a node typically follows a long-tailed distribution [50, 51], so recording the entire neighborhood is irregular and memory inefficient. Instead, we set an upper limit ${M}_{k}$ on the size of each per-hop representation ${Z}_{u}^{(k)}$, which means ${Z}_{u}^{(k)}$ may record only a subset of the nodes in the $k$-hop neighborhood of node $u$. This idea is inspired by previous works showing that structural features constructed from a down-sampled neighborhood are sufficient to provide good performance [34, 52]. To further decrease the memory overhead, we set each representation ${Z}_{u, a}^{(k)}$, $k \geq 1$, to be a vector of small dimension $F$. Overall, the memory overhead of the N-cache per node is $O\left( \sum_{k=1}^{K} {M}_{k} \times F \right)$. In our experiments, we consider at most $K = 2$ hops, and set the numbers of tracked neighbors ${M}_{1}, {M}_{2} \in [2, 40]$ and the size of each representation $F \in [2, 8]$, which already gives very good performance. Based on the above design, the overall memory overhead is just a few hundred numbers per node, which is comparable to the common memory cost of tracking a single large vector representation for each node.
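+As a rough check of this estimate (the values below are illustrative picks within the stated ranges, not a specific configuration from the paper): with ${M}_{1} = 16$, ${M}_{2} = 8$ and $F = 4$, each node stores
+
+$$
+\sum_{k=1}^{2} M_k \times F = (16 + 8) \times 4 = 96
+$$
+
+value floats plus ${M}_{1} + {M}_{2} = 24$ key entries, i.e., on the order of a hundred numbers per node.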
+
+(b) The hash-map: As NAT needs to frequently access N-caches, a fast way to search within N-caches by node ID in parallel is needed. To enable the parallel search, we design GPU dictionaries to implement N-caches. Specifically, for every node $u$, we pre-allocate $O\left( {M}_{k} \times F \right)$ space in GPU RAM to record the values in ${Z}_{u}^{(k)}$. A hash function is adopted to access the values in ${Z}_{u}^{(k)}$. For a node $a$, we compute $\operatorname{hash}(a) \equiv (q \cdot a) \pmod{{M}_{k}}$ for a fixed large prime number $q$ to decide the row index in ${Z}_{u}^{(k)}$ that records ${Z}_{u, a}^{(k)}$. Such simple hashing allows NAT to access multiple neighborhood representations in the N-caches in parallel.
+
+However, as the size ${M}_{k}$ of each N-cache is small, in particular smaller than the corresponding neighborhood, the hash-map may encounter collisions. To detect such collisions, we also pre-allocate $O\left( {M}_{k} \right)$ space in each N-cache ${Z}_{u}^{(k)}$ for an array ${s}_{u}^{(k)}$ that records the IDs of the nodes most recently recorded in ${Z}_{u}^{(k)}$. Specifically, we use ${s}_{u}^{(k)}[\operatorname{hash}(a)]$ to check whether node $a$ is a key of ${Z}_{u}^{(k)}$. If ${s}_{u}^{(k)}[\operatorname{hash}(a)]$ equals $a$, then ${Z}_{u, a}^{(k)}$ is recorded at position $\operatorname{hash}(a)$ of ${Z}_{u}^{(k)}$. If ${s}_{u}^{(k)}[\operatorname{hash}(a)]$ is neither $a$ nor EMPTY, then position $\operatorname{hash}(a)$ of ${Z}_{u}^{(k)}$ records the representation of another node.
+
+(c) Addressing collisions: When NAT encounters a collision on an evolving network, it resolves the collision in a random manner. Specifically, suppose we are to write ${Z}_{u, a}^{(k)}$ into ${Z}_{u}^{(k)}$. If another node $b$ satisfies $\operatorname{hash}(a) = \operatorname{hash}(b) = p$ and ${Z}_{u, b}^{(k)}$ has occupied position $p$ of ${Z}_{u}^{(k)}$, then we replace ${Z}_{u, b}^{(k)}$ with ${Z}_{u, a}^{(k)}$ (and simultaneously set ${s}_{u}^{(k)}[\operatorname{hash}(a)] \leftarrow a$) with probability $\alpha$. Here, $\alpha \in (0, 1]$ is a hyperparameter. Although this random replacement strategy sounds heuristic, it is essentially equivalent to random-sampling nodes from the neighborhood without replacement (random dropping $\leftrightarrow$ random sampling). Note that random-sampling neighbors is a common strategy used to scale up GNNs for static networks [53-55], so here we essentially apply an idea in a similar spirit to temporal networks. We find that a small size ${M}_{k}$ ($\leq 40$) gives good empirical performance while keeping the model scalable, and that NAT is relatively robust to a wide range of $\alpha$.
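+Below is a minimal sketch (with assumed tensor shapes and an assumed prime $q$) of inserting one neighbor into a fixed-size $k$-hop N-cache using the hash function $\operatorname{hash}(a) = (q \cdot a) \bmod M_k$ and the probabilistic overwrite with parameter $\alpha$; it is not the authors' implementation.
+
+```python
+# Sketch: hash-based N-cache insertion with random collision resolution.
+import torch
+
+EMPTY = -1            # assumed sentinel for an unused slot
+Q = 2147483647        # a fixed large prime (assumed choice)
+
+def insert_neighbor(s_k, Z_k, node_id, value, alpha=0.5):
+    """s_k: (M_k,) long tensor of stored key node IDs (EMPTY if unused).
+    Z_k: (M_k, F) value vectors. node_id: int key to insert. value: (F,) vector."""
+    pos = (Q * node_id) % s_k.shape[0]              # hash(a) decides the row index
+    occupant = s_k[pos].item()
+    if occupant == node_id or occupant == EMPTY:
+        s_k[pos] = node_id                          # free slot or same key: write directly
+        Z_k[pos] = value
+    elif torch.rand(()) < alpha:                    # collision: overwrite with probability alpha
+        s_k[pos] = node_id
+        Z_k[pos] = value
+    # otherwise keep the old occupant (equivalent to dropping the new sample)
+
+# Example with M_k = 8 and F = 4.
+s = torch.full((8,), EMPTY, dtype=torch.long)
+Z = torch.zeros(8, 4)
+insert_neighbor(s, Z, node_id=5, value=torch.ones(4))
+```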
+
+### 4.2 Joint Neighborhood Structural Features and Neural-network-based Encoding
+
+As illustrated in the toy example in Fig. 1, structural features from the joint neighborhood are critical to reveal how temporal networks evolve. Previous methods for static networks adopt distance encoding (DE) (or, more broadly, labeling tricks) to formulate these features [18, 19]. Recently, this idea has been generalized to temporal networks [34]. However, the model CAWN in [34] uses online random-walk sampling, which cannot be parallelized on GPUs and is thus extremely slow. Our design of N-caches addresses this problem. Fig. 2 Right illustrates the procedure.
+
+NAT generates joint neighborhood structural features as follows. Suppose our prediction is made for a temporal link $(u, v, t)$. For every node $a$ in the joint neighborhood of $u$ and $v$ decided by their N-caches at timestamp $t$, i.e., $a \in \left[ {\cup}_{k = 0}^{K} \operatorname{key}\left( {Z}_{u}^{(k)} \right) \right] \cup \left[ {\cup}_{{k}^{\prime} = 0}^{K} \operatorname{key}\left( {Z}_{v}^{({k}^{\prime})} \right) \right]$, we associate it with a DE:
+
+$$
+{\mathrm{DE}}_{uv}^{t}(a) = {\mathrm{DE}}_{u}^{t}(a) \oplus {\mathrm{DE}}_{v}^{t}(a), \quad \text{where } {\mathrm{DE}}_{w}^{t}(a) = \left[ \chi\left[ a \in {Z}_{w}^{(0)} \right], \ldots, \chi\left[ a \in {Z}_{w}^{(K)} \right] \right], \; w \in \{u, v\}. \tag{1}
+$$
+
+Here, $\chi\left[ a \in {Z}_{w}^{(i)} \right]$ is 1 if $a$ is among the keys of the N-cache ${Z}_{w}^{(i)}$ and 0 otherwise, and $\oplus$ denotes vector concatenation. For the example of predicting $\left( u, v, {t}_{3} \right)$ in Fig. 1, the DEs of the four nodes $u, a, v, b$ are shown in Fig. 2 Right. Note that ${\mathrm{DE}}_{uv}^{{t}_{3}}(a) = [0, 1, 0] \oplus [0, 1, 0]$ because $a$ appears in the keys of both ${Z}_{u}^{(1)}$ and ${Z}_{v}^{(1)}$, which further implies that $a$ is a common neighbor of $u$ and $v$.
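+A minimal sketch of Eq. (1) is given below. The key sets are hypothetical and only chosen so that $a$ is a 1-hop neighbor of both $u$ and $v$, mimicking the situation in Fig. 1; they are assumptions for illustration, not read off the figure.
+
+```python
+# Sketch: distance encoding DE_uv(a) from the key sets of the N-caches of u and v.
+def distance_encoding(node, keys_u, keys_v):
+    """keys_u, keys_v: lists of key sets, one per hop (0..K), for nodes u and v."""
+    de_u = [1 if node in hop_keys else 0 for hop_keys in keys_u]
+    de_v = [1 if node in hop_keys else 0 for hop_keys in keys_v]
+    return de_u + de_v  # concatenation DE_u(a) (+) DE_v(a)
+
+# Hypothetical example with K = 2: 'a' appears in the 1-hop keys of both u and v.
+keys_u = [{'u'}, {'a', 'v'}, set()]   # hop-0, hop-1, hop-2 keys of u's N-caches
+keys_v = [{'v'}, {'a', 'u'}, set()]
+print(distance_encoding('a', keys_u, keys_v))  # -> [0, 1, 0, 0, 1, 0], i.e., [0,1,0] (+) [0,1,0]
+```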
+
+Simultaneously, NAT also aggregates neighborhood representations for every node $a$ in the joint neighborhood of $u$ and $v$. Specifically, for node $a$, we aggregate the representations via sum pooling:
+
+$$
+{Q}_{uv}^{t}\left( a\right) = \mathop{\sum }\limits_{{k = 0}}^{K}\mathop{\sum }\limits_{{w \in \{ u, v\} }}{Z}_{w, a}^{\left( k\right) } \times \chi \left\lbrack {a \in {Z}_{w}^{\left( k\right) }}\right\rbrack . \tag{2}
+$$
+
+Here, if $a$ is not a key of ${Z}_{w}^{(k)}$, then $\chi\left[ a \in {Z}_{w}^{(k)} \right] = 0$ and ${Z}_{w, a}^{(k)}$ does not participate in the aggregation. Both DE (Eq. (1)) and representation aggregation (Eq. (2)) can be done for multiple node pairs in parallel on GPUs. We detail the parallel steps in Appendix A. After joining DE and neighborhood representations, for each link $(u, v, t)$ to be predicted, NAT has a collection of representations ${\Omega}_{u, v}^{t} = \left\{ {\mathrm{DE}}_{uv}^{t}(a) \oplus {Q}_{uv}^{t}(a) \mid a \in {\mathcal{N}}_{u, v}^{t} \right\}$.
+
+Ultimately, we propose to use attention to aggregate the collected representations in ${\Omega}_{u, v}^{t}$ to make the final prediction for the link $(u, v, t)$. Let MLP denote a multi-layer perceptron; we adopt
+
+$$
+\text{logit} = \operatorname{MLP}\left( {\mathop{\sum }\limits_{{h \in {\Omega }_{u, v}^{t}}}{\alpha }_{h}\operatorname{MLP}\left( h\right) }\right) \text{, where}\left\{ {\alpha }_{h}\right\} = \operatorname{softmax}\left( \left\{ {{w}^{T}\operatorname{MLP}\left( h\right) \mid h \in {\Omega }_{u, v}^{t}}\right\} \right) \text{,} \tag{3}
+$$
+
+where $w$ is a learnable vector parameter and the logit can be plugged into the cross-entropy loss for training or compared with a threshold to make the final prediction.
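+A minimal sketch of the attention readout in Eq. (3) follows; the hidden sizes and the specific MLP depths are assumptions for illustration, not the authors' configuration.
+
+```python
+# Sketch: attention-weighted pooling of the joint-neighborhood representations (Eq. (3)).
+import torch
+import torch.nn as nn
+
+class AttnReadout(nn.Module):
+    def __init__(self, d_in: int, d_hid: int = 64):
+        super().__init__()
+        self.phi = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(), nn.Linear(d_hid, d_hid))
+        self.w = nn.Parameter(torch.randn(d_hid))      # attention vector w
+        self.out = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU(), nn.Linear(d_hid, 1))
+
+    def forward(self, omega: torch.Tensor) -> torch.Tensor:
+        # omega: (N, d_in), the set {DE_uv(a) (+) Q_uv(a)} for one queried link (u, v, t)
+        h = self.phi(omega)                              # MLP(h)
+        alpha = torch.softmax(h @ self.w, dim=0)         # softmax over w^T MLP(h)
+        pooled = (alpha.unsqueeze(-1) * h).sum(dim=0)    # sum_h alpha_h MLP(h)
+        return self.out(pooled)                          # logit fed to the cross-entropy loss
+
+# Example: 5 joint-neighborhood entries of dimension 10 -> a single link logit.
+readout = AttnReadout(d_in=10)
+logit = readout(torch.randn(5, 10))
+```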
+
+## 5 Experiments
+
+In this section, we evaluate the performance and the scalability of NAT against a variety of baselines on real-world temporal networks. We further conduct ablation studies on the relevant modules and analyze hyperparameter sensitivity. Unless specified otherwise for comparison, the hyperparameters of NAT (such as ${M}_{1}, {M}_{2}, F, \alpha$) are detailed in Appendix C and Table 7 (in the Appendix).
+
+### 5.1 Experimental setup
+
+Datasets. We use seven publicly available real-world datasets, whose statistics are listed in Table 1. Further details of these datasets can be found in Appendix B. We preprocess all datasets following the previous literature. We transform the node and edge features of Wikipedia and Reddit to 172-dimensional feature vectors. For the other datasets, these features are set to zero since the networks are non-attributed. We split the datasets into training, validation, and testing data according to the ratio 70/15/15. For the inductive test, we sample the nodes that appear in the validation and testing data with probability 0.1 and remove them and their associated edges from the network during model training. We detail the procedure of inductive evaluation for NAT in Appendix C.1.
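+For concreteness, the following is a minimal sketch of the chronological 70/15/15 split and the inductive node hold-out just described; the data layout (a NumPy array of $(u, v, t)$ rows sorted by time), the function name, and the tie-breaking are assumptions.
+
+```python
+# Sketch: chronological split plus inductive masking of 10% of future nodes.
+import numpy as np
+
+def split_temporal(edges, train_frac=0.70, val_frac=0.15, induct_p=0.1, seed=0):
+    """edges: (n, 3) array of (u, v, t) rows sorted by timestamp t."""
+    n = len(edges)
+    train_end, val_end = int(n * train_frac), int(n * (train_frac + val_frac))
+    train, val, test = edges[:train_end], edges[train_end:val_end], edges[val_end:]
+
+    # Inductive setting: hold out ~10% of the nodes seen in val/test and drop their
+    # training edges, so these nodes are unseen during training.
+    rng = np.random.default_rng(seed)
+    future_nodes = np.unique(np.concatenate([val[:, :2], test[:, :2]]).astype(int))
+    held_out = future_nodes[rng.random(len(future_nodes)) < induct_p]
+    keep = ~np.isin(train[:, 0], held_out) & ~np.isin(train[:, 1], held_out)
+    return train[keep], val, test, held_out
+```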
+
+Baselines. We run experiments against 6 strong baselines that represent the SOTA approaches for modeling temporal networks. Of the 6 baselines, CAWN [34], TGAT [29], and TGN [20] need to sample neighbors from the historical events, while JODIE [28] and DyRep [27] keep track of dynamic node representations to avoid sampling. CAWN is the only baseline that constructs neighborhood structural features. As we are interested in both prediction performance and model scalability, we also include an efficient implementation of TGN sourced from PyTorch Geometric (TGN-pg), a library built upon PyTorch that includes different variants of GNNs [56]. TGN is slower than TGN-pg because TGN in [20] does not process a batch fully in parallel while TGN-pg does. Additional details about the baselines can be found in Appendix C.
+
+| Measurement | Wikipedia | Reddit | Social E. $1\mathrm{\;m}$. | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| nodes | 9,227 | 10,985 | 71 | 74 | 184 | 1,899 | 159,316 | 1,140,149 |
+| temporal links | 157,474 | 672,447 | 176,090 | 2,099,519 | 125,235 | 59,835 | 964,437 | 7,833,140 |
+| static links | 18,257 | 78,516 | 2,457 | 4,486 | 3,125 | 20,296 | 596,933 | 3,309,592 |
+| node & link attributes | 172 & 172 | 172 & 172 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 |
+| bipartite | true | true | false | false | false | true | false | false |
+
+Table 1: Summary of dataset statistics.
+
+| Task | Method | Wikipedia | Reddit | Social E. $1\mathrm{\;m}$. | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Inductive | CAWN | $98.52 \pm 0.04$ | $98.19 \pm 0.03$ | $80.09 \pm 1.89$ | $50.00 \pm 0.00^{*}$ | $93.28 \pm 0.01$ | $80.37 \pm 0.65$ | $50.00 \pm 0.00^{*}$ | $50.00 \pm 0.00^{*}$ |
+| | JODIE | $95.58 \pm 0.37$ | $95.96 \pm 0.29$ | $80.61 \pm 1.55$ | $81.13 \pm 0.52$ | $81.69 \pm 2.21$ | $86.13 \pm 0.34$ | $56.68 \pm 0.49$ | $65.89 \pm 4.72$ |
+| | DyRep | $94.72 \pm 0.14$ | $97.04 \pm 0.29$ | $81.54 \pm 1.81$ | $52.68 \pm 0.11$ | $77.44 \pm 2.28$ | $68.38 \pm 1.30$ | $53.25 \pm 0.03$ | $51.87 \pm 0.93$ |
+| | TGN | $98.01 \pm 0.06$ | $97.76 \pm 0.05$ | $86.00 \pm 0.70$ | $67.01 \pm 10.3$ | $75.72 \pm 2.55$ | $83.21 \pm 1.16$ | $62.14 \pm 3.17$ | $56.73 \pm 2.88$ |
+| | TGN-pg | $94.91 \pm 0.35$ | $94.34 \pm 3.22$ | $63.44 \pm 3.54$ | $88.10 \pm 4.81$ | $69.55 \pm 1.62$ | $86.36 \pm 3.60$ | $79.44 \pm 0.85$ | $85.35 \pm 2.96$ |
+| | TGAT | $97.25 \pm 0.18$ | $96.69 \pm 0.11$ | $54.66 \pm 0.66$ | $50.00 \pm 0.00$ | $57.09 \pm 0.89$ | $70.47 \pm 0.59$ | $54.73 \pm 4.94$ | $71.04 \pm 3.59$ |
+| | NAT | $\mathbf{98.55 \pm 0.09}$ | $\mathbf{98.56 \pm 0.21}$ | $\mathbf{91.82 \pm 1.91}$ | $\mathbf{95.16 \pm 0.66}$ | $94.94 \pm 1.15$ | $\mathbf{92.46 \pm 0.93}$ | $\mathbf{90.35 \pm 0.20}$ | $\mathbf{93.81 \pm 1.16}$ |
+| Transductive | CAWN | $98.62 \pm 0.05$ | $98.66 \pm 0.09$ | $79.59 \pm 0.21$ | $50.00 \pm 0.00^{*}$ | $91.46 \pm 0.35$ | $82.84 \pm 0.16$ | $50.00 \pm 0.00^{*}$ | $50.00 \pm 0.00^{*}$ |
+| | JODIE | $96.15 \pm 0.36$ | $97.29 \pm 0.05$ | $77.02 \pm 1.11$ | $69.30 \pm 0.21$ | $83.42 \pm 2.63$ | $91.09 \pm 0.69$ | $60.29 \pm 2.66$ | $75.00 \pm 4.90$ |
+| | DyRep | $95.81 \pm 0.15$ | $98.00 \pm 0.19$ | $76.96 \pm 4.05$ | $51.14 \pm 0.24$ | $78.04 \pm 2.08$ | $72.25 \pm 1.81$ | $52.22 \pm 0.02$ | $62.07 \pm 0.06$ |
+| | TGN | $98.57 \pm 0.05$ | $98.70 \pm 0.03$ | $88.72 \pm 0.65$ | $69.39 \pm 10.50$ | $80.87 \pm 4.37$ | $89.53 \pm 1.49$ | $53.80 \pm 2.23$ | $66.01 \pm 4.79$ |
+| | TGN-pg | $97.26 \pm 0.10$ | $98.62 \pm 0.07$ | $66.39 \pm 6.90$ | $64.03 \pm 8.97$ | $80.85 \pm 2.70$ | $91.47 \pm 0.29$ | $90.56 \pm 0.44$ | $94.16 \pm 0.09$ |
+| | TGAT | $96.65 \pm 0.06$ | $98.19 \pm 0.08$ | $58.10 \pm 0.47$ | $50.00 \pm 0.00$ | $61.25 \pm 0.99$ | $77.88 \pm 0.31$ | $55.46 \pm 5.47$ | $78.43 \pm 2.15$ |
+| | NAT | $\mathbf{98.68 \pm 0.04}$ | $\mathbf{99.10 \pm 0.09}$ | $\mathbf{90.20 \pm 0.20}$ | $94.43 \pm 1.67$ | $\mathbf{92.42 \pm 0.09}$ | $\mathbf{93.92 \pm 0.15}$ | $\mathbf{93.50 \pm 0.34}$ | $95.82 \pm 0.31$ |
+
+Table 2: Performance in average precision (AP) (mean in percentage $\pm {95}\%$ confidence level). Bold font and underline highlight the best performance and the second best performance on average. *The under-performance of CAWN on Social E., Ubuntu and Wiki-talk may be caused by a recent code change due to a bug [57].
+
+Regarding hyperparameters, if a dataset has been tested by a baseline, we use the set of hyperparameters provided in the corresponding paper. Otherwise, we tune the parameters such that similar components have sizes on the same scale, for example matching the number of sampled neighbors and the embedding sizes. We also fix the training and inference batch sizes so that the comparison of training and inference time is fair between different models. For training, since CAWN uses 32 as the default batch size while the others use 200, we use 100, which lies between the two. For validation and testing, we use batch size 32 for all baselines. We also apply the early-stopping strategy for all models to record the number of epochs to converge and the total model running time to converge. We further set a time limit of 10 hours for training; once that limit is reached, we use the best epoch so far for evaluation. More detailed hyperparameters are provided in Appendix C.
+
+Hardware. We run all experiments using the same device that is equipped with eight Intel Core i7-4770HQ CPU @ 2.20GHz with 15.5 GiB RAM and one GPU (GeForce GTX 1080 Ti).
+
+Evaluation Metrics. For prediction performance, we evaluate all models with Average Precision (AP) and Area Under the ROC Curve (AUC). In the main text, the prediction performance in all tables is reported in AP; the AUC results are given in the appendix. All results are summarized over 5 independent runs. For computational performance, the metrics include (a) the average training and inference time (in seconds) per epoch, denoted Train and Test respectively, (b) the average total time (in seconds) of a model run, including training of all epochs and testing, denoted Total, (c) the average number of epochs to converge, denoted Epoch, and (d) the maximum GPU memory and RAM occupancy percentages monitored throughout the entire process, denoted GPU and RAM, respectively. We ensure that no other applications run during our evaluations.
+
+### 5.2 Results and Discussion
+
+Overall, our method achieves SOTA performance on all 7 datasets. The modeling capacity of NAT exceeds that of all baselines, and its training and inference times are either lower than or comparable to those of the fastest baselines. We provide the detailed analysis next.
+
+Prediction Performance. We give the result of AP in Table 2 and AUC in Appendix Table 6.
+
+| Dataset | Method | Train | Test | Total | RAM | GPU | Epoch |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Wikipedia | CAWN | 1,006 | 174 | 11,845 | 30.2 | 58.0 | 6.7 |
+| | JODIE | 28.8 | 30.6 | 1,482 | 28.3 | 17.9 | 19.1 |
+| | DyRep | 32.4 | 32.5 | 1,681 | 28.3 | 17.8 | 21.5 |
+| | TGN | 37.1 | 33.0 | 2,047 | 28.3 | 19.3 | 23.1 |
+| | TGN-pg | 24.2 | 6.04 | 624.8 | 30.8 | 18.1 | 15.6 |
+| | TGAT | 225 | 63.0 | 3,657 | 28.5 | 24.6 | 12.0 |
+| | NAT | 21.0 | 6.94 | 154.4 | 29.1 | 12.1 | 2.6 |
+| Reddit | CAWN | 2,983 | 812 | 17,056 | 38.8 | 41.2 | 16.3 |
+| | JODIE | 234.4 | 176 | 8,082 | 36.4 | 23.7 | 15.3 |
+| | DyRep | 252.9 | 184 | 7,716 | 33.3 | 24.3 | 12.7 |
+| | TGN | 271.7 | 189 | 8,487 | 33.7 | 25.4 | 15.3 |
+| | TGN-pg | 155.1 | 27.1 | 2,142 | 39.2 | 23.6 | 6.6 |
+| | TGAT | 1,203 | 291 | 16,462 | 37.2 | 31.0 | 8.4 |
+| | NAT | 90.6 | 28.5 | 771.3 | 37.7 | 18.5 | 3.0 |
+
+| Dataset | Method | Train | Test | Total | RAM | GPU | Epoch |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Ubuntu | CAWN | 1,066 | 222 | 5,385 | 38.9 | 17.4 | 1.0 |
+| | JODIE | 66.70 | 2,860 | 76,220 | 35.3 | 18.7 | 5.5 |
+| | DyRep | 2,195 | 2,857 | 39,148 | 38.5 | 16.6 | 1.0 |
+| | TGN | 5,975 | 2,391 | 73,633 | 39 | 19.6 | 5.5 |
+| | TGN-pg | 188.7 | 36.5 | 3,682 | 37.0 | 32.1 | 11.4 |
+| | TGAT | 887 | 330 | 18,431 | 47.3 | 17.0 | 2.5 |
+| | NAT | 125.8 | 41.2 | 1,321 | 28.9 | 10.1 | 5.4 |
+| Wiki-talk | CAWN | 13,685 | 2,419 | 34,368 | 99.1 | 19.4 | 1.0 |
+| | JODIE | 284,789 | 145,909 | 566,607 | 58.2 | 20.9 | 1.0 |
+| | DyRep | 280,659 | 135,491 | 514,621 | 84.4 | 49.6 | 1.0 |
+| | TGN | 281,267 | 136,780 | 534,827 | 77.9 | 24.1 | 1.0 |
+| | TGN-pg | 1,236 | 311.5 | 12,761 | 60.9 | 59.0 | 5.1 |
+| | TGAT | 6,164 | 2,451 | 186,513 | 65.0 | 17.6 | 16.0 |
+| | NAT | 833.1 | 280.1 | 7,802 | 37.1 | 22.3 | 2.7 |
+
+Table 3: Scalability evaluation on Wikipedia, Reddit, Ubuntu and Wiki-talk.
+
+
+
+
+
+Figure 3: Convergence vs. wall-clock time on Reddit (left) and Wiki-talk (right). Each dot on the curves is collected per epoch. Figure 4: Sensitivity (mean) of the overwriting probability $\alpha$ for hash-map collisions on Ubuntu (left) and Reddit (right).
+
+On Wikipedia and Reddit, many baselines achieve high performance because these datasets carry informative attributes. However, NAT still gains marginal improvements. On Wikipedia, Reddit, and Enron, CAWN outperforms all baselines in the inductive setting and most baselines in the transductive setting. We believe the reason is that it captures neighborhood structural information via its temporal random walk sampling. However, we are not able to reproduce comparable scores on Social Evolve, Ubuntu, and Wiki-talk even after tuning the training batch size to 32. We notice a recent code change intended to fix a bug in the CAWN implementation [57], which might be the cause of its under-performance.
+
+TGN and its efficient implementation TGN-pg are strong baselines that do not construct structural features. On both large-scale datasets Ubuntu and Wiki-talk, TGN-pg gives impressive results in the transductive setting. However, NAT still outperforms it consistently. Furthermore, TGN-pg performs poorly on the inductive tasks of both datasets, while NAT gains an 8-11% lift on these tasks.
+
+On Social Evolve, NAT significantly outperforms all baselines, by at least 25% on transductive and 7% on inductive predictions. From Table 1, we can see that Social Evolve has a small number of nodes but many interactions. This highlights one of the advantages of NAT on dense temporal graphs: NAT keeps a separate neighborhood representation for each of a node's neighbors, so older interactions are not squashed together with more recent ones into a single representation. Paired with N-caches, NAT can effectively denoise the dense history and extract neighborhood features.
+
+Scalability. Table 3 shows that NAT always trains much faster than all baselines. The inference speed of NAT is significantly higher than that of CAWN, the other method that constructs neighborhood structural features: NAT achieves a 25-29$\times$ inference speed-up on the attributed networks. NAT also achieves at least four times faster inference than TGN, JODIE, and DyRep. Compared to TGN-pg, NAT achieves comparable inference time in most cases, while achieving about a 10% speed-up on the largest dataset, Wiki-talk. This is because when the network is large, the online sampling of TGN-pg may dominate the time cost. We may expect NAT to show even better scalability on larger networks. Moreover, on the two large networks Ubuntu and Wiki-talk, NAT requires much less GPU memory. Note that, albeit with only comparable or slightly better scalability, NAT significantly outperforms TGN-pg in prediction performance over all datasets.
+
+Across all datasets, NAT does not need larger model sizes than the baselines to achieve better performance. More impressively, we observe that NAT uniformly requires fewer epochs to converge than all baselines, especially on larger datasets. This can be attributed to the inductive power given by the joint structural features. Because of this, the total runtime of the model is much shorter than that of the baselines on all datasets. Specifically, on the large datasets Ubuntu and Wiki-talk, NAT is more than three times as fast as TGN-pg. We also plot the model convergence vs. wall-clock time curves on Reddit and Wiki-talk for comparison in Fig. 3.
+
+| Ablation | Dataset | Inductive | Transductive | Train | Test | GPU |
+| --- | --- | --- | --- | --- | --- | --- |
+| original method | Social E. | $95.16 \pm 0.66$ | $91.75 \pm 0.37$ | 281.0 | 89.0 | 8.88 |
+| | Ubuntu | $90.35 \pm 0.20$ | $93.50 \pm 0.34$ | 125.8 | 41.2 | 10.1 |
+| | Wiki-talk* | $93.81 \pm 1.16$ | $95.00 \pm 0.31$ | 833.1 | 280.1 | 22.3 |
+| remove 2-hop N-cache | Social E. | $94.30 \pm 0.90$ | $90.77 \pm 0.26$ | 253.1 | 75.9 | 8.87 |
+| | Ubuntu | $89.45 \pm 1.04$ | $93.48 \pm 0.34$ | 111.3 | 35.7 | 9.95 |
+| remove 1-&-2-hop N-cache | Social E. | $55.10 \pm 11.54$ | $62.12 \pm 3.53$ | 212.9 | 64.0 | 8.46 |
+| | Ubuntu | $85.11 \pm 0.23$ | $91.89 \pm 0.09$ | 98.1 | 29.5 | 9.07 |
+| | Wiki-talk | $86.54 \pm 3.87$ | $94.89 \pm 1.83$ | 409.5 | 125.4 | 16.2 |
+
+Table 4: Ablation study on N-caches. *Original method for Wiki-talk does not use the second-hop N-cache.
+
+| Param | Size | Inductive | Transductive | Train | Test | GPU |
+| --- | --- | --- | --- | --- | --- | --- |
+| ${M}_{1}$ | 4 | $92.95 \pm 2.95$ | $95.26 \pm 0.49$ | 834.9 | 281.4 | 18.4 |
+| | 8 | $\mathbf{93.96 \pm 0.91}$ | $95.39 \pm 0.28$ | 806.3 | 274.9 | 19.9 |
+| | 12 | $92.67 \pm 0.82$ | $95.05 \pm 0.58$ | 818.2 | 277.6 | 21.0 |
+| | 16 | $93.81 \pm 1.16$ | $95.82 \pm 0.31$ | 833.1 | 280.1 | 22.3 |
+| | 20 | $93.40 \pm 0.50$ | $95.83 \pm 0.44$ | 841.3 | 284.8 | 23.8 |
+| ${M}_{2}$ | 0 | $93.81 \pm 1.16$ | $95.82 \pm 0.31$ | 833.1 | 280.1 | 22.3 |
+| | 2 | $92.91 \pm 1.01$ | $96.08 \pm 0.34$ | 960.5 | 330.9 | 22.7 |
+| | 4 | $94.26 \pm 0.89$ | $\mathbf{96.29 \pm 0.09}$ | 935.3 | 322.9 | 23.8 |
+| | 8 | $94.53 \pm 0.51$ | $95.90 \pm 0.07$ | 943.3 | 325.3 | 26.0 |
+| $F$ | 2 | $90.86 \pm 2.52$ | $95.74 \pm 0.27$ | 843.6 | 284.0 | 18.5 |
+| | 4 | $\mathbf{93.81 \pm 1.16}$ | $\mathbf{95.82 \pm 0.31}$ | 833.1 | 280.1 | 22.3 |
+| | 8 | $93.55 \pm 0.93$ | $95.63 \pm 0.30$ | 828.7 | 281.1 | 26.2 |
+
+Table 5: Sensitivity of N-cache sizes on Wiki-talk.
+
+### 5.3 Further Analysis
+
+Ablation study. We conduct ablation studies on the effectiveness of the N-caches. Table 4 shows the results of removing the second-hop N-caches ${Z}_{u}^{(2)}$ and of removing both the first-hop and second-hop N-caches ${Z}_{u}^{(1)}, {Z}_{u}^{(2)}$. As expected, dropping the N-caches reduces the training time, the inference time, and the GPU cost. However, it also results in prediction performance degradation. Removing only ${Z}_{u}^{(2)}$ can hurt performance by up to $1\%$. Removing both ${Z}_{u}^{(1)}$ and ${Z}_{u}^{(2)}$ and keeping only the self-representation drops the performance significantly, especially in inductive settings. Keeping only the self-representation is analogous to baselines such as TGN that keep a memory state. However, since we use a smaller dimension, usually between 32 and 72, the self-representation alone does not generalize well on these datasets. Ablation studies on other components, including joint neighborhood structural features, T-encoding, RNNs, and DE, are detailed in Table 8 (in the appendix).
+
+Sensitivity of the sizes of N-caches. Since N-caches account for most of the GPU memory consumption, we study how the memory size correlates with the model performance on Wiki-talk. We compare the performance between different values of ${M}_{1}, {M}_{2}$, and $F$ of the N-caches. The baseline has ${M}_{1} = 16, {M}_{2} = 0$, and $F = 4$, and we study each parameter by fixing the other two. Table 5 details the changes in model performance. We report the same study for the Ubuntu dataset in Appendix Table 9.
+
+We can see that the GPU memory cost scales close to linearly with all parameter changes. However, increasing the model size does not necessarily improve the performance. Changing ${M}_{1}$ to either a smaller or a larger value may decrease both the transductive and the inductive performance. Increasing ${M}_{2}$ boosts the transductive performance but hurts the inductive performance. In general, the performance is less sensitive to ${M}_{2}$ than to ${M}_{1}$. Lastly, a larger $F$ could overfit the model, as we see a slight drop in inductive prediction with the largest $F$. Overall, training and inference time remain stable because of the parallelization of NAT. Interestingly, with larger ${M}_{1}$ and ${M}_{2}$, we sometimes even see a decrease in running time. We hypothesize this is because larger caches avoid some hash collisions and thus short-circuit N-cache overwriting steps.
+
+Sensitivity of the overwriting probability $\alpha$. We also experiment on $\alpha$ to study whether the N-cache refresh frequency affects prediction quality. Here, we use a large dataset, Ubuntu, and a medium dataset, Reddit. Results can be found in Fig. 4. For Ubuntu, we change the original sizes to ${M}_{1} = 4, {M}_{2} = 1, F = 4$, and for Reddit, we change them to ${M}_{1} = 16, {M}_{2} = 2, F = 8$, to increase the number of potential collisions so that the effect of $\alpha$ can be better observed. On both datasets, we see an overall trend that a larger $\alpha$ gives better transductive performance. However, $\alpha = 1$, i.e., always replacing old neighbors, is slightly worse than the optimal $\alpha$. This pattern shows that the neighborhood information has to be kept up to date to achieve good performance, while some randomness is useful because it preserves interactions from more diverse time ranges. The inductive performance is relatively more sensitive to the selection of $\alpha$. We do not find a case where having two different probabilities for replacing ${Z}_{u}^{(1)}$ and ${Z}_{u}^{(2)}$ significantly benefits model performance, so we use a single $\alpha$ for the N-caches of different hops to keep the design simple.
+
+## 6 Conclusion and Future Works
+
+In this work, we proposed NAT, the first method that adopts dictionary-type node representations to track the neighborhoods of nodes in temporal networks. Such representations support the efficient construction of neighborhood structural features, which are crucial for predicting how a temporal network evolves. NAT also introduces N-caches to maintain these representations in a parallel-computation-friendly way. Our extensive experiments demonstrate the effectiveness of NAT in both prediction performance and scalability. In the future, we plan to extend NAT to process even larger networks that cannot be held entirely in GPU memory.
+
+References
+
+[1] Petter Holme and Jari Saramäki. Temporal networks. Physics reports, 519(3), 2012. 1
+
+[2] Georg Simmel. The sociology of georg simmel, volume 92892. Simon and Schuster, 1950. 1, 2
+
+[3] Austin R Benson, Rediet Abebe, Michael T Schaub, Ali Jadbabaie, and Jon Kleinberg. Simplicial closure and higher-order link prediction. Proceedings of the National Academy of Sciences, 115(48):E11221-E11230, 2018. 1
+
+[4] Yunyu Liu, Jianzhu Ma, and Pan Li. Neural predicting higher-order patterns in temporal networks. In WWW, 2022. 3, 4
+
+[5] Ryan A Rossi, Anup Rao, Sungchul Kim, Eunyee Koh, Nesreen K Ahmed, and Gang Wu. Higher-order ranking and link prediction: From closing triangles to closing higher-order motifs. In WWW, 2020.
+
+[6] Lauri Kovanen, Márton Karsai, Kimmo Kaski, János Kertész, and Jari Saramäki. Temporal motifs in time-dependent networks. Journal of Statistical Mechanics: Theory and Experiment, 2011. 1
+
+[7] Stephen Ranshous, Shitian Shen, Danai Koutra, Steve Harenberg, Christos Faloutsos, and Nagiza F Samatova. Anomaly detection in dynamic networks: a survey. Wiley Interdisciplinary Reviews: Computational Statistics, 7(3):223-247, 2015. 1
+
+[8] Andrew Z Wang, Rex Ying, Pan Li, Nikhil Rao, Karthik Subbian, and Jure Leskovec. Bipartite dynamic representations for abuse detection. In KDD, pages 3638-3648, 2021.
+
+[9] Pan Li, Yen-Yu Chang, Rok Sosic, MH Afifi, Marco Schweighauser, and Jure Leskovec. F-fade: Frequency factorization for anomaly detection in edge streams. In WSDM, 2021. 1
+
+[10] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7), 2007. 1
+
+[11] Yehuda Koren. Collaborative filtering with temporal dynamics. In KDD, pages 447-456, 2009. 1
+
+[12] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1), 2008. 1
+
+[13] Victor Fung, Jiaxin Zhang, Eric Juarez, and Bobby G Sumpter. Benchmarking graph neural networks for materials chemistry. npj Computational Materials, 7(1):1-8, 2021. 1
+
+[14] Xiangyang Ju, Steven Farrell, Paolo Calafiura, Daniel Murnane, Lindsey Gray, Thomas Kli-jnsma, Kevin Pedro, Giuseppe Cerati, Jim Kowalkowski, Gabriel Perdue, et al. Graph neural networks for particle reconstruction in high energy physics detectors. In NeurIPS, 2019.
+
+[15] Tianchun Li, Shikun Liu, Yongbin Feng, Nhan Tran, Miaoyuan Liu, and Pan Li. Semi-supervised graph neural network for particle-level noise removal. In NeurIPS 2021 AI for Science Workshop, 2021. 1
+
+[16] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In ICML, 2019. 1
+
+[17] Balasubramaniam Srinivasan and Bruno Ribeiro. On the equivalence between positional node embeddings and structural graph representations. In ICLR, 2020. 3
+
+[18] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. In NeurIPS, 2020. 2,3,6
+
+[19] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. In NeurIPS, 2021. 1, 2, 3, 6
+
+[20] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. In ICML 2020 Workshop on GRL, 2020. 1, 3, 4, 6, 7, 15, 16
+
+[21] Ehsan Hajiramezanali, Arman Hasanzadeh, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. Variational graph recurrent neural networks. In NeurIPS, 2019. 3
+
+[22] Palash Goyal, Sujit Rokka Chhetri, and Arquimedes Canedo. dyngraph2vec: Capturing network dynamics using dynamic graph representation learning. Knowledge-Based Systems, 187, 2020.
+
+[23] Franco Manessi, Alessandro Rozza, and Mario Manzo. Dynamic graph convolutional networks. Pattern Recognition, 97, 2020.
+
+[24] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kaneza-shi, Tim Kaler, Tao B Schardl, and Charles E Leiserson. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. In AAAI, 2020.
+
+[25] Jiaxuan You, Tianyu Du, and Jure Leskovec. Roland: Graph learning framework for dynamic graphs. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2358-2366, 2022.
+
+[26] Aravind Sankar, Yanhong Wu, Liang Gou, Wei Zhang, and Hao Yang. DySAT: Deep neural representation learning on dynamic graphs via self-attention networks. In WSDM, 2020. 3
+
+[27] Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. DyRep: Learning representations over dynamic graphs. In ICLR, 2019. 3, 7, 15
+
+[28] Srijan Kumar, Xikun Zhang, and Jure Leskovec. Predicting dynamic embedding trajectory in temporal interaction networks. In KDD, 2019. 3, 7, 15, 16
+
+[29] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graphs. In ICLR, 2020. 1, 3, 4, 6, 15, 16
+
+[30] Liming Pan, Cheng Shi, and Ivan Dokmanić. Neural link prediction with walk pooling. In International Conference on Learning Representations, 2022. 2
+
+[31] Jiaxuan You, Jonathan Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In AAAI, 2021. 2
+
+[32] Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. Neural bellman-ford networks: A general graph neural network framework for link prediction. In NeurIPS, 2021. 2
+
+[33] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In NeurIPS, 2018. 2
+
+[34] Yanbang Wang, Yen-Yu Chang, Yunyu Liu, Jure Leskovec, and Pan Li. Inductive representation learning in temporal networks via causal anonymous walks. In ICLR, 2021. 2, 3, 4, 5, 6, 14
+
+[35] Purnamrita Sarkar, Deepayan Chakrabarti, and Michael I Jordan. Nonparametric link prediction in dynamic networks. In ICML, 2012. 2
+
+[36] Ghadeer AbuOda, Gianmarco De Francisci Morales, and Ashraf Aboulnaga. Link prediction via higher-order motif features. In ECML PKDD, pages 412-429. Springer, 2019. 2
+
+[37] Krzysztof Juszczyszyn, Katarzyna Musial, and Marcin Budka. Link prediction based on subgraph evolution in dynamic social networks. In 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, pages 27-34. IEEE, 2011. 2
+
+[38] Le-kui Zhou, Yang Yang, Xiang Ren, Fei Wu, and Yueting Zhuang. Dynamic network embedding by modeling triadic closure process. In AAAI, 2018. 2
+
+[39] Lun Du, Yun Wang, Guojie Song, Zhicong Lu, and Junshan Wang. Dynamic network embedding: An extended approach for skip-gram based network embedding. In IJCAI, 2018.
+
+[40] Sedigheh Mahdavi, Shima Khoshraftar, and Aijun An. dynnode2vec: Scalable dynamic network embedding. In International Conference on Big Data (Big Data). IEEE, 2018.
+
+[41] Uriel Singer, Ido Guy, and Kira Radinsky. Node embedding over temporal graphs. In IJCAI, 2019.
+
+[42] Giang Hoang Nguyen, John Boaz Lee, Ryan A Rossi, Nesreen K Ahmed, Eunyee Koh, and Sungchul Kim. Continuous-time dynamic network embeddings. In WWW, 2018. 2
+
+[43] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In OSDI, pages 265-283, 2016. 2
+
+[44] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, volume 32, 2019. 2
+
+[45] Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-evolve: deep temporal reasoning for dynamic knowledge graphs. In ICML, 2017. 3
+
+[46] Xuhong Wang, Ding Lyu, Mengjian Li, Yang Xia, Qi Yang, Xinwen Wang, Xinguang Wang, Ping Cui, Yupu Yang, and Bowen Sun. Apan: Asynchronous propagation attention network for real-time temporal graph embedding. In Proceedings of the 2021 International Conference on Management of Data, pages 2628-2638, 2021. 3
+
+[47] Hongkuan Zhou, Da Zheng, Israt Nisa, Vasileios Ioannidis, Xiang Song, and George Karypis. Tgl: A general framework for temporal gnn training on billion-scale graphs. In Proceedings of the VLDB Endowment, 2022. 3, 16
+
+[48] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Self-attention with functional time representation learning. In NeurIPS, 2019. 4
+
+[49] Seyed Mehran Kazemi, Rishab Goel, Sepehr Eghbali, Janahan Ramanan, Jaspreet Sahota, Sanjay Thakur, Stella Wu, Cathal Smyth, Pascal Poupart, and Marcus Brubaker. Time2vec: Learning a vector representation of time. arXiv preprint arXiv:1907.05321, 2019. 4
+
+[50] Mark EJ Newman. Clustering and preferential attachment in growing networks. Physical Review E, 64(2):025102, 2001. 5
+
+[51] Hawoong Jeong, Zoltan Néda, and Albert-László Barabási. Measuring preferential attachment in evolving networks. EPL (Europhysics Letters), 61(4):567, 2003. 5
+
+[52] Haoteng Yin, Muhan Zhang, Yanbang Wang, Jianguo Wang, and Pan Li. Algorithm and system co-design for efficient subgraph-based graph representation learning. Proceedings of the VLDB Endowment, 15, 2022. 5
+
+[53] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, 2017. 5
+
+[54] Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. Graphsaint: Graph sampling based inductive learning method. In ICLR, 2020.
+
+[55] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In KDD, 2019. 5
+
+[56] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 7
+
+[57] The Git Commit That Attempts to Fix an Attention Bug in CAWN But Causes Under-performance in Multiple Datasets. 7, 8, 14
+
+[58] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018. 15
+
+
+
+Figure 5: The procedure to find unique node IDs and the indices for pooling, which are used for parallel construction of DEs and joint representations.
+
+Algorithm 2: Construct Joint Neighborhood Features $\left( {{Z}_{u}^{\left( k\right) },{Z}_{v}^{\left( k\right) }\text{for}k \in \{ 0,1,2\} }\right)$
+
+---
+
+${\mathrm{KEY}}_{uv} \leftarrow \operatorname{concat}\left( {s}_{u}^{(k)} \text{ for } k \in \{0,1,2\}, \; {s}_{v}^{(k)} \text{ for } k \in \{0,1,2\} \right)$;
+
+${\mathrm{VALUE}}_{uv} \leftarrow \operatorname{concat}\left( \operatorname{value}\left( {Z}_{u}^{(k)} \right) \text{ for } k \in \{0,1,2\}, \; \operatorname{value}\left( {Z}_{v}^{(k)} \right) \text{ for } k \in \{0,1,2\} \right)$;
+
+${s}_{uv} \leftarrow$ remove the EMPTY entries from ${\mathrm{KEY}}_{uv}$;
+
+Remove the corresponding entries from ${\mathrm{VALUE}}_{uv}$;
+
+${\mathcal{N}}_{uv} \leftarrow \operatorname{unique}\left( {s}_{uv} \right)$, ${\phi}_{uv} \leftarrow$ the index in ${\mathcal{N}}_{uv}$ of each entry of ${s}_{uv}$;
+
+Initialize ${Q}_{uv}$ with $\operatorname{length}\left( {\mathcal{N}}_{uv} \right)$ vectors as in Eq. (2); // to aggregate nbr. representations
+
+Scatter-add ${\mathrm{VALUE}}_{uv}$ into ${Q}_{uv}$ according to the indices ${\phi}_{uv}$;
+
+Initialize ${\mathrm{DE}}_{u}, {\mathrm{DE}}_{v}$ with $\operatorname{length}\left( {\mathcal{N}}_{uv} \right)$ vectors;
+
+for $i$ from 0 to $\operatorname{length}\left( {\mathcal{N}}_{uv} \right)$, in parallel (implemented with scatter-add using the indices ${\phi}_{uv}$), do
+
+&nbsp;&nbsp; for $w \in \{u, v\}$ do
+
+&nbsp;&nbsp;&nbsp;&nbsp; ${\mathrm{DE}}_{w}[i] \leftarrow \left[ \textbf{if } {\mathcal{N}}_{uv}[i] \text{ is one of } {s}_{w}^{(k)} \textbf{ then } 1 \textbf{ else } 0, \text{ for } k \in \{0,1,2\} \right]$;
+
+Return $\operatorname{concat}\left( {\mathrm{DE}}_{u}, {\mathrm{DE}}_{v}, {Q}_{uv} \right)$ along the last dimension;
+
+---
+
+## A Efficient Joint Neighborhood Features Implementation
+
+Here, we detail the efficient implementation that generates joint neighborhood structural features based on N-Caches as introduced in Sec. 4.2. This implementation is summarized in Alg. 2.
+
+Both DE (Eq. (1)) and representation aggregation (Eq. (2)) can be done for multiple nodes in parallel on GPUs using PyTorch built-in functions. Specifically, for a mini-batch of temporal links $B = \{ \ldots, (u, v, t), \ldots \}$, NAT first collects the union of the current neighborhoods for each end node, ${s}_{u} = {\oplus}_{k = 1}^{K} {s}_{u}^{(k)}$, ${s}_{v} = {\oplus}_{k = 1}^{K} {s}_{v}^{(k)}$, for all $(u, v, t) \in B$. Then, NAT follows the steps of Fig. 5: (1) remove the empty entries in the joint neighborhood ${s}_{u} \oplus {s}_{v}$ with the PyTorch function nonzero, and denote the result ${s}_{uv}$; (2) find the unique nodes ${\mathcal{N}}_{uv}$ in the joint neighborhood ${s}_{uv}$; (3) generate the array ${\phi}_{uv}$, which stores the index in ${\mathcal{N}}_{uv}$ of each node in ${s}_{uv}$ (the last two steps can be computed with the PyTorch function unique with the parameter return_inverse set to true); (4) compute the DE features and the aggregated neighborhood features via the scatter_add operation with the indices recorded in ${\phi}_{uv}$. All these operations support GPU parallel computation.
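+A minimal sketch of steps (1)-(4) using these PyTorch built-ins is shown below; the tensor names, the toy key/value contents, and the use of node ID 0 as the EMPTY sentinel are assumptions for illustration.
+
+```python
+# Sketch: parallel construction of pooled neighborhood representations with unique/scatter_add.
+import torch
+
+F = 4
+keys = torch.tensor([3, 0, 7, 3, 9, 0, 7])      # concatenated key arrays s_u (+) s_v; 0 marks EMPTY
+values = torch.randn(keys.shape[0], F)          # corresponding N-cache value vectors
+
+# (1) drop empty entries
+mask = keys.nonzero(as_tuple=True)[0]
+s_uv, val_uv = keys[mask], values[mask]
+
+# (2)+(3) unique node IDs and, for each entry, its index in the unique list
+n_uv, phi_uv = torch.unique(s_uv, return_inverse=True)
+
+# (4) scatter-add the value vectors into per-unique-node pooled representations Q_uv
+q_uv = torch.zeros(n_uv.shape[0], F)
+q_uv.scatter_add_(0, phi_uv.unsqueeze(-1).expand(-1, F), val_uv)
+```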
+
+## B Dataset Description
+
+The following are the detailed descriptions of the seven datasets we tested.
+
+| Task | Method | Wikipedia | Reddit | Social E. $1\mathrm{\;m}$ . | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
| Inductive | CAWN | ${98.16} \pm {0.06}$ | ${97.97} \pm {0.01}$ | ${78.36} \pm {2.94}$ | ${50.00} \pm {0.00}$ | ${94.29} \pm {0.15}$ | ${79.35} \pm {0.48}$ | ${50.00} \pm {0.00}$ | ${50.00} \pm {0.00}$ |
| JODIE | ${95.16} \pm {0.42}$ | ${96.31} \pm {0.16}$ | ${85.16} \pm {1.24}$ | ${86.14} \pm {0.67}$ | ${82.56} \pm {1.88}$ | ${85.02} \pm {0.38}$ | ${52.41} \pm {5.80}$ | ${65.94} \pm {4.26}$ |
| DyRep | ${93.97} \pm {0.18}$ | ${96.86} \pm {0.29}$ | ${84.38} \pm {1.69}$ | ${49.84} \pm {0.35}$ | ${76.69} \pm {2.64}$ | ${67.36} \pm {1.47}$ | ${53.22} \pm {0.03}$ | ${50.37} \pm {0.42}$ |
| TGN | ${97.84} \pm {0.06}$ | ${97.63} \pm {0.09}$ | ${88.43} \pm {0.38}$ | ${70.86} \pm {10.30}$ | ${75.28} \pm {1.81}$ | ${81.65} \pm {1.44}$ | ${62.98} \pm {3.36}$ | ${59.24} \pm {2.34}$ |
| TGN-pg | ${94.96} \pm {0.33}$ | ${94.53} \pm {3.04}$ | ${63.17} \pm {4.69}$ | ${90.24} \pm {3.72}$ | ${67.99} \pm {1.78}$ | ${86.02} \pm {3.34}$ | ${74.85} \pm {1.44}$ | ${83.25} \pm {2.96}$ |
| TGAT | ${97.25} \pm {0.18}$ | ${96.37} \pm {0.10}$ | ${51.23} \pm {0.69}$ | ${50.0} \pm {0.00}$ | ${55.86} \pm {1.01}$ | ${70.83} \pm {0.58}$ | ${55.73} \pm {6.47}$ | ${74.50} \pm {3.71}$ |
| NAT | $\mathbf{{98.27} \pm {0.12}}$ | $\mathbf{{98.56} \pm {0.21}}$ | $\mathbf{{92.62} \pm {1.66}}$ | $\mathbf{{96.13} \pm {0.46}}$ | ${95.25} \pm {1.37}$ | $\mathbf{{90.18} \pm {1.30}}$ | $\mathbf{{87.72} \pm {0.28}}$ | $\mathbf{{92.73} \pm {1.35}}$ |
| Transductive | CAWN | ${98.39} \pm {0.08}$ | ${98.64} \pm {0.04}$ | ${79.59} \pm {0.32}$ | ${50.00} \pm {0.00}$ | ${92.32} \pm {0.26}$ | ${81.76} \pm {0.18}$ | ${50.00} \pm {0.00}$ | ${50.00} \pm {0.00}$ |
| JODIE | ${96.05} \pm {0.39}$ | ${97.63} \pm {0.05}$ | ${82.36} \pm {0.87}$ | ${76.87} \pm {0.32}$ | ${85.28} \pm {2.25}$ | ${91.69} \pm {0.40}$ | ${52.61} \pm {2.50}$ | ${73.32} \pm {4.37}$ |
| DyRep | ${95.34} \pm {0.18}$ | ${97.93} \pm {0.20}$ | ${80.58} \pm {3.55}$ | ${50.05} \pm {3.64}$ | ${79.28} \pm {1.84}$ | ${72.62} \pm {2.01}$ | ${52.38} \pm {0.02}$ | ${69.89} \pm {2.67}$ |
| TGN | ${98.42} \pm {0.05}$ | ${98.65} \pm {0.03}$ | ${90.37} \pm {0.40}$ | ${73.08} \pm {9.74}$ | ${82.08} \pm {4.36}$ | ${89.54} \pm {1.58}$ | ${54.13} \pm {2.52}$ | ${76.07} \pm {5.28}$ |
| TGN-pg | ${97.06} \pm {0.09}$ | ${98.58} \pm {0.08}$ | ${66.89} \pm {7.90}$ | ${66.14} \pm {10.7}$ | ${81.23} \pm {2.80}$ | ${91.16} \pm {0.30}$ | ${89.59} \pm {0.42}$ | ${93.69} \pm {0.06}$ |
| TGAT | ${96.65} \pm {0.06}$ | ${98.07} \pm {0.08}$ | ${56.98} \pm {0.53}$ | ${50.00} \pm {0.00}$ | ${62.08} \pm {1.08}$ | ${79.85} \pm {0.24}$ | ${57.23} \pm {6.55}$ | ${81.82} \pm {1.87}$ |
| NAT | ${98.51} \pm {0.05}$ | ${99.01} \pm {0.11}$ | $\mathbf{{91.77} \pm {0.19}}$ | $\mathbf{{93.63} \pm {0.36}}$ | $\mathbf{{93.08} \pm {0.18}}$ | $\mathbf{{92.08} \pm {0.18}}$ | $\mathbf{{92.62} \pm {0.10}}$ | $\mathbf{{95.33} \pm {0.26}}$ |
+
+Table 6: Performance in AUC (mean in percentage $\pm {95}\%$ confidence level). Bold font and underline highlight the best and the second-best average performance, respectively. Timeout means that training for one epoch takes more than one hour.
+
+| Params | Wikipedia | Reddit | Social E. $1\mathrm{\;m}$ . | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
| ${M}_{1}$ | 32 | 32 | 40 | 40 | 32 | 32 | 16 | 16 |
| ${M}_{2}$ | 16 | 16 | 20 | 20 | 16 | 16 | 2 | 0 |
| $F$ | 4 | 4 | 2 | 2 | 2 | 2 | 4 | 4 |
| $\left( {{M}_{1} + {M}_{2}}\right) * F$ | 192 | 192 | 120 | 120 | 96 | 96 | 72 | 64 |
| Self Rep. Dim. | 72 | 72 | 32 | 72 | 72 | 32 | 50 | 72 |
+
+Table 7: Hyperparameters of NAT.
+
+- Wikipedia ${}^{1}$ logs the edit events on wiki pages. A set of nodes represents the editors and another set represents the wiki pages. It is a bipartite graph which has timestamped links between the two sets. It has both node and edge features. The edge features are extracted from the contents of wiki pages.
+
+- Reddit ${}^{2}$ is a dataset of the post events by users on subreddits. It is also an attributed bipartite graph between users and subreddits.
+
+- Social Evolution ${}^{3}$ records physical proximity between students living in a dormitory over time. The original dataset spans one year, but CAWN [34] fails to perform well on large datasets, probably because of a recent code change addressing a bug [57]. To compare the performance, we split out one month of the data, termed Social Evolve $1\mathrm{\;m}$ ., and evaluate all baselines on it.
+
+- Enron ${}^{4}$ is a network of email communications between employees of a corporation.
+
+- ${\mathrm{{UCI}}}^{5}$ is a graph recording posts to an online forum. The nodes are university students and the edges are forum messages. It is non-attributed.
+
+- Ubuntu ${}^{6}$ , or Ask Ubuntu, is a dataset recording the interactions on the Stack Exchange website Ask Ubuntu ${}^{7}$ . Nodes are users and there are three different types of edges: (1) user $u$ answering user $v$ ’s question, (2) user $u$ commenting on user $v$ ’s question, and (3) user $w$ commenting on user $u$ ’s answer. It is a relatively large dataset with more than ${100}\mathrm{\;K}$ nodes.
+
+- Wiki-talk ${}^{8}$ is a dataset representing the edit events on Wikipedia user talk pages. The dataset spans approximately 5 years, so it accumulates a large number of nodes and edges. This is the largest dataset, with more than $1\mathrm{M}$ nodes.
+
+## C Baselines and the Experiment Setup
+
+CAWN [34] with source code provided here is a very recent work that samples temporal random walks and anonymizes node identities to extract motif information. It backtracks historical events to extract neighboring nodes. It achieves high prediction performance, but it is both time-consuming and memory-intensive. We pull the most recent commit from their repository. When measuring the CPU usage, we also notice a garbage collection bug: it causes the CPU memory consumption to keep increasing after every batch and every epoch without any decrease. We fix the bug such that the CPU memory remains constant, and our metrics in Table 3 are recorded based on this fix. We tune the walk length to either 1 or 2. For Wikipedia, Reddit and SocialEvolve we use a walk length of two, and for the others we use only first-hop neighbors. We tune the sampling size of the first walk step between 20 and 64, and that of the second between 1 and 32.
+
+---
+
+${}^{1}$ http://snap.stanford.edu/jodie/wikipedia.csv
+
+${}^{2}$ http://snap.stanford.edu/jodie/reddit.csv
+
+${}^{3}$ http://realitycommons.media.mit.edu/socialevolution.html
+
+${}^{4}$ https://www.cs.cmu.edu/~./enron/
+
+${}^{5}$ http://konect.cc/networks/opsahl-ucforum/
+
+${}^{6}$ https://snap.stanford.edu/data/sx-askubuntu.html
+
+${}^{7}$ http://askubuntu.com/
+
+${}^{8}$ https://snap.stanford.edu/data/wiki-talk-temporal.html
+
+---
+
+| No. | Ablation | Task | Social E. | Ubuntu |
| 1. | remove T-encoding | inductive | $- {0.74} \pm {1.01}$ | $- {1.54} \pm {0.10}$ |
|  |  | transductive | $- {1.10} \pm {0.31}$ | $- {1.25} \pm {0.54}$ |
| 2. | remove RNN | inductive | $- {1.18} \pm {0.87}$ | $- {1.19} \pm {0.86}$ |
|  |  | transductive | $- {1.26} \pm {0.50}$ | $- {5.68} \pm {4.45}$ |
| 3. | remove attention | inductive | $- {0.77} \pm {1.14}$ | $- {0.28} \pm {0.16}$ |
|  |  | transductive | $- {0.39} \pm {0.43}$ | $- {0.01} \pm {0.20}$ |
| 4. | remove $\mathrm{{DE}}$ | inductive | $- {3.78} \pm {2.14}$ | $- {5.67} \pm {2.87}$ |
|  |  | transductive | $- {3.43} \pm {1.64}$ | $- {1.55} \pm {0.16}$ |
+
+Table 8: Ablation study with other modules of NAT (changes recorded w.r.t Table 2).
+
+| Param | Size | Inductive | Transductive | Train (s) | Test (s) | GPU (%) |
| ${M}_{1}$ | 8 | ${89.50} \pm {0.37}$ | ${93.56} \pm {0.30}$ | 124.4 | 41.1 | 9.85 |
|  | 16 | ${90.35} \pm {0.20}$ | ${93.50} \pm {0.34}$ | 125.8 | 41.2 | 10.1 |
|  | 24 | ${88.39} \pm {0.46}$ | ${93.37} \pm {0.46}$ | 123.5 | 41.1 | 11.0 |
| ${M}_{2}$ | 2 | ${90.35} \pm {0.20}$ | ${93.50} \pm {0.34}$ | 125.8 | 41.2 | 10.1 |
|  | 4 | ${89.86} \pm {0.46}$ | ${93.46} \pm {0.27}$ | 125.7 | 41.5 | 10.2 |
|  | 8 | ${89.33} \pm {0.40}$ | $\mathbf{{93.50} \pm {0.27}}$ | 124.7 | 40.9 | 10.5 |
| $F$ | 2 | ${88.82} \pm {1.64}$ | ${93.51} \pm {0.17}$ | 124.6 | 41.3 | 9.69 |
|  | 4 | ${90.35} \pm {0.20}$ | ${93.50} \pm {0.34}$ | 125.8 | 41.2 | 10.1 |
|  | 8 | ${90.29} \pm {0.33}$ | ${93.42} \pm {0.18}$ | 125.2 | 41.2 | 11.0 |
+
+Table 9: Sensitivity of N-cache sizes on Ubuntu.
+
+JODIE [28] with source code provided here is a method that learns the embeddings of evolving trajectories based on past interactions. Its backbone is RNNs. It was proposed for bipartite networks, so we adapt the model to non-bipartite temporal networks using the TGN framework: we use a time embedding module and a vanilla RNN as the memory update module. We use 100 dimensions for its dynamic embedding, which is around the same scale as the other models and provides a fair comparison on both performance and scalability.
+
+DyRep [27] with source code provided here proposes a two-time-scale deep temporal point process model that learns the dynamics of graphs both structurally and temporally. We set the gradient clipping value to 100, and both the hidden size and the embedding size to 100, for a fair comparison on both performance and scalability.
+
+TGN [20] with source code provided here is a very recent work as well. It does not perform as well as CAWN on certain datasets, but it runs much more efficiently. It keeps track of a memory state for each node and updates it with new interactions. We train TGN with 300 dimensions in total across the memory module, time feature and node embedding, and we only sample first-hop neighbors because training with second-hop neighbors takes much longer without significant performance improvements.
+
+TGN-pg with source code provided in the PyTorch Geometric library ${}^{9}$ here (the link gives an example use of the library code) follows the same model design as TGN. However, it is much more efficient than TGN because it is more parallelized. Like TGN, we use 300 dimensions in total for all datasets except the largest dataset Wiki-talk, for which, given the limited GPU memory (11 GB), we have to reduce the total to 75 dimensions so that the model fits in GPU memory.
+
+TGAT with source code provided here is analogous to GAT [58] for static graphs, which leverages the attention mechanism for graph message passing. TGAT incorporates temporal encoding into the pipeline. Similar to CAWN, TGAT also has to sample neighbors from the history. We use 2 attention heads and 100 hidden dimensions. We tune with either 1 or 2 graph attention layers and sampling sizes between 20 and 64.
+
+NAT Since our model can trade off performance against scalability, we tune the model under an upper bound on the GPU memory we consider acceptable. Thus, the major parameters we tune are related to the $\mathrm{N}$ -cache sizes: ${M}_{1},{M}_{2}$ and $F$ . During tuning, we try to keep $\left( {{M}_{1} + {M}_{2}}\right) * F$ the same, and we make sure that NAT’s GPU consumption stays at the same level as the baselines for all datasets. For example, for the large-scale dataset Wiki-talk, the estimated upper bound for GPU is based on the consumption of the other baselines as presented in Table 3. The resulting hyperparameter values are given in Table 7. We tune the number of attention heads in the final output layer from 1 to 8 and the overwriting probability for hashing collisions $\alpha$ from 0 to 1. We eventually keep $\alpha = {0.9}$ as it gives good results for all datasets. Regarding the choice of RNN, we test both GRU and LSTM; GRU performs better and runs faster.
+
+### C.1 Inductive evaluation of NAT
+
+Our evaluation pipeline for inductive learning differs from the others by one added step. For other sampling-based methods such as TGN [20] and TGAT [29], during inductive evaluation the entire training and evaluation data is available to be accessed, including events that are masked for the inductive test: they sample neighbors of test nodes from their historical interactions to obtain neighborhood information. However, NAT does not depend on sampling. Instead, NAT adopts N-caches for quick access to neighborhood information, so it cannot build up the N-caches for the masked nodes during the training stage of inductive tasks. By the end of training, even though all historical events become accessible, NAT cannot leverage them unless they have been aggregated into the N-caches. Therefore, to ensure a fair comparison, after training, NAT processes the full training and validation data with all nodes unmasked, and then processes the test data. Note that in this last pass over the full training and validation data, we do not perform any training.
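+
+This added step can be summarized with the following minimal sketch (an assumption for illustration only, not the released code); `update_ncaches` and `score_fn` are hypothetical names standing for the N-cache maintenance of Alg. 1 and an AP/AUC scorer, respectively.
+
+```python
+import torch
+
+def inductive_evaluate(model, train_val_stream, test_stream, score_fn):
+    # Replay the full train+validation events with all nodes unmasked purely to
+    # warm up the N-caches (no parameter updates), then score the test links.
+    model.eval()
+    with torch.no_grad():
+        for (u, v, t, e) in train_val_stream:
+            model.update_ncaches(u, v, t, e)   # hypothetical: only the N-caches are refreshed
+        return score_fn(model, test_stream)    # e.g., AP / AUC over the test links
+```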
+
+---
+
+${}^{9}$ https://github.com/pyg-team/pytorch_geometric
+
+---
+
+| Method | Wikipedia | Reddit | Social E. $1\mathrm{\;m}$ . | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
| TGN-TGL | ${99.18} \pm {0.26}$ | ${99.67} \pm {0.05}$ | ${83.51} \pm {1.20}$ | ${86.14} \pm {1.45}$ | ${70.96} \pm {2.98}$ | ${86.99} \pm {2.69}$ | ${81.15} \pm {0.55}$ | ${86.60} \pm {0.32}$ |
| NAT-2-hop | ${98.68} \pm {0.04}$ | ${99.10} \pm {0.09}$ | $\mathbf{{90.20} \pm {0.20}}$ | $\mathbf{{91.75} \pm {0.37}}$ | $\mathbf{{92.42} \pm {0.09}}$ | $\mathbf{{93.92} \pm {0.15}}$ | $\mathbf{{93.50} \pm {0.34}}$ | - |
| NAT-1-hop | ${98.60} \pm {0.04}$ | ${98.94} \pm {0.08}$ | ${88.07} \pm {0.13}$ | ${90.77} \pm {0.26}$ | ${90.67} \pm {0.13}$ | ${93.28} \pm {0.17}$ | ${93.48} \pm {0.34}$ | ${95.82} \pm {0.31}$ |
+
+Table 10: Comparison of the transductive average precision between TGN implemented with TGL and NAT.
+
+| Dataset | Method | Train | Test | Total | RAM | GPU | Epoch |
| Ubuntu | TGN-TGL | 100.5 | 38.3 | 1,506 | 40.8 | 19.0 | 7.0 |
| NAT-2-hop | 125.8 | 41.2 | 1,321 | 28.9 | 10.1 | 5.4 |
| NAT-1-hop | 111.3 | 35.7 | 927 | 21.9 | 9.95 | 3.0 |
| Wiki-talk | TGN-TGL | 809.7 | 310.0 | 9,157 | 43.8 | 26.5 | 3.7 |
| NAT-1-hop | 833.1 | 280.1 | 7,802 | 37.1 | 22.3 | 2.7 |
+
+Table 11: Scalability evaluation on Ubuntu and Wiki-talk between TGN implemented with TGL and NAT.
+
+## D Additional Experiments
+
+Further Ablation Study. We further conduct ablation experiments on other components related to modeling capability, as shown in Table 8. For Ab. 1, 2, 3, and 4, we remove the temporal encoding, replace the RNN with a linear layer, replace the final attention layer with mean aggregation, and remove the distance encoding, respectively. All ablations produce worse results. For both datasets, removing the distance encoding has a significant impact, as the model then fails to learn from joint neighborhood structures. Removing the RNN generally hurts performance more than removing the temporal encoding. We think this is because the RNN is critical for encoding temporal dependencies and can implicitly encode temporal information given a series of edges. Overall, we conclude that these modules are all helpful, to some extent, for achieving high performance.
+
+More on the Sensitivity of N-cache Sizes. We further test the sensitivity of the N-cache sizes on the Ubuntu dataset, as shown in Table 9. Similar to the study on Wiki-talk, the GPU memory cost scales almost linearly while the model running time fluctuates. This provides further evidence that a larger model size does not guarantee better prediction performance. As on Wiki-talk, Ubuntu only needs a tiny $F$ for the model to be successful.
+
+## E One Concurrent Work
+
+TGL [47] is a concurrent work that was published very recently. TGL proposes a general framework for large-scale temporal graph neural network training. It aims to maintain the same level of prediction accuracy as the baseline models while providing speedups on training and evaluation. Its major contribution is support for parallelization over multiple GPUs, which enables training on billion-scale data. The models that this framework supports include TGN [20], JODIE [28], TGAT [29], etc. However, it neither supports the joint neighborhood features nor is it extendable to our dictionary-type representations. We conduct some experiments to compare TGL with our model.
+
+We pull the TGL framework from their repository. We compare NAT with TGN implemented within the framework, as it is the best-performing model they provide. Similar to TGN, we use an embedding dimension of 100 and follow the same setup as described in Sec. 5.1. We tune the sampling neighbor size to be around 10 to 40; if different sizes generate similar accuracy, we use the smaller size for the scalability comparison. We run TGN-TGL on a single GPU for a fair comparison with our model. Since TGL does not support inductive learning, we only evaluate the transductive tasks. Finally, we compare TGN-TGL not only with our default NAT model, but also with NAT using only the 1-hop N-cache. We document the prediction performance in Table 10 and the scalability metrics in Table 11. Although TGN-TGL gives marginally better scores on Wikipedia and Reddit, NAT performs much better on all other datasets $\left( {{5.6} - {21.5}\% }\right)$ . We think the reason is that, since both Wikipedia and Reddit have node and edge features, the ambiguity issue in the toy example of Fig. 1 is reduced. However, for the other datasets, TGN-TGL still suffers from failing to capture the structural features in the joint neighborhood.
+
+In terms of scalability, TGN-TGL trains faster than NAT on both Ubuntu and Wiki-talk, though TGN-TGL still uses a greater number of epochs and therefore a longer total time. On Ubuntu, when the 2-hop N-cache is involved, NAT has a longer inference time than TGN-TGL. However, when only the 1-hop N-cache is used, TGN-TGL takes 7% and 11% longer than NAT on Ubuntu and Wiki-talk, respectively. TGN-TGL performs almost all training procedures on the GPU and leverages the multi-core CPU to parallelize the sampling of temporal neighbors. However, because it still has to sample neighbors, TGN-TGL is slower than NAT on large networks during testing.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8b320efdc265efadd3aa18f13573ebae46dea73c
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/EPUtNe7a9ta/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,449 @@
+§ NEIGHBORHOOD-AWARE SCALABLE TEMPORAL NETWORK REPRESENTATION LEARNING
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Temporal networks have been widely used to model real-world complex systems such as financial systems and e-commerce systems. In a temporal network, the joint neighborhood of a set of nodes often provides crucial structural information useful for predicting whether they may interact at a certain time. However, recent representation learning methods for temporal networks often fail to extract such information or depend on online construction of structural features, which is time-consuming. To address this issue, this work proposes the Neighborhood-Aware Temporal network model (NAT). For each node in the network, NAT abandons the commonly-used single-vector-based representation and adopts a novel dictionary-type neighborhood representation. Such a dictionary representation records a down-sampled set of the neighboring nodes as keys, and allows fast construction of structural features for the joint neighborhood of multiple nodes. We also design a dedicated data structure termed $N$ -cache to support parallel access and update of those dictionary representations on GPUs. NAT is evaluated on seven real-world large-scale temporal networks. NAT not only outperforms all cutting-edge baselines by averaged ${5.9}\% \uparrow$ and ${6.0}\% \uparrow$ in transductive and inductive link prediction accuracy, respectively, but also remains scalable, achieving a speed-up of ${4.1} - {76.7} \times$ against the baselines that adopt joint structural features and a speed-up of ${1.6} - {4.0} \times$ against the baselines that cannot adopt those features. The link to the code: https://anonymous.4open.science/r/NAT-617D.
+
+§ 1 INTRODUCTION
+
+Temporal networks are widely used as abstractions of real-world complex systems [1]. They model interacting elements as nodes, interactions as links, and when those interactions happen as timestamps on those links. Temporal networks often evolve by following certain patterns. Ranging from triadic closure [2] to higher-order motif closure [3-6], the interacting behaviors between multiple nodes have been shown to strongly depend on the network structure of their joint neighborhood. Researchers have leveraged this observation to build many practical systems that monitor and make predictions on temporal networks, such as anomaly detection in financial networks [7-9], friend recommendation in social networks [10], and collaborative filtering techniques in e-commerce systems [11].
+
+Recently, graph neural networks (GNNs) have been widely used to encode network-structured data [12] and have achieved state-of-the-art (SOTA) performance in many tasks such as node/graph classification [13-15]. However, to predict how nodes interact with each other in temporal networks, a direct generalization of GNNs may not work well. Traditional GNNs often learn a vector representation for each node and predict whether two nodes may interact (aka. a link) based on a combination (e.g., the inner product) of the two vector representations. This link prediction strategy often fails to capture the structural features of the joint neighborhood of the two nodes [16-19]. Consider a toy example with a temporal network in Fig. 1: node $w$ and node $v$ share the same local structure before ${t}_{3}$ , so GNNs, including their variants on temporal networks (e.g., TGN [20]), will associate $w$ and $v$ with the same vector representation. Hence, GNNs will fail to correctly predict whether $u$ will interact with $w$ or $v$ at ${t}_{3}$ : they cannot capture the important joint structural feature that $u$ and $v$ have a common neighbor $a$ before ${t}_{3}$ . This issue makes almost all previous works that generalize GNNs to temporal networks provide only subpar performance [20-29].
+
+Some recent works have been proposed to address such an issue on static networks [18, 19, 30]. Their key idea is to construct node structural features to learn two-node joint neighborhood representations. Specifically, for two nodes of interest, they either label one linked node and construct its distance to the other node $\left\lbrack {{31},{32}}\right\rbrack$ , or label all nodes in the neighborhood with their distances to these two linked nodes $\left\lbrack {{18},{33}}\right\rbrack$ . Traditional GNNs can afterwards encode such a feature-augmented neighborhood to achieve better inference. Although these ideas are theoretically powerful [18, 19] and provide good empirical performance on small networks, the induced models do not scale to large networks. This is because constructing such structural features is time-consuming and must be done separately for each link to be predicted. This issue becomes even more severe over temporal networks, because two nodes may interact many times and thus the number of links to be predicted is often much larger than the corresponding number in static networks.
+
+
+Figure 1: A toy example to predict how a temporal network evolves. Given the historical temporal network as shown in the left, the task is to predict whether $u$ prefers to interact with $v$ or $w$ at timestamp ${t}_{3}$ . If this is a social network, (u, v) is likely to happen because $u, v$ share a common neighbor $a$ and follow the principle of triadic closure [2]. However, traditional GNNs, even their generalizations to temporal networks, fail here as they learn the same representations for node $v$ and node $w$ due to their common structural contexts, as shown in the middle. In the right, we show a high-level abstraction of joint neighborhood features based on the $\mathrm{N}$ -caches of $u$ and $v$ : in the N-caches for the 1-hop neighborhoods of both node $u$ and node $v$ , $a$ appears among the keys. Joining these keys provides a structural feature that encodes such common-neighbor information for the prediction.
+
+In this work, we propose the Neighborhood-Aware Temporal network model (NAT), which addresses the aforementioned modeling issue while keeping the model scalable. The key novelty of NAT is to incorporate dictionary-type neighborhood representations in place of the commonly-used single-vector node representations, together with a computation-friendly neighborhood cache (N-cache) to maintain such dictionary-type representations. Specifically, the N-cache of a node stores several size-constrained dictionaries on GPUs. Each dictionary has a sampled collection of the historical neighbors of the center node as keys, and aggregates the timestamps and the features of the links connected to these neighbors as values (vector representations). With N-caches, NAT can construct the joint neighborhood structural features for a batch of node pairs in parallel to achieve fast link prediction. NAT can also update the N-caches with newly interacted neighbors efficiently by adopting hash-based search functions that support GPU parallel computation.
+
+NAT provides a novel solution for scalable temporal network representation learning. We evaluate NAT over 7 real-world temporal networks, among which one contains $1\mathrm{M} +$ nodes and almost 10 $\mathrm{M}$ temporal links to evaluate the scalability of NAT. NAT outperforms cutting-edge baselines by averaged ${5.9}\% \uparrow$ and ${6.0}\% \uparrow$ in transductive and inductive link prediction accuracy, respectively. NAT achieves a ${4.1} - {76.7} \times$ speed-up compared to the baseline CAWN [34] that constructs joint neighborhood features based on random walk sampling. NAT also achieves a ${1.6} - {4.0} \times$ speed-up over the fastest baselines that do not construct joint neighborhood features (and thus suffer from the issue in Fig. 1) on large networks.
+
+§ 2 RELATED WORKS
+
+Neighborhood structure often governs how temporal networks evolve over time. Early temporal network prediction models count motifs $\left\lbrack {{35},{36}}\right\rbrack$ or subgraphs $\left\lbrack {37}\right\rbrack$ in the historical neighborhood of two interacting objects as features to predict their future interactions. These models cannot use network attributes and often suffer from scalability issues, because counting combinatorial structures is complicated and hard to execute in parallel. Network-embedding approaches for temporal networks [38-42] suffer from a similar problem, because the optimization problem used to compute node embeddings is often too complex to be solved again and again as the network evolves.
+
+Recent works based on neural networks often provide more accurate and faster models, which benefit from the parallel computation hardware and scalable system support $\left\lbrack {{43},{44}}\right\rbrack$ for deep learning. Some of these works simply aggregate the sequence of links into network snapshots and treat temporal networks as a sequence of static network snapshots [21-26]. These methods may offer low prediction accuracy as they cannot model the interactions that lie in different levels of time granularity.
+
+More advanced methods deal with link streams directly [20, 27-29, 45-47]. They generalize GNNs to encode temporal networks by associating each node with a vector representation and updating it based on the nodes that it interacts with. Some works use the representation of the node that one is currently interacting with $\left\lbrack {{27},{28},{45}}\right\rbrack$ . Other works use those of the nodes that one has interacted with in the history $\left\lbrack {{20},{29},{46},{47}}\right\rbrack$ . In either way, these methods suffer from the limited power of GNNs to capture structural features from the joint neighborhood of multiple nodes [17, 19]. Recently, CAWN [34] and HIT [4], inspired by the theory in static networks [18, 19], have proposed to construct such structural features to improve representation learning on temporal networks, CAWN for link prediction and HIT for higher-order interaction prediction. However, their computational complexity is high: for every queried link, they need to sample a large group of random walks and construct the structural features on CPUs, which limits the level of parallelism. NAT addresses these problems via neighborhood representations and N-caches.
+
+§ 3 PRELIMINARIES: NOTATIONS AND PROBLEM FORMULATION
+
+In this section, we introduce some notations and the problem formulation. We consider a temporal network as a sequence of timestamped interactions between pairs of nodes.
+
+Definition 3.1 (Temporal network) A temporal network $\mathcal{E}$ can be represented as $\mathcal{E} =$ $\left\{ {\left( {{u}_{1},{v}_{1},{t}_{1}}\right) ,\left( {{u}_{2},{v}_{2},{t}_{2}}\right) ,\cdots }\right\} ,{t}_{1} < {t}_{2} < \cdots$ , where ${u}_{i},{v}_{i}$ denote the interacting node IDs of the $i$ -th link and ${t}_{i}$ denotes the timestamp. Each temporal link (u, v, t) may have a link feature ${e}_{u,v}^{t}$ . We also denote the entire node set as $\mathcal{V}$ . Without loss of generality, we use integers as node IDs, i.e., $\mathcal{V} = \{ 1,2,\ldots \}$ .
+
+A good representation learning method for temporal networks should be able to efficiently and accurately predict how temporal networks evolve over time. Hence, we formulate our problem as follows.
+
+Definition 3.2 (Problem formulation) Our problem is to learn a model that may use the historical information before $t$ , i.e., $\left\{ {\left( {{u}^{\prime },{v}^{\prime },{t}^{\prime }}\right) \in \mathcal{E} \mid {t}^{\prime } < t}\right\}$ , to accurately and efficiently predict whether there will be a temporal link between two nodes at time $t$ , i.e., (u, v, t).
+
+Next, we define neighborhood in temporal networks.
+
+Definition 3.3 ( $k$ -hop neighborhood in a temporal network) Given a timestamp $t$ , denote the static network constructed by all the temporal links before $t$ , with all timestamps removed, as ${\mathcal{G}}_{t}$ . Given a node $v$ , define the $k$ -hop neighborhood of $v$ before time $t$ , denoted by ${\mathcal{N}}_{v}^{t,k}$ , as the set of all nodes $u$ such that there exists at least one walk of length $k$ from $u$ to $v$ over ${\mathcal{G}}_{t}$ . For two nodes $u,v$ , their joint neighborhood up to $K$ hops refers to ${ \cup }_{k = 1}^{K}\left( {{\mathcal{N}}_{v}^{t,k} \cup {\mathcal{N}}_{u}^{t,k}}\right)$ .
+
+§ 4 METHODOLOGY
+
+In this section, we introduce NAT. NAT consists of two major components: (1) neighborhood representations maintained in N-caches, and (2) the construction of joint neighborhood features with neural-network-based encoding.
+
+§ 4.1 NEIGHBORHOOD REPRESENTATIONS AND N-CACHES
+
+In NAT, a node's representation is tracked by a fixed-size memory module, i.e., an N-cache, over time as the temporal network evolves. Fig. 2 Left gives an illustration. In contrast to all previous methods that adopt a single vector representation for each node $u$ , NAT adopts neighborhood representations $\left( {{Z}_{u}^{\left( 0\right) }\left( t\right) ,{Z}_{u}^{\left( 1\right) }\left( t\right) ,\ldots ,{Z}_{u}^{\left( K\right) }\left( t\right) }\right)$ , where ${Z}_{u}^{\left( k\right) }\left( t\right)$ denotes the $k$ -hop neighborhood representation, for $k = 0,1,\ldots ,K$ . Note that these representations may evolve over time. For notational simplicity, the timestamps in these notations are omitted when they can be inferred from the context. The main goal of tracking these neighborhood representations is to enable efficient construction of structural features, which will be detailed in Sec. 4.2. Next, we first explain these neighborhood representations from the modeling perspective and how they evolve over time. Then, we introduce the scalable implementation of $\mathrm{N}$ -caches.
+
+Modeling. For a node $u$ , the 0-hop representation, also termed the self-representation ${Z}_{u}^{\left( 0\right) }$ , simply works as the standard node representation for $u$ . It gets updated via an RNN, ${Z}_{u}^{\left( 0\right) } \leftarrow \mathbf{{RNN}}\left( {{Z}_{u}^{\left( 0\right) },\left\lbrack {{Z}_{v}^{\left( 0\right) },{t}_{3},{e}_{u,v}}\right\rbrack }\right)$ , when node $u$ interacts with another node $v$ as shown in Fig. 2 Left. The other neighborhood representations are more complicated. To give some intuition, we first introduce the 1-hop representation ${Z}_{u}^{\left( 1\right) }$ . ${Z}_{u}^{\left( 1\right) }$ is a dictionary whose keys, denoted by $\operatorname{key}\left( {Z}_{u}^{\left( 1\right) }\right)$ , correspond to a down-sampled set of the (IDs of) nodes in the 1-hop neighborhood of $u$ . For a node $a$ in $\operatorname{key}\left( {Z}_{u}^{\left( 1\right) }\right)$ , the dictionary value, denoted by ${Z}_{u,a}^{\left( 1\right) }$ , is a vector representation that summarizes the previous interactions between $u$ and $a$ . ${Z}_{u}^{\left( 1\right) }$ will be updated as the temporal network evolves. For example, in Fig. 1, as $v$ interacts with $u$ at time ${t}_{3}$ with the link feature ${e}_{u,v}$ , the entry in ${Z}_{u}^{\left( 1\right) }$ that corresponds to $v$ , i.e., ${Z}_{u,v}^{\left( 1\right) }$ , will get updated via an RNN: ${Z}_{u,v}^{\left( 1\right) } \leftarrow \mathbf{{RNN}}\left( {{Z}_{u,v}^{\left( 1\right) },\left\lbrack {{Z}_{v}^{\left( 0\right) },{t}_{3},{e}_{u,v}}\right\rbrack }\right)$ . If ${Z}_{u,v}^{\left( 1\right) }$ does not yet exist in ${Z}_{u}^{\left( 1\right) }$ (e.g., at the first $v,u$ interaction), a default initialization of ${Z}_{u,v}^{\left( 1\right) }$ is used. Once updated, the new value ${Z}_{u,v}^{\left( 1\right) }$ paired with the key (node ID) $v$ will be inserted into ${Z}_{u}^{\left( 1\right) }$ .
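+
+As a concrete illustration, the self-representation update above can be realized with a GRU cell; the following lines are a minimal sketch under assumed dimensions (not the released implementation), where the timestamp is assumed to be pre-encoded as described further below.
+
+import torch
+import torch.nn as nn
+
+# Sketch of Z_u^(0) <- RNN(Z_u^(0), [Z_v^(0), t, e_uv]) with a GRU cell.
+D, T_DIM, E_DIM = 32, 8, 4
+rnn = nn.GRUCell(input_size=D + T_DIM + E_DIM, hidden_size=D)
+
+z_u0 = torch.zeros(1, D)          # current self-representation of u
+z_v0 = torch.randn(1, D)          # self-representation of the interacting node v
+msg = torch.cat([z_v0, torch.randn(1, T_DIM), torch.randn(1, E_DIM)], dim=-1)
+z_u0 = rnn(msg, z_u0)             # updated Z_u^(0)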
+
+| No. | Notation | Definition |
+| 1. | ${Z}_{u}^{\left( k\right) }$ | A dictionary (with values ${Z}_{u,a}^{\left( k\right) }$ , of size ${M}_{k}$ ) denoting the $k$ -hop neighborhood representation for node $u$ . |
+| 2. | ${Z}_{u,a}^{\left( k\right) }$ | A vector (of length $F$ for $k \geq 1$ ) in the values of ${Z}_{u}^{\left( k\right) }$ representing node $a$ as a $k$ -hop neighbor of $u$ . |
+| 3. | ${s}_{u}^{\left( k\right) }$ | An auxiliary array recording the node IDs that are currently stored as the keys of ${Z}_{u}^{\left( k\right) }$ . |
+| 4. | ${\mathrm{{DE}}}_{u}^{t}\left( a\right)$ | The distance encoding of node $a$ based on the keys of the N-caches of node $u$ at time $t$ (Eq. (1)). |
+| 5. | hash(a) | The hash function mapping a node ID $a$ to the position of ${Z}_{u,a}^{\left( k\right) }$ in the $k$ -hop N-cache of any node $u$ . |
+
+
+Figure 2: Neighborhood representations and joining neighborhood features & representations to make predictions. Left: Neighborhood representations of a node. Node $u$ interacts with $v$ at ${t}_{3}$ in the example of Fig. 1. The 0-hop (self) representation and the 1-hop representations will be updated based on ${Z}_{v}^{\left( 0\right) }$ , and the 2-hop representations will be updated by inserting ${Z}_{v}^{\left( 1\right) }$ . The ${Z}_{u}^{\left( k\right) }$ ’s are maintained in N-caches. Right: In the example of Fig. 1, to predict the link $\left( {u,v,{t}_{3}}\right)$ , the neighborhood representations of node $u$ and node $v$ will be joined: the structural feature DE is constructed according to Eq. (1); the representations are sum-pooled according to Eq. (2). Then, an attention layer (Eq. (3)) is adopted to make the final prediction.
+
+One remark is that for the input timestamps ${t}_{i}$ , we adopt Fourier features to encode them before feeding them into the RNNs, i.e., with learnable parameters ${\omega }_{i}$ , $1 \leq i \leq d$ , T-encoding $\left( t\right) = \left\lbrack {\cos \left( {{\omega }_{1}t}\right) ,\sin \left( {{\omega }_{1}t}\right) ,\ldots ,\cos \left( {{\omega }_{d}t}\right) ,\sin \left( {{\omega }_{d}t}\right) }\right\rbrack$ , which has been shown to be useful for temporal network representation learning [4, 20, 29, 34, 48, 49].
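+
+A minimal sketch of this encoding (illustrative dimension $d$, not the released code):
+
+import torch
+import torch.nn as nn
+
+# T-encoding(t) = [cos(w_1 t), sin(w_1 t), ..., cos(w_d t), sin(w_d t)] with learnable w_i.
+class TimeEncoding(nn.Module):
+    def __init__(self, d=8):
+        super().__init__()
+        self.omega = nn.Parameter(torch.randn(d))   # learnable frequencies w_1..w_d
+
+    def forward(self, t):
+        phase = t.unsqueeze(-1) * self.omega         # (batch,) -> (batch, d)
+        return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)  # (batch, 2d)
+
+enc = TimeEncoding(d=4)(torch.tensor([0.0, 1.5, 3.2]))   # shape (3, 8)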
+
+The larger-hop $\left( { > 1}\right)$ neighborhood representation ${Z}_{u}^{\left( k\right) }$ is also a dictionary. Similarly, the keys of ${Z}_{u}^{\left( k\right) }$ correspond to the nodes that lie in the $k$ -hop neighborhood of $u$ . The update of ${Z}_{u}^{\left( k\right) }$ is as follows: if $u$ interacts with $v$ , then $v$ ’s $(k - 1)$ -hop neighborhood by definition becomes a part of the $k$ -hop neighborhood of $u$ after the interaction. Given this observation, ${Z}_{u}^{\left( k\right) }$ can be updated by using ${Z}_{v}^{\left( k - 1\right) }$ . However, we avoid using an RNN for the larger-hop update to reduce complexity. Instead, we directly insert ${Z}_{v}^{\left( k - 1\right) }$ into ${Z}_{u}^{\left( k\right) }$ , i.e., setting ${Z}_{u,a}^{\left( k\right) } \leftarrow {Z}_{v,a}^{\left( k - 1\right) }$ for all $a \in \operatorname{key}\left\lbrack {Z}_{v}^{\left( k - 1\right) }\right\rbrack$ . If ${Z}_{u,a}^{\left( k\right) }$ already exists before the insertion, we simply replace it.
+
+Next, we will introduce the implementation of the above representations via N-caches. Readers who only care about the learning models can skip this part and directly go to Sec. 4.2. The maintenance of N-caches (aka. neighborhood representations) as the network evolves is summarized in Alg. 1.
+
+Scalable Implementation. Neighborhood representations cannot be directly implemented via Python dictionaries if we want scalable maintenance. Instead, we adopt the following three design techniques: (a) limiting the size; (b) parallelizing hash-maps; (c) addressing collisions.
+
+Algorithm 1: N-caches construction and update $\left( {\mathcal{V},\mathcal{E},\alpha }\right)$
+
+1 for $k$ from 0 to 2 (consider only two hops) do
+
+2   for $u$ in $\mathcal{V}$ , in parallel, do
+
+3     Initialize fixed-size dictionaries ${Z}_{u}^{\left( k\right) }$ in GPU with key spaces ${s}_{u}^{\left( k\right) }$ and value spaces;
+
+4 for (u, v, t, e) in each mini-batch of $\mathcal{E}$ , in parallel, do
+
+5   ${Z}_{u}^{\left( 0\right) } \leftarrow \mathbf{{RNN}}\left( {{Z}_{u}^{\left( 0\right) },\left\lbrack {{Z}_{v}^{\left( 0\right) },t,e}\right\rbrack }\right)$ ; // update 0-hop self-representation
+
+6   ${Z}_{\text{ prev }} \leftarrow {Z}_{u,v}^{\left( 1\right) }$ if ${s}_{u}^{\left( 1\right) }\left\lbrack {\operatorname{hash}\left( v\right) }\right\rbrack$ equals $v$ , else 0; // check whether ${Z}_{u,v}^{\left( 1\right) }$ is recorded in ${Z}_{u}^{\left( 1\right) }$
+
+7   if ${s}_{u}^{\left( 1\right) }\left\lbrack {\operatorname{hash}\left( v\right) }\right\rbrack$ equals ( $v$ or ${EMPTY}$ ) or rand $\left( {0,1}\right) < \alpha$ then
+
+8     ${s}_{u}^{\left( 1\right) }\left\lbrack {\operatorname{hash}\left( v\right) }\right\rbrack \leftarrow v$ , ${Z}_{u,v}^{\left( 1\right) } \leftarrow \mathbf{{RNN}}\left( {{Z}_{\text{ prev }},\left\lbrack {{Z}_{v}^{\left( 0\right) },t,e}\right\rbrack }\right)$ ; // update 1-hop nbr. representation
+
+9   for $w$ in ${s}_{v}^{\left( 1\right) }$ , in parallel, do
+
+10     if ${s}_{u}^{\left( 2\right) }\left\lbrack {\operatorname{hash}\left( w\right) }\right\rbrack$ equals ( $w$ or ${EMPTY}$ ) or rand $\left( {0,1}\right) < \alpha$ then
+
+11       ${s}_{u}^{\left( 2\right) }\left\lbrack {\operatorname{hash}\left( w\right) }\right\rbrack \leftarrow w$ , ${Z}_{u,w}^{\left( 2\right) } \leftarrow {Z}_{v,w}^{\left( 1\right) }$ ; // update 2-hop nbr. representations
+
+12   repeat lines 5-11 with (v, u, t, e)
+
+(a) Limiting size: In a real-world network, the size of the neighborhood of a node typically follows a long-tailed distribution [50, 51]. So it is irregular and memory-inefficient to record the entire neighborhood. Instead, we set an upper limit ${M}_{k}$ on the size of each-hop representation ${Z}_{u}^{\left( k\right) }$ , which means ${Z}_{u}^{\left( k\right) }$ may record only a subset of the nodes in the $k$ -hop neighborhood of node $u$ . This idea is inspired by previous works showing that structural features constructed from a down-sampled neighborhood are sufficient to provide good performance [34, 52]. To further decrease the memory overhead, we set each representation ${Z}_{u,a}^{\left( k\right) },k \geq 1$ , to be a vector of small dimension $F$ . Overall, the memory overhead of the $\mathrm{N}$ -cache per node is $O\left( {\mathop{\sum }\limits_{{k = 1}}^{K}{M}_{k} \times F}\right)$ . In our experiments, we consider at most $K = 2$ hops, and set the numbers of tracked neighbors ${M}_{1},{M}_{2} \in \left\lbrack {2,{40}}\right\rbrack$ and the size of each representation $F \in \left\lbrack {2,8}\right\rbrack$ , which already gives very good performance. Based on the above design, the overall memory overhead is only on the order of hundreds of values per node, which is comparable to the commonly-used memory cost of tracking a single big vector representation for each node.
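+
+For a concrete sense of scale, with the Wikipedia setting reported in Table 7 of the Appendix ($M_1 = 32$, $M_2 = 16$, $F = 4$), this amounts to $(M_1 + M_2) \times F = (32 + 16) \times 4 = 192$ cached values per node beyond the self-representation.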
+
+(b) The hash-map: As NAT needs to frequently access N-caches, a fast way of using node IDs to search within N-caches in parallel is needed. To enable the parallel search, we design GPU dictionaries to implement N-caches. Specifically, for every node $u$ , we pre-allocate $O\left( {{M}_{k} \times F}\right)$ space in GPU RAM to record the values in ${Z}_{u}^{\left( k\right) }$ . A hash function is adopted to access the values in ${Z}_{u}^{\left( k\right) }$ : for some node $a$ , we compute $\operatorname{hash}\left( a\right) \equiv \left( {q * a}\right) \left( {\;\operatorname{mod}\;{M}_{k}}\right)$ for a fixed large prime number $q$ to decide the row index in ${Z}_{u}^{\left( k\right) }$ that records ${Z}_{u,a}^{\left( k\right) }$ . Such simple hashing allows NAT to access multiple neighborhood representations in N-caches in parallel.
+
+However, as the size ${M}_{k}$ of each $\mathrm{N}$ -cache is small, in particular smaller than the corresponding neighborhood, the hash-map may encounter collisions. To detect such collisions, we also pre-allocate $O\left( {M}_{k}\right)$ space in each $\mathrm{N}$ -cache ${Z}_{u}^{\left( k\right) }$ for an array ${s}_{u}^{\left( k\right) }$ that records the IDs of the nodes most recently recorded in ${Z}_{u}^{\left( k\right) }$ . Specifically, we use ${s}_{u}^{\left( k\right) }\left\lbrack {\operatorname{hash}\left( a\right) }\right\rbrack$ to check whether node $a$ is a key of ${Z}_{u}^{\left( k\right) }$ . If ${s}_{u}^{\left( k\right) }\left\lbrack {\operatorname{hash}\left( a\right) }\right\rbrack$ is $a$ , then ${Z}_{u,a}^{\left( k\right) }$ is recorded at the position hash(a) of ${Z}_{u}^{\left( k\right) }$ . If ${s}_{u}^{\left( k\right) }\left\lbrack {\operatorname{hash}\left( a\right) }\right\rbrack$ is neither $a$ nor EMPTY, the position hash(a) of ${Z}_{u}^{\left( k\right) }$ records the representation of another node.
+
+(c) Addressing collisions: When NAT encounters a collision while working on an evolving network, it addresses the collision in a random manner. Specifically, suppose we are to write ${Z}_{u,a}^{\left( k\right) }$ into ${Z}_{u}^{\left( k\right) }$ . If another node $b$ satisfies $\operatorname{hash}\left( a\right) = \operatorname{hash}\left( b\right) = p$ and ${Z}_{u,b}^{\left( k\right) }$ has occupied the position $p$ of ${Z}_{u}^{\left( k\right) }$ , then we replace ${Z}_{u,b}^{\left( k\right) }$ by ${Z}_{u,a}^{\left( k\right) }$ (and simultaneously set ${s}_{u}^{\left( k\right) }\left\lbrack {\operatorname{hash}\left( a\right) }\right\rbrack \leftarrow a$ ) with probability $\alpha$ . Here, $\alpha \in (0,1\rbrack$ is a hyperparameter. Although the above random replacement strategy sounds heuristic, it is essentially equivalent to randomly sampling nodes from the neighborhood without replacement (random dropping $\leftrightarrow$ random sampling). Note that randomly sampling neighbors is a common strategy used to scale up GNNs for static networks [53-55], so here we essentially apply an idea of a similar spirit to temporal networks. We find that a small size ${M}_{k}\left( { \leq {40}}\right)$ can give good empirical performance while keeping the model scalable, and NAT is relatively robust to a wide range of $\alpha$ .
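+
+The following minimal sketch illustrates (b) and (c) for a batch of insertions; sizes, names and the prime are illustrative assumptions rather than the released implementation, and node ID 0 marks an EMPTY slot (consistent with node IDs starting from 1).
+
+import torch
+
+Q_PRIME = 2_147_483_647                  # a fixed large prime for the hash
+M_K, F, NUM_NODES, ALPHA = 16, 4, 1000, 0.9
+
+keys = torch.zeros(NUM_NODES, M_K, dtype=torch.long)    # s_u^(k); 0 means EMPTY
+values = torch.zeros(NUM_NODES, M_K, F)                  # values of Z_u^(k)
+
+def cache_insert(u, a, new_val):
+    # Batched insertion of neighbor representations new_val of nodes a into the caches of nodes u.
+    pos = (Q_PRIME * a) % M_K                             # hash slot per entry
+    slot_key = keys[u, pos]
+    writable = (slot_key == a) | (slot_key == 0)          # same key or empty slot
+    overwrite = torch.rand(a.shape[0]) < ALPHA            # random replacement on collision
+    mask = writable | overwrite
+    keys[u[mask], pos[mask]] = a[mask]
+    values[u[mask], pos[mask]] = new_val[mask]
+
+cache_insert(torch.tensor([7, 7]), torch.tensor([3, 42]), torch.randn(2, F))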
+
+§ 4.2 JOINT NEIGHBORHOOD STRUCTURAL FEATURES AND NEURAL-NETWORK-BASED ENCODING
+
+As illustrated in the toy example in Fig. 1, structural features from the joint neighborhood are critical to reveal how temporal networks evolve. Previous methods for static networks adopt distance encoding (DE) (also called labeling tricks more broadly) to formulate these features [18, 19]. Recently, this idea has been generalized to temporal networks [34]. However, the model CAWN in [34] uses online random-walk sampling, which cannot be parallelized on GPUs and is thus extremely slow. Our design of N-caches allows us to address this problem. Fig. 2 Right illustrates the procedure.
+
+NAT generates joint neighborhood structural features as follows. Suppose our prediction is made for a temporal link (u, v, t). For every node $a$ in the joint neighborhood of $u$ and $v$ decided by their N-caches at timestamp $t$ , i.e., $a \in \left\lbrack {{ \cup }_{k = 0}^{K}\operatorname{key}\left( {Z}_{u}^{\left( k\right) }\right) }\right\rbrack \cup \left\lbrack {{ \cup }_{{k}^{\prime } = 0}^{K}\operatorname{key}\left( {Z}_{v}^{\left( {k}^{\prime }\right) }\right) }\right\rbrack$ , we associate it with a DE
+
+$$
+{\mathrm{{DE}}}_{uv}^{t}\left( a\right) = {\mathrm{{DE}}}_{u}^{t}\left( a\right) \oplus {\mathrm{{DE}}}_{v}^{t}\left( a\right) ,\text{ where }{\mathrm{{DE}}}_{w}^{t}\left( a\right) = \left\lbrack {\chi \left\lbrack {a \in {Z}_{w}^{\left( 0\right) }}\right\rbrack ,\ldots ,\chi \left\lbrack {a \in {Z}_{w}^{\left( K\right) }}\right\rbrack }\right\rbrack ,\; w \in \{ u,v\} \tag{1}
+$$
+
+Here, $\chi \left\lbrack {a \in {Z}_{w}^{\left( i\right) }}\right\rbrack$ is 1 if $a$ is among the keys of the N-cache ${Z}_{w}^{\left( i\right) }$ and 0 otherwise, and $\oplus$ denotes vector concatenation. For the example of predicting $\left( {u,v,{t}_{3}}\right)$ in Fig. 1, the DEs of the four nodes $u,a,v,b$ are as shown in Fig. 2 Right. Note that ${\mathrm{{DE}}}_{uv}^{{t}_{3}}\left( a\right) = \left\lbrack {0,1,0}\right\rbrack \oplus \left\lbrack {0,1,0}\right\rbrack$ because $a$ appears in the keys of both ${Z}_{u}^{\left( 1\right) }$ and ${Z}_{v}^{\left( 1\right) }$ , which further identifies $a$ as a common neighbor of $u$ and $v$ .
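+
+A minimal sketch of Eq. (1) for one link, with toy key sets (illustrative only, not the released code):
+
+import torch
+
+K = 2
+keys_u = [{1}, {3, 5}, {8}]       # keys of Z_u^(0), Z_u^(1), Z_u^(2) (illustrative)
+keys_v = [{2}, {3}, {9}]          # keys of Z_v^(0), Z_v^(1), Z_v^(2)
+
+joint = sorted(set().union(*keys_u, *keys_v))
+de = {a: torch.tensor([float(a in keys_u[k]) for k in range(K + 1)]
+                      + [float(a in keys_v[k]) for k in range(K + 1)])
+      for a in joint}
+print(de[3])   # tensor([0., 1., 0., 0., 1., 0.]): node 3 plays the role of the common neighbor a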
+
+Simultaneously, NAT also aggregates the neighborhood representations of every node $a$ in the joint neighborhood of $u$ and $v$ . Specifically, for node $a$ , we aggregate the representations via a sum pool
+
+$$
+{Q}_{uv}^{t}\left( a\right) = \mathop{\sum }\limits_{{k = 0}}^{K}\mathop{\sum }\limits_{{w \in \{ u,v\} }}{Z}_{w,a}^{\left( k\right) } \times \chi \left\lbrack {a \in {Z}_{w}^{\left( k\right) }}\right\rbrack . \tag{2}
+$$
+
+Here, if $a$ is not in the neighborhood ${Z}_{w}^{\left( k\right) }$ , then $\chi \left\lbrack {a \in {Z}_{w}^{\left( k\right) }}\right\rbrack = 0$ and thus ${Z}_{w,a}^{\left( k\right) }$ does not participate in the aggregation. Both DE (Eq (1)) and the representation aggregation (Eq (2)) can be done for multiple node pairs in parallel on GPUs. We detail the parallel steps in Appendix A. After joining DE and neighborhood representations, for each link (u, v, t) to be predicted, NAT has a collection of representations ${\Omega }_{u,v}^{t} = \left\{ {{\mathrm{{DE}}}_{uv}^{t}\left( a\right) \oplus {Q}_{uv}^{t}\left( a\right) \mid a \in {\mathcal{N}}_{u,v}^{t}}\right\}$ .
+
+Ultimately, we propose to use attention to aggregate the collected representations in ${\Omega }_{u,v}^{t}$ to make the final prediction for the link (u, v, t). Let MLP denote a multi-layer perceptron; we adopt
+
+$$
+\text{ logit } = \operatorname{MLP}\left( {\mathop{\sum }\limits_{{h \in {\Omega }_{u,v}^{t}}}{\alpha }_{h}\operatorname{MLP}\left( h\right) }\right) \text{ , where }\left\{ {\alpha }_{h}\right\} = \operatorname{softmax}\left( \left\{ {{w}^{T}\operatorname{MLP}\left( h\right) \mid h \in {\Omega }_{u,v}^{t}}\right\} \right) \text{ , } \tag{3}
+$$
+
+where $w$ is a learnable vector parameter and the logit can be plugged into the cross-entropy loss for training or compared with a threshold to make the final prediction.
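+
+A minimal sketch of this attention read-out with illustrative layer sizes (not the released code):
+
+import torch
+import torch.nn as nn
+
+class AttnReadout(nn.Module):
+    # Score each h in Omega, softmax the scores (Eq. (3)), and map the weighted sum to a logit.
+    def __init__(self, in_dim, hid_dim=64):
+        super().__init__()
+        self.pre = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
+        self.w = nn.Parameter(torch.randn(hid_dim))           # learnable scoring vector w
+        self.out = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))
+
+    def forward(self, omega):
+        # omega: (num_joint_neighbors, in_dim), each row is DE_uv(a) concatenated with Q_uv(a)
+        h = self.pre(omega)                                    # inner MLP(h)
+        alpha = torch.softmax(h @ self.w, dim=0)               # attention weights {alpha_h}
+        return self.out((alpha.unsqueeze(-1) * h).sum(0))      # logit for the link (u, v, t)
+
+logit = AttnReadout(in_dim=10)(torch.randn(5, 10))             # 5 joint neighbors, feature dim 10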
+
+§ 5 EXPERIMENTS
+
+In this section, we evaluate the performance and the scalability of NAT against a variety of baselines on real-world temporal networks. We further conduct ablation studies on the relevant modules and a hyperparameter analysis. Unless specified otherwise for comparison, the hyperparameters of NAT (such as ${M}_{1},{M}_{2},F,\alpha$ ) are detailed in Appendix C and Table 7 (in the Appendix).
+
+§ 5.1 EXPERIMENTAL SETUP
+
+Datasets. We use seven real-world datasets that are publicly available; their statistics are listed in Table 1 and further details can be found in Appendix B. We preprocess all datasets following prior literature. We transform the node and edge features of Wikipedia and Reddit into 172-dimensional feature vectors. For the other datasets, those features are set to zeros since they are non-attributed. We split the datasets into training, validation and testing data according to the ratio 70/15/15. For the inductive test, we sample the unique nodes in the validation and testing data with probability 0.1 and remove them and their associated edges from the networks during model training. We detail the procedure of inductive evaluation for NAT in Appendix C.1.
+
+Baselines. We run experiments against 6 strong baselines that represent the SOTA approaches for modeling temporal networks. Out of the 6 baselines, CAWN [34], TGAT [29] and TGN [20] need to sample neighbors from the historical events, while JODIE [28] and DyRep [27] keep track of dynamic node representations to avoid sampling. CAWN is the only model that constructs neighborhood structural features. As we are interested in both prediction performance and model scalability, we include an efficient implementation of TGN sourced from PyTorch Geometric (TGN-pg), a library built upon PyTorch that includes different variants of GNNs [56]. TGN is slower than TGN-pg because TGN in [20] does not process a batch fully in parallel while TGN-pg does. Additional details about the baselines can be found in Appendix C.
+
+| Measurement | Wikipedia | Reddit | Social E. $1\mathrm{\;m}$ . | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
+| nodes | 9,227 | 10,985 | 71 | 74 | 184 | 1,899 | 159,316 | 1,140,149 |
+| temporal links | 157,474 | 672,447 | 176,090 | 2,099,519 | 125,235 | 59,835 | 964,437 | 7,833,140 |
+| static links | 18,257 | 78,516 | 2,457 | 4,486 | 3,125 | 20,296 | 596,933 | 3,309,592 |
+| node & link attributes | 172 & 172 | 172 & 172 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 | 0 & 0 |
+| bipartite | true | true | false | false | false | true | false | false |
+
+Table 1: Summary of dataset statistics.
+
+| Task | Method | Wikipedia | Reddit | Social E. $1\mathrm{\;m}$ . | Social E. | Enron | UCI | Ubuntu | Wiki-talk |
+| Inductive | CAWN | ${98.52} \pm {0.04}$ | ${98.19} \pm {0.03}$ | ${80.09} \pm {1.89}$ | ${50.00} \pm {0.00}^{ * }$ | ${93.28} \pm {0.01}$ | ${80.37} \pm {0.65}$ | ${50.00} \pm {0.00}^{ * }$ | ${50.00} \pm {0.00}^{ * }$ |
+|  | JODIE | ${95.58} \pm {0.37}$ | ${95.96} \pm {0.29}$ | ${80.61} \pm {1.55}$ | ${81.13} \pm {0.52}$ | ${81.69} \pm {2.21}$ | ${86.13} \pm {0.34}$ | ${56.68} \pm {0.49}$ | ${65.89} \pm {4.72}$ |
+|  | DyRep | ${94.72} \pm {0.14}$ | ${97.04} \pm {0.29}$ | ${81.54} \pm {1.81}$ | ${52.68} \pm {0.11}$ | ${77.44} \pm {2.28}$ | ${68.38} \pm {1.30}$ | ${53.25} \pm {0.03}$ | ${51.87} \pm {0.93}$ |
+|  | TGN | ${98.01} \pm {0.06}$ | ${97.76} \pm {0.05}$ | ${86.00} \pm {0.70}$ | ${67.01} \pm {10.3}$ | ${75.72} \pm {2.55}$ | ${83.21} \pm {1.16}$ | ${62.14} \pm {3.17}$ | ${56.73} \pm {2.88}$ |
+|  | TGN-pg | ${94.91} \pm {0.35}$ | ${94.34} \pm {3.22}$ | ${63.44} \pm {3.54}$ | ${88.10} \pm {4.81}$ | ${69.55} \pm {1.62}$ | ${86.36} \pm {3.60}$ | ${79.44} \pm {0.85}$ | ${85.35} \pm {2.96}$ |
+|  | TGAT | ${97.25} \pm {0.18}$ | ${96.69} \pm {0.11}$ | ${54.66} \pm {0.66}$ | ${50.00} \pm {0.00}$ | ${57.09} \pm {0.89}$ | ${70.47} \pm {0.59}$ | ${54.73} \pm {4.94}$ | ${71.04} \pm {3.59}$ |
+|  | NAT | $\mathbf{{98.55} \pm {0.09}}$ | $\mathbf{{98.56} \pm {0.21}}$ | $\mathbf{{91.82} \pm {1.91}}$ | $\mathbf{{95.16} \pm {0.66}}$ | ${94.94} \pm {1.15}$ | $\mathbf{{92.46} \pm {0.93}}$ | $\mathbf{{90.35} \pm {0.20}}$ | $\mathbf{{93.81} \pm {1.16}}$ |
+| Transductive | CAWN | ${98.62} \pm {0.05}$ | ${98.66} \pm {0.09}$ | ${79.59} \pm {0.21}$ | ${50.00} \pm {0.00}^{ * }$ | ${91.46} \pm {0.35}$ | ${82.84} \pm {0.16}$ | ${50.00} \pm {0.00}^{ * }$ | ${50.00} \pm {0.00}^{ * }$ |
+|  | JODIE | ${96.15} \pm {0.36}$ | ${97.29} \pm {0.05}$ | ${77.02} \pm {1.11}$ | ${69.30} \pm {0.21}$ | ${83.42} \pm {2.63}$ | ${91.09} \pm {0.69}$ | ${60.29} \pm {2.66}$ | ${75.00} \pm {4.90}$ |
+|  | DyRep | ${95.81} \pm {0.15}$ | ${98.00} \pm {0.19}$ | ${76.96} \pm {4.05}$ | ${51.14} \pm {0.24}$ | ${78.04} \pm {2.08}$ | ${72.25} \pm {1.81}$ | ${52.22} \pm {0.02}$ | ${62.07} \pm {0.06}$ |
+|  | TGN | ${98.57} \pm {0.05}$ | ${98.70} \pm {0.03}$ | ${88.72} \pm {0.65}$ | ${69.39} \pm {10.50}$ | ${80.87} \pm {4.37}$ | ${89.53} \pm {1.49}$ | ${53.80} \pm {2.23}$ | ${66.01} \pm {4.79}$ |
+|  | TGN-pg | ${97.26} \pm {0.10}$ | ${98.62} \pm {0.07}$ | ${66.39} \pm {6.90}$ | ${64.03} \pm {8.97}$ | ${80.85} \pm {2.70}$ | ${91.47} \pm {0.29}$ | ${90.56} \pm {0.44}$ | ${94.16} \pm {0.09}$ |
+|  | TGAT | ${96.65} \pm {0.06}$ | ${98.19} \pm {0.08}$ | ${58.10} \pm {0.47}$ | ${50.00} \pm {0.00}$ | ${61.25} \pm {0.99}$ | ${77.88} \pm {0.31}$ | ${55.46} \pm {5.47}$ | ${78.43} \pm {2.15}$ |
+|  | NAT | $\mathbf{{98.68} \pm {0.04}}$ | $\mathbf{{99.10} \pm {0.09}}$ | $\mathbf{{90.20} \pm {0.20}}$ | ${94.43} \pm {1.67}$ | $\mathbf{{92.42} \pm {0.09}}$ | $\mathbf{{93.92} \pm {0.15}}$ | $\mathbf{{93.50} \pm {0.34}}$ | ${95.82} \pm {0.31}$ |
+
+Table 2: Performance in average precision (AP) (mean in percentage $\pm {95}\%$ confidence level). Bold font and underline highlight the best performance and the second best performance on average. *The under-performance of CAWN on Social E., Ubuntu and Wiki-talk may be caused by a recent code change due to a bug [57].
+
+Regarding hyperparameters, if a dataset has been tested by a baseline, we use the set of hyperparameters provided in the corresponding paper. Otherwise, we tune the parameters such that similar components have sizes of the same scale, for example, matching the number of neighbors sampled and the embedding sizes. We also fix the training and inference batch sizes so that the comparison of training and inference time between different models is fair. For training, since CAWN uses 32 as the default batch size while the others use 200, we use 100, which is between the two. For validation and testing, we use batch size 32 for all baselines. We also apply the early stopping strategy for all models to record the number of epochs and the total model running time needed to converge. We also set a time limit of 10 hours for training; once that limit is reached, we use the best epoch so far for evaluation. More detailed hyperparameters are provided in Appendix C.
+
+Hardware. We run all experiments using the same device that is equipped with eight Intel Core i7-4770HQ CPU @ 2.20GHz with 15.5 GiB RAM and one GPU (GeForce GTX 1080 Ti).
+
+Evaluation Metrics. For prediction performance, we evaluate all models with Average Precision (AP) and Area Under the ROC Curve (AUC). In the main text, the prediction performance in all tables is evaluated in AP; the AUC results are given in the appendix. All results are summarized over 5 independent runs. For computational performance, the metrics include (a) the average training and inference time (in seconds) per epoch, denoted as Train and Test respectively, (b) the averaged total time (in seconds) of a model run, including training of all epochs and testing, denoted as Total, (c) the averaged number of epochs needed for convergence, denoted as Epoch, and (d) the maximum GPU memory and RAM occupancy percentages monitored throughout the entire process, denoted as GPU and $\mathbf{{RAM}}$ , respectively. We ensure that no other applications are running during our evaluations.
+
+§ 5.2 RESULTS AND DISCUSSION
+
+Overall, our method achieves SOTA performance on all 7 datasets. The modeling capacity of NAT exceeds that of all baselines, and the time complexities of training and inference are either lower than or comparable to those of the fastest baselines. We provide the detailed analysis next.
+
+Prediction Performance. We give the result of AP in Table 2 and AUC in Appendix Table 6.
+
+| Dataset | Method | Train | Test | Total | RAM | GPU | Epoch |
+| Wikipedia | CAWN | 1,006 | 174 | 11,845 | 30.2 | 58.0 | 6.7 |
+|  | JODIE | 28.8 | 30.6 | 1,482 | 28.3 | 17.9 | 19.1 |
+|  | DyRep | 32.4 | 32.5 | 1,681 | 28.3 | 17.8 | 21.5 |
+|  | TGN | 37.1 | 33.0 | 2,047 | 28.3 | 19.3 | 23.1 |
+|  | TGN-pg | 24.2 | 6.04 | 624.8 | 30.8 | 18.1 | 15.6 |
+|  | TGAT | 225 | 63.0 | 3,657 | 28.5 | 24.6 | 12.0 |
+|  | NAT | 21.0 | 6.94 | 154.4 | 29.1 | 12.1 | 2.6 |
+| Reddit | CAWN | 2,983 | 812 | 17,056 | 38.8 | 41.2 | 16.3 |
+|  | JODIE | 234.4 | 176 | 8,082 | 36.4 | 23.7 | 15.3 |
+|  | DyRep | 252.9 | 184 | 7,716 | 33.3 | 24.3 | 12.7 |
+|  | TGN | 271.7 | 189 | 8,487 | 33.7 | 25.4 | 15.3 |
+|  | TGN-pg | 155.1 | 27.1 | 2,142 | 39.2 | 23.6 | 6.6 |
+|  | TGAT | 1,203 | 291 | 16,462 | 37.2 | 31.0 | 8.4 |
+|  | NAT | 90.6 | 28.5 | 771.3 | 37.7 | 18.5 | 3.0 |
+| Ubuntu | CAWN | 1,066 | 222 | 5,385 | 38.9 | 17.4 | 1.0 |
+|  | JODIE | 66.70 | 2,860 | 76,220 | 35.3 | 18.7 | 5.5 |
+|  | DyRep | 2,195 | 2,857 | 39,148 | 38.5 | 16.6 | 1.0 |
+|  | TGN | 5,975 | 2,391 | 73,633 | 39 | 19.6 | 5.5 |
+|  | TGN-pg | 188.7 | 36.5 | 3,682 | 37.0 | 32.1 | 11.4 |
+|  | TGAT | 887 | 330 | 18,431 | 47.3 | 17.0 | 2.5 |
+|  | NAT | 125.8 | 41.2 | 1,321 | 28.9 | 10.1 | 5.4 |
+| Wiki-talk | CAWN | 13,685 | 2,419 | 34,368 | 99.1 | 19.4 | 1.0 |
+|  | JODIE | 284,789 | 145,909 | 566,607 | 58.2 | 20.9 | 1.0 |
+|  | DyRep | 280,659 | 135,491 | 514,621 | 84.4 | 49.6 | 1.0 |
+|  | TGN | 281,267 | 136,780 | 534,827 | 77.9 | 24.1 | 1.0 |
+|  | TGN-pg | 1,236 | 311.5 | 12,761 | 60.9 | 59.0 | 5.1 |
+|  | TGAT | 6,164 | 2,451 | 186,513 | 65.0 | 17.6 | 16.0 |
+|  | NAT | 833.1 | 280.1 | 7,802 | 37.1 | 22.3 | 2.7 |
+
+Table 3: Scalability evaluation on Wikipedia, Reddit, Ubuntu and Wiki-talk. Train, Test and Total are in seconds; RAM and GPU are maximum occupancy percentages; Epoch is the average number of epochs to converge.
+
+Figure 3: Convergence vs. wall-clock time on Reddit (left) and Wiki-talk (right). Each dot on the curves is collected per epoch.
+
+Figure 4: Sensitivity (mean) of the overwriting probability $\alpha$ for hash-map collisions on Ubuntu (left) & Reddit (right).
+
+On Wikipedia and Reddit, many baselines achieve high performance because these datasets provide meaningful edge attributes; NAT still gains marginal improvements. On Wikipedia, Reddit and Enron, CAWN outperforms all other baselines on the inductive task and most baselines on the transductive task. We believe the reason is that it captures neighborhood structural information via its temporal random walk sampling. However, we are not able to reproduce comparable scores on Social Evolve, Ubuntu and Wiki-talk, even after tuning the training batch size to 32. We notice a recent code change that fixes a bug in the CAWN implementation [57], which might be the cause of its under-performance.
+
+TGN and its efficient implementation TGN-pg are strong baselines even though they do not construct structural features. On both large-scale datasets, Ubuntu and Wiki-talk, TGN-pg gives impressive results on transductive learning; NAT still outperforms it consistently. Furthermore, TGN-pg performs poorly on the inductive tasks on both datasets, where NAT gains an 8-11% lift.
+
+On Social Evolve, NAT significantly outperforms all baselines, by at least 25% on transductive and 7% on inductive predictions. From Table 1, we can see that Social Evolve has a small number of nodes but many interactions, which highlights one of the advantages of NAT on dense temporal graphs. NAT keeps a separate neighborhood representation for each of a node's neighbors, so older interactions are not squashed together with more recent ones into a single representation. Paired with the N-caches, NAT can effectively denoise the dense history and extract neighborhood features.
+
+Scalability. Table 3 shows that NAT always trains much faster than all baselines. NAT's inference is significantly faster than that of CAWN, the other method that constructs neighborhood structural features, achieving a 25-29x inference speedup on the attributed networks. NAT also achieves at least four times faster inference than TGN, JODIE and DyRep. Compared to TGN-pg, NAT achieves comparable inference time in most cases while achieving about a ${10}\%$ speedup on the largest dataset, Wiki-talk. This is because when the network is large, the online sampling of TGN-pg may dominate the time cost, so we may expect NAT to show even better scalability on larger networks. Moreover, on the two large networks, Ubuntu and Wiki-talk, NAT requires much less GPU memory. Note that, albeit with only comparable or slightly better scalability, NAT significantly outperforms TGN-pg in prediction performance over all datasets.
+
+Across all datasets, NAT does not need larger model sizes than the baselines to achieve better performance. More impressively, we observe that NAT uniformly requires fewer epochs to converge than all baselines, especially on larger datasets, which can be attributed to the inductive power given by the joint structural features. Because of this, the total runtime of the model is much shorter than that of the baselines on all datasets. Specifically, on the large datasets Ubuntu and Wiki-talk, NAT is more than three times as fast as TGN-pg. We also plot model convergence vs. CPU/GPU wall-clock time on Reddit and Wiki-talk for comparison in Fig. 3.
+
+| Ablation | Dataset | Inductive | Transductive | Train | Test | GPU |
+| --- | --- | --- | --- | --- | --- | --- |
+| original method | Social E. | ${95.16} \pm {0.66}$ | ${91.75} \pm {0.37}$ | 281.0 | 89.0 | 8.88 |
+| original method | Ubuntu | ${90.35} \pm {0.20}$ | ${93.50} \pm {0.34}$ | 125.8 | 41.2 | 10.1 |
+| original method | Wiki-talk* | ${93.81} \pm {1.16}$ | ${95.00} \pm {0.31}$ | 833.1 | 280.1 | 22.3 |
+| remove 2-hop N-cache | Social E. | ${94.30} \pm {0.90}$ | ${90.77} \pm {0.26}$ | 253.1 | 75.9 | 8.87 |
+| remove 2-hop N-cache | Ubuntu | ${89.45} \pm {1.04}$ | ${93.48} \pm {0.34}$ | 111.3 | 35.7 | 9.95 |
+| remove 1-&-2-hop N-cache | Social E. | ${55.10} \pm {11.54}$ | ${62.12} \pm {3.53}$ | 212.9 | 64.0 | 8.46 |
+| remove 1-&-2-hop N-cache | Ubuntu | ${85.11} \pm {0.23}$ | ${91.89} \pm {0.09}$ | 98.1 | 29.5 | 9.07 |
+| remove 1-&-2-hop N-cache | Wiki-talk | ${86.54} \pm {3.87}$ | ${94.89} \pm {1.83}$ | 409.5 | 125.4 | 16.2 |
+
+Table 4: Ablation study on N-caches. *Original method for Wiki-talk does not use the second-hop N-cache.
+
+| Param | Size | Inductive | Transductive | Train | Test | GPU |
+| --- | --- | --- | --- | --- | --- | --- |
+| ${M}_{1}$ | 4 | ${92.95} \pm {2.95}$ | ${95.26} \pm {0.49}$ | 834.9 | 281.4 | 18.4 |
+| ${M}_{1}$ | 8 | $\mathbf{{93.96} \pm {0.91}}$ | ${95.39} \pm {0.28}$ | 806.3 | 274.9 | 19.9 |
+| ${M}_{1}$ | 12 | ${92.67} \pm {0.82}$ | ${95.05} \pm {0.58}$ | 818.2 | 277.6 | 21.0 |
+| ${M}_{1}$ | 16 | ${93.81} \pm {1.16}$ | ${95.82} \pm {0.31}$ | 833.1 | 280.1 | 22.3 |
+| ${M}_{1}$ | 20 | ${93.40} \pm {0.50}$ | ${95.83} \pm {0.44}$ | 841.3 | 284.8 | 23.8 |
+| ${M}_{2}$ | 0 | ${93.81} \pm {1.16}$ | ${95.82} \pm {0.31}$ | 833.1 | 280.1 | 22.3 |
+| ${M}_{2}$ | 2 | ${92.91} \pm {1.01}$ | ${96.08} \pm {0.34}$ | 960.5 | 330.9 | 22.7 |
+| ${M}_{2}$ | 4 | ${94.26} \pm {0.89}$ | $\mathbf{{96.29} \pm {0.09}}$ | 935.3 | 322.9 | 23.8 |
+| ${M}_{2}$ | 8 | ${94.53} \pm {0.51}$ | ${95.90} \pm {0.07}$ | 943.3 | 325.3 | 26.0 |
+| $F$ | 2 | ${90.86} \pm {2.52}$ | ${95.74} \pm {0.27}$ | 843.6 | 284.0 | 18.5 |
+| $F$ | 4 | $\mathbf{{93.81} \pm {1.16}}$ | $\mathbf{{95.82} \pm {0.31}}$ | 833.1 | 280.1 | 22.3 |
+| $F$ | 8 | ${93.55} \pm {0.93}$ | ${95.63} \pm {0.30}$ | 828.7 | 281.1 | 26.2 |
+
+Table 5: Sensitivity of N-cache sizes on Wiki-talk.
+
+§ 5.3 FURTHER ANALYSIS
+
+Ablation study. We conduct ablation studies on the effectiveness of the N-caches. Table 4 shows the results of removing the second-hop N-caches ${Z}_{u}^{\left( 2\right) }$ and of removing both the first-hop and second-hop N-caches ${Z}_{u}^{\left( 1\right) },{Z}_{u}^{\left( 2\right) }$ . As expected, dropping the N-caches reduces the training and inference time and the GPU cost, but it also results in a decay of prediction performance. Removing only ${Z}_{u}^{\left( 2\right) }$ can hurt performance by up to $1\%$ . Removing both ${Z}_{u}^{\left( 1\right) }$ and ${Z}_{u}^{\left( 2\right) }$ and keeping only the self representation drops the performance significantly, especially in the inductive setting. Keeping only the self representation is analogous to baselines such as TGN, which keeps a memory state per node. However, since we use a smaller dimension, usually between 32 and 72, the self representation by itself does not generalize well on these datasets. Ablation studies on other components, including joint neighborhood structural features, T-encoding, RNNs, and DE, are detailed in Table 8 (in the appendix).
+
+Sensitivity of the N-cache sizes. Since the N-caches account for most of the GPU memory consumption, we study how the memory size correlates with model performance on Wiki-talk. We compare performance across different values of the N-cache parameters ${M}_{1},{M}_{2}$ and $F$ . The baseline has ${M}_{1} = {16},{M}_{2} = 0$ and $F = 4$ , and we study each parameter by fixing the other two. Table 5 details the changes in model performance. We report the same study for the Ubuntu dataset in Appendix Table 9.
+
+We can see that the GPU memory cost scales approximately linearly with each parameter. However, increasing the model size does not necessarily improve performance. Changing ${M}_{1}$ to either a smaller or a larger value may decrease both the transductive and the inductive performance. Increasing ${M}_{2}$ boosts the transductive performance but hurts the inductive performance. In general, the model is less sensitive to changes in ${M}_{2}$ than in ${M}_{1}$ . Lastly, a larger $F$ could overfit the model, as we see a slight drop in the inductive prediction with the largest $F$ . Overall, training and inference time remains stable because of the parallelization of NAT. Interestingly, with larger ${M}_{1}$ and ${M}_{2}$ , we sometimes even see a decrease in running time. We hypothesize that this is because larger caches avoid hash collisions and short-circuit the N-cache overwriting steps.
+
+Sensitivity of the overwriting probability $\alpha$ . We also experiment on $\alpha$ to study whether the N-cache refresh frequency is related to prediction quality. Here, we use a large dataset, Ubuntu, and a medium dataset, Reddit; results can be found in Fig. 4. For Ubuntu, we change the original sizes to ${M}_{1} = 4,{M}_{2} = 1,F = 4$ , and for Reddit to ${M}_{1} = {16},{M}_{2} = 2,F = 8$ , to increase the number of potential collisions so that the effect of $\alpha$ can be better observed. On both datasets, we see an overall trend that a larger $\alpha$ gives better transductive performance. However, $\alpha = 1$ , i.e., always replacing old neighbors, is slightly worse than the optimal $\alpha$ . This pattern shows that the neighborhood information has to be kept up to date in order to gain better performance, while some randomness is useful because it preserves a more diverse range of interaction times. The inductive performance is relatively more sensitive to the choice of $\alpha$ . We do not find a case where using two different probabilities for replacing ${Z}_{u}^{\left( 1\right) }$ and ${Z}_{u}^{\left( 2\right) }$ significantly benefits model performance, so we use a single $\alpha$ for N-caches of different hops to keep it simple.
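+To make the role of $\alpha$ concrete, the toy sketch below (a simplified illustration, not the released implementation) inserts a new neighbor into a fixed-size per-node cache and, on a hash collision, overwrites the occupied slot with probability $\alpha$ ; the hash function and array layout are simplifying assumptions.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+# Toy illustration of the overwriting rule: each node owns a fixed number of
+# cache slots; a new neighbour is hashed to a slot, and an occupied slot
+# holding a different neighbour is overwritten with probability alpha.
+def ncache_insert(keys, vals, node, neighbor, value, alpha):
+    n_slots = keys.shape[1]
+    slot = neighbor % n_slots                   # toy hash; -1 marks an empty slot
+    collision = keys[node, slot] >= 0 and keys[node, slot] != neighbor
+    if not collision or rng.random() < alpha:   # empty / own slot: always write
+        keys[node, slot] = neighbor
+        vals[node, slot] = value
+
+num_nodes, n_slots, dim = 100, 4, 8             # e.g. M1 = 4 slots of F = 8 features
+keys = np.full((num_nodes, n_slots), -1, dtype=int)
+vals = np.zeros((num_nodes, n_slots, dim))
+ncache_insert(keys, vals, node=3, neighbor=17, value=rng.normal(size=dim), alpha=0.5)
+```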
+
+§ 6 CONCLUSION AND FUTURE WORKS
+
+In this work, we proposed NAT, the first method that adopts dictionary-type node representations to track the neighborhoods of nodes in temporal networks. Such representations support the efficient construction of neighborhood structural features that are crucial for predicting how a temporal network evolves. NAT also introduces N-caches to manage these representations in parallel. Our extensive experiments demonstrate the effectiveness of NAT in both prediction performance and scalability. In the future, we plan to extend NAT to networks so large that GPU memory cannot hold them entirely.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/FebadKZf6Gd/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/FebadKZf6Gd/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..bcb93ae0e2542ffdfe2965c844edd0e0f9090a77
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/FebadKZf6Gd/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,362 @@
+# A Generalist Neural Algorithmic Learner
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While recent years have seen a surge in methodological improvements in this area, they mostly focused on building specialist models. Specialist models are capable of learning to neurally execute either only one algorithm or a collection of algorithms with identical control-flow backbone. Here, instead, we focus on constructing a generalist neural algorithmic learner-a single graph neural network processor capable of learning to execute a wide range of algorithms, such as sorting, searching, dynamic programming, path-finding and geometry. We leverage the CLRS benchmark to empirically show that, much like recent successes in the domain of perception, generalist algorithmic learners can be built by "incorporating" knowledge. That is, it is possible to effectively learn algorithms in a multi-task manner, so long as we can learn to execute them well in a single-task regime. Motivated by this, we present a series of improvements to the input representation, training regime and processor architecture over CLRS, improving average single-task performance by over ${20}\%$ from prior art. We then conduct a thorough ablation of multi-task learners leveraging these improvements. Our results demonstrate a generalist learner that effectively incorporates knowledge captured by specialist models.
+
+## 1 Introduction
+
+Machine learning systems based on deep neural networks have made tremendous strides in recent years, especially so for tasks dominated by perception. Prominent models in this space are usually required to generalise in-distribution, meaning that their training and validation sets are representative of the distribution expected of test inputs. In contrast, to truly master tasks dominated by reasoning, a model needs to provide sensible outputs even when generalising out-of-distribution (OOD). Correspondingly, neural networks have seen lesser levels of success in this domain. Indeed, it has been suggested that stronger neural reasoning architectures may require careful application of methods such as algorithmic alignment [1], causality [2] and self-supervised learning [3]. Furthermore, these kinds of architectures are likely to be critical for robustly generating new knowledge based on existing observations, especially when that knowledge escapes the domain of training data.
+
+Neural algorithmic reasoning [4] offers a robust route for obtaining such modelling advancements. Its focus is on evaluating existing (graph) neural network architectures on their ability to solve algorithmic tasks, typically by learning to execute classical algorithms [5]. This is an excellent target for probing reasoning capabilities, as classical algorithms can be seen as the essential "building blocks" for all of theoretical computer science, and fundamental tools in a software engineering career [6]. While this is a fairly self-contained pipeline, evidence of its applicability has already emerged: Graph Neural Networks (GNNs) pre-trained on algorithmic tasks have been successfully utilised in implicit planning [7] and self-supervised learning [8]. All of the prior advances in this area focused on building specialist models: either focusing on a single algorithm, or a collection of algorithms with an identical control flow backbone [9, 10].
+
+In contrast, here we demonstrate a generalist neural algorithmic learner: a single GNN, with a single set of parameters, capable of learning to solve several classical algorithmic tasks simultaneously-to a level that matches relevant specialist models on average. This represents an important milestone, showing we can meaningfully incorporate reasoning capabilities even across tasks with completely disparate control flow, and in several tasks, we can exceed the OOD performance of the corresponding single-task specialist. Our generalist model is capable of performing various tasks, spanning sorting, searching, greedy algorithms, dynamic programming, graph algorithms, string algorithms and geometric algorithms (Figure 1). The experimentation we conduct is made possible by the CLRS-30 benchmark [5], a collection of thirty classical algorithmic tasks [6] spanning the above categories, along with a unified representational interface which made multi-task models easier to deploy.
+
+
+
+Figure 1: Our generalist neural algorithmic learner is a single processor GNN $P$ , with a single set of weights, capable of solving several algorithmic tasks, $\tau$ , in a shared latent space (each of which is attached to $P$ with simple encoders/decoders ${f}_{\tau }$ and ${g}_{\tau }$ ). Among others, our processor network is capable of sorting (top), shortest path-finding (middle), and convex hull finding (bottom).
+
+Our results are powered by a single salient observation: any numerical difficulties which would make individual algorithms harder to learn (e.g. unstable gradients) are amplified when trying to learn a collection of such algorithms at once. Therefore, one of our main contributions is also to present a series of improvements to the training, optimisation, input representations, and GNN architectures which, taken together, improve the best-known average performance on the CLRS-30 benchmark by over ${20}\%$ in absolute terms. We hope that our collection of improvements, with careful explanation for their applicability, will prove useful to GNN practitioners even beyond the realm of reasoning.
+
+Following the overview of related work in Section 2, we describe, in Section 3, the improvements in the representation, training regime and architecture that lead to a single model with significantly better performance than previous published state-of-the-art (SOTA) on CLRS-30. We then show in Section 4, as our main contribution, that this model, trained simultaneously on all the CLRS-30 tasks, can match corresponding specialist models on average, demonstrating general algorithmic learning.
+
+## 2 Related Work
+
+The closest related work to ours is NeuralExecutor++, a multi-task algorithmic reasoning model by Xhonneux et al. [10, NE++]. As briefly discussed in the prior section, NE++ focuses on a highly specialised setting where all of the algorithms to be learnt have an identical control flow backbone. For example, NE++ jointly learns to execute Prim's [11] and Dijkstra's [12] algorithms, which are largely identical (up to a choice of key function and edge relaxation subroutine). Even in this specialist regime, the authors are able to make critical observations, such as empirically showing the specific forms of multi-task learning that are necessary for generalising OOD. We leverage the insights of NE++, while extending its influence well beyond the domain of closely related algorithms.
+
+Also of note is the work on neural execution of graph algorithms by Veličković et al. [9]. This work provided early evidence of the potential for multi-task learning of classical algorithms. Namely, the authors simultaneously learn breadth-first search and the Bellman-Ford algorithm [13]—empirically demonstrating that jointly learning to execute them is favourable to learning them either in isolation or with various forms of curriculum learning [14]. Once again, the algorithms in question are nearly-identical in terms of backbone; in fact, breadth-first search can be interpreted as the Bellman-Ford algorithm over a graph with constant edge weights.
+
+In the multi-task learning context, our work belongs to the group of hard parameter sharing literature, pioneered by Caruana [15]. In hard parameter sharing, the same model is shared across all tasks, with, potentially, some task-specific weights. We continue a line of work demonstrating that a single general model can learn a set of challenging tasks in combinatorial optimisation [16-18], computer control [19], and multi-modal multi-embodiment learning [20, Gato]. Just like Gato provides a generalist agent for a wide variety of tasks, from language modelling to playing Atari games, and from robotic arm control to image captioning, our work provides a generalist agent for a diverse set of algorithmic domains, including sorting, searching, graphs, strings, and geometry.
+
+Due to their ability to operate on graphs of arbitrary size, GNNs (including Transformers [21]) have been extensively explored for their in- and out-of-distribution generalisation properties in Reinforcement Learning (RL) [22-26]. In our setting, OOD generalisation implies generalisation to problems of larger size, e.g., longer input arrays to sort or larger graphs to find shortest paths in. In-distribution generalisation implies generalisation to new instances of problems of the same size. From this perspective, our problem setting is similar to procedurally-generated environments in RL [27-29].
+
+The improvements we implemented for our single-task specialist reasoners are largely motivated by the theory of algorithmic alignment [30]. The key result of this theory is that neural networks will have provably smaller sample complexity if they are designed with components that "line up" with the target algorithm's operations. Following the prescriptions of this theory, we make several changes to the input data representations to make this alignment stronger [1], modify the GNN architecture to directly support higher-order reasoning [31] and suggest dedicated decoders for doubly-stochastic outputs [32].
+
+## 3 Single-task experiments
+
+Each algorithm in the CLRS benchmark [5] is specified by a number of inputs, hints and outputs. In a given sample, the inputs and outputs are fixed, while hints are time-series of intermediate states of the algorithm. Each sample for a particular task has a size, $n$ , corresponding to the number of nodes in the GNN that will execute the algorithm.
+
+A sample of every algorithm is represented as a graph, with each input, output and hint located in either the nodes, the edges, or the graph itself, and therefore has shape (excluding batch dimension, and, for hints, time dimension) $n \times f, n \times n \times f$ , or $f$ , respectively, $f$ being the dimensionality of the feature, which depends on its type. The CLRS benchmark defines five types of features: scalar, categorical, mask, mask_one and pointer, with their own encoding and decoding strategies and loss functions-e.g. a scalar type will be encoded and decoded directly by a single linear layer, and optimised using mean squared error. We defer to the CLRS benchmark paper [5] for further details.
+
+### 3.1 Base Model
+
+Encoder. We adopt the same encode-process-decode paradigm [33] presented with the CLRS benchmark [5]. At each time step, $t$ , of a particular task $\tau$ (e.g. insertion sort), the task-based encoder ${f}_{\tau }$ , consisting of a linear encoder for each input and hint, embeds inputs and the current hints as high-dimensional vectors. These embeddings of inputs and hints located in the nodes all have the same dimension and are added together; the same happens with hints and inputs located in edges, and in the graph. In our experiments we use the same dimension, $h = {128}$ , for node, edge and graph embeddings. Thus, at the end of the encoding step for a time-step $t$ of the algorithm, we have a single set of embeddings $\left\{ {{\mathbf{x}}_{i}^{\left( t\right) },{\mathbf{e}}_{ij}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }}\right\}$ , shapes $n \times h, n \times n \times h$ , and $h$ , in the nodes, edges and graph, respectively. Note that this is independent of the number and type of the inputs and hints of the particular algorithm, allowing us to share this latent space across all thirty algorithms in CLRS. Further, note that at each step, the input encoding is fed directly to these embeddings-this recall mechanism significantly improves the model's robustness over long trajectories [34].
+
+Processor. The embeddings are fed into a processor $P$ , a GNN that performs one step of computation. The processor transforms the input node, edge and graph embeddings into processed node embeddings, ${\mathbf{h}}_{i}^{\left( t\right) }$ . Additionally, the processor uses the processed node embeddings from the previous step, ${\mathbf{h}}_{i}^{\left( t - 1\right) }$ , as inputs. Importantly, the same processor model can operate on graphs of any size. We leverage the message-passing neural network [35, MPNN], using the max aggregation and passing messages over a fully-connected graph, as our base model. The MPNN computes processed embeddings as follows:
+
+$$
+{\mathbf{z}}_{i}^{\left( t\right) } = {\mathbf{x}}_{i}^{\left( t\right) }\parallel {\mathbf{h}}_{i}^{\left( t - 1\right) }\;{\mathbf{m}}_{i}^{\left( t\right) } = \mathop{\max }\limits_{{1 \leq j \leq n}}{f}_{m}\left( {{\mathbf{z}}_{i}^{\left( t\right) },{\mathbf{z}}_{j}^{\left( t\right) },{\mathbf{e}}_{ij}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }}\right) \;{\mathbf{h}}_{i}^{\left( t\right) } = {f}_{r}\left( {{\mathbf{z}}_{i}^{\left( t\right) },{\mathbf{m}}_{i}^{\left( t\right) }}\right) \tag{1}
+$$
+
+starting from ${\mathbf{h}}^{\left( 0\right) } = \mathbf{0}$ . Here $\parallel$ denotes concatenation, ${f}_{m} : {\mathbb{R}}^{2h} \times {\mathbb{R}}^{2h} \times {\mathbb{R}}^{h} \times {\mathbb{R}}^{h} \rightarrow {\mathbb{R}}^{h}$ is the message function (for which we use a three-layer MLP with ReLU activations), and ${f}_{r} : {\mathbb{R}}^{2h} \times {\mathbb{R}}^{h} \rightarrow$ ${\mathbb{R}}^{h}$ is the readout function (for which we use a linear layer with ReLU activation). The use of the max aggregator is well-motivated by prior work [5, 9], and we use the fully connected graph-letting the neighbours $j$ range over all nodes $\left( {1 \leq j \leq n}\right)$ —in order to allow the model to overcome situations where the input graph structure may be suboptimal. Layer normalisation [36] is applied to ${\mathbf{h}}_{i}^{\left( t\right) }$ before using them further. Further details on the MPNN processor may be found in Veličković et al. [5].
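+For illustration, the following numpy sketch performs one step of Equation 1 with max aggregation over a fully connected graph; the toy `f_m` and `f_r` functions merely stand in for the learned three-layer MLP and linear readout, and layer normalisation is omitted.
+
+```python
+import numpy as np
+
+# One MPNN step (Equation 1), written for clarity rather than speed:
+# z_i = x_i || h_i^(t-1);  m_i = max_j f_m(z_i, z_j, e_ij, g);  h_i = f_r(z_i, m_i).
+def mpnn_step(x, h_prev, e, g, f_m, f_r):
+    n = x.shape[0]
+    z = np.concatenate([x, h_prev], axis=-1)                      # (n, 2h)
+    msgs = np.stack([[f_m(z[i], z[j], e[i, j], g) for j in range(n)]
+                     for i in range(n)])                           # (n, n, h)
+    m = msgs.max(axis=1)                                           # max over neighbours j
+    return np.stack([f_r(z[i], m[i]) for i in range(n)])           # (n, h)
+
+n, h = 5, 8
+x, h_prev = np.random.randn(n, h), np.zeros((n, h))
+e, g = np.random.randn(n, n, h), np.random.randn(h)
+toy_f_m = lambda zi, zj, eij, gg: np.maximum(0.0, zi[:h] + zj[:h] + eij + gg)  # stand-in for the MLP
+toy_f_r = lambda zi, mi: mi                                                     # stand-in for the readout
+h_new = mpnn_step(x, h_prev, e, g, toy_f_m, toy_f_r)   # layer norm would be applied to h_new
+```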
+
+Decoder. The processed embeddings are finally decoded with a task-based decoder ${g}_{\tau }$ , to predict the hints for the next step, and the outputs at the final step. Akin to the encoder, the task-based decoder relies mainly on a linear decoder for each hint and output, along with a mechanism to compute pairwise node similarities when appropriate. Specifically, the pointer type decoder computes a score, ${s}_{ij}$ , for each pair of nodes, and then chooses the pointer of node $i$ by taking either the ${\operatorname{argmax}}_{j}{s}_{ij}$ or ${\operatorname{softmax}}_{j}{s}_{ij}$ (depending on whether a hard or soft prediction is required).
+
+Loss. The decoded hints and outputs are used to compute the loss during training, according to their type [5]. For each sample in a batch, the hint prediction losses are averaged across hints and time, and the output loss is averaged across outputs (most algorithms have a single output, though some have two outputs). The hint loss and output loss are added together. Besides, the hint predictions at each time step are fed back as inputs for the next step, except possibly at train time if teacher forcing is used (see Section 3.2.1).
+
+We train the model on samples with sizes $n \leq {16}$ , and periodically evaluate it on in-distribution samples of size $n = {16}$ . Also, periodically, we evaluate the model with the best in-distribution evaluation score so far on OOD samples of size $n = {64}$ . In what follows, we report only these OOD evaluation scores. Full details of the model, training and evaluation hyperparameters can be found in Appendix A.
+
+### 3.2 Model improvements
+
+As previously discussed, single-task improvements, especially in terms of learning stability, empirically transfer well to multi-task algorithmic learning. We now describe, step by step, all the changes made to the model, which have led to an absolute improvement of over 20% on average across all 30 tasks in CLRS.
+
+#### 3.2.1 Dataset and training
+
+Removing teacher forcing. At evaluation time, the model has no access to the step-by-step hints in the dataset, and has to rely on its own hint predictions. However, during training, it is sometimes advisable to stabilise the trajectories with teacher forcing [37]-providing the ground-truth hint values instead of the network's own predictions. In the prior model [5], ground-truth hints were provided during training with probability 0.5 , as, without teacher forcing, losses tended to grow unbounded along a trajectory when scalar hints were present, destabilising the training. In this work we incorporate several significant stabilising changes (described in future paragraphs), which allows us to remove teacher forcing altogether, aligning training with evaluation, and avoiding the network becoming overconfident in always expecting correct hint predictions. With teacher forcing, performance deteriorates significantly in sorting algorithms and Kruskal's algorithm. Naïve String Matcher, on the other hand, improves with teacher forcing (see Appendix A, Figs. 7-9).
+
+Augmenting the training data. To prevent our model from over-fitting to the statistics of the fixed CLRS training dataset [5], we augmented the training data in three key ways, without breaking the intended size distribution shift. Firstly, we used the on-line samplers in CLRS to generate new training examples on the fly, rather than using a fixed dataset, which is easier to overfit to. Secondly, we trained on examples of mixed sizes, $n \leq {16}$ , rather than only 16, which helps the model handle a diverse range of sizes, rather than overfitting to the specifics of size $n = {16}$ . Lastly, for graph algorithms, we varied the connectivity probability $p$ of the input graphs (generated by the Erdős-Rényi model [38]); and for string matching algorithms, we varied the needle length. These both serve to expose the model to different trajectory lengths; for example, in many graph algorithms, the number of steps the algorithm should run for is related to the graph's diameter, and varying the connection probability in the graph generation allows for varying the expected diameter. These improvements considerably increase the training data variability, compared to the original dataset in [5].
+
+Soft hint propagation. When predicted hints are fed back as inputs during training, gradients may or may not be allowed to flow through them. In previous work, only hints of the scalar type allowed gradients through, as all categoricals were post-processed from logits into the ground-truth format via argmax or thresholding before being fed back. Instead, in this work we use softmax for categorical, mask_one and pointer types, and the logistic sigmoid for mask types. Without these soft hints, performance in sorting algorithms degrades (similarly to the case of teacher forcing), as well as in Naïve String Matcher (Appendix A, Figs. 7-9).
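+A small sketch of this soft feedback is shown below, under the assumption that each hint decoder produces raw logits; the type names mirror the CLRS feature types.
+
+```python
+import numpy as np
+
+# Soft hint feedback: categorical-like hints are fed back as softmax
+# distributions and mask hints as sigmoid probabilities, so gradients can
+# flow through the re-encoded predictions (no argmax / thresholding).
+def soften_hint(logits, hint_type):
+    if hint_type in ("categorical", "mask_one", "pointer"):
+        z = logits - logits.max(axis=-1, keepdims=True)   # numerically stable softmax
+        p = np.exp(z)
+        return p / p.sum(axis=-1, keepdims=True)
+    if hint_type == "mask":
+        return 1.0 / (1.0 + np.exp(-logits))              # logistic sigmoid
+    return logits                                          # scalar hints pass through unchanged
+```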
+
+Static hint elimination. Eleven algorithms in CLRS ${}^{1}$ specify a fixed ordering of the nodes, common to every sample, via a node pointer hint that does not ever change along the trajectories. Prediction of this hint is trivial (identity function), but poses a potential problem for OOD generalization, since the model can overfit to the fixed training values. We therefore turned this fixed hint into an input for these 11 algorithms, eliminating the need for explicitly predicting it.
+
+Improving training stability with encoder initialisation and gradient clipping. The scalar hints have unbounded values, in principle, and are optimised using mean-squared error, hence their gradients can quickly grow with increasing prediction error. Further, the predicted scalar hints then get re-encoded at every step, which can rapidly amplify errors throughout the trajectory, leading to exploding signals (and consequently gradients), even before any training takes place.
+
+To rectify this issue, we use the Xavier initialisation [45], effectively reducing the initial weights for scalar hints whose input dimensionality is just 1 . However, we reverted to using the default LeCun initialisation [46] elsewhere. This combination of initialisations proved important for the initial learning stability of our model over long trajectories. Relatedly, in preliminary experiments, we saw drastic improvements in learning stability, as well as significant increases in validation performance, with gradient clipping [47], which we subsequently employed in all experiments.
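+The sketch below spells out these choices in plain numpy as an illustration (framework specifics differ in practice); the clipping threshold shown is an arbitrary example value.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def xavier_uniform(fan_in, fan_out):
+    limit = np.sqrt(6.0 / (fan_in + fan_out))          # Glorot/Xavier uniform bound
+    return rng.uniform(-limit, limit, size=(fan_in, fan_out))
+
+def lecun_normal(fan_in, fan_out):
+    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_in, fan_out))
+
+scalar_hint_encoder_w = xavier_uniform(1, 128)          # small initial weights for 1-d scalar hints
+other_layer_w = lecun_normal(128, 128)                  # default initialisation elsewhere
+
+def clip_by_global_norm(grads, max_norm=1.0):           # max_norm is illustrative
+    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
+    scale = min(1.0, max_norm / (total + 1e-12))
+    return [g * scale for g in grads]
+```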
+
+#### 3.2.2 Encoders and decoders
+
+Randomised position scalar. Across all algorithms in the dataset, there exists a position scalar input which uniquely indexes the nodes, with values linearly spaced between 0 and 1 along the node index. To avoid overfitting to these linearly spaced values during training, we replaced them with random values, uniformly sampled in $\left\lbrack {0,1}\right\rbrack$ , sorted to match the initial order implied by the linearly spaced values. The benefit of this change is notable in algorithms where it would be easy to overfit to these positions, such as string matching.
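+A few lines suffice to illustrate this replacement: sample uniform values and sort them, so the ordering implied by the original linearly spaced positions is preserved while the exact values change from sample to sample.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def randomised_positions(n):
+    # Random values in [0, 1], sorted to keep the order implied by linspace(0, 1, n).
+    return np.sort(rng.uniform(0.0, 1.0, size=n))
+
+print(randomised_positions(5))   # five increasing values in [0, 1]
+```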
+
+Permutation decoders and the Sinkhorn operator. Sorting algorithms (Insertion Sort, Bubble Sort, Heapsort [48] and Quicksort [49]) always output a permutation of the input nodes. In the CLRS benchmark, this permutation is encoded as a pointer where each node points to its predecessor in the sorted order (the first node points to itself); this is represented as an $n \times n$ matrix $\mathbf{P}$ where each row is a one-hot vector, such that element $(i, j)$ is 1 if node $i$ points to node $j$ . As with all types of pointers, such permutation pointers can be predicted using a row-wise softmax on unconstrained decoder outputs (logits), trained with cross entropy (as in [5]). However, this does not explicitly take advantage of the fact that the pointers encode a permutation, which the model has to learn instead. Our early experiments showed that the model was often failing to predict valid permutations OOD.
+
+---
+
+${}^{1}$ Binary Search, Minimum, Max Subarray [39], Matrix Chain Order, LCS Length, Optimal BST [40], Activity Selector [41], Task Scheduling [42], Naïve String Matcher, Knuth-Morris-Pratt [43] and Jarvis’ March [44].
+
+---
+
+Accordingly, we enforce a permutation inductive bias in the output decoder of sorting algorithms, as follows. First, we modify the output representation by rewiring the first node to point to the last one, turning $\mathbf{P}$ into a permutation matrix, i.e., a matrix whose rows and columns are one-hot vectors. We also augment the representation with a one-hot vector of size $n$ that specifies the first node, so we do not lose this information; this vector is treated like a regular mask_one feature. Second, we predict the permutation matrix $\mathbf{P}$ from unconstrained decoder outputs $\mathbf{Y}$ by replacing the usual row-wise softmax with the Sinkhorn operator $\mathcal{S}$ [32, 50-53]. $\mathcal{S}$ projects an arbitrary square matrix $\mathbf{Y}$ into a doubly stochastic matrix $\mathcal{S}\left( \mathbf{Y}\right)$ (a non-negative matrix whose rows and columns sum to 1), by exponentiating and repeatedly normalizing rows and columns so they sum to 1. Specifically, $\mathcal{S}$ is defined by:
+
+$$
+{\mathcal{S}}^{0}\left( \mathbf{Y}\right) = \exp \left( \mathbf{Y}\right) \;{\mathcal{S}}^{l}\left( \mathbf{Y}\right) = {\mathcal{T}}_{c}\left( {{\mathcal{T}}_{r}\left( {{\mathcal{S}}^{l - 1}\left( \mathbf{Y}\right) }\right) }\right) \;\mathcal{S}\left( \mathbf{Y}\right) = \mathop{\lim }\limits_{{l \rightarrow \infty }}{\mathcal{S}}^{l}\left( \mathbf{Y}\right) , \tag{2}
+$$
+
+where exp acts element-wise, and ${\mathcal{T}}_{r}$ and ${\mathcal{T}}_{c}$ denote row and column normalisation respectively. Although the Sinkhorn operator produces a doubly stochastic matrix rather than a permutation matrix, we can obtain a permutation matrix by introducing a temperature parameter, $\tau > 0$ , and taking $\mathbf{P} = \mathop{\lim }\limits_{{\tau \rightarrow {0}^{ + }}}\mathcal{S}\left( {\mathbf{Y}/\tau }\right)$ ; as long as there are no ties in the elements of $\mathbf{Y}$ , $\mathbf{P}$ is guaranteed to be a permutation matrix [52, Theorem 1].
+
+In practice, we compute the Sinkhorn operator using a fixed number of iterations ${l}_{\max }$ . We use a smaller number of iterations ${l}_{\max } = {10}$ for training, to limit vanishing and exploding gradients, and ${l}_{\max } = {60}$ for evaluation. A fixed temperature $\tau = {0.1}$ was experimentally found to give a good balance between speed of convergence and tie-breaking. We also encode the fact that no node points to itself, that is, that all diagonal elements of $\mathbf{P}$ should be 0, by setting the diagonal elements of $\mathbf{Y}$ to $- \infty$ . To avoid ties, we follow Mena et al. [53], injecting Gumbel noise to the elements of $\mathbf{Y}$ prior to applying the Sinkhorn operator, during training only. Finally, we transform the predicted permutation matrix $\mathbf{P}$ and predicted mask_one vector that points to the first element into the original pointer representation used by CLRS.
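+The numpy sketch below mirrors this procedure as a simplified illustration (not our training code): diagonal masking, optional Gumbel noise, temperature scaling, and a fixed number of alternating row/column normalisations, carried out in log space for numerical stability.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def _logsumexp(a, axis):
+    m = a.max(axis=axis, keepdims=True)
+    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))
+
+# Sinkhorn-based permutation decoding: mask the diagonal, optionally add
+# Gumbel noise (training only), scale by the temperature, then alternate
+# row / column normalisation for a fixed number of iterations.
+def sinkhorn(Y, tau=0.1, n_iters=10, add_gumbel_noise=False):
+    log_a = np.array(Y, dtype=float)
+    np.fill_diagonal(log_a, -1e9)                           # no node points to itself
+    if add_gumbel_noise:                                    # training-time tie breaking
+        log_a = log_a - np.log(-np.log(rng.uniform(size=log_a.shape)))
+    log_a = log_a / tau
+    for _ in range(n_iters):                                # l_max = 10 (train) or 60 (eval)
+        log_a = log_a - _logsumexp(log_a, axis=1)           # rows sum to 1
+        log_a = log_a - _logsumexp(log_a, axis=0)           # columns sum to 1
+    return np.exp(log_a)                                    # approximately doubly stochastic
+
+P = sinkhorn(rng.normal(size=(6, 6)), tau=0.1, n_iters=60)  # near-permutation at low temperature
+```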
+
+#### 3.2.3 Processor networks
+
+Gating mechanisms. Many algorithms only require updating a few nodes at each time step, keeping the rest unchanged. However, the MPNN we use (Equation 1) is biased towards the opposite: it updates all hidden states in each step. Although it is theoretically possible for the network to keep the states unchanged, learning to do so is not easy. With this in mind, and motivated by its effectiveness in NDRs [54], we augment the network with an update gate, biased to be closed by default. We found that the gate stabilizes learning on many of the tasks, and increases the mean performance over all tasks on single-task training significantly. Surprisingly, however, we did not find gating to be advantageous in the multi-task case.
+
+To add gating to the MPNN model we produce a per-node gating vector from the same inputs that process the embeddings in Equation 1:
+
+$$
+{\mathbf{g}}_{i}^{\left( t\right) } = {f}_{g}\left( {{\mathbf{z}}_{i}^{\left( t\right) },{\mathbf{m}}_{i}^{\left( t\right) }}\right) \tag{3}
+$$
+
+where ${f}_{g} : {\mathbb{R}}^{2h} \times {\mathbb{R}}^{h} \rightarrow {\mathbb{R}}^{h}$ is the gating function, for which we use a two-layer MLP, with ReLU activation for the hidden layer and logistic sigmoid activation for the output. Importantly, the final layer bias of ${f}_{g}$ is initialized to a value of -3, which biases the network for not updating its representations, unless necessary. The processed gated embeddings, ${\widehat{\mathbf{h}}}_{i}^{\left( t\right) }$ , are computed as follows:
+
+$$
+{\widehat{\mathbf{h}}}_{i}^{\left( t\right) } = {\mathbf{g}}_{i}^{\left( t\right) } \odot {\mathbf{h}}_{i}^{\left( t\right) } + \left( {1 - {\mathbf{g}}_{i}^{\left( t\right) }}\right) \odot {\mathbf{h}}_{i}^{\left( t - 1\right) } \tag{4}
+$$
+
+and are used instead of ${\mathbf{h}}_{i}^{\left( t\right) }$ in the subsequent steps, replacing ${\mathbf{z}}_{i}^{\left( t\right) }$ in Eq. 1 by ${\mathbf{z}}_{i}^{\left( t\right) } = {\mathbf{x}}_{i}^{\left( t\right) }\parallel {\widehat{\mathbf{h}}}_{i}^{\left( t - 1\right) }$ .
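+For concreteness, a compact sketch of the gated update in Equations 3-4 follows; the weight matrices are placeholders for the two-layer MLP's parameters, with the output bias initialised to -3 as described.
+
+```python
+import numpy as np
+
+# Gated update (Equations 3-4): a per-node gate computed from (z_i, m_i)
+# interpolates between the new and the previous processed embeddings.
+def gated_update(z, m, h_new, h_prev, w_hidden, b_hidden, w_out, b_out):
+    hidden = np.maximum(0.0, np.concatenate([z, m], axis=-1) @ w_hidden + b_hidden)  # ReLU layer
+    gate = 1.0 / (1.0 + np.exp(-(hidden @ w_out + b_out)))                            # sigmoid gate
+    return gate * h_new + (1.0 - gate) * h_prev                                       # Equation 4
+
+n, h = 4, 8
+w_hidden, b_hidden = np.random.randn(3 * h, h) * 0.1, np.zeros(h)   # z has size 2h, m has size h
+w_out, b_out = np.random.randn(h, h) * 0.1, np.full(h, -3.0)        # bias -3: gate starts near 0
+z, m = np.random.randn(n, 2 * h), np.random.randn(n, h)
+h_new, h_prev = np.random.randn(n, h), np.random.randn(n, h)
+h_gated = gated_update(z, m, h_new, h_prev, w_hidden, b_hidden, w_out, b_out)
+```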
+
+Triplet reasoning. Several algorithms within CLRS-30 explicitly require edge-based reasoning, where edges store values and update them based on other edges' values. An example of this is the Floyd-Warshall algorithm [55], which computes all-pairs shortest paths in a weighted graph. The update rule for ${d}_{ij}$ , its estimate for the best distance from node $i$ to $j$ , is ${d}_{ij} = \mathop{\min }\limits_{k}{d}_{ik} + {d}_{kj}$ , which roughly says "the best way to get from $i$ to $j$ is to find the optimal mid-point $k$ , travel from $i$ to $k$ , then from $k$ to $j$ ". Similar rules are pervasive across many CLRS-30 algorithms, especially in dynamic programming. Even though there are no node representations in the above update, all our processors are centered on passing messages between node representations ${\mathbf{h}}_{i}$ .
+
+
+
+Figure 2: The OOD performance in single-task experiments before and after the improvements presented in this paper, sorted in descending order of current performance. Error bars represent standard error of the mean across seeds (3 seeds for previous SOTA experiments, 10 seeds for current). The previous SOTA values are the best of MPNN, PGN and Memnet models (see Table 2).
+
+To rectify this situation, we augment our processor to perform message passing towards edges. Referring again to the update for ${d}_{ij}$ , we note that the edge representations are updated by choosing an intermediate node, then aggregating over all possible choices. Accordingly, and as previously observed by Dudzik and Veličković [31], we introduce triplet reasoning: first, computing representations over triplets of nodes, then reducing over one node to obtain edge latents:
+
+$$
+{\mathbf{t}}_{ijk} = {\psi }_{t}\left( {{\mathbf{h}}_{i},{\mathbf{h}}_{j},{\mathbf{h}}_{k},{\mathbf{e}}_{ij},{\mathbf{e}}_{ik},{\mathbf{e}}_{kj},\mathbf{g}}\right) \;{\mathbf{h}}_{ij} = {\phi }_{t}\left( {\mathop{\max }\limits_{k}{\mathbf{t}}_{ijk}}\right) \tag{5}
+$$
+
+Here, ${\psi }_{t}$ is a triplet message function, mapping all relevant representations to a single vector for each triplet of nodes, and ${\phi }_{t}$ is an edge readout function, which transforms the aggregated triplets for each edge for later use. According to prior findings on the CLRS benchmark [5], we use the max aggregation to obtain edge representations. The computed ${\mathbf{h}}_{ij}$ vectors can then be used in any edge-based reasoning task, and empirically they are indeed significantly beneficial, even in tasks where we did not initially anticipate such benefits. One example is Kruskal's minimum spanning tree algorithm [56], where we presume that access to triplet reasoning allowed the model to more easily sort the edges by weight, as it selects how to augment the spanning forest at each step.
+
+In order to keep the footprint of triplet embeddings as lightweight as possible, we compute only 8-dimensional features in ${\psi }_{t}$ . ${\phi }_{t}$ then upscales the aggregated edge features back to 128 dimensions, to make them compatible with the rest of the architecture. Our initial experimentation demonstrated that the output dimensionality of ${\psi }_{t}$ did not significantly affect downstream performance. Note that computing triplet representations has been a useful approach in general GNN design [57]; however, it has predominantly been studied in the context of GNNs over constant input features. Our study is among the first to verify their utility over reasoning tasks with well-specified initial features.
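+To make Equation 5 concrete, the following deliberately naive $O\left( {n}^{3}\right)$ sketch computes the triplet features and edge latents; `psi` and `phi` are toy stand-ins for the learned 8-dimensional triplet message function and the edge readout that upscales back to 128 dimensions.
+
+```python
+import numpy as np
+
+# Triplet reasoning (Equation 5): a feature for every (i, j, k) triplet,
+# max-reduced over the intermediate node k, then mapped to edge latents.
+def triplet_messages(h, e, g, psi, phi):
+    n = h.shape[0]
+    t = np.stack([[[psi(h[i], h[j], h[k], e[i, j], e[i, k], e[k, j], g)
+                    for k in range(n)] for j in range(n)] for i in range(n)])  # (n, n, n, d_t)
+    return phi(t.max(axis=2))                                                   # reduce over k -> (n, n, d_e)
+
+n, h_dim, t_dim = 4, 16, 8
+h = np.random.randn(n, h_dim)
+e, g = np.random.randn(n, n, h_dim), np.random.randn(h_dim)
+toy_psi = lambda hi, hj, hk, eij, eik, ekj, gg: (hi + hj + hk + eij + eik + ekj + gg)[:t_dim]
+toy_phi = lambda agg: np.repeat(agg, 16, axis=-1)            # stand-in for the 8 -> 128 upscaling
+edge_latents = triplet_messages(h, e, g, toy_psi, toy_phi)   # shape (4, 4, 128)
+```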
+
+### 3.3 Results
+
+By incorporating the changes described in the previous sections we arrived at a single model type, with a single set of hyper-parameters, that was trained to reach new state-of-the-art performance on CLRS-30 [5]. Tables 1 and 2 show the micro- ${\mathrm{F}}_{1}$ scores of our model, which we refer to as Triplet-GMPNN (an MPNN with gating and triplet edge processing), over the original CLRS-30 test set (computed identically to [5], but with 10 repetitions instead of 3). Our baselines include the Memnet [58], MPNN [35] and PGN [59] models, taken directly from [5]. Figure 2 displays the comparison between the improved model and the best model from [5]. Our improvements lead to an overall average performance that is more than ${20}\%$ higher (in absolute terms) compared to the next best model (see Table 1), and to a significant performance improvement in all but one algorithm family, compared to every other model. Further, our stabilising changes (such as gradient clipping) have empirically reduced the scale of our model's gradient updates across the 30 tasks, preparing us better for the numerical issues of the multi-task regime. We finally also note that though we do not show it in Tables 1 & 2, applying the same improvements to the PGN processor, leads to an increase in overall performance from 50.84% (Table 1) to 69.31%.
+
+Table 1: Single-task OOD micro- ${\mathrm{F}}_{1}$ score of previous SOTA Memnet, MPNN and PGN [5] and our best model Triplet-GMPNN with all our improvements, after 10,000 training steps.
+
+| Alg. Type | Memnet [5] | MPNN [5] | PGN [5] | Triplet-GMPNN (ours) |
+| --- | --- | --- | --- | --- |
| Div. & C. | 13.05% ± 0.14 | ${20.30}\% \pm {0.85}$ | ${65.23}\% \pm {4.44}$ | $\mathbf{{76.36}}\% \pm {1.34}$ |
| DP | ${67.94}\% \pm {8.20}$ | ${65.10}\% \pm {6.44}$ | ${70.58}\% \pm {6.48}$ | $\mathbf{{81.99}\% \pm {4.98}}$ |
| Geometry | ${45.14}\% \pm {11.95}$ | 73.11% ± 17.19 | ${61.19}\% \pm {7.01}$ | $\mathbf{{94.09}\% \pm {2.30}}$ |
| Graphs | ${24.12}\% \pm {5.30}$ | ${62.79}\% \pm {8.75}$ | ${60.25}\% \pm {8.42}$ | 81.41% $\pm$ 6.21 |
| Greedy | ${53.42}\% \pm {20.82}$ | ${82.39}\% \pm {3.01}$ | ${75.84}\% \pm {6.59}$ | ${91.21}\% \pm {2.95}$ |
| Search | ${34.35}\% \pm {21.67}$ | ${41.20}\% \pm {19.87}$ | ${56.11}\% \pm {21.56}$ | ${58.61}\% \pm {24.34}$ |
| Sorting | ${71.53}\% \pm {1.41}$ | ${11.83}\% \pm {2.78}$ | ${15.45}\% \pm {8.46}$ | 60.37% ± 12.16 |
| Strings | ${1.51}\% \pm {0.46}$ | ${3.21}\% \pm {0.94}$ | ${2.04}\% \pm {0.20}$ | ${49.09}\% \pm {23.49}$ |
| Overall avg. | 38.88% | 44.99% | 50.84% | 74.14% |
| >90% | 0/30 | 6/30 | 3/30 | 11/30 |
| > 80% | 3/30 | 9/30 | 7/30 | ${17}/{30}$ |
| >60% | 10/30 | 14/30 | ${15}/{30}$ | $\mathbf{{24}}/\mathbf{{30}}$ |
+
+There are two notable examples of algorithm families with significant OOD performance improvement. The first are geometric algorithms (Segments Intersect, Graham Scan [60] and Jarvis' March), now solved at approximately ${94}\%$ OOD, compared to the previous best of about ${73}\%$ ; the second being string algorithms (Knuth-Morris-Pratt and Naïve String Matcher) for which our model now exceeds 49% compared to the previous best of approximately 3%.
+
+The significant overall performance boost is reflected in the increased number of algorithms we can now solve at over ${60}\% ,{80}\% \& {90}\%$ OOD performance, compared to previous SOTA [5]. Specifically, we now exceed ${60}\%$ accuracy in 24 algorithms ( 15 algorithms previously), ${80}\%$ for 17 algorithms (9 previously) and 90% for 11 algorithms ( 6 previously).
+
+## 4 Multi-task experiments
+
+In the multi-task setting, we train a single processor across all CLRS-30 tasks. We keep encoders and decoders separate for each task. To perform the update, one might accumulate gradients from all the tasks before stepping the optimizer, or step independently after each batch from each algorithm. Both approaches have been deemed to be effective in the multi-task learning literature [20, 24, 61], and we empirically found that, in our setting, stepping separately per task produced superior results. Following recent work [61], we did not explore specialised multi-task optimizers, but ensured the stability of the training with gradient clipping [47] and Xavier initialisation [45] of scalar hint encoders to ameliorate exploding outputs and NaN gradients, as already described. Batch size and learning rate are the same as in single-task experiments. We found that gating (Section 3.2.3) degraded multi-task performance, so it was not included in the multi-task model.
+
+Chunking. To reduce the memory footprint of multi-task training we implemented a chunked training mode, where trajectories are split along the time axis for gradient computation and, when they are shorter than the chunk length, are concatenated with the following trajectory so as to avoid the need of padding. Thus, while a standard-training batch consists of full trajectories, padded to the length of the longest one, a chunked-training batch has a fixed time length (16 steps in our experiments) and consists of segments of trajectories. Immediately after the end of one trajectory the beginning of another one follows, so there is no padding. Losses are computed independently for each chunked batch, and gradients cannot flow between chunks. Since the output loss is computed only on the final sample of each trajectory, a chunk may give rise to no output loss, if it contains no end-of-trajectory segments. Chunking, therefore, changes the balance between hint and output losses depending on the length of trajectories. Surprisingly, multi-task performance averaged across all 30 tasks, after chunked training, is significantly better compared to full-trajectory training (Figure 4a). Only one algorithm, Bellman-Ford, has worse performance with chunked training (Figure 10). The strong effect of chunking on multi-algorithm performance indicates that the weighting of hint and output losses of the different tasks during optimization is important for successful multi-task learning.
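+The sketch below illustrates the chunk assembly only (the loss computation and the underlying data structures are simplified assumptions): trajectories are concatenated back-to-back along the time axis and cut into fixed-length chunks, with a per-step flag marking trajectory ends, which is where the output loss would be applied.
+
+```python
+# Chunked batching: concatenate trajectories along time and cut into
+# fixed-length chunks, so no padding is needed; `is_last` marks the step
+# carrying the output loss. Trajectories are lists of per-step dictionaries.
+def make_chunks(trajectories, chunk_len=16):
+    stream = []
+    for traj in trajectories:
+        for t, step in enumerate(traj):
+            stream.append({**step, "is_last": t == len(traj) - 1})
+    chunks = [stream[i:i + chunk_len] for i in range(0, len(stream), chunk_len)]
+    return [c for c in chunks if len(c) == chunk_len]   # drop a trailing partial chunk
+
+# Example: two short trajectories packed into 4-step chunks.
+chunks = make_chunks([[{"t": i} for i in range(5)], [{"t": i} for i in range(3)]], chunk_len=4)
+```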
+
+
+
+Figure 3: Per-algorithm comparison between our multi-task model and single-task Triplet-GMPNN from Table 2, ordered by biggest improvement for multi-task (left to right). Refer to Figure 5 for a comparison against the best single-task model per algorithm instead.
+
+
+
+Figure 4: Multi-task model ablations showing average performance and 95% CI across 10 seeds. ST, single-task; MT, multi-task.
+
+Results. Figure 3 compares the performance of the single-task Triplet-GMPNN against the multi-task model. Additional comparisons against the best per-algorithm single-task model from Table 2 are also presented in Figure 5, along with an illustration of the number of tasks where the performance of the multi-task model matches, or exceeds, that of single-task models.
+
+To evaluate the effect of our model improvements independently, we also performed a thorough model ablation. Figure 4a shows the significant difference in performance between the vanilla and chunked training regimes; we chose the latter to perform the ablations on. Figure 4b shows the results of our cumulative ablation: we gradually removed our improvements one at a time, with each element in the legend being the same as the model preceding it with a single improvement removed. On average, all the presented improvements contribute to the higher performance, with the largest effect coming from teacher forcing noise, i.e. feeding ground-truth hints at training time hurts generalisation, most likely because the correct hints are not available at test time, leading to data distribution shift.
+
+## 5 Conclusion
+
+We presented a generalist neural algorithmic learner: a single graph neural network, with a single set of weights, capable of solving a diverse collection of classical algorithms, at a level comparable to (and at times exceeding) a relevant single-task expert. Achieving this objective was preceded by a range of improvements to the dataset, optimisation and architectures for neural algorithmic reasoning, which led to an over 20% absolute improvement over the prior best-known result. It is our hope that the results and empirical insights shared by this work will be of use to researchers and practitioners in the area, and help scale neural algorithmic learning to new domains and applications.
+
+References
+
+[1] Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. How neural networks extrapolate: From feedforward to graph neural networks. arXiv preprint arXiv:2009.11848, 2020. 1, 3
+
+[2] Beatrice Bevilacqua, Yangze Zhou, and Bruno Ribeiro. Size-invariant graph representations for graph classification extrapolations. In International Conference on Machine Learning, pages 837-851. PMLR, 2021. 1
+
+[3] Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, and Haggai Maron. From local structures to size generalization in graph neural networks. In International Conference on Machine Learning, pages 11975-11986. PMLR, 2021. 1
+
+[4] Petar Veličković and Charles Blundell. Neural algorithmic reasoning. Patterns, 2(7):100273, 2021. 1
+
+[5] Petar Veličković, Adrià Puigdomènech Badia, David Budden, Razvan Pascanu, Andrea Banino, Misha Dashevskiy, Raia Hadsell, and Charles Blundell. The CLRS algorithmic reasoning benchmark. arXiv preprint arXiv:2205.15659, 2022. 1, 2, 3, 4, 5, 7, 8, 13, 14
+
+[6] Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2022. 1, 2
+
+[7] Andreea-Ioana Deac, Petar Veličković, Ognjen Milinkovic, Pierre-Luc Bacon, Jian Tang, and Mladen Nikolic. Neural algorithmic reasoners are implicit planners. Advances in Neural Information Processing Systems, 34:15529-15542, 2021. 1
+
+[8] Petar Veličković, Matko Bošnjak, Thomas Kipf, Alexander Lerchner, Raia Hadsell, Raz-van Pascanu, and Charles Blundell. Reasoning-modulated representations. arXiv preprint arXiv:2107.08881, 2021. 1
+
+[9] Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. arXiv preprint arXiv:1910.10593, 2019. 1, 2, 4
+
+[10] Louis-Pascal Xhonneux, Andreea-Ioana Deac, Petar Veličković, and Jian Tang. How to transfer algorithmic reasoning knowledge to learn new algorithms? Advances in Neural Information Processing Systems, 34:19500-19512, 2021. 1, 2
+
+[11] Robert Clay Prim. Shortest connection networks and some generalizations. The Bell System Technical Journal, 36(6):1389-1401, 1957. 2
+
+[12] Edsger W Dijkstra. A note on two problems in connexion with graphs. Numerische mathematik, 1(1):269-271, 1959. 2
+
+[13] Richard Bellman. On a routing problem. Quarterly of applied mathematics, 16(1):87-90, 1958. 2
+
+[14] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48, 2009. 3
+
+[15] Rich Caruana. Multitask learning. Machine learning, 28(1):41-75, 1997. 3
+
+[16] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. Advances in Neural Information Processing Systems, 30, 2017. 3
+
+[17] Vitaly Kurin, Saad Godil, Shimon Whiteson, and Bryan Catanzaro. Can q-learning with graph networks learn a generalizable branching heuristic for a sat solver? Advances in Neural Information Processing Systems, 33:9608-9621, 2020.
+
+[18] Quentin Cappart, Didier Chételat, Elias Khalil, Andrea Lodi, Christopher Morris, and Petar Veličković. Combinatorial optimization and reasoning with graph neural networks. arXiv preprint arXiv:2102.09544, 2021. 3
+
+[19] Peter C Humphreys, David Raposo, Tobias Pohlen, Gregory Thornton, Rachita Chhaparia, Alistair Muldal, Josh Abramson, Petko Georgiev, Adam Santoro, and Timothy Lillicrap. A data-driven approach for learning to control computers. In International Conference on Machine Learning, pages 9466-9482. PMLR, 2022. 3
+
+[20] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022. 3, 8
+
+[21] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017. 3
+
+[22] Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Ried-miller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In International Conference on Machine Learning, pages 4470-4479. PMLR, 2018. 3
+
+[23] Tingwu Wang, Renjie Liao, Jimmy Ba, and Sanja Fidler. NerveNet: Learning structured policy with graph neural networks. In International Conference on Learning Representations, 2018.
+
+[24] Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer, and Shimon Whiteson. My body is a cage: the role of morphology in graph-based incompatible control. In International Conference on Learning Representations, 2021. 8
+
+[25] Charles Blake, Vitaly Kurin, Maximilian Igl, and Shimon Whiteson. Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing. Advances in Neural Information Processing Systems, 34:23983-23992, 2021.
+
+[26] Victor Bapst, Alvaro Sanchez-Gonzalez, Carl Doersch, Kimberly Stachenfeld, Pushmeet Kohli, Peter Battaglia, and Jessica Hamrick. Structured agents for physical construction. In International Conference on Machine Learning, pages 464-474. PMLR, 2019. 3
+
+[27] Karl Cobbe, Chris Hesse, Jacob Hilton, and John Schulman. Leveraging procedural generation to benchmark reinforcement learning. In International Conference on Machine Learning, pages 2048-2056. PMLR, 2020. 3
+
+[28] Heinrich Küttler, Nantas Nardelli, Alexander Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The NetHack learning environment. Advances in Neural Information Processing Systems, 33:7671-7684, 2020.
+
+[29] Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Küttler, Edward Grefenstette, and Tim Rocktäschel. Mini-Hack the planet: A sandbox for open-ended reinforcement learning research. arXiv preprint arXiv:2109.13202, 2021. 3
+
+[30] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? arXiv preprint arXiv:1905.13211, 2019. 3
+
+[31] Andrew Dudzik and Petar Veličković. Graph neural networks are dynamic programmers. arXiv preprint arXiv:2203.15544, 2022. 3, 7
+
+[32] Richard Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. The Annals of Mathematical Statistics, 35(2):876-879, 1964. doi: 10.1214/aoms/1177703591. 3, 6
+
+[33] Jessica B Hamrick, Kelsey R Allen, Victor Bapst, Tina Zhu, Kevin R McKee, Joshua B Tenenbaum, and Peter W Battaglia. Relational inductive bias for physical construction in humans and machines. arXiv preprint arXiv:1806.01203, 2018. 3
+
+[34] Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Gold-blum, and Tom Goldstein. End-to-end algorithm synthesis with recurrent networks: Logical extrapolation without overthinking. arXiv preprint arXiv:2202.05826, 2022. 3
+
+[35] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263-1272. PMLR, 2017. 4, 7
+
+[36] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. 4
+
+[37] Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280, 1989. 4
+
+[38] Paul Erdős, Alfréd Rényi, et al. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17-60, 1960. 5
+
+[39] Jon Bentley. Programming pearls: algorithm design techniques. Communications of the ACM, 27(9):865-873, 1984. 5
+
+[40] Alfred V Aho, John E Hopcroft, and Jeffrey D Ullman. The design and analysis of computer algorithms. Reading, 1974. 5
+
+[41] Fǎnică Gavril. Algorithms for minimum coloring, maximum clique, minimum covering by cliques, and maximum independent set of a chordal graph. SIAM Journal on Computing, 1(2): 180-187, 1972. 5
+
+[42] Eugene L Lawler. The traveling salesman problem: a guided tour of combinatorial optimization. Wiley-Interscience Series in Discrete Mathematics, 1985. 5
+
+[43] Donald E Knuth, James H Morris, Jr, and Vaughan R Pratt. Fast pattern matching in strings. SIAM journal on computing, 6(2):323-350, 1977. 5
+
+[44] Ray A Jarvis. On the identification of the convex hull of a finite set of points in the plane. Information processing letters, 2(1):18-21, 1973. 5
+
+[45] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256. JMLR Workshop and Conference Proceedings, 2010. 5, 8
+
+[46] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. 5
+
+[47] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310-1318. PMLR, 2013. 5, 8, 13
+
+[48] John William Joseph Williams. Algorithm 232: heapsort. Commun. ACM, 7:347-348, 1964. 5
+
+[49] Charles AR Hoare. Quicksort. The Computer Journal, 5(1):10-16, 1962. 5
+
+[50] Paul Knopp and Richard Sinkhorn. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21(2):343-348, 1967. doi: pjm/1102992505. 6
+
+[51] Rodrigo Santa Cruz, Basura Fernando, Anoop Cherian, and Stephen Gould. DeepPermNet: Visual permutation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
+
+[52] Gonzalo Mena, David Belanger, Gonzalo Munoz, and Jasper Snoek. Sinkhorn networks: Using optimal transport techniques to learn permutations. In Neural Information Processing Systems Workshop in Optimal Transport and Machine Learning, 2017. 6
+
+[53] Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. Learning latent permutations with Gumbel-Sinkhorn networks. In International Conference on Learning Representations, 2018. 6
+
+[54] Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. The neural data router: Adaptive control flow in transformers improves systematic generalization. In International Conference on Learning Representations, 2022. 6
+
+[55] Robert W Floyd. Algorithm 97: shortest path. Communications of the ACM, 5(6):345, 1962. 6
+
+[56] Joseph B Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical society, 7(1):48-50, 1956. 7
+
+[57] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. Proceedings of the AAAI conference on artificial intelligence, 33(01):4602-4609, 2019. 7
+
+[58] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. Advances in Neural Information Processing Systems, 28, 2015. 7
+
+[59] Petar Veličković, Lars Buesing, Matthew Overlan, Razvan Pascanu, Oriol Vinyals, and Charles Blundell. Pointer graph networks. Advances in Neural Information Processing Systems, 33: 2232-2244, 2020. 7
+
+[60] Ronald L. Graham. An efficient algorithm for determining the convex hull of a finite planar set. Info. Pro. Lett., 1:132-133, 1972. 8
+
+[61] Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, and M Pawan Kumar. In defense of the unitary scalarization for deep multi-task learning. arXiv preprint arXiv:2201.04122, 2022. 8
+
+[62] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 13
+
+## A Appendix
+
+### A.1 Additional experimental details
+
+We use an embedding size $h = {128}$ across all experiments. We train in batches of size 32 using an Adam optimizer [62] with learning rate ${0.001},{\beta }_{1} = {0.9},{\beta }_{2} = {0.999},\epsilon = {10}^{-8}$ , employing gradient clipping by norm [47] with the clipping constant $c$ empirically set to 1.0. In single-task experiments, we train for 10,000 batches; in the multi-task experiments, we train for 10,000 cycles of 30 batches, one per algorithm. When using multiple training sizes (that is, everywhere except in no-data-augmentation ablations), each batch of each algorithm contains samples of the same size $n$ , and the sizes for each algorithm cycle along the sequence $\left\lbrack {4,7,{11},{13},{16}}\right\rbrack$ , except for string matching algorithms, where the training size is always $n = {20}$ (variability is achieved by randomising the needle size, see below). When using chunking in multi-task experiments, batches have a fixed unroll length of 16 steps; otherwise, each batch contains full-length samples. In chunked experiments it is important to keep separate values of the processor embeddings for each algorithm and training size, since unrolls are split in time and a new batch must start from the last-step embedding state of the same trajectories.
+
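+For concreteness, the batching schedule above can be sketched as follows. This is a minimal Python illustration rather than the actual training code; `make_batch`, the algorithm names and all other identifiers are hypothetical stand-ins for the CLRS samplers.
+
+```python
+import itertools
+
+ALGORITHMS = ["insertion_sort", "bfs", "dijkstra"]      # stand-ins for the 30 CLRS tasks
+TRAIN_SIZES = [4, 7, 11, 13, 16]                        # sizes cycled per algorithm
+STRING_ALGOS = {"naive_string_matcher", "kmp"}          # always trained at n = 20
+
+size_cycle = {alg: itertools.cycle(TRAIN_SIZES) for alg in ALGORITHMS}
+
+def make_batch(algorithm, n, batch_size=32):
+    # Placeholder: in practice this would call the CLRS on-line sampler.
+    return {"algorithm": algorithm, "n": n, "batch_size": batch_size}
+
+def next_training_batch(algorithm, batch_size=32):
+    """One batch for `algorithm`; every sample in the batch shares the same size n."""
+    n = 20 if algorithm in STRING_ALGOS else next(size_cycle[algorithm])
+    return make_batch(algorithm, n, batch_size)
+
+# One multi-task cycle = one batch per algorithm; training runs 10,000 such cycles,
+# each batch optimised with one Adam step (lr = 1e-3).
+for algorithm in ALGORITHMS:
+    batch = next_training_batch(algorithm)
+```
+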
+The trained model is evaluated periodically during training on samples of size $n = 16$ ($n = 20$ for string matching algorithms), and the best-performing model seen so far is evaluated on OOD samples of size $n = 64$ ($n = 80$ for string matching). Only OOD performance is reported in this paper. The OOD data used for evaluation is sampled on-the-fly, drawn randomly at each evaluation, with the number of samples being the same as in the CLRS benchmark [5]. The exception is Tables 1 and 2, where, for fair comparison, we used the fixed OOD samples from the CLRS dataset. We found no significant difference between evaluations on the fixed test data and on the on-the-fly samples.
+
+When using randomised edge connection probabilities $p$ for data augmentation in graph algorithms (that is, in all experiments except the no-data-augmentation ablations), we sampled $p$ independently for each sample, uniformly from the set $\{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9\}$. However, for Articulation Points, Bridges and MST Kruskal we used a value of $p/2$, since otherwise, with dense graphs, the algorithms produce very long trajectories that would not fit in GPU memory. In Naïve String Matcher and Knuth-Morris-Pratt we randomised the length of the needle uniformly between 1 and 8.
+
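+A minimal sketch of this sampling scheme, assuming NumPy; the helper names are illustrative and not taken from the CLRS codebase:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+HALVED_P_ALGOS = {"articulation_points", "bridges", "mst_kruskal"}
+P_GRID = np.linspace(0.1, 0.9, 9)            # {0.1, 0.2, ..., 0.9}
+
+def sample_edge_probability(algorithm):
+    """Per-sample Erdős–Rényi connection probability used during training."""
+    p = rng.choice(P_GRID)
+    return p / 2 if algorithm in HALVED_P_ALGOS else p
+
+def sample_needle_length():
+    """Needle length for string-matching tasks, uniform in {1, ..., 8}."""
+    return int(rng.integers(1, 9))
+```
+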
+As discussed in the main text, data augmentation via sizes, connection probabilities and needle lengths only applied to the training data. Evaluation always used the fixed parameters established in the CLRS benchmark.
+
+### A.2 Additional experimental results
+
+Table 2: Single-task OOD average micro- ${\mathrm{F}}_{1}$ score of previous SOTA Memnet, MPNN and PGN [5] and our best model Triplet-GMPNN with all the improvements described in Section 3.
+
+| Algorithm | Memnet [5] | MPNN [5] | PGN [5] | Triplet-GMPNN (ours) |
+| --- | --- | --- | --- | --- |
+| Activity Selector | 24.10% ± 2.22 | 80.66% ± 3.16 | 66.80% ± 1.62 | **95.18% ± 0.45** |
+| Articulation Points | 1.50% ± 0.61 | 50.91% ± 2.18 | 49.53% ± 2.09 | 88.32% ± 2.01 |
+| Bellman-Ford | 40.04% ± 1.46 | 92.01% ± 0.28 | 92.99% ± 0.34 | **97.39% ± 0.19** |
+| BFS | 43.34% ± 0.04 | 99.89% ± 0.05 | 99.63% ± 0.29 | 99.73% ± 0.04 |
+| Binary Search | 14.37% ± 0.46 | 36.83% ± 0.26 | 76.95% ± 0.13 | 77.58% ± 2.35 |
+| Bridges | 30.26% ± 0.05 | 72.69% ± 4.78 | 51.42% ± 7.82 | 93.99% ± 2.07 |
+| Bubble Sort | 73.58% ± 0.78 | 5.27% ± 0.60 | 6.01% ± 1.95 | 67.68% ± 5.50 |
+| DAG Shortest Paths | 66.15% ± 1.92 | 96.24% ± 0.56 | 96.94% ± 0.16 | **98.19% ± 0.30** |
+| DFS | 13.36% ± 1.61 | 6.54% ± 0.51 | 8.71% ± 0.24 | 47.79% ± 4.19 |
+| Dijkstra | 22.48% ± 2.39 | 91.50% ± 0.50 | 83.45% ± 1.75 | **96.05% ± 0.60** |
+| Find Max. Subarray | 13.05% ± 0.08 | 20.30% ± 0.49 | 65.23% ± 2.56 | 76.36% ± 0.43 |
+| Floyd-Warshall | 14.17% ± 0.13 | 26.74% ± 1.77 | 28.76% ± 0.51 | 48.52% ± 1.04 |
+| Graham Scan | 40.62% ± 2.31 | 91.04% ± 0.31 | 56.87% ± 1.61 | 93.62% ± 0.91 |
+| Heapsort | 68.00% ± 1.57 | 10.94% ± 0.84 | 5.27% ± 0.18 | 31.04% ± 5.82 |
+| Insertion Sort | 71.42% ± 0.86 | 19.81% ± 2.08 | 44.37% ± 2.43 | 78.14% ± 4.64 |
+| Jarvis’ March | 22.99% ± 3.87 | 34.86% ± 12.39 | 49.19% ± 1.07 | **91.01% ± 1.30** |
+| Knuth-Morris-Pratt | 1.81% ± 0.00 | 2.49% ± 0.86 | 2.00% ± 0.12 | 19.51% ± 4.57 |
+| LCS Length | 49.84% ± 4.34 | 53.23% ± 0.36 | 56.82% ± 0.21 | 80.51% ± 1.84 |
+| Matrix Chain Order | 81.96% ± 1.03 | 79.84% ± 1.40 | 83.91% ± 0.49 | **91.68% ± 0.59** |
+| Minimum | 86.93% ± 0.11 | 85.34% ± 0.88 | 87.71% ± 0.52 | 97.78% ± 0.55 |
+| MST-Kruskal | 28.84% ± 0.61 | 70.97% ± 1.50 | 66.96% ± 1.36 | **89.80% ± 0.77** |
+| MST-Prim | 10.29% ± 3.77 | 69.08% ± 7.56 | 63.33% ± 0.98 | 86.39% ± 1.33 |
+| Naïve String Matcher | 1.22% ± 0.48 | 3.92% ± 0.30 | 2.08% ± 0.20 | 78.67% ± 4.99 |
+| Optimal BST | 72.03% ± 1.21 | 62.23% ± 0.44 | 71.01% ± 1.82 | 73.77% ± 1.48 |
+| Quickselect | 1.74% ± 0.03 | 1.43% ± 0.69 | 3.66% ± 0.42 | 0.47% ± 0.25 |
+| Quicksort | 73.10% ± 0.67 | 11.30% ± 0.10 | 6.17% ± 0.15 | 64.64% ± 5.12 |
+| Segments Intersect | 71.81% ± 0.90 | 93.44% ± 0.10 | 77.51% ± 0.75 | **97.64% ± 0.09** |
+| SCC | 16.32% ± 4.78 | 24.37% ± 4.88 | 20.80% ± 0.64 | 43.43% ± 3.15 |
+| Task Scheduling | 82.74% ± 0.04 | 84.11% ± 0.32 | 84.89% ± 0.91 | 87.25% ± 0.35 |
+| Topological Sort | 2.73% ± 0.11 | 52.60% ± 6.24 | 60.45% ± 2.69 | 87.27% ± 2.67 |
+| Overall average | 38.03% | 51.02% | 52.31% | 75.98% |
+
+[Figure 5a bar chart: average score [%] (0-80) of the multi-task (MT) model versus the best single-task (ST) model per algorithm, with one pair of bars per algorithm and for the overall average.]
+
+(a) Per-algorithm comparison between our multi-task model and the best per-algorithm model from Table 2, ordered by biggest improvement for multi-task (left to right).
+
+[Figure 5b chart: total number of algorithms (0-25) above a threshold, expressed as % of the best single-task performance (x-axis, 0-120%), broken down by algorithm type: Divide & Conquer, Strings, Search, Geometry, Dynamic Programming, Sorting, Graphs.]
+
+(b) Number of tasks where the performance of the multi-task model matched, or exceeded, a given percentage of the performance of the best single-task model (per algorithm) from Table 2, grouped by algorithm type. Note that, for some algorithms, the performance of the multi-task learner is higher than that of the best single-task learner.
+
+
+
+(c) Number of tasks where the performance of the multi-task model matched, or exceeded, a given percentage of the performance of single-task Triplet-GMPNN from Table 2, grouped by algorithm type.
+
+Figure 5: Comparing our multi-task model to the best model per algorithm from Table 2 (5a & 5b). The comparison in 5c is between our multi-task model and our single-task Triplet-GMPNN.
+
+
+
+Figure 6: Single-task model cumulative ablations.
+
+
+
+Figure 7: Non-cumulative single-task ablations faceted by algorithm. Part 1.
+
+
+
+Figure 8: Non-cumulative single-task ablations faceted by algorithm. Part 2.
+
+
+
+Figure 9: Non-cumulative single-task ablations faceted by algorithm. Part 3.
+
+
+
+Figure 10: Per-algorithm comparison of chunked and non-chunked multitask models.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/FebadKZf6Gd/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/FebadKZf6Gd/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ec08831fd7b54cc272f38bd29090361f1765694a
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/FebadKZf6Gd/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,207 @@
+§ A GENERALIST NEURAL ALGORITHMIC LEARNER
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+The cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a way that generalises out of distribution. While recent years have seen a surge in methodological improvements in this area, they mostly focused on building specialist models. Specialist models are capable of learning to neurally execute either only one algorithm or a collection of algorithms with identical control-flow backbone. Here, instead, we focus on constructing a generalist neural algorithmic learner-a single graph neural network processor capable of learning to execute a wide range of algorithms, such as sorting, searching, dynamic programming, path-finding and geometry. We leverage the CLRS benchmark to empirically show that, much like recent successes in the domain of perception, generalist algorithmic learners can be built by "incorporating" knowledge. That is, it is possible to effectively learn algorithms in a multi-task manner, so long as we can learn to execute them well in a single-task regime. Motivated by this, we present a series of improvements to the input representation, training regime and processor architecture over CLRS, improving average single-task performance by over ${20}\%$ from prior art. We then conduct a thorough ablation of multi-task learners leveraging these improvements. Our results demonstrate a generalist learner that effectively incorporates knowledge captured by specialist models.
+
+§ 1 INTRODUCTION
+
+Machine learning systems based on deep neural networks have made tremendous strides in recent years, especially so for tasks dominated by perception. Prominent models in this space are usually required to generalise in-distribution, meaning that their training and validation sets are representative of the distribution expected of test inputs. In contrast, to truly master tasks dominated by reasoning, a model needs to provide sensible outputs even when generalising out-of-distribution (OOD). Correspondingly, neural networks have seen lesser levels of success in this domain. Indeed, it has been suggested that stronger neural reasoning architectures may require careful application of methods such as algorithmic alignment [1], causality [2] and self-supervised learning [3]. Furthermore, these kinds of architectures are likely to be critical for robustly generating new knowledge based on existing observations, especially when that knowledge escapes the domain of training data.
+
+Neural algorithmic reasoning [4] offers a robust route for obtaining such modelling advancements. Its focus is on evaluating existing (graph) neural network architectures on their ability to solve algorithmic tasks, typically by learning to execute classical algorithms [5]. This is an excellent target for probing reasoning capabilities, as classical algorithms can be seen as the essential "building blocks" for all of theoretical computer science, and fundamental tools in a software engineering career [6]. While this is a fairly self-contained pipeline, evidence of its applicability has already emerged: Graph Neural Networks (GNNs) pre-trained on algorithmic tasks have been successfully utilised in implicit planning [7] and self-supervised learning [8]. All of the prior advances in this area focused on building specialist models: either focusing on a single algorithm, or a collection of algorithms with an identical control flow backbone [9, 10].
+
+In contrast, here we demonstrate a generalist neural algorithmic learner: a single GNN, with a single set of parameters, capable of learning to solve several classical algorithmic tasks simultaneously-to a level that matches relevant specialist models on average. This represents an important milestone, showing we can meaningfully incorporate reasoning capabilities even across tasks with completely disparate control flow, and in several tasks, we can exceed the OOD performance of the corresponding single-task specialist. Our generalist model is capable of performing various tasks, spanning sorting, searching, greedy algorithms, dynamic programming, graph algorithms, string algorithms and geometric algorithms (Figure 1). The experimentation we conduct is made possible by the CLRS-30 benchmark [5], a collection of thirty classical algorithmic tasks [6] spanning the above categories, along with a unified representational interface which made multi-task models easier to deploy.
+
+ < g r a p h i c s >
+
+Figure 1: Our generalist neural algorithmic learner is a single processor GNN $P$ , with a single set of weights, capable of solving several algorithmic tasks, $\tau$ , in a shared latent space (each of which is attached to $P$ with simple encoders/decoders ${f}_{\tau }$ and ${g}_{\tau }$ ). Among others, our processor network is capable of sorting (top), shortest path-finding (middle), and convex hull finding (bottom).
+
+Our results are powered by a single salient observation: any numerical difficulties which would make individual algorithms harder to learn (e.g. unstable gradients) are amplified when trying to learn a collection of such algorithms at once. Therefore, one of our main contributions is also to present a series of improvements to the training, optimisation, input representations, and GNN architectures which, taken together, improve the best-known average performance on the CLRS-30 benchmark by over ${20}\%$ in absolute terms. We hope that our collection of improvements, with careful explanation for their applicability, will prove useful to GNN practitioners even beyond the realm of reasoning.
+
+Following the overview of related work in Section 2, we describe, in Section 3, the improvements in the representation, training regime and architecture that lead to a single model with significantly better performance than previous published state-of-the-art (SOTA) on CLRS-30. We then show in Section 4, as our main contribution, that this model, trained simultaneously on all the CLRS-30 tasks, can match corresponding specialist models on average, demonstrating general algorithmic learning.
+
+§ 2 RELATED WORK
+
+The closest related work to ours is NeuralExecutor++, a multi-task algorithmic reasoning model by Xhonneux et al. [10, NE++]. As briefly discussed in the prior section, NE++ focuses on a highly specialised setting where all of the algorithms to be learnt have an identical control flow backbone. For example, NE++ jointly learns to execute Prim's [11] and Dijkstra's [12] algorithms, which are largely identical (up to a choice of key function and edge relaxation subroutine). Even in this specialist regime, the authors are able to make critical observations, such as empirically showing the specific forms of multi-task learning that are necessary for generalising OOD. We leverage the insights of NE++, while extending its influence well beyond the domain of closely related algorithms.
+
+Also of note is the work on neural execution of graph algorithms by Veličković et al. [9]. This work provided early evidence of the potential for multi-task learning of classical algorithms. Namely, the authors simultaneously learn breadth-first search and the Bellman-Ford algorithm [13]—empirically demonstrating that jointly learning to execute them is favourable to learning them either in isolation or with various forms of curriculum learning [14]. Once again, the algorithms in question are nearly-identical in terms of backbone; in fact, breadth-first search can be interpreted as the Bellman-Ford algorithm over a graph with constant edge weights.
+
+In the multi-task learning context, our work belongs to the group of hard parameter sharing literature, pioneered by Caruana [15]. In hard parameter sharing, the same model is shared across all tasks, with, potentially, some task-specific weights. We continue a line of work demonstrating that a single general model can learn a set of challenging tasks in combinatorial optimisation [16-18], computer control [19], and multi-modal multi-embodiment learning [20, Gato]. Just like Gato provides a generalist agent for a wide variety of tasks, from language modelling to playing Atari games, and from robotic arm control to image captioning, our work provides a generalist agent for a diverse set of algorithmic domains, including sorting, searching, graphs, strings, and geometry.
+
+Due to their ability to operate on graphs of arbitrary size, GNNs (including Transformers [21]) have been extensively explored for their in- and out-of-distribution generalisation properties in Reinforcement Learning (RL) [22-26]. In our setting, OOD generalisation implies generalisation to problems of larger size, e.g., longer input arrays to sort or larger graphs to find shortest paths in. In-distribution generalisation implies generalisation to new instances of problems of the same size. From this perspective, our problem setting is similar to procedurally-generated environments in RL [27-29].
+
+The improvements we implemented for our single-task specialist reasoners are largely motivated by the theory of algorithmic alignment [30]. The key result of this theory is that neural networks will have provably smaller sample complexity if they are designed with components that "line up" with the target algorithm's operations. Following the prescriptions of this theory, we make several changes to the input data representations to make this alignment stronger [1], modify the GNN architecture to directly support higher-order reasoning [31] and suggest dedicated decoders for doubly-stochastic outputs [32].
+
+§ 3 SINGLE-TASK EXPERIMENTS
+
+Each algorithm in the CLRS benchmark [5] is specified by a number of inputs, hints and outputs. In a given sample, the inputs and outputs are fixed, while hints are time-series of intermediate states of the algorithm. Each sample for a particular task has a size, $n$ , corresponding to the number of nodes in the GNN that will execute the algorithm.
+
+A sample of every algorithm is represented as a graph, with each input, output and hint located in either the nodes, the edges, or the graph itself, and therefore has shape (excluding batch dimension, and, for hints, time dimension) $n \times f,n \times n \times f$ , or $f$ , respectively, $f$ being the dimensionality of the feature, which depends on its type. The CLRS benchmark defines five types of features: scalar, categorical, mask, mask_one and pointer, with their own encoding and decoding strategies and loss functions-e.g. a scalar type will be encoded and decoded directly by a single linear layer, and optimised using mean squared error. We defer to the CLRS benchmark paper [5] for further details.
+
+§ 3.1 BASE MODEL
+
+Encoder. We adopt the same encode-process-decode paradigm [33] presented with the CLRS benchmark [5]. At each time step, $t$ , of a particular task $\tau$ (e.g. insertion sort), the task-based encoder ${f}_{\tau }$ , consisting of a linear encoder for each input and hint, embeds inputs and the current hints as high-dimensional vectors. These embeddings of inputs and hints located in the nodes all have the same dimension and are added together; the same happens with hints and inputs located in edges, and in the graph. In our experiments we use the same dimension, $h = {128}$ , for node, edge and graph embeddings. Thus, at the end of the encoding step for a time-step $t$ of the algorithm, we have a single set of embeddings $\left\{ {{\mathbf{x}}_{i}^{\left( t\right) },{\mathbf{e}}_{ij}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }}\right\}$ , shapes $n \times h,n \times n \times h$ , and $h$ , in the nodes, edges and graph, respectively. Note that this is independent of the number and type of the inputs and hints of the particular algorithm, allowing us to share this latent space across all thirty algorithms in CLRS. Further, note that at each step, the input encoding is fed directly to these embeddings-this recall mechanism significantly improves the model's robustness over long trajectories [34].
+
+Processor. The embeddings are fed into a processor $P$ , a GNN that performs one step of computation. The processor transforms the input node, edge and graph embeddings into processed node embeddings, ${\mathbf{h}}_{i}^{\left( t\right) }$ . Additionally, the processor uses the processed node embeddings from the previous step, ${\mathbf{h}}_{i}^{\left( t - 1\right) }$ , as inputs. Importantly, the same processor model can operate on graphs of any size. We leverage the message-passing neural network [35, MPNN], using the max aggregation and passing messages over a fully-connected graph, as our base model. The MPNN computes processed embeddings as follows:
+
+$$
+\mathbf{z}_i^{(t)} = \mathbf{x}_i^{(t)} \parallel \mathbf{h}_i^{(t-1)} \qquad \mathbf{m}_i^{(t)} = \max_{1 \leq j \leq n} f_m\!\left(\mathbf{z}_i^{(t)}, \mathbf{z}_j^{(t)}, \mathbf{e}_{ij}^{(t)}, \mathbf{g}^{(t)}\right) \qquad \mathbf{h}_i^{(t)} = f_r\!\left(\mathbf{z}_i^{(t)}, \mathbf{m}_i^{(t)}\right) \tag{1}
+$$
+
+starting from ${\mathbf{h}}^{\left( 0\right) } = \mathbf{0}$ . Here $\parallel$ denotes concatenation, ${f}_{m} : {\mathbb{R}}^{2h} \times {\mathbb{R}}^{2h} \times {\mathbb{R}}^{h} \times {\mathbb{R}}^{h} \rightarrow {\mathbb{R}}^{h}$ is the message function (for which we use a three-layer MLP with ReLU activations), and ${f}_{r} : {\mathbb{R}}^{2h} \times {\mathbb{R}}^{h} \rightarrow$ ${\mathbb{R}}^{h}$ is the readout function (for which we use a linear layer with ReLU activation). The use of the max aggregator is well-motivated by prior work [5, 9], and we use the fully connected graph-letting the neighbours $j$ range over all nodes $\left( {1 \leq j \leq n}\right)$ —in order to allow the model to overcome situations where the input graph structure may be suboptimal. Layer normalisation [36] is applied to ${\mathbf{h}}_{i}^{\left( t\right) }$ before using them further. Further details on the MPNN processor may be found in Veličković et al. [5].
+
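+To make Equation 1 concrete, the following is a minimal NumPy sketch of one processor step. It is illustrative only: a single linear layer stands in for the three-layer message MLP $f_m$ and for the readout $f_r$, layer normalisation is omitted, and the parameter names are hypothetical.
+
+```python
+import numpy as np
+
+def relu(x):
+    return np.maximum(x, 0.0)
+
+def mpnn_step(x, h_prev, e, g, params):
+    """One max-aggregation MPNN step over a fully connected graph (Eq. 1).
+    x: (n, h) input embeddings, h_prev: (n, h) previous node states,
+    e: (n, n, h) edge embeddings, g: (h,) graph embedding.
+    params: W_m (6h, h), b_m (h,) stand in for f_m; W_r (3h, h), b_r (h,) for f_r."""
+    n, hdim = x.shape
+    z = np.concatenate([x, h_prev], axis=-1)                  # (n, 2h)
+    zi = np.repeat(z[:, None, :], n, axis=1)                  # receiver i -> (n, n, 2h)
+    zj = np.repeat(z[None, :, :], n, axis=0)                  # sender j   -> (n, n, 2h)
+    gg = np.broadcast_to(g, (n, n, hdim))                     # (n, n, h)
+    msg_in = np.concatenate([zi, zj, e, gg], axis=-1)         # (n, n, 6h)
+    msgs = relu(msg_in @ params["W_m"] + params["b_m"])       # messages f_m -> (n, n, h)
+    m = msgs.max(axis=1)                                      # max over senders j -> (n, h)
+    return relu(np.concatenate([z, m], axis=-1) @ params["W_r"] + params["b_r"])  # f_r
+```
+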
+Decoder. The processed embeddings are finally decoded with a task-based decoder ${g}_{\tau }$ , to predict the hints for the next step, and the outputs at the final step. Akin to the encoder, the task-based decoder relies mainly on a linear decoder for each hint and output, along with a mechanism to compute pairwise node similarities when appropriate. Specifically, the pointer type decoder computes a score, ${s}_{ij}$ , for each pair of nodes, and then chooses the pointer of node $i$ by taking either the ${\operatorname{argmax}}_{j}{s}_{ij}$ or ${\operatorname{softmax}}_{j}{s}_{ij}$ (depending on whether a hard or soft prediction is required).
+
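+One plausible form of the pairwise-score mechanism used for pointer-type decoding is sketched below in NumPy; the bilinear scoring and the weight names are assumptions for illustration, not the exact CLRS decoder.
+
+```python
+import numpy as np
+
+def decode_pointers(h, W_src, W_dst, hard=True):
+    """Pointer-type decoding sketch: pairwise scores s_ij from processed node
+    embeddings h (n, d).  hard=True returns the argmax pointer per node,
+    otherwise a row-wise softmax distribution over candidate targets."""
+    s = (h @ W_src) @ (h @ W_dst).T                 # (n, n) scores s_ij
+    if hard:
+        return s.argmax(axis=1)                     # index of the pointed-to node
+    e = np.exp(s - s.max(axis=1, keepdims=True))
+    return e / e.sum(axis=1, keepdims=True)         # soft prediction
+```
+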
+Loss. The decoded hints and outputs are used to compute the loss during training, according to their type [5]. For each sample in a batch, the hint prediction losses are averaged across hints and time, and the output loss is averaged across outputs (most algorithms have a single output, though some have two outputs). The hint loss and output loss are added together. Besides, the hint predictions at each time step are fed back as inputs for the next step, except possibly at train time if teacher forcing is used (see Section 3.2.1).
+
+We train the model on samples with sizes $n \leq 16$, and periodically evaluate it on in-distribution samples of size $n = 16$. Also periodically, we evaluate the model with the best in-distribution evaluation score so far on OOD samples of size $n = 64$. In what follows, we will be reporting only these OOD evaluation scores. Full details of the model, training and evaluation hyperparameters can be found in Appendix A.
+
+§ 3.2 MODEL IMPROVEMENTS
+
+As previously discussed, single-task improvements, especially in terms of learning stability, will empirically transfer well to multi-task algorithmic learning. We now describe, in a gradual manner, all the changes made to the model, which have led to an absolute improvement of over 20% on average across all 30 tasks in CLRS.
+
+§ 3.2.1 DATASET AND TRAINING
+
+Removing teacher forcing. At evaluation time, the model has no access to the step-by-step hints in the dataset, and has to rely on its own hint predictions. However, during training, it is sometimes advisable to stabilise the trajectories with teacher forcing [37], that is, providing the ground-truth hint values instead of the network's own predictions. In the prior model [5], ground-truth hints were provided during training with probability 0.5, because, without teacher forcing, losses tended to grow unbounded along a trajectory when scalar hints were present, destabilising the training. In this work we incorporate several significant stabilising changes (described in the following paragraphs), which allow us to remove teacher forcing altogether, aligning training with evaluation, and avoiding the network becoming overconfident in always expecting correct hint predictions. With teacher forcing, performance deteriorates significantly in sorting algorithms and Kruskal's algorithm. Naïve String Matcher, on the other hand, improves with teacher forcing (see Appendix A, Figs. 7-9).
+
+Augmenting the training data. To prevent our model from over-fitting to the statistics of the fixed CLRS training dataset [5], we augmented the training data in three key ways, without breaking the intended size distribution shift. Firstly, we used the on-line samplers in CLRS to generate new training examples on the fly, rather than using a fixed dataset which is easier to overfit to. Secondly, we trained on examples of mixed sizes, $n \leq 16$, rather than only 16, which helps the model anticipate a diverse range of sizes rather than overfitting to the specifics of size $n = 16$. Lastly, for graph algorithms, we varied the connectivity probability $p$ of the input graphs (generated by the Erdős-Rényi model [38]); and for string matching algorithms, we varied the needle length. Both serve to expose the model to different trajectory lengths; for example, in many graph algorithms, the number of steps the algorithm should run for is related to the graph's diameter, and varying the connection probability in the graph generation allows for varying the expected diameter. These improvements considerably increase the training data variability, compared to the original dataset in [5].
+
+Soft hint propagation. When predicted hints are fed back as inputs during training, gradients may or may not be allowed to flow through them. In previous work, only hints of the scalar type allowed gradients through, as all categoricals were post-processed from logits into the ground-truth format via argmax or thresholding before being fed back. Instead, in this work we use softmax for categorical, mask_one and pointer types, and the logistic sigmoid for mask types. Without these soft hints, performance in sorting algorithms degrades (similarly to the case of teacher forcing), as well as in Naïve String Matcher (Appendix A, Figs. 7-9).
+
+Static hint elimination. Eleven algorithms in CLRS ${}^{1}$ specify a fixed ordering of the nodes, common to every sample, via a node pointer hint that does not ever change along the trajectories. Prediction of this hint is trivial (identity function), but poses a potential problem for OOD generalization, since the model can overfit to the fixed training values. We therefore turned this fixed hint into an input for these 11 algorithms, eliminating the need for explicitly predicting it.
+
+Improving training stability with encoder initialisation and gradient clipping. The scalar hints have unbounded values, in principle, and are optimised using mean-squared error, hence their gradients can quickly grow with increasing prediction error. Further, the predicted scalar hints then get re-encoded at every step, which can rapidly amplify errors throughout the trajectory, leading to exploding signals (and consequently gradients), even before any training takes place.
+
+To rectify this issue, we use Xavier initialisation [45], effectively reducing the initial weights for scalar hints whose input dimensionality is just 1, while keeping the default LeCun initialisation [46] elsewhere. This combination of initialisations proved important for the initial learning stability of our model over long trajectories. Relatedly, in preliminary experiments, we saw drastic improvements in learning stability, as well as significant increases in validation performance, with gradient clipping [47], which we subsequently employed in all experiments.
+
+§ 3.2.2 ENCODERS AND DECODERS
+
+Randomised position scalar. Across all algorithms in the dataset, there exists a position scalar input which uniquely indexes the nodes, with values linearly spaced between 0 and 1 along the node index. To avoid overfitting to these linearly spaced values during training, we replaced them with random values, uniformly sampled in $\left\lbrack {0,1}\right\rbrack$ , sorted to match the initial order implied by the linearly spaced values. The benefit of this change is notable in algorithms where it would be easy to overfit to these positions, such as string matching.
+
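+A minimal sketch of this substitution, assuming NumPy (purely illustrative):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def randomised_positions(n):
+    """Replace the linearly spaced position scalar with sorted uniform samples,
+    preserving the original ordering of the nodes."""
+    return np.sort(rng.uniform(0.0, 1.0, size=n))
+
+# Example: instead of np.linspace(0, 1, 5) = [0, 0.25, 0.5, 0.75, 1],
+# training might see e.g. [0.02, 0.31, 0.44, 0.58, 0.97].
+```
+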
+Permutation decoders and the Sinkhorn operator. Sorting algorithms (Insertion Sort, Bubble Sort, Heapsort [48] and Quicksort [49]) always output a permutation of the input nodes. In the CLRS benchmark, this permutation is encoded as a pointer where each node points to its predecessor in the sorted order (the first node points to itself); this is represented as an $n \times n$ matrix $\mathbf{P}$ where each row is a one-hot vector, such that element $(i, j)$ is 1 if node $i$ points to node $j$. As with all types of pointers, such permutation pointers can be predicted using a row-wise softmax on unconstrained decoder outputs (logits), trained with cross entropy (as in [5]). However, this does not explicitly take advantage of the fact that the pointers encode a permutation, which the model has to learn instead. Our early experiments showed that the model was often failing to predict valid permutations OOD.
+
+${}^{1}$ Binary Search, Minimum, Max Subarray [39], Matrix Chain Order, LCS Length, Optimal BST [40], Activity Selector [41], Task Scheduling [42], Naïve String Matcher, Knuth-Morris-Pratt [43] and Jarvis’ March [44].
+
+Accordingly, we enforce a permutation inductive bias in the output decoder of sorting algorithms, as follows. First, we modify the output representation by rewiring the first node to point to the last one, turning $\mathbf{P}$ into a permutation matrix, i.e., a matrix whose rows and columns are one-hot vectors. We also augment the representation with a one-hot vector of size $n$ that specifies the first node, so we do not lose this information; this vector is treated like a regular mask_one feature. Second, we predict the permutation matrix $\mathbf{P}$ from unconstrained decoder outputs $\mathbf{Y}$ by replacing the usual row-wise softmax with the Sinkhorn operator $\mathcal{S}$ [32, 50-53]. $\mathcal{S}$ projects an arbitrary square matrix $\mathbf{Y}$ into a doubly stochastic matrix $\mathcal{S}(\mathbf{Y})$ (a non-negative matrix whose rows and columns sum to 1), by exponentiating and repeatedly normalising rows and columns so they sum to 1. Specifically, $\mathcal{S}$ is defined by:
+
+$$
+\mathcal{S}^{0}(\mathbf{Y}) = \exp(\mathbf{Y}) \qquad \mathcal{S}^{l}(\mathbf{Y}) = \mathcal{T}_{c}\!\left(\mathcal{T}_{r}\!\left(\mathcal{S}^{l-1}(\mathbf{Y})\right)\right) \qquad \mathcal{S}(\mathbf{Y}) = \lim_{l \rightarrow \infty} \mathcal{S}^{l}(\mathbf{Y}), \tag{2}
+$$
+
+where $\exp$ acts element-wise, and $\mathcal{T}_{r}$ and $\mathcal{T}_{c}$ denote row and column normalisation respectively. Although the Sinkhorn operator produces a doubly stochastic matrix rather than a permutation matrix, we can obtain a permutation matrix by introducing a temperature parameter, $\tau > 0$, and taking $\mathbf{P} = \lim_{\tau \rightarrow 0^{+}} \mathcal{S}(\mathbf{Y}/\tau)$; as long as there are no ties in the elements of $\mathbf{Y}$, $\mathbf{P}$ is guaranteed to be a permutation matrix [52, Theorem 1].
+
+In practice, we compute the Sinkhorn operator using a fixed number of iterations $l_{\max}$. We use a smaller number of iterations $l_{\max} = 10$ for training, to limit vanishing and exploding gradients, and $l_{\max} = 60$ for evaluation. A fixed temperature $\tau = 0.1$ was experimentally found to give a good balance between speed of convergence and tie-breaking. We also encode the fact that no node points to itself, that is, that all diagonal elements of $\mathbf{P}$ should be 0, by setting the diagonal elements of $\mathbf{Y}$ to $-\infty$. To avoid ties, we follow Mena et al. [53], injecting Gumbel noise into the elements of $\mathbf{Y}$ prior to applying the Sinkhorn operator, during training only. Finally, we transform the predicted permutation matrix $\mathbf{P}$ and the predicted mask_one vector that points to the first element into the original pointer representation used by CLRS.
+
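+A minimal NumPy sketch of this decoder head, combining Equation 2 with the masking, temperature and (training-only) Gumbel noise described above; the function and argument names are illustrative, and a log-space implementation is used here for numerical stability:
+
+```python
+import numpy as np
+
+def logsumexp(a, axis, keepdims=False):
+    m = np.max(a, axis=axis, keepdims=True)
+    m = np.where(np.isfinite(m), m, 0.0)
+    out = m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))
+    return out if keepdims else np.squeeze(out, axis=axis)
+
+def sinkhorn(Y, temperature=0.1, n_iters=10, gumbel_noise=False, rng=None):
+    """Map decoder logits Y (n, n) to an approximately doubly stochastic matrix:
+    diagonal masked to -inf (no node points to itself), temperature tau,
+    optional Gumbel noise (training only), then n_iters row/column normalisations."""
+    Y = np.array(Y, dtype=float)
+    if gumbel_noise:
+        rng = rng or np.random.default_rng()
+        u = rng.uniform(1e-9, 1.0, size=Y.shape)
+        Y = Y + (-np.log(-np.log(u)))                 # standard Gumbel samples
+    np.fill_diagonal(Y, -np.inf)
+    log_P = Y / temperature
+    for _ in range(n_iters):
+        log_P = log_P - logsumexp(log_P, axis=1, keepdims=True)   # row normalise
+        log_P = log_P - logsumexp(log_P, axis=0, keepdims=True)   # column normalise
+    return np.exp(log_P)
+```
+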
+§ 3.2.3 PROCESSOR NETWORKS
+
+Gating mechanisms. Many algorithms only require updating a few nodes at each time step, keeping the rest unchanged. However, the MPNN we use (Equation 1) is biased towards the opposite: it updates all hidden states in each step. Although it is theoretically possible for the network to keep the states unchanged, learning to do so is not easy. With this in mind, and motivated by its effectiveness in NDRs [54], we augment the network with an update gate, biased to be closed by default. We found that the gate stabilizes learning on many of the tasks, and increases the mean performance over all tasks on single-task training significantly. Surprisingly, however, we did not find gating to be advantageous in the multi-task case.
+
+To add gating to the MPNN model we produce a per-node gating vector from the same inputs used to compute the processed embeddings in Equation 1:
+
+$$
+{\mathbf{g}}_{i}^{\left( t\right) } = {f}_{g}\left( {{\mathbf{z}}_{i}^{\left( t\right) },{\mathbf{m}}_{i}^{\left( t\right) }}\right) \tag{3}
+$$
+
+where $f_g : \mathbb{R}^{2h} \times \mathbb{R}^{h} \rightarrow \mathbb{R}^{h}$ is the gating function, for which we use a two-layer MLP, with ReLU activation for the hidden layer and logistic sigmoid activation for the output. Importantly, the final-layer bias of $f_g$ is initialised to a value of $-3$, which biases the network towards not updating its representations unless necessary. The processed gated embeddings, $\widehat{\mathbf{h}}_i^{(t)}$, are computed as follows:
+
+$$
+{\widehat{\mathbf{h}}}_{i}^{\left( t\right) } = {\mathbf{g}}_{i}^{\left( t\right) } \odot {\mathbf{h}}_{i}^{\left( t\right) } + \left( {1 - {\mathbf{g}}_{i}^{\left( t\right) }}\right) \odot {\mathbf{h}}_{i}^{\left( t - 1\right) } \tag{4}
+$$
+
+and are used instead of $\mathbf{h}_i^{(t)}$ in the subsequent steps, replacing $\mathbf{z}_i^{(t)}$ in Eq. 1 by $\mathbf{z}_i^{(t)} = \mathbf{x}_i^{(t)} \parallel \widehat{\mathbf{h}}_i^{(t-1)}$.
+
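+As a small illustration of Equations 3-4, consider the NumPy sketch below; in the paper $f_g$ is a two-layer MLP, whereas a single linear layer with a $-3$ bias stands in for it here, and the parameter names are hypothetical:
+
+```python
+import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+def gated_update(z, m, h_new, h_prev, params):
+    """Update gate: g_i = f_g(z_i, m_i), with the final bias b_g initialised to -3
+    so the gate is nearly closed by default and nodes keep their previous state
+    unless the network learns otherwise.  z: (n, 2h), m/h_new/h_prev: (n, h)."""
+    gate_in = np.concatenate([z, m], axis=-1)                 # (n, 3h)
+    g = sigmoid(gate_in @ params["W_g"] + params["b_g"])      # Eq. 3 (b_g ≈ -3 at init)
+    return g * h_new + (1.0 - g) * h_prev                     # Eq. 4
+```
+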
+Triplet reasoning. Several algorithms within CLRS-30 explicitly require edge-based reasoning, where edges store values and update them based on other edges' values. An example of this is the Floyd-Warshall algorithm [55], which computes all-pairs shortest paths in a weighted graph. The update rule for $d_{ij}$, its estimate for the best distance from node $i$ to $j$, is $d_{ij} = \min_k d_{ik} + d_{kj}$, which roughly says "the best way to get from $i$ to $j$ is to find the optimal mid-point $k$, travel from $i$ to $k$, then from $k$ to $j$". Similar rules are pervasive across many CLRS-30 algorithms, especially in dynamic programming. Even though there are no node representations in the above update, all our processors are centered on passing messages between node representations $\mathbf{h}_i$.
+
+ < g r a p h i c s >
+
+Figure 2: The OOD performance in single-task experiments before and after the improvements presented in this paper, sorted in descending order of current performance. Error bars represent standard error of the mean across seeds (3 seeds for previous SOTA experiments, 10 seeds for current). The previous SOTA values are the best of MPNN, PGN and Memnet models (see Table 2).
+
+To rectify this situation, we augment our processor to perform message passing towards edges. Referring again to the update for ${d}_{ij}$ , we note that the edge representations are updated by choosing an intermediate node, then aggregating over all possible choices. Accordingly, and as previously observed by Dudzik and Veličković [31], we introduce triplet reasoning: first, computing representations over triplets of nodes, then reducing over one node to obtain edge latents:
+
+$$
+{\mathbf{t}}_{ijk} = {\psi }_{t}\left( {{\mathbf{h}}_{i},{\mathbf{h}}_{j},{\mathbf{h}}_{k},{\mathbf{e}}_{ij},{\mathbf{e}}_{ik},{\mathbf{e}}_{kj},\mathbf{g}}\right) \;{\mathbf{h}}_{ij} = {\phi }_{t}\left( {\mathop{\max }\limits_{k}{\mathbf{t}}_{ijk}}\right) \tag{5}
+$$
+
+Here, ${\psi }_{t}$ is a triplet message function, mapping all relevant representations to a single vector for each triplet of nodes, and ${\phi }_{t}$ is an edge readout function, which transforms the aggregated triplets for each edge for later use. According to prior findings on the CLRS benchmark [5], we use the max aggregation to obtain edge representations. The computed ${\mathbf{h}}_{ij}$ vectors can then be used in any edge-based reasoning task, and empirically they are indeed significantly beneficial, even in tasks where we did not initially anticipate such benefits. One example is Kruskal's minimum spanning tree algorithm [56], where we presume that access to triplet reasoning allowed the model to more easily sort the edges by weight, as it selects how to augment the spanning forest at each step.
+
+In order to keep the footprint of triplet embeddings as lightweight as possible, we compute only 8-dimensional features in $\psi_t$; $\phi_t$ then upscales the aggregated edge features back to 128 dimensions, to make them compatible with the rest of the architecture. Our initial experimentation demonstrated that the output dimensionality of $\psi_t$ did not significantly affect downstream performance. Note that computing triplet representations has been a useful approach in general GNN design [57]; however, it has predominantly been studied in the context of GNNs over constant input features. Our study is among the first to verify their utility over reasoning tasks with well-specified initial features.
+
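+The triplet computation in Equation 5 can be sketched as follows; this is an unvectorised NumPy illustration for clarity, where single (ReLU) linear layers stand in for $\psi_t$ and $\phi_t$ and all parameter names are hypothetical:
+
+```python
+import numpy as np
+
+def relu(x):
+    return np.maximum(x, 0.0)
+
+def triplet_messages(h, e, g, params, triplet_dim=8):
+    """Eq. 5 sketch: an 8-dim feature per node triplet (i, j, k), max-reduced over
+    the intermediate node k, then upscaled back to the edge dimensionality.
+    h: (n, h) nodes, e: (n, n, h) edges, g: (h,) graph embedding.
+    params: W_psi (7h, triplet_dim), b_psi; W_phi (triplet_dim, h), b_phi."""
+    n, hdim = h.shape
+    t = np.empty((n, n, n, triplet_dim))
+    for i in range(n):                       # O(n^3) loops, kept explicit for clarity
+        for j in range(n):
+            for k in range(n):
+                feats = np.concatenate([h[i], h[j], h[k], e[i, j], e[i, k], e[k, j], g])
+                t[i, j, k] = relu(feats @ params["W_psi"] + params["b_psi"])   # psi_t
+    h_edge = t.max(axis=2)                                    # reduce over k -> (n, n, 8)
+    return relu(h_edge @ params["W_phi"] + params["b_phi"])   # phi_t: back to (n, n, h)
+```
+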
+§ 3.3 RESULTS
+
+By incorporating the changes described in the previous sections we arrived at a single model type, with a single set of hyper-parameters, that was trained to reach new state-of-the-art performance on CLRS-30 [5]. Tables 1 and 2 show the micro-$\mathrm{F}_1$ scores of our model, which we refer to as Triplet-GMPNN (an MPNN with gating and triplet edge processing), over the original CLRS-30 test set (computed identically to [5], but with 10 repetitions instead of 3). Our baselines include the Memnet [58], MPNN [35] and PGN [59] models, taken directly from [5]. Figure 2 displays the comparison between the improved model and the best model from [5]. Our improvements lead to an overall average performance that is more than 20% higher (in absolute terms) compared to the next best model (see Table 1), and to a significant performance improvement in all but one algorithm family, compared to every other model. Further, our stabilising changes (such as gradient clipping) have empirically reduced the scale of our model's gradient updates across the 30 tasks, preparing us better for the numerical issues of the multi-task regime. Finally, we note that, although not shown in Tables 1 & 2, applying the same improvements to the PGN processor leads to an increase in overall performance from 50.84% (Table 1) to 69.31%.
+
+Table 1: Single-task OOD micro- ${\mathrm{F}}_{1}$ score of previous SOTA Memnet, MPNN and PGN [5] and our best model Triplet-GMPNN with all our improvements, after 10,000 training steps.
+
+| Alg. Type | Memnet [5] | MPNN [5] | PGN [5] | Triplet-GMPNN (ours) |
+| --- | --- | --- | --- | --- |
+| Div. & C. | 13.05% ± 0.14 | 20.30% ± 0.85 | 65.23% ± 4.44 | **76.36% ± 1.34** |
+| DP | 67.94% ± 8.20 | 65.10% ± 6.44 | 70.58% ± 6.48 | **81.99% ± 4.98** |
+| Geometry | 45.14% ± 11.95 | 73.11% ± 17.19 | 61.19% ± 7.01 | **94.09% ± 2.30** |
+| Graphs | 24.12% ± 5.30 | 62.79% ± 8.75 | 60.25% ± 8.42 | 81.41% ± 6.21 |
+| Greedy | 53.42% ± 20.82 | 82.39% ± 3.01 | 75.84% ± 6.59 | 91.21% ± 2.95 |
+| Search | 34.35% ± 21.67 | 41.20% ± 19.87 | 56.11% ± 21.56 | 58.61% ± 24.34 |
+| Sorting | 71.53% ± 1.41 | 11.83% ± 2.78 | 15.45% ± 8.46 | 60.37% ± 12.16 |
+| Strings | 1.51% ± 0.46 | 3.21% ± 0.94 | 2.04% ± 0.20 | 49.09% ± 23.49 |
+| Overall avg. | 38.88% | 44.99% | 50.84% | 74.14% |
+| >90% | 0/30 | 6/30 | 3/30 | 11/30 |
+| >80% | 3/30 | 9/30 | 7/30 | 17/30 |
+| >60% | 10/30 | 14/30 | 15/30 | **24/30** |
+
+There are two notable examples of algorithm families with significant OOD performance improvement. The first are geometric algorithms (Segments Intersect, Graham Scan [60] and Jarvis' March), now solved at approximately ${94}\%$ OOD, compared to the previous best of about ${73}\%$ ; the second being string algorithms (Knuth-Morris-Pratt and Naïve String Matcher) for which our model now exceeds 49% compared to the previous best of approximately 3%.
+
+The significant overall performance boost is reflected in the increased number of algorithms we can now solve at over 60%, 80% and 90% OOD performance, compared to previous SOTA [5]. Specifically, we now exceed 60% accuracy in 24 algorithms (15 previously), 80% in 17 algorithms (9 previously) and 90% in 11 algorithms (6 previously).
+
+§ 4 MULTI-TASK EXPERIMENTS
+
+In the multi-task setting, we train a single processor across all CLRS-30 tasks. We keep encoders and decoders separate for each task. To perform the update, one might accumulate gradients from all the tasks before stepping the optimizer, or step independently after each batch from each algorithm. Both approaches have been deemed to be effective in the multi-task learning literature [20, 24, 61], and we empirically found that, in our setting, stepping separately per task produced superior results. Following recent work [61], we did not explore specialised multi-task optimizers, but ensured the stability of the training with gradient clipping [47] and Xavier initialisation [45] of scalar hint encoders to ameliorate exploding outputs and NaN gradients, as already described. Batch size and learning rate are the same as in single-task experiments. We found that gating (Section 3.2.3) degraded multi-task performance, so it was not included in the multi-task model.
+
+Chunking. To reduce the memory footprint of multi-task training we implemented a chunked training mode, where trajectories are split along the time axis for gradient computation and, when they are shorter than the chunk length, are concatenated with the following trajectory so as to avoid the need for padding. Thus, while a standard-training batch consists of full trajectories, padded to the length of the longest one, a chunked-training batch has a fixed time length (16 steps in our experiments) and consists of segments of trajectories: the beginning of one trajectory immediately follows the end of the previous one, so there is no padding. Losses are computed independently for each chunked batch, and gradients cannot flow between chunks. Since the output loss is computed only on the final sample of each trajectory, a chunk may give rise to no output loss if it contains no end-of-trajectory segments. Chunking, therefore, changes the balance between hint and output losses depending on the length of trajectories. Surprisingly, multi-task performance averaged across all 30 tasks is significantly better after chunked training than after full-trajectory training (Figure 4a). Only one algorithm, Bellman-Ford, performs worse with chunked training (Figure 10). The strong effect of chunking on multi-algorithm performance indicates that the weighting of hint and output losses of the different tasks during optimisation is important for successful multi-task learning.
+
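+A minimal sketch of how such chunks could be assembled (illustrative Python only; the data layout and names are assumptions, not the actual training pipeline):
+
+```python
+def chunked_batches(trajectories, chunk_len=16):
+    """Concatenate trajectories along time and emit fixed-length segments, so no
+    padding is needed.  Each step is a dict of arrays; an 'is_last' flag marks
+    trajectory ends, so the output loss is applied only there."""
+    buffer = []
+    for traj in trajectories:                       # each traj: list of per-step dicts
+        for t, step in enumerate(traj):
+            buffer.append({**step, "is_last": t == len(traj) - 1})
+            if len(buffer) == chunk_len:
+                yield buffer                        # one chunk; gradients stop here
+                buffer = []
+    if buffer:
+        yield buffer                                # final partial chunk, if any
+
+# Example with dummy 3- and 5-step trajectories of a scalar hint:
+trajs = [[{"hint": float(t)} for t in range(3)], [{"hint": float(t)} for t in range(5)]]
+chunks = list(chunked_batches(trajs, chunk_len=4))  # 2 chunks of 4 steps each
+```
+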
+ < g r a p h i c s >
+
+Figure 3: Per-algorithm comparison between our multi-task model and single-task Triplet-GMPNN from Table 2, ordered by biggest improvement for multi-task (left to right). Refer to Figure 5 for a comparison against the best single-task model per algorithm instead.
+
+ < g r a p h i c s >
+
+Figure 4: Multi-task model ablations showing average performance and 95% CI across 10 seeds. ST, single-task; MT, multi-task.
+
+Results. Figure 3 compares the performance of the single-task Triplet-GMPNN against the multi-task model. Additional comparisons against the best per-algorithm single-task model from Table 2 are presented in Figure 5, along with an illustration of the number of tasks where the performance of the multi-task model matches, or exceeds, that of the single-task models.
+
+To evaluate the effect of our model improvements independently, we also performed a thorough model ablation. Figure 4a shows the significant difference in performance between the vanilla and chunked training regimes; we chose the latter to perform the ablations on. Figure 4b shows the results of our cumulative ablation: we gradually removed our improvements one at a time, with each element in the legend being the same as the model preceding it with a single improvement removed. On average, all the presented improvements contribute to the higher performance, with the largest effect coming from the removal of teacher forcing, i.e., feeding ground-truth hints at training time hurts generalisation, most likely because the correct hints are not available at test time, leading to a data distribution shift.
+
+§ 5 CONCLUSION
+
+We presented a generalist neural algorithmic learner: a single graph neural network, with a single set of weights, capable of solving a diverse collection of classical algorithms, at a level comparable to (and at times exceeding) a relevant single-task expert. Achieving this objective was preceded by a range of improvements to the dataset, optimisation and architectures for neural algorithmic reasoning, which led to over 20% absolute improvement over the prior best known result. It is our hope that the results and empirical insights shared by this work will be of use to researchers and practitioners in the area, and help scale neural algorithmic learning to new domains and applications.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/GdvKsq3_eH/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/GdvKsq3_eH/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..dd89b0f61b87e77a7620f3a5f69483e15f0225c3
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/GdvKsq3_eH/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,501 @@
+# A simple way to learn metrics between attributed graphs
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+The choice of good distances and similarity measures between objects is important for many machine learning methods. Therefore, many metric learning algorithms have been developed in recent years, mainly for Euclidean data, in order to improve the performance of classification or clustering methods. However, due to difficulties in establishing computable, efficient and differentiable distances between attributed graphs, few metric learning algorithms adapted to graphs have been developed despite the strong interest of the community. In this paper, we address this issue by proposing a new Simple Graph Metric Learning - SGML - model with few trainable parameters based on Simple Graph Convolutional Neural Networks - SGCN - and elements of Optimal Transport theory. This model allows us to build an appropriate distance from a database of labeled (attributed) graphs to improve the performance of simple classification algorithms such as $k$-NN. This distance can be trained quickly while maintaining good performance, as illustrated by the experimental study presented in this paper.
+
+## 1 Introduction
+
+The classification of attributed graphs has received much attention in recent years because graphs are well suited to represent a broad class of data in fields such as chemistry, biology, computer science, etc. [1, 2]. Advances were obtained in particular thanks to the development of graph convolutional neural networks (GCN) [3-6], on which many current graph learning models rely [7, 8]. GCN have attracted interest in recent years due to their low computational cost, their ability to extract task-specific information, and their ease of training and integration into various models. Some works tackle classification problems for attributed graphs by leveraging GCN: they characterize and build Euclidean representations for attributed graphs in either a supervised (e.g. [5, 9]) or unsupervised (e.g. [10, 11]) way. Despite these achievements, classification methods based on direct evaluation of similarity measures between graphs remain relevant since they can obtain similar, and in some cases even better, performance [12]. Currently, most of these methods work in a task-agnostic way. However, because of the diversity of graph datasets, we cannot expect one similarity measure to be well suited to all of them, on every learning task.
+
+Having a way to adapt similarity measures to specific datasets and related tasks helps to improve their generality and their performance. One such approach is known as Metric Learning (hereafter ML), and it has already been successful for Euclidean data. [13] is the first article to have proposed a Metric Learning method to improve a specific method ( $k$ -means for clustering of Euclidean data). This first work sparked a strong interest in ML, which led to the development of a varied panel of methods [14-17] for Euclidean data. In contrast, few such methods exist for attributed graphs. Existing methods (e.g., [18]) rely on iterative procedures which are hardly differentiable, which also makes scalability an issue. Although neural networks currently tend to dominate the classification literature, building simple and learned (hence adapted to the data and task) similarity measures between attributed graphs remains a relevant issue for at least two reasons: it allows simpler graph classification algorithms to be stepped up, and it allows one to rely on graph kernels $\left\lbrack {1,{19}}\right\rbrack$ which are, as of today, as efficient on numerous tasks as models relying on graph neural networks.
+
+Our contribution. To address the issue of scalability in Metric Learning for graphs, we propose a novel graph ML method, called Simple Graph Metric Learning (SGML). In a first step, attributed graphs are encoded as distributions by combining the attributes and the topology thanks to a GCN. Then, relying on Optimal Transport, we define a novel similarity measure between these distributions, that we call Restricted Projected Wasserstein, $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ for short. $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ is differentiable and has a quasi-linear complexity in the distribution size (in number of bins, which is also the number of nodes); it removes certain limitations of the well-known Sliced Wasserstein distance (noted ${\mathcal{{SW}}}_{2}$ ) [20]. The ${\mathcal{{RPW}}}_{2}$ similarity measure is then used to build a parametric distance between attributed graphs, which also has a quasi-linear complexity in the graph size (in the number of nodes). The similarity measure proposed in SGML has a limited number of parameters, which helps our model scale efficiently. Next, we focus on the $k$ -nearest neighbors (k-NN) method for classification. An advantage of $\mathrm{k} - \mathrm{{NN}}$ is that, if the learning set grows, it can exploit it at near-zero additional cost (since it only requires storing the new data), contrary to SVMs that would require retraining on the whole data (a task quadratic in size). Since many real datasets (e.g., graphs from social networks, or for detecting anomalies on computer networks) are expected to grow in size, this property is important for continual learning, and from an energetic and environmental stance, to avoid costly retraining. In order to use $\mathrm{k} - \mathrm{{NN}}$ and train the distance, we propose a novel softmax-based loss function over class point clouds. It appears to be novel in the context of graph ML and it leads to better results in the explored setting than the usual ML losses (i.e., those specifically built to improve k-NN for Euclidean data). Our experiments show that SGML learns a metric that significantly increases k-NN performance, compared to state-of-the-art algorithms for graph similarity measures.
+
+The article is organized as follows. In Section 2, we discuss related works on graph metric learning and on optimal transport theory applied to the construction of similarity measures between attributed graphs. Section 3 provides useful notations and definitions needed for the present work. The SGML model is defined in Section 4. Finally, in Section 5, we present various numerical experiments assessing the efficiency of our model. These experiments show that, in various conditions, SGML builds accurate distances with performance competitive with the state of the art in graph classification, both in the context of $\mathrm{k} - \mathrm{{NN}}$ and of kernel-based methods, despite its limited number of parameters. A main advantage of the proposed SGML method is also its simplicity, leading to a scalable and efficient method for graph Metric Learning. We conclude in Section 6.
+
+Societal Impact The contribution is essentially fundamental, and we do not see any direct and immediate potential negative societal impact. Conversely, the scalability of the method will help to alleviate the energy consumption of ML on graphs.
+
+## 2 Related Works
+
+### 2.1 Graph Metric Learning
+
+Regarding ML for graphs, we can notably mention a series of works [21-23] that consist in learning a metric through the Graph Edit Distance (GED). The major disadvantage of these methods is the complexity of the computation of the GED, which can only be done for very small graphs.
+
+Following the introduction of GCN, an approach based on Siamese neural networks has been proposed in [24] for the study of brain connectivity signals, represented as graph signals. In this specific case, all graphs are the same and they differ only by the signal they carry, which makes this method not applicable to most datasets. More recently, models without neural networks have been proposed: [18] presents Interpretable Graph Metric Learning, which builds a similarity measure by counting the most relevant subgraphs to perform a classification task. However, this method cannot handle large graphs. [25] proposes to learn a kernel based on graph persistent homology. The resulting model is also efficient, but it has the disadvantage of not being able to deal with discrete features in graphs.
+
+As seen, existing works on graph ML are either limited by the assumptions made to build the model, too costly, or not suitable to actually leverage simple (classification) algorithms and increase their performance. To obtain a simple graph ML procedure that is not itself too costly, we need a similarity measure between graphs that can be computed quickly. To construct such a distance, recent works suggest that Optimal Transport is an appropriate tool.
+
+### 2.2 Optimal Transport for Graphs
+
+Optimal Transport (OT) has been put forward as a good approach to quickly compute similarity measures between graphs, relying on the fact that it provides tools for computing metrics between distributions [26]. Recent studies have shown that efficient distances and kernels for graphs can be built from this theory. Fused Gromov-Wasserstein [27] is such a metric (a distance in the mathematical sense) using OT to compare graphs through both their structures and attributes. Notably, it allows one to compute barycenters of a set of graphs, and interpolations between graphs. Experimentally, it leads to good results in classification. Its bi-quadratic complexity in the size of the graphs is its main drawback, even if it can be reduced to a cubic cost with entropic regularization.
+
+In [28], an OT-based approach to compare graphs having the same number of nodes is developed. It uses OT between the signals on the graphs (and not the structures). Thanks to a Gaussian distribution hypothesis, the analytical expression of the OT between these signals is derived. While the model obtains good results, it is limited to graphs having the same size, and a node alignment task (which has a cubic complexity) must be performed.
+
+[12] has proposed the Wasserstein Weisfeiler-Lehman (WWL) method, which can be seen as an evolution of the previous one [28] without these two hypotheses on the size of the graphs or on the nature of the signals they carry. In addition, a non-trainable GCN is used to build task-agnostic characteristics which are then compared through OT. This pseudo-metric is then used to build an efficient kernel for graph classification. Unfortunately, this model requires the computation of the optimal transport map, which has a cubic cost (or quadratic with entropic regularization).
+
+While these previous models are efficient on classification tasks, their complexity remains high, and they are not fast enough (being quadratic or more) to be incorporated into a Metric Learning framework. A part of our contribution is to provide such a fast, optimal-transport-based similarity measure for attributed graphs, with no restriction on the nature of the graphs we compare.
+
+## 3 Background on Metric Learning and Optimal Transport
+
+Notations. Let us consider a finite dataset $\mathbb{X} = {\left\{ {\mathbf{x}}_{i}\right\} }_{i = 1}^{\left| \mathbb{X}\right| }$ whose elements are in ${\mathbb{R}}^{q}$ . The dataset comes with a set of labels $\mathbb{E} = {\left\{ {e}_{i}\right\} }_{i = 1}^{\left| \mathbb{E}\right| }$ and a labeling function $\mathcal{E} : \mathbb{X} \rightarrow \mathbb{E}$ . We note $\mathcal{P}\left( \mathbb{X}\right) \subset \mathcal{P}\left( {\mathbb{R}}^{q}\right)$ the set of discrete probability measures over $\mathbb{X} \subset {\mathbb{R}}^{q}$ . ${\delta }_{\mathbf{x}}$ is the Dirac distribution centered at $\mathbf{x}$ . We note $d$ a metric on $\mathbb{X}$ . It verifies the following properties: Symmetry - $\forall \left( {\mathbf{x},\mathbf{y}}\right) \in {\mathbb{X}}^{2}, d\left( {\mathbf{x},\mathbf{y}}\right) =$ $d\left( {\mathbf{y},\mathbf{x}}\right)$ ; Identity of indiscernibles - $\forall \left( {\mathbf{x},\mathbf{y}}\right) \in {\mathbb{X}}^{2}, d\left( {\mathbf{x},\mathbf{y}}\right) = 0 \Leftrightarrow \mathbf{x} = \mathbf{y}$ ; Triangle inequality - $\forall \left( {\mathbf{x},\mathbf{y},\mathbf{z}}\right) \in {\mathbb{X}}^{3}, d\left( {\mathbf{x},\mathbf{z}}\right) \leq d\left( {\mathbf{x},\mathbf{y}}\right) + d\left( {\mathbf{y},\mathbf{z}}\right)$ . $d$ is referred to as a pseudo-metric when it satisfies these properties except the identity of indiscernibles. "Distance" will also be used here, in an informal way, as a synonym for similarity measure.
+
+### 3.1 Learning a metric
+
+For ML, we suppose that a dataset $\mathbb{X}$ is given together with two sets: $\mathcal{S}$ (similar) and $\mathcal{D}$ (dissimilar), containing pairs of elements of $\mathbb{X}$ . The goal is to build a parametric distance ${d}_{\theta }$ in such a way that the pairs of elements in $\mathcal{S}$ are close while the pairs in $\mathcal{D}$ are far away ${}^{1}$ . These sets are often built from the labeling function of $\mathbb{X}$ such that $\left\{ {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right\} \in \mathcal{S}$ if $\mathcal{E}\left( {\mathbf{x}}_{i}\right) = \mathcal{E}\left( {\mathbf{x}}_{j}\right)$ , and $\left\{ {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right\} \in \mathcal{D}$ otherwise. An optimization problem depending on ${d}_{\theta },\mathcal{S}$ and $\mathcal{D}$ is then defined with a loss function $\mathcal{F}$ suitable for the purpose:
+
+$$
+\mathop{\max }\limits_{\theta }\mathcal{F}\left( {{d}_{\theta },\mathcal{S},\mathcal{D}}\right) \tag{1}
+$$
+
+We denote ${\theta }^{ * }$ the optimal parameters. The interest of building such a distance ${d}_{{\theta }^{ * }}$ from the information in $\mathcal{D}$ and $\mathcal{S}$ lies in the fact that $\mathbb{X}$ is often included in a larger set containing elements which are not labeled. The goal is that the obtained distance ${d}_{{\theta }^{ * }}$ will help learning algorithms find these missing labels. A part of our work is to introduce into the metric learning literature a new loss function $\mathcal{F}$ suitable for the problem of metric learning on graphs.
+
+### 3.2 Optimal transport
+
+Let us consider two finite datasets $\mathbb{X},{\mathbb{X}}^{\prime }$ , and two distributions $\mu \in \mathcal{P}\left( \mathbb{X}\right)$ and $\nu \in \mathcal{P}\left( {\mathbb{X}}^{\prime }\right)$ on these sets:
+
+$$
+\mu = \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathbb{X}}}{a}_{i}{\delta }_{{\mathbf{x}}_{i}}\text{ and }\nu = \mathop{\sum }\limits_{{{\mathbf{x}}_{i}^{\prime } \in {\mathbb{X}}^{\prime }}}{b}_{i}{\delta }_{{\mathbf{x}}_{i}^{\prime }} \tag{2}
+$$
+
+---
+
+${}^{1}$ Some algorithms use a third type of information, which consists of triples indicating that a given element must be closer to such element than to another element [16].
+
+---
+
+with ${a}_{i} \geq 0,{b}_{i} \geq 0, n = \left| \mathbb{X}\right| ,{n}^{\prime } = \left| {\mathbb{X}}^{\prime }\right|$ , and $\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i} = 1,\mathop{\sum }\limits_{{i = 1}}^{{n}^{\prime }}{b}_{i} = 1$ . Given a continuous cost function $c : {\mathbb{R}}^{q} \times {\mathbb{R}}^{q} \rightarrow {\mathbb{R}}_{ + }$ , one can build from optimal transport a metric between distributions with support in ${\mathbb{R}}^{q}$ , the so-called 2-Wasserstein distance ${\mathcal{W}}_{2}$ :
+
+$$
+{\mathcal{W}}_{2}\left( {\mu ,\nu }\right) = \mathop{\inf }\limits_{{{\pi }_{i, j} \in {\Pi }_{a, b}}}{\left( \mathop{\sum }\limits_{{i, j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i, j}c{\left( {\mathbf{x}}_{i},{\mathbf{x}}_{j}^{\prime }\right) }^{2}\right) }^{\frac{1}{2}} \tag{3}
+$$
+
+${\Pi }_{a, b}$ is the set of joint distributions on $\mathbb{X} \times {\mathbb{X}}^{\prime },\pi = \mathop{\sum }\limits_{{i, j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i, j}{\delta }_{\left( {\mathbf{x}}_{i},{\mathbf{x}}_{j}^{\prime }\right) }$ whose marginals are the distributions $\mu = \mathop{\sum }\limits_{{{\mathbf{x}}_{i}^{\prime } \in {\mathbb{X}}^{\prime }}}\pi \left( {\cdot ,{\mathbf{x}}_{i}^{\prime }}\right)$ and $\nu = \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathbb{X}}}\pi \left( {{\mathbf{x}}_{i}, \cdot }\right)$ . We note ${\pi }^{ * } \in {\Pi }_{a, b}$ the optimal distribution (or coupling, or map) giving the solution of this problem. The cost function $c$ is taken as the 2-norm: $c\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}^{\prime }}\right) = {\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}^{\prime }\end{Vmatrix}}_{2}$ , hence leading to the 2-Wasserstein distance. This defines an efficient way to compare distributions. One could easily use differentiable versions (w.r.t. the parameters of a distribution) by considering the 1-Wasserstein distance [29] or the entropic regularization of ${\mathcal{W}}_{2}\left\lbrack {{26},{30}}\right\rbrack$ . Still, they are not fully suitable for metric learning because of the (initial) complexity in ${}^{2}$ $O\left( {{n}^{3}\log n}\right)$ , or in $O\left( {{n}^{2}\log \left( n\right) }\right)$ thanks to the Sinkhorn algorithm for the entropic regularization [26].
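+
+For illustration, here is a minimal sketch, assuming the POT library [35] (used later in this paper) and uniform weights, of how ${\mathcal{W}}_{2}$ and its entropic approximation can be computed between two discrete distributions; the variable names are ours:
+
+```python
+import numpy as np
+import ot  # POT: Python Optimal Transport [35]
+
+# Two discrete distributions as in Eq. (2): uniform weights on random supports.
+X = np.random.randn(30, 4); a = np.full(30, 1 / 30)
+Y = np.random.randn(20, 4); b = np.full(20, 1 / 20)
+
+M = ot.dist(X, Y, metric="sqeuclidean")                # pairwise costs c(x, y)^2
+w2 = np.sqrt(ot.emd2(a, b, M))                         # exact 2-Wasserstein, Eq. (3)
+w2_entropic = np.sqrt(ot.sinkhorn2(a, b, M, reg=1.0))  # entropic (Sinkhorn) approximation
+print(w2, w2_entropic)
+```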
+
+Sliced Wasserstein distance $\left( {\mathcal{{SW}}}_{2}\right)$ . In order to drastically reduce the cost of computing the OT, [20] has proposed another metric, ${\mathcal{{SW}}}_{2}$ , which consists in comparing the measures $\mu$ and $\nu$ via their one-dimensional projections. Let $\mathbf{\theta } \in {\mathbb{S}}^{q - 1}$ be a vector of the unit sphere of ${\mathbb{R}}^{q}$ . The distributions $\mu$ and $\nu$ projected along $\mathbf{\theta }$ are denoted ${\mu }_{\mathbf{\theta }} = \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathbb{X}}}{a}_{i}{\delta }_{{\mathbf{x}}_{i} \cdot \mathbf{\theta }}$ and ${\nu }_{\mathbf{\theta }} = \mathop{\sum }\limits_{{{\mathbf{x}}_{i}^{\prime } \in {\mathbb{X}}^{\prime }}}{b}_{i}{\delta }_{{\mathbf{x}}_{i}^{\prime } \cdot \mathbf{\theta }}$ . $\mathcal{S}{\mathcal{W}}_{2}$ is then defined as follows:
+
+$$
+{\mathcal{{SW}}}_{2}{\left( \mu ,\nu \right) }^{2} = {\int }_{{\mathbb{S}}^{q - 1}}{\mathcal{W}}_{2}{\left( {\mu }_{\mathbf{\theta }},{\nu }_{\mathbf{\theta }}\right) }^{2}d\mathbf{\theta } \tag{4}
+$$
+
+The advantage of this formulation stems from the quasi-linearity in $n$ of the computational cost of the ${\mathcal{W}}_{2}$ distance between one-dimensional distributions. The integral can be estimated via Monte-Carlo sampling. The complexity is then (when ${n}^{\prime } \leq n$ ) at most $O\left( {M\left( {n\log n}\right) }\right)$ with $M$ the number of samples (uniformly) drawn from ${\mathbb{S}}^{q - 1}$ . However, [31] shows that $\mathcal{S}{\mathcal{W}}_{2}$ is biased downwards compared to ${\mathcal{W}}_{2}$ , since the projection vector $\mathbf{\theta }$ determines at the same time the OT plans and the cost of transport; this leads to a less effective distance.
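+
+As an illustration, a minimal numpy sketch of this Monte-Carlo estimation of ${\mathcal{{SW}}}_{2}$ between two uniform discrete distributions could look as follows; the function name, the quantile grid used to handle $n \neq n'$ and the default number of projections are our own choices, not those of the implementation used later in the paper:
+
+```python
+import numpy as np
+
+def sliced_wasserstein2(X, Y, n_projections=50, rng=None):
+    """Monte-Carlo estimate of SW_2 (Eq. (4)) between the uniform discrete
+    distributions supported on the rows of X (n x q) and Y (n' x q)."""
+    rng = np.random.default_rng(rng)
+    q = X.shape[1]
+    thetas = rng.normal(size=(n_projections, q))           # random directions on S^{q-1}
+    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
+    grid = np.linspace(0.0, 1.0, 200)                      # quantile grid (handles n != n')
+    total = 0.0
+    for theta in thetas:
+        qx = np.quantile(np.sort(X @ theta), grid)
+        qy = np.quantile(np.sort(Y @ theta), grid)
+        total += np.mean((qx - qy) ** 2)                   # 1D W_2^2 via quantile functions
+    return np.sqrt(total / n_projections)
+
+X = np.random.randn(120, 5)
+Y = np.random.randn(80, 5) + 0.5
+print(sliced_wasserstein2(X, Y))
+```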
+
+Projected Wasserstein distance $\left( {\mathcal{{PW}}}_{2}\right)$ . When $n = {n}^{\prime }$ , ${\mathcal{{PW}}}_{2}$ was introduced by [31] in answer to the previous limitations. $\mathcal{P}{\mathcal{W}}_{2}$ is computed similarly to $\mathcal{S}{\mathcal{W}}_{2}$ , but for each projection $\mathbf{\theta }$ , the one-dimensional optimal transport plan ${\pi }^{\mathbf{\theta }, * }$ between ${\mu }_{\mathbf{\theta }}$ and ${\nu }_{\mathbf{\theta }}$ is used with the original distributions $\mu$ and $\nu$ to compute the transport cost:
+
+$$
+\mathcal{P}{\mathcal{W}}_{2}{\left( \mu ,\nu \right) }^{2} = {\int }_{{\mathbb{S}}^{q - 1}}\mathop{\sum }\limits_{{i, j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i, j}^{\mathbf{\theta }, * }{\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}^{\prime }\end{Vmatrix}}_{2}^{2}d\mathbf{\theta } \tag{5}
+$$
+
+They show that this formulation gives a metric, has good properties and is more suitable for several learning tasks, e.g. generative tasks or reinforcement learning. Unfortunately, their result holds only for uniform distributions of the same size. We extend the method to distributions of different sizes.
+
+## 4 Simple Graph Metric Learning
+
+Let us consider a dataset $\mathbb{G}$ of attributed graphs with labeling set $\mathbb{E}$ and labeling function $\mathcal{E}$ . For a given graph $\mathcal{G} \in \mathbb{G}$ having $\mathbf{A}$ as adjacency matrix, we call $n$ the number of nodes of the graph. Each node $i$ of $\mathcal{G}$ carries features $\mathbf{X}\left( {i, : }\right) \in {\mathbb{R}}^{q}$ ; thus $\mathbf{X} \in {\mathbb{R}}^{n \times q}$ is the attribute matrix of the graph.
+
+### 4.1 From graph to distribution
+
+Previous works using OT (pseudo-)metrics have shown that comparing graphs through the signals they carry is a good way to compare them; we follow this path. The first step of our learning method consists in the generation of features jointly representative of the structure of each graph $\mathcal{G}$ and of the attributes of its nodes $\mathbf{X}$ . We use for this purpose Simple GCN [6], a streamlined version of GCN in which all the intermediate non-linearities have been removed. This choice is dictated by the need to
+
+---
+
+${}^{2}$ When $n = {n}^{\prime }$ .
+
+---
+
+strongly reduce the number of trainable parameters, and it accelerates training without degrading performance compared to other GCN. This Simple GCN creates features as:
+
+$$
+\mathbf{Y} = \operatorname{ReLU}\left( {{\widetilde{\mathbf{A}}}^{r}\mathbf{X}\mathbf{\Theta }}\right) \tag{6}
+$$
+
+where $\mathbf{X} \in {\mathbb{R}}^{n \times q}$ are the initial attributes of the nodes, $\widetilde{\mathbf{A}} = \mathbf{A} + {\mathbf{I}}_{n}$ (where ${\mathbf{I}}_{n}$ is the identity matrix of ${\mathbb{R}}^{n}$ ) and $\mathbf{Y} \in {\mathbb{R}}^{n \times p}$ are the features computed by the SGCN. The neighborhood exploration depth $r$ of this GCN is one of the hyperparameters of the method, along with the dimension $p$ of the extracted features $\mathbf{Y}$ . The coefficients of the matrix $\mathbf{\Theta } \in {\mathbb{R}}^{q \times p}$ of this GCN are the (only) trainable weights of the method. We will always choose $p \leq q$ , so the method has at most ${q}^{2}$ trainable parameters. From the extracted features $\mathbf{Y}$ , we define a uniform distribution whose support is the nodes' characteristics:
+
+$$
+{\mathcal{D}}_{\mathbf{\Theta }}\left( {\mathcal{G},\mathbf{X}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}\frac{1}{n}{\delta }_{\mathbf{Y}\left( {i, : }\right) } \tag{7}
+$$
+
+This first step is similar to WWL [12], except that we consider a trainable GCN, $\Theta$ being the trainable parameters. In eq. (7), both the structure $\mathcal{G}$ and the attributes $\mathbf{X}$ are accounted for. Next, we propose a novel way to evaluate the similarity between attributed graphs using these distributions.
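+
+As a concrete illustration of Eqs. (6)-(7), the following numpy sketch computes the SGCN features and the associated uniform distribution; the weight matrix $\mathbf{\Theta}$ is random here for demonstration (in SGML it is the trained quantity), and no adjacency normalisation is applied, following the formula above:
+
+```python
+import numpy as np
+
+def sgcn_distribution(A, X, Theta, r=2):
+    """Eq. (6)-(7): Y = ReLU(A_tilde^r X Theta), seen as a uniform distribution
+    with one Dirac of mass 1/n on each row of Y (one per node)."""
+    n = A.shape[0]
+    A_tilde = A + np.eye(n)              # add self-loops
+    H = X.copy()
+    for _ in range(r):                   # r-hop propagation
+        H = A_tilde @ H
+    Y = np.maximum(H @ Theta, 0.0)       # linear map then ReLU
+    weights = np.full(n, 1.0 / n)        # uniform masses
+    return Y, weights
+
+# Toy graph: 4 nodes, q = 3 input features, p = 2 output features.
+A = np.array([[0, 1, 0, 0],
+              [1, 0, 1, 1],
+              [0, 1, 0, 1],
+              [0, 1, 1, 0]], dtype=float)
+X = np.random.randn(4, 3)
+Theta = np.random.randn(3, 2)            # trainable in SGML, random here
+Y, w = sgcn_distribution(A, X, Theta, r=2)
+print(Y.shape, w)
+```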
+
+### 4.2 From distributions to distance
+
+The distances between graphs are computed as a distance between their representative distributions (Eq. (7)) with OT; specifically, we propose a novel one, called Restricted Projected Wasserstein (and noted ${\mathcal{{RPW}}}_{2}$ ) extending ${\mathcal{{PW}}}_{2}$ previously introduced in [31].
+
+Restricted Projected Sliced-Wasserstein. In [31], $\mathcal{P}{\mathcal{W}}_{2}$ is only defined for uniform distributions when $n = {n}^{\prime }$ . We extend this to the case $n \neq {n}^{\prime }$ . It is then much more delicate to establish that $\mathcal{P}{\mathcal{W}}_{2}$ remains a metric on the space of uniform distributions, because the triangle inequality cannot be derived as easily when $n \neq {n}^{\prime }$ . We did not find numerical evidence that this inequality is violated; we only found a few triplets where, numerically, the inequality was not satisfied because of numerical precision limits. Further work would have to settle this question.
+
+In order to compute this quantity, we could rely on Monte-Carlo sampling, and the complexity would be $O\left( {{Mpn}\log \left( n\right) }\right)$ . This can be prohibitive due to the factor ${pM}$ . In order to obtain a scalable model, we restrict the projections to be along the basis vectors ${\left\{ {\mathbf{u}}_{k}\right\} }_{k = 1}^{p}$ of ${\mathbb{R}}^{p}$ only. This gives a new distance, called Restricted $\mathcal{P}{\mathcal{W}}_{2}$ or $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ for short:
+
+$$
+\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}{\left( \mu ,\nu \right) }^{2} = \frac{1}{p}\mathop{\sum }\limits_{{k = 1}}^{p}\mathop{\sum }\limits_{{i, j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i, j}^{{\mathbf{u}}_{k}, * }{\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}^{\prime }\end{Vmatrix}}_{2}^{2} \tag{8}
+$$
+
+This distance is defined by a deterministic formula; this avoids the variability introduced by Monte-Carlo sampling. However, the drawback is that for a given ${\mathbf{u}}_{k}$ , many ${\pi }_{i, j}^{{\mathbf{u}}_{k}}$ may be optimal and they would lead to different values in the computation of (8). In order to have an unambiguous and deterministic definition, we add to this definition a deterministic way to choose among the admissible optimal transport maps. In our case, we rely on the deterministic implementation of argsort in tensorflow, and this determines the chosen optimal transport map. The complexity of ${\mathcal{{RPW}}}_{2}$ is given by $O\left( {{p}^{2}n\log \left( n\right) }\right)$ , which saves a factor $\frac{M}{p}$ compared to $\mathcal{P}{\mathcal{W}}_{2}$ ; this factor is often greater than 10.
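+
+For concreteness, here is a minimal numpy sketch of Eq. (8) in the simplified case of two uniform distributions with the same number of points, where the 1D optimal plan along each axis reduces to matching points by rank (argsort); the general case $n \neq n'$ requires the sequential merge of Appendix A.2 and is omitted here:
+
+```python
+import numpy as np
+
+def rpw2_equal_size(X, Y):
+    """Restricted Projected Wasserstein (Eq. (8)) for two uniform distributions
+    with supports X, Y of shape (n, p) and n = n'."""
+    n, p = X.shape
+    assert Y.shape == (n, p), "this simplified sketch assumes n = n'"
+    total = 0.0
+    for k in range(p):
+        order_x = np.argsort(X[:, k])        # 1D optimal assignment along axis k
+        order_y = np.argsort(Y[:, k])
+        diffs = X[order_x] - Y[order_y]      # costs evaluated in the full space R^p
+        total += np.sum(diffs ** 2) / n      # plan weights are 1/n
+    return np.sqrt(total / p)
+
+X = np.random.randn(50, 5)
+Y = np.random.randn(50, 5) + 0.3
+print(rpw2_equal_size(X, Y))
+```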
+
+Finally, the parametric distance ${d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ between two attributed graphs $\left( {\mathcal{G},\mathbf{X}}\right)$ and $\left( {{\mathcal{G}}^{\prime },{\mathbf{X}}^{\prime }}\right)$ is defined as:
+
+$$
+{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}\left( {\mathcal{G},{\mathcal{G}}^{\prime }}\right) = \mathcal{R}\mathcal{P}{\mathcal{W}}_{2}\left( {{\mathcal{D}}_{\mathbf{\Theta }}\left( {\mathcal{G},\mathbf{X}}\right) ,{\mathcal{D}}_{\mathbf{\Theta }}\left( {{\mathcal{G}}^{\prime },{\mathbf{X}}^{\prime }}\right) }\right) \tag{9}
+$$
+
+All the experiments will be conducted using this distance, except in an ablative study where we report the use of $\mathcal{S}{\mathcal{W}}_{2}$ .
+
+### 4.3 Loss for training distance: the Nearest Class Cloud Metric Learning
+
+The last element to complete our model is to define the loss function $\mathcal{F}$ for Eq. (1). We propose here a loss function for the purpose of improving the $k$ -nearest neighbors method. Actually there are classical losses already efficient for this purpose: one can notably mention Large Margin Nearest Neighbor (LMNN) [15] and Neighbourhood Component Analysis (NCA)[14].
+
+Algorithm 1 SGML: High-level algorithm to build ${d}_{{\Theta }^{ * }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ .
+
+---
+
+Require: A dataset of attributed graphs $\mathbb{G}$ and their labeling function $\mathcal{E}$ .
+
+ for each epoch $e \in \{ 1,\ldots , E\}$ do
+
+ Build a partition: ${ \cup }_{k}{B}_{k} = \mathbb{G}$ such that ${B}_{k} \cap {B}_{{k}^{\prime }} = \varnothing$ .
+
+ for each batch ${B}_{k}$ do
+
+ for each graph pair $\left( {\mathcal{G},{\mathcal{G}}^{\prime }}\right) \in {B}_{k} \times {B}_{k}$ do
+
+ Compute distance ${d}_{\Theta }^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}\left( {\mathcal{G},{\mathcal{G}}^{\prime }}\right)$ (Eq. (9))
+
+ Compute $- {\mathcal{F}}_{\mathbf{\Theta }}^{{B}_{k}}$ (Eq. (11)) and apply an iteration of Adam descent algorithm.
+
+ return all pairwise distance ${d}_{{\Theta }^{ * }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ in $\mathbb{G}$ .
+
+---
+
+The optimization is done using a gradient descent algorithm. Since computing all pairwise distances between graphs at each step of gradient descent would be intractable for large datasets, we have to train our loss in a batch way. In this context, LMNN may not be relevant since this method works locally and a batch is often not representative of the true neighborhood of an element of the dataset. On the contrary, the NCA loss can be trained in a batch way, as it is a probability model which tends to attract elements with the same label toward each other, wherever they are. However, preliminary experiments showed only a slight improvement of the k-NN with NCA. Therefore, we have constructed a new loss which ensures the same condition in a different way (see Appendix A.1) and which experimentally works better in our setting (see the ablative study, Sec. 5.4). The model is called Nearest Class Cloud Metric Learning (NCCML); the probability of a graph $\mathcal{G}$ being labeled by $e \in \mathbb{E}$ depends on the distance to the point cloud of each class (hence the name of the method):
+
+$$
+{p}_{\mathbf{\Theta }}\left( {e \mid \mathcal{G}}\right) = \frac{\exp \left( {-\mathop{\sum }\limits_{\substack{{{\mathcal{G}}_{i} \in \mathbb{G}} \\ {\mathcal{E}\left( {\mathcal{G}}_{i}\right) = e} }}{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}{\left( \mathcal{G},{\mathcal{G}}_{i}\right) }^{2}}\right) }{\mathop{\sum }\limits_{{{e}^{\prime } \in \mathbb{E}}}\exp \left( {-\mathop{\sum }\limits_{\substack{{{\mathcal{G}}_{i} \in \mathbb{G}} \\ {\mathcal{E}\left( {\mathcal{G}}_{i}\right) = {e}^{\prime }} }}{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}{\left( \mathcal{G},{\mathcal{G}}_{i}\right) }^{2}}\right) }. \tag{10}
+$$
+
+Given this probability, we want to construct the distance ${d}_{\Theta }^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ maximizing the probability that the labeled graphs in the dataset have the correct labels, which leads to solving the following problem:
+
+$$
+\mathop{\max }\limits_{\mathbf{\Theta }}{\mathcal{F}}_{\Theta }^{\mathbb{G}} = \mathop{\max }\limits_{\mathbf{\Theta }}\mathop{\sum }\limits_{{{\mathcal{G}}_{i} \in \mathbb{G},\mathcal{E}\left( {\mathcal{G}}_{i}\right) \neq \varnothing }}\log {p}_{\mathbf{\Theta }}\left( {\mathcal{E}\left( {\mathcal{G}}_{i}\right) \mid {\mathcal{G}}_{i}}\right) . \tag{11}
+$$
+
+By maximizing this loss, we construct a distance which, for each element, favors its relative distance to elements of the same label compared to those of different labels. This should favor k-NN, especially when $k > 1$ . We will show in the experiments that, in this specific context, NCCML exhibits better performance than NCA. More details on NCCML can be found in Appendix A.1.
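+
+As an illustration of Eqs. (10)-(11), the following numpy sketch evaluates the NCCML objective on a batch from a precomputed matrix of squared pairwise distances; function and variable names are ours, and in the actual model this quantity is computed in tensorflow so that gradients propagate to $\mathbf{\Theta}$ through ${d}_{\mathbf{\Theta}}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ :
+
+```python
+import numpy as np
+
+def nccml_objective(D2, labels):
+    """Eq. (10)-(11) on a batch.
+    D2     : (B, B) matrix of squared distances d(G_i, G_j)^2
+    labels : (B,) integer class labels
+    Returns the sum over the batch of log p(correct class | G_i)."""
+    classes = np.unique(labels)
+    # Score of graph i for class e: minus the total squared distance to the
+    # point cloud of class e (numerator of Eq. (10), before the softmax).
+    scores = np.stack([-D2[:, labels == e].sum(axis=1) for e in classes], axis=1)
+    m = scores.max(axis=1, keepdims=True)    # numerically stable log-softmax
+    log_p = scores - (m + np.log(np.exp(scores - m).sum(axis=1, keepdims=True)))
+    idx = np.searchsorted(classes, labels)
+    return log_p[np.arange(len(labels)), idx].sum()   # objective F to maximize
+
+# Toy batch of 6 graphs with 2 classes; D2 is a stand-in for the learned distances.
+rng = np.random.default_rng(0)
+P = rng.random((6, 3))
+D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
+labels = np.array([0, 0, 0, 1, 1, 1])
+print(nccml_objective(D2, labels))
+```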
+
+### 4.4 Computational aspects
+
+We will test our metric learning method with both $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ and $\mathcal{S}{\mathcal{W}}_{2}$ .
+
+Optimization. In terms of optimization, we can differentiate the one-dimensional Wasserstein distance directly with respect to the parameters of the distributions; thus we can also differentiate through the approximation of ${\mathcal{{SW}}}_{2}$ (Eq. (4)) and through ${\mathcal{{RPW}}}_{2}$ (Eq. (8)). Automatic differentiation techniques can be used on these expressions (see [26]). We implemented our algorithm in tensorflow ${}^{3}$ . The minimization of the loss is performed by batch and stochastic gradient descent (in particular with the Adam optimizer [32]).
+
+Parameters. The following default parameters are used (unless otherwise indicated in the text): learning rate ${l}_{r} = {0.999} * {10}^{-2}$ , number of epochs $E = {10}$ , batch size $B = 8$ , and the GCN output features size $p = \min \left( {5, q}\right)$ . For experiments involving $\mathcal{S}{\mathcal{W}}_{2}$ , the sampling number is set to $M = {50}$ which is a common value used in the literature.
+
+---
+
+${}^{3}$ The implementation can be found in the supplementary material.
+
+---
+
+Time complexity. Theoretically, the training time is negligible compared to the computation of all pairwise distances; therefore we focus on this last step for the time complexity analysis (see Appendix A.4 for runtimes per dataset). If we denote $\widetilde{n}$ the average number of nodes of a graph, the total complexity of this computation with $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ (resp. $\mathcal{S}{\mathcal{W}}_{2}$ ) is given by $O\left( {\left| \mathbb{G}\right| \widetilde{n}\left( {{p}^{2} + \widetilde{n}{rp}}\right) + {\left| \mathbb{G}\right| }^{2}{p}^{2}\widetilde{n}\log \widetilde{n}}\right)$ (resp. $O\left( {\left| \mathbb{G}\right| \widetilde{n}\left( {{p}^{2} + \widetilde{n}{rp}}\right) + {\left| \mathbb{G}\right| }^{2}{pM}\widetilde{n}\log \widetilde{n}}\right)$ ). The first term comes from the application of the GCN and the last from computing the distances. In practice, for not too large $\widetilde{n}$ values, a quadratic implementation exploiting vectorization can be faster (see Section 5.2). Furthermore, one can see that the GCN becomes the limiting element for scaling (in graph size); in practice, the sparsity of the adjacency matrix and the optimizations on GPUs limit this problem. However, determining the least expensive ways to characterize the nodes is still an active research topic [33, 34].
+
+Spatial Complexity. Our quadratic implementation mentioned above requires storing in memory a tensor of size $O\left( {{\widetilde{n}}^{2}p}\right)$ for $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ and $O\left( {{\widetilde{n}}^{2}M}\right)$ for $\mathcal{S}{\mathcal{W}}_{2}$ . The sequential implementation has an $O\left( \widetilde{n}\right)$ spatial complexity (more details on these implementations are in Appendix A.2). In any case, for both implementations and for the graph datasets considered, SGML is very cheap in terms of memory consumption with regard to current GPU capabilities.
+
+## 5 Experiments
+
+### 5.1 Datasets
+
+
+
+Figure 1: Run times comparisons.
+
+For the experiments, we use a large panel of datasets from the literature [2] ${}^{4}$ : ENZYMES, PROTEINS, IMDB-B, IMDB-M, MUTAG, BZR, COX2 and NCI1. More information on these datasets can be found in Appendix A.3. Additional details about the following experiments can be found in Appendix A.6 for reproducibility. When a dataset has discrete features, they are one-hot encoded.
+
+### 5.2 $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ Running times
+
+We have generated uniform random (normal) distributions with support in ${\mathbb{R}}^{5}$ , with sizes ranging from ${10}^{1}$ to ${10}^{6}$ .
+
+These distribution sizes correspond to graph sizes $n$ (number of nodes). The choice of ${\mathbb{R}}^{5}$ is motivated by the usual good performance of ML when performed in small dimension. We compare the running time to compute the distance between these distributions with ${\mathcal{W}}_{2}$ , ${\mathcal{W}}_{2}^{e}$ ( ${\mathcal{W}}_{2}$ with entropic regularization parameter $\gamma = {100}$ ) and $\mathcal{S}{\mathcal{W}}_{2}$ using the POT library [35], and with $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ . For $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ , we compare both the quadratic and the sequential (numpy) implementations we developed. The results can be found in Figure 1. Additional details and results are given in Appendix A.5.
+
+As expected, $\mathcal{S}{\mathcal{W}}_{2}$ and $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ are the methods scaling the best: we obtain the expected quasi-linear slope $O\left( {n\log n}\right)$ for both methods. As soon as $n > {10}^{4}$ , ${\mathcal{{SW}}}_{2}$ and ${\mathcal{{RPW}}}_{2}$ allow us to compute distances between distributions several orders of magnitude larger in the same time as ${\mathcal{W}}_{2}$ and ${\mathcal{W}}_{2}^{e}$ . Although ${\mathcal{{SW}}}_{2}$ and ${\mathcal{{RPW}}}_{2}$ scale mostly the same, ${\mathcal{{SW}}}_{2}$ seems a bit faster than ${\mathcal{{RPW}}}_{2}$ . However, we will show in the next experiment (Sec. 5.4) that ${\mathcal{{RPW}}}_{2}$ builds better metrics than ${\mathcal{{SW}}}_{2}$ . Finally, we can note that the quadratic implementation is the fastest for samples with fewer than 200 points, which is the case for the datasets considered in the following experiments.
+
+### 5.3 Supervised classification
+
+We evaluate the method in two ways: by using k-NN directly on the computed distances, and by using an SVM with a custom kernel built from the proposed model. We also compare the method to several (pseudo-)metrics and distances from the literature such as NetLSD [36], WWL [12], and FGW [27].
+
+---
+
+${}^{4}$ http://graphkernels.cs.tu-dortmund.de
+
+---
+
+Table 1: Results of the main experiments for datasets of graphs with discrete attributes. Features are node labels for NCI1, PROTEINS and ENZYMES; and degrees for others. Accuracy is in bold green when it is the best of its block. For $\mathcal{{FGW}}$ -WL (resp. PSCN), depth is set to 4 (resp. 10).
+
+| Method | MUTAG | NCI1 | PROTEINS | ENZYMES | IMDB-M | IMDB-B |
+| --- | --- | --- | --- | --- | --- | --- |
+| **k-NN** | | | | | | |
+| ${\mathcal{{RPW}}}_{2}$ | ${90.00} \pm {7.60}$ | ${72.12} \pm {1.65}$ | ${70.18} \pm {4.01}$ | ${49.00} \pm {8.17}$ | ${45.00} \pm {5.46}$ | ${68.90} \pm {5.45}$ |
+| Net-LSD-h | 84.90 | 65.89 | 64.89 | 31.99 | 40.51 | 68.04 |
+| FGSD | 86.47 | 75.77 | 65.30 | 41.58 | 41.14 | 69.54 |
+| NetSimile | 84.09 | 66.56 | 62.45 | 33.23 | 40.97 | 69.20 |
+| **SVM & GCN** | | | | | | |
+| ${\mathcal{{RPW}}}_{2}$ | ${88.95} \pm {7.61}$ | ${74.84} \pm {1.81}$ | ${74.55} \pm {4.19}$ | ${54.00} \pm {7.07}$ | ${51.00} \pm {5.44}$ | ${72.00} \pm {3.16}$ |
+| WWL | ${87.27} \pm {1.50}$ | ${85.75} \pm {0.25}$ | ${74.28} \pm {0.56}$ | ${59.13} \pm {0.80}$ | ✘ | ✘ |
+| FGW | ${83.26} \pm {10.30}$ | ${72.82} \pm {1.46}$ | ✘ | ✘ | ${48.00} \pm {3.22}$ | ${63.80} \pm {3.49}$ |
+| $\mathcal{F}\mathcal{G}\mathcal{W}$ -WL | ${88.42} \pm {5.67}$ | ${86.42} \pm {1.63}$ | ✘ | ✘ | ✘ | ✘ |
+| WL-OA | ${87.15} \pm {1.82}$ | ${86.08} \pm {0.27}$ | ${76.37} \pm {0.30}$ | ${58.97} \pm {0.82}$ | ✘ | ✘ |
+| PSCN | ${83.47} \pm {10.26}$ | ${70.65} \pm {2.58}$ | ${58.34} \pm {7.71}$ | ✘ | ✘ | ✘ |
+
+Table 2: Results of the main experiments for datasets of graphs with continuous attributes. The best accuracies are in bold green. Note that for PROTEINS, ENZYMES and CUNEIFORM we concatenate continuous attributes with discrete attributes to build extended continuous attributes (see Appendix A.6 for more details).
+
+| Method | BZR | COX2 | PROTEINS | ENZYMES | CUNEIFORM |
+| --- | --- | --- | --- | --- | --- |
+| ${\mathcal{{RPW}}}_{2}\left( \mathrm{{kNN}}\right)$ | ${85.61} \pm {2.98}$ | ${79.79} \pm {2.18}$ | ${71.79} \pm {4.47}$ | ${51.66} \pm {5.16}$ | ${54.81} \pm {12.26}$ |
+| **SVM & GCN** | | | | | |
+| ${\mathcal{{RPW}}}_{2}$ | ${84.39} \pm {3.81}$ | ${78.51} \pm {0.01}$ | ${74.29} \pm {4.11}$ | ${48.83} \pm {4.78}$ | ${64.44} \pm {10.50}$ |
+| WWL | ${84.42} \pm {2.03}$ | ${78.29} \pm {0.47}$ | ${77.91} \pm {0.80}$ | ${73.25} \pm {0.87}$ | ✘ |
+| $\mathcal{F}\mathcal{G}\mathcal{W}$ | ${85.12} \pm {4.15}$ | ${77.23} \pm {4.86}$ | ${74.55} \pm {2.74}$ | ${71.00} \pm {6.76}$ | ${76.67} \pm {7.04}$ |
+| PROPAK | ${79.51} \pm {5.02}$ | ${77.66} \pm {3.95}$ | ${61.34} \pm {4.38}$ | ${71.67} \pm {5.63}$ | ${12.59} \pm {6.67}$ |
+| HGK-SP | ${76.42} \pm {0.72}$ | ${72.57} \pm {1.18}$ | ${75.78} \pm {0.17}$ | ${66.36} \pm {0.37}$ | ✘ |
+| PSCN [K = 10] (GCN) | ${80.00} \pm {4.47}$ | ${71.70} \pm {3.57}$ | ${67.95} \pm {11.28}$ | ${26.67} \pm {4.77}$ | ${25.19} \pm {7.73}$ |
+
+k-Nearest Neighbors. Datasets are split into a training set (90%) and a test set (10%). For each of them, we train ${\mathcal{{RPW}}}_{2}$ following Algorithm 1 on the training set with only one hyperparameter to adjust: the depth of the SGCN, taken in $r = \{ 1,2,3,4\}$ for all datasets, except for MUTAG for which we go up to 7. The training is done for each parameter $r$ during 10 epochs. A 5-fold cross-validation of the number of neighbors $k = \{ 1,2,3,5,7\}$ is performed on the training set using the considered distance. Then, for the best ${k}^{ * }$ , we keep the associated validation accuracy, and we finally train a $\mathrm{k} - \mathrm{{NN}}$ on the whole training set and evaluate its accuracy on the test set. This experiment is averaged over 10 runs. The final test accuracy retained is the one associated with the largest validation accuracy. In this procedure, test set labels were never seen during either training or validation. Results are given in the first lines of Table 2 for graphs with continuous attributes, and of Table 1 for graphs having labeled nodes.
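+
+A minimal scikit-learn sketch of this selection of $k$, assuming the pairwise distances produced by Algorithm 1 are available as a precomputed matrix; the toy distance matrix and the unstratified split below are placeholders for the learned ${\mathcal{{RPW}}}_{2}$ distances and the actual protocol:
+
+```python
+import numpy as np
+from sklearn.model_selection import StratifiedKFold
+from sklearn.neighbors import KNeighborsClassifier
+
+def select_k(D_train, y_train, ks=(1, 2, 3, 5, 7), n_splits=5):
+    """5-fold cross-validation of k on a precomputed distance matrix."""
+    best_k, best_acc = ks[0], -1.0
+    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
+    for k in ks:
+        accs = []
+        for tr, va in skf.split(D_train, y_train):
+            clf = KNeighborsClassifier(n_neighbors=k, metric="precomputed")
+            clf.fit(D_train[np.ix_(tr, tr)], y_train[tr])
+            accs.append(clf.score(D_train[np.ix_(va, tr)], y_train[va]))
+        if np.mean(accs) > best_acc:
+            best_k, best_acc = k, float(np.mean(accs))
+    return best_k, best_acc
+
+# Toy stand-in: Euclidean distances between random points with simple labels.
+rng = np.random.default_rng(0)
+P = rng.normal(size=(60, 3))
+y = (P[:, 0] > 0).astype(int)
+D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
+train, test = np.arange(54), np.arange(54, 60)            # 90% / 10% split
+best_k, _ = select_k(D[np.ix_(train, train)], y[train])
+knn = KNeighborsClassifier(n_neighbors=best_k, metric="precomputed")
+knn.fit(D[np.ix_(train, train)], y[train])
+print("best k:", best_k, "test accuracy:", knn.score(D[np.ix_(test, train)], y[test]))
+```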
+
+The metric learning framework combined with k-NN allows us to obtain good performance in classification tasks, in particular for datasets of graphs with continuous attributes. The exception is ENZYMES, where performance is clearly lower. For discrete attributes, SGML performs slightly below the state of the art, yet it outperforms the existing distances classically combined with k-NN. These experiments show that our graph ML distance framework is efficient.
+
+Note: This procedure is very similar to the one used by WWL, except that the parameter $k$ is replaced by the corresponding parameters of their kernel (see next section).
+
+SVM. To compare to graph kernel methods, the experiment described in the previous section is reproduced using an SVM for classification. The kernel ${\mathbf{K}}_{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}} = \exp \left( {-\lambda {d}_{{\Theta }^{ * }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}}\right)$ is built from the constructed distance. In this experiment, the kernel hyperparameter $\lambda$ and the SVM hyperparameter $C$ are tuned similarly to the parameter $k$ above. The sets of possible $\lambda$ (resp. $C$ ) values are 6 (resp. 12) regularly spaced values between ${10}^{-4}$ and ${10}^{1}$ (resp. ${10}^{-4}$ and ${10}^{5}$ , including 1). The results are provided in Table 1 (bottom part).
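+
+A possible sketch of this kernel construction with scikit-learn, again assuming a precomputed matrix of learned distances; the toy data and the values of $\lambda$ and $C$ are placeholders, tuned by cross-validation in the actual experiments:
+
+```python
+import numpy as np
+from sklearn.svm import SVC
+
+def rpw2_kernel(D, lam):
+    """Kernel built from the learned distance: K = exp(-lambda * d)."""
+    return np.exp(-lam * D)
+
+# Toy stand-in distances (in practice, the learned RPW2 distances).
+rng = np.random.default_rng(1)
+P = rng.normal(size=(60, 3))
+y = (P[:, 0] > 0).astype(int)
+D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
+train, test = np.arange(54), np.arange(54, 60)
+
+lam, C = 0.1, 10.0                                   # placeholders for the tuned values
+svc = SVC(C=C, kernel="precomputed")
+svc.fit(rpw2_kernel(D[np.ix_(train, train)], lam), y[train])
+print(svc.score(rpw2_kernel(D[np.ix_(test, train)], lam), y[test]))
+```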
+
+Table 3: Ablative study results. Acc. is the accuracy. $\Delta$ is the difference in accuracy between the model of the column and the proposed SGML, whose results are in Tables 1 and 2. A negative (resp. positive) number means that our model performs better (resp. worse).
+
+| Dataset | WWL Acc. | WWL $\Delta$ | SGML-${\mathcal{{SW}}}_{2}$ Acc. | SGML-${\mathcal{{SW}}}_{2}$ $\Delta$ | SGML-NCA Acc. | SGML-NCA $\Delta$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| BZR | 78.05 | - 7.56 | 82.93 | - 2.68 | 83.41 | - 2.2 |
+| COX2 | 78.51 | - 1.26 | 78.30 | - 1.49 | 77.66 | - 2.13 |
+| MUTAG | 83.68 | - 6.32 | 86.84 | - 3.16 | 87.37 | - 2.63 |
+| NCI1 | 80.43 | + 5.31 | 69.03 | - 3.09 | 69.66 | - 2.46 |
+| PROTEINS | 71.60 | + 1.42 | 71.34 | + 1.16 | 71.70 | + 1.52 |
+| IMDB-B | 68.20 | - 0.7 | 68.20 | - 0.7 | 67.40 | - 1.5 |
+| IMDB-M | 48.73 | + 3.73 | 42.33 | - 2.67 | 42.73 | - 2.27 |
+| ENZYMES | 56.00 | + 7 | 44.33 | - 4.67 | 55.33 | + 6.33 |
+
+In this part of the table, one can see that the distance learned with our model performs as well as other OT distances when used as a kernel, on the majority of the datasets. We reach or are slightly above state-of-the-art results on 5 datasets out of 6, but are still below on NCI1. We recall that our method is specifically designed for the k-nearest neighbors method and that its computational complexity is much lower than that of many of the best methods on these datasets (notably WWL and $\mathcal{{FGW}}$ ).
+
+### 5.4 Ablative study
+
+We perform experiments to justify the design choices of our model. Specifically, we show that these choices effectively help to improve k-NN performance by reproducing the experiments above (with k-NN) on different versions of the method without some (or all) of our propositions.
+
+Raw model. Without any of our novel propositions, the method would be equivalent to WWL, which corresponds to using the Wasserstein distance between the distributions of Eq. (7), where $\mathbf{Y}$ is generated with GIN [5], a non-trainable GCN. This specific case corresponds to the first column, denoted WWL, of Table 3. We see that even if there are datasets where there is a loss of performance, others benefit from the learned metrics. Moreover, we recall that our distance is much less expensive to use than ${\mathcal{W}}_{2}$ , on which WWL is based.
+
+SGML with ${\mathcal{{SW}}}_{2}$ . This second ablative study is in the second column, denoted SGML- ${\mathcal{{SW}}}_{2}$ , of Table 3, and is related to replacing ${\mathcal{{RPW}}}_{2}$ by ${\mathcal{{SW}}}_{2}$ . The result clearly validates our choice to use $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ instead of $\mathcal{S}{\mathcal{W}}_{2}$ . Our model is the best one except on one dataset.
+
+SGML with NCA. For this final experiment, we replaced the NCCML loss with the NCA loss. The result is in the third column, SGML - NCA, of Table 3. It appears that NCCML outperforms NCA in our specific ML framework.
+
+Globally, the ablative study is in favor of the choices proposed for SGML. Note that the driving idea of choosing simple and scalable methods over more complex ones leads to competitive performance while preserving scalability.
+
+## 6 Conclusion
+
+In this article, we proposed a metric learning method for attributed graphs, specifically designed to increase the performance of k-NN. We have shown experimentally that it can indeed achieve performance similar or even superior to the state of the art. However, theoretical work on the properties of ${\mathcal{{RPW}}}_{2}$ would be useful to better understand when it does not perform well. Appendix A.8 presents some additional elements on the limits of this work. In addition, further work may easily adapt SGML to other tasks like graph clustering or regression, with an appropriate (and probably different) ML loss.
+
+## References
+
+[1] Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-lehman graph kernels. JMLR, 2011.
+
+[2] Kristian Kersting, Nils M. Kriege, Christopher Morris, Petra Mutzel, and Marion Neumann. Benchmark data sets for graph kernels, 2016.
+
+[3] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, 2016.
+
+[4] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
+
+[5] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019.
+
+[6] F. Wu, T. Zhang, A. H. Souza Jr., C. Fifty, T. Yu, and K. Q. Weinberger. Simplifying graph convolutional networks. Proceedings of Machine Learning Research, 2019.
+
+[7] Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. In ICLR, 2019.
+
+[8] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs, 2017.
+
+[9] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In NeurIPS. 2018.
+
+[10] Fan-Yun Sun, Jordan Hoffman, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In ICLR, 2020.
+
+[11] Louis Béthune, Yacouba Kaloga, Pierre Borgnat, Aurélien Garivier, and Amaury Habrard. Hierarchical and unsupervised graph representation learning with Loukas's coarsening. Algorithms, 2020.
+
+[12] Matteo Togninalli, Elisabetta Ghisu, Felipe Llinares-López, Bastian Rieck, and Karsten Borgwardt. Wasserstein Weisfeiler-Lehman graph kernels. In NeurIPS, 2019.
+
+[13] Eric Xing, Michael Jordan, Stuart J Russell, and Andrew Ng. Distance metric learning with application to clustering with side-information. Advances in NeurIPS, 2002.
+
+[14] Jacob Goldberger, Geoffrey E Hinton, Sam Roweis, and Russ R Salakhutdinov. Neighbourhood components analysis. In Advances in NeurIPS, 2005.
+
+[15] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 2009.
+
+[16] Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
+
+[17] Juan Luis Suárez-Díaz, Salvador García, and Francisco Herrera. A tutorial on distance metric learning: Mathematical foundations, algorithms, experimental analysis, prospects and challenges. arXiv:1812.05944, 2018.
+
+[18] Tomoki Yoshida, Ichiro Takeuchi, and Masayuki Karasuyama. Distance metric learning for graph structured data. Machine Learning, 110, 2021.
+
+[19] Nils M. Kriege, Pierre-Louis Giscard, and Richard Wilson. On valid optimal assignment kernels and applications to graph classification. In Advances in NeurIPS. 2016.
+
+[20] Nicolas Bonneel, Julien Rabin, Gabriel Peyré, and Hanspeter Pfister. Sliced and radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 2015.
+
+[21] Aurélien Bellet, Amaury Habrard, and Marc Sebban. Good edit similarity learning by loss minimization. Machine Learning, 2012.
+
+[22] Michel Neuhaus and Horst Bunke. Automatic learning of cost functions for graph edit distance. Information Sciences, 2007.
+
+[23] Linlin Jia, Benoit Gaüzère, Florian Yger, and Paul Honeine. A metric learning approach to graph edit costs for regression. In Joint IAPR Workshops SPR & SSPR, 2021.
+
+[24] S.I. Ktena, S. Parisot, E. Ferrante, M. Rajchl, M. Lee, B. Glocker, and D. Rueckert. Metric learning with spectral graph convolutions on brain connectivity networks. NeuroImage, 2018.
+
+[25] Qi Zhao and Yusu Wang. Learning metrics for persistence-based summaries and applications for graph classification. In NeurIPS, 2019.
+
+[26] Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11, 2019.
+
+[27] Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, and Nicolas Courty. Optimal transport for structured data with application on graphs. In ICML, 2019.
+
+[28] H.P. Maretic, M. El Gheche, G. Chierchia, and P. Frossard. GOT: an optimal transport framework for graph comparison. In NeurIPS, 2019.
+
+[29] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN, 2017.
+
+[30] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in NeurIPS, 2013.
+
+[31] Mark Rowland, Jiri Hron, Yunhao Tang, Krzysztof Choromanski, Tamas Sarlos, and Adrian Weller. Orthogonal estimation of Wasserstein distances. In AISTATS, 2019.
+
+[32] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
+
+[33] Aleksandar Bojchevski, Johannes Klicpera, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. Scaling graph neural networks with approximate PageRank. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, aug 2020.
+
+[34] Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Ben Chamberlain, Michael Bronstein, and Federico Monti. Sign: Scalable inception graph neural networks, 2020.
+
+[35] Rémi Flamary, Nicolas Courty, and Alexandre Gramfort et al. POT: Python Optimal Transport. Journal of Machine Learning Research, 22(78):1-8, 2021. URL http://jmlr.org/papers/v22/20-451.html.
+
+[36] Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Alexander Bronstein, and Emmanuel Müller. Netlsd. In ACM SIGKDD. ACM, 2018.
+
+[37] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In ECCV, 2012.
+
+[38] S Luan, M Zhao, X-W Chang, and D Precup. Break the ceiling: Stronger multi-scale deep graph convolutional networks. In Advances in Neural Information Processing Systems 32, pages 10945-10955. 2019.
+
+[39] Andreas Loukas. What graph neural networks cannot learn: depth vs width. abs/1907.03199, 2019.
+
+[40] Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J. Kim. Graph transformer networks. abs/1911.06455, 2019.
+
+[41] Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. abs/2012.09699, 2020.
+
+[42] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Trans. Patt. Analysis and Machine Intelligence, 2013.
+
+## A Appendix
+
+### A.1 Motivation and Interpretation of NCCML
+
+We detail here some of the insights that led us to propose NCCML for ML.
+
+Since we want to maintain a low complexity to train our model, batch training is desirable. As a consequence, and as said in Section 4.3, the Large Margin Nearest Neighbor (LMNN) [15] loss was not appropriate because it works very locally and is not optimal with batch training. Indeed, LMNN tries to attract and repel points with respect to elements of the dataset which are neighbours, according to their labels. With batch training, this could lead to some overfitting where we try to attract points which should not be close even if they share the same label. This is all the more true the smaller the batch size. So we decided to use Neighborhood Component Analysis (NCA) [14], which gave us a slightly better but limited performance. In reality, NCA is also a very local method. Indeed, it considers the probability $p\left( {{\mathcal{G}}_{i},{\mathcal{G}}_{j}}\right)$ for two elements to have the same label:
+
+$$
+{p}_{\mathbf{\Theta }}\left( {{\mathcal{G}}_{i},{\mathcal{G}}_{j}}\right) = \frac{\exp \left( {-{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}{\left( {\mathcal{G}}_{j},{\mathcal{G}}_{i}\right) }^{2}}\right) }{\mathop{\sum }\limits_{{k,{k}^{\prime }}}\exp \left( {-{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}{\left( {\mathcal{G}}_{k},{\mathcal{G}}_{{k}^{\prime }}\right) }^{2}}\right) } \tag{12}
+$$
+
+Given this form of probability, NCA tries to maximize it for all pairs of elements which effectively have the same label:
+
+$$
+\mathop{\max }\limits_{\Theta }\mathop{\sum }\limits_{{{\mathcal{G}}_{i} \in \mathbb{G}}}\mathop{\sum }\limits_{\substack{{{\mathcal{G}}_{j} \in \mathbb{G}} \\ {\mathcal{E}\left( {\mathcal{G}}_{i}\right) = \mathcal{E}\left( {\mathcal{G}}_{j}\right) } }}{p}_{\Theta }\left( {{\mathcal{G}}_{i},{\mathcal{G}}_{j}}\right) \tag{13}
+$$
+
+However, as one can see from Eq. (13), the probability of having the same label is a softmax, so distant elements do not contribute much to this probability. It contains mostly local information. We believe that one could obtain better results by considering a more global criterion. Moreover, using batches would then be advantageous since it helps the model build a good metric, even for k-NN (which requires a locally fine metric), because batch training acts as a regularization and helps generalization.
+
+An inspiration for this comes from NCMML [37], which proposes a loss function specifically built to increase the performance of the nearest class mean classifier. This model also relies on a probabilistic model where the probability of belonging to a class is given by a softmax which considers the distance to the means of the different classes. Obviously, NCMML is not well suited for our tasks using k-NN. Moreover, it would require an additional layer of computation for computing barycenters with OT.
+
+We adopted a compromise between NCA and the NCMML loss. The probability of belonging to a class is given by a softmax which depends on the relative distances to the elements carrying each label (Eq. (10)). It has the advantage that the loss on a batch will be representative of the loss over the whole dataset, because the relative distances to the different labels should remain the same on subsamples of the dataset. Moreover, it benefits from batch training, which acts as a regularizer. This finally leads to a better learned metric for k-NN compared to NCA, as shown in our ablative study (Table 3). However, in a regular setting where the whole dataset could be used to build and train these losses, NCCML would certainly show worse results than LMNN and NCA.
+
+The specific setting studied here, due to the requirement of scalability, forces us to propose a loss different from the literature, which indeed shows some improvement compared to NCA.
+
+### A.2 Implementation details
+
+Sequential implementation. A priori, it is necessary to compute all the transport costs between two distributions so as to calculate the optimal transport, and this operation has a quadratic complexity. For most OT distances such as ${\mathcal{W}}_{2}$ , since the complexity is dominated by the computation of the optimal transport plan, this is of no consequence. However, for ${\mathcal{{RPW}}}_{2}$ (as well as for ${\mathcal{{SW}}}_{2}$ ) it becomes a critical aspect. Fortunately, there is no need to compute all the costs to find the optimal transport, and the transport plan has no more than $n + m$ (given that the distributions have sizes $n$ and $m$ ) non-zero coefficients. This is why their complexity remains quasi-linear, in $O\left( {n\log n}\right)$ . The algorithm of the implementation referred to as "sequential implementation" in the core text can be found in Algorithm 2. The experiment in Section 5.2 assessed the quasi-linear complexity of this algorithm.
+
+Quadratic implementation. In this second implementation, we compute all possible transport costs using a library of matrix multiplication, and then we multiply these costs by the optimal transport matrix. These operations allow us to benefit from the advantages of vectorization and to gain time compared to the sequential implementation, when $n$ is not too large. This result is assessed experimentally in Section 5.2.
+
+Both implementations can be found in the supplementary material.
+
+Note: In the reported experiments, we have seen that for $n < {1000}$ , it is better to use the quadratic implementation. However, this result strongly depends on the hardware used, and also on the dimension $p$ of the distribution support. The scaling behavior of the two implementations is an interesting characteristic, showing that the proposed method can be implemented in a quasi-linear way. A second comment is that the method can be made rapid enough (and very competitive) with optimizations.
+
+Algorithm 2 ${\mathcal{{RPW}}}_{2}$ - Sequential
+
+---
+
+Ensure: Build the distance between two discrete distributions $\mu$ and $\nu$ in $\mathcal{P}\left( {\mathbb{R}}^{p}\right)$ .
+
+Require: $\mu = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{\delta }_{{\mathbf{x}}_{i}}$ and $\nu = \mathop{\sum }\limits_{{j = 1}}^{m}{b}_{j}{\delta }_{{\mathbf{y}}_{j}}$ .
+
+ Set $c = 0$ .
+
+ for each axis $k \in \{ 1,\ldots , p\}$ do
+
+ Get ${\sigma }_{\mu }^{k},{\sigma }_{\nu }^{k}$ , the sorting permutations of the $k$ -th components of the support vectors,
+
+ i.e ${\mathbf{x}}_{{\sigma }_{\mu }^{k}\left( 0\right) }\left( k\right) \leq \cdots \leq {\mathbf{x}}_{{\sigma }_{\mu }^{k}\left( {n - 1}\right) }\left( k\right)$ and ${\mathbf{y}}_{{\sigma }_{\nu }^{k}\left( 0\right) }\left( k\right) \leq \cdots \leq {\mathbf{y}}_{{\sigma }_{\nu }^{k}\left( {m - 1}\right) }\left( k\right)$ .
+
+ Set $T =$ true. Set $i, j = 0,0$ .
+
+ Set ${w}_{\mu },{w}_{\nu } = {a}_{{\sigma }_{\mu }^{k}\left( 0\right) },{b}_{{\sigma }_{\nu }^{k}\left( 0\right) }$ .
+
+ while $T = =$ True do
+
+ if ${w}_{\mu } < {w}_{\nu }$ then
+
+ $c = c + {w}_{\mu } * {\begin{Vmatrix}{\mathbf{x}}_{{\sigma }_{\mu }^{k}\left( i\right) } - {\mathbf{y}}_{{\sigma }_{\nu }^{k}\left( j\right) }\end{Vmatrix}}_{2}^{2}$
+
+ $i = i + 1$
+
+ if $i = = n$ then
+
+ $T =$ false
+
+ ${w}_{\nu } = {w}_{\nu } - {w}_{\mu }$
+
+ ${w}_{\mu } = {a}_{{\sigma }_{\mu }^{k}\left( i\right) }$
+
+ else
+
+ $c = c + {w}_{\nu } * {\begin{Vmatrix}{\mathbf{x}}_{{\sigma }_{\mu }^{k}\left( i\right) } - {\mathbf{y}}_{{\sigma }_{\nu }^{k}\left( j\right) }\end{Vmatrix}}_{2}^{2}$
+
+ $j = j + 1$
+
+ if $j = = m$ then
+
+ $T =$ false
+
+ ${w}_{\mu } = {w}_{\mu } - {w}_{\nu }$
+
+ ${w}_{\nu } = {b}_{{\sigma }_{\nu }^{k}\left( j\right) }$
+
+ return $\sqrt{\frac{c}{p}}$
+
+---
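+
+A direct numpy transcription of Algorithm 2 is sketched below for reference (an illustration written for this document, not the exact supplementary code; ties between the remaining weights are handled symmetrically and memory stays in $O\left( {n + m}\right)$ ).
+
+```python
+import numpy as np
+
+def rpw2_sequential(X, Y, a, b):
+    """Sketch of Algorithm 2: RPW_2 between mu = sum_i a_i delta_{X[i]} and
+    nu = sum_j b_j delta_{Y[j]}, with X of shape (n, p) and Y of shape (m, p)."""
+    (n, p), m = X.shape, Y.shape[0]
+    c = 0.0
+    for k in range(p):                                     # one pass per coordinate axis
+        su, sv = np.argsort(X[:, k]), np.argsort(Y[:, k])  # sorting permutations
+        i = j = 0
+        wa, wb = a[su[0]], b[sv[0]]
+        while True:
+            d2 = np.sum((X[su[i]] - Y[sv[j]]) ** 2)        # full R^p cost along the 1D plan
+            mass = min(wa, wb)
+            c += mass * d2
+            wa, wb = wa - mass, wb - mass
+            if wa <= 1e-15:
+                i += 1
+                if i == n:
+                    break
+                wa = a[su[i]]
+            if wb <= 1e-15:
+                j += 1
+                if j == m:
+                    break
+                wb = b[sv[j]]
+    return np.sqrt(c / p)
+```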
+
+### A.3 Datasets
+
+The characteristics of the datasets used are summarized in Table 4.
+
+### A.4 SGML - Datasets runtimes
+
+Table 5 below provides typical runtimes for both the training phase and the distance computation phase on the different datasets considered in this paper. We used the proposed quadratic implementation for all datasets. A tensorflow implementation is used during the training phase (to leverage the built-in functions for optimization and training) while the numpy implementation is used during the final distance computation. All running time experiments were conducted on a computer equipped with an Intel Core i9-9900KS processor (62 GB of RAM) and a GeForce RTX 3090 GPU (24 GB of RAM). The parameters are the same as in the experiments described in the paper. We fixed the depth of our GCN to $r = 4$ . As one can see, despite a lower theoretical complexity, the training time is larger than the distance computation time. This is because the numpy implementation is much more efficient (especially for the sort operation) and these datasets are not large enough (in terms of the number of graphs) for the tensorflow implementation to catch up with the numpy implementation. One clearly sees that the bigger the dataset (e.g., NCI1), the smaller the time saved by the numpy implementation.
+
+Table 4: Graph datasets used in our experiments. #Graphs: number of graphs. #Nodes: average number of nodes. cont.: attributes have continuous values; lab.: attributes are labels; deg.: attributes are node degrees. $q$ is the feature dimension.
+
+| Datasets | BZR | COX2 | PROTEINS | ENZYMES | MUTAG | NCI1 | IMDB-B | IMDB-M | CUNEIFORM |
| #Graphs | 405 | 467 | 1113 | 600 | 188 | 4110 | 1000 | 1500 | 267 |
| #Nodes | 35.75 | 41.22 | 39.06 | 32.63 | 17.93 | 29.97 | 19.77 | 13 | 21.27 |
| Node attributes | cont. | cont. | cont. / lab. | cont. / lab. | deg. | lab. | deg. | deg. | cont. / lab. |
| $q$ | 3 | 3 | 1 / 3 | 18 / 3 | 4 | 38 | 135 | 88 | 3 / 3 |
+
+Table 5: Typical runtimes in our experiments. The running times of WWL $\left( {r = 2}\right)$ and $\mathcal{{FGW}}$ ( $\alpha = {0.5}$ , except for the IMDB datasets where it is set to 1) to compute the distances are also provided.
+
+| Datasets | BZR | COX2 | PROTEINS | ENZYMES | MUTAG | NCI1 | IMDB-B | IMDB-M |
| Training time (s) | 35 | 40 | 240 | 220 | 15 | 480 | 80 | 120 |
| Distances comp. (s) | 5 | 7 | 40 | 40 | 1 | 480 | 10 | 55 |
| Dist. comp. (s) - WWL | 16 | 25 | 200 | 30 | 2 | 1500 | 80 | 140 |
| Dist. comp. (s) - $\mathcal{F}\mathcal{G}\mathcal{W}$ | 240 | 270 | 1h | 540 | 30 | 6h30min | 1000 | 1400 |
+
+### A.5 ${\mathcal{{RPW}}}_{2}$ runtimes according to graph size
+
+In Section 5.2, we were not able to extend the comparison of computation times between ${\mathcal{{RPW}}}_{2}$ and ${\mathcal{{SW}}}_{2}$ up to distributions of size ${10}^{8}$ under the same experimental conditions. The reason is that, beyond approximately 6 million points, we encountered a memory issue with ${\mathcal{{SW}}}_{2}$ on our Intel Core i9-9900KS (62 GB of RAM) computer. It appears to be an implementation issue in the POT toolbox. We therefore redid the whole experiment on a computer with more RAM but a less powerful processor, an Intel Xeon Gold 5218 with 2 TB of RAM. This amount of RAM is obviously overkill, but it allows us to avoid any issue with the $\mathcal{S}{\mathcal{W}}_{2}$ implementation. The results can be found in Figure 2. They confirm the complexity derived in Section 4.4: asymptotically, ${\mathcal{{RPW}}}_{2}$ scales better than $\mathcal{S}{\mathcal{W}}_{2}$ since in our settings $p\left( { = 5}\right) < M\left( { = {50}}\right)$ .
+
+### A.6 Additional details
+
+ENZYMES. (discrete) The learning rate ${0.999} \times {10}^{-2}$ was too large for the NCA loss on ENZYMES, so we used ${0.999} \times {10}^{-3}$ for this dataset. Accordingly, we set the number of epochs to 20. However, we allow an early stop at 10 epochs, meaning that the number of epochs $E$ becomes a hyper-parameter: $E = \{ {10},{20}\}$ .
+
+PROTEINS. The above remark also applies to PROTEINS (with continuous attributes). The learning rate was set to ${0.999} \times {10}^{-4}$ and the number of epochs $E$ becomes a hyper-parameter: $E = \{ {10},{20}\}$ .
+
+CUNEIFORM. Since it has 30 different labels, the batch size has been set to 64.
+
+ENZYMES. (continuous) It was trained in the same way as ENZYMES (discrete).
+
+Extended vector attributes. We used the concatenation of the continuous attributes and a one-hot encoding of the discrete attributes to build extended vector attributes. Since our method is an ML method, it is pertinent to give it all the information we have and let it select the most relevant part. In the case of PROTEINS, this choice was motivated by the fact that its node features are scalars, which is not suitable for the adaptation procedure, while in the case of ENZYMES (continuous) and CUNEIFORM, using only the continuous attributes leads to poor results. This choice gives SGCN more flexibility to build the metric while avoiding the use of more powerful but also more costly GCNs.
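+
+For clarity, a minimal sketch of this construction is given below (the function name is ours).
+
+```python
+import numpy as np
+
+def extended_attributes(continuous, labels, n_labels):
+    """Concatenate continuous node attributes with a one-hot encoding of discrete labels.
+    continuous: (n, q_c) float array; labels: (n,) integer array with values in [0, n_labels)."""
+    one_hot = np.eye(n_labels)[labels]                    # (n, n_labels) one-hot encoding
+    return np.concatenate([continuous, one_hot], axis=1)  # extended attributes, (n, q_c + n_labels)
+```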
+
+
+
+Figure 2: Run time comparisons.
+
+Table 6: Ablative experiment with $\mathcal{{FGW}}$ . Acc. is the accuracy. $\Delta$ is the difference in accuracy between the model of the column and the proposed SGML, whose results are in Table 1. A red negative (resp. green positive) number means that our model performs better (resp. worse). The ✘ symbol means that we obtained infinite distance values with the default settings of the FGW solver.
+
+| Dataset | $\mathcal{F}\mathcal{G}\mathcal{W}$ $\mathbf{{Acc}.}$ | $\Delta$ |
| BZR | 81.70 | - 3.91 |
| COX2 | 78.51 | - 1.28 |
| MUTAG | 83.16 | |
| NCI1 | ✘ | ✘ |
| PROTEINS | ✘ | ✘ |
| IMDB-B | 80.80 | + 11.9 |
| IMDB-M | ✘ | ✘ |
| ENZYMES | 70.83 | + 19.33 |
+
+### A.7 FGW with k-NN
+
+In the ablative study, we evaluated WWL with a k-NN to justify our design choices. Here, as a complement, we reproduce this experiment with $\mathcal{{FGW}}$ . $\mathcal{{FGW}}$ has a parameter $\alpha \in \left\lbrack {0,1}\right\rbrack$ which sets the trade-off between the structure and the node characteristics in the distance computation. We performed a small grid search over this parameter, $\alpha = \left\lbrack {{0.25},{0.5},{0.75}}\right\rbrack$ , except for the IMDB datasets where $\alpha = 1$ as in the original paper. The results can be found in Table 6. One can see that the results are mixed: FGW performs very well on some datasets and much less well on others. Moreover, one could probably obtain even better results with a much larger hyperparameter search, as in the original $\mathcal{{FGW}}$ paper. Still, the present comparison is fair since, first, the grid search on the proposed method was also relatively small and, second, these results must be analyzed keeping in mind the significant difference in computation time between the two methods (see Table 5). This also illustrates that fine hyperparameter tuning with such expensive methods is often not feasible on very large datasets.
+
+### A.8 Limitations of this work
+
+We discuss some of the limitations of the model and give some suggestions for improvements.
+
+GCNs. To generate the distributions associated with the graphs, the model relies on a Graph Convolutional Neural Network (GCN). Because of this, we can expect some sub-optimal behavior of the model in terms of expressiveness. Indeed, while they are very efficient at characterizing graphs locally, GCNs tend to lose efficiency when their depth increases. Although variations on their architectures have been proposed to solve this issue [38], it appears that most of these neural networks show similar results [6], and this defect seems intrinsic to their low-pass message passing scheme [39]. Therefore, new ways to efficiently characterize graphs at small and large scales could allow learning a better metric. In this regard, transformers are promising methods $\left\lbrack {{40},{41}}\right\rbrack$ . Their ability to characterize context at different scales has already been successfully exploited in natural language processing tasks, and many attempts have been made in recent years to adapt them to graphs. However, these networks are difficult to train, and their integration in SGML would not result in a simple and scalable metric learning model.
+
+Performance. The model allows us to obtain an improvement in classification with k-NN compared to current methods. It is thus well suited to real datasets where new inputs keep arriving (e.g., graph streams), since the method does not need to be fully re-trained. However, the performance with k-NN remains inferior to that reached with an SVM. Thus, in a critical real application (medical, for example) where performance is of utmost importance, it is preferable to use the SVM. Additional work would therefore be necessary to gain more performance with k-NN. This gain could be obtained by using a different GCN model to generate the features, as mentioned above. It could also be obtained by making the model more complex: for example, instead of considering uniform distributions over the GCN features, we could introduce an attention mechanism that modulates their weights in the distributions. This would give the model more flexibility to build the metric, but at the cost of a more expensive training.
+
+Theoretical. The work on the distance introduced here, $\mathcal{{RP}}{\mathcal{W}}_{2}$ , which is scalable and behaves well in our model, is currently methodological and driven by insight. As of today, we have not proven that it satisfies the triangle inequality, so it is not guaranteed to be a true metric. This aspect remains to be clarified.
+
+Opening to other tasks. Our work has been limited here to k-NN for supervised classification. However, other relevant classifiers with interesting properties, for which NCCML is not efficient enough, could be considered, e.g., the Nearest Class Mean [42]. Other tasks can also be considered, such as clustering (k-means, ...) and regression (k-NN regression, ...). We believe that the present work is a first step toward the goal of lowering the cost of many other tasks on graphs.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/GdvKsq3_eH/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/GdvKsq3_eH/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..42c27b6ecf295f39ec547e12c44a2b25cf171f73
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/GdvKsq3_eH/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,348 @@
+§ A SIMPLE WAY TO LEARN METRICS BETWEEN ATTRIBUTED GRAPHS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+The choice of good distances and similarity measures between objects is important for many machine learning methods. Therefore, many metric learning algorithms have been developed in recent years, mainly for Euclidean data in order to improve performance of classification or clustering methods. However, due to difficulties in establishing computable, efficient and differentiable distances between attributed graphs, few metric learning algorithms adapted to graphs have been developed despite the strong interest of the community. In this paper, we address this issue by proposing a new Simple Graph Metric Learning - SGML - model with few trainable parameters based on Simple Graph Convolutional Neural Networks - SGCN - and elements of Optimal Transport theory. This model allows us to build an appropriate distance from a database of labeled (attributed) graphs to improve the performance of simple classification algorithms such as $k$ -NN. This distance can be quickly trained while maintaining good performances as illustrated by the experimental study presented in this paper.
+
+§ 1 INTRODUCTION
+
+The attributed graph classification task has received much attention in recent years because graphs are well suited to represent a broad class of data in fields such as chemistry, biology, computer science, etc. $\left\lbrack {1,2}\right\rbrack$ . Advances were obtained in particular thanks to the development of graph convolutional neural networks (GCN) [3-6], on which many current graph learning models rely $\left\lbrack {7,8}\right\rbrack$ . GCN have attracted interest in recent years due to their low computational cost, their ability to extract task-specific information, and their ease of training and integration into various models. Some works tackle classification problems for attributed graphs by leveraging GCN: they characterize and build Euclidean representations of attributed graphs in either a supervised (e.g. $\left\lbrack {5,9}\right\rbrack$ ) or unsupervised (e.g. $\left\lbrack {{10},{11}}\right\rbrack$ ) way. Despite these achievements, classification methods based on the direct evaluation of similarity measures between graphs remain relevant since they can obtain similar, and in some cases even better, performance [12]. Currently, most of these measures are built in a task-agnostic way. However, because of the diversity of graph datasets, we cannot expect one similarity measure to be well suited for all of them, on every learning task.
+
+Having a way to adapt similarity measures to specific datasets and related tasks helps to improve their generality and their performance. One such approach is known as Metric Learning (hereafter ML), and has already been successful for Euclidean data. [13] is the first article to have proposed a Metric Learning method to improve a specific method ( $k$ -means clustering of Euclidean data). This first work sparked a strong interest in ML, which led to the development of a wide panel of methods [14-17] for Euclidean data. In contrast, few such methods exist for attributed graphs. Existing methods (e.g., [18]) rely on iterative procedures which are hardly differentiable, and this also makes scalability an issue. In the current classification literature, neural networks tend to dominate, yet building simple and learned (hence adapted to data and task) similarity measures between attributed graphs remains a relevant issue for at least two reasons: it allows simpler graph classification algorithms to be stepped up, and it also allows relying on graph kernels $\left\lbrack {1,{19}}\right\rbrack$ which are, as of today, as efficient on numerous tasks as models relying on graph neural networks.
+
+Our contribution. To address the issue of scalability in Metric Learning for graphs, we propose a novel graph ML method, called Simple Graph Metric Learning (SGML). In a first step, attributed graphs are encoded as distributions by combining the attributes and the topology thanks to a GCN. Then, relying on Optimal Transport, we define a novel similarity measure between these distributions, which we call Restricted Projected Wasserstein, $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ for short. $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ is differentiable and has a quasi-linear complexity in the distribution size (the number of bins, which is also the number of nodes); it removes certain limitations of the well-known Sliced Wasserstein distance (noted ${\mathcal{{SW}}}_{2}$ ) [20]. The ${\mathcal{{RPW}}}_{2}$ similarity measure is then used to build a parametric distance between attributed graphs, which therefore also has a quasi-linear complexity in the graph size (the number of nodes). The similarity measure proposed in SGML has a limited number of parameters, which helps our model scale efficiently. Next, we focus on the k-nearest neighbors (k-NN) method for classification. An advantage of $\mathrm{k} - \mathrm{{NN}}$ is that, if the learning set grows, it can exploit it at near-zero additional cost (since it only requires storing the new data), contrary to SVMs which would require retraining on the whole data (a task quadratic in its size). Since many real datasets (e.g., graphs from social networks, or graphs used to detect anomalies in computer networks) are expected to grow in size, this property is important for continual learning and, from an energetic and environmental stance, to avoid costly retraining. In order to use $\mathrm{k} - \mathrm{{NN}}$ and train the distance, we propose a novel softmax-based loss function over class point clouds. It appears to be novel in the context of graph ML and it leads to better results in the explored setting than the usual ML losses (i.e., those specifically built to improve k-NN for Euclidean data). Our experiments show that SGML learns a metric that significantly increases k-NN performance, compared to state-of-the-art graph similarity measures.
+
+The article is organized as follows. In Section 2, we discuss related works on graph metric learning and on optimal transport theory applied to the construction of similarity measures between attributed graphs. Section 3 provides useful notations and definitions needed for the present work. The SGML model is defined in Section 4. Finally, in Section 5, we present various numerical experiments assessing the efficiency of our model. These experiments show that, in various conditions, SGML builds accurate distances with performance competitive with the state of the art in graph classification, both in the context of $\mathrm{k} - \mathrm{{NN}}$ and of kernel-based methods, despite its limited number of parameters. A main advantage of the proposed SGML method is also its simplicity, leading to a scalable and efficient method for graph Metric Learning. We conclude in Section 6.
+
+Societal Impact The contribution is essentially fundamental, and we do not see any direct and immediate potential negative societal impact. Conversely, the scalability of the method will help to alleviate the energy consumption of ML on graphs.
+
+§ 2 RELATED WORKS
+
+§ 2.1 GRAPH METRIC LEARNING
+
+Regarding ML for graphs, we can notably mention a series of works [21-23] that consist in learning a metric through the Graph Edit Distance (GED). The major disadvantage of these methods is the complexity of the computation of the GED, which can only be done for very small graphs.
+
+Following the introduction of GCN, an approach based on Siamese neural networks was proposed in [24] for the study of brain connectivity signals, represented as graph signals. In this specific case, all graphs are the same and they differ only by the signal they carry. This makes the method not applicable to most datasets. More recently, models without neural networks have been proposed: [18] presents Interpretable Graph Metric Learning, which builds a similarity measure by counting the most relevant subgraphs to perform a classification task. However, this method cannot handle large graphs. [25] proposes to learn a kernel based on graph persistent homology. The resulting model is also efficient, but it has the disadvantage of not being able to deal with discrete features in graphs.
+
+As seen, existing works on graph ML are either limited by the assumptions made to build their model, or too costly, or not suitable to actually leverage simple (classification) algorithms and increase their performance. To obtain a simple graph ML procedure that is not itself too costly, we need a similarity measure between graphs that can be computed quickly. To construct such a distance, recent works suggest that Optimal Transport is an appropriate tool.
+
+§ 2.2 OPTIMAL TRANSPORT FOR GRAPHS
+
+Optimal Transport (OT) has been put forward as a good approach to quickly compute similarity measures between graphs, relying on the fact that it provides tools for computing metrics between distributions [26]. Recent studies have shown that efficient distances and kernels for graphs can be built from this theory. Fused Gromov-Wasserstein [27] is such a metric (a distance in the mathematical sense) using OT to compare graphs through both their structures and their attributes. Notably, it allows one to compute barycenters of a set of graphs, and interpolations between graphs. Experimentally, it leads to good results in classification. Its bi-quadratic complexity in the size of the graphs is its main drawback, even if it can be reduced to a cubic cost with entropic regularization.
+
+In [28] an OT based approach to compare graphs having the same number of nodes is developed. It uses OT between the signals on the graphs (and not the structures). Thanks to a Gaussian distribution hypothesis, the analytical expression of the OT between these signals is derived. While the model has good results, it is limited to graphs having the same size, and a task of node alignment (which has a cubic complexity) must be performed.
+
+[12] has proposed the Wasserstein Weisfeiler-Lehman (WWL) method which can be seen as an evolution of the previous one [28] without these two hypotheses, neither on the size of the graphs nor on the nature of the signals they carry. In addition, a non trainable GCN is used to build task-agnostic characteristics which are then compared through OT. This pseudo-metric is then used to build an efficient kernel for graph classification. Unfortunately this model requires the computation of the optimal transport map which has a cubic cost (or quadratic with entropy regularization).
+
+While these previous models are efficient on classification tasks, their complexity remains high, and they are not fast enough (being quadratic or more) to be incorporated in a Metric Learning framework. Part of our contribution is to provide such a fast optimal-transport-based similarity measure for attributed graphs, with no restriction on the nature of the graphs we compare.
+
+§ 3 BACKGROUND ON METRIC LEARNING AND OPTIMAL TRANSPORT
+
+Notations. Let us consider a finite dataset $\mathbb{X} = {\left\{ {\mathbf{x}}_{i}\right\} }_{i = 1}^{\left| \mathbb{X}\right| }$ whose elements are in ${\mathbb{R}}^{q}$ . The dataset comes with a set of labels $\mathbb{E} = {\left\{ {e}_{i}\right\} }_{i = 1}^{\left| \mathbb{E}\right| }$ and a labeling function $\mathcal{E} : \mathbb{X} \rightarrow \mathbb{E}$ . We note $\mathcal{P}\left( \mathbb{X}\right) \subset \mathcal{P}\left( {\mathbb{R}}^{q}\right)$ the set of discrete probability distributions over $\mathbb{X} \subset {\mathbb{R}}^{q}$ . ${\delta }_{\mathbf{x}}$ is the Dirac distribution centered at $\mathbf{x}$ . We note $d$ a metric on $\mathbb{X}$ . It verifies the following properties: Symmetry - $\forall \left( {\mathbf{x},\mathbf{y}}\right) \in {\mathbb{X}}^{2},d\left( {\mathbf{x},\mathbf{y}}\right) =$ $d\left( {\mathbf{y},\mathbf{x}}\right)$ ; Identity of indiscernibles - $\forall \left( {\mathbf{x},\mathbf{y}}\right) \in {\mathbb{X}}^{2},d\left( {\mathbf{x},\mathbf{y}}\right) = 0 \Leftrightarrow \mathbf{x} = \mathbf{y}$ ; Triangle inequality - $\forall \left( {\mathbf{x},\mathbf{y},\mathbf{z}}\right) \in {\mathbb{X}}^{3},d\left( {\mathbf{x},\mathbf{z}}\right) \leq d\left( {\mathbf{x},\mathbf{y}}\right) + d\left( {\mathbf{y},\mathbf{z}}\right)$ . $d$ is referred to as a pseudo-metric when it satisfies these properties except the identity of indiscernibles. "Distance" will also be used herein, in an informal way, as a synonym for a measure of similarity.
+
+§ 3.1 LEARNING A METRIC
+
+For ML, we suppose that a dataset $\mathbb{X}$ is given with the knowledge of two sets: $\mathcal{S}$ (similar) and $\mathcal{D}$ (dissimilar), containing pairs of some elements of $\mathbb{X}$ . The goal is to build a parametric distance ${d}_{\theta }$ in such a way that the pairs of elements in $\mathcal{S}$ should be close while the pairs in $\mathcal{D}$ should be far away ${}^{1}$ . These sets are often built from the labeling function of $\mathbb{X}$ such that $\left\{ {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right\} \in \mathcal{S}$ if $\mathcal{E}\left( {\mathbf{x}}_{i}\right) = \mathcal{E}\left( {\mathbf{x}}_{j}\right)$ otherwise $\left\{ {{\mathbf{x}}_{i},{\mathbf{x}}_{j}}\right\} \in \mathcal{D}$ . An optimization problem depending on ${d}_{\theta },\mathcal{S}$ and $\mathcal{D}$ is then defined with a loss function $\mathcal{F}$ suitable for the purpose:
+
+$$
+\mathop{\max }\limits_{\theta }\mathcal{F}\left( {{d}_{\theta },\mathcal{S},\mathcal{D}}\right) \tag{1}
+$$
+
+We denote ${\theta }^{ * }$ the optimal parameters. The interest of building such a distance ${d}_{{\theta }^{ * }}$ with respect to the information in $\mathcal{D}$ and $\mathcal{S}$ lies in the fact that $\mathbb{X}$ is often included in a larger set containing elements which are not labeled. The goal is that the obtained distance ${d}_{{\theta }^{ * }}$ helps learning algorithms find these missing labels. A part of our work is to introduce into the metric learning literature a new loss function $\mathcal{F}$ , suitable for the problem of metric learning for graphs.
+
+§ 3.2 OPTIMAL TRANSPORT
+
+Let us consider two finite datasets $\mathbb{X},{\mathbb{X}}^{\prime }$ , and two distributions $\mu \in \mathcal{P}\left( \mathbb{X}\right)$ and $\nu \in \mathcal{P}\left( {\mathbb{X}}^{\prime }\right)$ on these sets:
+
+$$
+\mu = \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathbb{X}}}{a}_{i}{\delta }_{{\mathbf{x}}_{i}}\text{ and }\nu = \mathop{\sum }\limits_{{{\mathbf{x}}_{i}^{\prime } \in {\mathbb{X}}^{\prime }}}{b}_{i}{\delta }_{{\mathbf{x}}_{i}^{\prime }} \tag{2}
+$$
+
+${}^{1}$ Some algorithms use a third type of information, which consists of triples indicating that a given element must be closer to such element than to another element [16].
+
+with ${a}_{i} \geq 0,{b}_{i} \geq 0,n = \left| \mathbb{X}\right| ,{n}^{\prime } = \left| {\mathbb{X}}^{\prime }\right|$ , and $\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i} = 1,\mathop{\sum }\limits_{{i = 1}}^{{n}^{\prime }}{b}_{i} = 1$ . Given a continuous cost function $c : {\mathbb{R}}^{q} \times {\mathbb{R}}^{q} \rightarrow {\mathbb{R}}_{ + }$ , one can build from optimal transport a metric between distributions with support in ${\mathbb{R}}^{q}$ , the so-called 2-Wasserstein distance ${\mathcal{W}}_{2}$ :
+
+$$
+{\mathcal{W}}_{2}\left( {\mu ,\nu }\right) = \mathop{\inf }\limits_{{{\pi }_{i,j} \in {\Pi }_{a,b}}}{\left( \mathop{\sum }\limits_{{i,j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i,j}c{\left( {\mathbf{x}}_{i},{\mathbf{x}}_{j}^{\prime }\right) }^{2}\right) }^{\frac{1}{2}} \tag{3}
+$$
+
+${\Pi }_{a,b}$ is the set of joint distributions $\pi = \mathop{\sum }\limits_{{i,j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i,j}{\delta }_{\left( {\mathbf{x}}_{i},{\mathbf{x}}_{j}^{\prime }\right) }$ on $\mathbb{X} \times {\mathbb{X}}^{\prime }$ whose marginals are $\mu \left( \cdot \right) = \mathop{\sum }\limits_{{{\mathbf{x}}_{j}^{\prime } \in {\mathbb{X}}^{\prime }}}\pi \left( {\cdot ,{\mathbf{x}}_{j}^{\prime }}\right)$ and $\nu \left( \cdot \right) = \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathbb{X}}}\pi \left( {{\mathbf{x}}_{i}, \cdot }\right)$ . We note ${\pi }^{ * } \in {\Pi }_{a,b}$ the optimal distribution (or coupling, or map) giving the solution of this problem. The cost function $c$ is taken as the 2-norm, $c\left( {{\mathbf{x}}_{i},{\mathbf{x}}_{j}^{\prime }}\right) = {\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}^{\prime }\end{Vmatrix}}_{2}$ , hence leading to the 2-Wasserstein distance. This defines an efficient way to compare distributions. One could easily use differentiable versions (w.r.t. the parameters of a distribution) by considering the 1-Wasserstein distance [29] or the entropic regularization of ${\mathcal{W}}_{2}\left\lbrack {{26},{30}}\right\rbrack$ . Still, they are not fully suitable for metric learning because of the complexity, initially in ${}^{2}$ $O\left( {{n}^{3}\log n}\right)$ , or $O\left( {{n}^{2}\log \left( n\right) }\right)$ thanks to the Sinkhorn algorithm for the entropic regularization [26].
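+
+As an illustration, ${\mathcal{W}}_{2}$ between two small discrete distributions can be computed with the POT library roughly as follows (a sketch; we assume POT's default squared-Euclidean cost in ot.dist).
+
+```python
+import numpy as np
+import ot  # POT: Python Optimal Transport
+
+rng = np.random.default_rng(0)
+X, Xp = rng.normal(size=(4, 2)), rng.normal(size=(6, 2))  # supports of mu and nu
+a, b = np.full(4, 1 / 4), np.full(6, 1 / 6)                # uniform weights
+
+M = ot.dist(X, Xp)                # pairwise squared Euclidean costs c(x_i, x'_j)^2
+w2 = np.sqrt(ot.emd2(a, b, M))    # exact OT cost of Eq. (3), then square root
+print(w2)
+```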
+
+Sliced Wasserstein distance $\left( {\mathcal{{SW}}}_{2}\right)$ . In order to drastically reduce the cost of computing the OT, [20] proposed another metric, ${\mathcal{{SW}}}_{2}$ , which consists in comparing the measures $\mu$ and $\nu$ via their one-dimensional projections. Let $\mathbf{\theta } \in {\mathbb{S}}^{q - 1}$ be a vector of the unit sphere of ${\mathbb{R}}^{q}$ . The distributions $\mu$ and $\nu$ projected along $\mathbf{\theta }$ are denoted ${\mu }_{\mathbf{\theta }} = \mathop{\sum }\limits_{{{\mathbf{x}}_{i} \in \mathbb{X}}}{a}_{i}{\delta }_{{\mathbf{x}}_{i} \cdot \mathbf{\theta }}$ and ${\nu }_{\mathbf{\theta }} = \mathop{\sum }\limits_{{{\mathbf{x}}_{i}^{\prime } \in {\mathbb{X}}^{\prime }}}{b}_{i}{\delta }_{{\mathbf{x}}_{i}^{\prime } \cdot \mathbf{\theta }}$ . $\mathcal{S}{\mathcal{W}}_{2}$ is defined as follows:
+
+$$
+{\mathcal{{SW}}}_{2}{\left( \mu ,\nu \right) }^{2} = {\int }_{{\mathbb{S}}^{q - 1}}{\mathcal{W}}_{2}{\left( {\mu }_{\mathbf{\theta }},{\nu }_{\mathbf{\theta }}\right) }^{2}d\mathbf{\theta } \tag{4}
+$$
+
+The advantage of this formulation stems from the quasi-linearity in $n$ of the computational cost of the ${\mathcal{W}}_{2}$ distance between one-dimensional distributions. The integral can be estimated via Monte-Carlo sampling. The complexity is then (when ${n}^{\prime } \leq n$ ) at most $O\left( {M\left( {n\log n}\right) }\right)$ with $M$ the number of samples drawn (uniformly) from ${\mathbb{S}}^{q - 1}$ . However, [31] shows that $\mathcal{S}{\mathcal{W}}_{2}$ is biased downwards compared to ${\mathcal{W}}_{2}$ , since the projection vector $\mathbf{\theta }$ determines at the same time the OT plans and the cost of transport; this leads to a less effective distance.
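+
+In the simplest case of two uniform distributions with the same number of points, the Monte-Carlo estimate of Eq. (4) can be sketched as follows (an illustration written for this document).
+
+```python
+import numpy as np
+
+def sw2_uniform(X, Y, M=50, seed=0):
+    """Monte-Carlo estimate of SW_2 between two uniform distributions whose supports
+    X and Y are (n, q) arrays with the same n; M directions are drawn on S^{q-1}."""
+    rng = np.random.default_rng(seed)
+    q = X.shape[1]
+    total = 0.0
+    for _ in range(M):
+        theta = rng.normal(size=q)
+        theta /= np.linalg.norm(theta)                   # uniform direction on the unit sphere
+        xp, yp = np.sort(X @ theta), np.sort(Y @ theta)  # sorted 1D projections
+        total += np.mean((xp - yp) ** 2)                 # 1D W_2^2: sorted points matched in order
+    return np.sqrt(total / M)
+```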
+
+Projected Wasserstein distance $\left( {\mathcal{{PW}}}_{2}\right)$ . When $n = {n}^{\prime }$ , ${\mathcal{{PW}}}_{2}$ was introduced by [31] in answer to the previous limitations. $\mathcal{P}{\mathcal{W}}_{2}$ is computed similarly to $\mathcal{S}{\mathcal{W}}_{2}$ , but for each projection $\mathbf{\theta }$ , the one-dimensional optimal transport plan ${\pi }^{\mathbf{\theta }, * }$ between ${\mu }_{\mathbf{\theta }}$ and ${\nu }_{\mathbf{\theta }}$ is used with the original distributions $\mu$ and $\nu$ to compute the transport cost:
+
+$$
+\mathcal{P}{\mathcal{W}}_{2}{\left( \mu ,\nu \right) }^{2} = {\int }_{{\mathbb{S}}^{q - 1}}\mathop{\sum }\limits_{{i,j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i,j}^{\mathbf{\theta }, * }{\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}^{\prime }\end{Vmatrix}}_{2}^{2}d\mathbf{\theta } \tag{5}
+$$
+
+They show that this formulation gives a metric, has good properties, and is more suitable for several learning tasks, e.g. generative tasks or reinforcement learning. Unfortunately, their result holds only for uniform distributions of the same size. We extend the method to distributions of different sizes.
+
+§ 4 SIMPLE GRAPH METRIC LEARNING
+
+Let us consider a dataset $\mathbb{G}$ of attributed graphs with labeling set $\mathbb{E}$ and labeling function $\mathcal{E}$ . For a given graph $\mathcal{G} \in \mathbb{G}$ having $\mathbf{A}$ as adjacency matrix, we call $n$ the number of nodes of the graph. Each node $i$ of $\mathcal{G}$ carries features $\mathbf{X}\left( {i, : }\right) \in {\mathbb{R}}^{q}$ ; thus $\mathbf{X} \in {\mathbb{R}}^{n \times q}$ is the attribute matrix of the graph.
+
+§ 4.1 FROM GRAPH TO DISTRIBUTION
+
+Previous works using OT (pseudo-)metrics have shown that comparing graphs through the signals they carry is a good way to compare them; we follow this path. The first step of our learning method consists in the generation of features that are jointly representative of the structure of each graph $\mathcal{G}$ and of the attributes $\mathbf{X}$ of its nodes. We use for this purpose Simple GCN [6], a streamlined version of GCN in which all the intermediate non-linearities have been removed.
+
+${}^{2}$ When $n = {n}^{\prime }$ .
+
+This choice is dictated by the need to strongly reduce the number of trainable parameters, and it accelerates the training without degrading its performance compared to other GCNs. This Simple GCN creates the features as:
+
+$$
+\mathbf{Y} = \operatorname{ReLU}\left( {{\widetilde{\mathbf{A}}}^{r}\mathbf{X}\mathbf{\Theta }}\right) \tag{6}
+$$
+
+where $\mathbf{X} \in {\mathbb{R}}^{n \times q}$ are the initial attributes of the nodes, $\widetilde{\mathbf{A}} = \mathbf{A} + {\mathbf{I}}_{n}$ (where ${\mathbf{I}}_{n}$ is the identity matrix of ${\mathbb{R}}^{n}$ ) and $\mathbf{Y} \in {\mathbb{R}}^{n \times p}$ are the features computed by the SGCN. The neighborhood exploration depth $r$ of this GCN is one of the hyperparameters of the method, along with the dimension $p$ of the extracted features $\mathbf{Y}$ . The coefficients of the matrix $\mathbf{\Theta } \in {\mathbb{R}}^{q \times p}$ of this GCN are the (only) trainable weights of the method. We will always choose $p \leq q$ , so the method has at most ${q}^{2}$ trainable parameters. From the extracted features $\mathbf{Y}$ , we define a uniform distribution whose support is the nodes' characteristics:
+
+$$
+{\mathcal{D}}_{\mathbf{\Theta }}\left( {\mathcal{G},\mathbf{X}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}\frac{1}{n}{\delta }_{\mathbf{Y}\left( {i, : }\right) } \tag{7}
+$$
+
+This first step is similar to WWL [12], except that we consider a trainable GCN, $\Theta$ being the trainable parameters. In eq. (7), both the structure $\mathcal{G}$ and the attributes $\mathbf{X}$ are accounted for. Next, we propose a novel way to evaluate the similarity between attributed graphs using these distributions.
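+
+A minimal numpy sketch of Eqs. (6)-(7) is given below (an illustration written for this document; the actual model is implemented in tensorflow so that $\mathbf{\Theta }$ can be trained by gradient descent).
+
+```python
+import numpy as np
+
+def sgcn_distribution(A, X, Theta, r):
+    """Eqs. (6)-(7): SGCN features Y = ReLU(A_tilde^r X Theta) and the uniform
+    distribution supported on the rows of Y.
+    A: (n, n) adjacency matrix, X: (n, q) node attributes, Theta: (q, p) trainable weights."""
+    n = A.shape[0]
+    A_tilde = A + np.eye(n)                                              # add self-loops
+    Y = np.maximum(np.linalg.matrix_power(A_tilde, r) @ X @ Theta, 0.0)  # ReLU
+    weights = np.full(n, 1.0 / n)                                        # uniform weights of Eq. (7)
+    return Y, weights
+```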
+
+§ 4.2 FROM DISTRIBUTIONS TO DISTANCE
+
+The distances between graphs are computed as a distance between their representative distributions (Eq. (7)) with OT; specifically, we propose a novel one, called Restricted Projected Wasserstein (and noted ${\mathcal{{RPW}}}_{2}$ ) extending ${\mathcal{{PW}}}_{2}$ previously introduced in [31].
+
+Restricted Projected Wasserstein. In [31], $\mathcal{P}{\mathcal{W}}_{2}$ is only defined for uniform distributions with $n = {n}^{\prime }$ . We extend it to the case $n \neq {n}^{\prime }$ . It is then much more delicate to show that $\mathcal{P}{\mathcal{W}}_{2}$ remains a metric on the space of uniform distributions, because the triangle inequality cannot be derived as easily when $n \neq {n}^{\prime }$ . We did not find numerical evidence that this inequality is violated; we only found a few triplets where, numerically, the inequality was not satisfied because of the limits of numerical precision. Further work would be needed to answer this question.
+
+In order to compute this quantity, we could rely on Monte-Carlo sampling, and the complexity would be $O\left( {{Mpn}\log \left( n\right) }\right)$ . This can be prohibitive due to the term ${pM}$ . In order to obtain a scalable model, we restrict the projections to the basis vectors ${\left\{ {\mathbf{u}}_{k}\right\} }_{k = 1}^{p}$ of ${\mathbb{R}}^{p}$ only. This gives a new distance, called Restricted $\mathcal{P}{\mathcal{W}}_{2}$ or $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ for short:
+
+$$
+\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}{\left( \mu ,\nu \right) }^{2} = \frac{1}{p}\mathop{\sum }\limits_{{k = 1}}^{p}\mathop{\sum }\limits_{{i,j = 1}}^{{n,{n}^{\prime }}}{\pi }_{i,j}^{{\mathbf{u}}_{k}, * }{\begin{Vmatrix}{\mathbf{x}}_{i} - {\mathbf{x}}_{j}^{\prime }\end{Vmatrix}}_{2}^{2} \tag{8}
+$$
+
+This distance is defined by a deterministic formula, which avoids the variability introduced by Monte-Carlo sampling. However, the drawback is that for a given ${\mathbf{u}}_{k}$ , many plans ${\pi }^{{\mathbf{u}}_{k}, * }$ may be optimal and they would lead to different values in the computation of (8). In order to have an unambiguous and deterministic definition, we add a deterministic way to choose among the admissible optimal transport maps: in our case, we rely on the deterministic implementation of argsort in tensorflow, which determines the chosen optimal transport map. The complexity of ${\mathcal{{RPW}}}_{2}$ is $O\left( {{p}^{2}n\log \left( n\right) }\right)$ , which saves a factor $\frac{M}{p}$ compared to $\mathcal{P}{\mathcal{W}}_{2}$ ; this factor is often greater than 10.
+
+Finally, the parametric distance ${d}_{\Theta }^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ between two attributed graphs $\left( {\mathcal{G},\mathbf{X}}\right)$ and $\left( {{\mathcal{G}}^{\prime },{\mathbf{X}}^{\prime }}\right)$ is defined as:
+
+$$
+{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}\left( {\mathcal{G},{\mathcal{G}}^{\prime }}\right) = \mathcal{R}\mathcal{P}{\mathcal{W}}_{2}\left( {{\mathcal{D}}_{\mathbf{\Theta }}\left( {\mathcal{G},\mathbf{X}}\right) ,{\mathcal{D}}_{\mathbf{\Theta }}\left( {{\mathcal{G}}^{\prime },{\mathbf{X}}^{\prime }}\right) }\right) \tag{9}
+$$
+
+All the experiments will be conducted using this distance, except in the ablative study where we report the use of $\mathcal{S}{\mathcal{W}}_{2}$ .
+
+§ 4.3 LOSS FOR TRAINING DISTANCE: THE NEAREST CLASS CLOUD METRIC LEARNING
+
+The last element to complete our model is to define the loss function $\mathcal{F}$ for Eq. (1). We propose here a loss function for the purpose of improving the $k$ -nearest neighbors method. Actually there are classical losses already efficient for this purpose: one can notably mention Large Margin Nearest Neighbor (LMNN) [15] and Neighbourhood Component Analysis (NCA)[14].
+
+Algorithm 1 SGML: High-level algorithm to build ${d}_{{\Theta }^{ * }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ .
+
+Require: A dataset of attributed graphs $\mathbb{G}$ and their labeling function $\mathcal{E}$ .
+
+ for each epoch $e \in \{ 1,\ldots ,E\}$ do
+
+ Build a partition: ${ \cup }_{k}{B}_{k} = \mathbb{G}$ such that ${B}_{k} \cap {B}_{{k}^{\prime }} = \varnothing$ .
+
+ for each batch ${B}_{k}$ do
+
+ for each graph pair $\left( {\mathcal{G},{\mathcal{G}}^{\prime }}\right) \in {B}_{k} \times {B}_{k}$ do
+
+ Compute distance ${d}_{\Theta }^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}\left( {\mathcal{G},{\mathcal{G}}^{\prime }}\right)$ (Eq. (9))
+
+ Compute $- {\mathcal{F}}_{\mathbf{\Theta }}^{{B}_{k}}$ (Eq. (11)) and apply an iteration of Adam descent algorithm.
+
+ return all pairwise distances ${d}_{{\Theta }^{ * }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ in $\mathbb{G}$ .
+
+The optimization is done using a gradient descent algorithm. Since computing all pairwise distances between graphs at each step of gradient descent would be intractable for large datasets, we have to train our loss in a batch way. In this context, LMNN may not be relevant since this method works locally and a batch is often not representative of the true neighborhood of an element of the dataset. On the contrary, the NCA loss can be trained in a batch way, as it is a probability model which tends to attract elements with the same label toward each other, wherever they are. However, preliminary experiments showed only a slight improvement of the k-NN with NCA. Therefore, we constructed a new loss which ensures the same condition in a different way (see Appendix A.1) and which experimentally works better in our setting (see the ablative study, Sec. 5.4). The model is called Nearest Class Cloud Metric Learning (NCCML); the probability of a graph $\mathcal{G}$ being labeled by $e \in \mathbb{E}$ depends on its distance to the point cloud of that class (hence the name of the method):
+
+$$
+{p}_{\mathbf{\Theta }}\left( {e \mid \mathcal{G}}\right) = \frac{\exp \left( {-\mathop{\sum }\limits_{\substack{{{\mathcal{G}}_{i} \in \mathbb{G}} \\ {\mathcal{E}\left( {\mathcal{G}}_{i}\right) = e} }}{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}{\left( \mathcal{G},{\mathcal{G}}_{i}\right) }^{2}}\right) }{\mathop{\sum }\limits_{{{e}^{\prime } \in \mathbb{E}}}\exp \left( {-\mathop{\sum }\limits_{\substack{{{\mathcal{G}}_{i} \in \mathbb{G}} \\ {\mathcal{E}\left( {\mathcal{G}}_{i}\right) = {e}^{\prime }} }}{d}_{\mathbf{\Theta }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}{\left( \mathcal{G},{\mathcal{G}}_{i}\right) }^{2}}\right) }. \tag{10}
+$$
+
+Given this probability, we want to construct the distance ${d}_{\Theta }^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}$ maximizing the probability that the labeled graphs in the dataset have the correct labels, which leads to solving the following problem:
+
+$$
+\mathop{\max }\limits_{\mathbf{\Theta }}{\mathcal{F}}_{\Theta }^{\mathbb{G}} = \mathop{\max }\limits_{\mathbf{\Theta }}\mathop{\sum }\limits_{{{\mathcal{G}}_{i} \in \mathbb{G},\mathcal{E}\left( {\mathcal{G}}_{i}\right) \neq \varnothing }}\log {p}_{\mathbf{\Theta }}\left( {\mathcal{E}\left( {\mathcal{G}}_{i}\right) \mid {\mathcal{G}}_{i}}\right) . \tag{11}
+$$
+
+By maximizing this loss, we construct a distance which, for each element, favors its relative distance to elements of the same labels compared to those of different labels. This should favor k-NN, especially when $k > 1$ . We will show in the experiments that, in this specific context, NCCML exhibits better performance than NCA. More details on NCCML can be found in Appendix A.1.
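+
+Given a batch of graphs, the NCCML objective of Eqs. (10)-(11) can be sketched from the matrix of squared pairwise distances as follows (a numpy illustration written for this document; the training code relies on tensorflow and automatic differentiation).
+
+```python
+import numpy as np
+
+def nccml_objective(D2, labels):
+    """F for a batch: D2 is the (B, B) matrix of squared distances d_Theta^2 within the
+    batch, labels the (B,) integer class labels. Returns the objective to be maximized."""
+    classes = np.unique(labels)
+    # s[g, e] = sum of squared distances from graph g to the point cloud of class e (Eq. (10))
+    s = np.stack([D2[:, labels == e].sum(axis=1) for e in classes], axis=1)
+    s = s - s.min(axis=1, keepdims=True)                          # shift for numerical stability
+    log_p = -s - np.log(np.exp(-s).sum(axis=1, keepdims=True))    # log-softmax over classes
+    idx = np.searchsorted(classes, labels)                        # column of each graph's true class
+    return log_p[np.arange(len(labels)), idx].sum()               # Eq. (11)
+```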
+
+§ 4.4 COMPUTATIONAL ASPECTS
+
+We will test our metric learning method with both $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ and $\mathcal{S}{\mathcal{W}}_{2}$ .
+
+Optimization. In terms of optimization, we can differentiate the one-dimensional Wasserstein distance directly with respect to the distribution parameters; thus we can also differentiate through the approximation of ${\mathcal{{SW}}}_{2}$ (Eq. (4)) and through ${\mathcal{{RPW}}}_{2}$ (Eq. (8)). Automatic differentiation can be used on these expressions (see [26]). We implemented our algorithm in tensorflow ${}^{3}$ . The minimization of the loss is performed by batch stochastic gradient descent (in particular with the Adam optimizer [32]).
+
+Parameters. The following default parameters are used (unless otherwise indicated in the text): learning rate ${l}_{r} = {0.999} * {10}^{-2}$ , number of epochs $E = {10}$ , batch size $B = 8$ , and the GCN output features size $p = \min \left( {5,q}\right)$ . For experiments involving $\mathcal{S}{\mathcal{W}}_{2}$ , the sampling number is set to $M = {50}$ which is a common value used in the literature.
+
+${}^{3}$ The implementation can be found in the supplementary material.
+
+Time complexity. Theoretically, the training time is negligible compared to the computation of all pairwise distances; therefore we focus on this last step for the time complexity analysis (see Appendix A.4 for runtimes per dataset). If we denote $\widetilde{n}$ the average number of nodes of a graph, the total complexity of this computation with $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ (resp. $\mathcal{S}{\mathcal{W}}_{2}$ ) is $O\left( {\left| \mathbb{G}\right| \widetilde{n}\left( {{p}^{2} + \widetilde{n}{rp}}\right) + {\left| \mathbb{G}\right| }^{2}{p}^{2}\widetilde{n}\log \widetilde{n}}\right)$ (resp. $O\left( {\left| \mathbb{G}\right| \widetilde{n}\left( {{p}^{2} + \widetilde{n}{rp}}\right) + {\left| \mathbb{G}\right| }^{2}{pM}\widetilde{n}\log \widetilde{n}}\right)$ ). The first term corresponds to the application of the GCN and the second to the computation of the distances. In practice, for not too large $\widetilde{n}$ , a quadratic implementation exploiting vectorization can be faster (see Section 5.2). Furthermore, one can see that the GCN becomes the limiting element for scaling (in graph size); in practice, the sparsity of the adjacency matrix and GPU optimizations limit this problem. It remains an active research topic to determine the least expensive ways to characterize the nodes [33, 34].
+
+Spatial complexity. The quadratic implementation mentioned above requires storing in memory a tensor of size $O\left( {{\widetilde{n}}^{2}p}\right)$ for $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ and $O\left( {{\widetilde{n}}^{2}M}\right)$ for $\mathcal{S}{\mathcal{W}}_{2}$ . The sequential implementation has an $O\left( \widetilde{n}\right)$ spatial complexity (more details on these implementations are given in Appendix A.2). In any case, for both implementations and for the graph datasets considered, SGML is very cheap in terms of memory consumption with regard to current GPU capabilities.
+
+§ 5 EXPERIMENTS
+
+§ 5.1 DATASETS
+
+
+Figure 1: Run time comparisons.
+
+For the experiments, we use a large panel of datasets from the literature [2] ${}^{4}$ : ENZYMES, PROTEINS, IMDB-B, IMDB-M, MUTAG, BZR, COX2 and NCI1. More information on these datasets can be found in Appendix A.3. Additional details about the following experiments are given in Appendix A.6 for reproducibility. When a dataset has discrete features, they are one-hot encoded.
+
+§ 5.2 ${\mathcal{{RPW}}}_{2}$ RUNNING TIMES
+
+We generated uniform distributions with random (normally drawn) supports in ${\mathbb{R}}^{5}$ , of sizes ranging from ${10}^{1}$ to ${10}^{6}$ .
+
+These distribution sizes correspond to graph sizes $n$ (numbers of nodes). The choice of ${\mathbb{R}}^{5}$ is motivated by the usual good performance of ML when performed in small dimension. We compare the running times needed to compute the distance between these distributions with ${\mathcal{W}}_{2}$ , ${\mathcal{W}}_{2}^{e}$ ( ${\mathcal{W}}_{2}$ with entropic regularization parameter $\gamma = {100}$ ) and $\mathcal{S}{\mathcal{W}}_{2}$ using the POT [35] library, and with $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ . For $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ we compare both the quadratic and the sequential (numpy) implementations we developed. The results can be found in Figure 1. Additional details and results are given in Appendix A.5.
+
+As expected, $\mathcal{S}{\mathcal{W}}_{2}$ and $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ are the methods that scale best: we obtain the expected quasi-linear $O\left( {n\log n}\right)$ behavior for both methods. As soon as $n > {10}^{4}$ , ${\mathcal{{SW}}}_{2}$ and ${\mathcal{{RPW}}}_{2}$ allow us to compute distances between distributions several orders of magnitude larger in the same time as ${\mathcal{W}}_{2}$ and ${\mathcal{W}}_{2}^{e}$ . Although ${\mathcal{{SW}}}_{2}$ and ${\mathcal{{RPW}}}_{2}$ scale mostly in the same way, ${\mathcal{{SW}}}_{2}$ seems a bit faster than ${\mathcal{{RPW}}}_{2}$ . However, we will show in the next experiment (Sec. 5.4) that ${\mathcal{{RPW}}}_{2}$ builds better metrics than ${\mathcal{{SW}}}_{2}$ . Finally, we note that the quadratic implementation is the fastest for samples with fewer than 200 points, which is the case for the datasets considered in the following experiments.
+
+§ 5.3 SUPERVISED CLASSIFICATION
+
+We evaluate the method in two ways: by using k-NN directly on the computed distances, and by using an SVM with a custom kernel built from the proposed model. We then compare the method to several (pseudo-)metrics and distances from the literature such as NetLSD [36], WWL [12] and FGW [27].
+
+${}^{4}$ http://graphkernels.cs.tu-dortmund.de
+
+Table 1: Results of the main experiments for datasets of graphs with discrete attributes. Features are node labels for NCI1, PROTEINS and ENZYMES; and degrees for others. Accuracy is in bold green when it is the best of its block. For $\mathcal{{FGW}}$ -WL (resp. PSCN), depth is set to 4 (resp. 10).
+
+| Method | MUTAG | NCI1 | PROTEINS | ENZYMES | IMDB-M | IMDB-B |
+| $\mathbf{k}$ -NN | | | | | | |
+| ${\mathcal{{RPW}}}_{2}$ | ${90.00} \pm {7.60}$ | ${72.12} \pm {1.65}$ | ${70.18} \pm {4.01}$ | ${49.00} \pm {8.17}$ | ${45.00} \pm {5.46}$ | ${68.90} \pm {5.45}$ |
+| Net-LSD-h | 84.90 | 65.89 | 64.89 | 31.99 | 40.51 | 68.04 |
+| FGSD | 86.47 | 75.77 | 65.30 | 41.58 | 41.14 | 69.54 |
+| NetSimile | 84.09 | 66.56 | 62.45 | 33.23 | 40.97 | 69.20 |
+| SVM & GCN | | | | | | |
+| ${\mathcal{{RPW}}}_{2}$ | ${88.95} \pm {7.61}$ | ${74.84} \pm {1.81}$ | ${74.55} \pm {4.19}$ | ${54.00} \pm {7.07}$ | ${51.00} \pm {5.44}$ | ${72.00} \pm {3.16}$ |
+| WWL | ${87.27} \pm {1.50}$ | ${85.75} \pm {0.25}$ | ${74.28} \pm {0.56}$ | ${59.13} \pm {0.80}$ | ✘ | ✘ |
+| FGW | ${83.26} \pm {10.30}$ | ${72.82} \pm {1.46}$ | ✘ | ✘ | ${48.00} \pm {3.22}$ | ${63.80} \pm {3.49}$ |
+| $\mathcal{F}\mathcal{G}\mathcal{W}$ -WL | ${88.42} \pm {5.67}$ | ${86.42} \pm {1.63}$ | ✘ | ✘ | ✘ | ✘ |
+| WL-OA | ${87.15} \pm {1.82}$ | ${86.08} \pm {0.27}$ | ${76.37} \pm {0.30}$ | ${58.97} \pm {0.82}$ | ✘ | ✘ |
+| PSCN | ${83.47} \pm {10.26}$ | ${70.65} \pm {2.58}$ | ${58.34} \pm {7.71}$ | ✘ | ✘ | ✘ |
+
+Table 2: Results of the main experiments for datasets of graphs with continuous attributes graphs datasets. The best accuracy are in bold green. Note that for PROTEINS, ENZYMES and CUNEIFORM we concatenate continuous attributes with discrete attributes to build an extended continuous attributes (see Appendix A.6 for more details).
+
+| Method | BZR | COX2 | PROTEINS | ENZYMES | CUNEIFORM |
+| ${\mathcal{{RPW}}}_{2}\left( \mathrm{{kNN}}\right)$ | ${85.61} \pm {2.98}$ | ${79.79} \pm {2.18}$ | ${71.79} \pm {4.47}$ | ${51.66} \pm {5.16}$ | ${54.81} \pm {12.26}$ |
+| SVM & GCN | | | | | |
+| ${\mathcal{{RPW}}}_{2}$ | ${84.39} \pm {3.81}$ | ${78.51} \pm {0.01}$ | ${74.29} \pm {4.11}$ | ${48.83} \pm {4.78}$ | ${64.44} \pm {10.50}$ |
+| WWL | ${84.42} \pm {2.03}$ | ${78.29} \pm {0.47}$ | ${77.91} \pm {0.80}$ | ${73.25} \pm {0.87}$ | ✘ |
+| $\mathcal{F}\mathcal{G}\mathcal{W}$ | ${85.12} \pm {4.15}$ | ${77.23} \pm {4.86}$ | ${74.55} \pm {2.74}$ | ${71.00} \pm {6.76}$ | ${76.67} \pm {7.04}$ |
+| PROPAK | ${79.51} \pm {5.02}$ | ${77.66} \pm {3.95}$ | ${61.34} \pm {4.38}$ | ${71.67} \pm {5.63}$ | ${12.59} \pm {6.67}$ |
+| HGK-SP | ${76.42} \pm {0.72}$ | ${72.57} \pm {1.18}$ | ${75.78} \pm {0.17}$ | ${66.36} \pm {0.37}$ | ✘ |
+| PSCN [K = 10] (GCN) | ${80.00} \pm {4.47}$ | ${71.70} \pm {3.57}$ | ${67.95} \pm {11.28}$ | ${26.67} \pm {4.77}$ | ${25.19} \pm {7.73}$ |
+
+k-Nearest Neighbors. Datasets are split into a training set (90%) and a test set (10%). For each dataset, we train ${\mathcal{{RPW}}}_{2}$ following Algorithm 1 on the training set, with only one hyperparameter to adjust: the depth of the SGCN, taken in $r = \{ 1,2,3,4\}$ for all datasets except MUTAG, for which we go up to 7. The training is done for each value of $r$ during 10 epochs. A 5-fold cross-validation over the number of neighbors $k = \{ 1,2,3,5,7\}$ is performed on the training set using the learned distance. Then, for the best ${k}^{ * }$ , we keep the associated validation accuracy, and we finally train a $\mathrm{k} - \mathrm{{NN}}$ on the whole training set and evaluate its accuracy on the test set. This experiment is averaged over 10 runs. The final test accuracy retained is the one associated with the largest validation accuracy. In this procedure, the test set labels are never seen during training or validation. Results are given in the first lines of Table 2 for graphs with continuous attributes, and of Table 1 for graphs with labeled nodes.
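+
+Once the pairwise distances have been computed, this k-NN evaluation can be sketched with scikit-learn's precomputed-metric classifier (an illustration written for this document; D_train denotes the square matrix of train/train distances and D_test the rectangular matrix of test/train distances).
+
+```python
+from sklearn.neighbors import KNeighborsClassifier
+
+def knn_accuracy(D_train, y_train, D_test, y_test, k=3):
+    """k-NN on precomputed graph distances d_Theta (Eq. (9))."""
+    clf = KNeighborsClassifier(n_neighbors=k, metric="precomputed")
+    clf.fit(D_train, y_train)              # D_train: (n_train, n_train)
+    return clf.score(D_test, y_test)       # D_test: (n_test, n_train)
+```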
+
+The metric learning framework combined with k-NN allows us to obtain good performance in classification tasks, in particular for datasets of graphs with continuous attributes. The exception is ENZYMES, where we observe a clearly lower performance. For discrete attributes, SGML performs slightly below the state of the art, yet it outperforms the existing distances classically combined with k-NN. These experiments show that our graph ML distance framework is efficient.
+
+Note: This procedure is very similar to the one used by WWL, except that the parameter $k$ is replaced by the corresponding parameters of their kernel (see next section).
+
+SVM. To compare with graph kernel methods, the experiment described in the previous section is reproduced using an SVM for classification. The kernel ${\mathbf{K}}_{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}} = \exp \left( {-\lambda {d}_{{\Theta }^{ * }}^{\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}}}\right)$ is built from the constructed distance. In this experiment, the kernel hyperparameter $\lambda$ and the SVM hyperparameter $C$ are tuned in the same way as the parameter $k$ above. The set of possible $\lambda$ (resp. $C$ ) values consists of 6 (resp. 12) regularly spaced values between ${10}^{-4}$ and ${10}^{1}$ (resp. between ${10}^{-4}$ and ${10}^{5}$ , including 1). The results are provided in Table 1 (bottom part).
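+
+In the same spirit, the kernel-SVM evaluation can be sketched with scikit-learn's precomputed-kernel SVC (an illustration written for this document; D_train and D_test are the same distance matrices as above).
+
+```python
+import numpy as np
+from sklearn.svm import SVC
+
+def svm_accuracy(D_train, y_train, D_test, y_test, lam=0.1, C=1.0):
+    """SVM on the kernel K = exp(-lambda * d) built from the learned distance."""
+    clf = SVC(C=C, kernel="precomputed")
+    clf.fit(np.exp(-lam * D_train), y_train)          # Gram matrix on the training graphs
+    return clf.score(np.exp(-lam * D_test), y_test)   # test-vs-train kernel values
+```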
+
+Table 3: Ablative study results. Acc. is the accuracy. $\Delta$ is the difference in accuracy between the model of the column and the proposed SGML, whose results are in Table 1. A red negative (resp. green positive) number means that our model performs better (resp. worse).
+
+| Dataset | WWL $\mathbf{{Acc}.}$ | WWL $\Delta$ | SGML - ${\mathcal{{SW}}}_{2}$ $\mathbf{{Acc}.}$ | SGML - ${\mathcal{{SW}}}_{2}$ $\Delta$ | SGML - NCA $\mathbf{{Acc}.}$ | SGML - NCA $\Delta$ |
+| BZR | 78.05 | - 7.56 | 82.93 | - 2.68 | 83.41 | - 2.2 |
+| COX2 | 78.51 | - 1.26 | 78.30 | - 1.49 | 77.66 | - 2.13 |
+| MUTAG | 83.68 | - 6.32 | 86.84 | - 3.16 | 87.37 | - 2.63 |
+| NCI1 | 80.43 | + 5.31 | 69.03 | - 3.09 | 69.66 | - 2.46 |
+| PROTEINS | 71.60 | + 1.42 | 71.34 | + 1.16 | 71.70 | + 1.52 |
+| IMDB-B | 68.20 | - 0.7 | 68.20 | - 0.7 | 67.40 | - 1.5 |
+| IMDB-M | 48.73 | + 3.73 | 42.33 | - 2.67 | 42.73 | - 2.27 |
+| ENZYMES | 56.00 | + 7 | 44.33 | - 4.67 | 55.33 | + 6.33 |
+
+In this part of the table, one can see that the distance learned with our model, when used in a kernel, performs as well as other OT distances on the majority of the datasets. We reach, or are slightly above, state-of-the-art results on 5 datasets out of 6, but remain below on NCI1. We recall that our method is specifically designed for the k-nearest neighbors method and that its computational complexity is much lower than that of many of the best methods on these datasets (notably WWL and $\mathcal{{FGW}}$ ).
+
+§ 5.4 ABLATIVE STUDY
+
+We perform experiments to justify the design choice of our model. Specifically we show that these choices effectively help to improve k-NN performance by reproducing the experiments above (with k-NN) on different versions of the method without some (or all) of our propositions.
+
+Raw model. Without any of our novel propositions, the method would be equivalent to WWL, which corresponds to using the Wasserstein distance between the distributions of Eq. (7), where $\mathbf{Y}$ is generated with GIN [5], a non-trainable GCN. This specific case corresponds to the first column, denoted WWL, of Table 3. We see that even if there are datasets where there is a loss of performance, others benefit from the learned metric. Moreover, we recall that our distance is much less expensive to use than ${\mathcal{W}}_{2}$ , on which WWL is based.
+
+SGML with ${\mathcal{{SW}}}_{2}$ . This second ablative study is in the second column, denoted SGML- ${\mathcal{{SW}}}_{2}$ , of Table 3, and is related to replacing ${\mathcal{{RPW}}}_{2}$ by ${\mathcal{{SW}}}_{2}$ . The result clearly validates our choice to use $\mathcal{R}\mathcal{P}{\mathcal{W}}_{2}$ instead of $\mathcal{S}{\mathcal{W}}_{2}$ . Our model is the best one except on one dataset.
+
+SGML with NCA. For this final experiment, we replaced the NCCML loss with the NCA loss. The result is in the third column, SGML - NCA, of Table 3. It appears that NCCML outperforms NCA in our specific ML framework.
+
+Globally, the ablative study is in favor of the choices proposed for SGML. Note that the driving idea of choosing simple and scalable methods over more complex ones, leads to competitive performance while allowing scalability.
+
+§ 6 CONCLUSION
+
+In this article, we proposed a metric learning method for attributed graphs, designed specifically to increase the performance of k-NN. We have shown experimentally that it can indeed achieve performance similar or even superior to the state of the art. However, theoretical work on the properties of ${\mathcal{{RPW}}}_{2}$ would be useful to better understand when it does not perform well. Appendix A.8 presents some additional elements on the limits of this work. In addition, further work may easily adapt SGML to perform other tasks like graph clustering or regression, with an appropriate (and probably different) ML loss.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/aI0-qQsFCHV/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/aI0-qQsFCHV/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..28be6e993740f8f36414b0d5bcc8b7dbc552b833
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/aI0-qQsFCHV/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,260 @@
+# Higher-Order Patterns Reveal Causal Temporal Scales in Time Series Network Data
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+The research on dynamic complex systems has in recent years advanced beyond static graph representations $\left\lbrack {1,2}\right\rbrack$ . The focus has shifted to various generalizations of dyadic interactions in graphs: multiple types of interactions in multilayer networks [3], multibody interactions in the form of simplicial complexes and hypergraphs [4], and models that incorporate concepts of memory [5-7]. Such generalized relationships allow us to model richer data without losing possibly important features of the data.
+
+Temporal networks record not only who interacted with whom, but also when each interaction happened, which allows (and often requires) analysis beyond the standard network approach [8, 9]. The time information can yield valuable insights on its own [10], and, although the temporal and topological aspects of temporal networks were initially mostly studied independently, even richer insights are hidden in the coupling of the temporal and topological patterns. Such coupling can affect the statistics of time-respecting paths [8] in temporal networks and thus complicate the analysis of temporal networks, e.g., analysis of accessibility [11], reachability [12], spreading [5, 6, 13, 14], clustering [15], centralities [16], and visualization [17]. In cases when the statistics of time-respecting paths deviate significantly from the statistics of random walks in static graphs, the static graphs can become a misleading representation of the temporal network.
+
+Although there are many possible ways in which temporal and topological patterns can couple in complex systems, one of the most basic cases is when the occurrence of a temporal edge causes a change in the frequencies of subsequent edges emanating from the target node within a given time-window. For instance, in a communication network we expect an incoming message to induce an outgoing message on the same topic, e.g. in the form of a reply, within a certain time window reflecting the minimal reaction time and memory of the recipient. Knowing the time-scale at which such causal influences take place would allow us to capture the time-respecting paths that correspond to causal influences; this would in turn improve the analysis of the temporal network, e.g. the detection of temporally central nodes or community detection. However, information on the time-scales relevant for the temporal network dynamics is rarely available in real-world settings.
+
+We define an information theoretic measure aimed at detecting the prevalence of causal interactions at various time-scales of complex systems. We demonstrate that our measure can be used to infer time scales that are relevant to the dynamics of temporal networks in both synthetic and real world data.
+
+Let $\Gamma = \left( {V,\mathcal{E}}\right)$ be a temporal network consisting of a set of nodes $V$ and a set of time-stamped edges $\mathcal{E} \subseteq V \times V \times \mathbb{R}$ . A temporal edge $\left( {v, w, t}\right) \in \mathcal{E}$ represents a directed link from node $v$ to node $w$ at time $t$ . For simplicity, we assume that the temporal edges are instantaneous; however, the method and algorithms can be modified in a straightforward fashion to the case where edges have finite duration. Formally, we call a sequence of time-stamped edges $\left( {{v}_{1},{w}_{1},{t}_{1}}\right) ,\ldots ,\left( {{v}_{k},{w}_{k},{t}_{k}}\right)$ a time-respecting path iff for all $i \in \{ 2,\ldots , k\}$ they satisfy the following conditions [8,18,19]:
+
+$$
+{w}_{i - 1} = {v}_{i} \tag{1}
+$$
+
+$$
+{\delta }_{\min } < {t}_{i} - {t}_{i - 1} < {\delta }_{\max }\text{.} \tag{2}
+$$
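+
+As an illustration, the two conditions above can be checked directly on a sequence of time-stamped edges. The following is a minimal sketch (not taken from the authors' code; the function and variable names are ours):
+
+```python
+# Illustrative check of the time-respecting path conditions (Eqs. 1-2).
+from typing import List, Tuple
+
+Edge = Tuple[int, int, float]  # (source v, target w, timestamp t)
+
+def is_time_respecting(path: List[Edge], delta_min: float, delta_max: float) -> bool:
+    """True iff consecutive edges continue at the same node (Eq. 1) and their
+    inter-event times lie strictly inside (delta_min, delta_max) (Eq. 2)."""
+    for (_, w_prev, t_prev), (v_cur, _, t_cur) in zip(path, path[1:]):
+        if w_prev != v_cur:
+            return False
+        if not (delta_min < t_cur - t_prev < delta_max):
+            return False
+    return True
+
+# (a -> b at t=0) followed by (b -> c at t=150) respects the window (100, 200)
+print(is_time_respecting([(0, 1, 0.0), (1, 2, 150.0)], 100, 200))  # True
+```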
+
+The parameters ${\delta }_{\min }$ and ${\delta }_{\max }$ naturally introduce a time scale that affects all analyses of temporal networks that are based on time-respecting paths. Examples of such analyses include detection of cluster structures in temporal networks, measures of temporal centrality used to rank nodes in temporal networks, as well as results about dynamical processes like epidemic spreading, or diffusion processes. The time scale has to be defined differently for processes on the temporal network or the processes of the temporal network [8]. In the former case, the time scale is defined by the process
+
+running on the temporal network, e.g. in the case of an epidemic spreading over a temporal network of contacts, the time scale is a property of the disease, related to the time interval in which a person is contagious and not to the time-scales at which contacts occur ${}^{1}$ . In the latter case, the time scale is part of the process of edge activation, and thus shapes the temporal network itself: information spreading between persons also affects a person's choice of whom to share the information with; a person would be more likely to share family-related information with a family member and work-related information with a colleague. We investigate this latter case; more specifically, we consider the problem of detecting the time window ${\Delta }_{t} = \left\lbrack {{\delta }_{\min },{\delta }_{\max }}\right\rbrack$ at which causal correlations between temporal edges take place.
+
+In the literature, there exist a variety of definitions of time scales in temporal networks, as well as a variety of methods aimed at detecting them. The various definitions of time scales are based on different structural features of temporal networks. One popular definition of time scales in temporal networks is the approach based on splitting the network into time-slices and aggregating the edges inside each time-interval [21]. In the same framework, Ghasemian et al. [22] and Taylor et al. [23] investigate the limits of detectability of cluster structures depending on the time-scale of aggregation. Since this framework is based on aggregating the temporal network into a sequence of static time-aggregated networks, it loses information about the time-respecting paths and is therefore not in line with our aims. Other lines of research often related to time-scale detection are change point detection [24] and the analysis of large-scale structures. Gauvin et al. [25] detect clusters and their temporal activations in a temporal network using tensor decomposition. Similarly, Peixoto [26] proposed a method to detect the change points of cluster structure in a temporal network. Peixoto and Rosvall [27] proposed a method to simultaneously detect the clusters and time-scales in a temporal network; however, they model the temporal network as a single sequence of tokens (similar to [24]) that represent temporal edges, and their time-scale inference refers to the number of tokens in the memory of a Markov chain that models such a sequence. In our view, these works focus on mesoscale structures and take a coarse-grained view of temporal networks, while in this work we propose a complementary approach by focusing on local correlations between temporal edges incident on a node and subsequent temporal edges emanating from it. Among the works that took a fine-grained view, Williams et al. [7] investigated correlations between temporal edges; however, they, too, considered sequences of edges that do not have any nodes in common, and these are therefore not directly related to time-respecting paths. Scholtes et al. [16] found that correlations between edges on time-respecting paths affect centralities; they modeled the time-respecting paths with higher-order models, showed that this approach improves centrality rankings, and identified the issue of time-scale detection, which our work complements. Our work also complements Pfitzner et al. [28], who introduce betweenness preference, which can be used to study over- and under-represented time-respecting paths in temporal networks but does not address the problem of detecting the time-scales at which these paths occur.
+
+We address this issue by analysing the statistics of time-respecting paths ${\mathcal{P}}_{{\Delta }_{t}}$ of length $k$ in a temporal network $\Gamma$ obtained with different time-scales ${\Delta }_{t} = \left\lbrack {{\delta }_{\min },{\delta }_{\max }}\right\rbrack$ . Specifically, we observe paths $\left( {{v}_{0},{v}_{1},\ldots ,{v}_{k}}\right)$ of length $k$ and measure the "causal path entropy" (of order $k$ ), defined as the entropy of the last node ${v}_{k}$ conditional on the sub-path $\left( {{v}_{0},{v}_{1},\ldots ,{v}_{k - 1}}\right)$ :
+
+$$
+\mathcal{H}\left( {\mathcal{P}}_{{\Delta }_{t}}\right) = H\left( {{v}_{k} \mid {v}_{0},\ldots ,{v}_{k - 1}}\right) = H\left( {{v}_{0},\ldots ,{v}_{k}}\right) - H\left( {{v}_{0},\ldots ,{v}_{k - 1}}\right) , \tag{3}
+$$
+
+where Eq. 3 is obtained by applying the chain rule (see Appendix for the derivation). By definition, $\mathcal{H}\left( {\mathcal{P}}_{{\Delta }_{t}}\right)$ measures the average uncertainty in the last step of a time-respecting path given the $k - 1$ previous steps. A lower value of the entropy indicates a high correlation between the memory of time-respecting paths and subsequent steps. Hence the ${\Delta }_{t}$ for which the entropy reaches its minimum gives us the time scale for which causal paths become most predictable, i.e. where the correlations between temporal edges are the most pronounced. The entropy can also be defined for a single node $v$ , by simply fixing ${v}_{k - 1} = v$ , allowing for a more fine-grained analysis that could be important if nodes differ significantly with respect to the time scales they operate on. Given a time-scale ${\Delta }_{t}$ , the entropy can be estimated using the counts of time-respecting paths of length $k$ , e.g., using the methods from $\left\lbrack {{29},{30}}\right\rbrack$ .
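+
+For concreteness, a minimal sketch of Eq. (3) computed from observed paths with simple plug-in (maximum-likelihood) estimates is given below; the paper itself relies on the NSB estimator discussed next, so this sketch only illustrates the definition and the names are ours:
+
+```python
+# Plug-in estimate of the causal path entropy H(v_k | v_0, ..., v_{k-1}), Eq. (3).
+from collections import Counter
+import math
+
+def plugin_entropy(counts):
+    """Maximum-likelihood entropy (in nats) of an empirical distribution."""
+    total = sum(counts)
+    return -sum(c / total * math.log(c / total) for c in counts if c > 0)
+
+def causal_path_entropy(paths):
+    """paths: iterable of node tuples (v_0, ..., v_k) observed as time-respecting paths."""
+    full = Counter(tuple(p) for p in paths)         # counts of (v_0, ..., v_k)
+    prefix = Counter(tuple(p[:-1]) for p in paths)  # counts of (v_0, ..., v_{k-1})
+    return plugin_entropy(full.values()) - plugin_entropy(prefix.values())
+
+paths = [(0, 1, 2), (0, 1, 2), (0, 1, 3), (4, 1, 2)]
+print(causal_path_entropy(paths))  # approx. 0.477 nats
+```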
+
+The estimation of the causal path entropy can be challenging for small ranges of time-scales, since the temporal network can get temporally disconnected, resulting in very few paths of order $k$ being observed. As a result, we require an efficient method for estimating the entropy of the multinomial distributions $p\left( {{v}_{0},\ldots ,{v}_{k - 1}}\right)$ and $p\left( {{v}_{0},\ldots ,{v}_{k}}\right)$ in an undersampled regime. The simplest estimator of a multinomial distribution, called the plug-in estimator, is based on maximum likelihood estimation, which however is known to severely underestimate the entropy in the undersampled regime and has various corrections (e.g. [31, 32]). An alternative to the plug-in estimator is to follow a Bayesian approach, which results in entropy estimators that strongly depend on the choice of prior. To counteract this dependency, the NSB estimator [33] directly infers the entropy from the counts by averaging over different priors for the transition probabilities, rather than inferring transition probabilities. Being a Bayesian method, the NSB estimator can also be used to quantify the uncertainty of the estimate. More specifically, assuming that the estimates of $H\left( {{v}_{0},\ldots ,{v}_{k}}\right)$ and $H\left( {{v}_{0},\ldots ,{v}_{k - 1}}\right)$ have independent errors ${\sigma }_{k}$ and ${\sigma }_{k - 1}$ , we can approximate the total error of the estimate as $\sigma = {\left( {\sigma }_{k}^{2} + {\sigma }_{k - 1}^{2}\right) }^{1/2}$ . As the NSB estimator requires the size of the alphabet to be known, it is most suitable for cases where the number of nodes is fixed, and it improves further if the set of edges that can occur is known a priori, as this further restricts the number of potential paths. In cases when the number of nodes in the system is unknown, the Pitman-Yor Mixture entropy estimator [34] could be used instead.
+
+---
+
+${}^{1}$ We note that the processes on and of the temporal network may interact [20], and thus blur the distinction.
+
+---
+
+
+
+Figure 1: Causal path entropy as a function of causal temporal scales in synthetic (left) and real-world data (central and right). In each experiment, we show the histogram of causal inter-event times (right $y$ -axis in the left and central panels, and bottom graphic on the right). In the left and central panels, we measure the causal path entropy $\mathcal{H}$ for a fixed ${\delta }_{\min } = 0$ and variable ${\delta }_{\max }\left( {x\text{-axis}}\right)$ in the original (solid line) and shuffled temporal network (dashed line). In the top right panel, the height of a bar represents the causal path entropy (errors are barely visible) and the $x$ -limits of the bar represent the interval ${\Delta }_{t} = \left( {{\delta }_{\min },{\delta }_{\max }}\right)$ at which the causal path entropy has been measured. In real-world data, we indicate on the $x$ -axis the time scales of one minute (m), hour (h), day (d), week (w), and year (y). We observe that the causal path entropy decreases (or increases more slowly) at causal time scales.
+
+Our implementation is based on the path counting methods $\left\lbrack {{29},{30}}\right\rbrack$ , which we use to obtain counts ${n}_{{v}_{0}{v}_{1}{v}_{2}}$ of paths $\left( {{v}_{0},{v}_{1},{v}_{2}}\right)$ of length $k = 2$ , and counts ${n}_{{v}_{0}{v}_{1}} = \mathop{\sum }\limits_{{v}_{2}}{n}_{{v}_{0}{v}_{1}{v}_{2}}$ for a given time-scale. Based on these counts we then estimate the entropies $H\left( {{v}_{0},{v}_{1},{v}_{2}}\right)$ and $H\left( {{v}_{0},{v}_{1}}\right)$ along with their respective errors using the NSB estimator [33]. Finally, by repeating this procedure over a range of different time-scales, we identify the time-scales for which the entropy is minimized. In the following, we validate our method using synthetically generated temporal networks with known causal time scales as well as real-world networks.
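+
+The counting step can be sketched with a naive double loop as below; this is only illustrative and not the far more efficient algorithms of [29, 30], and the identifiers are ours:
+
+```python
+# Naive counting of length-2 time-respecting paths for one time-scale (delta_min, delta_max).
+from collections import Counter, defaultdict
+
+def count_length2_paths(edges, delta_min, delta_max):
+    """edges: list of temporal edges (v, w, t). Returns counts n_{v0 v1 v2}."""
+    outgoing = defaultdict(list)            # node -> list of (t, target)
+    for v, w, t in edges:
+        outgoing[v].append((t, w))
+    counts = Counter()
+    for v0, v1, t0 in edges:                # first edge (v0, v1, t0)
+        for t1, v2 in outgoing[v1]:         # candidate continuation (v1, v2, t1)
+            if delta_min < t1 - t0 < delta_max:
+                counts[(v0, v1, v2)] += 1
+    return counts
+
+edges = [(0, 1, 0.0), (1, 2, 150.0), (1, 3, 400.0)]
+print(count_length2_paths(edges, 100, 200))  # Counter({(0, 1, 2): 1})
+```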
+
+We first perform synthetic experiments in order to observe the behavior of the causal path entropy in a controlled setting. In order to simulate temporal networks with a ground truth timescale ${\bar{\Delta }}_{t} = \left\lbrack {{\bar{\delta }}_{\min },{\bar{\delta }}_{\max }}\right\rbrack$ , we start from a static Erdős-Rényi random graph with 50 nodes and 500 directed edges. We sample a random subset ${\mathcal{P}}_{\text{causal }}$ of ${n}_{\text{u.p. }} = {500}$ unique paths of length $k = 2$ in the static network, which correspond to causal influences in the system. We then, with repetition, sample ${n}_{\mathrm{p}} = {50000}$ paths from ${\mathcal{P}}_{\text{causal }}$ . We add each path $\left( {{v}_{0},{v}_{1},{v}_{2}}\right)$ to the temporal network by sampling a random starting time $t$ uniformly from $\left\lbrack {0,{T}_{\text{total }} - {\bar{\delta }}_{\max }}\right\rbrack$ and creating a temporal edge $\left( {{v}_{0},{v}_{1}, t}\right)$ ; we sample the temporal distance $\delta$ between edges on the path (inter-event time) uniformly from ${\bar{\Delta }}_{t}$ and add the temporal edge $\left( {{v}_{1},{v}_{2}, t + \delta }\right)$ . We choose ${\bar{\Delta }}_{t}$ with ${\bar{\delta }}_{\min } = {100}$ and ${\bar{\delta }}_{\max } = {200}$ . To add some noise to the system, we uniformly sample 50000 edges from the static graph, and sample their timestamps uniformly from $\left\lbrack {0,{T}_{\text{total }}}\right\rbrack$ . Additionally, we also generate a shuffled temporal network by randomly shuffling the timestamps of edges, thus destroying the correlations between
+
+topological and temporal patterns (while preserving the distribution of edges and the distribution of timestamps). The causal path entropies for the synthetic network and the shuffled network, along with the histogram of inter-event times between causal paths, are shown in Fig. 1, where we measured the causal path entropy $\mathcal{H}\left( {y\text{-axis}}\right)$ for a fixed ${\delta }_{\min } = 0$ and various ${\delta }_{\max }\left( {x\text{-axis}}\right)$ . More synthetic examples can be found in the Appendix.
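+
+A hedged sketch of this generation procedure is given below; the exact random-graph construction and sampling routines may differ from the authors' code, and all parameter names are ours:
+
+```python
+# Sketch: plant causal length-2 paths with inter-event times in [d_min, d_max],
+# then add uniformly timed noise edges (as described above).
+import random
+
+def make_synthetic(n_nodes=50, n_edges=500, n_unique=500, n_paths=50_000,
+                   n_noise=50_000, d_min=100, d_max=200, T_total=100_000, seed=0):
+    rng = random.Random(seed)
+    static = set()
+    while len(static) < n_edges:                       # directed static graph
+        v, w = rng.randrange(n_nodes), rng.randrange(n_nodes)
+        if v != w:
+            static.add((v, w))
+    static = list(static)
+    succ = {}
+    for v, w in static:
+        succ.setdefault(v, []).append(w)
+    candidates = [(v0, v1, v2) for v0, v1 in static for v2 in succ.get(v1, [])]
+    causal = rng.sample(candidates, n_unique)          # planted causal paths
+    temporal = []
+    for _ in range(n_paths):
+        v0, v1, v2 = rng.choice(causal)
+        t = rng.uniform(0, T_total - d_max)
+        temporal.append((v0, v1, t))
+        temporal.append((v1, v2, t + rng.uniform(d_min, d_max)))
+    for _ in range(n_noise):                           # noise edges
+        v, w = rng.choice(static)
+        temporal.append((v, w, rng.uniform(0, T_total)))
+    return temporal
+
+print(len(make_synthetic()))  # 150000 temporal edges (cf. Table 1 in the Appendix)
+```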
+
+We observe that the causal path entropy behaves as expected and decreases in accordance with the timescale of the planted causal interactions. Moreover, this pattern disappears when the timestamps of edges are shuffled, demonstrating that the measure correctly captures the dependencies between temporal and topological patterns.
+
+In general, testing on real-world data is more challenging due to the lack of a ground-truth timescale. In the Appendix, we show the results obtained for e-mail data sets [35, 36] and the SocioPatterns datasets [37-42]. Although the lack of ground truth in these datasets makes a detailed evaluation of the method difficult, the results across datasets are consistent and in accordance with the circadian rhythm that is typical of human activities.
+
+In order to circumvent the problem of ground truth in real-world networks, we consider temporal networks where we know the causal path structure. As a first data set, we consider the public data set of Hillary Clinton's emails [43], where we know the sender, the receiver, the timestamp, and the subject of each email. While sender, receiver, and timestamp form a temporal network, email subjects allow us to obtain causal inter-event times: for each incoming email, we extract the time duration until an email was sent with the same subject. As a second data set, we consider the bipartite temporal networks of Wikibooks co-editing patterns [44, 45] (here, we show the German Wikibooks, but the reader can find other Wikibooks data sets in the Appendix). This data contains information about edits on the Wikibooks website: for each edit, we know the editor, the article that was edited, and the time at which the edit occurred. We preprocess this data to obtain a temporal network of editors: if editor $v$ edited an article and after that editor $w$ edited the same article at time $t$ , we assume that a link $\left( {v, w, t}\right)$ occurred in the temporal network of editors. We define causal inter-event times based on the articles: we extract the time intervals between successive edits of each article. We use the inter-event times between emails with the same subject and the inter-event times of articles for evaluation; the temporal networks contain only the temporal edges and not any additional information about the ground truth time-scales. The details of each data set are in the Appendix (Table 1).
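+
+The preprocessing of the co-editing data can be sketched as follows (field names and function names are assumptions, not the original scripts):
+
+```python
+# Turn (editor, article, time) edit records into a temporal network of editors
+# and collect the causal inter-event times between successive edits of each article.
+from collections import defaultdict
+
+def edits_to_temporal_network(edits):
+    """edits: iterable of (editor, article, time)."""
+    by_article = defaultdict(list)
+    for editor, article, t in edits:
+        by_article[article].append((t, editor))
+    temporal_edges, inter_event_times = [], []
+    for rows in by_article.values():
+        rows.sort()                                    # chronological order per article
+        for (t_prev, v), (t_next, w) in zip(rows, rows[1:]):
+            temporal_edges.append((v, w, t_next))      # v edited, then w edited at t_next
+            inter_event_times.append(t_next - t_prev)
+    return temporal_edges, inter_event_times
+
+edges, gaps = edits_to_temporal_network([("alice", "B1", 1.0), ("bob", "B1", 5.0)])
+print(edges, gaps)  # [('alice', 'bob', 5.0)] [4.0]
+```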
+
+We compare the histogram of causal inter-event times with the causal path entropy at different time-scales of the temporal network. In the central panel of Fig. 1 we show results for the emails data set. We measure the causal path entropy of the node representing Hillary Clinton, with fixed ${\delta }_{\min } = 0$ and variable ${\delta }_{\max }$ , in both the original and the shuffled data set. Results for the co-edits data set are shown in the right panels of Fig. 1. On the top is the causal path entropy $\mathcal{H}$ , where we vary both ${\delta }_{\min }$ and ${\delta }_{\max }$ : the left and right $x$ -limits of a bar represent ${\delta }_{\min }$ and ${\delta }_{\max }$ , and the height of the bar represents $\mathcal{H}$ . In the bottom right panel, we show the histogram of inter-event times.
+
+In empirical data sets we observe that the causal path entropy decreases, or increases at a slower pace, at time scales where large numbers of causal interactions occur. In the emails data set, we see that the causal path entropy is able to identify causal time-scales even when there are only a few thousand temporal edges, which suggests that it could be used to detect time-scales for individual nodes separately. In the co-edits data set, we demonstrate how we can pick different time-scales and analyse them in isolation. We observe that decreases in the causal path entropy coincide with peaks in the number of causal paths.
+
+To summarize, the analysis of temporal networks heavily depends on the analysis of time-respecting paths $\left\lbrack {8,9,{13},{16},{18},{29}}\right\rbrack$ . However, in order to model and analyze the time-respecting paths, we first need to identify the correct time-scale. In this work we address this problem by introducing an information-theoretic measure, the causal path entropy, that is able to capture the time-scales at which causal influences occur in temporal networks. Using real-world data, we demonstrated that the measure can be applied to temporal networks as a whole as well as to single nodes, and showed that the causal path entropy accurately captures the causal time-scales in both synthetic and empirical temporal networks. We further support our findings by observing that decreases in the causal path entropy coincide with increases in the number of causal paths. The causal path entropy allows system-relevant time-scales to be inferred from the temporal networks themselves, which is crucial for the analysis of temporal networks whose inherent time-scales are unavailable or hard to measure.
+
+References
+
+[1] Renaud Lambiotte, Martin Rosvall, and Ingo Scholtes. From networks to optimal higher-order models of complex systems. Nature physics, page 1, 2019.
+
+[2] Federico Battiston, Giulia Cencetti, Iacopo Iacopini, Vito Latora, Maxime Lucas, Alice Patania, Jean-Gabriel Young, and Giovanni Petri. Networks beyond pairwise interactions: structure and dynamics. Physics Reports, 2020.
+
+[3] Mikko Kivelä, Alex Arenas, Marc Barthelemy, James P Gleeson, Yamir Moreno, and Mason A Porter. Multilayer networks. Journal of complex networks, 2(3):203-271, 2014.
+
+[4] Giovanni Petri and Alain Barrat. Simplicial activity driven model. Physical review letters, 121 (22):228301, 2018.
+
+[5] Ingo Scholtes, Nicolas Wider, René Pfitzner, Antonios Garas, Claudio J Tessone, and Frank Schweitzer. Causality-driven slow-down and speed-up of diffusion in non-markovian temporal networks. Nature communications, 5:5024, 2014. doi: 10.1038/ncomms6024. URL https://doi.org/10.1038/ncomms6024.
+
+[6] Renaud Lambiotte, Vsevolod Salnikov, and Martin Rosvall. Effect of memory on the dynamics of random walks on networks. Journal of Complex Networks, 3(2):177-188, 2015.
+
+[7] Oliver E Williams, Lucas Lacasa, Ana P Millán, and Vito Latora. The shape of memory in temporal networks. Nature communications, 13(1):1-8, 2022.
+
+[8] Petter Holme and Jari Saramäki. Temporal networks. Physics reports, 519(3):97-125, 2012.
+
+[9] Petter Holme. Modern temporal network theory: a colloquium. The European Physical Journal B, 88(9):234, 2015.
+
+[10] K-I Goh and A-L Barabási. Burstiness and memory in complex systems. EPL (Europhysics Letters), 81(4):48002, 2008.
+
+[11] Hartmut HK Lentz, Thomas Selhorst, and Igor M Sokolov. Unfolding accessibility provides a macroscopic approach to temporal networks. Physical review letters, 110(11):118701, 2013. doi: 10.1103/PhysRevLett.110.118701. URL http://link.aps.org/doi/10.1103/PhysRevLett.110.118701.
+
+[12] Arash Badie-Modiri, Márton Karsai, and Mikko Kivelä. Efficient limited-time reachability estimation in temporal networks. Physical Review E, 101(5):052303, 2020.
+
+[13] Naoki Masuda, Konstantin Klemm, and Víctor M Eguíluz. Temporal networks: slowing down diffusion by long lasting interactions. Physical Review Letters, 111(18):188701, 2013.
+
+[14] Arash Badie-Modiri, Abbas K Rizi, Márton Karsai, and Mikko Kivelä. Directed percolation in temporal networks. Physical Review Research, 4(2):L022047, 2022.
+
+[15] Martin Rosvall, Alcides V Esquivel, Andrea Lancichinetti, Jevin D West, and Renaud Lambiotte. Memory in network flows and its effects on spreading dynamics and community detection. Nature communications, 5:4630, 2014.
+
+[16] Ingo Scholtes, Nicolas Wider, and Antonios Garas. Higher-order aggregate networks in the analysis of temporal networks: path structures and centralities. The European Physical Journal B, 89(3):61, 2016.
+
+[17] Vincenzo Perri and Ingo Scholtes. Hotvis: Higher-order time-aware visualisation of dynamic graphs. In Graph Drawing and Network Visualization: 28th International Symposium, 2020.
+
+[18] Raj Kumar Pan and Jari Saramäki. Path lengths, correlations, and centrality in temporal networks. Physical Review E, 84(1):016105, 2011.
+
+[19] Arnaud Casteigts, Anne-Sophie Himmel, Hendrik Molter, and Philipp Zschoche. Finding temporal paths under waiting time constraints. Algorithmica, 83(9):2754-2802, 2021.
+
+[20] Thilo Gross and Hiroki Sayama. Adaptive networks. In Adaptive networks, pages 1-8. Springer, 2009.
+
+[21] Rajmonda Sulo Caceres and Tanya Berger-Wolf. Temporal scale of dynamic networks. In Temporal networks, pages 65-94. Springer, 2013.
+
+[22] Amir Ghasemian, Pan Zhang, Aaron Clauset, Cristopher Moore, and Leto Peel. Detectability thresholds and optimal algorithms for community structure in dynamic networks. Physical Review X, 6(3):031005, 2016.
+
+[23] Dane Taylor, Saray Shai, Natalie Stanley, and Peter J Mucha. Enhanced detectability of community structure in multilayer networks through layer aggregation. Physical review letters, 116(22):228301, 2016.
+
+[24] Tiago P. Peixoto and Laetitia Gauvin. Change points, memory and epidemic spreading in temporal networks. Scientific reports, 8(1):1-10, 2018.
+
+[25] Laetitia Gauvin, André Panisson, and Ciro Cattuto. Detecting the community structure and activity patterns of temporal networks: a non-negative tensor factorization approach. PloS one, 9(1):e86028, 2014.
+
+[26] Tiago P Peixoto. Inferring the mesoscale structure of layered, edge-valued, and time-varying networks. Physical Review E, 92(4):042807, 2015.
+
+[27] Tiago P Peixoto and Martin Rosvall. Modelling sequences and temporal networks with dynamic community structures. Nature communications, 8(1):1-12, 2017.
+
+[28] René Pfitzner, Ingo Scholtes, Antonios Garas, Claudio J Tessone, and Frank Schweitzer. Betweenness preference: Quantifying correlations in the topological dynamics of temporal networks. Physical review letters, 110(19):198701, 2013. doi: 10.1103/PhysRevLett.110.198701. URL http://link.aps.org/doi/10.1103/PhysRevLett.110.198701.
+
+[29] Mikko Kivelä, Jordan Cambe, Jari Saramäki, and Márton Karsai. Mapping temporal-network percolation to weighted, static event graphs. Scientific reports, 8(1):1-9, 2018.
+
+[30] Luka V Petrović and Ingo Scholtes. Paco: Fast counting of causal paths in temporal network data. In Companion Proceedings of the Web Conference 2021, pages 521-526, 2021.
+
+[31] G. A. Miller. Note on the bias of information estimates. In Information Theory in Psychology: Problems and Methods (H. Quastler, ed.), pages 95-100, 1955.
+
+[32] Peter Grassberger. Entropy estimates from insufficient samplings. arXiv preprint physics/0307138, 2003.
+
+[33] Ilya Nemenman, Fariel Shafee, and William Bialek. Entropy and inference, revisited. Advances in neural information processing systems, 14, 2001.
+
+[34] Evan Archer, Il Memming Park, and Jonathan W Pillow. Bayesian entropy estimation for countable discrete distributions. The Journal of Machine Learning Research, 15(1):2833-2868, 2014.
+
+[35] Ashwin Paranjape, Austin R Benson, and Jure Leskovec. Motifs in temporal networks. In Proceedings of the tenth ACM international conference on web search and data mining, pages 601-610, 2017.
+
+[36] Jérôme Kunegis. Konect: The koblenz network collection. In Proceedings of the 22nd International Conference on World Wide Web, WWW '13 Companion, pages 1343-1350, New York, NY, USA, 2013. Association for Computing Machinery. ISBN 9781450320382. doi: 10.1145/2487788.2488173. URL https://doi.org/10.1145/2487788.2488173.
+
+[37] Mathieu Génois, Christian L Vestergaard, Julie Fournet, André Panisson, Isabelle Bonmarin, and Alain Barrat. Data on face-to-face contacts in an office building suggest a low-cost vaccination strategy based on community linkers. Network Science, 3(3):326-347, 2015.
+
+[38] Philippe Vanhems, Alain Barrat, Ciro Cattuto, Jean-François Pinton, Nagham Khanafer, Corinne Régis, Byeul-a Kim, Brigitte Comte, and Nicolas Voirin. Estimating potential infection transmission routes in hospital wards using wearable proximity sensors. PloS one, 8(9):e73970, 2013.
+
+[39] Rossana Mastrandrea, Julie Fournet, and Alain Barrat. Contact patterns in a high school: a comparison between data collected using wearable sensors, contact diaries and friendship surveys. PloS one, 10(9):e0136497, 2015.
+
+[40] Lorenzo Isella, Juliette Stehlé, Alain Barrat, Ciro Cattuto, Jean-François Pinton, and Wouter Van den Broeck. What's in a crowd? analysis of face-to-face behavioral networks. Journal of theoretical biology, 271(1):166-180, 2011.
+
+[41] Valerio Gemmetto, Alain Barrat, and Ciro Cattuto. Mitigation of infectious disease at school: targeted class closure vs school closure. BMC infectious diseases, 14(1):1-10, 2014.
+
+[42] Juliette Stehlé, Nicolas Voirin, Alain Barrat, Ciro Cattuto, Lorenzo Isella, Jean-François Pinton, Marco Quaggiotto, Wouter Van den Broeck, Corinne Régis, Bruno Lina, et al. High-resolution measurements of face-to-face contact patterns in a primary school. PloS one, 6(8):e23176, 2011.
+
+[43] Hillary Clinton emails. URL https://www.kaggle.com/datasets/kaggle/hillary-clinton-emails.
+
+[44] Wikimedia Foundation. Wikimedia downloads. URL http://dumps.wikimedia.org/.
+
+[45] Tiago P. Peixoto. The netzschleuder network catalogue and repository, 2020. URL https://networks.skewed.de/.
+
+## 1 Datasets
+
+| Data set | $\lvert V\rvert$ | $\lvert \mathcal{E}\rvert$ | ${T}_{\text{total }}$ |
+| --- | --- | --- | --- |
+| Synthetic | 50 | 150k | 100k |
+| Coedits | 11476 | 528120 | $\sim$ 15.9 y |
+| Emails | 326 | 8313 | $\sim$ 3.8 y |
+
+Table 1: The sizes of temporal networks that we analyzed in the experiments.
+
+## 2 Conditional entropy: The chain rule
+
+For discrete random variables $X$ and $Y$ , the definition of the entropy (in nats) is
+
+$$
+H\left( X\right) = - \mathop{\sum }\limits_{x}p\left( {X = x}\right) \ln p\left( {X = x}\right)
+$$
+
+and the definition of conditional entropy (in nats) $H\left( {Y \mid X}\right)$ is:
+
+$$
+H\left( {Y \mid X}\right) = - \mathop{\sum }\limits_{{x, y}}p\left( {X = x, Y = y}\right) \ln \frac{p\left( {X = x, Y = y}\right) }{p\left( {X = x}\right) }
+$$
+
+In the following, we use the above definitions to derive the chain rule of conditional entropy:
+
+$$
+H\left( {Y \mid X}\right) = - \mathop{\sum }\limits_{{x, y}}p\left( {X = x, Y = y}\right) \left( {\ln p\left( {X = x, Y = y}\right) - \ln p\left( {X = x}\right) }\right)
+$$
+
+$$
+ = - \mathop{\sum }\limits_{{x, y}}p\left( {X = x, Y = y}\right) \ln p\left( {X = x, Y = y}\right) - \left\lbrack { - \mathop{\sum }\limits_{{x, y}}p\left( {X = x, Y = y}\right) \ln p\left( {X = x}\right) }\right\rbrack
+$$
+
+$$
+ = H\left( {X, Y}\right) - \left\lbrack { - \mathop{\sum }\limits_{{x, y}}p\left( {Y = y \mid X = x}\right) p\left( {X = x}\right) \ln p\left( {X = x}\right) }\right\rbrack
+$$
+
+$$
+ = H\left( {X, Y}\right) - \left\lbrack { - \mathop{\sum }\limits_{x}p\left( {X = x}\right) \ln p\left( {X = x}\right) }\right\rbrack \underbrace{\left( {\mathop{\sum }\limits_{y}p\left( {Y = y \mid X = x}\right) }\right) }_{ = 1}
+$$
+
+$$
+ = H\left( {X, Y}\right) - H\left( X\right) .
+$$
+
+## 3 Entropy estimation
+
+In this experiment we test four estimators of the entropy of a multinomial distribution: MLE, Miller [31], Grassberger [32], and NSB [33]. We vary the sample size and measure the errors of the estimates. For each sample size we repeat the procedure n_repetitions (100) times. First, we generate a random 50-dimensional $\overrightarrow{p}$ from a Dirichlet distribution with all concentration parameters equal to $\alpha$ (in code: gen_alpha), $\overrightarrow{p} \sim \operatorname{Dir}\left( {\alpha \overrightarrow{1}}\right)$ . The sampled vector $\overrightarrow{p}$ represents the ground truth probability distribution and determines the ground truth entropy $H = - \mathop{\sum }\limits_{i}{p}_{i}\ln {p}_{i}$ . Then, we use the same $\overrightarrow{p}$ to generate a random multinomial sample. Using the sample, we estimate the entropy $\widehat{H}$ with the four methods: MLE, Miller, Grassberger, and NSB. We record the differences between the estimate and the ground truth value and plot the average errors over the 100 repetitions. Error bars represent intervals between the 5th and 95th quantiles of the distribution.
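+
+A hedged sketch of this procedure is shown below; for brevity only the plug-in (MLE) and Miller-corrected estimators are implemented, whereas the Grassberger and NSB estimators used in the figure require additional machinery, and the variable names are ours:
+
+```python
+# Compare entropy estimators on multinomial samples drawn from Dirichlet ground truths.
+import numpy as np
+
+def plugin_entropy(counts):
+    p = counts[counts > 0] / counts.sum()
+    return -np.sum(p * np.log(p))
+
+def miller_entropy(counts):
+    # plug-in estimate plus the Miller bias correction (K_observed - 1) / (2 N)
+    return plugin_entropy(counts) + (np.count_nonzero(counts) - 1) / (2 * counts.sum())
+
+rng = np.random.default_rng(0)
+gen_alpha, dim, n_repetitions = 1.0, 50, 100
+for n_samples in (10, 50, 200, 1000):
+    err_mle, err_miller = [], []
+    for _ in range(n_repetitions):
+        p = rng.dirichlet(gen_alpha * np.ones(dim))     # ground-truth distribution
+        h_true = -np.sum(p * np.log(p))                 # ground-truth entropy (nats)
+        counts = rng.multinomial(n_samples, p)
+        err_mle.append(plugin_entropy(counts) - h_true)
+        err_miller.append(miller_entropy(counts) - h_true)
+    print(n_samples, np.mean(err_mle), np.mean(err_miller))
+```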
+
+
+
+Figure 2: Entropy estimation error as a function of the number of samples. MLE underestimates the entropy for small samples. The Grassberger estimator always estimates the entropy as $\approx {1.07}$ when there is only one sample, and as the data size grows it approaches the true value. Although the NSB estimator was on average better than the Grassberger estimator, it also had a negative bias in our experiments. In contrast, the Miller method overestimates the entropy for small data sizes. Error bars denote intervals between the 5th and 95th quantiles.
+
+## 4 Comparison of Entropy Estimates on the Second-Order Transition Matrix
+
+
+
+Figure 3: We generate a random two-hop transition matrix ${T}_{\text{gt }}$ in the form of $p\left( {{v}_{3} \mid {v}_{1},{v}_{2}}\right)$ , where every row of the matrix represents a different memory ${v}_{1},{v}_{2}$ and every column represents a different final node ${v}_{3}$ . We first generate an ${\alpha }_{0}$ for that matrix as a draw from a gamma distribution with hyper-parameter 2, then use it to generate every row of the matrix ${T}_{\mathrm{{gt}}}$ using ${\alpha }_{0}$ as the hyperparameter of the Dirichlet distribution. We use the same hyperparameter to generate a probability distribution $p\left( {{v}_{1},{v}_{2}}\right)$ . Given the transition matrix ${T}_{\text{gt }}$ and a random probability distribution of the memory ${v}_{1},{v}_{2}$ , we generate a matrix of probabilities $p\left( {{v}_{1},{v}_{2},{v}_{3}}\right)$ and from it a matrix of counts $c\left( {{v}_{3} \mid {v}_{1},{v}_{2}}\right)$ , such that the total number of counts is equal to the number of samples ( $x$ -axis). We compare the ground truth entropy $H\left( {{v}_{3} \mid {v}_{1},{v}_{2}}\right)$ to the entropies inferred from the counts $c$ . We present the average difference between the two and the error of this average (bars represent the 5th and 95th quantiles of the differences). For each number of samples $\left( {{10},{20},{30},\ldots ,{150}}\right)$ , we ran 100 trials. The temporal network had 4 nodes, and all 12 edges were possible.
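+
+The generation procedure described in the caption can be sketched as follows (dimensions and parameter names are assumptions for illustration):
+
+```python
+# Generate a random second-order transition matrix, the induced joint distribution,
+# a count matrix with a fixed number of samples, and the ground-truth conditional entropy.
+import numpy as np
+
+rng = np.random.default_rng(0)
+n_memories, n_next, n_samples = 12, 3, 100           # memories (v1, v2) and candidate v3
+
+alpha0 = rng.gamma(shape=2.0)                        # shared Dirichlet concentration
+T_gt = rng.dirichlet(alpha0 * np.ones(n_next), size=n_memories)  # rows: p(v3 | v1, v2)
+p_mem = rng.dirichlet(alpha0 * np.ones(n_memories))              # p(v1, v2)
+
+p_joint = p_mem[:, None] * T_gt                      # p(v1, v2, v3)
+h_true = (-np.sum(p_joint * np.log(p_joint))         # H(v1, v2, v3)
+          + np.sum(p_mem * np.log(p_mem)))           # minus H(v1, v2) -> H(v3 | v1, v2)
+
+pvals = p_joint.ravel()
+pvals = pvals / pvals.sum()                          # guard against float round-off
+counts = rng.multinomial(n_samples, pvals).reshape(n_memories, n_next)
+print(h_true, counts.sum())                          # ground truth and 100 counted paths
+```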
+
+## 5 Additional synthetic data
+
+In this section we present an additional synthetic example where we added fewer causal paths to the temporal network. In Fig. 5, we note that for a smaller number of observations of causal influences, the behavior of the causal path entropy flattens.
+
+## 6 Empirical data without the ground truth
+
+In this section, we show multiple datasets for which we do not have access to the ground truth temporal scales (Fig. 6). We observe circadian rhythms as plateaus at time-scales smaller than one day (indicated with the vertical bar).
+
+## 7 Other empirical data with ground truth
+
+In this section we show the other Wikibooks datasets that we used to test the method. They correspond to Wikibooks platforms in different languages.
+
+In Fig. 8, we measure the causal path entropy for different ${\delta }_{\max }$ , while keeping a fixed value of ${\delta }_{\min } = 0$ . We note that the causal path entropy increases at time-scales where there are few causal paths, and decreases (or increases at a slower pace) when there are more causal paths available. In Fig. 9, we measure causal path entropy while varying both ${\delta }_{\min }$ and ${\delta }_{\max }$ . The $x$ -limits of a bar represent ${\delta }_{\min }$ and ${\delta }_{\max }$ , while the height of the bar represents the causal path entropy $\mathcal{H}$ for that time scale.
+
+
+
+Figure 6: NetSci2022 figures.
+
+
+
+Figure 7: Causal path entropy as a function of ${\delta }_{\max }$ , measured for a fixed ${\delta }_{\min } = 0$ .
+
+
+
+Figure 8: Causal path entropy as a function of ${\delta }_{\max }$ , measured for a fixed ${\delta }_{\min } = 0$ .
+
+
+
+Figure 9: Causal path entropy as a function of ${\delta }_{\min }$ and ${\delta }_{\max }$ .
+
+
+
+Figure 4: Causal path entropy $\mathcal{H}$ for a fixed ${\delta }_{\min } = 0$ and a variable ${\delta }_{\max }\left( {x\text{-axis}}\right)$ . The experiment followed the same procedure as in the main text, except that, instead of ${n}_{\mathrm{p}} = {50k}$ causal paths, we added ${n}_{\mathrm{p}} = {25k}$ causal paths to the temporal network.
+
+
+
+Figure 5: Causal path entropy $\mathcal{H}$ for a fixed ${\delta }_{\min } = 0$ and a variable ${\delta }_{\max }\left( {x\text{-axis}}\right)$ . The experiment followed the same procedure as in the main text, except that, instead of ${n}_{\mathrm{p}} = {50k}$ causal paths, we added ${n}_{\mathrm{p}} = {2k}$ causal paths to the temporal network in the lower panel. We can see that the $\mathcal{H}$ in the original temporal network is almost the same as in the shuffled network, because there are too few causal paths.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/b9g0vxzYa_/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/b9g0vxzYa_/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..11ea6f20be80e542f6e0c1f6063afed29cb4ecde
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/b9g0vxzYa_/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,394 @@
+# Influence-Based Mini-Batching for Graph Neural Networks
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Using graph neural networks for large graphs is challenging since there is no clear way of constructing mini-batches. To solve this, previous methods have relied on sampling or graph clustering. While these approaches often lead to good training convergence, they introduce significant overhead due to expensive random data accesses and perform poorly during inference. In this work we instead focus on model behavior during inference. We theoretically model batch construction via maximizing the influence score of nodes on the outputs. This formulation leads to optimal approximation of the output when we do not have knowledge of the trained model. We call the resulting method influence-based mini-batching (IBMB). IBMB accelerates inference by up to ${130}\mathrm{x}$ compared to previous methods that reach similar accuracy. Remarkably, with adaptive optimization and the right training schedule IBMB can also substantially accelerate training, thanks to precomputed batches and consecutive memory accesses. This results in up to ${18}\mathrm{x}$ faster training per epoch and up to ${17}\mathrm{x}$ faster convergence per runtime compared to previous methods.
+
+## 1 Introduction
+
+Creating mini-batches is highly non-trivial for connected data, since it requires selecting a meaningful subset despite the data's connectedness. When the graph does not fit into memory, the mini-batching problem is equally relevant for both inference and training. However, mini-batching methods have so far mostly been focused on training, despite the major practical importance of inference. Once a model is put into production, it continuously runs inference to serve user queries. On AWS, more than ${90}\%$ of infrastructure cost is due to inference, and less than ${10}\%$ is due to training [24]. Even during training, inference is necessary for early stopping and performance monitoring. A training method thus has rather limited utility by itself.
+
+Selecting mini-batches for inference is distinctly different from training. Instead of averaging out stochastic sampling effects over many training steps, we need to ensure that every prediction is as accurate as possible. To achieve this, we propose a theoretical framework for creating mini-batches based on the expected influence of nodes on the outputs. Selecting nodes according to this formulation provably leads to an optimal approximation of the output. The resulting optimization problem shows that we need to distinguish between two classes of nodes: Output nodes and auxiliary nodes. Output nodes are those for which we compute a prediction in this batch, for example a set of validation nodes. Auxiliary nodes provide inputs and define the batch's subgraph. This distinction allows us to choose a meaningful neighborhood for every prediction, while ignoring irrelevant parts of the graph. Note that output nodes in one batch can be auxiliary nodes in another batch.
+
+This distinction furthermore splits mini-batching into two problems: 1. How do we partition output nodes into efficient mini-batches? 2. How do we choose the auxiliary nodes for a given set of output nodes? Having split the problem like this, we see that most previous works either focus exclusively on the first question by only using graph partitions [7] or on the second question and choose a uniformly random subset of nodes as output nodes $\left\lbrack {{21},{41}}\right\rbrack$ . Jointly considering both aspects with an overarching theoretical framework allows for substantial synergy effects. For example, batching nearby output nodes together allows one output node to leverage another one's auxiliary nodes.
+
+We call this overall framework influence-based mini-batching (IBMB). On the practical side, we propose two instantiations of IBMB by approximating the influence between nodes via personalized PageRank (PPR). We use fast approximations of PPR to select auxiliary nodes by their highest PPR scores. Accordingly, we partition output nodes using PPR-based node distances or via graph partitioning. We then use the subgraph induced by these nodes as a mini-batch. IBMB accelerates inference by up to ${130}\mathrm{x}$ compared to previous methods that achieve similar accuracy.
+
+Remarkably, we found that IBMB also works well for training, despite being derived from inference. This is due to the computational advantage of precomputed mini-batches, which can be loaded from a cache to ensure efficient memory accesses. We counteract the negative effect of the resulting sparse mini-batch gradients via adaptive optimization and batch scheduling. Overall, IBMB achieves an up to ${18}\mathrm{x}$ improvement in time per training epoch, with similar final accuracy. This fast runtime more than makes up for any slow-down in convergence per step. Its speed advantage grows even further for the common setting of low label ratios, since our method avoids computation on irrelevant parts of the graph. Our implementation is available online ${}^{1}$ . In summary, our core contributions are:
+
+- Influence-based mini-batching (IBMB): A theoretical framework for selecting mini-batches for GNN inference based on influence scores.
+
+- Practical instantiations of IBMB that work for a variety of GNNs and datasets. They substantially accelerate inference and training without sacrificing accuracy, especially for small label ratios.
+
+- Methods for mitigating the impact of fixed, local mini-batches and sparse gradients on training.
+
+## 2 Background and related work
+
+Graph neural networks. We consider a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ with node set $\mathcal{V}$ and (possibly directed) edge set $\mathcal{E}$ . $N = \left| \mathcal{V}\right|$ denotes the number of nodes, $E = \left| \mathcal{E}\right|$ the number of edges, and $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ the adjacency matrix. GNNs use one embedding per node ${\mathbf{h}}_{u} \in {\mathbb{R}}^{H}$ and edge ${\mathbf{e}}_{\left( uv\right) } \in {\mathbb{R}}^{{H}_{\mathrm{c}}}$ , and update them in each layer via message passing between neighboring nodes. We denote the embedding of node $u$ in layer $l$ as ${\mathbf{h}}_{u}^{\left( l\right) }$ and its $i$ ’th entry as ${\mathbf{h}}_{ui}^{\left( l\right) }$ . Most GNNs can be expressed via the following equations:
+
+$$
+{\mathbf{h}}_{u}^{\left( l + 1\right) } = {f}_{\text{node }}\left( {{\mathbf{h}}_{u}^{\left( l\right) },\mathop{\operatorname{Agg}}\limits_{{v \in {\mathcal{N}}_{u}}}\left\lbrack {{f}_{\mathrm{{msg}}}\left( {{\mathbf{h}}_{u}^{\left( l\right) },{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{e}}_{\left( uv\right) }^{\left( l\right) }}\right) }\right\rbrack }\right) , \tag{1}
+$$
+
+$$
+{\mathbf{e}}_{\left( uv\right) }^{\left( l + 1\right) } = {f}_{\text{edge }}\left( {{\mathbf{h}}_{u}^{\left( l + 1\right) },{\mathbf{h}}_{v}^{\left( l + 1\right) },{\mathbf{e}}_{\left( uv\right) }^{\left( l\right) }}\right) . \tag{2}
+$$
+
+The node and edge update functions ${f}_{\text{node }}$ and ${f}_{\text{edge }}$ , and the message function ${f}_{\text{msg }}$ can be implemented using e.g. linear layers, multi-layer perceptrons (MLPs), and skip connections. The node's neighborhood ${\mathcal{N}}_{u}$ is usually defined directly by the graph $\mathcal{G}$ [27], but can be generalized to consider larger or even global neighborhoods $\left\lbrack {1,{16}}\right\rbrack$ , or feature similarity $\left\lbrack {10}\right\rbrack$ . The most common aggregation function Agg is summation, but multiple other alternatives have also been explored [9, 17]. Edge embeddings ${\mathbf{e}}_{\left( uv\right) }$ are often not used in GNNs, but some variants rely on them exclusively [6].
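+To make Eqs. (1) and (2) concrete, the following is a minimal sketch of one message-passing layer with sum aggregation and linear message and update functions; the class name, argument names, and edge direction convention are illustrative assumptions rather than a specific GNN from the literature.
+
+```python
+import torch
+import torch.nn as nn
+
+class MessagePassingLayer(nn.Module):
+    """Minimal sketch of Eqs. (1)-(2): sum aggregation over neighbors with
+    linear message, node, and edge update functions."""
+    def __init__(self, d_node, d_edge):
+        super().__init__()
+        self.f_msg = nn.Linear(2 * d_node + d_edge, d_node)
+        self.f_node = nn.Linear(2 * d_node, d_node)
+        self.f_edge = nn.Linear(2 * d_node + d_edge, d_edge)
+
+    def forward(self, h, e, edge_index):
+        # edge_index[:, k] = (v, u): messages flow from sender v to receiver u
+        src, dst = edge_index
+        msg = self.f_msg(torch.cat([h[dst], h[src], e], dim=-1))             # f_msg(h_u, h_v, e_uv)
+        agg = torch.zeros_like(h).index_add_(0, dst, msg)                    # Agg = sum over N_u
+        h_new = self.f_node(torch.cat([h, agg], dim=-1))                     # Eq. (1)
+        e_new = self.f_edge(torch.cat([h_new[dst], h_new[src], e], dim=-1))  # Eq. (2)
+        return h_new, e_new
+
+# Tiny usage example (illustrative only)
+layer = MessagePassingLayer(d_node=8, d_edge=4)
+h = torch.randn(5, 8)                           # 5 nodes
+e = torch.randn(3, 4)                           # 3 edges
+edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
+h_next, e_next = layer(h, e, edge_index)
+```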
+
+Scalable GNNs. Multiple works have proposed massively scalable GNNs that leverage the peculiarities of message passing to condense it into a single step, akin to label or feature propagation $\left\lbrack {4,{14}}\right\rbrack$ . Our work focuses on general, model-agnostic scalability methods.
+
+Scalable graph learning. Classical graph learning faced issues similar to GNNs when scaling to large graphs. Multiple frameworks for distributed graph computations were proposed to solve this without approximations or sampling $\left\lbrack {{19},{28},{31},{32}}\right\rbrack$ . Other works scaled to large graphs via stochastic variational inference, e.g. by sampling nodes and node pairs [20]. Interestingly, this approach is quite similar to sampling-based mini-batching for GNNs.
+
+Mini-batching for GNNs. Previous mini-batching methods can largely be divided into three categories: Node-wise sampling, layer-wise sampling, and subgraph-based sampling [29]. In node-wise sampling, we obtain a separate set of auxiliary nodes for every output node, which are sampled independently for each message passing step. Each output node is treated independently; if two output nodes sample the same auxiliary node, we compute its embedding twice [21,30,39]. Layer-wise sampling jointly considers all output nodes of a batch to compute a stochastic set of activations in each layer. Computations on auxiliary nodes are thus shared [5, 23, 41]. Subgraph-based sampling selects a meaningful subgraph and then runs the GNN on this subgraph as if it were the full graph. This method thus computes the outputs and intermediate embeddings of all nodes in that subgraph $\left\lbrack {7,{40}}\right\rbrack$ . Our method most closely resembles the subgraph-based sampling approach. However, IBMB considers both output and auxiliary nodes, resulting in better batches, and only computes the output of predetermined output nodes, similar to node-wise sampling. Note that mini-batch generation is an orthogonal problem to training frameworks such as GNNAutoScale [13]. We can also use IBMB to provide mini-batches as part of GNNAutoScale.
+
+---
+
+${}^{1}$ https://figshare.com/s/f615b330391677014bc5
+
+---
+
+## 3 Influence-based mini-batching
+
+Influence scores. To effectively create graph-based mini-batches we must first quantify how important one node is for another node's prediction. As proposed by Xu et al. [38], we can do this via the influence score, which determines the local sensitivity of the output at node $u$ on the input at node $v$ as:
+
+$$
+I\left( {v, u}\right) = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}\left| \frac{\partial {\mathbf{h}}_{ui}^{\left( L\right) }}{\partial {\mathbf{X}}_{vj}}\right| , \tag{3}
+$$
+
+where ${\mathbf{h}}_{ui}^{\left( L\right) }$ is the $i$ ’th entry in the embedding of node $u$ in the last layer $L$ . The influence score provides a crisp understanding of how to select nodes for inference when we only have knowledge of the graph, not the model or the node features. To prove this formally, we consider a slightly limited class of GNNs and model our lack of knowledge via a randomization assumption of ReLU activations, similar to Choromanska et al. [8], and by assuming that all nodes have the same expected features, yielding (proof in App. A):
+
+Theorem 1. Given a GNN with linear, graph-dependent aggregation and ReLU activations. Assume that all paths in the model’s computation graph are activated with the same probability $\rho$ and nodes have features with expected value $\mathbb{E}\left\lbrack {X}_{v, i}\right\rbrack = {\chi }_{i}$ . If we restrict the model input features to a set of auxiliary nodes ${\mathcal{S}}_{\text{aux }} \subseteq \mathcal{V}$ , then the error
+
+$$
+{\begin{Vmatrix}{\widetilde{\mathbf{h}}}_{u}^{\left( L\right) } - {\mathbf{h}}_{u}^{\left( L\right) }\end{Vmatrix}}_{1} \tag{4}
+$$
+
+between the approximate logits ${\widetilde{\mathbf{h}}}_{u}^{\left( L\right) }$ and the true logits ${\mathbf{h}}_{u}^{\left( L\right) }$ is minimized, in expectation, by selecting the nodes $v \in {\mathcal{S}}_{\text{aux }}$ with maximum influence score $I\left( {v, u}\right)$ .
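+To make the influence score of Eq. (3) and the setting of Theorem 1 more tangible, here is a small sketch that computes $I(v, u)$ by automatic differentiation on a toy two-layer GNN with mean (random-walk) aggregation; the model, the random graph, and all names are hypothetical and serve only as an illustration.
+
+```python
+import torch
+
+class ToyMeanGNN(torch.nn.Module):
+    """Two-layer GNN with mean aggregation, used only to illustrate Eq. (3)."""
+    def __init__(self, d_in, d_hid, d_out):
+        super().__init__()
+        self.lin1 = torch.nn.Linear(d_in, d_hid)
+        self.lin2 = torch.nn.Linear(d_hid, d_out)
+
+    def forward(self, X, P):                     # P = D^{-1} A (row-normalized adjacency)
+        h = torch.relu(P @ self.lin1(X))
+        return P @ self.lin2(h)
+
+def influence_scores(model, X, P, u):
+    """I(v, u) for all nodes v: sum of absolute partial derivatives of the
+    output logits of node u w.r.t. the input features of node v (Eq. (3))."""
+    X = X.clone().requires_grad_(True)
+    logits_u = model(X, P)[u]
+    scores = torch.zeros(X.shape[0])
+    for i in range(logits_u.shape[0]):           # sum over output dimensions i
+        grad_X, = torch.autograd.grad(logits_u[i], X, retain_graph=True)
+        scores += grad_X.abs().sum(dim=1)        # sum over input features j
+    return scores                                # scores[v] = I(v, u)
+
+# Usage on a small random graph (illustrative only)
+N, d = 6, 4
+A = (torch.rand(N, N) < 0.4).float()
+A.fill_diagonal_(1.0)
+P = A / A.sum(dim=1, keepdim=True)
+I_u = influence_scores(ToyMeanGNN(d, 8, 3), torch.randn(N, d), P, u=0)
+```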
+
+Formalizing mini-batching. We can leverage this insight by formalizing mini-batching as the optimization problem (5)
+
+
+
+where $\mathbb{P}\left( {\mathcal{V}}_{\text{out }}\right)$ denotes the set of partitions of the output nodes ${\mathcal{V}}_{\text{out }}, b$ the number of batches, and $B$ the maximum batch size. This optimization yields two results: The output node partition ${P}_{\text{out }}$ and the auxiliary node set for each batch of output nodes, ${\mathcal{S}}_{\text{aux }}$ . The hyperparameter $B$ is determined by the available (GPU) memory, while $b$ trades off runtime and approximation quality. This formulation optimizes the average approximation across all outputs. This might not be ideal since some nodes might already be approximated well with a lower number of auxiliary nodes. We can instead focus on the worst-case approximation by optimizing the minimum aggregate influence score as (6)
+
+
+
+Both Eqs. (5) and (6) split the mini-batching problem into three parts: Output node partitioning, auxiliary node selection, and influence score computation. We call this approach influence-based mini-batching (IBMB).
+
+Computing influence scores. The model's influence score depends on various model details, especially when considering exact, trained models. In many cases we can calculate the expected influence score by making simplifying assumptions, similar to Theorem 1. This allows tailoring the mini-batching method to the exact model of interest. For the remainder of this work we will focus our analysis on the broad class of models that use the average as an aggregation function, such as GCN. In this case, the influence is proportional to a slightly modified random walk with $L$ steps [38]. We additionally remove the influence score’s dependence on the number of layers $L$ by using the limit distribution of random walks with restart, also known as personalized PageRank (PPR), as proposed by Gasteiger et al. [15]. Notably, PPR even works as a measure of influence for models with more complex, data-dependent influence scores, such as GAT (see Sec. 5). The PPR matrix is given by
+
+$$
+{\mathbf{\Pi }}^{\mathrm{{ppr}}} = \alpha {\left( {\mathbf{I}}_{N} - \left( 1 - \alpha \right) {\mathbf{D}}^{-1}\mathbf{A}\right) }^{-1}, \tag{7}
+$$
+
+with the teleport probability $\alpha \in (0,1\rbrack$ and the diagonal degree matrix ${\mathbf{D}}_{ii} = \mathop{\sum }\limits_{k}{\mathbf{A}}_{ik}$ . The entry ${\Pi }_{uv}^{\mathrm{{ppr}}}$ then provides a measure for the influence of node $v$ on $u$ . Calculating the above inverse is obviously infeasible for large graphs. However, we can approximate ${\mathbf{\Pi }}^{\mathrm{{ppr}}}$ with a sparse matrix ${\widetilde{\mathbf{\Pi }}}^{\mathrm{{ppr}}}$ in time $\mathcal{O}\left( \frac{1}{\varepsilon \alpha }\right)$ per row, with error $\varepsilon \deg \left( v\right)$ [2]. Importantly, this approximation uses only the node’s local neighborhood, making its runtime independent of the overall graph size and thus massively scalable. Furthermore, the calculation is deterministic and model-independent, so we only need to perform this computation once during preprocessing.
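+For intuition, the PPR matrix of Eq. (7) can be evaluated directly with dense linear algebra on a small graph, as in the sketch below; this is only meant to illustrate the formula, since for large graphs the paper relies on the local, approximate algorithm of [2]. The function and variable names are illustrative.
+
+```python
+import numpy as np
+
+def dense_ppr(A, alpha=0.25):
+    """Eq. (7): Pi = alpha * (I - (1 - alpha) * D^{-1} A)^{-1}, with D_ii = sum_k A_ik.
+    Row u of Pi contains the influence of every node v on u. Small graphs only."""
+    N = A.shape[0]
+    D_inv = np.diag(1.0 / np.maximum(A.sum(axis=1), 1e-12))
+    return alpha * np.linalg.inv(np.eye(N) - (1.0 - alpha) * D_inv @ A)
+
+# Example on a 4-node graph (illustrative only)
+A = np.array([[0, 1, 1, 0],
+              [1, 0, 1, 0],
+              [1, 1, 0, 1],
+              [0, 0, 1, 0]], dtype=float)
+Pi = dense_ppr(A, alpha=0.25)      # Pi[u, v]: influence of node v on node u
+```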
+
+### 3.1 Auxiliary node selection
+
+Node-wise selection. Selecting auxiliary nodes on large graphs requires a method that efficiently yields nodes with highest expected influence. Fortunately, there is a well-developed literature of methods for finding the top-k PPR nodes. The classic approximate PPR method [2] is guaranteed to provide all nodes with a PPR value ${\Pi }_{uv}^{\mathrm{{ppr}}} > \varepsilon \deg \left( v\right)$ w.r.t. the root (output) node $u$ . Optimizing auxiliary nodes by the worst-case influence score (Eq. (6)) thus equates to separately running approximate PPR for each output node in a batch ${\mathcal{S}}_{\text{out }}$ , and then merging them.
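+The following sketch shows a simplified, non-lazy push-style approximation of PPR in the spirit of Andersen et al. [2], together with node-wise top-$k$ auxiliary node selection; the exact push rule, termination condition, and all names are illustrative assumptions rather than the implementation used in the paper.
+
+```python
+from collections import deque
+
+def approx_ppr_push(adj_list, root, alpha=0.25, eps=1e-4):
+    """Simplified push-style approximate PPR w.r.t. a single root (output) node.
+    adj_list[u] lists the neighbors of u. Returns a sparse dict of PPR estimates."""
+    p, r = {}, {root: 1.0}                        # estimate and residual mass
+    queue = deque([root])
+    while queue:
+        u = queue.popleft()
+        deg_u = len(adj_list[u])
+        if deg_u == 0 or r.get(u, 0.0) < eps * deg_u:
+            continue
+        res = r.pop(u)
+        p[u] = p.get(u, 0.0) + alpha * res        # keep an alpha fraction at u
+        push = (1.0 - alpha) * res / deg_u        # spread the rest to the neighbors
+        for v in adj_list[u]:
+            r[v] = r.get(v, 0.0) + push
+            if r[v] >= eps * max(len(adj_list[v]), 1):
+                queue.append(v)
+    return p
+
+def nodewise_auxiliary_nodes(adj_list, output_nodes, k, alpha=0.25, eps=1e-4):
+    """Node-wise selection: union of each output node's top-k PPR neighbors."""
+    aux = set(output_nodes)
+    for root in output_nodes:
+        ppr = approx_ppr_push(adj_list, root, alpha, eps)
+        aux.update(sorted(ppr, key=ppr.get, reverse=True)[:k])
+    return aux
+
+# Example on a small path graph 0-1-2-3 (illustrative only)
+adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
+aux_nodes = nodewise_auxiliary_nodes(adj, output_nodes=[0, 1], k=2)
+```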
+
+Batch-wise selection. Considering each output node separately does not take into account how one auxiliary node jointly affects multiple output nodes, as required for the average-case formulation in Eq. (5). Fortunately, PPR calculation can be adapted to use a set of root nodes. To do so, we use a set of nodes in the teleport vector $\mathbf{t}$ instead of a single node, e.g. by leveraging the underlying recursive equation for a PPR vector ${\pi }_{\mathrm{{ppr}}}\left( \mathbf{t}\right) = \left( {1 - \alpha }\right) {\mathbf{D}}^{-1}\mathbf{A}{\pi }_{\mathrm{{ppr}}}\left( \mathbf{t}\right) + \alpha \mathbf{t}$ . Here $\mathbf{t}$ is a one-hot vector in the node-wise setting, while for batch-wise PPR it is $1/\left| {\mathcal{S}}_{\text{out }}\right|$ for all nodes in ${\mathcal{S}}_{\text{out }}$ . This variant is also known as topic-sensitive PageRank. We found that batch-wise PPR is significantly faster than node-wise PPR. However, it can lead to cases where one outlier node receives almost no neighbors, while others have excessively many. Whether node-wise or batch-wise selection performs better thus often depends on the dataset and model.
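+A corresponding sketch of the batch-wise variant runs power iteration on the recursion above with the teleport mass spread over the batch's output nodes (topic-sensitive PageRank); the helper names and the fixed iteration count are assumptions for illustration.
+
+```python
+import numpy as np
+
+def batchwise_ppr(A, output_nodes, alpha=0.25, num_iter=50):
+    """Power iteration on pi = (1 - alpha) * D^{-1} A * pi + alpha * t, where the
+    teleport vector t is uniform over the batch's output nodes."""
+    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)   # D^{-1} A
+    t = np.zeros(A.shape[0])
+    t[list(output_nodes)] = 1.0 / len(output_nodes)
+    pi = t.copy()
+    for _ in range(num_iter):
+        pi = (1.0 - alpha) * P @ pi + alpha * t
+    return pi
+
+def batchwise_auxiliary_nodes(A, output_nodes, budget, alpha=0.25):
+    """Batch-wise selection: keep the `budget` nodes with the highest scores."""
+    pi = batchwise_ppr(A, output_nodes, alpha)
+    return set(np.argsort(-pi)[:budget].tolist()) | set(output_nodes)
+
+# Example (illustrative only) on a small 4-node graph
+A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
+aux = batchwise_auxiliary_nodes(A, output_nodes=[0, 3], budget=3)
+```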
+
+Subgraph generation. Creating mini-batches also requires selecting a subgraph of relevant edges. We do so by using the subgraph induced by the selected output and auxiliary nodes in a batch. Note that the above node selection methods ignore how these changes to the graph affect the influence scores. This is a limitation of these methods. However, PPR is a local clustering method and we can thus expect auxiliary nodes to be well-connected.
+
+### 3.2 Output node partitioning
+
+Optimal partitioning. Finding the optimal node partition in Eqs. (5) and (6) would require trying out every possible partition since a change in ${\mathcal{S}}_{\text{out }}$ can unpredictably affect the optimal choice of auxiliary nodes. Doing so is clearly intractable since the number of partitions grows exponentially with $N$ for a fixed batch size. We thus need to approximate the optimal partition via a scalable heuristic. The implicit goal of this step is finding output nodes that share a large number of auxiliary nodes. One good proxy for these overlaps is the proximity of nodes in the graph.
+
+Distance-based partitioning. We propose two methods that leverage graph locality as a heuristic. The first is based on node distances. In this approach we first compute the pairwise node distances between nodes that are close in the graph. We can use PPR for this as well, since it is also commonly used as a node distance. If we select auxiliary nodes with node-wise PPR, we thus only need to calculate PPR scores once for both steps.
+
+Next, we greedily construct the partition ${P}_{\text{out }}$ from ${\widetilde{\Pi }}^{\text{ppr }}$ . To do so, we start by putting every node $u$ into a separate batch $\{ u\}$ . We then sort all elements in ${\widetilde{\mathbf{\Pi }}}^{\text{ppr }}$ by magnitude, independent of their row or column. We scan over these values in descending order, considering the value’s indices $(u, v)$ and merging the batches containing the two nodes. Afterwards we randomly merge any small leftover batches. We stay within memory constraints by only merging batches that stay below the maximum batch size $B$ . This method achieves well-overlapping batches and can efficiently add incrementally arriving output nodes, e.g. in a streaming setting. Our experiments show that this method achieves a good compromise between well-overlapping batches and good gradients for training (see Sec. 5). Note that the resulting partition is unbalanced, i.e. some sets will be larger than others.
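+A sketch of this greedy construction is shown below; `ppr_entries` stands for the non-zero entries of the sparse PPR matrix between output nodes, and merging leftover batches smallest-first replaces the random merging described above. All names are illustrative.
+
+```python
+def greedy_partition(ppr_entries, output_nodes, max_batch_size):
+    """Greedy distance-based partitioning: start from singleton batches and merge the
+    batches of the node pairs with the largest PPR values, respecting the size limit B.
+    ppr_entries is an iterable of (score, u, v) triples."""
+    batch_of = {u: {u} for u in output_nodes}
+    for score, u, v in sorted(ppr_entries, key=lambda x: -x[0]):
+        bu, bv = batch_of.get(u), batch_of.get(v)
+        if bu is None or bv is None or bu is bv:
+            continue                              # non-output node or already merged
+        if len(bu) + len(bv) > max_batch_size:
+            continue                              # would exceed the memory budget B
+        bu |= bv
+        for w in bv:
+            batch_of[w] = bu
+    # merge small leftover batches (smallest first here; the paper merges randomly)
+    unique = list({id(b): b for b in batch_of.values()}.values())
+    merged, current = [], set()
+    for b in sorted(unique, key=len):
+        if len(current) + len(b) <= max_batch_size:
+            current |= b
+        else:
+            merged.append(current)
+            current = set(b)
+    if current:
+        merged.append(current)
+    return merged
+
+# Example (illustrative only): four output nodes, maximum batch size 2
+batches = greedy_partition([(0.5, 0, 1), (0.4, 2, 3), (0.1, 1, 2)], [0, 1, 2, 3], 2)
+```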
+
+
+
+Figure 1: Practical example of influence-based mini-batching (IBMB). The output nodes are indicated by pentagons. These nodes are first partitioned into batches, e.g. by grouping nearby nodes together. We then use influence scores to select the auxiliary nodes of each batch, e.g. neighbors with top- $k$ personalized PageRank (PPR) scores. Finally, we generate a batch using the induced subgraph of all selected nodes, but only calculate the outputs of the output nodes we chose when partitioning. Batches can overlap and do not need to cover the whole graph.
+
+Graph partitioning. For our second method, we note that partitioning output nodes into overlapping mini-batches is closely related to partitioning graphs. We can thus leverage the extensive research on this topic by using the METIS graph partitioning algorithm [25] to find a partition of output nodes ${P}_{\text{out }}$ . We found that graph partitioning yields roughly twice the overlap of auxiliary nodes compared to distance-based partitioning, thus leading to significantly more efficient batches. However, it also results in worse gradient samples, which we found to be detrimental for training (see Sec. 5). Note that Cluster-GCN also uses graph partitioning, and thus aligns somewhat with the IBMB framework [7]. However, IBMB additionally selects relevant auxiliary nodes and ignores irrelevant parts of the graph. This significantly accelerates training on small training sets and improves the accuracy of output nodes close to the partition boundary.
+
+Computational complexity. Since IBMB ignores irrelevant parts of the graph, inference and training scale linearly in the number of output nodes $\mathcal{O}\left( {N}_{\text{out }}\right)$ . Preprocessing runs in $\mathcal{O}\left( \frac{{N}_{\text{out }}}{\epsilon \alpha }\right)$ for node-wise PPR-based steps, $\mathcal{O}\left( \frac{b}{\epsilon \alpha }\right)$ for batch-wise PPR, and in $\mathcal{O}\left( E\right)$ for graph partitioning. The runtime of IBMB is thus independent of the graph size if we use distance-based partitioning. Fig. 1 gives an overview of the full practical IBMB process.
+
+## 4 Training with IBMB
+
+Computational advantages. The above analysis focused on node outputs, not gradient estimation and training. However, this procedure also has inherent advantages for training, since we need to perform mini-batch generation only once during preprocessing. We can then cache each mini-batch in consecutive blocks of memory, thereby allowing the data to be stored where it is needed and circumventing expensive random data accesses. This significantly accelerates training, allows efficient distributed training, and enables more expensive node selection procedures. In contrast, most previous methods select both output and auxiliary nodes randomly in each epoch, which incurs significant overhead. Our experiments show that IBMB's more efficient memory accesses clearly outweigh the slightly worse gradient estimates (see Sec. 5). This seems counter-intuitive since the deterministic, fixed mini-batches in IBMB only provide sparse, fixed gradient samples. In this section we discuss these aspects and how adaptive optimization and batch scheduling counteract their effects.
+
+
+
+Figure 2: Test accuracy and log. inference time for a fixed GNN. IBMB provides the best accuracy versus time trade-off (top-left corner) in all settings.
+
+Sparse gradients. Partitioning output nodes based on proximity effectively correlates the gradients sampled in a batch. The model thus sees a sparse gradient sample, which does not cover all aspects of the dataset. Fortunately, adaptive optimization methods such as Adagrad and Adam were developed exactly for such sparse gradients $\left\lbrack {{12},{26}}\right\rbrack$ . We furthermore ensure an unbiased training process by using every output (training) node exactly once per epoch.
+
+Fixed batches. Using a fixed set of batches can lead to problems with basic stochastic gradient descent (SGD) as well. Imagine training with two fixed batches whose loss functions have different minima. If training has "converged" to one of these minima, SGD would start to oscillate: It would take one step towards the other minimum, and then back, and so forth. To counteract this oscillation, we could add a "consensus constraint" to enforce a consensus between the weights after different batches, akin to distributed optimization [33]. We can solve this constraint using a primal-dual saddle-point algorithm with directed communication [18]. The resulting dynamics are ${\dot{x}}^{\left( t\right) } = - \nabla {\widetilde{f}}^{\left( t\right) }\left( {x}^{\left( t\right) }\right) - {\alpha \lambda }{\dot{x}}^{\left( t - 1\right) } - {\lambda }^{2}{\dot{x}}^{\left( t - 2\right) }$ , with the weights ${x}^{\left( t\right) }$ at time step $t$ , the learning rate $\lambda$ and the dual variable $\alpha$ . These dynamics resemble SGD with momentum, and fit perfectly into the framework of adaptive optimization methods [34]. Indeed, momentum and adaptive methods suppress the oscillations in the above example with two minima. Accordingly, prior works have also found benefits in deterministically selecting fixed mini-batches [3, 37]. We further improve convergence by adaptively reducing the learning rate when the validation loss plateaus, which ensures that the step size decreases consistently.
+
+Batch scheduling. While Adam with learning rate scheduling consistently ensures convergence, we still observe downward spikes in accuracy during training. To illustrate this issue, consider a sequence of mini-batches. In regular training every mini-batch is similar and the order of these batches is irrelevant. In our case, however, some of the mini-batches might be very similar. If the optimizer sees a series of similar batches, it will take increasingly large steps in a suboptimal direction, which leads to the observed downward spikes in accuracy. We propose to prevent these suboptimal batch sequences by optimizing the order of batches. To quantify batch similarity we measure the symmetrized KL-divergence of the label distribution between batches. In particular, we use the normalized training label distribution ${p}_{i} = {c}_{i}/\mathop{\sum }\limits_{j}{c}_{j}$ , where ${c}_{i}$ is the number of training nodes of class $i$ . This results in the pairwise batch distance ${d}_{ab}$ between batches $a$ and $b$ . We propose two ways to use this for improving the batch schedule: (i) Find the fixed batch cycle that maximizes the batch distances between consecutive batches. This is a traveling salesman problem for finding the maximum distance loop that visits all batches. It is therefore only feasible for a small number of batches. (ii) Sample the next batch weighted by the distance to the current batch. Both scheduling methods improve convergence and increase final accuracy, at almost no cost during training. Overall, our training scheme leads to consistent convergence. Even accumulating gradients over the whole epoch does not significantly change convergence or final accuracy (see Fig. 8).
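+A sketch of the pairwise batch distance and the sampling-based schedule (variant (ii)) is given below; the smoothing constant and all helper names are assumptions for illustration.
+
+```python
+import numpy as np
+
+def label_distribution(labels, num_classes):
+    """Normalized training-label histogram p_i = c_i / sum_j c_j for one batch."""
+    counts = np.bincount(labels, minlength=num_classes).astype(float) + 1e-12
+    return counts / counts.sum()
+
+def symmetrized_kl(p, q):
+    """Symmetrized KL divergence between two label distributions."""
+    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
+
+def batch_distances(batch_labels, num_classes):
+    """Pairwise distances d_ab between batches, from each batch's training labels."""
+    ps = [label_distribution(l, num_classes) for l in batch_labels]
+    d = np.zeros((len(ps), len(ps)))
+    for a in range(len(ps)):
+        for b in range(len(ps)):
+            d[a, b] = symmetrized_kl(ps[a], ps[b])
+    return d
+
+def sample_next_batch(d, current, rng=None):
+    """Variant (ii): sample the next batch with probability proportional to its
+    distance from the current batch, so similar batches rarely follow each other."""
+    rng = rng or np.random.default_rng()
+    w = d[current].copy()
+    w[current] = 0.0
+    return int(rng.choice(len(w), p=w / w.sum()))
+
+# Example with three batches of training labels (illustrative only)
+d = batch_distances([np.array([0, 0, 1]), np.array([1, 1, 2]), np.array([0, 2, 2])], 3)
+next_batch = sample_next_batch(d, current=0)
+```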
+
+
+
+Figure 3: Training convergence of validation accuracy in log. time. Average and 95% confidence interval of 10 runs. GraphSAINT-RW does not reach the shown accuracy range in some settings due to its bad validation performance. IBMB converges the fastest.
+
+## 5 Experiments
+
+Experimental setup. We show results for two variants of our method: IBMB with PPR distance-based batches and node-wise PPR clustering (node-wise IBMB), and IBMB with graph partition-based batches and batch-wise PPR clustering (batch-wise IBMB). We also experimented with the two other combinations of the output node partitioning and auxiliary node selection variants, but found these two to work best. We compare them to four state-of-the-art mini-batching methods: Neighbor sampling [21], Layer-Dependent Importance Sampling (LADIES) [41], GraphSAINT-RW [40], and Cluster-GCN [7]. We use four large node classification datasets for evaluation: ogbn-arxiv [22, 36, ODC-BY], ogbn-products [36, Amazon license], Reddit [21], and ogbn-papers100M [22, 36, ODC-BY]. While these datasets use the transductive setting, IBMB makes no assumptions about this and can equally be applied to the inductive setting. We skip the common, small datasets (Cora, Citeseer, PubMed) since they are ill-suited for evaluating scalability methods. We do not strive to set a new accuracy record but instead aim for a consistent, fair comparison based on three standard GNNs: graph convolutional networks (GCN) [27], graph attention networks (GAT) [35], and GraphSAGE [21]. We use the same training pipeline for all methods, giving them access to the same optimizations. We run each experiment 10 times and report the mean and standard deviation in all tables and the bootstrapped mean and 95% confidence intervals in all figures. We fully pipeline data loading and batch creation by prefetching batches in parallel. We found that using more than one worker for this does not improve runtime, most likely because data loading is limited by the memory bandwidth, which is shared between workers. We keep GPU memory usage constant between methods, and tune all remaining hyperparameters for both IBMB and the baselines. See App. B for full experimental details.
+
+
+
+
+
+Figure 4: Training convergence in log. time for GCN on ogbn-products with smaller training sets. The gap in convergence speed between IBMB and the baselines grows larger for small training sets, since IBMB scales with training set size and not with overall graph size.
+
+Figure 5: Trained accuracy for node-wise IBMB, depending on the output nodes per batch (GCN, ogbn-arxiv). IBMB is rather insensitive to this choice.
+
+Inference. Fig. 2 compares the inference accuracy and time of different batching methods, using the same pretrained model and varying computational budgets (number of auxiliary nodes/sampled nodes) at a fixed GPU memory budget. IBMB provides the best trade-off between accuracy and time in all settings. Node-wise IBMB performs better than graph partitioning, except on ogbn-products. IBMB provides a significant speedup over chunking-based full-batch inference on GPU, being 10 to 900 times faster at comparable accuracy. All previous methods are either significantly slower or less accurate.
+
+Training. To evaluate training performance we compare how fast the validation accuracy converges for each method. Since full inference is too slow to execute every epoch we use the same mini-batching method to approximate inference. Fig. 3 shows how the accuracy increases depending on training time. IBMB performs significantly better than previous methods, converging up to ${17}\mathrm{x}$ faster than all baselines. This is despite the fact that we always prefetch the next batch in parallel. Note that GAT is slower to compute than GCN and GraphSAGE, limiting the positive impact of a fast batching method. Compute-constrained models like GAT are less relevant in practice since data access is typically the bottleneck for GNNs on large, often even disk-based datasets [4]. Table 6 in the appendix furthermore shows that IBMB's time per epoch is significantly faster than all sampling-based methods. Cluster-GCN has a comparable runtime, which is expected due to its similarity with IBMB. However, it converges more slowly than IBMB and reaches substantially lower final accuracy. Neighbor sampling achieves good final accuracy, but is extremely slow. GraphSAINT-RW only achieves good final accuracy with prohibitively expensive full-batch inference. Node-wise IBMB achieves the best final accuracy with a scalable inference method in 8 out of 10 settings. On ogbn-papers100M, IBMB has a substantially faster time per epoch and lower memory consumption than previous methods, demonstrating IBMB's favorable scaling with dataset size. It even performs as well as SIGN-XL $\left( {\left( {{66.1} \pm {0.2}}\right) \% }\right) \left\lbrack {14}\right\rbrack$ , while using ${30}\mathrm{x}$ fewer parameters and no hyperparameter tuning. Notably, we were unable to evaluate GraphSAINT-RW and Cluster-GCN on this dataset, since they use more than ${256}\mathrm{{GB}}$ of main memory.
+
+Preprocessing. IBMB requires more preprocessing than previous methods. However, since IBMB is rather insensitive to hyperparameter choices (see Fig. 5, Table 5), preprocessing rarely needs to be re-run. Instead, its result can be saved to disk and re-used for training different models. Just considering our 10 training seeds, preprocessing of node-wise IBMB only took 1.3% of the training time for GCN and ${0.25}\%$ for GAT on ogbn-arxiv.
+
+Training set size. The ogbn-arxiv and ogbn-products datasets both contain a large number of training nodes (91k and 197k, respectively). However, labeling training samples is often an expensive endeavor, and models are commonly trained with only a few hundred or thousand training samples. GraphSAINT-RW and Cluster-GCN are global training methods, i.e. they always use the whole graph for training. They are thus ill-suited for the common setting of a large overall graph containing a small number of training nodes (resulting in a small label rate). In contrast, the training time of IBMB purely scales with the number of training nodes. To demonstrate this, we reduce the label rate by sub-sampling the training nodes of ogbn-products and compare the convergence in Fig. 4. As expected, the gap in convergence speed between IBMB and both Cluster-GCN and GraphSAINT-RW grows even larger for smaller training sets.
+
+
+
+Figure 6: Convergence per time for training GCN on ogbn-arxiv. Both batch-wise and node-wise IBMB lead to faster convergence than fixed random batches.
+
+Figure 7: Batch scheduling for GAT on ogbn-arxiv. Optimal batch order prevents downward spikes in accuracy and leads to higher final accuracy.
+
+Figure 8: Gradient accumulation for batch-wise IBMB on GCN, ogbn-arxiv. The difference is minor, even when accumulating over the full epoch.
+
+Ablation studies. We ablate our output node partitioning schemes by instead batching together random sets of nodes. We use fixed batches since we found that resampling incurs significant overhead without benefits - which is consistent with our considerations on gradient samples and contiguous memory accesses. Fig. 6 shows that this method ("Fixed random") converges more slowly and does not reach the same level of accuracy as our partition schemes. Node-wise IBMB converges the fastest, which suggests a trade-off between full gradient samples (random batching) and maximum batch overlap (graph partitioning). Fig. 2 shows that random batching ("IBMB, random batch.") is also substantially slower and often less accurate for inference. This is due to the synergy effects of output node partitioning: If output nodes have similar auxiliary nodes, they benefit from each other's neighborhood. We can ablate auxiliary node selection by comparing IBMB to Cluster-GCN, since it just uses the graph partition as a batch instead of smartly selecting auxiliary nodes. We use the graph partition size as the number of auxiliary nodes for batch-wise IBMB to allow for a direct comparison. As discussed above, Cluster-GCN consistently performs worse, especially in terms of final accuracy, for inference, and for small label rates. Finally, Fig. 7 compares the proposed batch scheduling methods. Optimal and weighted sampling-based scheduling improve convergence and prevent or reduce downward spikes in accuracy.
+
+Gradient accumulation. Accumulating gradients across multiple batches is a method for smoothing batches if the gradient noise is too high. We might expect this to happen in IBMB due to its sparse gradients. However, Fig. 8 shows that gradient accumulation in fact only has an insignificant effect on IBMB, demonstrating its stability during training.
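+For completeness, gradient accumulation over consecutive IBMB batches can be sketched as follows; the batch format, the loss function, and all names are placeholders rather than the actual training loop used in the experiments.
+
+```python
+def train_epoch_with_accumulation(model, optimizer, loss_fn, batches, accum_steps=4):
+    """Sum gradients over `accum_steps` consecutive mini-batches before each optimizer
+    step; every batch provides (features, subgraph, labels, output_mask)."""
+    optimizer.zero_grad()
+    for step, (features, subgraph, labels, output_mask) in enumerate(batches):
+        logits = model(features, subgraph)
+        loss = loss_fn(logits[output_mask], labels[output_mask]) / accum_steps
+        loss.backward()                      # gradients accumulate in the .grad buffers
+        if (step + 1) % accum_steps == 0:
+            optimizer.step()
+            optimizer.zero_grad()
+```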
+
+Sensitivity analysis. IBMB is largely insensitive to different local clustering methods and hyperparameters for selecting auxiliary nodes (see Table 5). Even increasing the number of output nodes per batch with a fixed number of auxiliary nodes per output node only has a minor impact on accuracy, especially above 1000 output nodes per batch, as shown by Fig. 5. IBMB performs well even in extremely constrained settings with small batches of 100 output nodes per batch. In practice, IBMB only has a single free hyperparameter to choose: The number of auxiliary nodes per output node, which allows optimizing for accuracy or speed. The number of output nodes per batch is then given by the available GPU memory, while the local clustering method and other hyperparameters are not important.
+
+## 6 Conclusion
+
+We propose influence-based mini-batching (IBMB), a method for extracting batches for GNNs. IBMB formalizes creating batches for inference by optimizing the influence on the output nodes. Remarkably, with the right training scheme IBMB even performs well during training. It improves training convergence by up to ${17}\mathrm{x}$ and inference time by up to ${130}\mathrm{x}$ compared to previous methods that reach similar accuracy. These improvements grow even larger in the common setting of sparse labels and when the pipeline is constrained by data access speed.
+
+References
+
+[1] Uri Alon and Eran Yahav. On the Bottleneck of Graph Neural Networks and its Practical Implications. In ICLR, 2021. 2
+
+[2] R. Andersen, F. Chung, and K. Lang. Local Graph Partitioning using PageRank Vectors. In FOCS, 2006. 4, 14
+
+[3] Subhankar Banerjee and Shayok Chakraborty. Deterministic Mini-batch Sequencing for Training Deep Neural Networks. In AAAI, 2021. 6
+
+[4] Aleksandar Bojchevski, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. Scaling Graph Neural Networks with Approximate PageRank. In KDD, 2020. 2, 8
+
+[5] Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In ICLR, 2018. 2
+
+[6] Zhengdao Chen, Lisha Li, and Joan Bruna. Supervised Community Detection with Line Graph Neural Networks. In ICLR, 2019. 2
+
+[7] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. In KDD, 2019. 1, 2, 5, 7
+
+[8] Anna Choromanska, Yann LeCun, and Gérard Ben Arous. Open Problem: The landscape of the loss surfaces of multilayer networks. In COLT, 2015. 3, 12
+
+[9] Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. Principal Neighbourhood Aggregation for Graph Nets. In NeurIPS, 2020. 2
+
+[10] Chenhui Deng, Zhiqiang Zhao, Yongyu Wang, Zhiru Zhang, and Zhuo Feng. GraphZoom: A Multi-level Spectral Approach for Accurate and Scalable Graph Embedding. In ICLR, 2020. 2
+
+[11] Johann Dréo, Alain Pétrowski, Patrick Siarry, and Eric Taillard. Metaheuristics for Hard Optimization: Methods and Case Studies. 2006. 13
+
+[12] John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121-2159, 2011. 6
+
+[13] Matthias Fey, Jan E. Lenssen, Frank Weichert, and Jure Leskovec. GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings. In ICML, 2021. 3
+
+[14] Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Ben Chamberlain, Michael Bronstein, and Federico Monti. SIGN: Scalable Inception Graph Neural Networks. arXiv, 2004.11198, 2020. 2, 8
+
+[15] Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Günnemann. Predict then Propagate: Graph Neural Networks Meet Personalized PageRank. In ICLR, 2019. 4
+
+[16] Johannes Gasteiger, Stefan Weißenberger, and Stephan Günnemann. Diffusion Improves Graph Learning. In NeurIPS, 2019. 2
+
+[17] Simon Geisler, Daniel Zügner, and Stephan Günnemann. Reliable Graph Neural Networks via Robust Aggregation. In NeurIPS, 2020. 2
+
+[18] Bahman Gharesifard and Jorge Cortés. Distributed Continuous-Time Convex Optimization on Weight-Balanced Digraphs. IEEE Transactions on Automatic Control, 59(3):781-786, 2014. 6
+
+[19] Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin. Power-Graph: Distributed Graph-Parallel Computation on Natural Graphs. In OSDI, 2012. 2
+
+[20] Prem K. Gopalan, Sean Gerrish, Michael Freedman, David Blei, and David Mimno. Scalable Inference of Overlapping Communities. In NeurIPS, 2012. 2
+
+[21] William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. In NeurIPS, 2017. 1, 2, 7
+
+[22] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. In NeurIPS, 2020. 7
+
+[23] Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive Sampling Towards Fast Graph Representation Learning. In NeurIPS, 2018. 2
+
+[24] Gadi Hutt, Vibhav Viswanathan, and Adam Nadolski. Deliver high performance ML inference with AWS Inferentia, 2019. 1
+
+[25] George Karypis and Vipin Kumar. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. SIAM Journal on Scientific Computing, 20(1):359-392, 1998. 5
+
+[26] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR, 2015. 6
+
+[27] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, 2017. 2, 7
+
+[28] Aapo Kyrola, Guy Blelloch, and Carlos Guestrin. GraphChi: Large-Scale Graph Computation on Just a PC. In OSDI, 2012. 2
+
+[29] Xin Liu, Mingyu Yan, Lei Deng, Guoqi Li, Xiaochun Ye, and Dongrui Fan. Sampling Methods for Efficient Training of Graph Convolutional Networks: A Survey. IEEE/CAA Journal of Automatica Sinica, 9(2):205-234, 2022. 2
+
+[30] Ziqi Liu, Zhengwei Wu, Zhiqiang Zhang, Jun Zhou, Shuang Yang, Le Song, and Yuan Qi. Bandit Samplers for Training Graph Neural Networks. In NeurIPS, 2020. 2
+
+[31] Yucheng Low, Danny Bickson, Joseph Gonzalez, Carlos Guestrin, Aapo Kyrola, and Joseph M. Hellerstein. Distributed GraphLab: a framework for machine learning and data mining in the cloud. In VLDB, 2012. 2
+
+[32] Grzegorz Malewicz, Matthew H. Austern, Aart J.C Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. Pregel: a system for large-scale graph processing. In SIGMOD, 2010. 2
+
+[33] Reza Olfati-Saber, J. Alex Fax, and Richard M. Murray. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE, 95(1):215-233, 2007. 6
+
+[34] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the Convergence of Adam and Beyond. In ICLR, 2018. 6
+
+[35] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In ICLR, 2018. 7
+
+[36] Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, and Anshul Kanakia. Microsoft Academic Graph: When experts are not enough. Quantitative Science Studies, 1(1):396-413, 2020. 7
+
+[37] Shengjie Wang, Wenruo Bai, Chandrashekhar Lavania, and Jeff Bilmes. Fixing Mini-batch Sequences with Hierarchical Robust Partitioning. In AISTATS, 2019. 6
+
+[38] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation Learning on Graphs with Jumping Knowledge Networks. In ICML, 2018. 3
+
+[39] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. In KDD, 2018. 2
+
+[40] Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. GraphSAINT: Graph Sampling Based Inductive Learning Method. In ICLR, 2020. 2, 7
+
+[41] Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan Gu. Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks. In NeurIPS, 2019. 1, 2, 7
+
+## A Proof of Theorem 1
+
+Path-based view of neural networks. We can view a neural network with ReLUs as a directed acyclic computational graph and express the $i$ ’th output logit via paths through this graph as
+
+$$
+{\mathbf{h}}_{i}^{\left( L\right) } = \frac{1}{{\lambda }^{\left( {H - 1}\right) /2}}\mathop{\sum }\limits_{{q = 1}}^{\phi }{Z}_{i, q}{X}_{i, q}\mathop{\prod }\limits_{{l = 1}}^{L}{w}_{i, q}^{\left( l\right) }, \tag{8}
+$$
+
+where $\lambda$ is a constant related to the size of the network [8] and $\phi$ is the total number of paths. Furthermore, ${Z}_{i, q} \in \{ 0,1\}$ denotes whether path $q$ is active; a path is deactivated whenever any ReLU along it is inactive. ${X}_{i, q}$ represents the input feature used in the $q$ -th path of logit $i$ , and ${w}_{i, q}^{\left( l\right) }$ the used entry of the weight matrix ${W}_{l}$ in layer $l$ .
+
+Path-based view of GNNs. We can extend this framework to graph neural networks by additionally introducing paths $p$ through the (data-based) graph, starting from the auxiliary node $v$ and ending at the output node $u$ , as
+
+$$
+{\mathbf{h}}_{u, i}^{\left( L\right) } = \frac{1}{{\lambda }^{\left( {H - 1}\right) /2}}\mathop{\sum }\limits_{{v \in \mathcal{V}}}\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\sum }\limits_{{q = 1}}^{\phi }{Z}_{v, p, i, q}{X}_{v, p, i, q}\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }{w}_{i, q}^{\left( l\right) }, \tag{9}
+$$
+
+where $\psi$ is the total number of graph-based paths and ${a}_{v, p}^{\left( l\right) }$ denotes the graph-dependent but feature-independent aggregation weights. Note that ${a}_{v, p}^{\left( l\right) }$ depends on the whole path $(v, p)$ and can thus be a function of any node and edge on this path, including the current and next layer's nodes.
+
+Expected influence score. To obtain the influence score, we calculate the derivative
+
+$$
+\frac{\partial {\mathbf{h}}_{u, i}^{\left( L\right) }}{\partial {X}_{v, j}} = \frac{1}{{\lambda }^{\left( {H - 1}\right) /2}}\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\sum }\limits_{{q = 1}}^{{\phi }^{\prime }}{Z}_{v, p, i, q}\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }{w}_{i, q}^{\left( l\right) }, \tag{10}
+$$
+
+with ${X}_{v, j}$ denoting input feature $j$ at node $v$ and ${\phi }^{\prime }$ denoting the number of computational paths with input feature $j$ . To simplify this expression, we use the assumption that all paths $(v, p, i, q)$ are activated with the same probability $\rho$ , i.e. $\mathbb{E}\left\lbrack {Z}_{v, p, i, q}\right\rbrack = \rho$ , and compute the expectation:
+
+$$
+\mathbb{E}\left\lbrack \frac{\partial {\mathbf{h}}_{u, i}^{\left( L\right) }}{\partial {X}_{v, j}}\right\rbrack = \frac{1}{{\lambda }^{\left( {H - 1}\right) /2}}\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\sum }\limits_{{q = 1}}^{{\phi }^{\prime }}\rho \mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }{w}_{i, q}^{\left( l\right) }
+$$
+
+$$
+= \frac{\rho }{{\lambda }^{\left( {H - 1}\right) /2}}\left( {\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }}\right) \left( {\mathop{\sum }\limits_{{q = 1}}^{{\phi }^{\prime }}\mathop{\prod }\limits_{{l = 1}}^{L}{w}_{i, q}^{\left( l\right) }}\right) . \tag{11}
+$$
+
+The only node-dependent term in the expected influence score $\mathbb{E}\left\lbrack {I\left( {v, u}\right) }\right\rbrack = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}\mathbb{E}\left\lbrack \left| \frac{\partial {h}_{u, i}^{\left( L\right) }}{\partial {X}_{v, j}}\right| \right\rbrack$ is thus $\left| {\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }}\right|$ .
+
+Expected output. We similarly obtain the expected output by additionally using the assumption that features have a node-independent expected value $\mathbb{E}\left\lbrack {X}_{v, p, i, q}\right\rbrack = {\chi }_{i, q}$ , yielding
+
+$$
+\mathbb{E}\left\lbrack {\mathbf{h}}_{u, i}^{\left( L\right) }\right\rbrack = \frac{1}{{\lambda }^{\left( {H - 1}\right) /2}}\mathop{\sum }\limits_{{v \in \mathcal{V}}}\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\sum }\limits_{{q = 1}}^{\phi }\rho {\chi }_{i, q}\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }{w}_{i, q}^{\left( l\right) } \tag{12}
+$$
+
+$$
+= \frac{\rho }{{\lambda }^{\left( {H - 1}\right) /2}}\mathop{\sum }\limits_{{v \in \mathcal{V}}}\left( {\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }}\right) \left( {\mathop{\sum }\limits_{{q = 1}}^{\phi }{\chi }_{i, q}\mathop{\prod }\limits_{{l = 1}}^{L}{w}_{i, q}^{\left( l\right) }}\right) .
+$$
+
+Again, the only node-dependent term in the expected output is $\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }$ . Adding any input node thus changes node $u$ ’s output in absolute terms by
+
+$$
+\left| {\mathop{\sum }\limits_{{p = 1}}^{\psi }\mathop{\prod }\limits_{{l = 1}}^{L}{a}_{v, p}^{\left( l\right) }}\right| C = \mathbb{E}\left\lbrack {I\left( {v, u}\right) }\right\rbrack {C}^{\prime }, \tag{13}
+$$
+
+with $C$ and ${C}^{\prime }$ denoting all node-independent terms. Selecting the input nodes with maximum influence score $I\left( {v, u}\right)$ thus minimizes the ${L}_{1}$ norm of the approximation error. Note that this choice only considers the effect of selecting input nodes. It does not model the effect of changing the graph.
+
+Table 1: Number of batches for batch-wise IBMB.
+
+| Model | Dataset | Train | Validation | Test |
+| --- | --- | --- | --- | --- |
+| GCN | ogbn-arxiv | 4 | 2 | 2 |
+| GCN | ogbn-products | 16 | 8 | 8 |
+| GCN | Reddit | 8 | 4 | 4 |
+| GAT | ogbn-arxiv | 8 | 4 | 4 |
+| GAT | ogbn-products | 1024 | 512 | 512 |
+| GAT | Reddit | 400 | 200 | 200 |
+| GCN | ogbn-papers100M | 256 | 32 | 48 |
+
+## B Model and training details
+
+Hardware. All experiments are run on an NVIDIA GeForce GTX 1080Ti. The experiments on ogbn-arxiv and ogbn-products use up to 64 GB of main memory. The experiments on ogbn-papers100M use up to 256 GB.
+
+Packages. Our experiments are based on the following packages and versions:
+
+- torch-geometric 1.7.0
+
+- torch-cluster 1.5.9
+
+- torch-scatter 2.0.6
+
+- torch-sparse 0.6.9
+
+- python 3.7.10
+
+- ogb 1.3.1
+
+- torch 1.8.1
+
+- cudatoolkit 10.2.89
+
+- numba 0.53.1
+
+- python-tsp 0.2.0
+
+Preprocessing. Before training, we first make the graph undirected, and add self-loops. The adjacency matrix is symmetrically normalized. We cache the symmetric adjacency matrix for graph partitioning and mini-batching. Instead of re-calculating the adjacency matrix normalization factors for GCN for each mini-batch, we re-use the global normalization factors. We found this to achieve similar accuracy at lower computational cost.
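+This preprocessing can be sketched with SciPy sparse matrices as follows; the helper name is illustrative, and a weighted graph might call for summing the two edge directions instead of the element-wise maximum used here to symmetrize.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def preprocess_adjacency(A):
+    """Make the graph undirected, add self-loops, and symmetrically normalize:
+    D^{-1/2} (A_sym + I) D^{-1/2}, with D the degree matrix of A_sym + I."""
+    A = A.maximum(A.T)                                    # undirected
+    A = (A + sp.eye(A.shape[0], format="csr")).tocsr()    # self-loops
+    deg = np.asarray(A.sum(axis=1)).ravel()
+    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
+    return d_inv_sqrt @ A @ d_inv_sqrt
+
+# Example on a tiny directed graph (illustrative only)
+A = sp.csr_matrix(np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float))
+A_norm = preprocess_adjacency(A)
+```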
+
+Models. We use three models for all the experiments: GCN (3 layers, hidden size 256 for the ogbn datasets and 2 layers, hidden size 512 for Reddit), GAT (3 layers, hidden size 128, 4 heads for the ogbn datasets and 2 layers, hidden size 64, 4 heads for Reddit), and GraphSAGE (3 layers, hidden size 256). All models use layer normalization, ReLU activation functions, and dropout. We performed a grid search on ogbn-arxiv, ogbn-products, and Reddit to obtain the optimal model hyperparameters based on final validation accuracy. For ogbn-papers100M we use the same hyperparameters as for GCN on ogbn-arxiv, but with 32 auxiliary nodes per output node.
+
+Training. We use the Adam optimizer for all experiments, with a starting learning rate of ${10}^{-3}$ . We use an ${L}_{2}$ regularization of ${10}^{-4}$ for GCN on ogbn-arxiv and ogbn-products, and no ${L}_{2}$ regularization in all other settings. We use a ReduceLROnPlateau scheduler for the optimizer, with the decay factor 0.33, patience 30, minimum learning rate ${10}^{-4}$ , and cooldown of 10, based on validation loss. We train for 300 to 800 epochs and stop early with a patience of 100 epochs, based on validation loss. We determine the optimal batch order for IBMB via simulated annealing [11].
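+The optimizer and learning-rate schedule described above can be set up as in the following sketch; the placeholder model, the `mode="min"` choice (stepping on the validation loss), and the weight decay value (which depends on the setting, see above) are assumptions, while the remaining numbers are taken from the text.
+
+```python
+import torch
+
+model = torch.nn.Linear(16, 4)       # placeholder model for illustration
+optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
+scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
+    optimizer, mode="min", factor=0.33, patience=30, cooldown=10, min_lr=1e-4)
+
+# After every epoch: scheduler.step(val_loss); stop early if the validation loss
+# has not improved for 100 epochs.
+```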
+
+Batch-wise IBMB. We tune the number of batches and thus the size of batches using a grid search (see Table 1). Generally, final accuracy increases with larger batch sizes, but this can lead to excessive memory usage and slower convergence speed. The resulting partitions then define the output nodes in each batch. We use as many auxiliary nodes as the size of each partition. However, the auxiliary nodes will be different from the partition since they are selected based on the output nodes via batch-wise clustering. Note that the inference batch size is double the size of the training batches since in this case we do not need to store any gradients.
+
+Node-wise IBMB. For node-wise batching we first calculate the PPR scores for each output node, and then pick the top-k nodes as its auxiliary nodes. Generally we use the same batch size, i.e. number of nodes in a batch, as in batch-wise IBMB, to keep the GPU memory usage similar. However, if the graph is too dense, we might have to increase the batch size of node-wise IBMB, because it tends to create sparser batches. We tune the number of auxiliary nodes per output node using a logarithmic grid search with factors of 2. Based on this we use 16 neighbors for ogbn-arxiv, 64 for ogbn-products, 8 for Reddit, and 96 for ogbn-papers100M. Note that the number of auxiliary nodes is the main degree of freedom in IBMB. It influences preprocessing time, runtime, memory usage, and accuracy. The number of output nodes per batch is then determined by the available GPU memory.
+
+Table 2: Hyperparameters for LADIES
+
+| Model | Dataset | Nodes per layer (train) | Nodes per layer (validation) |
+| --- | --- | --- | --- |
+| GCN | ogbn-arxiv | 42336 | 84672 |
+| GCN | ogbn-products | 204085 | 306128 |
+| GCN | Reddit | 90000 | 150000 |
+
+Table 3: Hyperparameters for neighbor sampling
+
+| Model | Dataset | Batches (train) | Batches (validation) | Batches (test) | Nodes sampled per layer |
+| --- | --- | --- | --- | --- | --- |
+| GCN | ogbn-arxiv | 12 | 8 | 8 | 6, 5, 5 |
+| GCN | ogbn-products | 20 | 4 | 200 | 5, 5, 5 |
+| GCN | Reddit | 8 | 4 | 4 | 12, 12 |
+| GAT | ogbn-arxiv | 8 | 4 | 4 | 8, 7, 5 |
+| GAT | ogbn-products | 1000 | 150 | 8000 | 15, 10, 10 |
+| GAT | Reddit | 400 | 400 | 400 | 20, 20 |
+
+Approximate PPR. Calculating the full personalized PageRank (PPR) matrix is prohibitively expensive for large graphs. To enable fast preprocessing times, we approximate node-wise PPR using a push-flow algorithm [2] with a fixed number of iterations and approximate batch-wise PPR using power iterations. Both variants are based on parallel sparse matrix operations on GPU. We choose their hyperparameters so they do not impede accuracy while still having a reasonable preprocessing time. We use 50 power iterations for batch-wise PPR. For node-wise PPR we use three iterations, $\epsilon = {0.0002}$ for ogbn-arxiv, $\epsilon = {0.0005}$ for ogbn-products, and $\epsilon = {0.00002}$ for Reddit and ogbn-papers100M. For node-wise PPR we additionally downsample the unusually dense Reddit adjacency matrix to an average of 8 neighbors per node.
+
+Random batching. Random batching is similar to node-wise IBMB except that the auxiliary nodes are batched randomly. We first calculate the PPR scores and pick the top-k neighbors as the auxiliary nodes for an output node. We choose the same number of neighbors as with node-wise IBMB. We investigate 2 variants of random batching: Resampling the batches in every epoch, and sampling them once during preprocessing and then fixing the batches. We only show the results for the second method, since we found it to be significantly faster, albeit requiring significantly more main memory.
+
+Table 4: Hyperparameters for GraphSAINT-RW
+
+| Model | Dataset | | | | Batch size (train) | Batch size (val/test) |
+| --- | --- | --- | --- | --- | --- | --- |
+| GCN | ogbn-arxiv | 2 | 100 | 4 | 25000 | 10000 |
+| GCN | ogbn-products | 2 | 100 | 16 | 80000 | 5000 |
+| GCN | Reddit | 2 | 100 | 8 | 23000 | 6000 |
+| GAT | ogbn-arxiv | 2 | 100 | 8 | 17500 | 10000 |
+| GAT | ogbn-products | 2 | 100 | 1024 | 14000 | 100 |
+| GAT | Reddit | 2 | 100 | 400 | 1600 | 60 |
+
+Table 5: Methods and hyperparameters for selecting auxiliary nodes for GCN on ogbn-products with batch-wise IBMB. IBMB is very robust to this choice. We did observe a slightly lower validation accuracy for low alpha (0.05). We always use 0.25 .
+
+| Method | $\alpha$ / $t$ | Time per epoch (s) | Test accuracy, IBMB inference (%) | Test accuracy, full-batch (%) |
+| --- | --- | --- | --- | --- |
+| PPR | 0.05 | 3.5 | 76.8 ± 0.3 | 77.1 ± 0.3 |
+| PPR | 0.15 | 3.6 | 76.6 ± 0.4 | 76.9 ± 0.4 |
+| PPR | 0.25 | 3.5 | 76.8 ± 0.2 | 77.2 ± 0.3 |
+| PPR | 0.35 | 3.5 | 76.9 ± 0.5 | 77.2 ± 0.5 |
+| Heat kernel | 0.1 | 3.5 | 76.5 ± 0.4 | 76.8 ± 0.3 |
+| Heat kernel | 1 | 3.5 | 76.6 ± 0.5 | 76.9 ± 0.5 |
+| Heat kernel | 3 | 3.5 | 76.8 ± 0.2 | 77.1 ± 0.2 |
+| Heat kernel | 5 | 3.5 | 76.7 ± 0.5 | 77.0 ± 0.5 |
+| Heat kernel | 7 | 3.5 | 76.6 ± 0.4 | 76.8 ± 0.4 |
+
+steps per epoch in GraphSAINT-RW, which only changes how an epoch is defined), we choose them to be comparable to other methods. 3. We choose all relevant hyperparameters based on validation accuracy. If a hyperparameter is not critical to memory usage we tune it per dataset and not per model. We use this process for both IBMB and the baselines.
+
+Baseline hyperparameters. For Cluster-GCN the number of batches is the same as for batch-wise IBMB. Table 2 shows the hyperparameters for LADIES, Table 3 for neighbor sampling, and Table 4 for GraphSAINT-RW. To ensure that every node is visited exactly once during GraphSAINT-RW inference we use the validation/test nodes only as root nodes of the random walks.
+
+Full-batch inference. We chunk the adjacency matrix and feature matrix for full-batch inference to allow using the GPU even for larger datasets. The only hyperparameter is the number of chunks. We limit the chunk size to ensure that full-batch inference does not exceed the amount of GPU memory used during training.
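+A sketch of how such chunked inference could look for a single GCN-style propagation is shown below; chunking over rows of the normalized adjacency, the helper name, and the CPU/GPU transfer pattern are our assumptions for illustration.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+import torch
+
+@torch.no_grad()
+def chunked_layer(adj_norm: sp.csr_matrix, h: torch.Tensor, weight: torch.Tensor,
+                  num_chunks: int, device: str = "cuda"):
+    """Computes A_hat @ H @ W in row chunks, so only a slice of the
+    normalized adjacency is on the GPU at any time."""
+    h_dev, w_dev = h.to(device), weight.to(device)
+    outputs = []
+    for rows in np.array_split(np.arange(adj_norm.shape[0]), num_chunks):
+        coo = adj_norm[rows].tocoo()                        # a few rows of A_hat
+        idx = torch.tensor(np.vstack([coo.row, coo.col]), dtype=torch.long)
+        val = torch.tensor(coo.data, dtype=torch.float32)
+        block = torch.sparse_coo_tensor(idx, val, size=coo.shape, device=device)
+        outputs.append((torch.sparse.mm(block, h_dev) @ w_dev).cpu())
+    return torch.cat(outputs, dim=0)
+```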
+
+Experimental limitations. We only tested our method on homophilic node classification datasets. While proximity is a central inductive bias in all GNNs, we did not explicitly test this on a more general variety of graphs. However, note that IBMB does not require homophily. The underlying assumption is merely that nearby nodes are the most important, not that they are similar. Finally, we expect our method to perform even better in the context of billion-node graphs, but our benchmark datasets still fit into main memory.
+
+## C Additional results
+
+Main memory usage. IBMB's main memory usage depends on three aspects: 1. How large is the training/validation set compared to the full graph? 2. How many auxiliary nodes per output node are we using? 3. How much do the auxiliary nodes overlap within a batch? As shown in Table 7, IBMB increases main memory usage in some settings, which is due to the overlap between batches. However, in other settings it reduces memory requirements because it ignores irrelevant parts of the graph and removes the dataset from memory after preprocessing. Note that our hyperparameters keep GPU memory usage consistent between methods.
+
+Table 6: Final accuracy and runtime averaged over 10 runs, with standard deviation. "Same method" refers to using the training method for inference, while "full-batch" uses the whole graph for inference. IBMB achieves similar accuracy as previous methods when used for training, while using significantly less time per epoch and without requiring full-batch inference. IBMB is up to ${900}\mathrm{x}$ faster (ogbn-papers100M) than using full-batch inference, at comparable accuracy. Other inference methods are substantially slower or less accurate. Note that LADIES is incompatible with the self-loops in GAT and GraphSAGE.
+
+| Setting | Training method | Preprocess (s) | Per epoch (s) | Inference (s) | Same method (%) | Full-batch (%) |
+| --- | --- | --- | --- | --- | --- | --- |
+| ogbn-arxiv, GCN | Full-batch | - | - | 2.8 | - | - |
+| | Neighbor sampling | 0.3 | 4.7 | 2.5 | 70.7 ± 0.2 | 71.3 ± 0.4 |
+| | LADIES | 0.3 | 0.62 | 0.69 | 71.7 ± 0.2 | 71.4 ± 0.3 |
+| | GraphSAINT-RW | 0.4 | 0.42 | 0.34 | 68.1 ± 0.2 | 72.3 ± 0.2 |
+| | Cluster-GCN | 8.7 | 0.14 | 0.14 | 72.0 ± 0.1 | 72.2 ± 0.1 |
+| | Batch-wise IBMB | 14.1 | 0.14 | 0.13 | 72.2 ± 0.2 | 72.2 ± 0.2 |
+| | Node-wise IBMB | 17.5 | 0.27 | 0.16 | **72.6 ± 0.1** | **72.6 ± 0.1** |
+| ogbn-arxiv, GAT | Full-batch | - | - | 9.4 | - | - |
+| | Neighbor sampling | 0.3 | 4.1 | 1.97 | 70.9 ± 0.1 | 72.1 ± 0.1 |
+| | GraphSAINT-RW | 0.4 | 1.2 | 0.38 | 68.7 ± 0.2 | **72.6 ± 0.1** |
+| | Cluster-GCN | 7.6 | 0.69 | 0.28 | 69.7 ± 0.3 | 71.6 ± 0.2 |
+| | Batch-wise IBMB | 7.7 | 0.68 | 0.31 | 71.0 ± 0.3 | 71.8 ± 0.3 |
+| | Node-wise IBMB | 17.6 | 1.52 | 0.93 | **72.0 ± 0.2** | 72.2 ± 0.2 |
+| ogbn-arxiv, GraphSAGE | Full-batch | - | - | 2.37 | - | - |
+| | Neighbor sampling | 0.3 | 3.44 | 1.67 | 71.1 ± 0.1 | 72.0 ± 0.1 |
+| | GraphSAINT-RW | 0.3 | 0.41 | 0.35 | 69.0 ± 0.1 | 72.2 ± 0.1 |
+| | Cluster-GCN | 8.8 | 0.15 | 0.14 | 71.7 ± 0.1 | 72.1 ± 0.1 |
+| | Batch-wise IBMB | 7.2 | 0.15 | 0.13 | 72.0 ± 0.2 | 72.1 ± 0.1 |
+| | Node-wise IBMB | 17.5 | 0.31 | 0.16 | 72.4 ± 0.2 | 72.4 ± 0.1 |
+| ogbn-products, GCN | Full-batch | - | - | 130 | - | - |
+| | Neighbor sampling | 32 | 42 | 433 | 78.2 ± 0.2 | 78.0 ± 0.2 |
+| | LADIES | 33 | 25 | 22.5 | 75.9 ± 0.3 | 79.0 ± 0.4 |
+| | GraphSAINT-RW | 35 | 11 | 20.4 | 53.6 ± 0.6 | 79.9 ± 0.2 |
+| | Cluster-GCN | 302 | 3.7 | 3.4 | 76.2 ± 0.3 | 76.5 ± 0.2 |
+| | Batch-wise IBMB | 306 | 3.5 | 3.1 | 76.8 ± 0.2 | 77.2 ± 0.3 |
+| | Node-wise IBMB | 382 | 5.4 | 13.8 | 77.3 ± 0.3 | 77.3 ± 0.3 |
+| ogbn-products, GAT | Full-batch | - | - | 1700 | - | - |
+| | Neighbor sampling | 33 | 450 | 3450 | 79.1 ± 0.3 | 77.2 ± 0.5 |
+| | GraphSAINT-RW | 35 | 140 | 102 | 69.5 ± 0.1 | 80.8 ± 0.2 |
+| | Cluster-GCN | 626 | 24 | 10.6 | 76.6 ± 0.4 | 78.1 ± 0.5 |
+| | Batch-wise IBMB | 767 | 25 | 10.0 | 77.0 ± 0.4 | 78.9 ± 0.6 |
+| | Node-wise IBMB | 378 | 42 | 97 | 78.9 ± 0.3 | 79.0 ± 0.3 |
+| ogbn-products, GraphSAGE | Full-batch | - | - | 88.0 | - | - |
+| | Neighbor sampling | 31.4 | 52.0 | 530 | 81.0 ± 0.2 | 81.4 ± 0.2 |
+| | GraphSAINT-RW | 35.8 | 10.6 | 20.0 | 69.4 ± 0.2 | 81.3 ± 0.2 |
+| | Cluster-GCN | 313 | 3.1 | 3.4 | 79.5 ± 0.4 | 79.7 ± 0.4 |
+| | Batch-wise IBMB | 319 | 2.9 | 3.1 | 79.2 ± 0.3 | 79.5 ± 0.3 |
+| | Node-wise IBMB | 374 | 5.1 | 13.3 | 80.6 ± 0.3 | 80.8 ± 0.3 |
+| Reddit, GCN | Full-batch | - | - | 14.8 | - | - |
+| | Neighbor sampling | 14.4 | 7.3 | 3.3 | 93.5 ± 0.1 | 94.8 ± 0.1 |
+| | LADIES | 15.4 | 11.4 | 11.4 | 95.5 ± 0.0 | 95.3 ± 0.0 |
+| | GraphSAINT-RW | 17.1 | 14.6 | 2.9 | 93.2 ± 0.1 | 95.6 ± 0.0 |
+| | Cluster-GCN | 175 | 1.8 | 1.6 | 93.7 ± 0.2 | 94.8 ± 0.1 |
+| | Batch-wise IBMB | 175 | 1.6 | 1.4 | 93.5 ± 0.4 | 94.7 ± 0.1 |
+| | Node-wise IBMB | 64.8 | 0.74 | 0.59 | 95.7 ± 0.1 | 95.2 ± 0.1 |
+| Reddit, GAT | Full-batch | - | - | 76.9 | - | - |
+| | Neighbor sampling | 14.8 | 70 | 32.5 | 94.3 ± 0.1 | 95.1 ± 0.1 |
+| | GraphSAINT-RW | 17.9 | 21 | 3.2 | 79.4 ± 0.2 | **95.4 ± 0.1** |
+| | Cluster-GCN | 366 | 4.7 | 1.4 | 91.4 ± 0.1 | 93.5 ± 0.7 |
+| | Batch-wise IBMB | 396 | 4.3 | 1.2 | 91.6 ± 0.1 | 92.8 ± 1.1 |
+| | Node-wise IBMB | 65.3 | 1.1 | 0.25 | **94.2 ± 0.1** | 94.1 ± 0.3 |
+| Reddit, GraphSAGE | Full-batch | - | - | 17.3 | - | - |
+| | Neighbor sampling | 16.1 | 7.5 | 3.5 | 96.2 ± 0.0 | 96.8 ± 0.0 |
+| | GraphSAINT-RW | 18.2 | 14.6 | 3.6 | 95.9 ± 0.0 | 96.8 ± 0.0 |
+| | Cluster-GCN | 173 | 1.7 | 1.8 | 95.5 ± 0.2 | 96.0 ± 0.1 |
+| | Batch-wise IBMB | 175 | 1.6 | 1.7 | 95.6 ± 0.2 | 96.1 ± 0.1 |
+| | Node-wise IBMB | 66.0 | 0.78 | 0.65 | 96.8 ± 0.0 | 96.5 ± 0.0 |
+| papers100M, GCN | Full-batch | - | - | 5700 | - | - |
+| | Neighbor sampling | 739 | 900 | 159 | 64.3 ± 0.2 | 61.8 ± 0.2 |
+| | LADIES | 735 | 2830 | 672 | 65.4 ± 0.2 | 62.4 ± 0.4 |
+| | Node-wise IBMB | 2290 | 51 | 6.2 | 66.1 ± 0.1 | 66.0 ± 0.1 |
+
+Table 7: Main memory usage (GiB). In some settings, IBMB uses more main memory than previous methods due to overlapping batches (e.g. on ogbn-products). However, it can also reduce memory requirements because it ignores irrelevant parts of the graph (e.g. on Reddit). Note that our hyperparameters keep GPU memory usage consistent between methods, as opposed to main memory usage.
+
+| Method | ogbn-arxiv (GCN) | ogbn-arxiv (GAT) | ogbn-arxiv (GraphSAGE) | ogbn-products (GCN) | ogbn-products (GAT) | ogbn-products (GraphSAGE) | Reddit (GCN) | Reddit (GAT) | Reddit (GraphSAGE) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Neighbor sampling | 3.0 | 3.6 | 3.1 | 8.7 | 7.9 | 8.5 | 7.4 | 7.5 | 7.1 |
+| LADIES | 3.0 | - | - | 6.0 | - | - | 4.8 | - | - |
+| GraphSAINT-RW | 3.5 | 3.6 | 3.5 | 9.6 | 9.6 | 9.6 | 8.4 | 8.5 | 8.4 |
+| Cluster-GCN | 3.5 | 3.4 | 3.5 | 7.8 | 6.0 | 7.3 | 6.1 | 4.2 | 6.5 |
+| Batch-wise IBMB | 3.5 | 3.6 | 3.5 | 7.9 | 7.0 | 7.8 | 6.3 | 4.9 | 6.3 |
+| Node-wise IBMB | 3.8 | 3.8 | 4.2 | 13.0 | 12.3 | 13.2 | 4.5 | 5.3 | 5.1 |
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/b9g0vxzYa_/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/b9g0vxzYa_/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8320a47b0728017bf60884baa5a398542c20e8c8
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/b9g0vxzYa_/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,165 @@
+§ INFLUENCE-BASED MINI-BATCHING FOR GRAPH NEURAL NETWORKS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Using graph neural networks for large graphs is challenging since there is no clear way of constructing mini-batches. To solve this, previous methods have relied on sampling or graph clustering. While these approaches often lead to good training convergence, they introduce significant overhead due to expensive random data accesses and perform poorly during inference. In this work we instead focus on model behavior during inference. We theoretically model batch construction via maximizing the influence score of nodes on the outputs. This formulation leads to optimal approximation of the output when we do not have knowledge of the trained model. We call the resulting method influence-based mini-batching (IBMB). IBMB accelerates inference by up to ${130}\mathrm{x}$ compared to previous methods that reach similar accuracy. Remarkably, with adaptive optimization and the right training schedule IBMB can also substantially accelerate training, thanks to precomputed batches and consecutive memory accesses. This results in up to ${18}\mathrm{x}$ faster training per epoch and up to ${17}\mathrm{x}$ faster convergence per runtime compared to previous methods.
+
+§ 1 INTRODUCTION
+
+Creating mini-batches is highly non-trivial for connected data, since it requires selecting a meaningful subset despite the data's connectedness. When the graph does not fit into memory, the mini-batching problem is equally relevant for both inference and training. However, mini-batching methods have so far mostly been focused on training, despite the major practical importance of inference. Once a model is put into production, it continuously runs inference to serve user queries. On AWS, more than ${90}\%$ of infrastructure cost is due to inference, and less than ${10}\%$ is due to training [24]. Even during training, inference is necessary for early stopping and performance monitoring. A training method thus has rather limited utility by itself.
+
+Selecting mini-batches for inference is distinctly different from training. Instead of averaging out stochastic sampling effects over many training steps, we need to ensure that every prediction is as accurate as possible. To achieve this, we propose a theoretical framework for creating mini-batches based on the expected influence of nodes on the outputs. Selecting nodes according to this formulation provably leads to an optimal approximation of the output. The resulting optimization problem shows that we need to distinguish between two classes of nodes: Output nodes and auxiliary nodes. Output nodes are those for which we compute a prediction in this batch, for example a set of validation nodes. Auxiliary nodes provide inputs and define the batch's subgraph. This distinction allows us to choose a meaningful neighborhood for every prediction, while ignoring irrelevant parts of the graph. Note that output nodes in one batch can be auxiliary nodes in another batch.
+
+This distinction furthermore splits mini-batching into two problems: 1. How do we partition output nodes into efficient mini-batches? 2. How do we choose the auxiliary nodes for a given set of output nodes? Having split the problem like this, we see that most previous works either focus exclusively on the first question by only using graph partitions [7] or on the second question and choose a uniformly random subset of nodes as output nodes $\left\lbrack {{21},{41}}\right\rbrack$ . Jointly considering both aspects with an overarching theoretical framework allows for substantial synergy effects. For example, batching nearby output nodes together allows one output node to leverage another one's auxiliary nodes.
+
+We call this overall framework influence-based mini-batching (IBMB). On the practical side, we propose two instantiations of IBMB by approximating the influence between nodes via personalized PageRank (PPR). We use fast approximations of PPR to select auxiliary nodes by their highest PPR scores. Accordingly, we partition output nodes using PPR-based node distances or via graph partitioning. We then use the subgraph induced by these nodes as a mini-batch. IBMB accelerates inference by up to ${130}\mathrm{x}$ compared to previous methods that achieve similar accuracy.
+
+Remarkably, we found that IBMB also works well for training, despite being derived from inference. This is due to the computational advantage of precomputed mini-batches, which can be loaded from a cache to ensure efficient memory accesses. We counteract the negative effect of the resulting sparse mini-batch gradients via adaptive optimization and batch scheduling. Overall, IBMB achieves an up to ${18}\mathrm{x}$ improvement in time per training epoch, with similar final accuracy. This fast runtime more than makes up for any slow-down in convergence per step. Its speed advantage grows even further for the common setting of low label ratios, since our method avoids computation on irrelevant parts of the graph. Our implementation is available online ${}^{1}$ . In summary, our core contributions are:
+
+ * Influence-based mini-batching (IBMB): A theoretical framework for selecting mini-batches for GNN inference based on influence scores.
+
+ * Practical instantiations of IBMB that work for a variety of GNNs and datasets. They substantially accelerate inference and training without sacrificing accuracy, especially for small label ratios.
+
+ * Methods for mitigating the impact of fixed, local mini-batches and sparse gradients on training.
+
+§ 2 BACKGROUND AND RELATED WORK
+
+Graph neural networks. We consider a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ with node set $\mathcal{V}$ and (possibly directed) edge set $\mathcal{E}$. $N = \left| \mathcal{V}\right|$ denotes the number of nodes, $E = \left| \mathcal{E}\right|$ the number of edges, and $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ the adjacency matrix. GNNs use one embedding per node ${\mathbf{h}}_{u} \in {\mathbb{R}}^{H}$ and edge ${\mathbf{e}}_{\left( uv\right) } \in {\mathbb{R}}^{{H}_{\mathrm{c}}}$, and update them in each layer via message passing between neighboring nodes. We denote the embedding of node $u$ in layer $l$ as ${\mathbf{h}}_{u}^{\left( l\right) }$ and its $i$'th entry as ${\mathbf{h}}_{ui}^{\left( l\right) }$. Most GNNs can be expressed via the following equations:
+
+$$
+{\mathbf{h}}_{u}^{\left( l + 1\right) } = {f}_{\text{ node }}\left( {{\mathbf{h}}_{u}^{\left( l\right) },\mathop{\operatorname{Agg}}\limits_{{v \in {\mathcal{N}}_{u}}}\left\lbrack {{f}_{\mathrm{{msg}}}\left( {{\mathbf{h}}_{u}^{\left( l\right) },{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{e}}_{\left( uv\right) }^{\left( l\right) }}\right) }\right\rbrack }\right) , \tag{1}
+$$
+
+$$
+{\mathbf{e}}_{\left( uv\right) }^{\left( l + 1\right) } = {f}_{\text{ edge }}\left( {{\mathbf{h}}_{u}^{\left( l + 1\right) },{\mathbf{h}}_{v}^{\left( l + 1\right) },{\mathbf{e}}_{\left( uv\right) }^{\left( l\right) }}\right) . \tag{2}
+$$
+
+The node and edge update functions ${f}_{\text{ node }}$ and ${f}_{\text{ edge }}$ , and the message function ${f}_{\text{ msg }}$ can be implemented using e.g. linear layers, multi-layer perceptrons (MLPs), and skip connections. The node's neighborhood ${\mathcal{N}}_{u}$ is usually defined directly by the graph $\mathcal{G}$ [27], but can be generalized to consider larger or even global neighborhoods $\left\lbrack {1,{16}}\right\rbrack$ , or feature similarity $\left\lbrack {10}\right\rbrack$ . The most common aggregation function Agg is summation, but multiple other alternatives have also been explored [9, 17]. Edge embeddings ${\mathbf{e}}_{\left( uv\right) }$ are often not used in GNNs, but some variants rely on them exclusively [6].
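+As a minimal, illustrative instance of Eq. (1), the following sketch uses a linear message function, sum aggregation, and a linear node update in PyTorch; edge embeddings (Eq. (2)) are omitted, and all names are ours rather than a specific model from this paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class SimpleMessagePassingLayer(nn.Module):
+    """One layer of Eq. (1): messages f_msg(h_u, h_v), sum aggregation over
+    N(u), and a node update f_node(h_u, Agg)."""
+    def __init__(self, hidden_dim: int):
+        super().__init__()
+        self.f_msg = nn.Linear(2 * hidden_dim, hidden_dim)
+        self.f_node = nn.Linear(2 * hidden_dim, hidden_dim)
+
+    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
+        # edge_index: [2, E] with rows (source v, target u)
+        src, dst = edge_index
+        msgs = self.f_msg(torch.cat([h[dst], h[src]], dim=-1))
+        agg = torch.zeros_like(h).index_add_(0, dst, msgs)   # sum over N(u)
+        return torch.relu(self.f_node(torch.cat([h, agg], dim=-1)))
+```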
+
+Scalable GNNs. Multiple works have proposed massively scalable GNNs that leverage the peculiarities of message passing to condense it into a single step, akin to label or feature propagation $\left\lbrack {4,{14}}\right\rbrack$ . Our work focuses on general, model-agnostic scalability methods.
+
+Scalable graph learning. Classical graph learning faced issues similar to GNNs when scaling to large graphs. Multiple frameworks for distributed graph computations were proposed to solve this without approximations or sampling $\left\lbrack {{19},{28},{31},{32}}\right\rbrack$ . Other works scaled to large graphs via stochastic variational inference, e.g. by sampling nodes and node pairs [20]. Interestingly, this approach is quite similar to sampling-based mini-batching for GNNs.
+
+Mini batching for GNNs. Previous mini-batching methods can largely be divided into three categories: Node-wise sampling, layer-wise sampling, and subgraph-based sampling [29]. In node-wise sampling, we obtain a separate set of auxiliary nodes for every output node, which are sampled independently for each message passing step. Each output node is treated independently; if two output nodes sample the same auxiliary node, we compute its embedding twice [21,30,39]. Layer-wise sampling jointly considers all output nodes of a batch to compute a stochastic set of activations in each layer. Computations on auxiliary nodes are thus shared [5, 23, 41]. Subgraph-based sampling selects a meaningful subgraph and then runs the GNN on this subgraph as if it were the full graph. This method thus computes the outputs and intermediate embeddings of all nodes in that subgraph $\left\lbrack {7,{40}}\right\rbrack$ . Our method most closely resembles the subgraph-based sampling approach. However, IBMB considers both output and auxiliary nodes, resulting in better batches, and only computes the output of predetermined output nodes, similar to node-wise sampling. Note that mini-batch generation is an orthogonal problem to training frameworks such as GNNAutoScale [13]. We can also use IBMB to provide mini-batches as part of GNNAutoScale.
+
+${}^{1}$ https://figshare.com/s/f615b330391677014bc5
+
+§ 3 INFLUENCE-BASED MINI-BATCHING
+
+Influence scores. To effectively create graph-based mini-batches we must first quantify how important one node is for another node's prediction. As proposed by Xu et al. [38], we can do this via the influence score, which determines the local sensitivity of the output at node $u$ on the input at node $v$ as:
+
+$$
+I\left( {v,u}\right) = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}\left| \frac{\partial {\mathbf{h}}_{ui}^{\left( L\right) }}{\partial {\mathbf{X}}_{vj}}\right| , \tag{3}
+$$
+
+where ${\mathbf{h}}_{ui}^{\left( L\right) }$ is the $i$ ’th entry in the embedding of node $u$ in the last layer $L$ . The influence score provides a crisp understanding of how to select nodes for inference when we only have knowledge of the graph, not the model or the node features. To prove this formally, we consider a slightly limited class of GNNs and model our lack of knowledge via a randomization assumption of ReLU activations, similar to Choromanska et al. [8], and by assuming that all nodes have the same expected features, yielding (proof in App. A):
+
+Theorem 1. Given a GNN with linear, graph-dependent aggregation and ReLU activations. Assume that all paths in the model’s computation graph are activated with the same probability $\rho$ and nodes have features with expected value $\mathbb{E}\left\lbrack {X}_{v,i}\right\rbrack = {\chi }_{i}$ . If we restrict the model input features to a set of auxiliary nodes ${\mathcal{S}}_{\text{ aux }} \subseteq \mathcal{V}$ , then the error
+
+$$
+{\begin{Vmatrix}{\widetilde{\mathbf{h}}}_{u}^{\left( L\right) } - {\mathbf{h}}_{u}^{\left( L\right) }\end{Vmatrix}}_{1} \tag{4}
+$$
+
+between the approximate logits ${\widetilde{\mathbf{h}}}_{u}^{\left( L\right) }$ and the true logits ${\mathbf{h}}_{u}^{\left( L\right) }$ is minimized, in expectation, by selecting the nodes $v \in {\mathcal{S}}_{\text{ aux }}$ with maximum influence score $I\left( {v,u}\right)$ .
+
+Formalizing mini-batching. We can leverage this insight by formalizing mini-batching as the optimization problem
+
+$$
+\mathop{\max }\limits_{\substack{{P}_{\text{out}} \in \mathbb{P}\left( {\mathcal{V}}_{\text{out}}\right) \\ \left| {P}_{\text{out}}\right| = b}}\mathop{\sum }\limits_{{\mathcal{S}}_{\text{out}} \in {P}_{\text{out}}}\;\mathop{\max }\limits_{\substack{{\mathcal{S}}_{\text{aux}} \supseteq {\mathcal{S}}_{\text{out}} \\ \left| {\mathcal{S}}_{\text{aux}}\right| \leq B}}\mathop{\sum }\limits_{u \in {\mathcal{S}}_{\text{out}}}\mathop{\sum }\limits_{v \in {\mathcal{S}}_{\text{aux}}}I\left( {v, u}\right) , \tag{5}
+$$
+
+where $\mathbb{P}\left( {\mathcal{V}}_{\text{ out }}\right)$ denotes the set of partitions of the output nodes ${\mathcal{V}}_{\text{ out }}$, $b$ the number of batches, and $B$ the maximum batch size. This optimization yields two results: the output node partition ${P}_{\text{ out }}$ and the auxiliary node set for each batch of output nodes, ${\mathcal{S}}_{\text{ aux }}$. The hyperparameter $B$ is determined by the available (GPU) memory, while $b$ trades off runtime and approximation quality. This formulation optimizes the average approximation across all outputs. This might not be ideal since some nodes might already be approximated well with a lower number of auxiliary nodes. We can instead focus on the worst-case approximation by optimizing the minimum aggregate influence score as
+
+$$
+\mathop{\max }\limits_{\substack{{P}_{\text{out}} \in \mathbb{P}\left( {\mathcal{V}}_{\text{out}}\right) \\ \left| {P}_{\text{out}}\right| = b}}\mathop{\sum }\limits_{{\mathcal{S}}_{\text{out}} \in {P}_{\text{out}}}\;\mathop{\max }\limits_{\substack{{\mathcal{S}}_{\text{aux}} \supseteq {\mathcal{S}}_{\text{out}} \\ \left| {\mathcal{S}}_{\text{aux}}\right| \leq B}}\mathop{\min }\limits_{u \in {\mathcal{S}}_{\text{out}}}\mathop{\sum }\limits_{v \in {\mathcal{S}}_{\text{aux}}}I\left( {v, u}\right) . \tag{6}
+$$
+
+Both Eqs. (5) and (6) split the mini-batching problem into three parts: Output node partitioning, auxiliary node selection, and influence score computation. We call this approach influence-based mini-batching (IBMB).
+
+Computing influence scores. The model's influence score depends on various model details, especially when considering exact, trained models. In many cases we can calculate the expected influence score by making simplifying assumptions, similar to Theorem 1. This allows tailoring the mini-batching method to the exact model of interest. For the remainder of this work we will focus our analysis on the broad class of models that use the average as an aggregation function, such as GCN. In this case, the influence is proportional to a slightly modified random walk with $L$ steps [38]. We additionally remove the influence score’s dependence on the number of layers $L$ by using the limit distribution of random walks with restart, also known as personalized PageRank (PPR), as proposed by Gasteiger et al. [15]. Notably, PPR even works as a measure of influence for models with more complex, data-dependent influence scores, such as GAT (see Sec. 5). The PPR matrix is given by
+
+$$
+{\mathbf{\Pi }}^{\mathrm{{ppr}}} = \alpha {\left( {\mathbf{I}}_{N} - \left( 1 - \alpha \right) {\mathbf{D}}^{-1}\mathbf{A}\right) }^{-1}, \tag{7}
+$$
+
+with the teleport probability $\alpha \in (0,1\rbrack$ and the diagonal degree matrix ${\mathbf{D}}_{ii} = \mathop{\sum }\limits_{k}{\mathbf{A}}_{ik}$. The entry ${\Pi }_{uv}^{\mathrm{{ppr}}}$ then provides a measure for the influence of node $v$ on $u$. Calculating the above inverse is obviously infeasible for large graphs. However, we can approximate ${\mathbf{\Pi }}^{\mathrm{{ppr}}}$ with a sparse matrix ${\widetilde{\mathbf{\Pi }}}^{\mathrm{{ppr}}}$ in time $\mathcal{O}\left( \frac{1}{\varepsilon \alpha }\right)$ per row, with error $\varepsilon \deg \left( v\right)$ [2]. Importantly, this approximation uses only the node's local neighborhood, making its runtime independent of the overall graph size and thus massively scalable. Furthermore, the calculation is deterministic and model-independent, so we only need to perform this computation once during preprocessing.
+
+§ 3.1 AUXILIARY NODE SELECTION
+
+Node-wise selection. Selecting auxiliary nodes on large graphs requires a method that efficiently yields nodes with highest expected influence. Fortunately, there is a well-developed literature of methods for finding the top-k PPR nodes. The classic approximate PPR method [2] is guaranteed to provide all nodes with a PPR value ${\Pi }_{uv}^{\text{ PPr }} > \varepsilon \deg \left( v\right)$ w.r.t. the root (output) node $u$ . Optimizing auxiliary nodes by the worst-case influence score (Eq. (6)) thus equates to separately running approximate PPR for each output node in a batch ${\mathcal{S}}_{\text{ out }}$ , and then merging them.
+
+Batch-wise selection. Considering each output node separately does not take into account how one auxiliary node jointly affects multiple output nodes, as required for the average-case formulation in Eq. (5). Fortunately, PPR calculation can be adapted to use a set of root nodes. To do so, we use a set of nodes in the teleport vector $\mathbf{t}$ instead of a single node, e.g. by leveraging the underlying recursive equation for a PPR vector ${\pi }_{\mathrm{{ppr}}}\left( \mathbf{t}\right) = \left( {1 - \alpha }\right) {\mathbf{D}}^{-1}\mathbf{A}{\pi }_{\mathrm{{ppr}}}\left( \mathbf{t}\right) + \alpha \mathbf{t}$. Here $\mathbf{t}$ is a one-hot vector in the node-wise setting, while for batch-wise PPR it is $1/\left| {\mathcal{S}}_{\text{ out }}\right|$ for all nodes in ${\mathcal{S}}_{\text{ out }}$. This variant is also known as topic-sensitive PageRank. We found that batch-wise PPR is significantly faster than node-wise PPR. However, it can lead to cases where one outlier node receives almost no neighbors, while others have excessively many. Whether node-wise or batch-wise selection performs better thus often depends on the dataset and model.
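+The sketch below shows one way batch-wise PPR could be approximated by power-iterating the recursion above with a uniform teleport vector over the batch's output nodes; the values of $\alpha$, the iteration count, and the selection helper are illustrative assumptions, not the paper's exact setup.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def batchwise_ppr(adj: sp.csr_matrix, output_nodes, alpha: float = 0.25,
+                  num_iters: int = 50):
+    """Topic-sensitive PageRank: the teleport vector puts mass 1/|S_out| on
+    every output node; pi is approximated by power iteration."""
+    n = adj.shape[0]
+    deg = np.maximum(np.asarray(adj.sum(1)).ravel(), 1.0)
+    d_inv_a = sp.diags(1.0 / deg) @ adj                  # D^{-1} A
+    t = np.zeros(n)
+    t[np.asarray(output_nodes)] = 1.0 / len(output_nodes)
+    pi = t.copy()
+    for _ in range(num_iters):
+        pi = (1 - alpha) * d_inv_a @ pi + alpha * t
+    return pi
+
+def select_batch_aux(pi, output_nodes, num_aux: int):
+    """Auxiliary nodes are the highest-scoring nodes, always including the outputs."""
+    return np.union1d(np.argsort(pi)[::-1][:num_aux], output_nodes)
+```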
+
+Subgraph generation. Creating mini-batches also requires selecting a subgraph of relevant edges. We do so by using the subgraph induced by the selected output and auxiliary nodes in a batch. Note that the above node selection methods ignore how these changes to the graph affect the influence scores. This is a limitation of these methods. However, PPR is a local clustering method and we can thus expect auxiliary nodes to be well-connected.
+
+§ 3.2 OUTPUT NODE PARTITIONING
+
+Optimal partitioning. Finding the optimal node partition in Eqs. (5) and (6) would require trying out every possible partition since a change in ${\mathcal{S}}_{\text{ out }}$ can unpredictably affect the optimal choice of auxiliary nodes. Doing so is clearly intractable since the number of partitions grows exponentially with $N$ for a fixed batch size. We thus need to approximate the optimal partition via a scalable heuristic. The implicit goal of this step is finding output nodes that share a large number of auxiliary nodes. One good proxy for these overlaps is the proximity of nodes in the graph.
+
+Distance-based partitioning. We propose two methods that leverage graph locality as a heuristic. The first is based on node distances. In this approach we first compute the pairwise node distances between nodes that are close in the graph. We can use PPR for this as well, since it is also commonly used as a node distance. If we select auxiliary nodes with node-wise PPR, we thus only need to calculate PPR scores once for both steps.
+
+Next, we greedily construct the partition ${P}_{\text{ out }}$ from ${\widetilde{\Pi }}^{\text{ ppr }}$. To do so, we start by putting every node $u$ into a separate batch $\{ u\}$. We then sort all elements in ${\widetilde{\mathbf{\Pi }}}^{\text{ ppr }}$ by magnitude, independent of their row or column. We scan over these values in descending order, considering the value's indices $(u, v)$ and merging the batches containing the two nodes. Afterwards we randomly merge any small leftover batches. We stay within memory constraints by only merging batches that stay below the maximum batch size $B$. This method achieves well-overlapping batches and can efficiently incorporate incrementally arriving output nodes, e.g. in a streaming setting; a compact sketch follows below. Our experiments show that this method achieves a good compromise between well-overlapping batches and good gradients for training (see Sec. 5). Note that the resulting partition is unbalanced, i.e. some sets will be larger than others.
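+The sketch uses a union-find over batches and scans the sparse PPR entries in descending order; the union-find representation and tie handling are implementation choices of ours, not necessarily those of the paper.
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def greedy_ppr_partition(ppr: sp.coo_matrix, output_nodes, max_batch_size: int):
+    """Start from singleton batches, then merge the batches of (u, v) for PPR
+    entries in descending order, as long as the merged batch stays below the limit."""
+    out_set = set(int(u) for u in output_nodes)
+    parent = {u: u for u in out_set}                   # union-find parent pointers
+    size = {u: 1 for u in out_set}
+
+    def find(u):
+        while parent[u] != u:
+            parent[u] = parent[parent[u]]              # path halving
+            u = parent[u]
+        return u
+
+    for k in np.argsort(ppr.data)[::-1]:               # largest PPR values first
+        u, v = int(ppr.row[k]), int(ppr.col[k])
+        if u not in out_set or v not in out_set:
+            continue
+        ru, rv = find(u), find(v)
+        if ru != rv and size[ru] + size[rv] <= max_batch_size:
+            parent[rv] = ru
+            size[ru] += size.pop(rv)
+
+    batches = {}
+    for u in out_set:
+        batches.setdefault(find(u), []).append(u)
+    return list(batches.values())   # small leftover batches can then be merged randomly
+```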
+
+Figure 1: Practical example of influence-based mini-batching (IBMB). The output nodes are indicated by pentagons. These nodes are first partitioned into batches, e.g. by grouping nearby nodes together. We then use influence scores to select the auxiliary nodes of each batch, e.g. neighbors with top- $k$ personalized PageRank (PPR) scores. Finally, we generate a batch using the induced subgraph of all selected nodes, but only calculate the outputs of the output nodes we chose when partitioning. Batches can overlap and do not need to cover the whole graph.
+
+Graph partitioning. For our second method, we note that partitioning output nodes into overlapping mini-batches is closely related to partitioning graphs. We can thus leverage the extensive research on this topic by using the METIS graph partitioning algorithm [25] to find a partition of output nodes ${P}_{\text{ out }}$ . We found that graph partitioning yields roughly a two times higher overlap of auxiliary nodes than distance-based partitioning, thus leading to significantly more efficient batches. However, it also results in worse gradient samples, which we found to be detrimental for training (see Sec. 5). Note that Cluster-GCN also uses graph partitioning, and thus aligns somewhat with the IBMB framework [7]. However, IBMB additionally selects relevant auxiliary nodes and ignores irrelevant parts of the graph. This significantly accelerates training on small training sets and improves the accuracy of output nodes close to the partition boundary.
+
+Computational complexity. Since IBMB ignores irrelevant parts of the graph, inference and training scale linearly in the number of output nodes $\mathcal{O}\left( {N}_{\text{ out }}\right)$ . Preprocessing runs in $\mathcal{O}\left( \frac{{N}_{\text{ out }}}{\epsilon \alpha }\right)$ for node-wise PPR-based steps, $\mathcal{O}\left( \frac{b}{\epsilon \alpha }\right)$ for batch-wise PPR, and in $\mathcal{O}\left( E\right)$ for graph partitioning. The runtime of IBMB is thus independent of the graph size if we use distance-based partitioning. Fig. 1 gives an overview of the full practical IBMB process.
+
+§ 4 TRAINING WITH IBMB
+
+Computational advantages. The above analysis focused on node outputs, not gradient estimation and training. However, this procedure also has inherent advantages for training, since we need to perform mini-batch generation only once during preprocessing. We can then cache each mini-batch in consecutive blocks of memory, thereby allowing the data to be stored where it is needed and circumventing expensive random data accesses. This significantly accelerates training, allows efficient distributed training, and enables more expensive node selection procedures. In contrast, most previous methods select both output and auxiliary nodes randomly in each epoch, which incurs significant overhead. Our experiments show that IBMB's more efficient memory accesses clearly outweigh the slightly worse gradient estimates (see Sec. 5). This seems counter-intuitive since the deterministic, fixed mini-batches in IBMB only provide sparse, fixed gradient samples. In this section we discuss these aspects and how adaptive optimization and batch scheduling counteract their effects.
+
+Figure 2: Test accuracy and log. inference time for a fixed GNN. IBMB provides the best accuracy versus time trade-off (top-left corner) in all settings.
+
+Sparse gradients. Partitioning output nodes based on proximity effectively correlates the gradients sampled in a batch. The model thus sees a sparse gradient sample, which does not cover all aspects of the dataset. Fortunately, adaptive optimization methods such as Adagrad and Adam were developed exactly for such sparse gradients $\left\lbrack {{12},{26}}\right\rbrack$ . We furthermore ensure an unbiased training process by using every output (training) node exactly once per epoch.
+
+Fixed batches. Using a fixed set of batches can lead to problems with basic stochastic gradient descent (SGD) as well. Imagine training with two fixed batches whose loss functions have different minima. If training has "converged" to one of these minima, SGD would start to oscillate: It would take one step towards the other minimum, and then back, and so forth. To counteract this oscillation, we could add a "consensus constraint" to enforce a consensus between the weights after different batches, akin to distributed optimization [33]. We can solve this constraint using a primal-dual saddle-point algorithm with directed communication [18]. The resulting dynamics are ${\dot{x}}^{\left( t\right) } = - \nabla {\widetilde{f}}^{\left( t\right) }\left( {x}^{\left( t\right) }\right) - {\alpha \lambda }{\dot{x}}^{\left( t - 1\right) } - {\lambda }^{2}{\dot{x}}^{\left( t - 2\right) }$, with the weights ${x}^{\left( t\right) }$ at time step $t$, the learning rate $\lambda$ and the dual variable $\alpha$. These dynamics resemble SGD with momentum, and fit perfectly into the framework of adaptive optimization methods [34]. Indeed, momentum and adaptive methods suppress the oscillations in the above example with two minima. Accordingly, prior works have also found benefits in deterministically selecting fixed mini-batches [3, 37]. We further improve convergence by adaptively reducing the learning rate when the validation loss plateaus, which ensures that the step size decreases consistently.
+
+Batch scheduling. While Adam with learning rate scheduling consistently ensures convergence, we still observe downward spikes in accuracy during training. To illustrate this issue, consider a sequence of mini-batches. In regular training every mini-batch is similar and the order of these batches is irrelevant. In our case, however, some of the mini-batches might be very similar. If the optimizer sees a series of similar batches, it will take increasingly large steps in a suboptimal direction, which leads to the observed downward spikes in accuracy. We propose to prevent these suboptimal batch sequences by optimizing the order of batches. To quantify batch similarity we measure the symmetrized KL-divergence of the label distribution between batches. In particular, we use the normalized training label distribution ${p}_{i} = {c}_{i}/\mathop{\sum }\limits_{j}{c}_{j}$ , where ${c}_{i}$ is the number of training nodes of class $i$ . This results in the pairwise batch distance ${d}_{ab}$ between batches $a$ and $b$ . We propose two ways to use this for improving the batch schedule: (i) Find the fixed batch cycle that maximizes the batch distances between consecutive batches. This is a traveling salesman problem for finding the maximum distance loop that visits all batches. It is therefore only feasible for a small number of batches. (ii) Sample the next batch weighted by the distance to the current batch. Both scheduling methods improve convergence and increase final accuracy, at almost no cost during training. Overall, our training scheme leads to consistent convergence. Even accumulating gradients over the whole epoch does not significantly change convergence or final accuracy (see Fig. 8).
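+As an illustration of variant (ii), the sketch below computes the normalized label distribution per batch, the symmetrized KL divergence between batches, and samples the next batch proportionally to its distance from the current one; the smoothing constant and sampling details are our assumptions.
+
+```python
+import numpy as np
+
+def label_distributions(batch_labels, num_classes: int, eps: float = 1e-8):
+    """Normalized training label distribution p_i = c_i / sum_j c_j per batch."""
+    dists = []
+    for labels in batch_labels:
+        counts = np.bincount(labels, minlength=num_classes).astype(float) + eps
+        dists.append(counts / counts.sum())
+    return np.stack(dists)
+
+def symmetrized_kl(p, q):
+    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
+
+def sample_next_batch(current: int, dists, rng):
+    """Sample the next batch with probability proportional to its
+    label-distribution distance d_ab from the current batch."""
+    d = np.array([symmetrized_kl(dists[current], dists[j]) if j != current else 0.0
+                  for j in range(len(dists))])
+    if d.sum() == 0.0:
+        return int(rng.integers(len(dists)))
+    return int(rng.choice(len(dists), p=d / d.sum()))
+```
+
+Here `rng` can be any NumPy generator, e.g. `np.random.default_rng()`.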
+
+Figure 3: Training convergence of validation accuracy in log. time. Average and 95% confidence interval of 10 runs. GraphSAINT-RW does not reach the shown accuracy range in some settings due to its bad validation performance. IBMB converges the fastest.
+
+§ 5 EXPERIMENTS
+
+Experimental setup. We show results for two variants of our method: IBMB with PPR distance-based batches and node-wise PPR clustering (node-wise IBMB), and IBMB with graph partition-based batches and batch-wise PPR clustering (batch-wise IBMB). We also experimented with the two other combinations of the output node partitioning and auxiliary node selection variants, but found these two to work best. We compare them to four state-of-the-art mini-batching methods: Neighbor sampling [21], Layer-Dependent Importance Sampling (LADIES) [41], GraphSAINT-RW [40], and Cluster-GCN [7]. We use four large node classification datasets for evaluation: ogbn-arxiv [22, 36, ODC-BY], ogbn-products [36, Amazon license], Reddit [21], and ogbn-papers100M [22, 36, ODC-BY]. While these datasets use the transductive setting, IBMB makes no assumptions about this and can equally be applied to the inductive setting. We skip the common, small datasets (Cora, Citeseer, PubMed) since they are ill-suited for evaluating scalability methods. We do not strive to set a new accuracy record but instead aim for a consistent, fair comparison based on three standard GNNs: graph convolutional networks (GCN) [27], graph attention networks (GAT) [35], and GraphSAGE [21]. We use the same training pipeline for all methods, giving them access to the same optimizations. We run each experiment 10 times and report the mean and standard deviation in all tables and the bootstrapped mean and 95% confidence intervals in all figures. We fully pipeline data loading and batch creation by prefetching batches in parallel. We found that using more than one worker for this does not improve runtime, most likely because data loading is limited by the memory bandwidth, which is shared between workers. We keep GPU memory usage constant between methods, and tune all remaining hyperparameters for both IBMB and the baselines. See App. B for full experimental details.
+
+Figure 4: Training convergence in log. time for GCN on ogbn-products with smaller training sets. The gap in convergence speed between IBMB and the baselines grows larger for small training sets, since IBMB scales with training set size and not with overall graph size.
+
+Figure 5: Trained accuracy for node-wise IBMB, depending on the output nodes per batch (GCN, ogbn-arxiv). IBMB is rather insensitive to this choice.
+
+Inference. Fig. 2 compares the inference accuracy and time of different batching methods, using the same pretrained model and varying computational budgets (number of auxiliary nodes/sampled nodes) at a fixed GPU memory budget. IBMB provides the best trade-off between accuracy and time in all settings. Node-wise IBMB performs better than graph partitioning, except on ogbn-products. IBMB provides a significant speedup over chunking-based full-batch inference on GPU, being 10 to 900 times faster at comparable accuracy. All previous methods are either significantly slower or less accurate.
+
+Training. To evaluate training performance we compare how fast the validation accuracy converges for each method. Since full inference is too slow to execute every epoch we use the same mini-batching method to approximate inference. Fig. 3 shows how the accuracy increases depending on training time. IBMB performs significantly better than previous methods, converging up to ${17}\mathrm{x}$ faster than all baselines. This is despite the fact that we always prefetch the next batch in parallel. Note that GAT is slower to compute than GCN and GraphSAGE, limiting the positive impact of a fast batching method. Compute-constrained models like GAT are less relevant in practice since data access is typically the bottleneck for GNNs on large, often even disk-based datasets [4]. Table 6 in the appendix furthermore shows that IBMB's time per epoch is significantly faster than all sampling-based methods. Cluster-GCN has a comparable runtime, which is expected due to its similarity with IBMB. However, it converges more slowly than IBMB and reaches substantially lower final accuracy. Neighbor sampling achieves good final accuracy, but is extremely slow. GraphSAINT-RW only achieves good final accuracy with prohibitively expensive full-batch inference. Node-wise IBMB achieves the best final accuracy with a scalable inference method in 8 out of 10 settings. On ogbn-papers100M, IBMB has a substantially faster time per epoch and lower memory consumption than previous methods demonstrating IBMB's favorable scaling with dataset size. It even performs as well as SIGN-XL $\left( {\left( {{66.1} \pm {0.2}}\right) \% }\right) \left\lbrack {14}\right\rbrack$ , while using ${30}\mathrm{x}$ fewer parameters and no hyperparameter tuning. Notably, we were unable to evaluate GraphSAINT-RW and Cluster-GCN on this dataset, since they use more than ${256}\mathrm{{GB}}$ of main memory.
+
+Preprocessing. IBMB requires more preprocessing than previous methods. However, since IBMB is rather insensitive to hyperparameter choices (see Fig. 5, Table 5), preprocessing rarely needs to be re-run. Instead, its result can be saved to disk and re-used for training different models. Just considering our 10 training seeds, preprocessing of node-wise IBMB only took 1.3% of the training time for GCN and ${0.25}\%$ for GAT on ogbn-arxiv.
+
+Training set size. The ogbn-arxiv and ogbn-products datasets both contain a large number of training nodes (91k and 197k, respectively). However, labeling training samples is often an expensive endeavor, and models are commonly trained with only a few hundred or thousand training samples. GraphSAINT-RW and Cluster-GCN are global training methods, i.e. they always use the whole graph for training. They are thus ill-suited for the common setting of a large overall graph containing a small number of training nodes (resulting in a small label rate). In contrast, the training time of IBMB purely scales with the number of training nodes. To demonstrate this, we reduce the label rate by sub-sampling the training nodes of ogbn-products and compare the convergence in Fig. 4. As expected, the gap in convergence speed between IBMB and both Cluster-GCN and GraphSAINT-RW grows even larger for smaller training sets.
+
+Figure 6: Convergence per time for training GCN on ogbn-arxiv. Both batch-wise and node-wise IBMB lead to faster convergence than fixed random batches.
+
+Figure 7: Batch scheduling for GAT on ogbn-arxiv. Optimal batch order prevents downward spikes in accuracy and leads to higher final accuracy.
+
+Figure 8: Gradient accumulation for batch-wise IBMB on GCN, ogbn-arxiv. The difference is minor, even when accumulating over the full epoch.
+
+Ablation studies. We ablate our output node partitioning schemes by instead batching together random sets of nodes. We use fixed batches since we found that resampling incurs significant overhead without benefits - which is consistent with our considerations on gradient samples and contiguous memory accesses. Fig. 6 shows that this method ("Fixed random") converges more slowly and does not reach the same level of accuracy as our partition schemes. Node-wise IBMB converges the fastest, which suggests a trade-off between full gradient samples (random batching) and maximum batch overlap (graph partitioning). Fig. 2 shows that random batching ("IBMB, random batch.") is also substantially slower and often less accurate for inference. This is due to the synergy effects of output node partitioning: If output nodes have similar auxiliary nodes, they benefit from each other's neighborhood. We can ablate auxiliary node selection by comparing IBMB to Cluster-GCN, since it just uses the graph partition as a batch instead of smartly selecting auxiliary nodes. We use the graph partition size as the number of auxiliary nodes for batch-wise IBMB to allow for a direct comparison. As discussed above, Cluster-GCN consistently performs worse, especially in terms of final accuracy, for inference, and for small label rates. Finally, Fig. 7 compares the proposed batch scheduling methods. Optimal and weighted sampling-based scheduling improve convergence and prevent or reduce downward spikes in accuracy.
+
+Gradient accumulation. Accumulating gradients across multiple batches is a method for smoothing batches if the gradient noise is too high. We might expect this to happen in IBMB due to its sparse gradients. However, Fig. 8 shows that gradient accumulation in fact only has an insignificant effect on IBMB, demonstrating its stability during training.
+
+Sensitivity analysis. IBMB is largely insensitive to different local clustering methods and hyperpa-rameters for selecting auxiliary nodes (see Table 5). Even increasing the number of output nodes per batch with a fixed number of auxiliary nodes per output node only has a minor impact on accuracy, especially above 1000 output nodes per batch, as shown by Fig. 5. IBMB performs well even in extremely constrained settings with small batches of 100 output nodes per batch. In practice, IBMB only has a single free hyperparameter to choose: The number of auxiliary nodes per output node, which allows optimizing for accuracy or speed. The number of output nodes per batch is then given by the available GPU memory, while the local clustering method and other hyperparameters are not important.
+
+§ 6 CONCLUSION
+
+We propose influence-based mini-batching (IBMB), a method for extracting batches for GNNs. IBMB formalizes creating batches for inference by optimizing the influence on the output nodes. Remarkably, with the right training scheme IBMB even performs well during training. It improves training convergence by up to ${17}\mathrm{x}$ and inference time by up to ${130}\mathrm{x}$ compared to previous methods that reach similar accuracy. These improvements grow even larger in the common setting of sparse labels and when the pipeline is constrained by data access speed.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/csY9tr8mR7/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/csY9tr8mR7/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e4e0a3ad361676310229dfbb3ae85f5b9a0b0628
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/csY9tr8mR7/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,196 @@
+# Global Explainability of GNNs via Logic Combination of Learned Concepts
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer manages to provide accurate and human-interpretable global explanations in both synthetic and real world datasets.
+
+## 1 Introduction
+
+Graph Neural Networks (GNNs) have become increasingly popular for predictive tasks on graph structured data. However, as many other deep learning models, their inner working remains a black box. The ability to understand the reason for a certain prediction represents a critical requirement for any decision-critical application, thus representing a big issue for the transition of such algorithms from benchmarks to real-world critical applications.
+
+Over the last years, many works proposed Local Explainers [1-6] to explain the decision process of a GNN in terms of factual explanations often represented as subgraphs for each sample in the dataset. Overall, they shed light over why the network predicted a certain value for a specific input sample. However, they still lack a global understanding of the model. Global Explainers, on the other hand, are aimed at capturing the behaviour of the model as a whole, but despite their potential in interpretability and debugging little has been done in this direction [7]. GLocalX [8] is a general solution to produce global explanations of black-box models by hierarchically aggregating local explanations into global rules. This solution is however not readily applicable to GNNs as it requires local explanations to be expressed as logical rules. Yuan et al. [7] proposed to frame the Global Explanation problem for GNN as a form of input optimization [9], using policy gradient to generate synthetic prototypical graphs for each class. The approach requires prior domain knowledge, which is not always available, to drive the generation of valid prototypes. Additionally, it cannot identify any compositionality in the returned explanation, and has no principled way to generate alternative explanations for a given class.
+
+Concept-based Explainability [10-12] is a parallel line of research where explanations are constructed using "concepts" i.e., intermediate, high-level and semantically meaningful units of information commonly used by humans to explain their decisions. Concept Bottleneck Models [13] and Prototypical Part networks [14] are two popular architectures that leverage concept learning to learn explainable-by-design neural networks. Both approaches have been recently adapted to GNNs [15, 16]. However, these solutions are not conceived for explaining already learned GNNs.
+
+Our contribution consists in the first Global Explainer for GNNs which $i$ ) provides a Global Explanation in terms of logic formulas, extracted by combining in a fully differentiable manner graphical concepts derived from local explanations; ii) is faithful to the data domain, i.e., the logic formulas, being derived from local explanations, are intrinsically part of the input domain without requiring any prior knowledge. We validated our approach on both synthetic and real-world datasets, showing that our method is able to accurately summarize the behaviour of the model to explain, while providing explanations in terms of concise logic formulas.
+
+## 2 Proposed Method
+
+
+
+Figure 1: Illustration of the proposed method for a task of binary classification.
+
+Our proposed Global Explainer, named GLGExplainer (Global Logic-based GNN Explainer), is summarized in Figure 1. In the following we will describe each step in greater detail.
+
+Local Explanations Extraction: The first step of our pipeline consists in extracting local explanations. Let $\operatorname{LEXP}\left( {f,\mathcal{G}}\right) = \widehat{\mathcal{G}}$ be the weighted graph obtained by applying the local explainer LEXP to generate a local explanation for the prediction of the GNN $f$ over the input graph $\mathcal{G}$ . In principle, every Local Explainer whose output can be mapped to a subgraph of the input sample is compatible with our pipeline [1-6]. Nonetheless, in this work, we relied on PGExplainer [2] since it allows the extraction of arbitrary disconnected motifs as explanations and it gave excellent results in our experiments. By binarizing the output of the local explainer $\widehat{\mathcal{G}}$ with threshold $\theta \in \mathbb{R}$ we achieve a set of connected components ${\overline{\mathcal{G}}}_{i}$ such that $\mathop{\bigcup }\limits_{i}{\overline{\mathcal{G}}}_{i} \subseteq \widehat{\mathcal{G}}$ . For convenience, we will henceforth refer to each of these ${\overline{\mathcal{G}}}_{i}$ as local explanation. Given that we want to emulate the behaviour of $f$ on correctly predicted samples, we will discard every input graph $\mathcal{G}$ belonging to wrongly predicted samples. The result of this extraction thus consists in a list $D$ of local explanations. More details about the binarization are available in the Appendix.
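+The binarization step could look like the following sketch, which keeps edges whose importance exceeds $\theta$ and returns each connected component as one local explanation; the use of networkx, the `weight` attribute, and the minimum component size are our illustrative assumptions.
+
+```python
+import networkx as nx
+
+def extract_local_explanations(weighted_graph: nx.Graph, theta: float, min_size: int = 2):
+    """Binarize a weighted local explanation with threshold theta and return
+    its connected components as individual local explanations."""
+    kept = nx.Graph()
+    kept.add_nodes_from(weighted_graph.nodes(data=True))
+    for u, v, data in weighted_graph.edges(data=True):
+        if data.get("weight", 0.0) >= theta:
+            kept.add_edge(u, v)
+    components = []
+    for nodes in nx.connected_components(kept):
+        if len(nodes) >= min_size:               # drop isolated leftover nodes
+            components.append(kept.subgraph(nodes).copy())
+    return components
+```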
+
+Embedding Local Explanations: The following step consists in learning an embedding for each local explanation that allows to cluster together functionally similar local explanations. This can be achieved with a standard GNN $h$ which maps any graph $\overline{\mathcal{G}}$ into a fixed-sized embedding $h\left( \overline{\mathcal{G}}\right) \in {\mathbb{R}}^{d}$ . Since each local explanation $\overline{\mathcal{G}}$ is a subgraph of an input graph $\mathcal{G}$ , in our experiments we used the original node features of the dataset. The outcome of this aggregation consists in a set $E = \{ h\left( \overline{\mathcal{G}}\right) ,\forall \overline{\mathcal{G}} \in D\}$ of graph embeddings.
+
+Concept Projection: Inspired by previous works on prototype learning [17, 18], we project each graph embedding $e \in E$ into a set $P$ of $m \in \mathbb{N}$ prototypes $\left\{ {p}_{i} \in {\mathbb{R}}^{d} \mid i = 1,\ldots , m\right\}$ via a distance function $d\left( {p}_{i}, e\right) = \operatorname{softmax}{\left( \log \frac{{\left\| e - {p}_{1}\right\| }^{2} + 1}{{\left\| e - {p}_{1}\right\| }^{2} + \epsilon },\ldots ,\log \frac{{\left\| e - {p}_{m}\right\| }^{2} + 1}{{\left\| e - {p}_{m}\right\| }^{2} + \epsilon }\right) }_{i}$. Prototypes are initialized randomly from a uniform distribution and are learned along with the other parameters of the architecture. As training progresses, the prototypes align into prototypical representations of the clusters of local explanations, which represent the final groups of graphical concepts. The output of this projection is a set $V = \left\{ {v}_{e},\forall e \in E\right\}$, where ${v}_{e} = \left\lbrack d\left( {p}_{1}, e\right) ,\ldots , d\left( {p}_{m}, e\right) \right\rbrack$ is a vector containing the normalized probabilities of the local explanation belonging to each of the $m$ concepts; it will henceforth be referred to as concept vector.
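+A minimal PyTorch sketch of this projection is given below; the uniform prototype initialization follows the description above, while the module name and the value of $\epsilon$ are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+class ConceptProjection(nn.Module):
+    """Maps a graph embedding e to a concept vector via a softmax over
+    log((||e - p_i||^2 + 1) / (||e - p_i||^2 + eps)) for m learned prototypes."""
+    def __init__(self, num_prototypes: int, dim: int, eps: float = 1e-6):
+        super().__init__()
+        self.prototypes = nn.Parameter(torch.rand(num_prototypes, dim))  # uniform init
+        self.eps = eps
+
+    def forward(self, e: torch.Tensor) -> torch.Tensor:
+        sq_dist = torch.cdist(e, self.prototypes).pow(2)   # [batch, m] squared distances
+        scores = torch.log((sq_dist + 1.0) / (sq_dist + self.eps))
+        return torch.softmax(scores, dim=-1)               # concept vectors v_e
+```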
+
+Formulas Learning: The final step consists of an E-LEN, i.e., a Logic Explainable Network [19] implemented with an Entropy Layer as first layer [20]. An E-LEN learns to map a concept activation vector to a class while encouraging a sparse use of concepts that allows to reliably extract Boolean formulas emulating the network behaviour. We train an E-LEN to emulate the behaviour of the GNN $f$ feeding it with the graphical concepts extracted from the local explanations. Given a set of local explanations ${\overline{\mathcal{G}}}_{a}\ldots {\overline{\mathcal{G}}}_{{n}_{i}}$ for an input graph ${\mathcal{G}}_{i}$ and a corresponding set of the concept vectors ${v}_{a}\ldots {v}_{{n}_{i}}$ , we aggregate the concept vectors via a pooling operator and feed the resulting aggregated concept vector to the E-LEN, providing $f\left( {\mathcal{G}}_{i}\right)$ as supervision. In our experiments we used a max-pooling operator. Thus, the Entropy Layer learns a mapping from the pooled concept vector to (i) the embeddings $z$ (as any linear layer) which will be used by the successive MLP for matching the predictions of $f$ . (ii) a truth table $T$ explaining how the network leveraged concepts to make predictions for the target class. Since the input pooled concept vector will constitute the premise in the truth table $T$ , a desirable property to improve human readability is discreteness, which we achieved using the Straight-Through (ST) trick used for discrete Gumbel-Softmax Estimator [21]. In practice, we compute the forward pass discretizing each ${v}_{i}$ via argmax, then, in the backward pass to favor the flow of informative gradient we use its continuous version.
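+The straight-through discretization mentioned above can be written as a small helper: the forward value is the hard one-hot argmax of the concept vector, while the gradient flows through the continuous probabilities. This is a generic ST sketch, not the authors' exact code.
+
+```python
+import torch
+
+def straight_through_concepts(v: torch.Tensor) -> torch.Tensor:
+    """Hard argmax in the forward pass, soft gradients in the backward pass."""
+    hard = torch.zeros_like(v).scatter_(-1, v.argmax(dim=-1, keepdim=True), 1.0)
+    return hard + v - v.detach()   # value equals `hard`, gradient equals that of `v`
+```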
+
+Supervision Losses: Our proposed GLGExplainer is trained end-to-end with the following loss: $L = {L}_{\text{surr }} + {\lambda }_{1}{L}_{R1} + {\lambda }_{2}{L}_{R2}$ , where ${L}_{\text{surr }}$ corresponds to a Focal BCELoss [22] between the prediction of our E-LEN and the predictions to explain, while ${L}_{R1}$ and ${L}_{R2}$ are respectively aimed to push every prototype to be close to at least one local explanation and to push each local explanation to be close to at least one prototype [17]. The losses are defined as follows:
+
+$$
+{L}_{\text{surr }} = - y{\left( 1 - p\right) }^{\gamma }\log p - \left( {1 - y}\right) {p}^{\gamma }\log \left( {1 - p}\right) \tag{1}
+$$
+
+$$
+{L}_{R1} = \frac{1}{m}\mathop{\sum }\limits_{{j = 1}}^{m}\mathop{\min }\limits_{{\overline{\mathcal{G}} \in D}}{\begin{Vmatrix}{p}_{j} - h\left( \overline{\mathcal{G}}\right) \end{Vmatrix}}^{2} \tag{2}
+$$
+
+$$
+{L}_{R2} = \frac{1}{\left| D\right| }\mathop{\sum }\limits_{{\overline{\mathcal{G}} \in D}}\mathop{\min }\limits_{{j \in \left\lbrack {1, m}\right\rbrack }}{\begin{Vmatrix}{p}_{j} - h\left( \overline{\mathcal{G}}\right) \end{Vmatrix}}^{2} \tag{3}
+$$
+
+where $p$ and $\gamma$ denote, respectively, the predicted probability of the positive class and the focusing parameter, which controls how strongly hard examples are emphasized relative to easy ones.
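+
+The three terms could be computed, for instance, as in the following PyTorch sketch; the variable names and the batch-mean reduction of Eq. (1) are our assumptions:
+
+```python
+import torch
+
+def glg_losses(y_true, y_prob, prototypes, expl_embeddings, gamma=2.0):
+    # y_true: (n,) GNN predictions to emulate; y_prob: (n,) E-LEN positive-class probabilities.
+    # prototypes: (m, d); expl_embeddings: (|D|, d) embeddings h(G) of all local explanations.
+    y_prob = y_prob.clamp(1e-7, 1 - 1e-7)
+    # Eq. (1): focal binary cross-entropy, averaged over the batch.
+    l_surr = (-y_true * (1 - y_prob) ** gamma * torch.log(y_prob)
+              - (1 - y_true) * y_prob ** gamma * torch.log(1 - y_prob)).mean()
+    sq_dist = torch.cdist(prototypes, expl_embeddings) ** 2          # (m, |D|)
+    l_r1 = sq_dist.min(dim=1).values.mean()   # Eq. (2): each prototype near some explanation
+    l_r2 = sq_dist.min(dim=0).values.mean()   # Eq. (3): each explanation near some prototype
+    return l_surr, l_r1, l_r2
+```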
+
+## 3 Experiments
+
+We tested our proposed approach on two datasets, namely:
+
+BAMultiShapes: BAMultiShapes is a newly introduced extension of some popular synthetic benchmarks [1], aimed at assessing the ability of a Global Explainer to deal with logical combinations of concepts. In particular, we created a dataset composed of Barabási-Albert (BA) graphs with the following network motifs attached at random positions: house, grid, wheel. Class 0 contains plain BA graphs and BA graphs enriched with a house, a grid, a wheel, or all three motifs together. Class 1 contains BA graphs enriched with a house and a grid, a house and a wheel, or a wheel and a grid.
+
+Mutagenicity: The Mutagenicity dataset is a collection of molecule graphs where each graph is labelled as either having a mutagenic effect or not. Based on [23], the mutagenicity of a molecule is correlated with the presence of electron-attracting elements conjugated with nitro groups (e.g. NO2).
+
+For Mutagenicity we replicated the model accuracy and the local explanations presented in [2], while for BAMultiShapes we trained a 3-layer GCN until convergence. Details about the implementation and the pre-processing of local explanations, along with model accuracies, are in the Appendix.
+
+Table 1: Mean and standard deviation of Fidelity, Formula Accuracy and Concept Purity computed on the test set over 5 runs with different random seeds. The Formula Accuracy refers to the formulas presented in Figure 2. Since the Concept Purity is computed for every cluster independently, here we report mean and standard deviation for the best run only.
+
+| Dataset | Fidelity | Formula Accuracy | Concept Purity |
+| --- | --- | --- | --- |
+| BAMultiShapes | ${0.99} \pm {0.00}$ | ${0.99} \pm {0.00}$ | ${0.85} \pm {0.22}$ |
+| Mutagenicity | ${0.85} \pm {0.01}$ | ${0.85} \pm {0.01}$ | ${0.99} \pm {0.01}$ |
+
+To show the robustness of our proposed methodology, we evaluated GLGExplainer on a number of metrics, namely: $i$ ) FIDELITY, which measures the agreement between the predictions of the E-LEN and those of the GNN to explain; ii) FORMULA ACCURACY, which measures how well the learned formulas predict the class labels; iii) CONCEPT PURITY, which is computed for every cluster independently and measures how well the embedding clusters the local explanations. Table 1 reports the results in terms of the three metrics, showing how GLGExplainer manages to provide reliable explanations under all these perspectives. Note that XGNN [7], the only available competitor for global explanations of GNNs, cannot be evaluated according to these metrics. Figure 2 presents the final global explanations, where we substituted each literal with its corresponding prototypical graphical concept, and reports the explanations generated by XGNN for comparison. GLGExplainer produces highly interpretable explanations that match the ground-truth formula (for BAMultiShapes) and existing knowledge (for Mutagenicity) with remarkable accuracy. It is worth mentioning that the global explanations for Class 0 of BAMultiShapes do not comprise the case with all three motifs together. We observed that the reason resides in the GNN to explain failing at classifying every sample with such structure. Thus, GLGExplainer is effectively explaining the GNN $f$ and not simply the dataset structure. Conversely, XGNN fails to generate interpretable explanations in most cases. Details about concept compositions and formula extraction are available in the Appendix.
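+
+For illustration, the sketch below shows how such metrics can be computed from NumPy arrays of discrete predictions and manually annotated motif labels; the purity definition used here (majority-label fraction per cluster) is an assumption rather than the exact definition used in the evaluation:
+
+```python
+import numpy as np
+
+def fidelity(elen_pred: np.ndarray, gnn_pred: np.ndarray) -> float:
+    # Fraction of samples on which the E-LEN reproduces the GNN prediction.
+    return float((elen_pred == gnn_pred).mean())
+
+def concept_purity(cluster_ids: np.ndarray, motif_labels: np.ndarray) -> np.ndarray:
+    # For each cluster, the fraction of local explanations carrying the
+    # most frequent annotated motif label (one purity value per cluster).
+    purities = []
+    for c in np.unique(cluster_ids):
+        labels = motif_labels[cluster_ids == c]
+        purities.append(np.bincount(labels).max() / labels.size)
+    return np.array(purities)
+
+print(fidelity(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1])))   # 0.75
+```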
+
+
+
+Figure 2: Global explanations of GLGExplainer (ours) and XGNN. For Class 0 of BAMultiShapes, XGNN was not able to generate a graph with confidence $\geq {0.5}$
+
+## 4 Discussion & Conclusions
+
+Given the results presented in the section above, it is worth noting that concept clusters emerge solely from the supervision defined in Section 2; no specific supervision was added to cluster local explanations based on their similarity. Further details about the clusters' composition are available in the Appendix. Overall, the results confirm the ability of GLGExplainer to provide logic formulas, expressed over learned graphical concepts, that accurately summarize the global behaviour of the model, whereas the existing XGNN fails at providing concise and faithful explanations.
+
+## References
+
+[1] Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnnexplainer: Generating explanations for graph neural networks, 2019. URL https://arxiv.org/abs/1903.03894.
+
+[2] Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network, 2020. URL https://arxiv.org/abs/2011.04573.
+
+[3] Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. On explainability of graph neural networks via subgraph explorations, 2021. URL https://arxiv.org/abs/2102.05152.
+
+[4] Minh N. Vu and My T. Thai. Pgm-explainer: Probabilistic graphical model explanations for graph neural networks, 2020. URL https://arxiv.org/abs/2010.05788.
+
+[5] Caihua Shan, Yifei Shen, Yao Zhang, Xiang Li, and Dongsheng Li. Reinforcement learning enhanced explainer for graph neural networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 22523-22533. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/be26abe76fb5c8a4921cf9d3e865b454-Paper.pdf.
+
+[6] Phillip E Pope, Soheil Kolouri, Mohammad Rostami, Charles E Martin, and Heiko Hoffmann. Explainability methods for graph convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10772-10781, 2019.
+
+[7] Hao Yuan, Jiliang Tang, Xia Hu, and Shuiwang Ji. XGNN: Towards model-level explanations of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, aug 2020. doi: 10.1145/3394486.3403085. URL https://doi.org/10.1145/3394486.3403085.
+
+[8] Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi, and Fosca Giannotti. GLocalX - from local to global explanations of black box AI models. Artificial Intelligence, 294:103457, may 2021. doi: 10.1016/j.artint.2021.103457. URL https://doi.org/10.1016/j.artint.2021.103457.
+
+[9] Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, and Yu-Wing Tai. Towards global explanations of convolutional neural networks with concept attribution. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8649-8658, 2020. doi: 10.1109/CVPR42600.2020.00868.
+
+[10] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda B. Viégas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, pages 2673-2682. PMLR, 2018.
+
+[11] Amirata Ghorbani, James Wexler, James Y. Zou, and Been Kim. Towards automatic concept-based explanations. In Neural Information Processing Systems (NeurIPS), pages 9273-9282, 2019.
+
+[12] Chih-Kuan Yeh, Been Kim, Sercan Ömer Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks. In Neural Information Processing Systems (NeurIPS), 2020.
+
+[13] Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept bottleneck models. In International Conference on Machine Learning (ICML), volume 119 of Proceedings of Machine Learning Research, pages 5338-5348. PMLR, 2020.
+
+[14] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: Deep learning for interpretable image recognition. Advances in Neural Information Processing Systems, 32:8930-8941, 2019.
+
+[15] Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. Protgnn: Towards self-explaining graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 9127-9135, 2022.
+
+[16] Dobrik Georgiev, Pietro Barbiero, Dmitry Kazhdan, Petar Veličković, and Pietro Liò. Algorithmic concept-based explainable reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6685-6693, 2022.
+
+[17] Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions, 2017. URL https://arxiv.org/abs/1710.04806.
+
+[18] Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: Deep learning for interpretable image recognition. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/adf7ee2dcf142b0e11888e72b43fcb75-Paper.pdf.
+
+[19] Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Lió, Marco Maggini, and Stefano Melacci. Logic explained networks. arXiv preprint arXiv:2108.05149, 2021.
+
+[20] Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, and Stefano Melacci. Entropy-based logic explanations of neural networks, 2021. URL https://arxiv.org/abs/2106.06804.
+
+[21] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax, 2016. URL https://arxiv.org/abs/1611.01144.
+
+[22] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection, 2017. URL https://arxiv.org/abs/1708.02002.
+
+[23] Asim Kumar Debnath, Rosa L. Lopez de Compadre, Gargi Debnath, Alan J. Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786-797, 1991. doi: 10.1021/jm00106a046. URL https://doi.org/10.1021/jm00106a046.
+
+[24] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
+
+[25] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
+
+## A Appendix
+
+### A.1 Training the GNN $f$
+
+For both BAMultiShapes and Mutagenicity we relied on the codebase provided by [2] for training the GNN $f$ to explain and for training the Local Explainer. For BAMultiShapes we trained a 3-layer GCN [24] (20-20-20 hidden units) with mean graph pooling for the final prediction, whereas for Mutagenicity we reproduced the results of [2]. A summary of the model's performance is available in Table 2. Despite the high accuracy over BAMultiShapes, a closer look revealed that the network did not actually learn the All concept, i.e., the three motifs together. This detailed view is available in Table 3, and it explains why the global explanations for Class 0 in Figure 2 do not include this concept.
+
+Table 2: GNN accuracies for BAMultiShapes and Mutagenicity. The results for Mutagenicity are in line with those reported in [2].
+
+| Split | BAMultiShapes | Mutagenicity |
+| --- | --- | --- |
+| Train | 0.94 | 0.87 |
+| Val | 0.94 | 0.86 |
+| Test | 0.99 | 0.86 |
+
+Table 3: Accuracy of the model on the train set of BAMultiShapes with respect to every combination of motifs to be added to the Barabási-Albert base graph. $H, G, W$ stand respectively for House, Grid, and Wheel.
+
+| Motifs | 0 | $H$ | $G$ | $W$ | ${All}$ | $H + G$ | $H + W$ | $G + W$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Class | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 |
+| Accuracy | 1.0 | 1.0 | 0.85 | 1.0 | 0.0 | 1.0 | 0.98 | 1.0 |
+
+### A.2 Local Explanations Processing
+
+As detailed in [2], the output of PGExplainer consists of a weighted edge mask over $\mathcal{V} \times \mathcal{V}$ , where each ${w}_{ij}$ is the likelihood of the corresponding edge being important. For Mutagenicity, we stuck to the original implementation, which correctly reproduced the results presented in the paper [2]. The only difference resides in the procedure for cutting the explanation, which is needed to remove from the final local explanation the edges that were assigned low scores. The authors in [2] limited their analysis to graphs containing the ground-truth motifs, and proposed to simply keep the top-k edges. We, instead, selected the numeric threshold $\theta \in \mathbb{R}$ which maximises the F1 score of the explainer over all graphs. This threshold is then used to cut out the irrelevant edges, by applying the indicator function ${\mathbf{1}}_{{w}_{ij} \geq \theta }$ to the edge mask. The resulting edge mask is thus the binary adjacency matrix of the final explanation. For BAMultiShapes, however, we adopted a dynamic algorithm to select $\theta$ that does not require any prior knowledge about the ground-truth motifs. This algorithm resembles the elbow method, i.e., for each local explanation it chooses as $\theta$ the first value that is different enough from the previous ordered values. Figure 3 shows some examples for each dataset along with their local explanations in bold.
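+
+A rough sketch of the binarization and of an elbow-like threshold choice is given below (NumPy; the `jump` parameter and the exact gap rule are illustrative assumptions, not the actual implementation):
+
+```python
+import numpy as np
+
+def binarize_edge_mask(weights: np.ndarray, theta: float) -> np.ndarray:
+    # Indicator 1[w_ij >= theta] applied to the weighted edge mask.
+    return (weights >= theta).astype(np.int8)
+
+def elbow_threshold(weights: np.ndarray, jump: float = 0.2) -> float:
+    # Sort edge scores in decreasing order and cut before the first gap
+    # larger than `jump`; `jump` is an illustrative hyper-parameter.
+    w = np.sort(weights)[::-1]
+    gaps = w[:-1] - w[1:]
+    idx = int(np.argmax(gaps > jump)) if (gaps > jump).any() else len(w) - 1
+    return float(w[idx])
+
+scores = np.array([0.95, 0.93, 0.91, 0.40, 0.35, 0.10])
+theta = elbow_threshold(scores)                  # 0.91: cut before the 0.91 -> 0.40 drop
+mask = binarize_edge_mask(scores, theta)         # [1, 1, 1, 0, 0, 0]
+```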
+
+### A.3 The GLGExplainer
+
+The reference implementation of our Local Explanation Embedder $h$ consists of a 2-layer GIN [25] network with 20 hidden units, followed by a non-linear combination of max, mean, and sum graph pooling. We chose $m = 6$ and $m = 2$ prototypes for BAMultiShapes and Mutagenicity respectively, keeping the dimensionality $d$ at 10. We trained using the ADAM optimizer with early stopping, with a learning rate of $1{e}^{-3}$ for $h$ and the prototypes $P$ , and of $5{e}^{-4}$ for the E-LEN. The batch size is set to 128, the auxiliary loss coefficients ${\lambda }_{1}$ and ${\lambda }_{2}$ are chosen via cross-validation and set to 0.09 and 0.00099 respectively, and the focusing parameter $\gamma$ is kept fixed at 2. The E-LEN consists of the input Entropy Layer (Entr.Layer $: {\mathbb{R}}^{m} \rightarrow {\mathbb{R}}^{10}$ ), a hidden layer (HiddenLayer $: {\mathbb{R}}^{10} \rightarrow {\mathbb{R}}^{5}$ ), and the output layer with LeakyReLU activation function.
+
+In the rest of this section we provide an ablation study to demonstrate the effectiveness of the Focal loss, the Discretization trick, and the impact of the number of prototypes in use.
+
+Focal loss: Figure 4 presents a comparison of the learning curves for BAMultiShapes, showing that using the Focal loss with a focusing parameter of 2 helps to achieve faster convergence while not being detrimental to the overall performance.
+
+Number of prototypes: An effective approach to select an appropriate value $m$ for the number of prototypes in use is via cross-validation, and by selecting the smallest $m$ which achieves a competitive fidelity. In Figure 5 we show how different values of $m$ impact the Fidelity and the Formula Accuracy.
+
+Discretization trick: The Discretization trick was introduced in Section 2 to enforce a discrete prototype assignment, which is essential for an unambiguous definition of the concepts on which the formulas are based. In Figure 6 we show for BAMultiShapes that this trick is also effective in improving the overall performance of GLGExplainer, since it forces the hidden layers of the E-LEN to exploit only the information relative to the closest prototype, without relying on other positional information. Thus, the E-LEN's predictions are much more aligned with the discrete formulas being extracted. In the figure we further compare against a plain model without Discretization and against the addition to the overall loss of an entropy loss over the concept vector (Concept Entropy loss) with different scaling parameters ${\lambda }_{3} \in \{ {0.01},{0.1}\}$ . This Concept Entropy (CE) loss pushes the pre-pooling concept vector to have low entropy, thus effectively pushing every local explanation to be assigned with confidence to just one prototype.
+
+
+
+Figure 3: Random examples of input graphs along with their explanations in bold as extracted by PGExplainer, for respectively BAMultiShapes and Mutagenicity.
+
+### A.4 Cluster Composition & Formulas Renaming
+
+To effectively explore the content of each local-explanation cluster, we plot in Figure 7 some random elements for each dataset. In most cases, the clusters contain atomic motifs (House, Grid, NO2, etc.), while in others the embedder $h$ clustered together heterogeneous motifs. This is particularly evident for the cluster relative to the prototype ${p}_{3}$ of BAMultiShapes, in which every local explanation comprising two atomic motifs is aggregated. The reason for this behaviour is that we are aggregating local explanations solely based on the ability of the E-LEN to emulate the predictions of $f$ . Thus,
+
+
+
+Figure 4: Is the Focal Loss actually needed?
+
+Table 4: Raw formulas as extracted by the Entropy Layer. Each formula was rewritten following the Closed-World Assumption for convenience.
+
+| Dataset | Raw Formulas |
+| --- | --- |
+| BAMultiShapes | ${\text{Class}}_{0} \Leftrightarrow {P}_{0} \vee {P}_{5} \vee {P}_{1} \vee {P}_{4} \vee {P}_{2} \vee \left( {{P}_{4} \land {P}_{2}}\right)$ ; ${\text{Class}}_{1} \Leftrightarrow {P}_{3} \vee ({P}_{5} \land {P}_{2}) \vee ({P}_{5} \land {P}_{1}) \vee ({P}_{2} \land {P}_{1})$ |
+| Mutagenicity | ${\mathrm{{Class}}}_{0} \Leftrightarrow {P}_{1} \vee \left( {{P}_{0} \land {P}_{1}}\right)$ ; ${\mathrm{{Class}}}_{1} \Leftrightarrow {P}_{0}$ |
+
+since the simultaneous presence of two motifs appears only in Class 1, a single cluster aggregating all these mixed local explanations is enough to maximize performance. This is also the reason for the high variability in Concept Purity reported in Table 1, since it is computed considering the purity in terms of labelled atomic motifs. For completeness, we additionally report in Figure 8 a 2D PCA-reduced view of the embedding space, annotated with the prototypes' positions.
+
+Given that the default implementation of the Entropy layer returns formulas expressed in terms of the single concepts in input, Figure 7 is also useful to rename each literal into its corresponding graphical concept. Table 4 shows an example of such raw formulas, while Figure 2 presents the final formulas after replacing each raw name with the corresponding graphical concept.
+
+
+
+Figure 5: How can we choose the number of prototypes to use? The first row refers to BAMultiShapes, while the second to Mutagenicity.
+
+
+
+Figure 6: Is the Discretization trick useful?
+
+
+
+Figure 7: Random representative elements for each prototype in BAMultiShapes and Mutagenicity.
+
+
+
+Figure 8: 2D view of the embedding space annotated with prototypes positions.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/csY9tr8mR7/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/csY9tr8mR7/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..094fee35b402c58c18e9d7a9cad63dacd263a881
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/csY9tr8mR7/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,87 @@
+§ GLOBAL EXPLAINABILITY OF GNNS VIA LOGIC COMBINATION OF LEARNED CONCEPTS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer manages to provide accurate and human-interpretable global explanations in both synthetic and real world datasets.
+
+§ 1 INTRODUCTION
+
+Graph Neural Networks (GNNs) have become increasingly popular for predictive tasks on graph structured data. However, as many other deep learning models, their inner working remains a black box. The ability to understand the reason for a certain prediction represents a critical requirement for any decision-critical application, thus representing a big issue for the transition of such algorithms from benchmarks to real-world critical applications.
+
+Over the last years, many works proposed Local Explainers [1-6] to explain the decision process of a GNN in terms of factual explanations often represented as subgraphs for each sample in the dataset. Overall, they shed light over why the network predicted a certain value for a specific input sample. However, they still lack a global understanding of the model. Global Explainers, on the other hand, are aimed at capturing the behaviour of the model as a whole, but despite their potential in interpretability and debugging little has been done in this direction [7]. GLocalX [8] is a general solution to produce global explanations of black-box models by hierarchically aggregating local explanations into global rules. This solution is however not readily applicable to GNNs as it requires local explanations to be expressed as logical rules. Yuan et al. [7] proposed to frame the Global Explanation problem for GNN as a form of input optimization [9], using policy gradient to generate synthetic prototypical graphs for each class. The approach requires prior domain knowledge, which is not always available, to drive the generation of valid prototypes. Additionally, it cannot identify any compositionality in the returned explanation, and has no principled way to generate alternative explanations for a given class.
+
+Concept-based Explainability [10-12] is a parallel line of research where explanations are constructed using "concepts" i.e., intermediate, high-level and semantically meaningful units of information commonly used by humans to explain their decisions. Concept Bottleneck Models [13] and Prototypical Part networks [14] are two popular architectures that leverage concept learning to learn explainable-by-design neural networks. Both approaches have been recently adapted to GNNs [15, 16]. However, these solutions are not conceived for explaining already learned GNNs.
+
+Our contribution consists in the first Global Explainer for GNNs which $i$ ) provides a Global Explanation in terms of logic formulas, extracted by combining in a fully differentiable manner graphical concepts derived from local explanations; ii) is faithful to the data domain, i.e., the logic formulas, being derived from local explanations, are intrinsically part of the input domain without requiring any prior knowledge. We validated our approach on both synthetic and real-world datasets, showing that our method is able to accurately summarize the behaviour of the model to explain, while providing explanations in terms of concise logic formulas.
+
+§ 2 PROPOSED METHOD
+
+ < g r a p h i c s >
+
+Figure 1: Illustration of the proposed method for a task of binary classification.
+
+Our proposed Global Explainer, named GLGExplainer (Global Logic-based GNN Explainer), is summarized in Figure 1. In the following we will describe each step in greater detail.
+
+Local Explanations Extraction: The first step of our pipeline consists in extracting local explanations. Let $\operatorname{LEXP}\left( {f,\mathcal{G}}\right) = \widehat{\mathcal{G}}$ be the weighted graph obtained by applying the local explainer LEXP to generate a local explanation for the prediction of the GNN $f$ over the input graph $\mathcal{G}$ . In principle, every Local Explainer whose output can be mapped to a subgraph of the input sample is compatible with our pipeline [1-6]. Nonetheless, in this work, we relied on PGExplainer [2] since it allows the extraction of arbitrary disconnected motifs as explanations and it gave excellent results in our experiments. By binarizing the output of the local explainer $\widehat{\mathcal{G}}$ with threshold $\theta \in \mathbb{R}$ we achieve a set of connected components ${\overline{\mathcal{G}}}_{i}$ such that $\mathop{\bigcup }\limits_{i}{\overline{\mathcal{G}}}_{i} \subseteq \widehat{\mathcal{G}}$ . For convenience, we will henceforth refer to each of these ${\overline{\mathcal{G}}}_{i}$ as local explanation. Given that we want to emulate the behaviour of $f$ on correctly predicted samples, we will discard every input graph $\mathcal{G}$ belonging to wrongly predicted samples. The result of this extraction thus consists in a list $D$ of local explanations. More details about the binarization are available in the Appendix.
+
+Embedding Local Explanations: The following step consists in learning an embedding for each local explanation that allows to cluster together functionally similar local explanations. This can be achieved with a standard GNN $h$ which maps any graph $\overline{\mathcal{G}}$ into a fixed-sized embedding $h\left( \overline{\mathcal{G}}\right) \in {\mathbb{R}}^{d}$ . Since each local explanation $\overline{\mathcal{G}}$ is a subgraph of an input graph $\mathcal{G}$ , in our experiments we used the original node features of the dataset. The outcome of this aggregation consists in a set $E = \{ h\left( \overline{\mathcal{G}}\right) ,\forall \overline{\mathcal{G}} \in D\}$ of graph embeddings.
+
+Concept Projection: Inspired by previous works on prototype learning [17, 18], we project each graph embedding $e \in E$ into a set $P$ of $m \in \mathbb{N}$ prototypes $\left\{ {{p}_{i} \in {\mathbb{R}}^{d} \mid i = 1,\ldots ,m}\right\}$ via a distance function $d\left( {p}_{i}, e\right) = \operatorname{softmax}{\left( \log \left( \frac{\left\| e - {p}_{1}\right\| ^{2} + 1}{\left\| e - {p}_{1}\right\| ^{2} + \epsilon }\right) ,\ldots ,\log \left( \frac{\left\| e - {p}_{m}\right\| ^{2} + 1}{\left\| e - {p}_{m}\right\| ^{2} + \epsilon }\right) \right) }_{i}$ . Prototypes are initialized randomly from a uniform distribution and are learned along with the other parameters of the architecture. As training progresses, the prototypes will align as prototypical representations of every cluster of local explanations, which will represent the final groups of graphical concepts. The output of this projection is thus a set $V = \left\{ {{v}_{e},\forall e \in E}\right\}$ where ${v}_{e} = \left\lbrack {d\left( {{p}_{1},e}\right) ,\ldots, d\left( {{p}_{m},e}\right) }\right\rbrack$ is a vector containing the normalized probabilities of the local explanation belonging to the $m$ concepts, and will be henceforth referred to as concept vector.
+
+Formulas Learning: The final step consists of an E-LEN, i.e., a Logic Explainable Network [19] implemented with an Entropy Layer as first layer [20]. An E-LEN learns to map a concept activation vector to a class while encouraging a sparse use of concepts that allows to reliably extract Boolean formulas emulating the network behaviour. We train an E-LEN to emulate the behaviour of the GNN $f$ feeding it with the graphical concepts extracted from the local explanations. Given a set of local explanations ${\overline{\mathcal{G}}}_{a}\ldots {\overline{\mathcal{G}}}_{{n}_{i}}$ for an input graph ${\mathcal{G}}_{i}$ and a corresponding set of the concept vectors ${v}_{a}\ldots {v}_{{n}_{i}}$ , we aggregate the concept vectors via a pooling operator and feed the resulting aggregated concept vector to the E-LEN, providing $f\left( {\mathcal{G}}_{i}\right)$ as supervision. In our experiments we used a max-pooling operator. Thus, the Entropy Layer learns a mapping from the pooled concept vector to (i) the embeddings $z$ (as any linear layer) which will be used by the successive MLP for matching the predictions of $f$ . (ii) a truth table $T$ explaining how the network leveraged concepts to make predictions for the target class. Since the input pooled concept vector will constitute the premise in the truth table $T$ , a desirable property to improve human readability is discreteness, which we achieved using the Straight-Through (ST) trick used for discrete Gumbel-Softmax Estimator [21]. In practice, we compute the forward pass discretizing each ${v}_{i}$ via argmax, then, in the backward pass to favor the flow of informative gradient we use its continuous version.
+
+Supervision Losses: Our proposed GLGExplainer is trained end-to-end with the following loss: $L = {L}_{\text{ surr }} + {\lambda }_{1}{L}_{R1} + {\lambda }_{2}{L}_{R2}$ , where ${L}_{\text{ surr }}$ corresponds to a Focal BCELoss [22] between the prediction of our E-LEN and the predictions to explain, while ${L}_{R1}$ and ${L}_{R2}$ are respectively aimed to push every prototype to be close to at least one local explanation and to push each local explanation to be close to at least one prototype [17]. The losses are defined as follows:
+
+$$
+{L}_{\text{ surr }} = - y{\left( 1 - p\right) }^{\gamma }\log p - \left( {1 - y}\right) {p}^{\gamma }\log \left( {1 - p}\right) \tag{1}
+$$
+
+$$
+{L}_{R1} = \frac{1}{m}\mathop{\sum }\limits_{{j = 1}}^{m}\mathop{\min }\limits_{{\overline{\mathcal{G}} \in D}}{\begin{Vmatrix}{p}_{j} - h\left( \overline{\mathcal{G}}\right) \end{Vmatrix}}^{2} \tag{2}
+$$
+
+$$
+{L}_{R2} = \frac{1}{\left| D\right| }\mathop{\sum }\limits_{{\overline{\mathcal{G}} \in D}}\mathop{\min }\limits_{{j \in \left\lbrack {1,m}\right\rbrack }}{\begin{Vmatrix}{p}_{j} - h\left( \overline{\mathcal{G}}\right) \end{Vmatrix}}^{2} \tag{3}
+$$
+
+where $p$ and $\gamma$ represent respectively the probability for positive class prediction and the focusing parameter which controls how much to penalize hard examples.
+
+§ 3 EXPERIMENTS
+
+We tested our proposed approach on two datasets, namely:
+
+BAMultiShapes: BAMultiShapes is a newly introduced extension of some popular synthetic benchmarks [1] aimed to assess the ability of a Global Explainer to deal with logical combinations of concepts. In particular, we created a dataset composed of Barabási-Albert (BA) graphs with attached in random positions the following network motifs: house, grid, wheel. Class 0 contains plain BA graphs and BA graphs enriched with a house, a grid, a wheel, or the three motifs together. Class 1 contains BA graphs enriched with a house and a grid, a house and a wheel or a wheel and a grid.
+
+Mutagenicity: The Mutagenicity dataset is a collection of molecule graphs where each graph is labelled as either having a mutagenic effect or not. Based on [23], the mutagenicity of a molecule is correlated with the presence of electron-attracting elements conjugated with nitro groups (e.g. NO2).
+
+For Mutagenicity we replicated the model accuracy and the local explanations presented in [2], while for BAMultiShapes we trained until convergence a 3-layers GCN. Details about the implementation and the pre-processing of local explanations, along with model accuracies, are in the Appendix.
+
+Table 1: Mean and standard deviation for Fidelity, Formula Accuracy and Concept Purity computed on the Test set over 5 runs with different random seeds. The Formula Accuracy is referred to the formulas presented in Figure 2. Since the Concept Purity is computed for every cluster independently, here we report mean and standard deviation for the best run only.
+
+Dataset | Fidelity | Formula Accuracy | Concept Purity
+
+BAMultiShapes | ${0.99} \pm {0.00}$ | ${0.99} \pm {0.00}$ | ${0.85} \pm {0.22}$
+
+Mutagenicity | ${0.85} \pm {0.01}$ | ${0.85} \pm {0.01}$ | ${0.99} \pm {0.01}$
+
+In order to show the robustness of our proposed methodology, we have evaluated GLGExplainer on a number of metrics, namely: $i$ ) FIDELITY, which measures the accuracy between the prediction of the E-LEN and the one of the GNN to explain; ii) FORMULA ACCURACY, which represents how well the learned formulas can correctly predict the class labels; iii) CONCEPT PURITY, which is computed for every cluster independently and measures how good the embedding is at clustering the local explanations. Table 1 reports the results in terms of the three metrics, showing how GLGExplainer manages to provide reliable explanations under all these perspectives. Note that XGNN [7], the only available competitor for global explanations of GNN, cannot be evaluated according to these metrics. Figure 2 presents the final global explanations where we substituted each literal with its corresponding prototypical graphical concept, and report the explanations generated by XGNN for comparison. It's easy to see that GLGExplainer produces highly interpretable explanations that match the ground-truth formula (for BAMultiShapes) and existing knowledge (for Mutagenicity) with remarkable accuracy. It is worth mentioning that the global explanations for Class 0 of BAMultiShapes do not comprise the case with all three motifs together. We observed that the reason resides in the GNN to explain failing at classifying every sample with such structure. So, GLGExplainer is effectively explaining the GNN $f$ and not simply the dataset structure. Conversely, XGNN fails to generate interpretable explanations in most cases. Details about concepts compositions and formula extraction are available in the Appendix.
+
+ < g r a p h i c s >
+
+Figure 2: Global explanations of GLGExplainer (ours) and XGNN. For Class 0 of BAMultiShapes, XGNN was not able to generate a graph with confidence $\geq {0.5}$
+
+§ 4 DISCUSSION & CONCLUSIONS
+
+Given the results presented in the section above, it is worth noting that concept clusters emerge solely based on the supervision defined in Section 2, while no specific supervision was added to cluster local explanations based on their similarity. Further details about the clusters' composition are available in the Appendix. Overall, the results confirm the ability of GLGExplainer in providing logic formulas, expressed over learned graphical concepts, which are accurately summarizing the global behaviour of the model, whereas the existing XGNN fails at providing concise and faithful explanations.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dF6aEW3_62O/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/dF6aEW3_62O/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb8eaf3266e34fdc11056fb90619fc236c77117f
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dF6aEW3_62O/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,430 @@
+# You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained Graph Tickets
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of the fully trained dense networks at initialization, without any optimization of the weights of the network (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) still remains mysterious. In this paper we carry out the first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we can find untrained sparse subnetworks at initialization that can match the performance of fully trained dense GNNs. Besides this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, hence becoming a powerful tool to enable deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely-used GNN architectures on various popular datasets including the Open Graph Benchmark (OGB). Our source codes are submitted in the Supplementary.
+
+## 1 Introduction
+
+Graph Neural Networks (GNNs) [1, 2] have shown the power to learn representations from graph-structured data. Over the past decade, GNNs and their variants such as Graph Convolutional Networks (GCN) [3], Graph Isomorphism Networks (GIN) [4], Graph Attention Networks (GAT) [5] have been successfully applied to a wide range of scenarios, e.g., social analysis [6, 7], protein feature learning [8], traffic prediction [9], and recommendation systems [10]. In parallel, works on untrained networks [11, 12] surprisingly discover the presence of untrained subnetworks in CNNs that can already match the accuracy of their fully trained dense CNNs with their initial weights, without any weight update. In this paper, we attempt to explore discovering untrained sparse networks in GNNs by asking the following question:
+
+Is it possible to find a well-performing graph neural (sub-) network without any training of the model weights?
+
+Positive answers to this question will have significant impacts on the research field of GNNs. ① If the answer is yes, it will shed light on a new direction of obtaining performant GNNs, e.g., traditional training might not be indispensable towards performant GNNs. ② The existence of such performant subnetworks will extend the recently proposed untrained subnetwork techniques [11, 12] to GNNs. Prior works [11-13] successfully find that randomly weighted full networks contain untrained subnetworks which perform well without ever modifying the weights, in convolutional neural networks (CNNs). However, a similar study has never been conducted for GNNs. While CNNs reasonably contain well-performing untrained subnetworks due to heavy over-parameterization, GNN models are usually much more compact, and it is unclear whether a performant subnetwork "should" still exist in GNNs. Furthermore, we investigate the connection between untrained sparse networks and widely-known barriers in deep GNNs, such as over-smoothing. For instance, as analyzed in [14], by naively stacking many layers and adding non-linearity, the output features are prone to collapsing and becoming indistinguishable. Such undesirable properties significantly limit the power of deeper/wider GNNs, hindering the potential application of GNNs on large-scale graph datasets such as the latest Open Graph Benchmark (OGB) [15]. It is interesting to see what would happen for untrained graph neural networks. Note that the goal of sparsity in our paper is not efficiency, but obtaining nontrivial predictive performance without training (a.k.a., "masking is training" [11]). We summarize our contributions as follows:
+
+
+
+Figure 1: Performance of untrained graph subnetworks (UGTs (ours) and Edge-Popup [12]) and the corresponding trained dense GNNs. We demonstrate that as the model size increases, UGTs is able to find an untrained subnetwork with its random initializations, that can match the performance of the corresponding fully-trained dense GNNs. The x-axis denotes the corresponding model size for each point, e.g. "64-2" represents a model with 2 layers and 64 widths.
+
+- We demonstrate for the first time that there exist untrained graph subnetworks with matching performance (i.e., performing as well as the trained full networks) within randomly initialized dense networks and without any model weight training. Distinct from the popular LTH [16, 17], neither the original dense networks nor the identified subnetworks need to be trained.
+
+- We find that the gradual sparsification technique $\left\lbrack {{18},{19}}\right\rbrack$ can be a stronger performance booster. Leveraging its global sparse variant [20], we propose our method - UGTs, which discovers matching untrained subnetworks within the dense GNNs at extremely high sparsities. For example, our method discovers untrained matching subnetworks with up to 99% sparsity. We validate it across various GNN architectures (GCN, GIN, GAT) on eight datasets, including the large-scale OGBN-ArXiv and OGBN-Products.
+
+- We empirically show a surprising observation that our method significantly mitigates the over-smoothing problem without any additional tricks and can successfully scale GNNs up with negligible performance loss. Additionally, we show that UGTs also enjoys favorable performance on OOD detection and robustness on different types of perturbations.
+
+## 2 Related Work
+
+Graph Neural Networks. Graph neural networks are a powerful deep learning approach for graph-structured data. Since being proposed in [1], many variants of GNNs have been developed, e.g., GAT [5], GCN [3], GIN [4], GraphSage [21], SGC [22], and GAE [23]. More and more recent works point out that deeper GNN architectures potentially provide benefits for practical graph structures, e.g., molecules [8], point clouds [24], and meshes [25], as well as the large-scale graph dataset OGB. However, training deep GNNs is a well-known challenge due to various difficulties such as gradient vanishing and over-smoothing problems $\left\lbrack {{14},{26}}\right\rbrack$ . The existing approaches to address the above-mentioned problems can be categorized into three groups: (1) skip connections, e.g., Jumping connections [27, 28], Residual connections [24], and Initial connections [29]; (2) graph normalization, e.g., PairNorm [26], NodeNorm [30]; (3) random dropping, including DropNode [31] and DropEdge [32].
+
+Untrained Subnetworks. Untrained subnetworks refer to the hypothesis that there exists a subnetwork in a randomly initialized neural network that can achieve almost the same accuracy as a fully trained neural network without weight updates. [11] and [12] first demonstrate that randomly initialized CNNs contain subnetworks that achieve impressive performance without updating weights at all. [13] enhanced the performance of untrained subnetworks by iteratively reinitializing the weights that have been pruned. Besides the image classification task, some works also explore the power of untrained subnetworks in other domains, such as multi-task learning [33] and adversarial robustness [34].
+
+Instead of proposing well-versed techniques to enable deep GNNs training, we explore the possibility of finding well-performing deeper graph subnetworks at initialization in the hope of avoiding the difficulties of building deep GNNs without model weight training.
+
+## 3 Untrained Graph Tickets
+
+### 3.1 Preliminaries and Setups
+
+Notations. We represent matrices by bold uppercase characters, e.g. $\mathbf{X}$ , vectors by bold lowercase characters, e.g. $\mathbf{x}$ , and scalars by normal lowercase characters, e.g. $x$ . We denote the ${i}^{th}$ row of a matrix $\mathbf{A}$ by $\mathbf{A}\left\lbrack {i, : }\right\rbrack$ , and the ${\left( i, j\right) }^{\text{th }}$ element of matrix $\mathbf{A}$ by $\mathbf{A}\left\lbrack {i, j}\right\rbrack$ . We consider a graph $\mathcal{G} = \{ \mathcal{V},\mathcal{E}\}$ where $\mathcal{E}$ is a set of edges and $\mathcal{V}$ is a set of nodes. Let $g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta }}\right)$ be a graph neural network, where $\mathbf{A} \in \{ 0,1{\} }^{\left| V\right| \times \left| V\right| }$ is the adjacency matrix describing the overall graph topology and $\mathbf{X}$ denotes the nodal features; $\mathbf{A}\left\lbrack {i, j}\right\rbrack = 1$ denotes an edge between node ${v}_{i}$ and node ${v}_{j}$ . Let $f\left( {\mathbf{X};\mathbf{\theta }}\right)$ be a neural network.
+
+Graph Neural Networks. GNNs denote a family of algorithms that extract structural information from graphs [35] and consist of Aggregate and Combine operations. Usually, Aggregate is a function that aggregates messages from a node's neighbor nodes, and Combine is an update function that updates the representation of the current node. Formally, given the graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{X}}\right)$ with node set $\mathcal{V}$ and edge set $\mathcal{E}$ , the $l$ -th layer of a GNN is represented as follows:
+
+$$
+{\mathbf{a}}_{v}^{l} = {\operatorname{Aggregate}}^{l}\left( \left\{ {{\mathbf{h}}_{u}^{l - 1} : \forall u \in \mathcal{N}\left( v\right) }\right\} \right) \tag{1}
+$$
+
+$$
+{\mathbf{h}}_{v}^{l} = {\operatorname{Combine}}^{l}\left( {{\mathbf{h}}_{v}^{l - 1},{\mathbf{a}}_{v}^{l}}\right) \tag{2}
+$$
+
+where ${\mathbf{a}}_{v}^{l}$ is the aggregated representation of the neighborhood of node $v$ , $\mathcal{N}\left( v\right)$ denotes the set of neighbor nodes of $v$ , and ${\mathbf{h}}_{v}^{l}$ is the node representation at the $l$ -th layer. After propagating through $L$ layers, we obtain the final node representations ${\mathbf{h}}_{v}^{L}$ , which can be applied to downstream node-level tasks such as node classification and link prediction.
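+
+As a toy illustration of Eqs. (1)-(2) (not one of the GNN architectures used in this paper), a single layer with mean aggregation and a linear Combine could be sketched in PyTorch as follows:
+
+```python
+import torch
+import torch.nn as nn
+
+class ToyGNNLayer(nn.Module):
+    # Mean Aggregate over neighbours (Eq. 1) + linear Combine (Eq. 2).
+    def __init__(self, in_dim: int, out_dim: int):
+        super().__init__()
+        self.combine = nn.Linear(2 * in_dim, out_dim)
+
+    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
+        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
+        a = adj @ h / deg                                  # aggregated neighbour features
+        return torch.relu(self.combine(torch.cat([h, a], dim=1)))
+
+h = torch.randn(4, 8)                                      # 4 nodes with 8 features
+adj = torch.tensor([[0., 1., 1., 0.],
+                    [1., 0., 0., 1.],
+                    [1., 0., 0., 1.],
+                    [0., 1., 1., 0.]])
+out = ToyGNNLayer(8, 16)(h, adj)                           # (4, 16) node representations
+```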
+
+Untrained Subnetworks. Following the prior work [11], [12] proposed Edge-Popup, which enables finding untrained subnetworks hidden in a randomly initialized full network $f\left( \mathbf{\theta }\right)$ by solving the following discrete optimization problem:
+
+$$
+\mathop{\min }\limits_{{\mathbf{m} \in \{ 0,1{\} }^{\left| \mathbf{\theta }\right| }}}\mathcal{L}\left( {f\left( {\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) ,\mathbf{y}}\right) \tag{3}
+$$
+
+where $\mathcal{L}$ is task-dependent loss function; $\odot$ represents an element-wise multiplication; $\mathbf{y}$ is the label for the input $\mathbf{X}$ and $\mathbf{m}$ is the binary mask that controls the sparsity level $s$ .
+
+Different from the traditional training of deep neural networks, here the network weights are never updated; masks $\mathbf{m}$ are instead generated to search for the optimal untrained subnetwork. In practice, each mask ${\mathbf{m}}_{i}$ has a latent score variable ${\mathbf{S}}_{i} \in \mathcal{R}$ that represents the importance of the corresponding weight ${\mathbf{\theta }}_{i}$ . During training, in the forward pass the binary mask $\mathbf{m}$ is generated by setting the top- $s$ largest elements of $\mathbf{S}$ to 1 and the rest to 0. In the backward pass, all the values in $\mathbf{S}$ are updated with straight-through estimation [36]. At the end of training, an untrained subnetwork is identified by the mask $\mathbf{m}$ generated from the converged scores $\mathbf{S}$ .
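+
+The following PyTorch sketch illustrates this score-based masking for a single linear layer; it is a simplified, illustrative variant (the layer name and the straight-through formulation are our assumptions), not the Edge-Popup reference code:
+
+```python
+import torch
+import torch.nn as nn
+
+class MaskedLinear(nn.Module):
+    # Weights stay at their random initialization; only the scores S are trained.
+    def __init__(self, in_dim: int, out_dim: int, sparsity: float):
+        super().__init__()
+        self.weight = nn.Parameter(torch.randn(out_dim, in_dim), requires_grad=False)
+        self.scores = nn.Parameter(torch.rand(out_dim, in_dim))
+        self.sparsity = sparsity
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        k = max(1, int((1 - self.sparsity) * self.scores.numel()))   # number of weights kept
+        threshold = self.scores.flatten().topk(k).values.min()
+        hard_mask = (self.scores >= threshold).float()
+        # Straight-through: hard mask in the forward pass, gradients reach the scores.
+        mask = hard_mask + self.scores - self.scores.detach()
+        return x @ (self.weight * mask).t()
+
+layer = MaskedLinear(16, 8, sparsity=0.5)
+y = layer(torch.randn(4, 16))          # only the score parameters receive gradients
+```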
+
+### 3.2 Untrained Graph Tickets - UGTs
+
+In this section, we adopt the untrained subnetwork techniques to GNNs and introduce our new approach - Untrained Graph Tickets (UGTs). We share the pseudocode of UGTs in the Appendix C.
+
+Formally, given a graph neural network $g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta }}\right)$ , where $\mathbf{A}$ and $\mathbf{X}$ are adjacency matrix and nodal features respectively. The optimization problem of finding an untrained subnetwork in GNNs can be therefore described as follows:
+
+$$
+\mathop{\min }\limits_{{\mathbf{m} \in \{ 0,1{\} }^{\left| \mathbf{\theta }\right| }}}\mathcal{L}\left( {g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) ,\mathbf{y}}\right) \tag{4}
+$$
+
+Although Edge-Popup [12] can find untrained subnetworks with proper predictive accuracy, its performance is still far from satisfactory. For instance, Edge-Popup can only obtain matching subnetworks at a relatively low sparsity, i.e., 50%.
+
+
+
+Figure 2: The performance of GNNs with increasing model depths. Experiments are conducted on various GNNs with Cora, Citeseer, Pubmed and OGBN-Arxiv. We observe that as the model goes deeper, fully-trained dense GNNs suffer from a sharp accuracy drop, while UGTs preserves the high accuracy. All the results reported are averaged from 5 runs.
+
+We highlight two limitations of the existing prior research. First of all, prior works [12, 13] initially set the sparsity level of $\mathbf{m}$ as $s$ and maintain it throughout the optimization process. This is very appealing for the scenarios of sparse training [37-39] that chase a better trade-off between performance and efficiency, since the fixed sparsity usually translates to fewer floating-point operations (FLOPs). This scheme, however, is not necessary and perhaps harmful to finding the smallest possible untrained subnetwork that still performs well. In particular, as shown in [20], a larger search space for sparse neural networks at the early optimization phase leads to better sparse solutions. The second limitation is that the existing methods sparsify networks layer-wise with a uniform sparsity ratio, which typically leads to inferior performance compared with non-uniform layer-wise sparsity $\left\lbrack {{20},{39},{40}}\right\rbrack$ , especially for deep architectures $\left\lbrack {41}\right\rbrack$ .
+
+Table 1: Test accuracy (%) of different training techniques. The experiments are based on GCN models with 16 and 32 layers. Width is set to 448. See Appendix B.6 for the GAT architecture. The results of the other methods are obtained from [42].
+
+| Method | Cora (16 layers) | Cora (32 layers) | Citeseer (16 layers) | Citeseer (32 layers) | Pubmed (16 layers) | Pubmed (32 layers) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Trained Dense GCN | 21.4 | 21.2 | 19.5 | 20.2 | 39.1 | 38.7 |
+| +Residual | 20.1 | 19.6 | 20.8 | 20.90 | 38.8 | 38.7 |
+| +Jumping | 76.0 | 75.5 | 58.3 | 55.0 | 75.6 | 75.3 |
+| +NodeNorm | 21.5 | 21.4 | 18.8 | 19.1 | 18.9 | 18 |
+| +PairNorm | 55.7 | 17.7 | 27.4 | 20.6 | 71.3 | 61.5 |
+| +DropNode | 27.6 | 27.6 | 21.8 | 22.1 | 40.3 | 40.3 |
+| +DropEdge | 28.0 | 27.8 | 22.9 | 22.9 | 40.6 | 40.5 |
+| UGTs-GCN | ${77.3} \pm {0.9}$ | ${77.5} \pm {0.8}$ | ${61.1} \pm {0.9}$ | ${56.2} \pm {0.4}$ | $\mathbf{{77.6} \pm {0.9}}$ | ${76.3} \pm {1.2}$ |
+
+Untrained Graph Tickets (UGTs). Leveraging the above-mentioned insights, we propose a new approach UGTs here which can discover matching untrained subnetworks with extremely high sparsity levels, i.e., up to 99%. Instead of keeping the sparsity of $\mathbf{m}$ fixed throughout the sparsification process, we start from an untrained dense GNNs and gradually increase the sparsity to the target sparsity during the whole sparsification process. We adjust the original gradual sparsification schedule $\left\lbrack {{18},{19}}\right\rbrack$ to the linear decay schedule, since no big performance difference can be observed. The sparsity level ${s}_{t}$ of each adjusting step $t$ is calculated as follows:
+
+$$
+{s}_{t} = {s}_{f} + \left( {{s}_{i} - {s}_{f}}\right) \left( {1 - \frac{t - {t}_{0}}{n\Delta t}}\right) \tag{5}
+$$
+
+$$
+t \in \left\{ {{t}_{0},{t}_{0} + {\Delta t},\ldots ,{t}_{0} + {n\Delta t}}\right\}
+$$
+
+where ${s}_{f}$ and ${s}_{i}$ refer to the final sparsity and initial sparsity, respectively; ${t}_{0}$ is the starting point of sparsification; ${\Delta t}$ is the time between two adjusting steps; $n$ is the total number of adjusting steps. We set ${\Delta t}$ as one epoch of mask optimization in this paper.
+
+To obtain a good non-uniform layer-wise sparsity ratio, we remove the weights with the smallest score values ($\mathbf{S}$) across layers at each adjusting step. We do this because [20] showed that the layer-wise sparsity obtained by this scheme outperforms the other well-studied sparsity ratios [19, 37, 39]. More importantly, removing weights across layers theoretically has a larger search space than solely considering one layer. The former can be more appealing as the GNN architecture goes deeper.
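+
+A compact sketch of the linear sparsity schedule of Eq. (5) and of the global (cross-layer) score-based selection, with illustrative function names, is shown below:
+
+```python
+import torch
+
+def sparsity_at_step(t: int, t0: int, n: int, dt: int, s_i: float = 0.0, s_f: float = 0.99) -> float:
+    # Linear schedule of Eq. (5): sparsity grows from s_i to s_f over n steps of size dt.
+    frac = min(max((t - t0) / (n * dt), 0.0), 1.0)
+    return s_f + (s_i - s_f) * (1.0 - frac)
+
+def global_topk_masks(scores, sparsity):
+    # Keep the (1 - sparsity) fraction of weights with the largest scores,
+    # ranked globally across all layers (non-uniform layer-wise sparsity).
+    flat = torch.cat([s.flatten() for s in scores])
+    k = max(1, int((1 - sparsity) * flat.numel()))
+    threshold = flat.topk(k).values.min()
+    return [(s >= threshold).float() for s in scores]
+
+print(sparsity_at_step(t=50, t0=0, n=100, dt=1))      # ~0.495, halfway towards s_f = 0.99
+masks = global_topk_masks([torch.rand(8, 8), torch.rand(4, 8)], sparsity=0.9)
+```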
+
+## 4 Experimental Results
+
+In this section, we conduct extensive experiments among multiple GNN architectures and datasets to evaluate UGTs. We summarize the experimental setups here.
+
+Table 2: Graph datasets statistics.
+
+| DataSets | #Graphs | #Nodes | #Edges | #Classes | #Features | Metric |
+| --- | --- | --- | --- | --- | --- | --- |
+| Cora | 1 | 2708 | 5429 | 7 | 1433 | Accuracy |
+| Citeseer | 1 | 3327 | 4732 | 6 | 3703 | Accuracy |
+| Pubmed | 1 | 19717 | 44338 | 3 | 3288 | Accuracy |
+| OGBN-Arxiv | 1 | 169343 | 1166243 | 40 | 128 | Accuracy |
+| Texas | 1 | 183 | 309 | 5 | 1703 | Accuracy |
+| OGBN-Products | 1 | 24449029 | 61859140 | 47 | 100 | Accuracy |
+| OGBG-molhiv | 41127 | 25.5 (average) | 27.5 (average) | 2 | - | ROC-AUC |
+| OGBG-molbace | 1513 | 34.1 (average) | 36.9 (average) | 2 | - | ROC-AUC |
+
+GNN Architectures. We use the three most widely used GNN architectures: GCN, GIN, and GAT ${}^{1}$ in our paper.
+
+Datasets. We choose three popular small-scale graph datasets including Cora, Citeseer, PubMed [3] and one latest large-scale graph dataset OGBN-Arxiv [15] for our main experiments. To draw a solid conclusion, we also evaluate our method on other datasets including OGBN-Products [15], TEXAS [43], OGBG-molhiv [15] and OGBG-molbace [15, 44]. More detailed information can be found in Table 2.
+
+### 4.1 The Existence of Matching Subnetworks
+
+Figure 1 shows the effectiveness of UGTs with different GNNs, including GCN, GIN and GAT, on the four datasets. We can observe that as the model size increases, UGTs can find untrained subnetworks that match the fully-trained dense GNNs. This observation is perfectly in line with the previous findings $\left\lbrack {{12},{13}}\right\rbrack$ , which reveal that model size plays a crucial role in the existence of matching untrained subnetworks. Besides, it can be observed that the proposed UGTs consistently outperforms Edge-Popup across different settings.
+
+### 4.2 Over-smoothing Analysis
+
+Deep architecture has been shown as a key factor that improves the model capability in computer vision [45]. However, it becomes less appealing in GNNs mainly because the node interaction through
+
+---
+
+${}^{1}$ All experiments based on GAT architecture are conducted with heads $= 1$ in this study.
+
+---
+
+
+
+Figure 3: TSNE visualization of node representations learned by densely trained GCN and UGTs. Ten classes are randomly sampled from OGBN-Arxiv for visualization. Model depth is set as 16 and 32 respectively; width is set as 448. See Appendix B. 1 for GAT architecture.
+
+the message-passing mechanism (i.e., aggregation operator) would make node representations less distinguishable $\left\lbrack {{26},{46}}\right\rbrack$ , leading to a drastic drop of task performance. This phenomenon is well known as the over-smoothing problem [14, 42]. In this paper, we show a surprising result that UGTs can effectively mitigate over-smoothing in deep GNNs. We conduct extensive experiments to evaluate this claim in this section.
+
+UGTs preserves the high accuracy as GNNs go deeper. In Figure 2, we vary the model depth of various architectures and report the test accuracy. All the experiments are conducted with architectures containing 448 widths except for GAT on OGBN-Arxiv, in which we choose 256 widths for GAT with $2 \sim {10}$ layers and 128 widths for GAT with ${11} \sim {20}$ layers, due to the memory limitation.
+
+As we can see, the performance of trained dense GNNs suffers from a sharp performance drop when the model goes deeper, whereas UGTs impressively preserves the high accuracy across models. Especially at the mild sparsity, i.e., 0.1, UGTs almost has no deterioration with the increased number of layers. Edge-Popup achieves a comparable performance with UGTs on the GIN architecture. However, such comparable performance does not exist in GAT and GCN. A plausible explanation is that the global sparsification schedule used in UGTs enjoys a larger search space than the uniform sparse scheme used in Edge-Popup, leading to a better sparse connectivity that is more suitable for deep GNNs.
+
+UGTs achieves competitive performance with well-established training techniques. To further validate the effectiveness of UGTs in mitigating over-smoothing, we compare UGTs with six state-of-the-art techniques for the over-smoothing problem, including Residual connections, Jumping connections, NodeNorm, PairNorm, DropEdge, and DropNode. We follow the experimental setting in [42] and conduct experiments on Cora/Citeseer/Pubmed with GCN containing 16 and 32 layers. Model width is set to 448 for GCN on Cora/Citeseer/Pubmed. The results of the other methods are obtained from [42] ${}^{2}$ .
+
+Table 1 shows that UGTs consistently outperforms all these advanced techniques on Cora, Citeseer, and Pubmed. For instance, with 32 layers, UGTs outperforms the best-performing technique (+Jumping) by 2.0%, 1.2%, and 1.0% on Cora, Citeseer, and Pubmed, respectively. These results again verify our hypothesis that training bottlenecks of deep GNNs (e.g., over-smoothing) can be avoided or mitigated by finding untrained subnetworks without training the weights at all.
+
+Mean Average Distance (MAD). To further evaluate whether the good performance of UGTs can be attributed to the mitigation of over-smoothing, we visualize the smoothness of the node representations learned by UGTs and the trained dense GNNs, respectively. Following [46], we calculate the MAD among node representations for each layer during the process of sparsification. Concretely, MAD [46] is a quantitative metric for the smoothness of node representations: the smaller the MAD, the smoother (i.e., less distinguishable) the node representations. Results are reported in Figure 4. It can be observed that the node representations learned by UGTs maintain a large distance throughout the optimization process, indicating relief from over-smoothing. On the contrary, the densely trained GCN suffers from severely indistinguishable node representations.
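+
+The metric can be sketched in a few lines of NumPy. This is an illustrative implementation based on our reading of [46]; the function name `mean_average_distance`, the default all-pairs mask, and the per-node averaging are assumptions rather than the reference implementation.
+
+```python
+import numpy as np
+
+def mean_average_distance(H, mask=None, eps=1e-12):
+    """MAD sketch: average cosine distance among (masked) pairs of node
+    representations H (|V| x d). Smaller MAD means smoother representations."""
+    H_hat = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)
+    D = 1.0 - H_hat @ H_hat.T                 # pairwise cosine distance matrix
+    if mask is None:
+        mask = 1.0 - np.eye(H.shape[0])       # all node pairs except self-pairs
+    row_sum = (D * mask).sum(axis=1)
+    row_cnt = np.maximum(mask.sum(axis=1), eps)
+    return float((row_sum / row_cnt).mean())  # average over nodes
+```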
+
+---
+
+${}^{2}$ https://github.com/VITA-Group/Deep_GCN_Benchmarking.git
+
+---
+
+
+
+Figure 4: Mean Average Distance among node representations of each GNN layer. Experiments are conducted on Cora with GCN containing 32 layers and 448 widths.
+
+
+
+Figure 5: The accuracy of GNNs w.r.t varying sparsities. Experiments are conducted on various GNNs with 2 layers and 256 widths for Cora, Citeseer and Pubmed, 4 layers and 386 widths for OGBN-Arxiv.
+
+TSNE Visualizations. Additionally, we visualize the node representations learned by UGTs and by the trained dense GNNs with 16 and 32 layers, respectively, on both the GCN and GAT architectures. Due to limited space, we show the results for GCN in Figure 3 and put the visualization for GAT in Appendix B.1. We can see that the node representations learned by the trained dense GCN are over-mixed in all scenarios and, in the deeper models (i.e., 32 layers), appear even less distinguishable. Meanwhile, the projection of the node representations learned by UGTs remains clearly distinguishable, again providing empirical evidence that UGTs mitigates the over-smoothing problem.
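+
+For reference, the visualization step can be reproduced with a short script along the following lines; the plotting parameters and the `plot_tsne` helper are assumptions for illustration, not the exact settings used for Figure 3.
+
+```python
+import matplotlib.pyplot as plt
+from sklearn.manifold import TSNE
+
+def plot_tsne(H, labels, title):
+    """Project node representations H (|V| x d) to 2-D with t-SNE and
+    color the points by class label."""
+    Z = TSNE(n_components=2, init="pca", random_state=0).fit_transform(H)
+    plt.scatter(Z[:, 0], Z[:, 1], c=labels, s=5, cmap="tab10")
+    plt.title(title)
+    plt.show()
+```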
+
+### 4.3 The Effect of Sparsity on UGTs
+
+To better understand the effect of sparsity on the performance of UGTs, we provide a comprehensive study in Figure 5, which reports the performance of UGTs with respect to different sparsity levels on different architectures. We summarize our observations below.
+
+① UGTs consistently finds matching untrained graph subnetworks at a large range of sparsities, including the extreme ones. A matching untrained graph subnetwork can be identified with sparsities from 0.1 even up to 0.99 on small-scale datasets such as Cora, Citeseer and Pubmed. For large-scale OGBN-Arxiv, it is more difficult to find matching untrained subnetworks. Matching subnetworks are mainly located within sparsities of ${0.3} \sim {0.6}$ .
+
+② What's more, UGTs consistently outperforms Edge-Popup. UGTs shows better performance than Edge-Popup at high sparsities across different architectures on Cora, Citeseer, Pubmed, and OGBN-Arxiv. Surprisingly, when increasing the sparsity from 0.7 to 0.99, UGTs maintains a very high accuracy, whereas the accuracy of Edge-Popup degrades notably. This is in accord with our expectation, since UGTs selects important weights globally by searching for a well-performing sparse topology across layers.
+
+### 4.4 Broader Evaluation of UGTs
+
+
+
+Figure 6: Out-of-distribution performance (ROC-AUC). Experiments are conducted with GCN (Width: 256, Depth: 2).
+
+
+
+Figure 7: The robust performance on feature perturbations with the fraction of perturbed nodes varying from 0% to 40%. Experiments are conducted with GCN and GAT (Width: 256, Depth: 2).
+
+In this section, we systematically study the performance of UGTs on out-of-distribution (OOD) detection and on robustness against input perturbations, including feature and edge perturbations. Following [47], we create OOD samples by selecting all samples from 40% of the classes and removing them from the training set, feature perturbations by adding noise drawn from a Bernoulli distribution, and edge perturbations by moving an edge's end point at random. The results of the OOD experiments are reported in Figure 6 and Figure 10 (shown in Appendix B.2). The results of the robustness experiments are reported in Figure 8 and Figure 7. We summarize our observations as follows:
+
+① UGTs enjoys matching performance on OOD detection. Figure 6 and Figure 10 show that untrained graph subnetworks discovered by UGTs achieve matching performance on OOD detection compared with the trained dense GNNs in most cases. Besides, UGTs consistently outperforms Edge-Popup method at a large range of sparsities on OOD detection.
+
+② UGTs produces highly sparse yet robust subnetworks on input perturbations. Figure 7 and Figure 8 demonstrate that UGTs with high sparsity level (Sparsity=0.9) achieves more robust results than the trained dense GNNs on both feature and edge perturbations with perturbation percentage ranging from 0 to 40%. Again, UGTs consistently outperforms Edge-Popup with both perturbation types.
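+
+To make the perturbation protocol above concrete, the following NumPy sketch illustrates one possible reading of it; the selection of perturbed nodes, the Bernoulli parameter `p`, and rewiring only the second end point of each chosen edge are assumptions, not the exact procedure of [47].
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def perturb_features(X, frac, p=0.5):
+    """Add Bernoulli noise to the features of a random fraction of nodes."""
+    X = X.copy()
+    idx = rng.choice(X.shape[0], size=int(frac * X.shape[0]), replace=False)
+    X[idx] = X[idx] + rng.binomial(1, p, size=X[idx].shape).astype(X.dtype)
+    return X
+
+def perturb_edges(edge_index, num_nodes, frac):
+    """Move one end point of a random fraction of edges to a random node."""
+    edge_index = edge_index.copy()   # shape (2, |E|)
+    idx = rng.choice(edge_index.shape[1], size=int(frac * edge_index.shape[1]), replace=False)
+    edge_index[1, idx] = rng.integers(0, num_nodes, size=idx.size)
+    return edge_index
+```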
+
+
+
+Figure 8: The robust performance on edge perturbations with the fraction of perturbed edges varying from 0% to 40%. Experiments are conducted with GCN and GAT (Width: 256, Depth: 2).
+
+### 4.5 Experiments on Graph-level Task and Other Datasets
+
+To draw a solid conclusion, we further conduct extensive experiments on graph-level tasks with OGBG-molhiv and OGBG-molbace, and on node-level tasks with TEXAS and OGBN-Products. The experiments are based on a GCN model with width 448 and depth 3. Table 3 consistently verifies that a matching untrained subnetwork can be identified in GNNs across multiple tasks and datasets.
+
+Table 3: Experiments on graph-level tasks and other datasets. A GCN model with width 448 and depth 3 is adopted for these experiments.
+
+
+
+## 5 Conclusion
+
+In this work, we for the first time confirm the existence of matching untrained subnetworks across a large range of sparsities. UGTs consistently outperforms the previous untrained-subnetwork technique, Edge-Popup, on multiple graph datasets across various GNN architectures. What's more, we show the surprising result that searching for an untrained subnetwork within a randomly weighted dense GNN, instead of directly training the latter, can significantly mitigate the over-smoothing problem of deep GNNs. Across popular datasets, e.g., Cora, Citeseer, Pubmed, and OGBN-Arxiv, our method UGTs achieves comparable or better performance than various well-studied techniques that are specifically designed for over-smoothing. Moreover, we empirically find that UGTs also achieves appealing performance on other desirable aspects, such as out-of-distribution detection and robustness. The strong results of our paper point out a surprising but perhaps worth-a-try direction for obtaining high-performing GNNs, i.e., finding the untrained graph tickets located within a randomly weighted dense GNN instead of training it.
+
+## References
+
+[1] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE transactions on neural networks, 20(1):61-80, 2008. 1, 2
+
+[2] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations, 2016. 1
+
+[3] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations, 2017. 1, 2, 5, 13
+
+[4] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ryGs6iA5Km. 1, 2
+
+[5] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017. 1, 2
+
+[6] Jiezhong Qiu, Jian Tang, Hao Ma, Yuxiao Dong, Kuansan Wang, and Jie Tang. Deepinf: Social influence prediction with deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2110-2119, 2018. 1
+
+[7] Jia Li, Zhichao Han, Hong Cheng, Jiao Su, Pengyun Wang, Jianfeng Zhang, and Lujia Pan. Predicting path failure in time-evolving graphs. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1279-1289, 2019. 1
+
+[8] Marinka Zitnik and Jure Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190-i198, 2017. 1, 2
+
+[9] Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 922-929, 2019. 1
+
+[10] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 974-983, 2018. 1
+
+[11] Hattie Zhou, Janice Lan, Rosanne Liu, and Jason Yosinski. Deconstructing lottery tickets: Zeros, signs, and the supermask. arXiv preprint arXiv:1905.01067, 2019. 1, 2, 3
+
+[12] Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, and Mohammad Rastegari. What's hidden in a randomly weighted neural network? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11893-11902, 2020. 1,2,3,4,5
+
+[13] Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, and Tomohiro Inoue. Pruning randomly initialized neural networks with iterative randomization. arXiv preprint arXiv:2106.09269, 2021. 1, 3, 4, 5
+
+[14] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI conference on artificial intelligence, 2018. 2,6
+
+[15] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020. 2, 5, 13
+
+[16] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018. 2
+
+[17] Tianlong Chen, Yongduo Sui, Xuxi Chen, Aston Zhang, and Zhangyang Wang. A unified lottery ticket hypothesis for graph neural networks. In International Conference on Machine Learning, pages 1695-1706. PMLR, 2021. 2
+
+[18] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017. 2, 5
+
+[19] Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574, 2019. 2, 5
+
+[20] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, and Decebal Constantin Mocanu. Sparse training via boosting pruning plasticity with neuroregeneration. Advances in Neural Information Processing Systems., 2021. 2, 4, 5
+
+[21] William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1025-1035, 2017. 2
+
+[22] Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International conference on machine learning, pages 6861-6871. PMLR, 2019. 2
+
+[23] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016. 2
+
+[24] Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9267-9276, 2019. 2
+
+[25] Shunwang Gong, Mehdi Bahri, Michael M Bronstein, and Stefanos Zafeiriou. Geometrically principled connections in graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11415-11424, 2020. 2
+
+[26] Lingxiao Zhao and Leman Akoglu. Pairnorm: Tackling oversmoothing in gnns. arXiv preprint arXiv:1909.12223, 2019. 2, 6
+
+[27] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning, pages 5453-5462. PMLR, 2018. 2
+
+[28] Meng Liu, Hongyang Gao, and Shuiwang Ji. Towards deeper graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 338-348, 2020. 2
+
+[29] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In International Conference on Machine Learning, pages 1725-1735. PMLR, 2020. 2
+
+[30] Kuangqi Zhou, Yanfei Dong, Kaixin Wang, Wee Sun Lee, Bryan Hooi, Huan Xu, and Jiashi Feng. Understanding and resolving performance degradation in graph convolutional networks. arXiv preprint arXiv:2006.07107, 2020. 2
+
+[31] Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang. Tackling over-smoothing for general graph convolutional networks. arXiv preprint arXiv:2008.09864, 2020. 2
+
+[32] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. arXiv preprint arXiv:1907.10903, 2019. 2
+
+[33] Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. Supermasks in superposition. arXiv preprint arXiv:2006.14769, 2020. 3
+
+[34] Yonggan Fu, Qixuan Yu, Yang Zhang, Shang Wu, Xu Ouyang, David Cox, and Yingyan Lin. Drawing robust scratch tickets: Subnetworks with inborn robustness are found within randomly initialized networks. Advances in Neural Information Processing Systems, 34, 2021. 3
+
+[35] William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017. 3
+
+[36] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. 3
+
+[37] Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature communications, 9(1):1-12, 2018. 4,5
+
+[38] Shiwei Liu, Decebal Constantin Mocanu, Amarsagar Reddy Ramapuram Matavalam, Yulong Pei, and Mykola Pechenizkiy. Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware. Neural Computing and Applications, 2020.
+
+[39] Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In International Conference on Machine Learning, pages 2943-2952. PMLR, 2020. 4, 5
+
+[40] Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, and Ali Farhadi. Soft threshold weight reparameterization for learnable sparsity. In International Conference on Machine Learning, 2020. 4
+
+[41] Tim Dettmers and Luke Zettlemoyer. Sparse networks from scratch: Faster training without losing performance. arXiv preprint arXiv:1907.04840, 2019. 4
+
+[42] Tianlong Chen, Kaixiong Zhou, Keyu Duan, Wenqing Zheng, Peihao Wang, Xia Hu, and Zhangyang Wang. Bag of tricks for training deeper graph neural networks: A comprehensive benchmark study. arXiv preprint arXiv:2108.10521, 2021. 4, 6, 13
+
+[43] Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287, 2020. 5
+
+[44] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemical science, 9(2):513-530, 2018. 5
+
+[45] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 5
+
+[46] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3438-3445, 2020. 6
+
+[47] Maximilian Stadler, Bertrand Charpentier, Simon Geisler, Daniel Zügner, and Stephan Günnemann. Graph posterior network: Bayesian predictive uncertainty for node classification. Advances in Neural Information Processing Systems, 34, 2021. 8
+
+[48] Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform bad for graph representation? arXiv preprint arXiv:2106.05234,2021. 13
+
+## A Implementation Details
+
+In this paper, all experiments on the Cora/Citeseer/Pubmed datasets are conducted on 1 GeForce RTX 2080TI (11GB), and all experiments on OGBN-Arxiv are conducted on 1 DGX-A100 (40GB). All results reported in this paper are averaged over 5 independent repeated runs.
+
+Train-Val-Test Splitting. We use 140 (Cora), 120 (Citeseer), and 60 (PubMed) labeled nodes for training, 500 nodes for validation, and 1000 nodes for testing. We follow the strategy in [15] for splitting the OGBN-Arxiv dataset.
+
+Hyper-parameter Configuration. We follow [3, 42, 48] to configure the hyper-parameters for training the dense GNN models. All hyper-parameter configurations for UGTs are summarized in Table 4.
+
+Table 4: Implementation details for UGTs.
+
+| DataSets | Cora | Citeseer | Pubmed | OGBN-Arxiv |
| --- | --- | --- | --- | --- |
| Total Epochs | 400 | 400 | 400 | 400 |
| Learning Rate | 0.01 | 0.01 | 0.01 | 0.01 (GNNs with Layers < 10), 0.001 (GNNs with Layers > 10) |
| Optimizer | Adam | Adam | Adam | Adam |
| Weight Decay | 0.0 | 0.0 | 0.0 | 0.0 |
| $n$ (total adjusting epochs) | 200 | 200 | 200 | 200 |
| ${t}_{0}$ | 0 | 0 | 0 | 0 |
| ${\Delta t}$ | 1 epoch | 1 epoch | 1 epoch | 1 epoch |
+
+## B More Experimental Results
+
+### B.1 TSNE visualization.
+
+Figure 9 provides the TSNE visualization of node representations learned by UGTs and by the dense GAT. It can be observed that the node representations learned by the trained dense GAT are mixed, while the node representations learned by UGTs are disentangled.
+
+
+
+Figure 9: TSNE visualization for node representations. Experiments are based on GAT with fixed 448 widths.
+
+### B.2 Out of distribution detection
+
+Figure 10 shows the OOD performance of UGTs and the trained dense GNNs based on the GAT architecture. As we can observe, UGTs achieves appealing OOD performance compared with the corresponding trained dense GAT.
+
+### B.3 Robustness against input perturbations
+
+In this section, we explore robustness against input perturbations while varying the sparsity of the untrained GNNs. Experiments are conducted on the GAT and GCN architectures with width 256 and depth 2. Results are reported in Figure 12 and Figure 11.
+
+
+
+Figure 10: Out-of-distribution performance (ROC-AUC). Experiments are based on GAT architecture (Width:256, Depth:2)
+
+It can be observed that the robustness achieved by UGTs increases with sparsity for both edge and feature perturbation types. Besides, the robustness achieved by UGTs at large sparsity, e.g., sparsity = 0.9, can exceed that of the counterpart trained dense GNNs.
+
+
+
+Figure 11: The robust performance on edge perturbations. $R$ denotes the fraction of perturbed edges (Width: 256, Depth: 2).
+
+
+
+Figure 12: The robust performance on feature perturbations. $R$ denotes the fraction of perturbed nodes (Width: 256, Depth: 2).
+
+
+
+Figure 13: Model Width: The accuracy performance of subnetworks from untrained GNNs w.r.t varying Hidden-Size. The "S01, S05, S09" represents the sparsity of the untrained GNNs. The dashed line represents the results of the trained dense GNNs. Experiments are based on GNNs with 2 layers.
+
+### B.4 The accuracy performance w.r.t model width
+
+Figure 13 shows the performance of UGTs on different architectures with the model width varying from 16 to 1024 and the depth fixed to 2. We summarize our observations as follows:
+
+① The performance of UGTs improves with the width of the GNN models. As the width increases from 16 to 256, the performance of UGTs improves noticeably; beyond width 256, the benefits from model width saturate.
+
+### B.5 Observations via gradient norm
+
+To gain a preliminary understanding of why UGTs can mitigate over-smoothing while the trained dense GNNs cannot, we calculate the gradient norm of each layer for UGTs and for the dense GCN during training. For a fair comparison, we calculate the gradient norm of ${\nabla }_{\left( {\mathbf{m}}^{l} \odot {\mathbf{\theta }}^{l}\right) }\mathcal{L}\left( {g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) , y}\right)$ for UGTs and the gradient norm of ${\nabla }_{{\mathbf{\theta }}^{l}}\mathcal{L}\left( {g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta }}\right) , y}\right)$ for the dense GCN, where $l$ denotes the layer. Results are reported in Figure 14.
+
+As we can observe, a gradient vanishing problem may exist when training the deep dense GCN, since its gradient norm is extremely small, whereas UGTs does not exhibit this problem. This is also indicated by the training loss: the loss of the dense GCN barely decreases, while the loss of UGTs decreases substantially. This might explain why UGTs performs well for deep GNNs.
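+
+A minimal PyTorch sketch of this measurement is given below. It assumes all trainable named parameters participate in the forward pass; the function name and the grouping of gradients by parameter tensor (rather than by GNN layer) are our own simplifications.
+
+```python
+import torch
+
+def layerwise_grad_norms(model, loss):
+    """L2 norm of the gradient w.r.t. each trainable parameter tensor. For UGTs the
+    model is assumed to apply theta * mask internally; for the dense GCN it is plain theta."""
+    params = {n: p for n, p in model.named_parameters() if p.requires_grad}
+    grads = torch.autograd.grad(loss, list(params.values()),
+                                retain_graph=True, allow_unused=True)
+    return {name: (g.norm().item() if g is not None else 0.0)
+            for name, g in zip(params.keys(), grads)}
+```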
+
+### B.6 More experiments for mitigating over-smoothing problem
+
+We conduct experiments on Cora, Citeseer and Pubmed for GAT with deeper layers. The width is fixed to 448 . The results of the other methods are obtained by running the code ${}^{3}$ .
+
+The results are reported in Table 5. It can be observed again that UGTs consistently outperforms all the baselines.
+
+## C Pseudocode
+
+The pseudocode is shown in Algorithm 1.
+
+---
+
+${}^{3}$ https://github.com/VITA-Group/Deep_GCN_Benchmarking.git
+
+---
+
+
+
+Figure 14: Gradient norm w.r.t each layer during training. Experiments are conducted on Cora with GCN architecture containing 32 layers and 448 widths.
+
+Table 5: Test accuracy (%) of different training techniques. The experiments are based on GAT models with 16, 32 layers, respectively. Width is set to 448.
+
+| N-Layers | Cora 16 | Cora 32 | Citeseer 16 | Citeseer 32 | Pubmed 16 | Pubmed 32 |
| --- | --- | --- | --- | --- | --- | --- |
| Trained Dense GAT | 20.6 | 13.0 | 20.0 | 16.9 | 17.9 | 18.0 |
| +Residual | 19.9 | 20.7 | 17.7 | 19.2 | 41.6 | 40.8 |
| +Jumping | 39.7 | 27.8 | 29.1 | 25.5 | 57.3 | 57.1 |
| +NodeNorm | 70.9 | 11.0 | 17.1 | 18.4 | 72.2 | 59.7 |
| +PairNorm | 27.9 | 12.1 | 22.8 | 17.7 | 73.0 | 44.0 |
| +DropNode | 23.6 | 13.0 | 18.8 | 7.0 | 26.7 | 18.0 |
| +DropEdge | 24.8 | 13.0 | 19.4 | 7.0 | 19.3 | 18.0 |
| UGTs-GAT | ${76.7} \pm {1.1}$ | ${74.9} \pm {0.2}$ | ${62.7} \pm {0.7}$ | ${56.5} \pm {1.1}$ | ${77.9} \pm {0.5}$ | ${75.5} \pm {1.5}$ |
+
+Algorithm 1 Untrained Graph Tickets (UGTs)
+
+---
+
+Input: a GNN $g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta }}\right)$ , initial mask $\mathbf{m} = 1 \in {\mathcal{R}}^{\left| \mathbf{\theta }\right| }$ with latent scores $\mathbf{S}$ , learning rate $\lambda$ ,
+
+hyperparameters for the gradual sparsification schedule ${s}_{i},{s}_{f},{t}_{0}$ , and ${\Delta t}$ .
+
+Output: $g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) ,\mathbf{y}$
+
+Randomly initialize model weights $\mathbf{\theta }$ and $\mathbf{S}$ .
+
+for $t = 1$ to $T$ do
+
+ #Calculate the current sparsity level ${s}_{t}$ by Eq. 5 .
+
+ ${s}_{t} \leftarrow {s}_{f} + \left( {{s}_{i} - {s}_{f}}\right) \left( {1 - \frac{t - {t}_{0}}{n\Delta t}}\right)$
+
+ #Get the global threshold for sparsification.
+
+ ${\mathbf{S}}_{\text{thres }} \leftarrow$ Thresholding $\left( {\mathbf{S},{s}_{t}}\right)$
+
+ #Generate the binary mask.
+
+ $m \leftarrow 1$ if $\mathbf{S} > {\mathbf{S}}_{\text{thres }}$ else 0
+
+ #Update $S$ .
+
+ $\mathbf{S} \leftarrow \mathbf{S} - \lambda {\nabla }_{\mathbf{S}}\mathcal{L}\left( {g\left( {A,\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) ,\mathbf{y}}\right)$
+
+end for
+
+Return $g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) ,\mathbf{y}$
+
+---
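+
+For readers who prefer code to pseudocode, the search loop of Algorithm 1 can be sketched in PyTorch roughly as follows. This is an illustrative sketch, not our exact implementation: it assumes the forward pass `gnn(A, X, mask)` applies the flat binary mask element-wise to the randomly initialized (and never updated) weights, and it folds the straight-through estimate into the mask tensor.
+
+```python
+import torch
+
+def ugts_search(gnn, A, X, y, loss_fn, s_f, s_i=0.0, t0=0, n=200, epochs=400, lr=0.01):
+    """Sketch of Algorithm 1: only the latent scores S are optimized;
+    the GNN weights keep their random initialization throughout."""
+    num_params = sum(p.numel() for p in gnn.parameters())
+    S = torch.randn(num_params, requires_grad=True)          # latent importance scores
+    opt = torch.optim.Adam([S], lr=lr)
+    for t in range(epochs):
+        # Linear sparsity schedule (Eq. 5), held at s_f once the n adjusting steps are done
+        frac = min(max((t - t0) / n, 0.0), 1.0)
+        s_t = s_f + (s_i - s_f) * (1.0 - frac)
+        # Global threshold: drop the s_t fraction of weights with the smallest scores
+        k = int(s_t * num_params)
+        thres = torch.kthvalue(S.detach(), k).values if k > 0 else torch.tensor(float("-inf"))
+        mask = (S > thres).float()
+        mask = mask + S - S.detach()                         # straight-through estimate for S
+        loss = loss_fn(gnn(A, X, mask), y)
+        opt.zero_grad()
+        loss.backward()
+        opt.step()
+    return (S.detach() > thres).float()                      # final binary mask
+```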
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dF6aEW3_62O/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/dF6aEW3_62O/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b5935b7554e8781f28779506f2ef0b5f981a17a4
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dF6aEW3_62O/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,258 @@
+§ YOU CAN HAVE BETTER GRAPH NEURAL NETWORKS BY NOT TRAINING WEIGHTS AT ALL: FINDING UNTRAINED GRAPH TICKETS
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of fully trained dense networks at initialization, without any optimization of the network weights (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) still remains mysterious. In this paper, we carry out a first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we find untrained sparse subnetworks at initialization that can match the performance of fully trained dense GNNs. Besides this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, hence becoming a powerful tool to enable deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB). Our source codes are submitted in the Supplementary.
+
+§ 1 INTRODUCTION
+
+Graph Neural Networks (GNNs) [1, 2] have shown the power to learn representations from graph-structured data. Over the past decade, GNNs and their variants such as Graph Convolutional Networks (GCN) [3], Graph Isomorphism Networks (GIN) [4], Graph Attention Networks (GAT) [5] have been successfully applied to a wide range of scenarios, e.g., social analysis [6, 7], protein feature learning [8], traffic prediction [9], and recommendation systems [10]. In parallel, works on untrained networks [11, 12] surprisingly discover the presence of untrained subnetworks in CNNs that can already match the accuracy of their fully trained dense CNNs with their initial weights, without any weight update. In this paper, we attempt to explore discovering untrained sparse networks in GNNs by asking the following question:
+
+Is it possible to find a well-performing graph neural (sub-) network without any training of the model weights?
+
+Positive answers to this question will have significant impacts on the research field of GNNs. ① If the answer is yes, it will shed light on a new direction for obtaining performant GNNs, e.g., traditional training might not be indispensable for performant GNNs. ② The existence of such performant subnetworks will extend the recently proposed untrained subnetwork techniques [11, 12] to GNNs. Prior works [11-13] successfully find that randomly weighted full networks contain untrained subnetworks which perform well without ever modifying the weights, in convolutional neural networks (CNNs). However, a similar study has never been conducted for GNNs. While CNNs plausibly contain well-performing untrained subnetworks due to heavy over-parameterization, GNN models are usually much more compact, and it is unclear whether a performant subnetwork "should" still exist in GNNs. Furthermore, we investigate the connection between untrained sparse networks and widely known barriers in deep GNNs, such as over-smoothing. For instance, as analyzed in [14], by naively stacking many layers and adding non-linearity, the output features are prone to collapsing and becoming indistinguishable. Such undesirable properties significantly limit the power of deeper/wider GNNs, hindering the potential application of GNNs on large-scale graph datasets such as the latest Open Graph Benchmark (OGB) [15]. It is interesting to see what would happen for untrained graph neural networks. Note that the goal of sparsity in our paper is not efficiency, but obtaining nontrivial predictive performance without training (a.k.a., "masking is training" [11]). We summarize our contributions as follows:
+
+
+Figure 1: Performance of untrained graph subnetworks (UGTs (ours) and Edge-Popup [12]) and the corresponding trained dense GNNs. We demonstrate that as the model size increases, UGTs is able to find an untrained subnetwork with its random initializations, that can match the performance of the corresponding fully-trained dense GNNs. The x-axis denotes the corresponding model size for each point, e.g. "64-2" represents a model with 2 layers and 64 widths.
+
+ * We demonstrate for the first time that there exist untrained graph subnetworks with matching performance (referring to as good as the trained full networks), within randomly initialized dense networks and without any model weight training. Distinct from the popular LTH [16, 17], neither the original dense networks nor the identified subnetworks need to be trained.
+
+ * We find that the gradual sparsification technique $\left\lbrack {{18},{19}}\right\rbrack$ can be a stronger performance booster. Leveraging its global sparse variant [20], we propose our method - UGTs, which discovers matching untrained subnetworks within the dense GNNs at extremely high sparsities. For example, our method discovers untrained matching subnetworks with up to 99% sparsity. We validate it across various GNN architectures (GCN, GIN, GAT) on eight datasets, including the large-scale OGBN-ArXiv and OGBN-Products.
+
+ * We empirically show a surprising observation that our method significantly mitigates the over-smoothing problem without any additional tricks and can successfully scale GNNs up with negligible performance loss. Additionally, we show that UGTs also enjoys favorable performance on OOD detection and robustness on different types of perturbations.
+
+§ 2 RELATED WORK
+
+Graph Neural Networks. Graph neural networks are a powerful deep learning approach for graph-structured data. Since being proposed in [1], many variants of GNNs have been developed, e.g., GAT [5], GCN [3], GIN [4], GraphSage [21], SGC [22], and GAE [23]. More and more recent works point out that deeper GNN architectures potentially provide benefits to practical graph structures, e.g., molecules [8], point clouds [24], and meshes [25], as well as the large-scale graph dataset OGB. However, training deep GNNs is a well-known challenge due to various difficulties such as gradient vanishing and over-smoothing problems [14, 26]. The existing approaches to address the above-mentioned problems can be categorized into three groups: (1) skip connections, e.g., Jumping connections [27, 28], Residual connections [24], and Initial connections [29]; (2) graph normalization, e.g., PairNorm [26], NodeNorm [30]; (3) random dropping, including DropNode [31] and DropEdge [32].
+
+Untrained Subnetworks. Untrained subnetworks refer to the hypothesis that there exists a subnetwork in a randomly initialized neural network that can achieve almost the same accuracy as a fully trained neural network without weight updates. [11] and [12] first demonstrate that randomly initialized CNNs contain subnetworks that achieve impressive performance without updating weights
+
+at all. [13] enhanced the performance of untrained subnetworks by iteratively reinitializing the weights that have been pruned. Besides the image classification task, some works also explore the power of untrained subnetworks in other domains, such as multi-tasks learning [33] and adversarial robustness [34].
+
+Instead of proposing well-versed techniques to enable deep GNNs training, we explore the possibility of finding well-performing deeper graph subnetworks at initialization in the hope of avoiding the difficulties of building deep GNNs without model weight training.
+
+§ 3 UNTRAINED GRAPH TICKETS
+
+§ 3.1 PRELIMINARIES AND SETUPS
+
+Notations. We represent matrices by bold uppercase characters, e.g. $\mathbf{X}$ , vectors by bold lowercase characters, e.g. $x$ , and scalars by normal lowercase characters, e.g. x. We denote the ${i}^{th}$ row of a matrix $\mathbf{A}$ by $\mathbf{A}\left\lbrack {i, : }\right\rbrack$ , and the ${\left( i,j\right) }^{\text{ th }}$ element of matrix $\mathbf{A}$ by $\mathbf{A}\left\lbrack {i,j}\right\rbrack$ . We consider a graph $\mathcal{G} = \{ \mathcal{V},\mathcal{E}\}$ where $\mathcal{E}$ is a set of edges and $\mathcal{V}$ is a set of nodes. Let $g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta }}\right)$ be a graph neural network where $\mathbf{A} \in \{ 0,1{\} }^{\left| V\right| \times \left| V\right| }$ is adjacency matrix for describing the overall graph topology, and $\mathbf{X}$ denotes nodal features . $\mathbf{A}\left\lbrack {i,j}\right\rbrack = 1$ denotes the edge between node ${v}_{i}$ and node ${v}_{j}$ . Let $f\left( {\mathbf{X};\mathbf{\theta }}\right)$ be a neural network.
+
+Graph Neural Networks. GNNs denote a family of algorithms that extract structural information from graphs [35], and each layer consists of an Aggregate and a Combine operation. Usually, Aggregate is a function that aggregates messages from a node's neighbors, and Combine is an update function that updates the representation of the current node. Formally, given the graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{X}}\right)$ with node set $\mathcal{V}$ and edge set $\mathcal{E}$ , the $l$ -th layer of a GNN is represented as follows:
+
+$$
+{\mathbf{a}}_{v}^{l} = {\operatorname{Aggregate}}^{l}\left( \left\{ {{\mathbf{h}}_{u}^{l - 1} : \forall u \in \mathcal{N}\left( v\right) }\right\} \right) \tag{1}
+$$
+
+$$
+{\mathbf{h}}_{v}^{l} = {\operatorname{Combine}}^{l}\left( {{\mathbf{h}}_{v}^{l - 1},{\mathbf{a}}_{v}^{l}}\right) \tag{2}
+$$
+
+where ${\mathbf{a}}_{v}^{l}$ is the aggregated representation of the neighborhood for node $v$ and $\mathcal{N}\left( v\right)$ denotes the neighbor nodes set of the node $v$ , and ${\mathbf{h}}_{v}^{l}$ is the node representations at the $l$ -th layer. After propagating through $L$ layers, we achieve the final node representations ${\mathbf{h}}_{v}^{L}$ which can be applied to downstream node-level tasks, such as node classification, link prediction.
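+
+As an illustration of Eqs. (1)-(2), a minimal PyTorch layer with mean aggregation and a linear Combine step could look as follows; this is only a sketch of the abstract operators (real GCN/GIN/GAT layers instantiate Aggregate and Combine differently), and a dense adjacency matrix is assumed for simplicity.
+
+```python
+import torch
+
+class SimpleGNNLayer(torch.nn.Module):
+    """Illustrative layer: mean Aggregate (Eq. 1) + linear Combine (Eq. 2)."""
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        self.combine = torch.nn.Linear(2 * in_dim, out_dim)
+
+    def forward(self, A, H):
+        # Eq. (1): aggregate neighbor representations (mean over neighbors)
+        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
+        a = (A @ H) / deg
+        # Eq. (2): combine each node's own representation with its aggregate
+        return torch.relu(self.combine(torch.cat([H, a], dim=1)))
+```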
+
+Untrained Subnetworks. Following the prior work [11], [12] proposed Edge-Popup, which enables finding untrained subnetworks hidden in a randomly initialized full network $f\left( \mathbf{\theta }\right)$ by solving the following discrete optimization problem:
+
+$$
+\mathop{\min }\limits_{{\mathbf{m} \in \{ 0,1{\} }^{\left| \mathbf{\theta }\right| }}}\mathcal{L}\left( {f\left( {\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) ,\mathbf{y}}\right) \tag{3}
+$$
+
+where $\mathcal{L}$ is task-dependent loss function; $\odot$ represents an element-wise multiplication; $\mathbf{y}$ is the label for the input $\mathbf{X}$ and $\mathbf{m}$ is the binary mask that controls the sparsity level $s$ .
+
+Different from the traditional training of deep neural networks, the network weights are never updated here; instead, masks $\mathbf{m}$ are generated to search for the optimal untrained subnetwork. In practice, each mask ${\mathbf{m}}_{i}$ has a latent score variable ${\mathbf{S}}_{i} \in \mathcal{R}$ that represents the importance of the corresponding weight ${\mathbf{\theta }}_{i}$ . In the forward pass, the binary mask $\mathbf{m}$ is generated by setting the entries with the largest scores in $\mathbf{S}$ to 1 (up to the target sparsity $s$ ) and the rest to 0. In the backward pass, all values in $\mathbf{S}$ are updated with straight-through estimation [36]. At the end of training, an untrained subnetwork is given by the mask $\mathbf{m}$ generated from the converged scores $\mathbf{S}$ .
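+
+A self-contained sketch of this score-based masking for a single linear layer is shown below. It is a simplification of Edge-Popup for illustration: the layer keeps a fixed fraction of weights chosen by score, the weights themselves stay frozen at their random initialization, and the straight-through trick is folded into the mask tensor; the class name and the exact thresholding are our own choices rather than the original implementation.
+
+```python
+import torch
+
+class MaskedLinear(torch.nn.Module):
+    """Linear layer whose random weights are frozen; only the scores are trained."""
+    def __init__(self, in_dim, out_dim, sparsity=0.5):
+        super().__init__()
+        self.weight = torch.nn.Parameter(torch.randn(out_dim, in_dim), requires_grad=False)
+        self.scores = torch.nn.Parameter(torch.randn(out_dim, in_dim))
+        self.sparsity = sparsity
+
+    def forward(self, x):
+        k = max(int((1.0 - self.sparsity) * self.scores.numel()), 1)   # number of kept weights
+        thres = torch.topk(self.scores.flatten(), k).values.min()
+        mask = (self.scores >= thres).float()
+        mask = mask + self.scores - self.scores.detach()               # straight-through estimator
+        return torch.nn.functional.linear(x, self.weight * mask)
+```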
+
+§ 3.2 UNTRAINED GRAPH TICKETS - UGTS
+
+In this section, we adopt the untrained subnetwork techniques to GNNs and introduce our new approach - Untrained Graph Tickets (UGTs). We share the pseudocode of UGTs in the Appendix C.
+
+Formally, given a graph neural network $g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta }}\right)$ , where $\mathbf{A}$ and $\mathbf{X}$ are adjacency matrix and nodal features respectively. The optimization problem of finding an untrained subnetwork in GNNs can be therefore described as follows:
+
+$$
+\mathop{\min }\limits_{{\mathbf{m} \in \{ 0,1{\} }^{\left| \mathbf{\theta }\right| }}}\mathcal{L}\left( {g\left( {\mathbf{A},\mathbf{X};\mathbf{\theta } \odot \mathbf{m}}\right) ,\mathbf{y}}\right) \tag{4}
+$$
+
+Although Edge-Popup [12] can find untrained subnetworks with proper predictive accuracy, its performance is still far from satisfactory. For instance, Edge-Popup can only obtain matching subnetworks at a relatively low sparsity, i.e., 50%.
+
+
+Figure 2: The performance of GNNs with increasing model depths. Experiments are conducted on various GNNs with Cora, Citeseer, Pubmed and OGBN-Arxiv. We observe that as the model goes deeper, fully-trained dense GNNs suffer from a sharp accuracy drop, while UGTs preserves the high accuracy. All the results reported are averaged from 5 runs.
+
+We highlight two limitations of the existing prior research. First of all, prior works [12, 13] initially set the sparsity level of ${\mathbf{m}}_{i}$ to $s$ and maintain it throughout the optimization process. This is very appealing for sparse training [37-39], which chases a better trade-off between performance and efficiency, since the fixed sparsity usually translates to fewer floating-point operations (FLOPs). This scheme, however, is not necessary and is perhaps harmful for finding the smallest possible untrained subnetwork that still performs well. In particular, as shown in [20], a larger search space for sparse neural networks in the early optimization phase leads to better sparse solutions. The second limitation is that the existing methods sparsify networks layer-wise with a uniform sparsity ratio, which typically leads to inferior performance compared with non-uniform layer-wise sparsity [20, 39, 40], especially for deep architectures [41].
+
+Table 1: Test accuracy (%) of different training techniques. The experiments are based on GCN models with 16, 32 layers, respectively. Width is set to 448. See Appendix B. 6 for GAT architecture. The results of the other methods are obtained from [42].
+
+| N-Layers | Cora 16 | Cora 32 | Citeseer 16 | Citeseer 32 | Pubmed 16 | Pubmed 32 |
+| --- | --- | --- | --- | --- | --- | --- |
+| Trained Dense GCN | 21.4 | 21.2 | 19.5 | 20.2 | 39.1 | 38.7 |
+| +Residual | 20.1 | 19.6 | 20.8 | 20.90 | 38.8 | 38.7 |
+| +Jumping | 76.0 | 75.5 | 58.3 | 55.0 | 75.6 | 75.3 |
+| +NodeNorm | 21.5 | 21.4 | 18.8 | 19.1 | 18.9 | 18 |
+| +PairNorm | 55.7 | 17.7 | 27.4 | 20.6 | 71.3 | 61.5 |
+| +DropNode | 27.6 | 27.6 | 21.8 | 22.1 | 40.3 | 40.3 |
+| +DropEdge | 28.0 | 27.8 | 22.9 | 22.9 | 40.6 | 40.5 |
+| UGTs-GCN | ${77.3} \pm {0.9}$ | ${77.5} \pm {0.8}$ | ${61.1} \pm {0.9}$ | ${56.2} \pm {0.4}$ | $\mathbf{{77.6} \pm {0.9}}$ | ${76.3} \pm {1.2}$ |
+
+Untrained Graph Tickets (UGTs). Leveraging the above insights, we propose a new approach, UGTs, which can discover matching untrained subnetworks at extremely high sparsity levels, i.e., up to 99%. Instead of keeping the sparsity of $\mathbf{m}$ fixed throughout the sparsification process, we start from an untrained dense GNN and gradually increase the sparsity to the target sparsity over the course of sparsification. We adjust the original gradual sparsification schedule [18, 19] to a linear decay schedule, since no large performance difference is observed. The sparsity level ${s}_{t}$ at each adjusting step $t$ is calculated as follows:
+
+$$
+{s}_{t} = {s}_{f} + \left( {{s}_{i} - {s}_{f}}\right) \left( {1 - \frac{t - {t}_{0}}{n\Delta t}}\right) \tag{5}
+$$
+
+$$
+t \in \left\{ {{t}_{0},{t}_{0} + {\Delta t},\ldots ,{t}_{0} + {n\Delta t}}\right\}
+$$
+
+where ${s}_{f}$ and ${s}_{i}$ refer to the final sparsity and initial sparsity, respectively; ${t}_{0}$ is the starting point of sparsification; ${\Delta t}$ is the time between two adjusting steps; $n$ is the total number of adjusting steps. We set ${\Delta t}$ as one epoch of mask optimization in this paper.
+
+To obtain a good non-uniform layer-wise sparsity ratio, we remove the weights with the smallest score values ($\mathbf{S}$) across layers at each adjusting step. We do this because [20] showed that the layer-wise sparsity obtained by this scheme outperforms other well-studied sparsity ratios [19, 37, 39]. More importantly, removing weights across layers has a larger search space than considering one layer at a time, which becomes more appealing as the GNN architecture goes deeper.
+
+§ 4 EXPERIMENTAL RESULTS
+
+In this section, we conduct extensive experiments among multiple GNN architectures and datasets to evaluate UGTs. We summarize the experimental setups here.
+
+Table 2: Graph datasets statistics.
+
+| DataSets | #Graphs | #Nodes | #Edges | #Classes | #Features | Metric |
+| --- | --- | --- | --- | --- | --- | --- |
+| Cora | 1 | 2708 | 5429 | 7 | 1433 | Accuracy |
+| Citeseer | 1 | 3327 | 4732 | 6 | 3703 | Accuracy |
+| Pubmed | 1 | 19717 | 44338 | 3 | 3288 | Accuracy |
+| OGBN-Arxiv | 1 | 169343 | 1166243 | 40 | 128 | Accuracy |
+| Texas | 1 | 183 | 309 | 5 | 1703 | Accuracy |
+| OGBN-Products | 1 | 24449029 | 61859140 | 47 | 100 | Accuracy |
+| OGBG-molhiv | 41127 | 25.5 (Average) | 27.5 (Average) | 2 | - | ROC-AUC |
+| OGBG-molbace | 1513 | 34.1 (Average) | 36.9 (Average) | 2 | - | ROC-AUC |
+
+GNN Architectures. We use the three most widely used GNN architectures: GCN, GIN, and GAT ${}^{1}$ in our paper.
+
+Datasets. We choose three popular small-scale graph datasets including Cora, Citeseer, PubMed [3] and one latest large-scale graph dataset OGBN-Arxiv [15] for our main experiments. To draw a solid conclusion, we also evaluate our method on other datasets including OGBN-Products [15], TEXAS [43], OGBG-molhiv [15] and OGBG-molbace [15, 44]. More detailed information can be found in Table 2.
+
+§ 4.1 THE EXISTENCE OF MATCHING SUBNETWORKS
+
+Figure 1 shows the effectiveness of UGTs with different GNNs, including GCN, GIN and GAT, on the four datasets. We can observe that as the model size increases, UGTs can find untrained subnetworks that match the fully-trained dense GNNs. This observation is perfectly in line with the previous findings $\left\lbrack {{12},{13}}\right\rbrack$ , which reveal that model size plays a crucial role to the existence of matching untrained subnetworks. Besides, it can be observed that the proposed UGTs consistently outperforms Edge-Popup across different settings.
+
+§ 4.2 OVER-SMOOTHING ANALYSIS
+
+Deep architecture has been shown as a key factor that improves the model capability in computer vision [45]. However, it becomes less appealing in GNNs mainly because the node interaction through
+
+${}^{1}$ All experiments based on GAT architecture are conducted with heads $= 1$ in this study.
+
+
+Figure 3: TSNE visualization of node representations learned by densely trained GCN and UGTs. Ten classes are randomly sampled from OGBN-Arxiv for visualization. Model depth is set as 16 and 32 respectively; width is set as 448. See Appendix B. 1 for GAT architecture.
+
+the message-passing mechanism (i.e., aggregation operator) would make node representations less distinguishable $\left\lbrack {{26},{46}}\right\rbrack$ , leading to a drastic drop of task performance. This phenomenon is well known as the over-smoothing problem [14, 42]. In this paper, we show a surprising result that UGTs can effectively mitigate over-smoothing in deep GNNs. We conduct extensive experiments to evaluate this claim in this section.
+
+UGTs preserves the high accuracy as GNNs go deeper. In Figure 2, we vary the model depth of various architectures and report the test accuracy. All the experiments are conducted with architectures containing 448 widths except for GAT on OGBN-Arxiv, in which we choose 256 widths for GAT with $2 \sim {10}$ layers and 128 widths for GAT with ${11} \sim {20}$ layers, due to the memory limitation.
+
+As we can see, the performance of trained dense GNNs suffers from a sharp performance drop when the model goes deeper, whereas UGTs impressively preserves the high accuracy across models. Especially at the mild sparsity, i.e., 0.1, UGTs almost has no deterioration with the increased number of layers. Edge-Popup achieves a comparable performance with UGTs on the GIN architecture. However, such comparable performance does not exist in GAT and GCN. A plausible explanation is that the global sparsification schedule used in UGTs enjoys a larger search space than the uniform sparse scheme used in Edge-Popup, leading to a better sparse connectivity that is more suitable for deep GNNs.
+
+UGTs achieves competitive performance with well-established training techniques. To further validate the effectiveness of UGTs in mitigating over-smoothing, we compare UGTs with six state-of-the-art techniques for the over-smoothing problem, including Residual connections, Jumping connections, NodeNorm, PairNorm, DropEdge, and DropNode. We follow the experimental setting in [42] and conduct experiments on Cora/Citeseer/Pubmed with GCN containing 16 and 32 layers. Model width is set to 448 for GCN on Cora/Citeseer/Pubmed. The results of the other methods are obtained from [42] ${}^{2}$ .
+
+Table 1 shows that UGTs consistently outperforms all these advanced techniques on Cora, Citeseer, and Pubmed. For instance, UGTs outperforms the best performing technique (+Jumping) by 2.0%, ${1.2}\% ,{1.0}\%$ on Cora, Citeseer and Pubmed respectively with 32 layers. These results again verify our hypothesis that training bottlenecks of deep GNNs (e.g., over-smoothing) can be avoided or mitigated by finding untrained subnetworks without training weights at all.
+
+Mean Average Distance (MAD). To further evaluate whether the good performance of UGTs can be attributed to the mitigation of over-smoothing, we visualize the smoothness of the node representations learned by UGTs and the trained dense GNNs, respectively. Following [46], we calculate the MAD among node representations for each layer during the process of sparsification. Concretely, MAD [46] is a quantitative metric for the smoothness of node representations: the smaller the MAD, the smoother the node representations. Results are reported in Figure 4. It can be observed that the node representations learned by UGTs maintain a large distance throughout the optimization process, indicating relief from over-smoothing. On the contrary, the densely trained GCN suffers from severely indistinguishable node representations.
+
+${}^{2}$ https://github.com/VITA-Group/Deep_GCN_Benchmarking.git
+
+
+Figure 4: Mean Average Distance among node representations of each GNN layer. Experiments are conducted on Cora with GCN containing 32 layers and 448 widths.
+
+
+Figure 5: The accuracy of GNNs w.r.t varying sparsities. Experiments are conducted on various GNNs with 2 layers and 256 widths for Cora, Citeseer and Pubmed, 4 layers and 386 widths for OGBN-Arxiv.
+
+TSNE Visualizations. Additionally, we visualize the node representations learned by UGTs and the trained dense GNNs with 16 and 32 layers, respectively, on both GCN and GAT architectures. Due to the limited space, we show the results of GCN in Figure 3 and put the visualization of GAT in the Appendix B.1. We can see that the node representations learned by the trained dense GCN are over-mixing in all scenarios and, in the deeper models (i.e., 32 layers), seem to be more indistinguishable. Meanwhile, the projection of node representations learned by UGTs maintains clearly distinguishable, again providing the empirical evidence of UGTs in mitigating over-smoothing problem.
+
+§ 4.3 THE EFFECT OF SPARSITY ON UGTS
+
+To better understand the effect of sparsity on the performance of UGTs, we provide a comprehensive study in Figure 5 where the performance of UGTs with respect to different sparsity levels on different architectures. We summarize our observations below.
+
+① UGTs consistently finds matching untrained graph subnetworks at a large range of sparsities, including the extreme ones. A matching untrained graph subnetwork can be identified with sparsities from 0.1 even up to 0.99 on small-scale datasets such as Cora, Citeseer and Pubmed. For large-scale OGBN-Arxiv, it is more difficult to find matching untrained subnetworks. Matching subnetworks are mainly located within sparsities of ${0.3} \sim {0.6}$ .
+
+② What's more, UGTs consistently outperforms Edge-Popup. UGTs shows better performance than Edge-Popup at high sparsities across different architectures on Cora, Citeseer, Pubmed, and OGBN-Arxiv. Surprisingly, when increasing the sparsity from 0.7 to 0.99, UGTs maintains a very high accuracy, whereas the accuracy of Edge-Popup degrades notably. This is in accord with our expectation, since UGTs selects important weights globally by searching for a well-performing sparse topology across layers.
+
+§ 4.4 BROADER EVALUATION OF UGTS
+
+
+Figure 6: Out-of-distribution performance (ROC-AUC). Experiments are conducted with GCN (Width: 256, Depth: 2).
+
+
+Figure 7: The robust performance on feature perturbations with the fraction of perturbed nodes varying from 0% to 40%. Experiments are conducted with GCN and GAT (Width: 256, Depth: 2).
+
+In this section, we systematically study the performance of UGTs on out of distribution (OOD) detection, robustness against the input perturbations including feature and edge perturbations. Following [47], we create OOD samples by specifying all samples from 40% of classes and removing them from the training set, feature perturbations by adding noise from Bernoulli distribution and edge perturbations by moving edge's end point at random. The results of OOD experiments are reported in Figure 6 and Figure 10 (shown in Appendix B.2). The results of robustness experiments are reported in Figure 8 and Figure 7. We summarize our observations as follows:
+
+① UGTs enjoys matching performance on OOD detection. Figure 6 and Figure 10 show that untrained graph subnetworks discovered by UGTs achieve matching performance on OOD detection compared with the trained dense GNNs in most cases. Besides, UGTs consistently outperforms Edge-Popup method at a large range of sparsities on OOD detection.
+
+② UGTs produces highly sparse yet robust subnetworks on input perturbations. Figure 7 and Figure 8 demonstrate that UGTs with high sparsity level (Sparsity=0.9) achieves more robust results than the trained dense GNNs on both feature and edge perturbations with perturbation percentage ranging from 0 to 40%. Again, UGTs consistently outperforms Edge-Popup with both perturbation types.
+
+
+Figure 8: The robust performance on edge perturbations with the fraction of perturbed edges varying from 0% to 40%. Experiments are conducted with GCN and GAT (Width: 256, Depth: 2).
+
+§ 4.5 EXPERIMENTS ON GRAPH-LEVEL TASK AND OTHER DATASETS
+
+To draw a solid conclusion, we further conduct extensive experiments of graph-level task on OGBG-molhiv and OGBG-molbace; node-level task on TEXAS and OGBN-Products. The experiments are based on GCN model with width=448 and depth=3 . Table 3 consistently verifies that a matching untrained subnetwork can be identified in GNNs across multiple tasks and datasets.
+
+Table 3: Experiments on graph-level tasks and other datasets. A GCN model with width 448 and depth 3 is adopted for these experiments.
+
+
+§ 5 CONCLUSION
+
+In this work, we for the first time confirm the existence of matching untrained subnetworks at a large range of sparsity. UGTs consistently outperforms the previous untrained technique - Edge-Popup on multiple graph datasets across various GNN architectures. What's more, we show a surprising result that searching for an untrained subnetwork within a randomly weighted dense GNN instead of directly training the latter can significantly mitigate the over-smoothing problem of deep GNNs. Across popular datasets, e.g., Cora, Citeseer, Pubmed, and OGBN-Arxiv, our method UGTs can achieve comparable or better performance with the various well-studied techniques that are specifically designed for over-smoothing. Moreover, we empirically find that UGTs also achieves appealing performance on other desirable aspects, such as out-of-distribution detection and robustness. The strong results of our paper point out a surprising but perhaps worth-a-try direction to obtain high-performing GNNs, i.e., finding the untrained graph tickets located within a randomly weighted dense GNN instead of training it.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dI6KBKNRp7/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/dI6KBKNRp7/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..96122298bd20b5e4f7c738a25ef9e374e512edd0
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dI6KBKNRp7/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,421 @@
+# An Analysis of Virtual Nodes in Graph Neural Networks for Link Prediction (Extended Abstract)
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## 1 Introduction
+
+It is well known that the graph classification performance of graph neural networks (GNNs) often improves by adding an artificial virtual node to the graphs, which is connected to all graph nodes [1-4]. While virtual nodes were originally thought of as aggregated representations of the entire graph, they also provide shortcuts for message passing between nodes along the graph edges. Surprisingly, the advantages of virtual nodes have never been theoretically investigated, and their impact on other problems is still an open research question. We adapt and study the virtual node concept for problems over networks, which are usually larger, often very sparse or dense, and overall more heterogeneous.
+
+Many popular GNNs are based on message passing, which computes node embeddings by iteratively aggregating the features of (usually direct) neighbor nodes along the graph edges [1]. In this way, they are able to distinguish (non-)isomorphic nodes (to a great extent) [5], but this does not transfer to links [6]; for links, extra procedures may be needed (e.g., modeling enclosing subgraphs [7]). Furthermore, on large graphs, GNNs may face the under-reaching problem if long-range dependencies beyond the model's computing radius are important for the problem at hand (e.g., complex chains of protein-protein interactions). Over dense graphs, GNNs with many layers struggle with over-smoothing, i.e., node representations converging to similar values. There have been several proposals to overcome these problems. On the one hand, several works propose techniques that allow for larger numbers of GNN layers [8-14]. However, as shown in our later results, many of them do not perform well on link prediction tasks, especially on comparably dense graphs. On the other hand, there are approaches that adapt message passing to consider neighbors beyond the one-hop neighborhood, based on graph diffusion [15-20] and other theories [21, 22]. Yet, most of these models are relatively complex and, in fact, in our experiments over the challenging graphs from the Open Graph Benchmark (OGB) [23], several ran out of memory. In this paper, we show that virtual nodes may alleviate these typical issues of GNNs over larger graphs.
+
+We focus on link prediction, ${}^{1}$ which is important in view of incomplete graph data in practice in various different domains [24-27]. Numerous models have been proposed to solve this problem in the past, ranging from knowledge-graph-specific predictors [27] to GNNs [7, 24]. We explore the application and effects of virtual nodes in link prediction both theoretically and empirically:
+
+- We propose to use multiple virtual nodes in the network graph scenario and describe a graph-based technique to connect them to the graph nodes. In a nutshell, we use a graph clustering algorithm to determine groups of nodes in the graph belonging together and then connect these nodes to a common virtual node (see Figure 1). In this way, we add expressiveness, and under-reaching is decreased because clustered nodes can share information easily; meanwhile, the nodes are spared of unnecessary information from unrelated nodes (i.e., in contrast to the single virtual node model).
+
+- We also investigate alternative methods to determine the virtual node connections (i.e., randomization) and compare to the original model with a single virtual node.
+
+- We provide the first theoretical analysis of the benefits of virtual nodes in terms of (I) expressiveness of the learned link representation and (II) potential impact on under-reaching and over-smoothing.
+
+- We conducted experiments over challenging datasets, and provide ablation studies and a detailed analysis of important factors.
+
+---
+
+${}^{1}$ Most results can be easily extended to node classification.
+
+---
+
+
+
+Figure 1: Multiple virtual nodes increase expressiveness: a regular GNN computes the same representation for isomorphic nodes ${v}_{2}$ and ${v}_{3}$ , and hence cannot discriminate links $\left\{ {{v}_{1},{v}_{2}}\right\}$ and $\left\{ {{v}_{1},{v}_{3}}\right\}$ . The embeddings of the nodes can be influenced by virtual nodes ${s}_{1}$ and ${s}_{2}$ and become different.
+
+Altogether, we show that, also for link prediction, virtual nodes are simple but powerful extensions that may yield rather stable performance increases for various standard GNNs. Since the latter represent simple and proven models which are especially interesting for applications, our study provides practical guidance and explanations about where and how virtual nodes may provide benefits. In this abstract, we give an overview of our main findings; for details and additional results, see the appendix.
+
+## 2 Preliminaries
+
+Link Prediction. We consider a graph $G = \left( {V, E}\right)$ with nodes $V$ and undirected edges $E \subseteq V \times V$ . This basic choice is only for ease of presentation; our techniques work for directed graphs and (with simple adaptation) for graphs with labelled nodes (edges). We assume $V$ to be ordered and may refer to a node by its index in $V$ . For a node $v \in V$ , ${\mathcal{N}}_{v}$ denotes the set of its neighbors. Given two nodes, the link prediction task is to predict if there is a link between them.
+
+Message-Passing Graph Neural Networks. In this paper, we usually use the term graph neural networks (GNNs) to denote GNNs that use message passing as described in [1]. These networks compute for every $v \in V$ a node representation ${h}_{v}^{\ell }$ at layer $\ell$ , by aggregating its neighbor nodes based on a generic aggregation function and then combine the obtained vector with ${h}_{v}^{\ell - 1}$ as below.
+
+$$
+{h}_{v}^{\ell } = {\operatorname{COMB}}^{\ell }\left( {{h}_{v}^{\ell - 1},{\operatorname{AGG}}^{\ell }\left( \left\{ {{h}_{u}^{\ell - 1} \mid u \in {\mathcal{N}}_{v}}\right\} \right) }\right) \tag{1}
+$$
+
+Link prediction with GNNs is usually done by combining the final representations of nodes $u, v$ under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring. Our implementation follows this approach.
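+
+As an illustration, a minimal sketch of such a scoring head is given below (not necessarily our exact implementation); the Hadamard product used to combine the two endpoint embeddings and the layer sizes are assumptions for the example.
+
+```python
+import torch
+import torch.nn as nn
+
+class LinkPredictor(nn.Module):
+    """Scores a candidate link from the final GNN embeddings of its two endpoints."""
+
+    def __init__(self, hidden_dim: int):
+        super().__init__()
+        self.mlp = nn.Sequential(
+            nn.Linear(hidden_dim, hidden_dim),
+            nn.ReLU(),
+            nn.Linear(hidden_dim, 1),
+        )
+
+    def forward(self, h_u: torch.Tensor, h_v: torch.Tensor) -> torch.Tensor:
+        # Combine the endpoint embeddings (element-wise product is one common choice),
+        # then score with feed-forward layers and a final sigmoid.
+        return torch.sigmoid(self.mlp(h_u * h_v)).squeeze(-1)
+
+# Usage: scores = LinkPredictor(64)(h[src_idx], h[dst_idx])
+```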
+
+## 3 Virtual Nodes in Graph Neural Networks for Link Prediction
+
+Multiple Virtual Nodes. The intuition of using virtual nodes is to provide a shortcut for sharing information between the graph nodes. However, the amount of information in a graph with possibly millions of nodes is enormous, and likely too much to be captured in a single virtual node embedding. Further, not all information is equally relevant to all nodes. Therefore, we suggest using multiple virtual nodes $S = \left\{ {s}_{1},{s}_{2},\ldots ,{s}_{n}\right\}$,${}^{2}$ each being connected to a subset of graph nodes, as determined by an assignment $\sigma : V \rightarrow \left\lbrack {1, n}\right\rbrack$ ; $n$ is a hyperparameter. We consider two options to obtain $\sigma$ :
+
+Randomness: GNN-RM. Most simply, we can determine a fixed $\sigma$ randomly once with initialization.
+
+Clustering: GNN-CM. Many types of graph data incorporate some cluster structure that reflects which nodes belong closely together (e.g., collaboration or social networks). We propose to connect nodes in such a cluster to a common virtual node, such that the structure inherent to the given graph is reflected in our virtual node assignment $\sigma$ . More precisely, during initialization, we use a generic clustering algorithm (e.g., METIS [28]) which, given a number $m$ , creates a set $C =$ $\left\{ {{C}_{1},{C}_{2}\ldots ,{C}_{m}}\right\}$ of clusters (i.e., sets of graph nodes) by computing an assignment $\rho : V \rightarrow \left\lbrack {1, m}\right\rbrack$ , assigning each graph node to a cluster. We then set $m = n$ and $\sigma = \rho$ .
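+
+The following minimal sketch shows how such an assignment $\sigma$ can be computed. METIS [28] is the algorithm we refer to above; the networkx greedy-modularity routine used here is only an assumed stand-in, and the helper name cluster_assignment is hypothetical.
+
+```python
+import networkx as nx
+from networkx.algorithms.community import greedy_modularity_communities
+
+def cluster_assignment(G: nx.Graph, n_virtual: int) -> dict:
+    """Assign every graph node to one of n_virtual virtual nodes via graph clustering.
+
+    Greedy modularity communities serve here only as an assumed stand-in for METIS,
+    with surplus communities folded together modulo n_virtual.
+    """
+    sigma = {}
+    for i, community in enumerate(greedy_modularity_communities(G)):
+        for v in community:
+            sigma[v] = i % n_virtual
+    return sigma
+
+# Example: sigma = cluster_assignment(nx.karate_club_graph(), n_virtual=4)
+```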
+
+The Model. We integrate the multiple virtual nodes into a generic message-passing GNN in a straightforward way extending the approach from [23] to include multiple virtual nodes, computing
+
+---
+
+${}^{2}$ Since notation $V$ is standard for nodes, we use $S$ for the set of virtual nodes. Think of "supernodes".
+
+---
+
+node representations ${h}_{v}^{\ell }$ for a node $v \in V$ at layer $\ell$ as below. The highlighted adaptation of the standard GNN (Equation (1)) is only minor, but powerful. In our implementation, ${\mathrm{{COMB}}}_{\mathrm{{VN}}}^{\ell }$ is addition combined with linear layers and layer normalization, ${\mathrm{{AGG}}}_{\mathrm{{VN}}}^{\ell }$ is a sum.
+
+$$
+{h}_{{s}_{i}}^{\ell } = {\operatorname{COMB}}_{\mathrm{{VN}}}^{\ell }\left( {{h}_{{s}_{i}}^{\ell - 1},{\operatorname{AGG}}_{\mathrm{{VN}}}^{\ell }\left( \left\{ {{h}_{u}^{\ell - 1} \mid u \in V,\sigma \left( u\right) = i}\right\} \right) }\right)
+$$
+
+$$
+{h}_{v}^{\ell } = {\operatorname{COMB}}^{\ell }\left( {{h}_{v}^{\ell - 1} + {h}_{{s}_{\sigma \left( v\right) }}^{\ell },{\operatorname{AGG}}^{\ell }\left( \left\{ {{h}_{u}^{\ell - 1} \mid u \in {\mathcal{N}}_{v}}\right\} \right) }\right)
+$$
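+
+A minimal PyTorch sketch of one such layer follows; it assumes mean aggregation over neighbors on a dense adjacency matrix and omits the layer normalization mentioned above, so it illustrates the update equations rather than reproducing the exact implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class VirtualNodeGNNLayer(nn.Module):
+    """One message-passing layer with multiple virtual nodes (simplified sketch)."""
+
+    def __init__(self, dim: int):
+        super().__init__()
+        self.comb_vn = nn.Linear(2 * dim, dim)  # COMB_VN over [previous virtual state, AGG_VN]
+        self.comb = nn.Linear(2 * dim, dim)     # COMB over [node state + virtual state, AGG]
+
+    def forward(self, h, h_vn, adj, sigma):
+        # h: (|V|, dim) node states, h_vn: (n, dim) virtual node states,
+        # adj: (|V|, |V|) dense adjacency, sigma: (|V|,) long tensor mapping nodes to virtual nodes.
+        agg_vn = torch.zeros_like(h_vn).index_add_(0, sigma, h)         # AGG_VN: sum over assigned nodes
+        h_vn = torch.relu(self.comb_vn(torch.cat([h_vn, agg_vn], -1)))  # update virtual nodes first
+        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
+        agg = adj @ h / deg                                             # AGG: mean over neighbors
+        h = torch.relu(self.comb(torch.cat([h + h_vn[sigma], agg], -1)))
+        return h, h_vn
+```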
+
+### 3.1 Analysis I: Virtual Nodes Increase Expressiveness
+
+Additional structure-related features such as distance encodings [6, 29] are known to make graph representation learning more powerful. Our multiple virtual nodes have a similar effect. Figure 1 gives an intuition of how they can increase the expressiveness of the regular 1-WL-GNN; see also Thm. B.1.
+
+### 3.2 Analysis II: Virtual Nodes Impact Node Influence
+
+We assume we can learn useful embeddings for virtual nodes if the assignment is chosen appropriately. Based on the above analysis, we expect virtual nodes to positively impact learning and prediction performance. Following [8,16], we measure the sensitivity of a node $y$ on a node $x$ by the influence score. For a $k$ -layer GCN, this score is known to be proportional in expectation to the $k$ -step random walk distribution ${P}_{rw}$ from $x$ to $y$ .${}^{3}$ We exploit this relationship and argue in terms of ${P}_{rw}$ .
+
+Impact of Virtual Nodes. For simplicity, we consider the influence score in an $r$ -regular graph. Consider the message passing between two nodes $x$ and $y$ . For $k = 1$ , there are two cases: if $y$ is not connected to $x$ , the influence changes from 0 to $\frac{1}{\left( r + 2\right) }$ ; otherwise:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y,1}\right) = {P}_{rw}\left( {x \rightarrow y, 1}\right) + {P}_{rw}\left( {x \rightarrow s, s \rightarrow y}\right) = \frac{1}{\left( r + 2\right) } + \frac{1}{\left( {r + 2}\right) \left| V\right| }.
+$$
+
+For $k \geq 2$ , by adding a virtual node $s$ in one GNN layer, the probability changes to:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y, k}\right) = {P}_{rw}\left( {x \rightarrow y, k}\right) + {P}_{rw}\left( {x \rightarrow s, s \rightarrow y}\right) {P}_{rw}^{s}\left( {x \rightarrow y, k - 1}\right)
+$$
+
+$$
+= \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| V\right| }{P}_{rw}^{s}\left( {x \rightarrow y, k - 1}\right) .
+$$
+
+Multiple Virtual Nodes. We continue along these lines and assume there is a shortest path of length $\leq k$ between $x$ and $y$ . If $x$ and $y$ connect to the same virtual node $s$ , then the above changes to:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y, k}\right) = \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| {C}_{s}\right| }{P}_{rw}^{s}\left( {x \rightarrow y, k - 1}\right) . \tag{2}
+$$
+
+Since the set ${C}_{s}$ of nodes connecting to $s$ is much smaller than $V$ , multiple virtual nodes can increase the impact of potentially important distant nodes more than a single virtual node.
+
+Impact on Over-Smoothing. The idea is to show that multiple virtual nodes help preserve more local information. If we consider the influence of $x$ onto itself, we can show that, with a single virtual node, a graph node can preserve less information for itself at each layer. However, this changes in view of multiple virtual nodes; in particular, when $\left| {C}_{s}\right| \leq r + 1$ . We encounter this scenario practically especially with dense graphs. This fits nicely since dense graphs are particularly prone to over-smoothing and, as shown in $\left\lbrack {8,{16}}\right\rbrack$ , additional capability to preserve local information in message passing steps helps to reduce over-smoothing. More details are shown in Appendix B.2.
+
+## 4 Evaluation
+
+Datasets. We use two datasets from OGB: ddi, a drug-drug interaction network; and collab, an author collaboration network [23]. Data statistics are in the Appendix (Table 2). ddi is dense with a low graph diameter, while collab is sparser with a large diameter. Both have high clustering coefficients.
+
+Models. Standard GNNs: GCN [30] and SAGE [31], which we extend with virtual nodes; deep GNNs: SGC, APPNP, DeeperGCN, GCN-JKNet; message passing beyond the direct neighborhood: P-GNN [22], APPNP [16], GDC [20]; and an advanced GNN-based link predictor: SEAL [7].
+
+---
+
+${}^{3}$ See Theorem 1 in [8]; that theorem makes some simplifying assumptions (e.g., on the shape of GCN).
+
+---
+
+Table 1: Comparison of virtual-node augmented GNNs to models with similar goal; *: from OGB leaderboard.
+
+| Model | ddi Hits@20 | collab Hits@50 |
+| --- | --- | --- |
+| SEAL* | 30.56 ± 3.86 | **64.74 ± 00.43** |
+| DeeperGCN* | n/a | 52.73 ± 00.47 |
+| SGC | 06.76 ± 05.86 | 46.35 ± 01.97 |
+| P-GNN | 10.50 ± 00.00 | mem. |
+| APPNP | 14.92 ± 02.98 | 31.85 ± 02.05 |
+| GCN | 40.76 ± 10.73 | 49.55 ± 00.64 |
+| GCN-GDC | 25.50 ± 12.42 | mem. |
+| GCN+JKNet* | 60.56 ± 08.69 | n/a |
+| GCN+LRGA* | 62.30 ± 09.12 | 52.21 ± 00.72 |
+| GCN-VN | 62.17 ± 12.41 | 50.49 ± 00.88 |
+| GCN-RM | 55.32 ± 12.62 | 50.83 ± 01.09 |
+| GCN-CM | 61.05 ± 15.63 | 51.81 ± 00.76 |
+| SAGE | 61.73 ± 10.68 | 55.16 ± 01.71 |
+| SAGE-GDC | 31.41 ± 12.54 | mem. |
+| SAGE+edges* | 74.95 ± 03.17 | n/a |
+| SAGE-VN | 64.91 ± 13.60 | 58.75 ± 00.91 |
+| SAGE-RM | 70.68 ± 11.74 | 58.30 ± 00.87 |
+| SAGE-CM | **76.21 ± 11.57** | 60.17 ± 01.37 |
+
+
+
+Figure 2: Impact of virtual node number; ddi (top) and collab (bottom).
+
+Results, Table 1. Overall Impact of Virtual Nodes. The common approach of using a single virtual node (GNN-VN) yields good improvements over ddi and slight improvements over collab. The numbers for GNN-RM reflect the randomness of their connections to the virtual nodes: there is no clear trend, but they clearly outperform the original models. The virtual node assignment based on the graph structure (GNN-CM) yields consistently good improvements over ddi and collab. We note that we obtained some ambiguous results with data that has less cluster structure, but overall we can observe a positive impact.
+
+Model Comparison. The results of the best models from the OGB leaderboard vary strongly across the datasets (e.g., SEAL), or have not been reported at all. Most deep GNNs and models that use complex message-passing techniques perform disappointingly and, overall, much worse than the standard GNNs. We did thorough hyperparameter tuning for these models, so this is hard to explain. A possible reason may be that most of their original evaluations focus on node or graph classification and consider very different types of data. The model closest to our approach is the position-aware graph neural network (P-GNN) [22]. It assigns nodes to random subsets of nodes called "anchor-sets", and then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. So, it creates a message for each node for every anchor-set, instead of for each direct neighbor. The fact that it ran out of memory on collab shows that practice may benefit from simpler or more efficient schemes.
+
+Impact of Virtual Node Number, Figure 2. The configurations of the best models provided in the appendix show that the chosen numbers of virtual nodes are indeed random for the "random" models, but GNN-CM consistently uses a high number of virtual nodes, which also suits it better according to our theoretical analysis. In line with this, the more detailed analysis varying the number of virtual nodes yields best results (also in terms of standard deviations) for SAGE-CM at rather high values. For GCN, we do not see a clear trend, but (second) best performance with 64 virtual nodes.
+
+Using Virtual Nodes Only at the Last GNN Layer, Table 3 (Appendix C.2). [32] show that using a fully connected adjacency matrix at the last layer of a standard GNN helps to better capture information over long ranges. We therefore investigated if it is a better architectural choice to use virtual nodes only at the last layer. However, we observed that this can lead to extreme performance drops.
+
+Conclusions and Discussions. In a nutshell, our clustering-based virtual node assignment provides stable performance increases if the graph contains good cluster structure and is sufficiently large. In smaller graphs, the GNNs alone were usually sufficient. In line with our theoretical investigation, we expect virtual nodes to be especially beneficial over dense graphs.
+
+References
+
+[1] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Doina Precup and Yee Whye Teh, editors, Proc. of ICML, volume 70 of Proceedings of Machine Learning Research, pages 1263-1272. PMLR, 06-11 Aug 2017. URL http://proceedings.mlr.press/v70/gilmer17a.html.1,2
+
+[2] Junying Li, Deng Cai, and Xiaofei He. Learning graph-level representation for drug discovery. CoRR, abs/1709.03741, 2017. URL http://arxiv.org/abs/1709.03741.
+
+[3] Trang Pham, Truyen Tran, Khanh Hoa Dam, and Svetha Venkatesh. Graph classification via deep learning with virtual nodes. CoRR, abs/1708.04357, 2017. URL http://arxiv.org/ abs/1708.04357.
+
+[4] Katsuhiko Ishiguro, Shin-ichi Maeda, and Masanori Koyama. Graph warp module: an auxiliary module for boosting the power of graph neural networks. CoRR, abs/1902.01020, 2019. URL http://arxiv.org/abs/1902.01020.1
+
+[5] Laszlo Babai and Ludik Kucera. Canonical labelling of graphs in linear average time. SFCS '79, page 39-46, USA, 1979. IEEE Computer Society. doi: 10.1109/SFCS.1979.8. URL https://doi.org/10.1109/SFCS.1979.8.1
+
+[6] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. Advances in Neural Information Processing Systems, 34, 2021. 1, 3, 7, 10
+
+[7] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In Proc. of NIPS, pages 5171-5181, 2018. 1, 3
+
+[8] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In Proc. of ICML, pages 5453-5462. PMLR, 2018. 1, 3, 8, 9
+
+[9] Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proc. of ICML, volume 97 of Proceedings of Machine Learning Research, pages 6861-6871. PMLR, 2019. URL http://proceedings.mlr.press/v97/wu19e.html.
+
+[10] Meng Liu, Hongyang Gao, and Shuiwang Ji. Towards deeper graph neural networks. In Rajesh Gupta, Yan Liu, Jiliang Tang, and B. Aditya Prakash, editors, Proc. of KDD, pages 338-348. ACM, 2020. doi: 10.1145/3394486.3403076. URL https://doi.org/10.1145/3394486.3403076.
+
+[11] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In Hal Daumé III and Aarti Singh, editors, Proc. of ICML, volume 119 of Proceedings of Machine Learning Research, pages 1725-1735. PMLR, 2020. URL http://proceedings.mlr.press/v119/chen20v.html.
+
+[12] Ke Sun, Zhouchen Lin, and Zhanxing Zhu. Adagcn: Adaboosting graph convolutional networks into deep models. 2021.
+
+[13] Kaixiong Zhou, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, and Xia Hu. Towards deeper graph neural networks with differentiable group normalization. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Proc. of NeurIPS, 2020.
+
+[14] Guohao Li, Chenxin Xiong, Ali K. Thabet, and Bernard Ghanem. Deepergcn: All you need to train deeper gcns. CoRR, abs/2006.07739, 2020. URL https://arxiv.org/abs/2006.07739.1
+
+[15] James Atwood and Don Towsley. Diffusion-convolutional neural networks. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Proc. of NIPS, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/ file/390e982518a50e280d8e2b535462ec1f-Paper.pdf. 1
+
+[16] Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. In Proc. of ICLR. OpenReview.net, 2019. URL https://openreview.net/forum?id=H1gL-2A9Ym.3, 9
+
+[17] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. MixHop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In Kamalika Chaudhuri and Ruslan Salakhut-dinov, editors, Proc. of ICML, volume 97 of Proceedings of Machine Learning Research, pages 21-29. PMLR, 2019. URL http://proceedings.mlr.press/v97/abu-el-haija19a.html.
+
+[18] Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, and Xueqi Cheng. Graph convolutional networks using heat kernel for semi-supervised learning. In Sarit Kraus, editor, Proc. of IJCAI, pages 1928-1934. ijcai.org, 2019. doi: 10.24963/ijcai.2019/267. URL https://doi.org/10.24963/ijcai.2019/267.
+
+[19] Zheng Ma, Junyu Xuan, Yu Guang Wang, Ming Li, and Pietro Liò. Path integral based convolution and pooling for graph neural networks. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Proc. of NeurIPS, 2020.
+
+[20] Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Proc. of NeurIPS, pages 13333-13345, 2019. 1, 3
+
+[21] Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proc. of AAAI, pages 4602-4609. AAAI Press, 2019. doi: 10.1609/aaai.v33i01. 33014602. URL https://doi.org/10.1609/aaai.v33i01.33014602.1
+
+[22] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proc. of ICML, volume 97 of Proceedings of Machine Learning Research, pages 7134-7143. PMLR, 2019. URL http://proceedings.mlr.press/v97/you19b.html.1,3,4
+
+[23] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In Proc. of NeurIPS, 2020. 1, 2, 3
+
+[24] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016. 1
+
+[25] Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social Networks, 25(3): 211-230, 2003. ISSN 0378-8733. doi: https://doi.org/10.1016/S0378-8733(03)00009-1.URL https://www.sciencedirect.com/science/article/pii/S0378873303000091.
+
+[26] Khushnood Abbas, Alireza Abbasi, Shi Dong, Ling Niu, Laihang Yu, Bolun Chen, Shi-Min Cai, and Qambar Hasan. Application of network link prediction in drug discovery. BMC bioinformatics, 22(1):1-21, 2021.
+
+[27] Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and S Yu Philip. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems, 2021. 1
+
+[28] George Karypis and Vipin Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing, 20:359-392, 1998. 2
+
+[29] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. Proc. of NeurIPS, 33, 2020. 3
+
+[30] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proc. of ICLR. OpenReview.net, 2017. URL https://openreview.net/forum? id=SJU4ayYgl. 3, 8
+
+[31] William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Proc. of NIPS, pages 1024-1034, 2017. 3
+
+[32] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. Proc. of ICLR, 2021. 4
+
+## A Appendix
+
+## B Additional Theoretical Results
+
+### B.1 Expressiveness of link representation
+
+Given multiple virtual nodes $S = \left\{ {{s}_{1},\ldots ,{s}_{m}}\right\}$ , we obtain a node labeling that includes the node representations ${h}_{{s}_{i}}^{\ell }$ of the virtual nodes. For every $u \in V$ , we have additional features $l\left( {u \mid S}\right) = {\left( {h}_{{s}_{1}}^{\ell },\ldots ,{h}_{{s}_{m}}^{\ell }\right) }^{\mathrm{T}}\left( {\gamma \left( {u \mid {s}_{1}}\right) ,\ldots ,\gamma \left( {u \mid {s}_{m}}\right) }\right)$ , where $\gamma \left( {u \mid {s}_{i}}\right) = 1$ if $u$ is connected to the virtual node ${s}_{i}$ , and $\gamma \left( {u \mid {s}_{i}}\right) = 0$ otherwise. We can initialize ${h}_{{s}_{i}}^{0}$ with the one-hot encoding of $i$ to ensure the ${s}_{i}$ have different labels. We can then show that this labeling increases the power of GNNs.
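+
+A minimal sketch of these additional features (assuming the virtual node assignment is given as an index tensor): with one-hot initialized virtual node embeddings, $l\left( {u \mid S}\right)$ reduces to a one-hot cluster label per node.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def virtual_node_labels(sigma: torch.Tensor, h_vn: torch.Tensor) -> torch.Tensor:
+    """Compute l(u | S) for every node u; sigma[u] is the index of u's virtual node."""
+    # gamma(u | s_i) as a 0/1 indicator matrix of shape (|V|, m).
+    gamma = F.one_hot(sigma, num_classes=h_vn.size(0)).float()
+    # Each node picks up the embedding of the virtual node it is connected to.
+    return gamma @ h_vn
+
+# With h_vn = torch.eye(m) (one-hot initialization), the result is a one-hot cluster label.
+```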
+
+Theorem B.1 Given an arbitrary non-attributed graph with $n$ nodes, if the degree of each node in the graph is between 1 and $\mathcal{O}\left( {{\log }^{\frac{n - \epsilon }{2k}}\left( n\right) }\right)$ , for any constant $\epsilon > 0$ , given $m$ virtual nodes which evenly divide the node set into $m$ clusters, there are $\omega \left( {{\left( m - 1\right) }^{2}{\left( \frac{{n}^{\epsilon }}{m} - 1\right) }^{3}}\right)$ many pairs of non-isomorphic links $\left( {u, w}\right) ,\left( {v, w}\right)$ such that a $k$ -layer 1-WL-GNN gives $u, v$ the same representation, while using $m$ virtual nodes gives $\left( {u, w}\right) ,\left( {v, w}\right)$ different representations.
+
+The theorem says that a 1-WL-GNN with virtual nodes can discriminate many links that a 1-WL-GNN cannot discriminate. On the other hand, it is intuitive that adding virtual nodes can be at least as powerful as 1-WL-GNNs, since it keeps all other components. If there are links that a 1-WL-GNN can discriminate, we only need to assign the related nodes to the same virtual nodes, so that the virtual-nodes-based method can also discriminate them.
+
+#### B.1.1 Proof of Theorem B.1
+
+The proof can be separated into two steps. The first step is to prove that there exist $n/o\left( {n}^{1 - \epsilon }\right) =$ $\omega \left( {n}^{\epsilon }\right)$ many nodes that are locally $h$ -isomorphic (which means their $h$ -hop enclosing subgraphs are isomorphic). This step is the same as the proof of Theorem 2 in [6], so we omit the details here. The basic idea is to expand the $h$ -hop enclosing subgraph ${G}^{h}\left( v\right)$ of $v$ to another subgraph ${\widetilde{G}}^{h}\left( v\right)$ and then use the pigeonhole principle to count the possible isomorphic ${\widetilde{G}}^{h}\left( v\right)$ . After obtaining these locally isomorphic nodes, we denote the set of these nodes as ${V}_{iso}$ . The second step is to find the non-isomorphic links.
+
+Step 2. We partition ${V}_{iso} = { \cup }_{i = 1}{V}_{i}$ where ${V}_{i}$ is the subset of nodes connected to virtual node ${s}_{i}$ . For simplicity, we call each ${V}_{i}$ a cluster, and the sizes of different clusters are assumed to be the same, $\left| {V}_{i}\right| = \left| {V}_{iso}\right| /m$ . Consider two nodes $u \in {V}_{i}$ and $v \in {V}_{j}$ from different clusters. Since both of them are in ${V}_{iso}$ , they have identical $h$ -hop neighborhood structures, and an $h$ -layer 1-WL-GNN will give them the same representations. Then let us select another node $w$ in ${V}_{i}$ ; an $h$ -layer 1-WL-GNN will also make $(u, w)$ and $(v, w)$ have the same representation.
+
+However, if we use virtual nodes to label nodes and give them additional features, then, because $u, w$ are in the same cluster while $v, w$ belong to different clusters, $(u, w)$ will have a different representation from $(v, w)$ . Counting the number of such non-isomorphic link pairs $Y$ , we obtain:
+
+$$
+Y \geq \frac{1}{2}\mathop{\sum }\limits_{{i, j = 1, j \neq i}}^{m}\left| {V}_{i}\right| \left( {\left| {V}_{i}\right| - 1}\right) \left| {V}_{j}\right|
+$$
+
+$$
+= \frac{1}{2}m\left( {m - 1}\right) \left( {\left( {\frac{\left| {V}_{iso}\right| }{m} - 1}\right) {\left( \frac{\left| {V}_{iso}\right| }{m}\right) }^{2}}\right)
+$$
+
+Substituting $\left| {V}_{iso}\right| = \omega \left( {n}^{\epsilon }\right)$ into the above inequality, we get
+
+$$
+Y \geq \frac{1}{2}m\left( {m - 1}\right) \omega \left( {\left( \frac{{n}^{\epsilon }}{m} - 1\right) }^{3}\right)
+$$
+
+$$
+= \omega \left( {{\left( m - 1\right) }^{2}{\left( \frac{{n}^{\epsilon }}{m} - 1\right) }^{3}}\right)
+$$
+
+### B.2 Node Influence
+
+In the following, without loss of generality, we take a $k$ -layer GCN [30] as the example, and hence consider layers described as follows:
+
+$$
+{h}_{v}^{l} = \operatorname{ReLU}\left( {{W}_{l}\frac{1}{\deg \left( v\right) }\mathop{\sum }\limits_{{u \in N\left( v\right) }}{h}_{u}^{l - 1}}\right) .
+$$
+
+Influence Score. We measure the sensitivity of a node $y$ on a node $x$ by the influence score [8] $I\left( {x, y}\right) = {e}^{T}\frac{\partial {h}_{x}^{k}}{\partial {h}_{y}^{0}}$ , where $e$ is a vector of all ones and ${h}_{x}^{k}$ is the embedding of $x$ at the ${k}^{th}$ layer. The influence score is known to be proportional in expectation to the $k$ -step random walk distribution ${P}_{rw}$ from $x$ to $y$ :${}^{4}$
+
+$$
+\mathbb{E}\left\lbrack {I\left( {x, y}\right) }\right\rbrack \propto {P}_{rw}\left( {x \rightarrow y, k}\right) = \mathop{\sum }\limits_{{r \in {R}^{k}}}\mathop{\prod }\limits_{{\ell = 1}}^{k}\frac{1}{\deg \left( {v}_{r}^{\ell }\right) }, \tag{3}
+$$
+
+$\left( {{v}_{r}^{0},{v}_{r}^{1},\ldots ,{v}_{r}^{k}}\right)$ are the nodes in the path $r$ from $x \mathrel{\text{:=}} {v}_{r}^{0}$ to $y \mathrel{\text{:=}} {v}_{r}^{k}$ , and ${R}^{k}$ is the set of paths of length $k$ . In what follows, we exploit the relationship between the influence score and the probability ${P}_{rw}$ and argue in terms of the latter. In particular, we will show how ${P}_{rw}$ changes in view of virtual nodes. Note that we assume all the paths of message passing have the same probability. We assume a self-loop at each regular graph node; this is standard and supported by Equation (1). Hence, the denominator in the above equation changes slightly:
+
+$$
+{P}_{rw}\left( {x \rightarrow y, k}\right) = \mathop{\sum }\limits_{{r \in {R}^{k}}}\mathop{\prod }\limits_{{\ell = 1}}^{k}\frac{1}{\deg \left( {v}_{r}^{\ell }\right) + 1}. \tag{4}
+$$
+
+We neglect the self-loops with virtual nodes only for reasons of readability. It can be readily checked that the subsequent equations hold similarly with an additional "+1" in the denominators. For simplicity, we further consider the graph to be $r$ -regular; in the standard case without virtual nodes, Equation (4) then simplifies to:
+
+$$
+{P}_{rw}\left( {x \rightarrow y, k}\right) = \frac{\left| {R}^{k}\right| }{{\left( r + 1\right) }^{k}}. \tag{5}
+$$
+
+We hypothesize that we can come to similar conclusions in a general graph with average degree $r$ .
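+
+As a side remark, the influence score itself can be estimated numerically via automatic differentiation. The sketch below (a toy $k$ -layer mean-aggregation GCN with random weights on a dense adjacency, not our evaluation code) sums the absolute entries of the Jacobian of ${h}_{x}^{k}$ with respect to ${h}_{y}^{0}$ .
+
+```python
+import torch
+
+def influence_score(adj: torch.Tensor, x: int, y: int, k: int = 2, dim: int = 4) -> float:
+    """Sum of |d h_x^k / d h_y^0| entries for a toy k-layer mean-aggregation GCN."""
+    n = adj.size(0)
+    weights = [torch.randn(dim, dim) for _ in range(k)]
+    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
+
+    def embed_x(h0: torch.Tensor) -> torch.Tensor:
+        h = h0
+        for w in weights:
+            h = torch.relu((adj @ h / deg) @ w)
+        return h[x]  # final embedding of node x
+
+    h0 = torch.randn(n, dim)
+    jac = torch.autograd.functional.jacobian(embed_x, h0)  # shape (dim, n, dim)
+    return jac[:, y, :].abs().sum().item()
+
+# Example on a small ring graph with self-loops:
+# adj = torch.eye(5) + torch.roll(torch.eye(5), 1, 0) + torch.roll(torch.eye(5), -1, 0)
+# print(influence_score(adj, x=0, y=2))
+```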
+
+Impact of One Virtual Node. We focus on the message passing between two nodes $x$ and $y$ , in layer $k$ , and calculate ${P}_{rw}^{s}\left( {x \rightarrow y, k}\right)$ , the influence score in the setting with virtual nodes, here with one, $s$ . In particular, we assume $x, y \in V$ and hence $x, y \notin \{ s\}$ . We argue inductively, based on $k$ , and, for each GNN layer, separate the impact of the messages coming from the virtual node. For $k = 1$ , there are two cases: if $y$ is not connected to $x$ , the influence changes from 0 to $\frac{1}{\left( r + 2\right) }$ ; if $y$ is connected to $x$ , the influence score is:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y,1}\right) = {P}_{rw}\left( {x \rightarrow y,1}\right) + {P}_{rw}\left( {x \rightarrow s, s \rightarrow y}\right)
+$$
+
+$$
+= \frac{1}{\left( r + 2\right) } + \frac{1}{\left( {r + 2}\right) \left| V\right| }. \tag{6}
+$$
+
+Note that the probability for $x \rightarrow s$ is the same as from $x$ to any other neighbor; the factor $\frac{1}{\left| V\right| }$ for $s \rightarrow y$ follows from the $\left| V\right|$ nodes connected to $s$ .
+
+For $k \geq 2$ , we obtain:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y, k}\right) = {P}_{rw}\left( {x \rightarrow y, k}\right) + {P}_{rw}\left( {x \rightarrow s, s \rightarrow y}\right) {P}_{rw}^{s}\left( {x \rightarrow y, k - 1}\right)
+$$
+
+$$
+= \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| V\right| }{P}_{rw}^{s}\left( {x \rightarrow y, k - 1}\right) . \tag{7}
+$$
+
+---
+
+${}^{4}$ See Theorem 1 in [8]. Note that the theorem makes some simplifying assumptions that all paths in the computation graph of the model are activated with the same probability of success. Nevertheless, empirical experiments presented in [8] confirm that the theory is close to what happens in practice. In addition, the GCN is assumed to use a simple average as AGG function. However, the factor in the equation can be easily adapted to other GNNs.
+
+---
+
+Multiple Virtual Nodes. In view of multiple virtual nodes, the above analysis gets more appealing.
+
+We continue along the above lines and assume there is a shortest path of length $\leq k$ between $x$ and $y$ . If $x$ and $y$ connect to the same virtual node $s$ , then Equation (7) changes as follows:
+
+$$
+{P}_{rw}^{ms}\left( {x \rightarrow y, k}\right) = \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| {C}_{s}\right| }{P}_{rw}^{ms}\left( {x \rightarrow y, k - 1}\right) . \tag{8}
+$$
+
+Since the set ${C}_{s}$ of nodes connecting to $s$ is much smaller than $V$ , i.e., $\left| {C}_{s}\right| \ll \left| V\right|$ , the impact of multiple virtual nodes on the influence score is greater than that of a single virtual node. In the case that $x$ and $y$ do not connect to the same virtual node, the probability just slightly decreases. The maximum possible decrease occurs when no nodes in the path between $x$ and $y$ (including $x$ and $y$ ) are connected to a common virtual node: ${\delta }_{wc} = \frac{\left| {R}^{k}\right| }{{\left( r + 1\right) }^{k}} - \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}}$ ; here we subtract from the regular ${P}_{rw}$ our ${P}_{rw}^{ms}$ , in which the second (virtual node) component is 0.
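+
+To make this concrete, the short sketch below evaluates the recursions in Equations (7) and (8) for illustrative values ($r = 10$ , $\left| V\right| = 10000$ , $\left| {C}_{s}\right| = 100$ , and assumed base terms $\left| {R}^{k}\right| /{\left( r + 2\right) }^{k}$); the virtual-node correction is visibly larger when the pool is the small cluster ${C}_{s}$ .
+
+```python
+def p_with_virtual(base_terms, r, pool_size):
+    """Evaluate the recursion p_k = |R^k|/(r+2)^k + p_{k-1} / ((r + 2) * pool_size).
+
+    base_terms[k-1] stands for the assumed value of |R^k| / (r + 2)^k; pool_size is |V|
+    for a single virtual node (Eq. (7)) and |C_s| for a shared virtual node (Eq. (8)).
+    """
+    p = 1.0  # starting from 1 makes the first iteration reproduce Eq. (6) for k = 1
+    for base in base_terms:
+        p = base + p / ((r + 2) * pool_size)
+    return p
+
+r = 10
+base = [1e-4, 1e-4, 1e-4]                            # assumed |R^k| / (r + 2)^k for k = 1, 2, 3
+single = p_with_virtual(base, r, pool_size=10_000)   # one virtual node connected to all |V| nodes
+shared = p_with_virtual(base, r, pool_size=100)      # virtual node shared only by the cluster C_s
+print(single, shared)  # the virtual-node correction term is larger for the small shared pool
+```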
+
+Impact on Over-Smoothing. The idea is to show that multiple virtual nodes help to preserve local information at the graph nodes. To this end, we consider the influence of $x$ onto itself. For the setting with a single virtual node and $k = 1$ , the change in influence score is
+
+$$
+{\delta }^{s}\left( {x, k = 1}\right) = {P}_{rw}^{s}\left( {x \rightarrow x,1}\right) - {P}_{rw}\left( {x \rightarrow x,1}\right)
+$$
+
+$$
+= \frac{1}{\left( r + 2\right) } + \frac{1}{\left( {r + 2}\right) \left| V\right| } - \frac{1}{\left( r + 1\right) } \tag{9}
+$$
+
+$$
+= \frac{\left( {1 + \frac{1}{\left| V\right| }}\right) \left( {r + 1}\right) - \left( {r + 2}\right) }{\left( {r + 1}\right) \left( {r + 2}\right) }
+$$
+
+$$
+= \frac{\frac{1}{\left| V\right| }\left( {r + 1}\right) - 1}{\left( {r + 1}\right) \left( {r + 2}\right) } < 0.
+$$
+
+This means the node will preserve less information for itself at each layer, considering the message coming from the single virtual node. However, in view of multiple virtual nodes, we come to a different conclusion.
+
+$$
+{\delta }^{ms}\left( {x, k = 1}\right) = \frac{1}{\left( r + 2\right) } + \frac{1}{\left( {r + 2}\right) \left| {C}_{s}\right| } - \frac{1}{\left( r + 1\right) }
+$$
+
+$$
+= \frac{\left( {1 + \frac{1}{\left| {C}_{s}\right| }}\right) \left( {r + 1}\right) - \left( {r + 2}\right) }{\left( {r + 1}\right) \left( {r + 2}\right) }
+$$
+
+$$
+= \frac{\frac{1}{\left| {C}_{s}\right| }\left( {r + 1}\right) - 1}{\left( {r + 1}\right) \left( {r + 2}\right) }
+$$
+
+Since $\left| {C}_{s}\right| \ll \left| V\right|$ , we obtain ${\delta }^{ms}\left( {x, k = 1}\right) \gg \frac{\frac{1}{\left| V\right| }\left( {r + 1}\right) - 1}{\left( {r + 1}\right) \left( {r + 2}\right) } = {\delta }^{s}\left( {x, k = 1}\right)$ , which means we can preserve much more local information than in the setting with a single virtual node. Especially when $\left| {C}_{s}\right| \leq r + 1$ , the self-transition probability is even higher than in the original setting without virtual nodes. We encounter this scenario practically especially with dense graphs. This fits nicely since these graphs are particularly prone to over-smoothing and, as shown in $\left\lbrack {8,{16}}\right\rbrack$ , additional capability to preserve local information in message passing steps helps to reduce over-smoothing.
+
+For $k \geq 2$ ,
+
+$$
+{\delta }^{ms}\left( {x, k}\right) = \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| {C}_{s}\right| }{P}_{rw}^{ms}\left( {x \rightarrow x, k - 1}\right) - \frac{\left| {R}^{k}\right| }{{\left( r + 1\right) }^{k}}
+$$
+
+Assuming ${P}_{rw}^{ms}\left( {x \rightarrow x, k - 1}\right) > {P}_{rw}^{s}\left( {x \rightarrow x, k - 1}\right)$ , and since $\left| {C}_{s}\right| < \left| V\right|$ , we get
+
+$$
+{\delta }^{ms}\left( {x, k}\right) > \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| V\right| }{P}_{rw}^{s}\left( {x \rightarrow x, k - 1}\right) - \frac{\left| {R}^{k}\right| }{{\left( r + 1\right) }^{k}} = {\delta }^{s}\left( {x, k}\right) .
+$$
+
+Adding the condition that ${\delta }^{ms}\left( {x, k = 1}\right) > {\delta }^{s}\left( {x, k = 1}\right)$ , we know that for any $k$ , multiple virtual nodes can preserve more local information than single nodes.
+
+Table 2: Data. All graphs are undirected, have no edge features, and all but ddi have node features.
+
+| Dataset | #Nodes | #Edges | Average Node Deg. | Average Clust. Coeff. | MaxSCC Ratio | Graph Diameter |
+| --- | --- | --- | --- | --- | --- | --- |
+| ddi | 4,267 | 1,334,889 | 500.5 | 0.514 | 1.000 | 5 |
+| collab | 235,868 | 1,285,465 | 8.2 | 0.729 | 0.987 | 23 |
+
+### B.3 Relationship with Labeling Tricks
+
+Although the concept of link representation is from [6], we would like to clarify that our labeling strategy is not a valid labeling trick by the definition of [6].
+
+Consider an undirected graph $G$ as described in Section 2. In addition, the tensor $\mathbf{A} \in {\mathbb{R}}^{n \times n \times k}$ contains all node and edge features (if available). The diagonal components ${\mathbf{A}}_{v, v, : }$ denote the node features, while the off-diagonal components ${\mathbf{A}}_{u, v, : }$ denote the edge features of edge $(u, v)$ . The labeling trick uses a target node set $S \subseteq V$ and a labeling function to label all nodes in the node set $V$ and stack the labels with $\mathbf{A}$ . A valid labeling trick must meet two conditions: (1) the nodes in $S$ have different labels from the rest of the nodes, and (2) the labeling function must be permutation invariant.
+
+Using virtual nodes is not a valid labeling trick for the following two reasons: First, the virtual node set $S$ is not a subset of the graph nodes $V$ , and we use addition instead of concatenation. Second, even if we extend $V$ to $V \cup S$ , our labeling strategy still does not fit the permutation-invariance requirement. Nevertheless, it can achieve similar effects in learning structural link representations.
+
+## C Additional Experimental Results
+
+### C.1 Data Statistics
+
+Data statistics of ddi and collab are shown in Table 2. From the table, we can see that both datasets have good clustering structure. ddi is extremely dense and collab is sparser. It is interesting to note that on the denser ddi, our virtual node approaches achieved better performance gains.
+
+### C.2 Model Configurations and Training
+
+We trained all models for 80 runs using the Bayesian optimization provided by wandb${}^{5}$ and the following hyperparameters.
+
+- hidden dimension: 32, 64, 128, 256
+- learning rate: 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001
+- dropout: 0, 0.3, 0.6
+- # of layers: 1-7
+- # of virtual nodes (random): 1-10
+- # of virtual nodes: 1, 2, 4, 8, 16, 32, 64
+- SGC - K: 2-7
+- APPNP - $\alpha$: 0.05, 0.1, 0.2, 0.3
+- GNN-GDC - k: 64, 128
+- GNN-GDC - $\alpha$: 0.05, 0.1, 0.2, 0.3
+
+Please note that we considered these wide ranges of values only in order to find a good general setting. For practical usage, a hidden dimension of 256, a learning rate of 0.0001, and a dropout of 0.3 should work well; only on small graphs might a dropout of 0 work better. As usual, the number of layers depends on the type of data; however, note that the virtual nodes make it possible to use more than the usual 2-3 layers. Generally, higher numbers of virtual nodes work better, in line with our theoretical results.
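+
+For reference, a hedged sketch of how such a Bayesian search could be configured as a wandb sweep is given below; the exact sweep file is not reproduced here, and the metric name, project name, and parameter keys are illustrative assumptions mirroring the grid above.
+
+```python
+import wandb
+
+# Illustrative sweep configuration; metric and project names are assumptions.
+sweep_config = {
+    "method": "bayes",
+    "metric": {"name": "val_hits", "goal": "maximize"},
+    "parameters": {
+        "hidden_dim": {"values": [32, 64, 128, 256]},
+        "lr": {"values": [0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]},
+        "dropout": {"values": [0.0, 0.3, 0.6]},
+        "num_layers": {"values": [1, 2, 3, 4, 5, 6, 7]},
+        "num_virtual_nodes": {"values": [1, 2, 4, 8, 16, 32, 64]},
+    },
+}
+
+def train():
+    run = wandb.init()
+    cfg = wandb.config
+    # ... build the GNN from cfg, train it, and log "val_hits" each epoch ...
+    run.finish()
+
+sweep_id = wandb.sweep(sweep=sweep_config, project="virtual-nodes-linkpred")
+wandb.agent(sweep_id, function=train, count=80)  # 80 runs per model, as above
+```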
+
+Also note that we used fewer virtual nodes in the selection for the models $\left( {-\mathrm{{RM}}, - {\mathrm{{RM}}}^{F}}\right)$ , since especially ${-\mathrm{{RM}}}^{F}$ was very slow and preliminary results showed that larger numbers did not change the results greatly, probably due to the randomness. We used at most 64 virtual nodes due to memory issues with larger numbers (e.g., 128), especially on the larger datasets. For the first clustering in GNN-CM ${}^{ + }$ , we created 200 clusters on ddi and collab. We used 500 epochs with a patience of 30. Furthermore, for collab, we used the validation edges during testing (OGB contains both settings, with and without them).
+
+---
+
+5 https://wandb.ai/site
+
+---
+
+
+
+Figure 3: Performance depending on layers: Hits@k and time per epoch (sec.); ddi (left), collab.
+
+We tuned all models for 80 runs, and thereafter ran the models with the best 3 configurations for 3 runs and chose the best of these models as the final model (configuration). We trained as suggested by the OGB (e.g., the splits, negative sampling) but used a batch size of ${2}^{12}$ .
+
+### C.3 Additional Results
+
+Using Virtual Nodes Only at the Last GNN Layer, Table 3 As we discussed in Sec. 4, we investigated using virtual nodes only at the last layer and compared it with our proposed method. The results are shown in Table 3. ${\mathrm{{VN}}}_{OL}$ and ${\mathrm{{CM}}}_{OL}$ indicate the ablation models which use virtual nodes only at the last layer. The results show that these ablation models decrease the performance considerably, which means that using virtual nodes only at the last layer is not enough.
+
+Table 3: Comparison of using the virtual nodes at every and only at the last layer; Hits@20, ddi.
+
+| Method | GCN | SAGE | GIN |
+| --- | --- | --- | --- |
+| w/o virtual nodes | 0.5062 ± 0.2186 | 0.6128 ± 0.2122 | 0.4829 ± 0.1608 |
+| - VN | 0.5932 ± 0.2390 | 0.7160 ± 0.1457 | 0.6523 ± 0.0446 |
+| - ${\mathrm{VN}}_{OL}$ | 0.6180 ± 0.0088 | 0.5167 ± 0.1364 | 0.6472 ± 0.0542 |
+| - CM | 0.6322 ± 0.1565 | **0.8819 ± 0.0341** | **0.6544 ± 0.0960** |
+| - ${\mathrm{CM}}_{OL}$ | **0.6338 ± 0.1188** | 0.6151 ± 0.1545 | 0.4420 ± 0.1694 |
+
+Impact of Virtual Nodes on Number of GNN Layers and Efficiency, Figure 3. For the virtual node models, the scores keep increasing with the number of layers for longer; GCN drops earlier. On ddi, GCN-VN and -CM reach their best scores at 6 and 8 layers, respectively, which is remarkable for that very dense dataset, being prone to over-smoothing. On collab, it is the other way around. The figure also gives an idea of the runtime increase when using virtual nodes. It compares the 6-layer models, and shows the 4-layer GCN-CM, which obtains performance similar to the 6-layer GCN-VN.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dI6KBKNRp7/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/dI6KBKNRp7/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1b37e861842111195363026ed9f3d30df4be9284
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dI6KBKNRp7/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,181 @@
+§ AN ANALYSIS OF VIRTUAL NODES IN GRAPH NEURAL NETWORKS FOR LINK PREDICTION (EXTENDED ABSTRACT)
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ 1 INTRODUCTION
+
+It is well known that the graph classification performance of graph neural networks (GNNs) often improves by adding an artificial virtual node to the graphs, which is connected to all graph nodes [1-4]. While virtual nodes were originally thought of as aggregated representations of the entire graph, they also provide shortcuts for message passing between nodes along the graph edges. Surprisingly, the advantages of virtual nodes have never been theoretically investigated, and their impact on other problems is still an open research question. We adapt and study the virtual node concept for problems over networks, which are usually larger, often very sparse or dense, and overall more heterogeneous.
+
+Many popular GNNs are based on message passing, which computes node embeddings by iteratively aggregating the features of (usually direct) neighbor nodes along the graph edges [1]. In this way, they are able to distinguish (non-)isomorphic nodes (to a great extent) [5], but this does not transfer to links [6]; for links, extra procedures may be needed (e.g., modeling enclosing subgraphs [7]). Furthermore, on large graphs, GNNs may face the under-reaching problem if long-range dependencies beyond the model's computing radius are important for the problem at hand (e.g., complex chains of protein-protein interactions). Over dense graphs, GNNs with many layers struggle with over-smoothing, i.e., node representations converging to similar values. There have been several proposals to overcome these problems. On the one hand, several works propose techniques that allow for larger numbers of GNN layers [8-14]. However, as shown in our later results, many of them do not perform well on link prediction tasks, especially on comparably dense graphs. On the other hand, there are approaches that adapt message passing to consider neighbors beyond the one-hop neighborhood: based on graph diffusion [15-20] and other theories [21, 22]. Yet, most of these models are relatively complex and, in fact, in our experiments over the challenging graphs from the Open Graph Benchmark (OGB) [23], several ran out of memory. In this paper, we show that virtual nodes may alleviate these typical issues of GNNs over larger graphs.
+
+We focus on link prediction, ${}^{1}$ which is important in view of incomplete graph data in practice in various different domains [24-27]. Numerous models have been proposed to solve this problem in the past, ranging from knowledge-graph-specific predictors [27] to GNNs [7, 24]. We explore the application and effects of virtual nodes in link prediction both theoretically and empirically:
+
+ * We propose to use multiple virtual nodes in the network graph scenario and describe a graph-based technique to connect them to the graph nodes. In a nutshell, we use a graph clustering algorithm to determine groups of nodes in the graph belonging together and then connect these nodes to a common virtual node (see Figure 1). In this way, we add expressiveness, and under-reaching is decreased because clustered nodes can share information easily; meanwhile, the nodes are spared of unnecessary information from unrelated nodes (i.e., in contrast to the single virtual node model).
+
+ * We also investigate alternative methods to determine the virtual node connections (i.e., randomization) and compare to the original model with a single virtual node.
+
+ * We provide the first theoretical analysis of the benefits of virtual nodes in terms of (I) expressiveness of the learned link representation and (II) potential impact on under-reaching and over-smoothing.
+
+ * We conducted experiments over challenging datasets, and provide ablation studies and a detailed analysis of important factors.
+
+${}^{1}$ Most results can be easily extended to node classification.
+
+[graphics]
+
+Figure 1: Multiple virtual nodes increase expressiveness: a regular GNN computes the same representation for isomorphic nodes ${v}_{2}$ and ${v}_{3}$ , and hence cannot discriminate links $\left\{ {{v}_{1},{v}_{2}}\right\}$ and $\left\{ {{v}_{1},{v}_{3}}\right\}$ . The embeddings of the nodes can be influenced by virtual nodes ${s}_{1}$ and ${s}_{2}$ and become different.
+
+Altogether, we show that, also for link prediction, virtual nodes are simple but powerful extensions that may yield rather stable performance increases for various standard GNNs. Since the latter represent simple and proven models which are especially interesting for applications, our study provides practical guidance and explanations about where and how virtual nodes may provide benefits. In this abstract, we give an overview of our main findings; for details and additional results, see the appendix.
+
+§ 2 PRELIMINARIES
+
+Link Prediction. We consider a graph $G = \left( {V,E}\right)$ with nodes $V$ and undirected edges $E \subseteq V \times V$ . This basic choice is only for ease of presentation; our techniques work for directed graphs and (with simple adaptation) for graphs with labelled nodes (edges). We assume $V$ to be ordered and may refer to a node by its index in $V$ . For a node $v \in V,{\mathcal{N}}_{v}$ denotes the set of its neighbors. Given two nodes, the link prediction task is to predict if there is a link between them.
+
+Message-Passing Graph Neural Networks. In this paper, we usually use the term graph neural networks (GNNs) to denote GNNs that use message passing as described in [1]. These networks compute for every $v \in V$ a node representation ${h}_{v}^{\ell }$ at layer $\ell$ , by aggregating its neighbor nodes based on a generic aggregation function and then combine the obtained vector with ${h}_{v}^{\ell - 1}$ as below.
+
+$$
+{h}_{v}^{\ell } = {\operatorname{COMB}}^{\ell }\left( {{h}_{v}^{\ell - 1},{\operatorname{AGG}}^{\ell }\left( \left\{ {{h}_{u}^{\ell - 1} \mid u \in {\mathcal{N}}_{v}}\right\} \right) }\right) \tag{1}
+$$
+
+Link prediction with GNNs is usually done by combining the final representations of nodes $u,v$ under consideration and passing them through several feed-forward layers with a final sigmoid function for scoring. Our implementation follows this approach.
+
+§ 3 VIRTUAL NODES IN GRAPH NEURAL NETWORKS FOR LINK PREDICTION
+
+Multiple Virtual Nodes. The intuition of using virtual nodes is to provide a shortcut for sharing information between the graph nodes. However, the amount of information in a graph with possibly millions of nodes is enormous, and likely too much to be captured in a single virtual node embedding. Further, not all information is equally relevant to all nodes. Therefore, we suggest using multiple virtual nodes $S = \left\{ {s}_{1},{s}_{2},\ldots ,{s}_{n}\right\}$,${}^{2}$ each being connected to a subset of graph nodes, as determined by an assignment $\sigma : V \rightarrow \left\lbrack {1,n}\right\rbrack$ ; $n$ is a hyperparameter. We consider two options to obtain $\sigma$ :
+
+Randomness: GNN-RM. Most simply, we can determine a fixed $\sigma$ randomly once with initialization.
+
+Clustering: GNN-CM. Many types of graph data incorporate some cluster structure that reflects which nodes belong closely together (e.g., collaboration or social networks). We propose to connect nodes in such a cluster to a common virtual node, such that the structure inherent to the given graph is reflected in our virtual node assignment $\sigma$ . More precisely, during initialization, we use a generic clustering algorithm (e.g., METIS [28]) which, given a number $m$ , creates a set $C =$ $\left\{ {{C}_{1},{C}_{2}\ldots ,{C}_{m}}\right\}$ of clusters (i.e., sets of graph nodes) by computing an assignment $\rho : V \rightarrow \left\lbrack {1,m}\right\rbrack$ , assigning each graph node to a cluster. We then set $m = n$ and $\sigma = \rho$ .
+
+The Model. We integrate the multiple virtual nodes into a generic message-passing GNN in a straightforward way extending the approach from [23] to include multiple virtual nodes, computing
+
+${}^{2}$ Since notation $V$ is standard for nodes, we use $S$ for the set of virtual nodes. Think of "supernodes".
+
+node representations ${h}_{v}^{\ell }$ for a node $v \in V$ at layer $\ell$ as below. The highlighted adaptation of the standard GNN (Equation (1)) is only minor, but powerful. In our implementation, ${\mathrm{{COMB}}}_{\mathrm{{VN}}}^{\ell }$ is addition combined with linear layers and layer normalization, ${\mathrm{{AGG}}}_{\mathrm{{VN}}}^{\ell }$ is a sum.
+
+$$
+{h}_{{s}_{i}}^{\ell } = {\operatorname{COMB}}_{\mathrm{{VN}}}^{\ell }\left( {{h}_{{s}_{i}}^{\ell - 1},{\operatorname{AGG}}_{\mathrm{{VN}}}^{\ell }\left( \left\{ {{h}_{u}^{\ell - 1} \mid u \in V,\sigma \left( u\right) = i}\right\} \right) }\right)
+$$
+
+$$
+{h}_{v}^{\ell } = {\operatorname{COMB}}^{\ell }\left( {{h}_{v}^{\ell - 1} + {h}_{{s}_{\sigma \left( v\right) }}^{\ell },{\operatorname{AGG}}^{\ell }\left( \left\{ {{h}_{u}^{\ell - 1} \mid u \in {\mathcal{N}}_{v}}\right\} \right) }\right)
+$$
+
+§ 3.1 ANALYSIS I: VIRTUAL NODES INCREASE EXPRESSIVENESS
+
+Additional structure-related features such as distance encodings [6, 29] are known to make graph representation learning more powerful. Our multiple virtual nodes have a similar effect. Figure 1 gives an intuition of how they can increase the expressiveness of the regular 1-WL-GNN; see also Thm. B.1.
+
+§ 3.2 ANALYSIS II: VIRTUAL NODES IMPACT NODE INFLUENCE
+
+We assume we can learn useful embeddings for virtual nodes if the assignment is chosen appropriately. Based on the above analysis, we expect virtual nodes to positively impact learning and prediction performance. Following [8,16], we measure the sensitivity of a node $y$ on a node $x$ by the influence score. For a $k$ -layer GCN, this score is known to be proportional in expectation to the $k$ -step random walk distribution ${P}_{rw}$ from $x$ to $y$ .${}^{3}$ We exploit this relationship and argue in terms of ${P}_{rw}$ .
+
+Impact of Virtual Nodes. For simplicity, we consider the influence score in an $r$ -regular graph. Consider the message passing between two nodes $x$ and $y$ . For $k = 1$ , there are two cases: if $y$ is not connected to $x$ , the influence changes from 0 to $\frac{1}{\left( r + 2\right) }$ ; otherwise:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y,1}\right) = {P}_{rw}\left( {x \rightarrow y,1}\right) + {P}_{rw}\left( {x \rightarrow s,s \rightarrow y}\right) = \frac{1}{\left( r + 2\right) } + \frac{1}{\left( {r + 2}\right) \left| V\right| }.
+$$
+
+For $k \geq 2$ , by adding a virtual node $s$ in one GNN layer, the probability changes to:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y,k}\right) = {P}_{rw}\left( {x \rightarrow y,k}\right) + {P}_{rw}\left( {x \rightarrow s,s \rightarrow y}\right) {P}_{rw}^{s}\left( {x \rightarrow y,k - 1}\right)
+$$
+
+$$
+= \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| V\right| }{P}_{rw}^{s}\left( {x \rightarrow y,k - 1}\right) .
+$$
+
+Multiple Virtual Nodes. We continue along these lines and assume there is a shortest path of length $\leq k$ between $x$ and $y$ . If $x$ and $y$ connect to the same virtual node $s$ , then the above changes to:
+
+$$
+{P}_{rw}^{s}\left( {x \rightarrow y,k}\right) = \frac{\left| {R}^{k}\right| }{{\left( r + 2\right) }^{k}} + \frac{1}{\left( {r + 2}\right) \left| {C}_{s}\right| }{P}_{rw}^{s}\left( {x \rightarrow y,k - 1}\right) . \tag{2}
+$$
+
+Since the set ${C}_{s}$ of nodes connecting to $s$ is much smaller than $V$ , multiple virtual nodes can increase the impact of potentially important distant nodes more than a single virtual node.
+
+Impact on Over-Smoothing. The idea is to show that multiple virtual nodes help preserve more local information. If we consider the influence of $x$ onto itself, we can show that, with a single virtual node, a graph node can preserve less information for itself at each layer. However, this changes in view of multiple virtual nodes; in particular, when $\left| {C}_{s}\right| \leq r + 1$ . We encounter this scenario practically especially with dense graphs. This fits nicely since dense graphs are particularly prone to over-smoothing and, as shown in $\left\lbrack {8,{16}}\right\rbrack$ , additional capability to preserve local information in message passing steps helps to reduce over-smoothing. More details are shown in Appendix B.2.
+
+§ 4 EVALUATION
+
+Datasets. We use two datasets from OGB: ddi, a drug-drug interaction network; and collab, an author collaboration network [23]. Data statistics are in Appendix (Table 2). ddi is dense with a low graph diameter; while collab is sparser with large diameter. Both have high clustering coefficients.
+
+Models. Standard GNNs: GCN [30] and SAGE [31], which we extend with virtual nodes; deep GNNs: SGC, APPNP, DeeperGCN, GCN-JKNet; message passing beyond the direct neighborhood: P-GNN [22], APPNP [16], GDC [20]; and an advanced GNN-based link predictor: SEAL [7].
+
+${}^{3}$ See Theorem 1 in [8]; that theorem makes some simplifying assumptions (e.g., on the shape of GCN).
+
+Table 1: Comparison of virtual-node augmented GNNs to models with similar goal; *: from OGB leaderboard.
+
+Model | ddi Hits@20 | collab Hits@50
+SEAL* | 30.56 ± 3.86 | 64.74 ± 00.43
+DeeperGCN* | n/a | 52.73 ± 00.47
+SGC | 06.76 ± 05.86 | 46.35 ± 01.97
+P-GNN | 10.50 ± 00.00 | mem.
+APPNP | 14.92 ± 02.98 | 31.85 ± 02.05
+GCN | 40.76 ± 10.73 | 49.55 ± 00.64
+GCN-GDC | 25.50 ± 12.42 | mem.
+GCN+JKNet* | 60.56 ± 08.69 | n/a
+GCN+LRGA* | 62.30 ± 09.12 | 52.21 ± 00.72
+GCN-VN | 62.17 ± 12.41 | 50.49 ± 00.88
+GCN-RM | 55.32 ± 12.62 | 50.83 ± 01.09
+GCN-CM | 61.05 ± 15.63 | 51.81 ± 00.76
+SAGE | 61.73 ± 10.68 | 55.16 ± 01.71
+SAGE-GDC | 31.41 ± 12.54 | mem.
+SAGE+edges* | 74.95 ± 03.17 | n/a
+SAGE-VN | 64.91 ± 13.60 | 58.75 ± 00.91
+SAGE-RM | 70.68 ± 11.74 | 58.30 ± 00.87
+SAGE-CM | 76.21 ± 11.57 | 60.17 ± 01.37
+
+
+Figure 2: Impact of virtual node number; ddi (top) and collab (bottom).
+
+Results, Table 1. Overall Impact of Virtual Nodes. The common approach of using a single virtual node (GNN-VN) yields good improvements on ddi and slight improvements on collab. The numbers for GNN-RM reflect the randomness of their connections to the virtual nodes: there is no clear trend, but they clearly outperform the original models. The virtual node assignment based on the graph structure (GNN-CM) yields consistently good improvements on ddi and collab. We note that we obtained some ambiguous results on data with less cluster structure, but overall we observe a positive impact.
+
+Model Comparison. The results of the best models from the OGB leaderboard vary strongly across the different datasets (e.g., SEAL), or have not been reported at all. Most deep GNNs and models that use complex message-passing techniques perform disappointingly and, overall, much worse than the standard GNNs. We did thorough hyperparameter tuning for these models, so this behavior is hard to explain. A possible reason is that most of their original evaluations focus on node or graph classification and consider very different types of data. The model closest to our approach is the position-aware graph neural network (P-GNN) [22]. It assigns nodes to random subsets of nodes called "anchor-sets", and then learns a non-linear aggregation scheme that combines node feature information from each anchor-set and weighs it by the distance between the node and the anchor-set. So, it creates a message for each node for every anchor-set, instead of for each direct neighbor. The fact that it ran out of memory on collab shows that practice may benefit from simpler or more efficient schemes.
+
+Impact of Virtual Node Number, Figure 2. The configurations of the best models provided in the appendix show that the chosen numbers of virtual nodes are indeed random for the "random" models, but GNN-CM consistently uses a high number of virtual nodes, which also suits it better according to our theoretical analysis. In line with this, the more detailed analysis varying the number of virtual nodes yields the best results (also in terms of standard deviations) for SAGE-CM at rather high values. For GCN, we do not see a clear trend, but (second) best performance with 64 virtual nodes.
+
+Using Virtual Nodes Only at the Last GNN Layer, Table 3 (Appendix C.2). [32] show that using a fully connected adjacency matrix at the last layer of a standard GNN helps to better capture information over long ranges. We therefore investigated if it is a better architectural choice to use virtual nodes only at the last layer. However, we observed that this can lead to extreme performance drops.
+
+Conclusions and Discussions. In a nutshell, our clustering-based virtual node assignment provides stable performance increases if the graph contains good cluster structure and is sufficiently large. In smaller graphs, the GNNs alone were usually sufficient. In line with our theoretical investigation, we expect virtual nodes to be especially beneficial over dense graphs.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dK8vOIBENa3/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/dK8vOIBENa3/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8485e730479a2f19322f78eafc134b38133cae9c
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dK8vOIBENa3/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,492 @@
+# Transductive Linear Probing: A Novel Paradigm for Few-Shot Node Classification
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Few-shot node classification is tasked to provide accurate predictions for nodes from novel classes with only few representative labeled nodes. This problem has drawn tremendous attention for its projection to prevailing real-world applications, such as product categorization for newly added commodity categories on an E-commerce platform with scarce records or diagnosis for rare diseases on a patient similarity graph. To tackle such challenging label scarcity issues in the non-Euclidean graph domain, meta-learning has become a successful and predominant paradigm. More recently, inspired by the development of few-shot learning in the image domain, transferring pretrained node embeddings for few-shot node classification could be a promising alternative to meta-learning but remains unexposed. In this work, we empirically demonstrate the potential of an alternative paradigm, Transductive Linear Probing, that transfers pretrained node embeddings, which are learned from graph contrastive learning methods. We further extend the setting of few-shot node classification from standard fully supervised to a more realistic self-supervised setting, where meta-learning methods cannot be easily deployed due to the shortage of supervision from training classes. Surprisingly, even without any ground-truth labels, transductive linear probing with self-supervised graph contrastive pretraining can outperform the state-of-the-art fully supervised meta-learning based methods under the same protocol. We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs.
+
+## 1 Introduction
+
+Graph Neural Networks (GNNs) [1-4] are a family of neural network models designed for graph-structured data. In this work, we concentrate on GNNs for the node classification task, where GNNs recurrently aggregate neighborhoods to simultaneously preserve graph structure information and learn node representations. However, most GNN models focus on the (semi-)supervised learning setting, assuming access to abundant labels. This assumption could be practically infeasible due to the high cost of data collection and labeling, especially for large graphs. Moreover, recent works have manifested that directly training GNNs with limited nodes can result in severe performance degradation [5-7]. Such a challenge has led to a proliferation of studies [8-10] that try to learn fast-adaptable GNNs with extremely scarce known labels, i.e., Few-Shot Node Classification (FSNC) tasks. Particularly, in FSNC, there exist two disjoint label spaces: base classes are assumed to contain substantial labeled nodes while target novel classes only contain few available labeled nodes. If the target FSNC task contains $N$ novel classes with $K$ labeled nodes in each class, the problem is denoted as an $N$ -way $K$ -shot node classification task. Here the $K$ labeled nodes are termed as a support set, and the unlabeled nodes are termed as a query set for evaluation.
+
+Currently, meta-learning has become a prevailing and successful paradigm to tackle such a shortage of labels on graphs. Inspired by the way humans learn unseen classes with few samples via utilizing previously learned prior knowledge, a typical meta-learning based framework will randomly sample a number of episodes, or meta-tasks, to emulate the target $N$ -way $K$ -shot setting [5]. Based on this principle, various models [5-10] have been proposed, which makes meta-learning a plausible default
+
+choice for FSNC tasks. On the other hand, despite the remarkable breakthroughs that have been made, meta-learning based methods still have several limitations. First, relying on different arbitrarily sampled meta-tasks to extract transferable meta-knowledge, meta-learning based frameworks suffer from the piecemeal knowledge issue [11]. That is, only a small portion of the nodes and classes is selected per episode for training, which leads to an undesired loss of generalizability of the learned GNNs on nodes from unseen novel classes. Second, the feasibility of sampling meta-tasks relies on the assumption that there exist sufficient base classes where substantial labeled nodes are accessible. However, this assumption can be easily overturned for real-world graphs where the number of base classes can be limited, or the labels of nodes in base classes can be inaccessible. In a nutshell, these two concerns motivate us to design an alternative paradigm to meta-learning that covers more realistic scenarios.
+
+Inspired by $\left\lbrack {{12},{13}}\right\rbrack$ , we postulate that the key to solving FSNC is to learn a generalizable GNN encoder. We validate this postulation by a motivating example in Section 2.3. Then, without the episodic emulation, the proposed novel paradigm, Transductive Linear Probing (TLP), directly transfers pretrained node embeddings for nodes in novel classes learned from Graph Contrastive Learning (GCL) methods [14-19], and fine-tunes a separate linear classifier with the support set to predict labels for unlabeled nodes. GCL methods are proven to learn generalizable node embeddings by maximizing the representation consistency under different augmented views [14, 15, 20]. If the representations of nodes in novel classes are discriminative enough, probing them with a simple linear classifier should provide decent accuracy. Based on this intuition, we propose two instantiations of the TLP paradigm in this paper: TLP with the self-supervised form of GCL methods and TLP with the supervised GCL counterparts. We evaluate TLP by transferring node embeddings from various GCL methods to the linear classifier and compare TLP with meta-learning based methods under the same evaluation protocol. Moreover, we examine the effect of supervision during GCL pretraining for target FSNC tasks to further analyze what role labels from base classes play in TLP.
+
+Throughout this paper, we aim to shed new light on the few-shot node classification problem through the lens of empirical evaluations of both the "old" meta-learning paradigm and the "new" transductive linear probing paradigm. The summary of our contributions is as follows:
+
+**New Paradigm.** We are the first to break with convention and precedent to propose a novel paradigm, transductive linear probing, as a competitive alternative to meta-learning for FSNC tasks.
+
+**Comprehensive Study.** We perform comprehensive reviews on current literature and the research community and conduct a large-scale study on six widely-used real-world datasets that cover different scenarios in FSNC: (1) a sufficient number of base classes with substantial labeled nodes in each class, (2) a sufficient number of base classes with no labeled nodes in each class, (3) a limited number of base classes with substantial labeled nodes in each class, and (4) a limited number of base classes with no labeled nodes in each class. We evaluate all the compared methods under the same protocol.
+
+**Findings.** We demonstrate that despite the recent advances in few-shot node classification, meta-learning based methods struggle to outperform TLP methods. Moreover, the TLP-based methods with self-supervised GCL can outperform their supervised counterparts and those meta-learning based methods even if all the labels from base classes are inaccessible. This signifies that without label information, self-supervised GCL can focus more on node-level structural information, which results in better node representations. However, TLP also inherits a scalability limitation from GCL: its large memory consumption makes it hard to deploy on extremely large graphs. Based on these observations, we identify improving adaptability and scalability as promising directions for meta-learning based and TLP-based methods, respectively.
+
+Our implementations for experiments are released ${}^{1}$ . We hope to facilitate the sharing of insights and accelerate the progress on the goal of learning from scarcely labeled instances on graphs.
+
+## 2 Preliminaries
+
+### 2.1 Problem Statement
+
+Formally, given an attributed network $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right) = \left( {\mathbf{A},\mathbf{X}}\right)$ , where $\mathcal{V}$ denotes the set of nodes $\left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{n}}\right\}$ , $\mathcal{E}$ denotes the set of edges $\left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{m}}\right\}$ , $\mathbf{X} = \left\lbrack {{\mathbf{x}}_{1};{\mathbf{x}}_{2};\ldots ;{\mathbf{x}}_{n}}\right\rbrack \in {\mathbb{R}}^{n \times d}$ denotes all the node features, and $\mathbf{A} \in \{ 0,1\}^{n \times n}$ is the adjacency matrix representing the network structure. Specifically, ${\mathbf{A}}_{j, k} = 1$ indicates that there is an edge between node ${v}_{j}$ and node ${v}_{k}$ ; otherwise, ${\mathbf{A}}_{j, k} = 0$ . The few-shot node classification problem assumes that there exists a series of target node classification tasks, $\mathcal{T} = {\left\{ {\mathcal{T}}_{i}\right\} }_{i = 1}^{I}$ , where ${\mathcal{T}}_{i}$ denotes the given dataset of a task, and $I$ denotes the number of such tasks. We term the classes of nodes available during training as base classes (i.e., ${\mathbb{C}}_{\text{base }}$ ) and the classes of nodes during the target test phase as novel classes (i.e., ${\mathbb{C}}_{\text{novel }}$ ), with ${\mathbb{C}}_{\text{base }} \cap {\mathbb{C}}_{\text{novel }} = \varnothing$ . Notably, under different settings, labels of nodes in ${\mathbb{C}}_{\text{base }}$ may or may not be available during training. Conventionally, there are few labeled nodes for novel classes ${\mathbb{C}}_{\text{novel }}$ during the test phase. The problem of few-shot node classification is defined as follows:
+
+---
+
+${}^{1}$ https://github.com/anonymous-LoG22/TLP-FSNC.git
+
+---
+
+Definition 1. Few-shot Node Classification: Given an attributed graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{X}}\right)$ with a divided node label space $\mathbb{C} = \left\{ {{\mathbb{C}}_{\text{base }},{\mathbb{C}}_{\text{novel }}}\right\}$ , we only have few-shot labeled nodes (support set $\mathbb{S}$ ) for ${\mathbb{C}}_{\text{novel }}$ . The task $\mathcal{T}$ is to predict the labels for unlabeled nodes (query set $\mathbb{Q}$ ) from ${\mathbb{C}}_{\text{novel }}$ . If the support set in each target (test) task covers $N$ novel classes with $K$ labeled nodes per class, then we term this task an $N$ -way $K$ -shot node classification task.
+
+The goal of few-shot node classification is to learn an encoder that can transfer the topological and semantic knowledge learned from substantial data in base classes $\left( {\mathbb{C}}_{\text{base }}\right)$ and generate discriminative embeddings for nodes from novel classes $\left( {\mathbb{C}}_{\text{novel }}\right)$ with limited labeled nodes.
+
+### 2.2 Episodic Meta-learning for Few-shot Node Classification.
+
+Episodic meta-learning is a proven effective paradigm for few-shot learning tasks [21-27]. The main idea is to train the neural networks in a way that emulates the evaluation conditions. This is hypothesized to be beneficial for the prediction performance on test tasks [21-23]. Based on this philosophy, many recent works in few-shot node classification [6, 8-10, 28-32] successfully transfer the idea to the graph domain. It works as follows: during the training phase, it generates a number of meta-train tasks (or episodes) ${\mathcal{T}}_{tr}$ from ${\mathbb{C}}_{\text{base }}$ to emulate the test tasks, following their $N$ -way $K$ -shot node classification specifications:
+
+$$
+{\mathcal{T}}_{tr} = {\left\{ {\mathcal{T}}_{t}\right\} }_{t = 1}^{T} = \left\{ {{\mathcal{T}}_{1},{\mathcal{T}}_{2},\ldots ,{\mathcal{T}}_{T}}\right\} ,
+$$
+
+$$
+{\mathcal{T}}_{t} = \left\{ {{\mathcal{S}}_{t},{\mathcal{Q}}_{t}}\right\} \tag{1}
+$$
+
+$$
+{\mathcal{S}}_{t} = \left\{ {\left( {{v}_{1},{y}_{1}}\right) ,\left( {{v}_{2},{y}_{2}}\right) ,\ldots ,\left( {{v}_{N \times K},{y}_{N \times K}}\right) }\right\} ,
+$$
+
+$$
+{\mathcal{Q}}_{t} = \left\{ {\left( {{v}_{1},{y}_{1}}\right) ,\left( {{v}_{2},{y}_{2}}\right) ,\ldots ,\left( {{v}_{N \times M},{y}_{N \times M}}\right) }\right\} .
+$$
+
+For a typical meta-learning based method, in each episode, $K$ labeled nodes are randomly sampled from each of $N$ base classes, forming a support set, to train the GNN model while emulating the $N$ -way $K$ -shot node classification in the test phase. Then the GNN predicts labels for an emulated query set of nodes randomly sampled from the same classes as the support set. The Cross-Entropy Loss $\left( {L}_{CE}\right)$ is calculated to optimize the GNN encoder ${g}_{\theta }$ and the classifier ${f}_{\psi }$ in an end-to-end fashion:
+
+$$
+\theta ,\psi = \arg \mathop{\min }\limits_{{\theta ,\psi }}{L}_{CE}\left( {{\mathcal{T}}_{t};\theta ,\psi }\right) . \tag{2}
+$$
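+
+To make the episodic setup concrete, the following minimal sketch (our own illustration, not the released implementation) samples one $N$ -way $K$ -shot episode with $M$ queries per class from the base classes:
+
+```python
+import random
+from collections import defaultdict
+
+def sample_episode(labels, base_classes, n_way, k_shot, m_query):
+    """Sample one N-way K-shot episode (support + query) from the base classes.
+
+    labels: dict mapping node id -> class id; base_classes: list of base class ids.
+    Returns two lists of (node_id, class_id) pairs."""
+    classes = random.sample(base_classes, n_way)
+    nodes_by_class = defaultdict(list)
+    for node, y in labels.items():
+        if y in classes:
+            nodes_by_class[y].append(node)
+
+    support, query = [], []
+    for y in classes:
+        picked = random.sample(nodes_by_class[y], k_shot + m_query)
+        support += [(v, y) for v in picked[:k_shot]]
+        query += [(v, y) for v in picked[k_shot:]]
+    return support, query
+```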
+
+Based on this, Meta-GNN [28] combines MAML [27] with GNNs to achieve optimization for different meta-tasks. GPN [6] applies ProtoNet [26] and computes node importance for a transferable metric function. G-Meta [8] aims to establish a local subgraph for each node to achieve fast adaptations to new meta-tasks. RALE [29] obtains relative and absolute node embeddings based on node positions on graphs to model node dependencies in each meta-task. An exhaustive survey is beyond the scope of this paper; see [33] for an overview. However, all those methods are evaluated on different datasets, each with its own evaluation protocol, which fragments the practical knowledge on how meta-learning performs with a few labeled nodes and makes it hard to compare them directly. To bridge this gap, in this paper, we conduct extensive experiments to compare new advances and prior works for FSNC tasks uniformly and comprehensively.
+
+### 2.3 A Motivating Example and Preliminary Analysis
+
+More recently, related works in the image domain demonstrate that the reason for fast adaptation lies in feature reuse rather than in complicated meta-learning algorithms [12, 13]. In other words, with a carefully pretrained encoder, decent performance can be obtained by directly fine-tuning a simple classifier on the target task. However, few studies have been done in the graph domain because of an important difference from images: nodes in a graph are not i.i.d. Their interactive relationships
+
+are reflected by both the topological and semantic information. To validate this hypothesis on graphs, based on [13], we construct an Intransigent GNN model, namely I-GNN, that simply does not adapt to new tasks. We decouple the training procedure into two separate phases. In the first phase, a GNN encoder ${g}_{\theta }$ with a linear classifier ${f}_{\phi }$ is simply pretrained on all base classes ${\mathbb{C}}_{\text{base }}$ with vanilla supervision through ${L}_{CE}$ :
+
+$$
+{\mathcal{T}}_{tr}^{\prime } = \cup {\left\{ {\mathcal{T}}_{t}\right\} }_{t = 1}^{T} = \cup \left\{ {{\mathcal{T}}_{1},{\mathcal{T}}_{2},\ldots ,{\mathcal{T}}_{T}}\right\} ,
+$$
+
+$$
+\theta ,\phi = \arg \mathop{\min }\limits_{{\theta ,\phi }}{L}_{CE}\left( {{\mathcal{T}}_{tr}^{\prime };\theta ,\phi }\right) + \mathcal{R}\left( \theta \right) , \tag{3}
+$$
+
+where $\mathcal{R}\left( \theta \right)$ is a weight-decay regularization term: $\mathcal{R}\left( \theta \right) = \| \theta \|^{2}/2$ . Then, we freeze the parameters of the GNN encoder ${g}_{\theta }$ and discard the classifier ${f}_{\phi }$ . When fine-tuning on a target few-shot node classification task ${\mathcal{T}}_{i} = \left\{ {{\mathcal{S}}_{i},{\mathcal{Q}}_{i}}\right\}$ , the embeddings of all nodes from ${\mathcal{T}}_{i}$ are directly transferred from the pretrained GNN encoder ${g}_{\theta }$ . Then another linear classifier ${f}_{\psi }$ is introduced and tuned with few-shot labeled nodes from the support set ${\mathcal{S}}_{i}$ to predict labels of nodes in the query set ${\mathcal{Q}}_{i}$ :
+
+$$
+\psi = \arg \mathop{\min }\limits_{\psi }{L}_{CE}\left( {{\mathcal{S}}_{i};\theta ,\psi }\right) . \tag{4}
+$$
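+
+A minimal PyTorch-style sketch of this two-phase procedure (our own illustration; the encoder is assumed to be any GNN module with a forward(x, edge_index) signature, and all tensor arguments are placeholders) is given below. The probing step in the second function is reused unchanged by TLP in Section 2.4:
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+def pretrain_encoder(encoder, head, x, edge_index, y, base_idx, epochs=200, lr=1e-3):
+    """Phase 1 (Eq. 3): jointly train the encoder g_theta and the linear head f_phi
+    on all base-class nodes with cross-entropy; weight decay acts as R(theta)."""
+    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()),
+                           lr=lr, weight_decay=5e-4)
+    for _ in range(epochs):
+        z = encoder(x, edge_index)                      # embeddings for all nodes
+        loss = F.cross_entropy(head(z[base_idx]), y[base_idx])
+        opt.zero_grad(); loss.backward(); opt.step()
+    return encoder
+
+def linear_probe(encoder, x, edge_index, support_idx, support_y, query_idx, n_way,
+                 steps=100, lr=1e-2):
+    """Phase 2 (Eq. 4): freeze g_theta, fit a fresh linear classifier f_psi on the
+    K-shot support set, and predict labels for the query set."""
+    encoder.eval()
+    with torch.no_grad():
+        z = encoder(x, edge_index)                      # transferred node embeddings
+    probe = nn.Linear(z.size(-1), n_way)
+    opt = torch.optim.Adam(probe.parameters(), lr=lr)
+    for _ in range(steps):
+        loss = F.cross_entropy(probe(z[support_idx]), support_y)
+        opt.zero_grad(); loss.backward(); opt.step()
+    return probe(z[query_idx]).argmax(dim=-1)
+```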
+
+Results and Analysis of the Intransigent GNN model I-GNN. We demonstrate the performance of the intransigent model and compare it with the meta-learning based models in Tables 1 and 5. Under the same evaluation protocol (defined in Section 3.2), the simple intransigent model I-GNN is very competitive with meta-learning based methods. On datasets (e.g., CiteSeer) where the number of base classes $\left| {\mathbb{C}}_{\text{base }}\right|$ is limited, I-GNN consistently outperforms meta-learning based methods in terms of accuracy. This motivating example suggests that transferring node embeddings from the vanilla supervised training method I-GNN could be an alternative to meta-learning. Moreover, we take one step further and postulate that if more transferable node embeddings are obtained during pretraining, the performance on target FSNC tasks could be improved even more.
+
+
+
+Figure 1: The framework of TLP with supervised GCL: (a) Supervised GCL framework. (b) Fine-tuning on few-shot labeled nodes from novel classes with support and query sets. Colors indicate different classes (e.g., Neural Networks, SVM, Fair ML, Explainable AI). Specifically, white nodes denote nodes whose labels are unavailable. Labels of all nodes in base classes are available. Different types of nodes indicate whether nodes are from base classes or novel classes. The counterpart of TLP with self-supervised GCL is very similar to this, and a figure is included in Appendix B.
+
+### 2.4 Transductive Linear Probing for Few-shot Node Classification.
+
+Inspired by the motivating example above, we generalize it to a new paradigm, Transductive Linear Probing (TLP), for few-shot node classification. The only difference between TLP and I-GNN is that the pretraining method can be an arbitrary strategy rather than vanilla supervised learning. It can even be a self-supervised training method that places no requirement on the base classes. In this way, the second line of Eq. (3) can be generalized to:
+
+$$
+\theta = \arg \mathop{\min }\limits_{\theta }{L}_{\text{pretrain }}\left( {{\mathcal{T}}_{tr}^{\prime };\theta }\right) , \tag{5}
+$$
+
+where ${L}_{\text{pretrain }}$ is an arbitrary loss function to pretrain the GNN encoder ${g}_{\theta }$ . Then following Eq. (4), we can exploit a linear classifier to probe the transferred embeddings of nodes from novel classes, and perform the final node classification.
+
+In this paper, we thoroughly investigate Graph Contrastive Learning (GCL) as the pretraining strategy for TLP for two reasons: (1) GCL [14, 16, 17, 34, 35] is a proven effective way to learn generalizable node representations in either a supervised or self-supervised manner. By maximizing the consistency over differently transformed positive and negative examples (termed views), GCL forces the GNNs to be aware of the semantic and topological knowledge and of injected perturbations on graphs. Trained on the global structures, GCL should be capable of addressing the piecemeal knowledge issue in meta-learning and thereby increasing the generalizability of the learned GNNs. Also, [36] summarizes the characteristics of GCL frameworks and empirically demonstrates the transferability of the learned representations. (2) GCL has no requirement for the base classes, which means GCL can be deployed even when the number of base classes is limited, or the nodes in base classes are unlabeled. The effectiveness of GCL highly relies on the contrastive loss function. There are two categories of contrastive loss function for graphs: (1) Supervised Contrastive Loss $\left( {L}_{\text{SupCon }}\right) \left\lbrack {{37},{38}}\right\rbrack$ . (2) Self-supervised Contrastive Loss: Information Noise Contrastive Estimation $\left( {L}_{InfoNCE}\right) \left\lbrack {{16},{17},{19}}\right\rbrack$ and Jensen-Shannon Divergence $\left( {L}_{JSD}\right) \left\lbrack {{14},{15}}\right\rbrack$ . We also consider a special GCL method, BGRL [18], which does not explicitly require negative examples. The framework for TLP with a representative supervised GCL method is provided in Fig. 1. From another perspective, our work is the first to focus on the extrapolation ability of GCL methods, especially under more extreme few-shot settings without labels for nodes in base classes.
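+
+For illustration, a simplified sketch of an InfoNCE-style objective of the kind used by several of the GCL methods above (our own reduction to cross-view positives and negatives only; methods such as GRACE additionally contrast within each view) is:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def info_nce(z1, z2, temperature=0.5):
+    """InfoNCE loss over two augmented views.
+
+    z1, z2: [num_nodes, dim] embeddings of the same nodes under two views;
+    row i of z1 and row i of z2 form a positive pair, all other rows act as negatives."""
+    z1 = F.normalize(z1, dim=-1)
+    z2 = F.normalize(z2, dim=-1)
+    logits = z1 @ z2.t() / temperature              # pairwise cosine similarities
+    targets = torch.arange(z1.size(0), device=z1.device)
+    # symmetric cross-entropy: view 1 -> view 2 and view 2 -> view 1
+    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
+```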
+
+## 3 Experimental Study
+
+### 3.1 Experimental Settings
+
+We conduct systematic experiments to compare the performance of meta-learning and TLP methods (with self-supervised and supervised GCL) on the few-shot node classification task. For meta-learning, we evaluate ProtoNet [26], MAML [27], Meta-GNN [28], G-Meta [8], GPN [6], AMM-GNN [7], and TENT [10]. For TLP methods with both self-supervised and supervised forms, we evaluate MVGRL [14], GraphCL [15], GRACE [16], MERIT [17], and SUGRL [19]. Moreover, BGRL [39] and I-GNN [13] are exclusively used for TLP methods with self-supervised GCL or supervised GCL, respectively. The detailed descriptions of these models can be found in Appendix E. For comprehensive studies, we benchmark those methods on six prevalent real-world graph datasets: CoraFull [40], ogbn-arxiv [41], Coauthor-CS [42], Amazon-Computer [42], Cora [43], and CiteSeer [43]. Specifically, each dataset is a connected graph and consists of multiple node classes for training and evaluation. A more detailed description of those datasets is provided in Appendix G with their statistics and class split policies in Table 3 in Appendix F.
+
+### 3.2 Evaluation Protocol
+
+In this section, we specify the evaluation protocol used to compare both meta-learning based methods and TLP based methods. For an attributed graph dataset $\mathcal{G} = \left( {\mathbf{A},\mathbf{X}}\right)$ with a divided node label space $\mathbb{C} = \left\{ {{\mathbb{C}}_{\text{base }},{\mathbb{C}}_{\text{novel }}\ (\text{or}\ {\mathbb{C}}_{\text{test }})}\right\}$ , we split ${\mathbb{C}}_{\text{base }}$ into ${\mathbb{C}}_{\text{train }}$ and ${\mathbb{C}}_{\text{dev }}$ (the split policy for each dataset is listed in Table 3). For evaluation, given a GNN encoder ${g}_{\theta }$ , a classifier ${f}_{\psi }$ , the validation epoch interval $V$ , the number of sampled meta-tasks for evaluation $I$ , the epoch patience $P$ , the maximum epoch number $E$ , the number of experiment repetitions $R$ , and the $N$ -way, $K$ -shot, $M$ -query setting specification, the final FSNC accuracy $\mathcal{A}$ and the confidence interval $\mathcal{I}$ (the two metrics of primary concern) are calculated according to Algorithm 1 in Appendix C. The default values of all those parameters are given in Table 2 in Appendix D.
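+
+As a small illustration of the final step (our own sketch; we assume a normal-approximation interval, an implementation detail the protocol does not fix), the reported accuracy and ${95}\%$ confidence interval can be computed from the $R$ per-repetition accuracies as follows:
+
+```python
+import numpy as np
+
+def summarize(accuracies):
+    """Mean accuracy and 95% confidence interval (both in %) over R repetitions."""
+    accs = np.asarray(accuracies, dtype=float)
+    mean = accs.mean()
+    # normal approximation: 1.96 * standard error of the mean
+    interval = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
+    return mean, interval
+
+print(summarize([72.4, 73.1, 71.8, 72.9, 72.0]))  # -> (72.44, ~0.49)
+```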
+
+Table 1: The overall few-shot node classification results of meta-learning methods and TLP with various GCL methods under different settings. Accuracy $\left( \uparrow \right)$ and confidence interval $\left( \downarrow \right)$ are in $\%$ . The best and second best results are bold and underlined, respectively. OOM denotes out of memory.
+
+| Dataset | CoraFull | CoraFull | ogbn-arxiv | ogbn-arxiv | CiteSeer | CiteSeer |
| Setting | 5-way 1-shot | 5-way 5-shot | 5-way 1-shot | 5-way 5-shot | 2-way 1-shot | 2-way 5-shot |
| Meta-learning |
| MAML [27] | ${22.63} \pm {1.19}$ | ${27.21} \pm {1.32}$ | ${27.36} \pm {1.48}$ | ${29.09} \pm {1.62}$ | ${52.39} \pm {2.20}$ | ${54.13} \pm {2.18}$ |
| ProtoNet [26] | ${32.43} \pm {1.61}$ | ${51.54} \pm {1.68}$ | ${37.30} \pm {2.00}$ | ${53.31} \pm {1.71}$ | ${52.51} \pm {2.44}$ | ${55.69} \pm {2.27}$ |
| Meta-GNN [28] | ${55.33} \pm {2.43}$ | ${70.50} \pm {2.02}$ | ${27.14} \pm {1.94}$ | ${31.52} \pm {1.71}$ | ${56.14} \pm {2.62}$ | ${67.34} \pm {2.10}$ |
| GPN [6] | ${52.75} \pm {2.32}$ | ${72.82} \pm {1.88}$ | ${37.81} \pm {2.34}$ | ${50.50} \pm {2.13}$ | ${53.10} \pm {2.39}$ | ${63.09} \pm {2.50}$ |
| AMM-GNN [7] | ${58.77} \pm {2.49}$ | ${75.61} \pm {1.78}$ | ${33.92} \pm {1.80}$ | ${48.94} \pm {1.87}$ | ${54.53} \pm {2.51}$ | ${62.93} \pm {2.42}$ |
| G-Meta [8] | ${60.44} \pm {2.48}$ | ${75.84} \pm {1.70}$ | ${31.48} \pm {1.70}$ | ${47.16} \pm {1.73}$ | ${55.15} \pm {2.68}$ | ${64.53} \pm {2.35}$ |
| TENT [10] | ${55.44} \pm {2.08}$ | ${70.10} \pm {1.73}$ | ${48.26} \pm {1.73}$ | ${61.38} \pm {1.72}$ | ${62.75} \pm {3.23}$ | ${72.95} \pm {2.13}$ |
| TLP with Supervised GCL |
| I-GNN [13] | ${42.70} \pm {1.92}$ | ${51.46} \pm {1.69}$ | ${38.46} \pm {1.77}$ | ${51.46} \pm {1.69}$ | ${58.70} \pm {3.17}$ | ${65.60} \pm {2.58}$ |
| MVGRL [14] | ${44.98} \pm {1.99}$ | ${71.18} \pm {1.75}$ | OOM | OOM | ${55.79} \pm {1.39}$ | ${66.72} \pm {2.13}$ |
| GraphCL [15] | ${47.00} \pm {1.64}$ | ${67.94} \pm {1.71}$ | OOM | OOM | ${53.55} \pm {1.68}$ | ${69.50} \pm {1.41}$ |
| GRACE [16] | ${65.48} \pm {2.45}$ | ${85.08} \pm {1.49}$ | OOM | OOM | ${61.20} \pm {2.39}$ | ${81.76} \pm {1.74}$ |
| MERIT [17] | ${52.80} \pm {2.72}$ | ${81.30} \pm {1.53}$ | OOM | OOM | ${61.25} \pm {2.59}$ | ${81.45} \pm {1.80}$ |
| SUGRL [19] | ${54.26} \pm {2.24}$ | ${77.55} \pm {1.95}$ | ${52.13} \pm {2.11}$ | ${70.05} \pm {1.56}$ | ${65.34} \pm {2.55}$ | ${75.81} \pm {1.43}$ |
| TLP with Self-supervised GCL |
| MVGRL [14] | ${59.91} \pm {2.39}$ | ${76.76} \pm {1.63}$ | OOM | OOM | ${64.45} \pm {2.77}$ | ${80.25} \pm {1.82}$ |
| GraphCL [15] | ${64.20} \pm {2.56}$ | ${83.74} \pm {1.46}$ | OOM | OOM | ${73.55} \pm {3.09}$ | ${92.35} \pm {1.24}$ |
| BGRL [39] | ${43.83} \pm {2.11}$ | ${70.44} \pm {1.62}$ | ${36.76} \pm {1.74}$ | ${53.44} \pm {0.36}$ | ${54.32} \pm {1.63}$ | ${70.50} \pm {2.11}$ |
| GRACE [16] | ${72.42} \pm {2.06}$ | ${83.82} \pm {1.67}$ | OOM | OOM | ${60.75} \pm {2.54}$ | ${78.42} \pm {2.01}$ |
| MERIT [17] | ${73.38} \pm {2.25}$ | ${87.66} \pm {1.43}$ | OOM | OOM | ${64.53} \pm {2.81}$ | ${90.32} \pm {1.66}$ |
| SUGRL [19] | ${77.35} \pm {2.20}$ | ${83.96} \pm {1.52}$ | ${60.04} \pm {2.11}$ | ${77.52} \pm {1.45}$ | ${77.34} \pm {2.83}$ | ${86.32} \pm {1.57}$ |
+
+### 3.3 Comparison
+
+Table 1 presents the performance comparison of all methods on the few-shot node classification task. Specifically, we give results under four different few-shot settings to exhibit a more comprehensive comparison: 5-way 1-shot, 5-way 5-shot, 2-way 1-shot, and 2-way 5-shot. More results are given in Appendix I. We choose the average classification accuracy and the ${95}\%$ confidence interval over $R$ repetitions as the evaluation metrics. From Table 1, we make the following observations:
+
+- TLP methods consistently outperform meta-learning methods, which indicates the importance of transferring comprehensive node representations in FSNC tasks. In TLP methods, the model is forced to extract node-level structural information, while the meta-learning methods mainly focus on label information. As a result, TLP methods can transfer better node representations and exhibit superior performance on meta-test tasks.
+
+- Even without using any label information from base classes, TLP with self-supervised GCL methods can mostly outperform TLP with supervised GCL methods. This signifies that directly injecting supervision can potentially hinder the generalizability of TLP, which is further investigated in the following sections.
+
+- Increasing the number of shots $K$ (i.e., the number of labeled nodes per class in the support set) has a more significant effect on the performance of both forms of TLP methods than on meta-learning methods. This is due to the fact that with the additional support nodes, TLP with GCL can provide more informative node representations to learn a more powerful classifier. In contrast, the meta-learning methods are based on the extracted label information and thus cannot benefit from additional node-level information.
+
+- Most TLP methods encounter the OOM (out of memory) problem when applied to the ogbn-arxiv dataset. This is due to the fact that the contrastive strategy in TLP methods consumes more memory than traditional supervised learning. Thus, the scalability problem is not negligible for TLP with GCL methods.
+
+- BGRL [39] exhibits less competitive performance compared with other TLP methods with self-supervised GCL. The result indicates that negative samples are important for self-supervised GCL in FSNC, as they help the model exploit node-level information. Nevertheless, without the requirement of negative samples, BGRL scales and parallelizes better, which helps it avoid the OOM problem.
+
+### 3.4 Further Analysis
+
+To explicitly compare the results between meta-learning and TLP and between two forms of TLP, we provide further results of all methods on various $N$ -way $K$ -shot settings in Fig. 2 and Fig. 3. From the results, we can obtain the following observations:
+
+- The performance of all methods significantly degrades for larger values of $N$ (i.e., more classes in each meta-task). The main reason is that with a larger $N$ , the variety of classes in each meta-task results in a more complex class distribution and thus increases the classification difficulty. Nevertheless, the performance drop is less significant for TLP with both forms of GCL methods. This is because the utilized GCL methods focus more on node-level structural patterns, which incorporate more potentially useful information for classification. As a result, these methods are better at alleviating the harder classification caused by a larger $N$ .
+
+- As shown in Fig. 3, the performance improvement of TLP with self-supervised GCL methods over meta-learning methods on CiteSeer is generally more impressive than on other datasets. The main reason is that CiteSeer has a significantly smaller class set ($2/2/2$ classes for ${\mathbb{C}}_{\text{train }}/{\mathbb{C}}_{\text{dev }}/{\mathbb{C}}_{\text{test }}$ ). In consequence, the meta-learning methods cannot effectively leverage the supervision information during training. Nevertheless, TLP with self-supervised GCL can extract useful structural information for better generalization performance.
+
+
+
+Figure 2: $N$ -way $K$ -shot results on CoraFull, meta-learning and TLP. TLP Methods with $*$ are based on supervised GCL methods and I-GNN.
+
+
+
+Figure 3: 2-way $K$ -shot results on CiteSeer and Amazon-Computer, meta-learning and two forms of TLP. TLP Methods with $*$ are based on supervised GCL methods and I-GNN.
+
+### 3.5 Effect of Supervision Information in Base Classes
+
+In this section, we further investigate the effectiveness of the supervision information in TLP with supervised GCL methods. Specifically, we leverage a combined loss ${L}_{\text{JointCon }} = \lambda {L}_{SelfCon} + \left( {1 - \lambda }\right) {L}_{\text{SupCon }}$ , where ${L}_{SelfCon}$ indicates a self-supervised GCL loss, either ${L}_{JSD}$ or ${L}_{InfoNCE}$ depending on the model, so that ${L}_{\text{JointCon }}$ is a mixture of self-supervised and supervised GCL losses. In this way, we can gradually adjust the value of $\lambda$ to inject different levels of supervision signals into GCL and then observe the performance fluctuation. Note that due to the unstable training curve brought by the joint loss ${L}_{\text{JointCon }}$ , we increase the epoch patience number from $P$ to ${2P}$ to ensure convergence. The results on the Cora dataset (we observe similar results on other datasets) with different values of $\lambda$ are provided in Fig. 4. From the results, we can obtain the following observations (a minimal sketch of the joint loss is given after the list):
+
+- In general, the classification performance increases with a larger value of $\lambda$ . In other words, directly injecting supervision information into GCL for TLP will usually reduce the performance on few-shot node classification tasks. Nevertheless, carefully injecting supervision information can slightly increase the accuracy by choosing a suitable value of $\lambda$ . On the other hand, the results also verify that the TLP paradigm can still achieve considerable performance without any explicit restrictions for base classes.
+
+- Even with a relatively small value of $\lambda$ (e.g., 0.1), the performance improvement over TLP with totally supervised GCL (i.e., $\lambda = {0.0}$ ) is still significant. That is, the contrastive strategy that leverages graph structures can provide better performance through more comprehensive node representations.
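+
+For completeness, the joint objective used in this section in a one-line sketch (symbols as defined above; the two loss terms are scalar tensors produced by the respective contrastive objectives):
+
+```python
+def joint_contrastive_loss(loss_selfcon, loss_supcon, lam):
+    """L_JointCon = lam * L_SelfCon + (1 - lam) * L_SupCon.
+
+    lam = 1.0 is purely self-supervised GCL; lam = 0.0 is purely supervised GCL."""
+    return lam * loss_selfcon + (1.0 - lam) * loss_supcon
+```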
+
+
+
+Figure 4: Results on dataset Cora (2-way)
+
+### 3.6 Evaluating Learned Node Representations on Novel Classes
+
+In this section, we further validate the quality of the learned node representations from different training strategies. Particularly, we leverage two prevalent clustering evaluation metrics, normalized mutual information (NMI) and adjusted Rand index (ARI), computed on learned node representations clustered with K-Means. We evaluate the representations learned on two datasets, CoraFull and CiteSeer, for a fair comparison. The results are presented in Table 6 in Appendix I.3. Based on the results, we can obtain the following observations (a small clustering-evaluation sketch follows the list below):
+
+- The meta-learning methods typically exhibit inferior NMI and ARI scores compared with both forms of TLP. This is because meta-learning methods are dedicated to extracting supervision information from node samples and thus cannot fully utilize node-level structural information.
+
+- In general, TLP with self-supervised GCL methods can result in larger values of both NMI and ARI scores than TLP with supervised GCL. This is due to the fact that the self-supervised GCL model focuses more on extracting structural information without the interruption of label information. As a result, the learned node representations are more comprehensive and thus exhibit superior clustering performance.
+
+- The difference in NMI and ARI scores between meta-learning and TLP is more significant on CiteSeer than on CoraFull. This phenomenon potentially results from the fact that CiteSeer has far fewer classes than CoraFull. In consequence, for CiteSeer, the meta-learning methods will largely rely on label information instead of node-level structural information for classification.
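+
+A hedged sketch of the clustering evaluation above, using scikit-learn (not necessarily the paper's exact script):
+
+```python
+from sklearn.cluster import KMeans
+from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score
+
+def clustering_scores(embeddings, labels, num_classes, seed=0):
+    """Cluster learned node embeddings of the novel classes with K-Means and
+    report NMI / ARI against the ground-truth labels."""
+    pred = KMeans(n_clusters=num_classes, random_state=seed, n_init=10).fit_predict(embeddings)
+    return (normalized_mutual_info_score(labels, pred),
+            adjusted_rand_score(labels, pred))
+```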
+
+### 3.7 Visualization
+
+To provide an explicit comparison of different baselines, we visualize the learned node representations from CoraFull and CiteSeer via the t-SNE algorithm, where colors denote different classes. It is noteworthy that for clarity, we randomly select five classes from ${\mathbb{C}}_{\text{test }}$ for the visualization. The results are provided in Fig. 5 (more results are included in Fig. 12); a minimal visualization sketch follows the list below. Specifically, we discover that:
+
+- TLP with self-supervised GCL generally outperforms TLP with supervised GCL. This is because without learning label information, TLP with self-supervised GCL can concentrate on node representation patterns, which are easier to transfer to target unseen novel classes.
+
+- The learned node representations are less discriminative for meta-learning on CiteSeer compared with CoraFull. This is because CiteSeer contains fewer classes, which means the node representations learned by meta-learning methods will be less informative, since they are only required to classify nodes from a small class set.
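+
+A hedged sketch of the visualization step, assuming scikit-learn's t-SNE and matplotlib (not the paper's plotting code):
+
+```python
+import matplotlib.pyplot as plt
+from sklearn.manifold import TSNE
+
+def plot_tsne(embeddings, labels, path="tsne.png"):
+    """Project node embeddings of the selected novel classes to 2-D with t-SNE
+    and color points by class."""
+    coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)
+    plt.figure(figsize=(4, 4))
+    plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab10", s=8)
+    plt.axis("off")
+    plt.savefig(path, bbox_inches="tight")
+```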
+
+
+
+Figure 5: The t-SNE visualization results. Fig. (a)-(f) are for dataset CoraFull (5-way). Fig. (g)-(h) are for dataset CiteSeer (2-way). TLP methods with * are based on supervised GCL methods.
+
+## 4 Conclusion, Limitations, and Outlook
+
+In this paper, we propose TLP as an alternative paradigm to meta-learning for FSNC tasks. First, we provide a motivating example, a vanilla intransigent GNN model, to validate our postulation that a generalizable GNN encoder is the key to FSNC tasks. Then, we provide a formal definition for TLP, which transfers node embeddings from GCL pretraining, in contrast to the prevailing meta-learning paradigm. We conduct comprehensive experiments and compare various meta-learning based and TLP-based methods under the same protocol. Our rigorous empirical study reveals several interesting findings on the strengths and weaknesses of the two paradigms and identifies that adaptability and scalability are the promising directions for meta-learning based and TLP-based methods, respectively.
+
+However, due to limited space, several limitations of our work need to be acknowledged.
+
+- Limited design considerations. While an exhaustive survey on FSNC or GCL is out of the scope of this work, we also do not provide a more fine-grained comparison of model details, such as different GNN encoders or various transformations during GCL pretraining.
+
+- Lack of theoretical justifications. Our findings are based on empirical studies, which cannot disclose the underlying mathematical mechanisms of those methods, such as performance guarantees for transferring node embeddings from different GCL methods.
+
+How to address these limitations is left as future work. In broader terms, this work lies at the confluence of graph few-shot learning and graph contrastive learning. We hope this work can facilitate the sharing of insights for both communities. On the one hand, we hope our work provides a necessary yardstick to measure progress across the FSNC field. On the other hand, our work offers several practical guidelines for future research in both vigorous fields. For example, the meta-learning community can get inspired by GCL to learn more transferable graph patterns. Also, few-shot TLP can serve as a new metric to evaluate the extrapolation ability of GCL methods.
+
+## References
+
+[1] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, 2017. 1, 16
+
+[2] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In ICLR, 2018.
+
+[3] William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. In NeurIPS, pages 1024-1034, 2017.
+
+[4] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In Proceedings of the 2019 International Conference on Learning Representations, 2019. 1
+
+[5] Shengzhong Zhang, Ziang Zhou, Zengfeng Huang, and Zhongyu Wei. Few-shot classification on graphs with structural regularized gcns. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018. 1
+
+[6] Kaize Ding, Jianling Wang, Jundong Li, Kai Shu, Chenghao Liu, and Huan Liu. Graph prototypical networks for few-shot learning on attributed networks. In CIKM, 2020. 3, 5, 6, 15
+
+[7] Ning Wang, Minnan Luo, Kaize Ding, Lingling Zhang, Jundong Li, and Qinghua Zheng. Graph few-shot learning with attribute matching. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 2020. 1, 5, 6, 15
+
+[8] Kexin Huang and Marinka Zitnik. Graph meta learning via local subgraphs. In NeurIPS, 2020. 1, 3, 5, 6, 15
+
+[9] Lin Lan, Pinghui Wang, Xuefeng Du, Kaikai Song, Jing Tao, and Xiaohong Guan. Node classification on graphs with few-shot novel labels via meta transformed network embedding. Advances in Neural Information Processing Systems, 33:16520-16531, 2020.
+
+[10] Song Wang, Kaize Ding, Chuxu Zhang, Chen Chen, and Jundong Li. Task-adaptive few-shot node classification. arXiv preprint arXiv:2206.11972, 2022. 1, 3, 5, 6, 15
+
+[11] Zhen Tan, Kaize Ding, Ruocheng Guo, and Huan Liu. A simple yet effective pretraining strategy for graph few-shot learning. arXiv preprint arXiv:2203.15936, 2022. 2
+
+[12] Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. In International Conference on Learning Representations, 2019. 2, 3
+
+[13] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenenbaum, and Phillip Isola. Rethinking few-shot image classification: a good embedding is all you need? Proceedings of the 16th European Conference on Computer Vision, 2020. 2, 3, 4, 5, 6, 15
+
+[14] Kaveh Hassani and Amir Hosein Khasahmadi. Contrastive multi-view representation learning on graphs. In International Conference on Machine Learning, pages 4116-4126. PMLR, 2020. 2, 5, 6, 15
+
+[15] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. NeurIPS, 2020. 2, 5, 6, 15
+
+[16] Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, and Liang Wang. Deep graph contrastive representation learning. arXiv preprint arXiv:2006.04131, 2020. 5, 6, 15
+
+[17] Ming Jin, Yizhen Zheng, Yuan-Fang Li, Chen Gong, Chuan Zhou, and Shirui Pan. Multi-scale contrastive siamese networks for self-supervised graph representation learning. In International Joint Conference on Artificial Intelligence 2021, pages 1477-1483. Association for the Advancement of Artificial Intelligence (AAAI), 2021. 5, 6, 15
+
+[18] Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, and Michal Valko. Large-scale representation learning on graphs via bootstrapping. arXiv preprint arXiv:2102.06514, 2021. 5
+
+[19] Yujie Mo, Liang Peng, Jie Xu, Xiaoshuang Shi, and Xiaofeng Zhu. Simple unsupervised graph representation learning. AAAI, 2022. 2, 5, 6, 15
+
+[20] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR, 2020. 2
+
+[21] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In ICLR, 2018. 3
+
+[22] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2016.
+
+[23] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. In arXiv:1803.02999, 2018. 3
+
+[24] Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. Learning to propagate for graph meta-learning. In NeurIPS, 2019.
+
+[25] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: relation network for few-shot learning. In CVPR, 2018.
+
+[26] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NeurIPS, 2017. 3, 5, 6, 15, 16
+
+[27] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017. 3, 5, 6, 15, 16
+
+[28] Fan Zhou, Chengtai Cao, Kunpeng Zhang, Goce Trajcevski, Ting Zhong, and Ji Geng. Meta-gnn: On few-shot node classification in graph meta-learning. In CIKM, 2019. 3, 5, 6, 15
+
+[29] Zemin Liu, Yuan Fang, Chenghao Liu, and Steven CH Hoi. Relative and absolute location embedding for few-shot node classification on graph. In AAAI, 2021. 3
+
+[30] Zhen Tan, Kaize Ding, Ruocheng Guo, and Huan Liu. Graph few-shot class-incremental learning. In WSDM, 2022.
+
+[31] Yonghao Liu, Mengyu Li, Ximing Li, Fausto Giunchiglia, Xiaoyue Feng, and Renchu Guan. Few-shot node classification on attributed networks with graph meta-learning. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 471-481, 2022.
+
+[32] Zongqian Wu, Peng Zhou, Guoqiu Wen, Yingying Wan, Junbo Ma, Debo Cheng, and Xiaofeng Zhu. Information augmentation for few-shot node classification. 3
+
+[33] Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V Chawla, and Huan Liu. Few-shot learning on graphs: A survey. arXiv preprint arXiv:2203.09308, 2022. 3
+
+[34] Minghao Xu, Hang Wang, Bingbing Ni, Hongyu Guo, and Jian Tang. Self-supervised graph-level representation learning with local and global structure. In International Conference on Machine Learning, pages 11548-11558. PMLR, 2021. 5
+
+[35] Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. Adversarial graph augmentation to improve graph contrastive learning. Advances in Neural Information Processing Systems, 34: 15920-15933, 2021. 5
+
+[36] Yanqiao Zhu, Yichen Xu, Qiang Liu, and Shu Wu. An empirical study of graph contrastive learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. 5
+
+[37] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in Neural Information Processing Systems, 2020. 5
+
+[38] Selahattin Akkas and Ariful Azad. Jgcl: Joint self-supervised and supervised graph contrastive learning. 2022. 5
+
+[39] Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Remi Munos, Petar Veličković, and Michal Valko. Bootstrapped representation learning on graphs. In ICLR Workshop on Geometrical and Topological Representation Learning, 2021. 5, 6, 7, 15
+
+[40] Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking. In ICLR, 2018. 5, 16
+
+[41] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In NeurIPS, 2020. 5, 16
+
+[42] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. Relational Representation Learning Workshop, NeurIPS 2018, 2018. 5, 16
+
+[43] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, pages 40-48. PMLR, 2016. 5, 16
+
+[44] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Pires, Zhaohan Guo, Mohammad Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. In NeurIPS, 2020. 15
+
+[45] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 16
+
+[46] Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315, 2019. 16
+
+[47] Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, and Anshul Kanakia. Microsoft academic graph: When experts are not enough. Quantitative Science Studies, 2020. 16
+
+[48] Julian McAuley, Rahul Pandey, and Jure Leskovec. Inferring networks of substitutable and complementary products. In SIGKDD, 2015. 16
+
+[49] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 2015 International Conference on Learning Representations, 2015. 16
+
+[50] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedfor-ward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, 2010. 16
+
+[51] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In Proceedings of the 31st Conference on Neural Information Processing Systems, 2017. 17
+
+## A Framework for Meta-learning Based FSNC Methods
+
+
+
+Figure 6: The framework for meta-learning methods. Colors indicate different classes (e.g., Neural Networks, SVM, Fair ML, Explainable AI). Specifically, white nodes denotes that the labels of those nodes are unavailable. Labels of all nodes in base classes are available. Different types of nodes indicate if nodes are from base classes or novel classes.
+
+## B Framework for TLP with Self-Supervised GCL
+
+
+
+Figure 7: The framework for TLP with self-supervised methods. Labels of all nodes in base classes are unavailable. Different types of nodes indicate if nodes are from base classes or novel classes.
+
+## C Pseudo-Code Style Description of Evaluation Protocol
+
+Algorithm 1 UNIFIED EVALUATION PROTOCOL FOR FEW-SHOT NODE CLASSIFICATION
+
+---
+
+Input: Graph $\mathcal{G},{\mathbb{C}}_{\text{train }},{\mathbb{C}}_{\text{dev }},{\mathbb{C}}_{\text{test }};$ GNN ${g}_{\theta }$ , classifier ${f}_{\psi }$ ; parameters $V, I, P, E, R, N, K, M$
+
+Output: Trained models ${g}_{\theta }$ and ${f}_{\psi }$ , accuracy $\mathcal{A}$ , confidence interval $\mathcal{I}$ .
+
+ // Repeat experiment for $R$ times
+
+ for $r = 1,2,\ldots , R$ do
+
+ $p \leftarrow 1, t \leftarrow 1,{s}_{\text{best }} \leftarrow 0$ ;
+
+ while $t \leq E$ do
+
+ Optimize ${g}_{\theta }$ based on the specific training strategy (i.e., meta-learning or TLP); // Training
+
+ if $t{\;\operatorname{mod}\;V} = 0$ then
+
+ Sample $I$ meta-tasks from ${\mathbb{C}}_{dev}$ on $\mathcal{G}$ ; // Validation
+
+ Calculate the obtained few-shot node classification accuracy $s$ ;
+
+ if $s > {s}_{\text{best }}$ then
+
+ ${s}_{\text{best }} \leftarrow s, p \leftarrow 0$ ;
+
+ else
+
+ $p \leftarrow p + 1$
+
+ end if
+
+ end if
+
+ if $p = P$ then
+
+ break; // Early Break
+
+ end if
+
+ end while
+
+ Sample $I$ meta-tasks from ${\mathbb{C}}_{\text{test }}$ on $\mathcal{G}$ ; // Test
+
+ Calculate the obtained classification accuracy ${s}_{\text{test }}$ ;
+
+ ${s}_{r} \leftarrow {s}_{\text{test }}, r \leftarrow r + 1$ ;
+
+ end for
+
+ Calculate averaged accuracy $\mathcal{A}$ and confidence interval $\mathcal{I}$ based on $\left\{ {{s}_{1},{s}_{2},\ldots ,{s}_{R}}\right\}$ ;
+
+---
+
+## D Default Values of Parameters in Evaluation Protocol
+
+In this section, we provide the default values of parameters used in our experiments. The details are provided in Table 2. It is noteworthy that the parameters are consistent for all models in both meta-learning and TLP methods. For the experiments that utilize a joint loss of TLP with self-supervised GCL and supervised GCL, we increase the patience number from $P$ to ${2P}$ to ensure convergence.
+
+Table 2: Default Values of Parameters in Evaluation Protocol for Experiments
+
+| Parameters | Description | Value |
| $V$ | validation epoch interval | 10 |
| $I$ | number of sampled meta-tasks for evaluation | 100 |
| $P$ | patience number | 10 |
| $E$ | maximum epoch number | 10000 |
| $R$ | number of repeated experiments | 5 |
| $N$ | number of classes in each meta-task | 2,5 |
| $K$ | number of nodes for each class in each meta-task | 1,3,5 |
| $M$ | number of queries for each class in each meta-task | 10 |
+
+## E Description of Baselines
+
+In this section, we provide further details about the baselines used in our experiments. Meta-learning based methods:
+
+- ProtoNet [26]: ProtoNet learns a prototype for each class in meta-tasks by averaging the embeddings of samples in this class. Then it conducts classification on query instances based on their distances to prototypes.
+
+- MAML [27]: MAML first optimizes model parameters according to the gradients calculated on the support instances for several steps. Then it meta-updates parameters based on the loss of query instances calculated with the parameters updated on support instances.
+
+- Meta-GNN [28]: Meta-GNN combines GNNs with the MAML strategy to apply meta-learning on graph-structured data. Specifically, Meta-GNN learns node embeddings with GNNs, while updating and meta-updating the GNN parameters based on the MAML strategy.
+
+- G-Meta [8]: G-Meta extracts a subgraph for each node to learn the node representation with GNNs. Then it conducts the classification on query nodes based on the MAML strategy to update and meta-update the parameters of GNNs.
+
+- GPN [6]: GPN proposes to learn node importance for each node in meta-tasks to select more beneficial nodes for classification. Then GPN utilizes ProtoNet to learn node prototypes via averaging node embeddings in a weighted manner.
+
+- AMM-GNN [7]: AMM-GNN proposes to extend MAML with an attribute matching mechanism. Specifically, the node embeddings will be adjusted according to the embeddings of nodes in the entire meta-task in an adaptive manner.
+
+- TENT [10]: TENT reduces the variance among different meta-tasks for better generalization performance. In particular, TENT learns node and class representations by conducting node-level and class-level adaptations. It also incorporates task-level adaptations that maximize the mutual information between the support set and the query set.
+
+Transductive Linear Probing with different Pretraining methods:
+
+- I-GNN [13]: I-GNN learns a GNN encoder with a classifier that is trained on all base classes ${\mathbb{C}}_{\text{base }}$ with the vanilla Cross-Entropy loss ${L}_{CE}$ . Then for each meta-test task, the GNN will be frozen and a new classifier is learned based on the support set for classification.
+
+- MVGRL [14]: MVGRL learns node and graph level representations by contrasting the representations of two structural views of graphs, which include first-order neighbors and a graph diffusion. It utilizes a Jensen-Shannon Divergence based contrastive loss ${L}_{JSD}$ .
+
+- GraphCL [15]: GraphCL proposes to leverage combinations of different transformations in GCL to equip GNNs with generalizability, transferability, and robustness without sophisticated architectures. It also uses ${L}_{JSD}$ as the objective.
+
+- GRACE [16]: GRACE proposes a hybrid scheme for generating different graph views at both the structure and attribute levels, together with theoretical justifications for its design. It adopts a variant of Information Noise Contrastive Estimation ${L}_{\text{InfoNCE }}$ as the contrastive loss (a generic sketch of an InfoNCE-style objective is given after this list).
+
+- MERIT [17]: MERIT employs two different objectives named cross-view and cross-network contrastiveness to further maximize the agreement between node representations across different views and networks. It uses ${L}_{\text{InfoNCE }}$ similar to that in GRACE as the loss function.
+
+- SUGRL [19]: SUGRL proposes to simultaneously enlarge inter-class variation and reduce intra-class variation. The experimental results show promising improvements of generalization error with SUGRL. It also uses ${L}_{\text{InfoNCE }}$ similar to that in GRACE as the loss function.
+
+- BGRL [39]: BGRL leverages the concept of BYOL [44] and applies it to graph-structured data by enforcing agreement between positive views without any explicit design of negative views. Specifically, it uses the Mean Squared Error ${L}_{MSE}$ between positive views as the final loss.
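
For reference, a generic sketch of an InfoNCE-style contrastive objective between two augmented views is given below; the exact losses used by GRACE, MERIT, and SUGRL differ in their view generation and negative handling, so this is only an illustration.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """A simplified InfoNCE-style loss between two augmented views.

    z1, z2: (n, d) node embeddings from two views; row i of z1 and row i of
    z2 form a positive pair, all other rows act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (n, n) scaled cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    # Cross-entropy pushes each positive pair above all of its negatives.
    return F.cross_entropy(logits, labels)
```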
+
+## F Statistics of Benchmark Datasets
+
+Table 3: Statistics of node classification datasets.
+
+| Dataset | #Nodes | #Edges | #Features | #Classes | $\left| {\mathbb{C}}_{train}\right|$ | $\left| {\mathbb{C}}_{dev}\right|$ | $\left| {\mathbb{C}}_{test}\right|$ |
| CoraFull | 19,793 | 63,421 | 8,710 | 70 | 40 | 15 | 15 |
| ogbn-arxiv | 169,343 | 1,166,243 | 128 | 40 | 20 | 10 | 10 |
| Coauthor-CS | 18,333 | 81,894 | 6,805 | 15 | 5 | 5 | 5 |
| Amazon-Computer | 13,752 | 245,861 | 767 | 10 | 4 | 3 | 3 |
| Cora | 2,708 | 5,278 | 1,433 | 7 | 3 | 2 | 2 |
| CiteSeer | 3,327 | 4,552 | 3,703 | 6 | 2 | 2 | 2 |
+
+## G Description of Benchmark Datasets
+
+In this section, we provide the detailed descriptions of the benchmark datasets used in our experiments. All the datasets are public and available on both PyTorch-Geometric [45] and DGL [46].
+
+- CoraFull [40] is a citation network that extends the prevalent small Cora network. Specifically, it is derived from the entire citation network, where nodes are papers and edges denote citation relations. The classes of nodes are assigned based on paper topics. For this dataset, we use ${40}/{15}/{15}$ node classes for ${\mathbb{C}}_{\text{train }}/{\mathbb{C}}_{\text{dev }}/{\mathbb{C}}_{\text{test }}$ .
+
+- ogbn-arxiv [41] is a directed citation network that consists of CS papers from MAG [47]. Here nodes represent CS arXiv papers, and edges denote the citation relations. The classes of nodes are assigned based on the 40 subject areas of CS papers in arXiv. For this dataset, we use 20/10/10 node classes for ${\mathbb{C}}_{\text{train }}/{\mathbb{C}}_{\text{dev }}/{\mathbb{C}}_{\text{test }}$ .
+
+- Coauthor-CS [42] is a co-authorship graph based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. Here, nodes are authors, and are connected by an edge if they co-authored a paper; node features represent paper keywords for each author's papers, and class labels indicate most active fields of study for each author. For this dataset, we use 5/5/5 node classes for ${\mathbb{C}}_{\text{train }}/{\mathbb{C}}_{\text{dev }}/{\mathbb{C}}_{\text{test }}$ .
+
+- Amazon-Computer [42] includes segments of the Amazon co-purchase graph [48], where nodes represent goods, edges indicate that two goods are frequently bought together, node features are bag-of-words encoded product reviews, and class labels are given by the product category. For this dataset, we use $4/3/3$ node classes for ${\mathbb{C}}_{\text{train }}/{\mathbb{C}}_{\text{dev }}/{\mathbb{C}}_{\text{test }}$ .
+
+- Cora [43] is a citation network dataset where nodes represent papers and edges represent citation relationships. Each node has a predefined feature vector with 1,433 dimensions. The dataset is designed for the node classification task, i.e., predicting the category of a given paper. For this dataset, we use $3/2/2$ node classes for ${\mathbb{C}}_{\text{train }}/{\mathbb{C}}_{\text{dev }}/{\mathbb{C}}_{\text{test }}$ .
+
+- CiteSeer [43] is also a citation network dataset where nodes represent scientific publications and edges represent citation relationships. Each node has a predefined feature vector with 3,703 dimensions. The dataset is designed for the node classification task, i.e., predicting the category of a given publication. For this dataset, we use $2/2/2$ node classes for ${\mathbb{C}}_{\text{train }}/{\mathbb{C}}_{\text{dev }}/{\mathbb{C}}_{\text{test }}$ .
+
+## H Implementation Details
+
+In this section, we introduce the implementation details for all methods compared in our experiments. Specifically, for the encoders used in TLP methods, we follow the settings in the original papers of the corresponding models to ensure consistency, and we choose Logistic Regression as the linear classifier for the final classification. For encoders in meta-learning methods, we utilize the original designs for papers using GNNs. For papers without using GNNs (i.e., ProtoNet [26] and MAML [27]), we use a two-layer GCN [1] as the encoder with a hidden size of 16. We utilize the Adam optimizer [49] for all experiments with a learning rate of 0.001. To effectively initialize the GNNs in our experiments, we leverage the Xavier initialization [50]. For meta-learning methods using the MAML framework, we set the number of meta-update steps as 20 with a meta-learning rate of 0.05. To ensure more stable convergence in meta-learning methods, we set the weight decay rate as ${10}^{-4}$. We set the dropout rate as 0.5 for better generalization performance. The evaluation protocol parameters are provided in Table 2. All experiments are implemented using PyTorch [51]. We run all experiments on a single 80GB Nvidia A100 GPU.
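
A minimal sketch of this setup (two-layer GCN with hidden size 16, dropout 0.5, Xavier initialization, and an Adam optimizer with learning rate 0.001 and weight decay ${10}^{-4}$) is shown below, assuming PyTorch Geometric; it is illustrative rather than the released training code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNEncoder(torch.nn.Module):
    """A two-layer GCN encoder with hidden size 16 and dropout 0.5."""

    def __init__(self, in_dim, hidden_dim=16, dropout=0.5):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.dropout = dropout
        # Xavier initialization for all weight matrices in the encoder.
        for p in self.parameters():
            if p.dim() > 1:
                torch.nn.init.xavier_uniform_(p)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=self.dropout, training=self.training)
        return self.conv2(h, edge_index)

# Adam optimizer with learning rate 0.001 and weight decay 1e-4, as described above.
encoder = GCNEncoder(in_dim=1433)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3, weight_decay=1e-4)
```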
+
+## I More Results
+
+### I.1 Main Results for the Other Three Datasets or Other Settings
+
+In this section, we further provide results for the other three datasets used in our experiments: Coauthor-CS, Amazon-Computer, and Cora, and 2-way classification results on CoraFull, ogbn-arxiv, and Coauthor-CS:
+
+Table 4: The overall few-shot node classification results of meta-learning methods and TLP with different GCL methods under different settings. Accuracy $\left( \uparrow \right)$ and confidence interval $\left( \downarrow \right)$ are in $\%$ . The best and second best results are bold and underlined, respectively.
+
+| Dataset | Coauthor-CS | Amazon-Computer | Cora |
| Setting | 5-way 1-shot | 5-way 5-shot | 2-way 1-shot | 2-way 5-shot | 2-way 1-shot | 2-way 5-shot |
| Meta-learning |
| MAML | ${27.98} \pm {1.42}$ | ${42.12} \pm {1.40}$ | ${52.67} \pm {2.11}$ | ${58.23} \pm {2.53}$ | ${53.13} \pm {2.26}$ | ${57.39} \pm {2.23}$ |
| ProtoNet | ${32.13} \pm {1.52}$ | ${49.25} \pm {1.50}$ | ${61.98} \pm {2.95}$ | ${70.20} \pm {2.64}$ | ${53.04} \pm {2.36}$ | ${57.92} \pm {2.34}$ |
| Meta-GNN | ${52.86} \pm {2.14}$ | ${68.59} \pm {1.49}$ | ${65.19} \pm {3.29}$ | ${78.65} \pm {3.12}$ | ${65.27} \pm {2.93}$ | ${72.51} \pm {1.91}$ |
| GPN | ${60.66} \pm {2.07}$ | ${81.79} \pm {1.18}$ | ${57.26} \pm {1.50}$ | ${77.63} \pm {2.91}$ | ${62.61} \pm {2.71}$ | ${76.39} \pm {2.33}$ |
| AMM-GNN | ${62.04} \pm {2.26}$ | ${81.78} \pm {1.24}$ | ${71.04} \pm {3.56}$ | ${79.21} \pm {3.38}$ | ${65.23} \pm {2.67}$ | ${82.30} \pm {2.07}$ |
| G-Meta | ${59.68} \pm {2.16}$ | ${74.18} \pm {1.29}$ | ${63.68} \pm {3.05}$ | ${70.21} \pm {3.16}$ | ${67.03} \pm {3.22}$ | ${80.05} \pm {1.98}$ |
| TENT | ${63.70} \pm {1.88}$ | ${76.90} \pm {1.19}$ | ${71.15} \pm {3.11}$ | ${79.25} \pm {2.61}$ | ${53.05} \pm {2.78}$ | ${62.15} \pm {2.13}$ |
| TLP with Supervised GCL |
| I-GNN | ${43.89} \pm {1.82}$ | ${55.93} \pm {1.46}$ | ${62.32} \pm {2.89}$ | ${72.81} \pm {2.93}$ | ${54.45} \pm {3.13}$ | ${65.18} \pm {2.21}$ |
| MVGRL | ${62.16} \pm {2.05}$ | ${84.79} \pm {1.13}$ | ${64.69} \pm {2.84}$ | ${84.84} \pm {2.10}$ | ${57.24} \pm {2.07}$ | ${78.04} \pm {2.08}$ |
| GraphCL | ${54.72} \pm {2.62}$ | ${84.02} \pm {1.23}$ | ${75.65} \pm {3.05}$ | ${88.31} \pm {1.86}$ | ${57.10} \pm {2.27}$ | ${79.53} \pm {1.98}$ |
| GRACE | ${76.48} \pm {1.95}$ | ${90.22} \pm {0.84}$ | ${75.57} \pm {3.01}$ | ${87.69} \pm {2.17}$ | ${66.79} \pm {2.96}$ | ${89.77} \pm {1.59}$ |
| MERIT | ${71.70} \pm {2.88}$ | ${91.54} \pm {0.75}$ | ${72.10} \pm {3.86}$ | ${94.56} \pm {1.19}$ | ${65.29} \pm {3.23}$ | ${91.02} \pm {2.00}$ |
| SUGRL | ${84.78} \pm {1.47}$ | ${93.01} \pm {0.62}$ | ${71.42} \pm {2.68}$ | ${84.12} \pm {0.75}$ | ${53.21} \pm {1.80}$ | ${57.64} \pm {1.79}$ |
| TLP with Self-supervised GCL |
| MVGRL | ${67.51} \pm {2.21}$ | ${88.72} \pm {1.04}$ | ${66.49} \pm {2.75}$ | ${86.31} \pm {2.09}$ | ${71.17} \pm {3.04}$ | ${89.91} \pm {1.45}$ |
| GraphCL | ${70.26} \pm {2.19}$ | ${87.32} \pm {1.19}$ | ${77.26} \pm {3.12}$ | ${94.13} \pm {1.34}$ | ${73.51} \pm {3.18}$ | ${92.38} \pm {1.30}$ |
| BGRL | ${64.72} \pm {2.35}$ | ${90.10} \pm {0.88}$ | ${68.58} \pm {3.06}$ | ${89.15} \pm {1.97}$ | ${60.14} \pm {2.33}$ | ${79.86} \pm {1.92}$ |
| GRACE | ${79.38} \pm {1.75}$ | ${91.68} \pm {0.72}$ | ${75.23} \pm {2.59}$ | ${90.48} \pm {1.24}$ | ${71.21} \pm {2.97}$ | ${89.68} \pm {1.65}$ |
| MERIT | ${85.74} \pm {1.70}$ | ${95.78} \pm {0.61}$ | ${78.14} \pm {3.82}$ | ${95.98} \pm {1.38}$ | ${67.67} \pm {2.99}$ | ${95.42} \pm {1.21}$ |
| SUGRL | ${91.63} \pm {1.22}$ | ${96.30} \pm {0.51}$ | ${85.05} \pm {2.23}$ | ${97.15} \pm {0.81}$ | ${82.35} \pm {2.21}$ | ${92.22} \pm {1.15}$ |
+
+Table 5: The overall few-shot node classification results of meta-learning methods and TLP with different GCL methods under different settings. Accuracy $\left( \uparrow \right)$ and confidence interval $\left( \downarrow \right)$ are in $\%$ . The best and second best results are bold and underlined, respectively.
+
+| Dataset | CoraFull | ogbn-arxiv | Coauthor-CS |
| Setting | 2-way 1-shot | 2-way 5-shot | 2-way 1-shot | 2-way 5-shot | 2-way 1-shot | 2-way 5-shot |
| Meta-learning |
| MAML | ${50.90} \pm {2.30}$ | ${56.19} \pm {2.37}$ | ${58.16} \pm {2.35}$ | ${65.10} \pm {2.56}$ | ${56.90} \pm {2.41}$ | ${66.78} \pm {2.35}$ |
| ProtoNet | ${57.10} \pm {2.47}$ | ${72.71} \pm {2.55}$ | ${62.56} \pm {2.86}$ | ${75.82} \pm {2.79}$ | ${59.92} \pm {2.70}$ | ${71.69} \pm {2.51}$ |
| Meta-GNN | ${75.28} \pm {3.85}$ | ${84.59} \pm {2.89}$ | ${62.52} \pm {3.41}$ | ${70.15} \pm {2.68}$ | ${85.90} \pm {2.96}$ | ${90.11} \pm {2.17}$ |
| GPN | ${74.29} \pm {3.47}$ | ${85.58} \pm {2.53}$ | ${64.00} \pm {3.71}$ | ${76.78} \pm {3.50}$ | ${84.31} \pm {2.73}$ | ${90.36} \pm {1.90}$ |
| AMM-GNN | ${77.29} \pm {3.40}$ | ${88.66} \pm {2.06}$ | ${64.68} \pm {3.13}$ | ${78.42} \pm {2.71}$ | ${84.38} \pm {2.85}$ | ${94.74} \pm {1.20}$ |
| G-Meta | ${78.23} \pm {3.41}$ | ${89.49} \pm {2.04}$ | ${63.03} \pm {3.32}$ | ${76.56} \pm {2.89}$ | ${84.19} \pm {2.97}$ | ${91.02} \pm {1.61}$ |
| TENT | ${77.75} \pm {3.29}$ | ${88.20} \pm {2.61}$ | ${70.30} \pm {2.85}$ | ${81.35} \pm {2.77}$ | ${87.85} \pm {2.48}$ | ${91.75} \pm {1.60}$ |
| Supervised GCL |
| I-GNN | ${68.43} \pm {2.94}$ | ${78.20} \pm {2.83}$ | ${65.21} \pm {2.86}$ | ${77.10} \pm {2.46}$ | ${65.35} \pm {3.09}$ | ${76.83} \pm {2.48}$ |
| MVGRL | ${65.62} \pm {3.11}$ | ${84.41} \pm {2.35}$ | OOM | OOM | ${78.08} \pm {3.59}$ | ${91.78} \pm {1.66}$ |
| GraphCL | ${60.81} \pm {2.23}$ | ${81.25} \pm {2.29}$ | OOM | OOM | ${74.16} \pm {2.88}$ | ${88.43} \pm {1.73}$ |
| GRACE | ${76.78} \pm {3.49}$ | ${93.62} \pm {1.32}$ | OOM | OOM | ${86.22} \pm {2.53}$ | ${94.11} \pm {1.27}$ |
| MERIT | ${75.52} \pm {6.53}$ | ${88.03} \pm {5.11}$ | OOM | OOM | ${77.52} \pm {7.58}$ | ${96.62} \pm {2.12}$ |
| SUGRL | ${75.98} \pm {2.98}$ | ${90.02} \pm {1.53}$ | ${73.48} \pm {2.55}$ | ${81.04} \pm {1.68}$ | ${88.45} \pm {1.62}$ | ${95.10} \pm {0.56}$ |
| Self-supervised GCL |
| MVGRL | ${78.81} \pm {3.32}$ | ${91.03} \pm {1.80}$ | OOM | OOM | ${78.59} \pm {2.92}$ | ${93.54} \pm {1.40}$ |
| GraphCL | ${78.49} \pm {3.26}$ | ${91.32} \pm {2.11}$ | OOM | OOM | ${78.51} \pm {3.12}$ | ${91.34} \pm {1.57}$ |
| BGRL | ${61.08} \pm {2.65}$ | ${85.03} \pm {2.25}$ | ${59.91} \pm {2.36}$ | ${76.75} \pm {0.86}$ | ${76.85} \pm {3.23}$ | ${94.69} \pm {1.29}$ |
| GRACE | ${82.80} \pm {3.13}$ | ${93.06} \pm {2.17}$ | OOM | OOM | ${89.46} \pm {2.26}$ | ${95.53} \pm {1.05}$ |
| MERIT | ${77.46} \pm {3.14}$ | ${94.65} \pm {1.31}$ | OOM | OOM | ${94.31} \pm {1.73}$ | ${98.35} \pm {0.57}$ |
| SUGRL | ${87.98} \pm {2.72}$ | ${95.81} \pm {1.69}$ | ${82.45} \pm {2.94}$ | ${91.68} \pm {1.57}$ | ${96.81} \pm {1.31}$ | ${98.90} \pm {0.48}$ |
+
+
+
+Figure 8: $N$ -way $K$ -shot results on Coauthor-CS, meta-learning and TLP. TLP Methods with $*$ are based on supervised GCL methods and I-GNN.
+
+
+
+Figure 9: $N$ -way $K$ -shot results on CoraFull, TLP with self-supervised and supervised GCL. TLP Methods with $*$ are based on supervised GCL methods.
+
+
+
+Figure 10: $N$ -way $K$ -shot results on Coauthor-CS, TLP with self-supervised and supervised GCL. TLP Methods with $*$ are based on supervised GCL methods.
+
+
+
+Figure 11: 2-way $K$ -shot results on Amazon-Computer and CiteSeer, TLP with self-supervised and supervised GCL. TLP Methods with $*$ are based on supervised GCL methods.
+
+### I.2 Visualization
+
+In this section, we provide additional visualization results for more meta-learning and TLP methods on the CoraFull dataset in Fig. 12.
+
+
+
+Figure 12: The t-SNE visualization results of meta-learning and TLP methods on CoraFull. TLP methods with $*$ are based on supervised GCL methods.
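
For completeness, a minimal sketch of how such a visualization could be produced by running t-SNE on the learned node embeddings is given below; it illustrates the procedure, not the exact plotting script.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(embeddings, labels, out_path="tsne.png"):
    """Project node embeddings to 2-D with t-SNE and color points by class.

    embeddings: (n, d) array of node representations from a trained encoder
    labels:     (n,)   class indices, used only for coloring
    """
    coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)
    plt.figure(figsize=(4, 4))
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab10", s=8)
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight")
```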
+
+### I.3 Node Representation Evaluation
+
+In this section, we provide detailed node representation evaluations on the two datasets CoraFull and CiteSeer based on NMI and ARI scores in Table 6.
+
+Table 6: The overall NMI $\left( \uparrow \right)$ and ARI $\left( \uparrow \right)$ results of meta-learning and TLP methods on two datasets
+
+| Dataset | CoraFull | CiteSeer |
| Metrics | NMI | ARI | NMI | ARI |
| Meta-learning |
| MAML | 0.1622 | 0.0597 | 0.0754 | 0.0602 |
| ProtoNet | 0.2669 | 0.1263 | 0.0915 | 0.0765 |
| AMM-GNN | 0.6247 | 0.5087 | 0.2090 | 0.1781 |
| G-Meta | 0.5003 | 0.3702 | 0.1913 | 0.1502 |
| Meta-GNN | 0.5534 | 0.4196 | 0.1317 | 0.1171 |
| GPN | 0.6001 | 0.4599 | 0.2119 | 0.2087 |
| TENT | 0.5760 | 0.4652 | 0.0930 | 0.0811 |
| Supervised GCL |
| GRACE | 0.7199 | 0.6239 | 0.4693 | 0.4769 |
| MERIT | 0.6119 | 0.4470 | 0.3471 | 0.3482 |
| GraphCL | 0.2474 | 0.0852 | 0.1321 | 0.0711 |
| SUGRL | 0.7298 | 0.6626 | 0.3927 | 0.4451 |
| MVGRL | 0.6412 | 0.5038 | 0.2445 | 0.2146 |
| Self-supervised GCL |
| GRACE | 0.6781 | 0.5856 | 0.2663 | 0.2778 |
| MERIT | 0.7419 | 0.6590 | 0.3923 | 0.4014 |
| GraphCL | 0.7023 | 0.5628 | 0.5579 | 0.5890 |
| SUGRL | 0.7680 | 0.7049 | 0.3952 | 0.4460 |
| MVGRL | 0.6227 | 0.4788 | 0.2554 | 0.2232 |
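
For reference, a minimal sketch of how such NMI and ARI scores could be computed from learned node embeddings, assuming scikit-learn's K-Means and clustering metrics; it illustrates the evaluation procedure rather than reproducing the exact script.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_scores(embeddings, labels, seed=0):
    """Cluster node embeddings with K-Means and score against the true labels.

    The number of clusters is set to the number of ground-truth classes,
    mirroring the NMI/ARI evaluation described above.
    """
    n_clusters = len(np.unique(labels))
    pred = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(embeddings)
    nmi = normalized_mutual_info_score(labels, pred)
    ari = adjusted_rand_score(labels, pred)
    return nmi, ari
```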
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dK8vOIBENa3/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/dK8vOIBENa3/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..4fe4cad88f015b65a58e7318cd0418627df54870
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dK8vOIBENa3/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,272 @@
+§ TRANSDUCTIVE LINEAR PROBING: A NOVEL PARADIGM FOR FEW-SHOT NODE CLASSIFICATION
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Few-shot node classification aims to provide accurate predictions for nodes from novel classes with only a few representative labeled nodes. This problem has drawn tremendous attention for its relevance to prevailing real-world applications, such as product categorization for newly added commodity categories on an E-commerce platform with scarce records or diagnosis for rare diseases on a patient similarity graph. To tackle such challenging label scarcity issues in the non-Euclidean graph domain, meta-learning has become a successful and predominant paradigm. More recently, inspired by the development of few-shot learning in the image domain, transferring pretrained node embeddings for few-shot node classification could be a promising alternative to meta-learning but remains largely unexplored. In this work, we empirically demonstrate the potential of an alternative paradigm, Transductive Linear Probing, that transfers pretrained node embeddings, which are learned from graph contrastive learning methods. We further extend the setting of few-shot node classification from the standard fully supervised setting to a more realistic self-supervised setting, where meta-learning methods cannot be easily deployed due to the shortage of supervision from training classes. Surprisingly, even without any ground-truth labels, transductive linear probing with self-supervised graph contrastive pretraining can outperform the state-of-the-art fully supervised meta-learning based methods under the same protocol. We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs.
+
+§ 1 INTRODUCTION
+
+Graph Neural Networks (GNNs) [1-4] are a family of neural network models designed for graph-structured data. In this work, we concentrate on GNNs for the node classification task, where GNNs recurrently aggregate neighborhoods to simultaneously preserve graph structure information and learn node representations. However, most GNN models focus on the (semi-)supervised learning setting, assuming access to abundant labels. This assumption could be practically infeasible due to the high cost of data collection and labeling, especially for large graphs. Moreover, recent works have manifested that directly training GNNs with limited nodes can result in severe performance degradation [5-7]. Such a challenge has led to a proliferation of studies [8-10] that try to learn fast-adaptable GNNs with extremely scarce known labels, i.e., Few-Shot Node Classification (FSNC) tasks. Particularly, in FSNC, there exist two disjoint label spaces: base classes are assumed to contain substantial labeled nodes while target novel classes only contain few available labeled nodes. If the target FSNC task contains $N$ novel classes with $K$ labeled nodes in each class, the problem is denoted as an $N$ -way $K$ -shot node classification task. Here the $K$ labeled nodes are termed as a support set, and the unlabeled nodes are termed as a query set for evaluation.
+
+Currently, meta-learning has become a prevailing and successful paradigm to tackle such a shortage of labels on graphs. Inspired by the way humans learn unseen classes with few samples via utilizing previously learned prior knowledge, a typical meta-learning based framework will randomly sample a number of episodes, or meta-tasks, to emulate the target $N$ -way $K$ -shot setting [5]. Based on this principle, various models [5-10] have been proposed, which makes meta-learning a plausible default
+
+choice for FSNC tasks. On the other hand, despite the remarkable breakthroughs that have been made, meta-learning based methods still have several limitations. First, relying on different arbitrarily sampled meta-tasks to extract transferable meta-knowledge, meta-learning based frameworks suffer from the piecemeal knowledge issue [11]. That is, only a small portion of the nodes and classes are selected per episode for training, which leads to an undesired loss of generalizability of the learned GNNs regarding nodes from unseen novel classes. Second, the feasibility of sampling meta-tasks rests on the assumption that there exist sufficient base classes where substantial labeled nodes are accessible. However, this assumption can be easily overturned for real-world graphs where the number of base classes can be limited, or the labels of nodes in base classes can be inaccessible. In a nutshell, these two concerns motivate us to design an alternative paradigm to meta-learning to cover more realistic scenarios.
+
+Inspired by $\left\lbrack {{12},{13}}\right\rbrack$ , we postulate that the key to solving FSNC is to learn a generalizable GNN encoder. We validate this postulation by a motivating example in Section 2.3. Then, without the episodic emulation, the proposed novel paradigm, Transductive Linear Probing (TLP), directly transfers pretrained node embeddings for nodes in novel classes learned from Graph Contrastive Learning (GCL) methods [14-19], and fine-tunes a separate linear classifier with the support set to predict labels for unlabeled nodes. GCL methods are proven to learn generalizable node embeddings by maximizing the representation consistency under different augmented views [14, 15, 20]. If the representations of nodes in novel classes are discriminative enough, probing them with a simple linear classifier should provide decent accuracy. Based on this intuition, we propose two instantiations of the TLP paradigm in this paper: TLP with the self-supervised form of GCL methods and TLP with the supervised GCL counterparts. We evaluate TLP by transferring node embeddings from various GCL methods to the linear classifier and compare TLP with meta-learning based methods under the same evaluation protocol. Moreover, we examine the effect of supervision during GCL pretraining for target FSNC tasks to further analyze what role labels from base classes play in TLP.
+
+Throughout this paper, we aim to shed new light on the few-shot node classification problem through the lens of empirical evaluations of both the "old" meta-learning paradigm and the "new" transductive linear probing paradigm. The summary of our contributions is as follows:
+
+New Paradigm We are the first to break with convention and precedent to propose a novel paradigm, transductive linear probing, as a competitive alternative to meta-learning for FSNC tasks.
+
+Comprehensive Study We perform a comprehensive review of the current literature and conduct a large-scale study on six widely-used real-world datasets that cover different scenarios in FSNC: (1) a sufficient number of base classes with substantial labeled nodes in each class, (2) a sufficient number of base classes with no labeled nodes in each class, (3) a limited number of base classes with substantial labeled nodes in each class, and (4) a limited number of base classes with no labeled nodes in each class. We evaluate all the compared methods under the same protocol.
+
+Findings We demonstrate that despite the recent advances in few-shot node classification, meta-learning based methods struggle to outperform TLP methods. Moreover, the TLP-based methods with self-supervised GCL can outperform their supervised counterparts and those meta-learning based methods even if all the labels from base classes are inaccessible. This signifies that without label information, self-supervised GCL can focus more on node-level structural information, which results in better node representations. However, TLP also inherits the scalability limitation of GCL due to its large memory consumption, which makes it hard to deploy on extremely large graphs. Based on these observations, we identify improving adaptability and improving scalability as the promising directions for meta-learning based and TLP-based methods, respectively.
+
+Our implementations for experiments are released ${}^{1}$ . We hope to facilitate the sharing of insights and accelerate the progress on the goal of learning from scarcely labeled instances on graphs.
+
+§ 2 PRELIMINARIES
+
+§ 2.1 PROBLEM STATEMENT
+
+Formally, given an attributed network $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right) = \left( {\mathbf{A},\mathbf{X}}\right)$ , where $\mathcal{V}$ denotes the set of nodes $\left\{ {v}_{1},{v}_{2},\ldots ,{v}_{n}\right\}$ , $\mathcal{E}$ denotes the set of edges $\left\{ {e}_{1},{e}_{2},\ldots ,{e}_{m}\right\}$ , $\mathbf{X} = \left\lbrack {\mathbf{x}}_{1};{\mathbf{x}}_{2};\ldots ;{\mathbf{x}}_{n}\right\rbrack \in {\mathbb{R}}^{n \times d}$ denotes all the node features, and $\mathbf{A} \in \{ 0,1\} ^{n \times n}$ is the adjacency matrix representing the network structure. Specifically, ${\mathbf{A}}_{j,k} = 1$ indicates that there is an edge between node ${v}_{j}$ and node ${v}_{k}$ ; otherwise, ${\mathbf{A}}_{j,k} = 0$ . The few-shot node classification problem assumes that there exists a series of target node classification tasks, $\mathcal{T} = {\left\{ {\mathcal{T}}_{i}\right\} }_{i = 1}^{I}$ , where ${\mathcal{T}}_{i}$ denotes the given dataset of a task, and $I$ denotes the number of such tasks. We term the classes of nodes available during training as base classes (i.e., ${\mathbb{C}}_{\text{ base }}$ ) and the classes of nodes in the target test phase as novel classes (i.e., ${\mathbb{C}}_{\text{ novel }}$ ), with ${\mathbb{C}}_{\text{ base }} \cap {\mathbb{C}}_{\text{ novel }} = \varnothing$ . Notably, under different settings, labels of nodes in the base classes ${\mathbb{C}}_{\text{ base }}$ may or may not be available during training. Conventionally, there are only few labeled nodes for the novel classes ${\mathbb{C}}_{\text{ novel }}$ during the test phase. The problem of few-shot node classification is defined as follows:
+
+${}^{1}$ https://github.com/anonymous-LoG22/TLP-FSNC.git
+
+Definition 1. Few-shot Node Classification: Given an attributed graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{X}}\right)$ with a divided node label space $\mathbb{C} = \left\{ {{\mathbb{C}}_{\text{ base }},{\mathbb{C}}_{\text{ novel }}}\right\}$ , we only have few-shot labeled nodes (support set $\mathbb{S}$ ) for ${\mathbb{C}}_{\text{ novel }}$ . The task $\mathcal{T}$ is to predict the labels for unlabeled nodes (query set $\mathbb{Q}$ ) from ${\mathbb{C}}_{\text{ novel }}$ . If the support set in each target (test) task has $N$ novel classes with $K$ labeled nodes, then we term this task an $N$ -way $K$ -shot node classification task.
+
+The goal of few-shot node classification is to learn an encoder that can transfer the topological and semantic knowledge learned from substantial data in base classes $\left( {\mathbb{C}}_{\text{ base }}\right)$ and generate discriminative embeddings for nodes from novel classes $\left( {\mathbb{C}}_{\text{ novel }}\right)$ with limited labeled nodes.
+
+§ 2.2 EPISODIC META-LEARNING FOR FEW-SHOT NODE CLASSIFICATION.
+
+Episodic meta-learning is a proven effective paradigm for few-shot learning tasks [21-27]. The main idea is to train the neural networks in a way that emulates the evaluation conditions. This is hypothesized to be beneficial for the prediction performance on test tasks [21-23]. Based on this philosophy, many recent works in few-shot node classification [6, 8-10, 28-32] successfully transfer the idea to the graph domain. It works as follows: during the training phase, it generates a number of meta-train tasks (or episodes) ${\mathcal{T}}_{tr}$ from ${\mathbb{C}}_{\text{ base }}$ to emulate the test tasks, following their $N$ -way $K$ -shot node classification specifications:
+
+$$
+{\mathcal{T}}_{tr} = {\left\{ {\mathcal{T}}_{t}\right\} }_{t = 1}^{T} = \left\{ {{\mathcal{T}}_{1},{\mathcal{T}}_{2},\ldots ,{\mathcal{T}}_{T}}\right\} ,
+$$
+
+$$
+{\mathcal{T}}_{t} = \left\{ {{\mathcal{S}}_{t},{\mathcal{Q}}_{t}}\right\} \tag{1}
+$$
+
+$$
+{\mathcal{S}}_{t} = \left\{ {\left( {{v}_{1},{y}_{1}}\right) ,\left( {{v}_{2},{y}_{2}}\right) ,\ldots ,\left( {{v}_{N \times K},{y}_{N \times K}}\right) }\right\} ,
+$$
+
+$$
+{\mathcal{Q}}_{t} = \left\{ {\left( {{v}_{1},{y}_{1}}\right) ,\left( {{v}_{2},{y}_{2}}\right) ,\ldots ,\left( {{v}_{N \times M},{y}_{N \times M}}\right) }\right\} .
+$$
+
+For a typical meta-learning based method, in each episode, $K$ labeled nodes are randomly sampled from $N$ base classes, forming a support set, to train the GNN model while emulating the $N$ -way $K$ -shot node classification in the test phase. Then GNN predicts labels for an emulated query set of nodes randomly sampled from the same classes as the support set. The Cross-Entropy Loss $\left( {L}_{CE}\right)$ is calculated to optimize the GNN encoder ${g}_{\theta }$ and the classifier ${f}_{\psi }$ in an end-to-end fashion:
+
+$$
+\theta ,\psi = \arg \mathop{\min }\limits_{{\theta ,\psi }}{L}_{CE}\left( {{\mathcal{T}}_{t};\theta ,\psi }\right) . \tag{2}
+$$
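
For concreteness, a minimal sketch of how an $N$ -way $K$ -shot meta-task with $M$ queries per class could be sampled from a pool of classes is given below; the data structures (a label array and a class pool) are assumptions for illustration, not the exact sampler used in our experiments.

```python
import numpy as np

def sample_meta_task(labels, class_pool, n_way=5, k_shot=5, m_query=10, seed=None):
    """Sample one N-way K-shot meta-task with M queries per class.

    labels:     (n,) array of node class labels
    class_pool: iterable of class ids to draw from (e.g., the base classes)
    Returns node-index arrays (support_idx, support_y, query_idx, query_y),
    with class labels remapped to [0, n_way).
    """
    rng = np.random.default_rng(seed)
    task_classes = rng.choice(list(class_pool), size=n_way, replace=False)
    support_idx, support_y, query_idx, query_y = [], [], [], []
    for new_label, c in enumerate(task_classes):
        nodes = np.flatnonzero(labels == c)
        picked = rng.choice(nodes, size=k_shot + m_query, replace=False)
        support_idx.extend(picked[:k_shot])
        support_y.extend([new_label] * k_shot)
        query_idx.extend(picked[k_shot:])
        query_y.extend([new_label] * m_query)
    return (np.array(support_idx), np.array(support_y),
            np.array(query_idx), np.array(query_y))
```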
+
+Based on this, Meta-GNN [28] combines MAML [27] with GNNs to achieve optimization for different meta-tasks. GPN [6] applies ProtoNet [26] and computes node importance for a transferable metric function. G-Meta [8] aims to establish a local subgraph for each node to achieve fast adaptations to new meta-tasks. RALE [29] obtains relative and absolute node embeddings based on node positions on graphs to model node dependencies in each meta-task. An exhaustive survey is beyond the scope of this paper; see [33] for an overview. However, all those methods are evaluated on different datasets, each with its own evaluation protocol, which fragments the practical knowledge of how meta-learning performs with few labeled nodes and makes it hard to compare these methods directly. To bridge this gap, in this paper, we conduct extensive experiments to compare new advances and prior works for FSNC tasks uniformly and comprehensively.
+
+§ 2.3 A MOTIVATING EXAMPLE AND PRELIMINARY ANALYSIS
+
+More recently, related works in the image domain demonstrate that the reason for the fast adaptation lies in feature reuse rather than those complicated meta-learning algorithms [12, 13]. In other words, with a carefully pretrained encoder, decent performance can be obtained through directly fine-tuning a simple classifier on the target task. However, few such studies have been conducted in the graph domain, owing to a key difference from images: nodes in a graph are not i.i.d. Their interactive relationships
+
+are reflected by both the topological and semantic information. To validate this hypothesis on graphs, based on [13], we construct an Intransigent GNN model, namely I-GNN, that simply does not adapt to new tasks. We decouple the training procedure into two separate phases. In the first phase, a GNN encoder ${g}_{\theta }$ with a linear classifier ${f}_{\phi }$ is simply pretrained on all base classes ${\mathbb{C}}_{\text{ base }}$ with vanilla supervision through ${L}_{CE}$ :
+
+$$
+{\mathcal{T}}_{tr}^{\prime } = \cup {\left\{ {\mathcal{T}}_{t}\right\} }_{t = 1}^{T} = \cup \left\{ {{\mathcal{T}}_{1},{\mathcal{T}}_{2},\ldots ,{\mathcal{T}}_{T}}\right\} ,
+$$
+
+$$
+\theta ,\phi = \arg \mathop{\min }\limits_{{\theta ,\phi }}{L}_{CE}\left( {{\mathcal{T}}_{tr}^{\prime };\theta ,\phi }\right) + \mathcal{R}\left( \theta \right) , \tag{3}
+$$
+
+where $\mathcal{R}\left( \theta \right)$ is a weight-decay regularization term: $\mathcal{R}\left( \theta \right) = \| \theta \| ^{2}/2$ . Then, we freeze the parameters of the GNN encoder ${g}_{\theta }$ and discard the classifier ${f}_{\phi }$ . When fine-tuning on a target few-shot node classification task ${\mathcal{T}}_{i} = \left\{ {{\mathcal{S}}_{i},{\mathcal{Q}}_{i}}\right\}$ , the embeddings of all nodes from ${\mathcal{T}}_{i}$ are directly transferred from the pretrained GNN encoder ${g}_{\theta }$ . Then another linear classifier ${f}_{\psi }$ is introduced and tuned with the few-shot labeled nodes from the support set ${\mathcal{S}}_{i}$ to predict the labels of nodes in the query set ${\mathcal{Q}}_{i}$ :
+
+$$
+\psi = \arg \mathop{\min }\limits_{\psi }{L}_{CE}\left( {{\mathcal{S}}_{i};\theta ,\psi }\right) . \tag{4}
+$$
+
+Results and Analysis of the Intransigent GNN model I-GNN. We demonstrate the performance of the intransigent model and compare it with those meta-learning based models in Table 1, 5. Under the same evaluation protocol (defined in Section 3.2), the simple intransigent model I-GNN has very competitive performance with meta-learning based methods. On datasets (e.g., CiteSeer) where the number of base classes $\left| {\mathbb{C}}_{\text{ base }}\right|$ is limited, I-GNN consistently outperforms meta-learning based methods in terms of accuracy. This motivating example concludes that transferring node embeddings from the vanilla supervised training method I-GNN could be an alternative to meta-learning. Moreover, we take one step further and postulate that if more transferable node embeddings are obtained during pretraining, the performance on target FSNC tasks could be improved even more.
+
+
+Figure 1: The framework of TLP with supervised GCL: (a) Supervised GCL framework. (b) Fine-tuning on few-shot labeled nodes from novel classes with support and query sets. Colors indicate different classes (e.g., Neural Networks, SVM, Fair ML, Explainable AI). In particular, white nodes indicate that their labels are unavailable, while labels of all nodes in base classes are available. Different node types indicate whether nodes are from base classes or novel classes. The counterpart of TLP with self-supervised GCL is very similar to this, and a figure is included in Appendix B.
+
+§ 2.4 TRANSDUCTIVE LINEAR PROBING FOR FEW-SHOT NODE CLASSIFICATION.
+
+Inspired by the motivating example above, we generalize it to a new paradigm, Transductive Linear Probing (TLP), for few-shot node classification. The only difference between TLP and I-GNN is that the pretraining method can be an arbitrary strategy rather than the vanilla supervised learning. It can even be self-supervised training methods that do not have any requirement on base classes. In this way, the second line of Eq. (3) can be generalized to:
+
+$$
+\theta = \arg \mathop{\min }\limits_{\theta }{L}_{\text{ pretrain }}\left( {{\mathcal{T}}_{tr}^{\prime };\theta }\right) , \tag{5}
+$$
+
+where ${L}_{\text{ pretrain }}$ is an arbitrary loss function to pretrain the GNN encoder ${g}_{\theta }$ . Then following Eq. (4), we can exploit a linear classifier to probe the transferred embeddings of nodes from novel classes, and perform the final node classification.
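
A minimal sketch of this probing step is given below, assuming precomputed embeddings from a frozen pretrained encoder and a Logistic Regression probe as the choice of linear classifier; it illustrates the idea of Eq. (4)-(5) rather than reproducing our released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def transductive_linear_probe(node_emb, support_idx, support_y, query_idx):
    """Linear probing on frozen, pretrained node embeddings.

    node_emb: (n, d) embeddings produced once by the frozen GNN encoder.
    A separate linear classifier is fit on the few-shot support set and then
    used to predict the labels of the query nodes.
    """
    clf = LogisticRegression(max_iter=1000)
    clf.fit(node_emb[support_idx], support_y)
    return clf.predict(node_emb[query_idx])
```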
+
+In this paper, we thoroughly investigate Graph Contrastive Learning (GCL) as the pretraining strategy for TLP for two reasons: (1) GCL [14, 16, 17, 34, 35] is a proven effective way to learn generalizable node representations in either a supervised or self-supervised manner. By maximizing the consistency over differently transformed positive and negative examples (termed views), GCL encourages the GNNs to be aware of the semantic and topological knowledge and injected perturbations on graphs. Trained on the global structures, GCL should be capable of addressing the piecemeal knowledge issue in meta-learning to increase the generalizability of the learned GNNs. Also, [36] summarizes the characteristics of GCL frameworks and empirically demonstrates the transferability of the learned representations. (2) GCL has no requirement for the base classes, which means GCL can be deployed even when the number of base classes is limited, or the nodes in base classes are unlabeled. The effectiveness of GCL highly relies on the contrastive loss function. There are two categories of contrastive loss function for graphs: (1) Supervised Contrastive Loss $\left( {L}_{\text{ SupCon }}\right) \left\lbrack {{37},{38}}\right\rbrack$ . (2) Self-supervised Contrastive Loss: Information Noise Contrastive Estimation $\left( {L}_{InfoNCE}\right) \left\lbrack {{16},{17},{19}}\right\rbrack$ and Jensen-Shannon Divergence $\left( {L}_{JSD}\right) \left\lbrack {{14},{15}}\right\rbrack$ . We also consider a special GCL method, BGRL [18], which does not explicitly require negative examples. The framework for TLP with a representative supervised GCL method is provided in Fig. 1. From another perspective, our work is the first to focus on the extrapolation ability of GCL methods, especially under more extreme few-shot settings without labels for nodes in base classes.
+
+§ 3 EXPERIMENTAL STUDY
+
+§ 3.1 EXPERIMENTAL SETTINGS
+
+We conduct systematic experiments to compare the performance of meta-learning and TLP methods (with self-supervised and supervised GCL) on the few-shot node classification task. For meta-learning, we evaluate ProtoNet [26], MAML [27], Meta-GNN [28], G-Meta [8], GPN [6], AMM-GNN [7], and TENT [10]. For TLP methods with both self-supervised and supervised forms, we evaluate MVGRL [14], GraphCL [15], GRACE [16], MERIT [17], and SUGRL [19]. Moreover, BGRL [39] and I-GNN [13] are exclusively used for TLP methods with self-supervised GCL or supervised GCL, respectively. The detailed descriptions of these models can be found in Appendix E. For comprehensive studies, we benchmark those methods on six prevalent real-world graph datasets: CoraFull [40], ogbn-arxiv [41], Coauthor-CS [42], Amazon-Computer [42], Cora [43], and CiteSeer [43]. Specifically, each dataset is a connected graph and consists of multiple node classes for training and evaluation. A more detailed description of those datasets is provided in Appendix G with their statistics and class split policies in Table 3 in Appendix F.
+
+§ 3.2 EVALUATION PROTOCOL
+
+In this section, we specify the evaluation protocol used to compare both meta-learning based methods and TLP based methods. For an attributed graph dataset $\mathcal{G} = \left( {\mathbf{A},\mathbf{X}}\right)$ with a divided node label space $\mathbb{C} = \left\{ {{\mathbb{C}}_{\text{ base }},{\mathbb{C}}_{\text{ novel }}}\right\}$ (where ${\mathbb{C}}_{\text{ novel }}$ is also referred to as ${\mathbb{C}}_{\text{ test }}$ ), we split ${\mathbb{C}}_{\text{ base }}$ into ${\mathbb{C}}_{\text{ train }}$ and ${\mathbb{C}}_{\text{ dev }}$ (the split policy for each dataset is listed in Table 3). For evaluation, given a GNN encoder ${g}_{\theta }$ , a classifier ${f}_{\psi }$ , the validation epoch interval $V$ , the number of sampled meta-tasks for evaluation $I$ , the epoch patience $P$ , the maximum epoch number $E$ , the number of repeated experiments $R$ , and the $N$ -way, $K$ -shot, $M$ -query setting specification, the final FSNC accuracy $\mathcal{A}$ and the confidence interval $\mathcal{I}$ (the two metrics of main concern) are calculated according to Algorithm 1 in Appendix C. The default values of all those parameters are given in Table 2 in Appendix D.
+
+Table 1: The overall few-shot node classification results of meta-learning methods and TLP with various GCL methods under different settings. Accuracy $\left( \uparrow \right)$ and confidence interval $\left( \downarrow \right)$ are in $\%$ . The best and second best results are bold and underlined, respectively. OOM denotes out of memory.
+
+| Dataset | CoraFull | ogbn-arxiv | CiteSeer |
| Setting | 5-way 1-shot | 5-way 5-shot | 5-way 1-shot | 5-way 5-shot | 2-way 1-shot | 2-way 5-shot |
| Meta-learning |
| MAML [27] | ${22.63} \pm {1.19}$ | ${27.21} \pm {1.32}$ | ${27.36} \pm {1.48}$ | ${29.09} \pm {1.62}$ | ${52.39} \pm {2.20}$ | ${54.13} \pm {2.18}$ |
| ProtoNet [26] | ${32.43} \pm {1.61}$ | ${51.54} \pm {1.68}$ | ${37.30} \pm {2.00}$ | ${53.31} \pm {1.71}$ | ${52.51} \pm {2.44}$ | ${55.69} \pm {2.27}$ |
| Meta-GNN [28] | ${55.33} \pm {2.43}$ | ${70.50} \pm {2.02}$ | ${27.14} \pm {1.94}$ | ${31.52} \pm {1.71}$ | ${56.14} \pm {2.62}$ | ${67.34} \pm {2.10}$ |
| GPN [6] | ${52.75} \pm {2.32}$ | ${72.82} \pm {1.88}$ | ${37.81} \pm {2.34}$ | ${50.50} \pm {2.13}$ | ${53.10} \pm {2.39}$ | ${63.09} \pm {2.50}$ |
| AMM-GNN [7] | ${58.77} \pm {2.49}$ | ${75.61} \pm {1.78}$ | ${33.92} \pm {1.80}$ | ${48.94} \pm {1.87}$ | ${54.53} \pm {2.51}$ | ${62.93} \pm {2.42}$ |
| G-Meta [8] | ${60.44} \pm {2.48}$ | ${75.84} \pm {1.70}$ | ${31.48} \pm {1.70}$ | ${47.16} \pm {1.73}$ | ${55.15} \pm {2.68}$ | ${64.53} \pm {2.35}$ |
| TENT [10] | ${55.44} \pm {2.08}$ | ${70.10} \pm {1.73}$ | ${48.26} \pm {1.73}$ | ${61.38} \pm {1.72}$ | ${62.75} \pm {3.23}$ | ${72.95} \pm {2.13}$ |
| TLP with Supervised GCL |
| I-GNN [13] | ${42.70} \pm {1.92}$ | ${51.46} \pm {1.69}$ | ${38.46} \pm {1.77}$ | ${51.46} \pm {1.69}$ | ${58.70} \pm {3.17}$ | ${65.60} \pm {2.58}$ |
| MVGRL [14] | ${44.98} \pm {1.99}$ | ${71.18} \pm {1.75}$ | OOM | OOM | ${55.79} \pm {1.39}$ | ${66.72} \pm {2.13}$ |
| GraphCL [15] | ${47.00} \pm {1.64}$ | ${67.94} \pm {1.71}$ | OOM | OOM | ${53.55} \pm {1.68}$ | ${69.50} \pm {1.41}$ |
| GRACE [16] | ${65.48} \pm {2.45}$ | ${85.08} \pm {1.49}$ | OOM | OOM | ${61.20} \pm {2.39}$ | ${81.76} \pm {1.74}$ |
| MERIT [17] | ${52.80} \pm {2.72}$ | ${81.30} \pm {1.53}$ | OOM | OOM | ${61.25} \pm {2.59}$ | ${81.45} \pm {1.80}$ |
| SUGRL [19] | ${54.26} \pm {2.24}$ | ${77.55} \pm {1.95}$ | ${52.13} \pm {2.11}$ | ${70.05} \pm {1.56}$ | ${65.34} \pm {2.55}$ | ${75.81} \pm {1.43}$ |
| TLP with Self-supervised GCL |
| MVGRL [14] | ${59.91} \pm {2.39}$ | ${76.76} \pm {1.63}$ | OOM | OOM | ${64.45} \pm {2.77}$ | ${80.25} \pm {1.82}$ |
| GraphCL [15] | ${64.20} \pm {2.56}$ | ${83.74} \pm {1.46}$ | OOM | OOM | ${73.55} \pm {3.09}$ | ${92.35} \pm {1.24}$ |
| BGRL [39] | ${43.83} \pm {2.11}$ | ${70.44} \pm {1.62}$ | ${36.76} \pm {1.74}$ | ${53.44} \pm {0.36}$ | ${54.32} \pm {1.63}$ | ${70.50} \pm {2.11}$ |
| GRACE [16] | ${72.42} \pm {2.06}$ | ${83.82} \pm {1.67}$ | OOM | OOM | ${60.75} \pm {2.54}$ | ${78.42} \pm {2.01}$ |
| MERIT [17] | ${73.38} \pm {2.25}$ | ${87.66} \pm {1.43}$ | OOM | OOM | ${64.53} \pm {2.81}$ | ${90.32} \pm {1.66}$ |
| SUGRL [19] | ${77.35} \pm {2.20}$ | ${83.96} \pm {1.52}$ | ${60.04} \pm {2.11}$ | ${77.52} \pm {1.45}$ | ${77.34} \pm {2.83}$ | ${86.32} \pm {1.57}$ |
+
+§ 3.3 COMPARISON
+
+Table 1 presents the performance comparison of all methods on the few-shot node classification task. Specifically, we give results under four different few-shot settings to exhibit a more comprehensive comparison: 5-way 1-shot, 5-way 5-shot, 2-way 1-shot, and 2-way 5-shot. More results are given in Appendix I. We choose the average classification accuracy and the ${95}\%$ confidence interval over $R$ repetitions as the evaluation metrics. From Table 1, we discover the following observations:
+
+ * TLP methods consistently outperform meta-learning methods, which indicates the importance of transferring comprehensive node representations in FSNC tasks. In TLP methods, the model is forced to extract node-level structural information, while the meta-learning methods mainly focus on label information. As a result, TLP methods can transfer better node representations and exhibit superior performance on meta-test tasks.
+
+ * Even without using any label information from base classes, TLP with self-supervised GCL methods can mostly outperform TLP with supervised GCL methods. This signifies that directly injecting supervision can potentially hinder the generalizability for TLP, which is further investigated in the following sections.
+
+ * Increasing the number of shots $K$ (i.e., the number of labeled nodes in the support set) has a more significant effect on the performance of both forms of TLP methods than on meta-learning methods. This is due to the fact that with the additional support nodes, TLP with GCL can provide more informative node representations to learn a more powerful classifier. In contrast, the meta-learning methods are based on the extracted label information and thus cannot benefit from additional node-level information.
+
+ * Most TLP methods encounter the OOM (out of memory) problem when applied to the ogbn-arxiv dataset. This is due to the fact that the contrastive strategy in TLP methods consumes substantially more memory than traditional supervised learning. Thus, the scalability problem is not negligible for TLP with GCL methods.
+
+ * BGRL [39] exhibits less competitive performance compared with other TLP methods with self-supervised GCL. The result indicates that negative samples are important for self-supervised GCL in FSNC, since they help the model exploit node-level information. Nevertheless, since it does not require negative samples, BGRL can be parallelized more easily, which helps mitigate the OOM problem.
+
+§ 3.4 FURTHER ANALYSIS
+
+To explicitly compare the results between meta-learning and TLP and between two forms of TLP, we provide further results of all methods on various $N$ -way $K$ -shot settings in Fig. 2 and Fig. 3. From the results, we can obtain the following observations:
+
+ * The performance of all methods significantly degrades when a larger value of $N$ is presented (i.e., more classes in each meta-task). The main reason is that with a larger $N$ , the variety of classes in each meta-task can result in a more complex class distribution and thus increase the classification difficulties. Nevertheless, the performance drop is less significant on TLP with both forms of GCL methods. This is because the utilized GCL methods focus more on node-level structural patterns, which incorporate more potentially useful information for classification. As a result, these methods are more capable of alleviating the problem of difficult classification caused by a larger $N$ .
+
+ * As shown in Fig. 3, the performance improvement of TLP with self-supervised GCL methods over meta-learning methods on CiteSeer is generally more impressive than other datasets. The main reason is that CiteSeer bears a significantly smaller class set $(2/2/2$ classes for ${\mathbb{C}}_{\text{ train }}/{\mathbb{C}}_{\text{ dev }}/{\mathbb{C}}_{\text{ test }}$ ). In consequence, the meta-learning methods cannot effectively leverage the supervision information during training. Nevertheless, TLP with self-supervised GCL can extract useful structural information for better generalization performance.
+
+
+Figure 2: $N$ -way $K$ -shot results on CoraFull, meta-learning and TLP. TLP Methods with $*$ are based on supervised GCL methods and I-GNN.
+
+
+Figure 3: 2-way $K$ -shot results on CiteSeer and Amazon-Computer, meta-learning and two forms of TLP. TLP Methods with $*$ are based on supervised GCL methods and I-GNN.
+
+§ 3.5 EFFECT OF SUPERVISION INFORMATION IN BASE CLASSES
+
+In this section, we further investigate the effectiveness of the supervised information in TLP with supervised GCL methods. Specifically, we leverage a combined loss ${L}_{\text{JointCon}} = \lambda {L}_{\text{SelfCon}} + \left( {1 - \lambda }\right) {L}_{\text{SupCon}}$ , where ${L}_{\text{SelfCon}}$ indicates a self-supervised GCL loss, either ${L}_{JSD}$ or ${L}_{InfoNCE}$ depending on the model, so that ${L}_{\text{JointCon}}$ is a mixture of the supervised and self-supervised GCL losses. In this way, we can gradually adjust the value of $\lambda$ to inject different levels of supervision signals into GCL and then observe the performance fluctuation. Note that due to the unstable training curve brought by the joint loss ${L}_{\text{JointCon}}$ , we increase the epoch patience number from $P$ to ${2P}$ to ensure convergence. The results on the Cora dataset (we observe similar results on other datasets) with different values of $\lambda$ are provided in Fig. 4. From the results, we can obtain the following observations:
+
+ * In general, the classification performance increases with a larger value of $\lambda$ . In other words, directly injecting supervision information into GCL for TLP will usually reduce the performance on few-shot node classification tasks. Nevertheless, carefully injecting supervision information can slightly increase the accuracy by choosing a suitable value of $\lambda$ . On the other hand, the results also verify that the TLP paradigm can still achieve considerable performance without any explicit restrictions for base classes.
+
+ * Even with a relatively small value of $\lambda$ (e.g.,0.1), the performance improvement over TLP with totally supervised GCL (i.e., $\lambda = {0.0}$ ) is still significant. That being said, the contrastive strategy that leverages graph structures can provide better performance by providing comprehensive node representations.
+
+
+Figure 4: Results on dataset Cora (2-way)
+
+§ 3.6 EVALUATING LEARNED NODE REPRESENTATIONS ON NOVEL CLASSES
+
+In this section, we further validate the quality of the learned node representations from different training strategies. Particularly, we leverage two prevalent clustering evaluation metrics, normalized mutual information (NMI) and adjusted Rand index (ARI), on learned node representations clustered with K-Means. We evaluate the representations learned on the two datasets CoraFull and CiteSeer for a fair comparison. The results are presented in Table 6 in Appendix I.3. Based on the results, we can obtain the following observations:
+
+ * The meta-learning methods typically exhibit inferior NMI and ARI scores compared with both forms of TLP. This is because meta-learning methods are dedicated to extracting supervision information from node samples and thus cannot fully utilize node-level structural information.
+
+ * In general, TLP with self-supervised GCL methods can result in larger values of both NMI and ARI scores than TLP with supervised GCL. This is due to the fact that the self-supervised GCL model focuses more on extracting structural information without interference from label information. As a result, the learned node representations are more comprehensive and thus exhibit superior clustering performance.
+
+ * The difference in NMI and ARI scores between meta-learning and TLP is more significant on CiteSeer than on CoraFull. This phenomenon potentially results from the fact that CiteSeer contains far fewer classes than CoraFull. In consequence, for CiteSeer, the meta-learning methods will largely rely on label information instead of node-level structural information for classification.
+
+§ 3.7 VISUALIZATION
+
+To provide an explicit comparison of different baselines, we visualize the learned node representations from CoraFull and CiteSeer via the t-SNE algorithm, where colors denote different classes. It is noteworthy that for clarity, we randomly select five classes from ${\mathbb{C}}_{\text{ test }}$ for the visualization. The results are provided in Fig. 5 (more results are included in Fig. 12). Specifically, we discover that:
+
+ * TLP with self-supervised GCL generally outperforms TLP with supervised GCL. This is because without learning label information, TLP with self-supervised GCL can concentrate on node representation patterns, which are easier to transfer to target unseen novel classes.
+
+ * The learned node representations are less discriminative for meta-learning on CiteSeer compared with CoraFull. This is because CiteSeer contains fewer classes, which means the node representations learned by meta-learning methods will be less informative, since they are only required to classify nodes from a small class set.
+
+
+Figure 5: The t-SNE visualization results. Fig. (a)-(f) are for dataset CoraFull (5-way). Fig. (g)-(h) are for dataset CiteSeer (2-way). TLP methods with * are based on supervised GCL methods.
+
+§ 4 CONCLUSION, LIMITATIONS, AND OUTLOOK
+
+In this paper, we propose TLP as an alternative paradigm to meta-learning for FSNC tasks. First, we provide a motivating example, a vanilla intransigent GNN model, to validate our postulation that a generalizable GNN encoder is the key to FSNC tasks. Then, we provide a formal definition for TLP, which transfers node embeddings from GCL pretraining instead of following the prevailing meta-learning paradigm. We conduct comprehensive experiments and compare various meta-learning based and TLP-based methods under the same protocol. Our rigorous empirical study reveals several interesting findings on the strengths and weaknesses of the two paradigms and identifies improving adaptability and improving scalability as the promising directions for meta-learning based and TLP-based methods, respectively.
+
+However, due to limited space, several limitations of our work need to be acknowledged.
+
+ * Limited design considerations. Even though an exhaustive survey on FSNC or GCL is out of the scope of this work, we do not provide a more fine-grained comparison on model details, such as different GNN encoders or various transformations during GCL pretraining.
+
+ * Lack of theoretical justifications. Our findings are based on empirical studies, which cannot disclose the underlying mathematical mechanisms of those methods, such as the performance guarantee by transferring node embeddings from different GCL methods.
+
+How to address these limitations is left as future work. In broader terms, this work lies at the confluence of graph few-shot learning and graph contrastive learning. We hope this work can facilitate the sharing of insights between both communities. On the one hand, we hope our work provides a necessary yardstick to measure progress across the FSNC field. On the other hand, our work offers several practical guidelines for future research in both vigorous fields. For example, the meta-learning community can draw inspiration from GCL to learn more transferable graph patterns. Also, few-shot TLP can serve as a new metric to evaluate the extrapolation ability of GCL methods.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dnRSxTNIvjK/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/dnRSxTNIvjK/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..71fcea95e16d522c9ef61cfad445066bf8ade165
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dnRSxTNIvjK/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,549 @@
+# Jointly Modelling Uncertainty and Diversity for Active Molecular Property Prediction
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Molecular property prediction is a fundamental task in AI-driven drug discovery. Deep learning has achieved great success in this task, but relies heavily on abundant annotated data. However, annotating molecules is particularly costly because it often requires lab experiments conducted by experts. Active Learning (AL) tackles this issue by querying (i.e., selecting) the most valuable samples to annotate, according to two criteria: uncertainty of the model and diversity of data. Combining both criteria (a.k.a. hybrid AL) generally leads to better performance than using only one single criterion. However, existing best hybrid methods rely on some trade-off hyperparameters for balancing uncertainty and diversity, and hence need to carefully tune the hyperparameters in each experiment setting, causing great annotation and time inefficiency. In this paper, we propose a novel AL method that jointly models uncertainty and diversity without the trade-off hyperparameters. Specifically, we model the joint distribution of the labeled data and the model prediction. Based on this distribution, we introduce a Minimum Maximum Probability Querying (MMPQ) strategy, in which a single selection score naturally captures how the model is uncertain about its prediction, and how dissimilar the sample is to the currently labeled data. To model the joint distribution, we adapt the energy-based models to the non-Euclidean molecular graph data, by learning chemically-meaningful embedding vectors as the proxy of the graphs. Extensive experiments on various benchmark datasets show that our method achieves superior AL performance, outperforming existing methods by a large margin. We also conduct ablation studies to verify different design choices of our approach.
+
+## 1 Introduction
+
+AI-driven drug discovery is an important application of data mining and machine learning. In drug discovery pipeline, a fundamental step is to use computational methods to predict the molecular properties (e.g., toxicity and binding specificity) of candidate compounds [1, 2]. Recently, deep learning models have achieved great success in molecular property prediction [1, 3-5], but their high performance relies on a large amount of annotation. However, annotating molecules is particularly time-consuming and costly, since it often requires lab experiments or complex theoretical computation $\left\lbrack {3,6}\right\rbrack$ .
+
+One promising way to alleviate this problem is Active Learning (AL) [7], which aims at finding a strategy for iteratively querying (i.e., selecting) the most valuable data samples to annotate, so as to maximize model performance under a low annotation budget. AL strategies query samples mainly based on two criteria: uncertainty of the model [8], and diversity of queried data [9]. Strategies taking into account both criteria (a.k.a. hybrid strategies) are recently shown to outperform methods based on only uncertainty or diversity in many learning tasks [10-12]. Existing best hybrid methods generally rely on some trade-off hyperparameters for balancing uncertainty and diversity [11-15]. For example, WAAL [12] requires manually-tuned coefficients to obtain a weighted sum of its uncertainty and diversity terms. EADA [13] relies on two selection ratios for its two-step selection process. These trade-off hyperparameters are crucial to the AL performance and hence need to be carefully tuned for each experiment setting.
+
+However, tuning trade-off hyperparameters can cause substantial inefficiency in AL. For one thing, since these hyperparameters have a large influence on the outcome of the corresponding AL strategies, the selected samples under different choices of the hyperparameters often vary a lot, and thus the total annotation cost needed for tuning will greatly exceed the budget. For another, the tuning process can take a long time, since each AL experiment iterates between query selection and model (re)training for several rounds.
+
+In this paper, we propose a novel AL strategy that naturally takes into account uncertainty and diversity without the need for trade-off hyperparameters. Our strategy is based on a joint distribution $q\left( {x, y}\right) \triangleq p\left( {y \mid x}\right) p\left( x\right)$ , which contains information about both uncertainty and diversity: $p\left( {y \mid x}\right)$ is the prediction distribution of the model (with input $x$ and prediction $y$ ), which is widely used to define uncertainty metrics [7, 8, 16]; $p\left( x\right)$ is the density of the currently annotated data, which is shown to be useful for identifying samples that can effectively increase data diversity [11, 12, 17].
+
+Specifically, our strategy operates by first maximizing $q\left( {x, y}\right)$ via varying $y$ , and then minimizing $\mathop{\max }\limits_{y}q\left( {x, y}\right)$ via varying $x$ . We thus name our strategy Minimum Maximum Probability Querying (MMPQ). Importantly, we show that the selection score of MMPQ can be viewed as the product of two terms: the first term leads to samples on which the model has low prediction confidence, while the second favors samples that are dissimilar to the labeled data. In this way, the selected samples are naturally those that the model is most uncertain about, while at the same time being able to increase the data diversity.
+
+For modelling the joint distribution, we propose to use an Energy-Based Model (EBM) [18, 19], since it can explicitly output the desired probability value. For training the EBM, we need to tackle one key challenge in our setting: the variable $x$ in the joint distribution has a non-Euclidean data structure (i.e., a molecule graph), which renders the commonly used EBM training scheme inapplicable [19, 20]. To address this challenge, we take a learned embedding vector $z$ as a proxy of the non-Euclidean input $x$ , which allows us to train the EBM on $z$ and $y$ with the commonly used EBM training scheme. Specifically, inspired by [21, 22], we learn the embeddings by training an autoencoder to reconstruct the input SMILES strings, an expert-defined sequence representation of molecules. The EBM is trained by Denoising Score Matching (DSM) [20, 23], i.e., to learn the "Stein score" [24] of $q\left( {x, y}\right)$ , which has been shown to be an efficient and robust EBM training scheme.
+
+To evaluate our MMPQ strategy, we apply it to actively train a commonly used Graph Neural Network (GNN) [4] on various benchmark datasets of molecular property prediction. Extensive results show that MMPQ enables the GNN to achieve high performance with a limited annotation budget, significantly outperforming other competitive AL methods. In addition, we conduct ablation studies to verify different design choices of our method. In particular, we show that the uncertainty and diversity terms make complementary contributions to the good performance: the diversity term is important in early iterations of AL, while the uncertainty term is essential in later iterations. Anonymized code is available at https://anonymous.4open.science/r/MMPQ-5EBD/.
+
+## 2 Related works
+
+Molecular property prediction is a critical step in drug discovery [1, 2, 5]. Traditional methods (e.g., those based on density functional theory [25]) are too slow to be applied in practice. To resolve this problem, deep learning methods [1, 3, 4, 26, 27] have been widely proposed, which can be categorized into two types: (1) descriptor-based methods [26, 27] that represent the input molecules as expert-crafted molecular descriptors (e.g., fingerprints [28]), and (2) GNN-based methods [1, 3, 4] that directly take molecule graphs as input. As found in [1, 5], GNNs generally outperform descriptor-based methods, and thus this work focuses on GNN-based molecular property prediction.
+
+Active learning improves annotation efficiency by iteratively querying samples based on two criteria: uncertainty of the task learner [8, 16, 29, 30], and/or diversity of the queried data [9, 17, 21]. Uncertainty-based methods define various uncertainty metrics for querying data [8, 10, 16, 30], while diversity-based methods aim to find a representative subset of the whole dataset by querying diverse samples [9, 21]. Compared to using only uncertainty or diversity, recent works find that combining the two criteria (a.k.a. hybrid methods) leads to better performance [10, 11, 13-15]. However, existing hybrid approaches generally need to balance uncertainty and diversity via trade-off hyperparameters. For example, WAAL [12] uses manually tuned coefficients to weight its uncertainty term and diversity term. EADA [13] adopts a two-stage querying approach, where each stage requires a pre-specified selection ratio. We note that EADA also trains an EBM for active selection. Our method differs from EADA mainly in three aspects. First, the motivation of EADA for adopting EBMs is to identify out-of-distribution samples, while we use EBMs for modelling the distribution capturing both uncertainty and diversity. Second, they train their EBM via contrastive divergence [31], while we use denoising score matching [20]. Third, they need two separate selection steps with different selection scores, and require two hyperparameters to trade off between uncertainty and diversity, while we only have one single selection score that naturally captures both uncertainty and diversity.
+
+Apart from the above, some other hybrid methods rely on trade-off hyperparameters during their model training process [11, 14, 15]. We note that there is an existing hybrid strategy, BADGE [10], that is also free from trade-off hyperparameters like ours. However, BADGE [10] assumes that the task learner's prediction is a faithful proxy of the ground-truth label. This may not hold in early AL iterations on (typically small) molecule datasets, since the task learner is inaccurate due to limited training data.
+
+Energy-based models are a class of powerful methods for explicit generative modeling. Recently, some works [32-35] leverage EBMs for modelling molecular data. To tackle the difficulties caused by the discrete nature of molecule graphs, [32] leverages a dequantization technique, and [33] designs a diffusion process based on stochastic differential equations. Different from [32, 33], we propose to train our EBM on a continuous embedding space of molecules. On the other hand, [34, 35] focus on molecule conformation generation, which is essentially a continuous problem, since the conformation of a molecule is represented by the 3D space coordinates of its atoms.
+
+## 3 Preliminaries
+
+Problem setting. We consider batch-mode pool-based active learning [7], a practical AL setting for deep models. In each AL round, a batch of samples is queried from the unlabeled pool ${\mathcal{D}}_{U}$ according to a strategy, annotated by an oracle (e.g., a chemist), and added to the labeled pool ${\mathcal{D}}_{L}$ . The updated ${\mathcal{D}}_{L}$ is then used to train the task learner. A more formal description of this setting is in Appx. A.1.
+
+Notations. A molecule is represented as a graph $G = \left( {V, E}\right)$ , with nodes $V$ and edges $E$ corresponding to atoms and chemical bonds. As in [4, 36-38], we are interested in $n$ binary molecular properties (e.g., toxicity), which are denoted by a label vector $\mathbf{y} = \left( {{y}_{1},\cdots ,{y}_{n}}\right) \in \{ 0,1{\} }^{n}$ , where ${y}_{i} = 1$ or 0 means the molecule has the $i$ -th property or not. A task learner $h\left( \cdot \right)$ is trained to predict the properties. The $i$ -th output of the task learner, $h{\left( G\right) }_{i}$ , specifies a distribution $p\left( {{y}_{i} \mid G}\right)$ over the predicted label of the $i$ -th property of $G$ , which is essentially a Bernoulli distribution with success probability $h{\left( G\right) }_{i}$ (denoted as $\operatorname{Ber}\left( {h{\left( G\right) }_{i}}\right)$ ).
+
+Energy-Based Models. EBMs [18] specify probability density or mass functions as follows:
+
+$$
+{p}_{\theta }\left( \mathbf{x}\right) = \frac{\exp \left( {-{E}_{\theta }\left( \mathbf{x}\right) }\right) }{{Z}_{\theta }}, \tag{1}
+$$
+
+where $\mathbf{x} \in {\mathbb{R}}^{D}$ is a random sample, ${E}_{\theta }\left( \mathbf{x}\right)$ is the energy function with learnable parameters $\theta$ , and ${Z}_{\theta } = \int \exp \left( {-{E}_{\theta }\left( \mathbf{x}\right) }\right) \mathrm{d}\mathbf{x}$ is a normalizing constant. By learning $\theta$ , we can use an EBM to approximate a real data distribution, i.e., ${p}_{\theta } \approx {p}_{\text{data }}$ .
+
+Denoising Score Matching. DSM [23, 39] is an efficient approach for training EBMs. Here, the "(Stein) score" of a distribution $f\left( \mathbf{x}\right)$ is defined as the gradient of the log-probability w.r.t. $\mathbf{x}$ , i.e., ${\nabla }_{\mathbf{x}}\log f\left( \mathbf{x}\right)$ . DSM first perturbs ${p}_{\text{data }}\left( \mathbf{x}\right)$ with a pre-defined noise distribution ${p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right)$ , and then trains the EBM via
+
+$$
+{\mathbb{E}}_{\substack{\mathbf{x} \sim {p}_{\text{data }}\left( \mathbf{x}\right) \\ \widetilde{\mathbf{x}} \sim {p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right)}}\left\lbrack {\frac{1}{2}{\begin{Vmatrix}{\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{\theta }\left( \widetilde{\mathbf{x}}\right) - {\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{N}\left( \widetilde{\mathbf{x}} \mid \mathbf{x}\right) \end{Vmatrix}}_{2}^{2}}\right\rbrack . \tag{2}
+$$
+
+With a proper ${p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right)$ , the target score ${\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right)$ can be obtained in closed form [39].
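+To make the objective concrete, below is a minimal PyTorch-style sketch of the Gaussian-noise DSM loss; the `score_net` interface and the use of PyTorch are our own assumptions (the paper's implementation details are in Appx. A.6):
+
+```python
+import torch
+
+def dsm_loss(score_net, x, sigma=10.0):
+    """Gaussian-noise DSM loss (cf. Eqn. (2) and Eqn. (10)); a sketch.
+
+    score_net: network mapping a batch (B, D) to the learned score (B, D).
+    x: batch of clean samples, shape (B, D).
+    sigma: standard deviation of the Gaussian noise (10 in Appx. A.6).
+    """
+    x_tilde = x + sigma * torch.randn_like(x)   # sample from p_N(x_tilde | x)
+    # For Gaussian noise, the target score is -(x_tilde - x) / sigma^2.
+    target = -(x_tilde - x) / sigma ** 2
+    return 0.5 * ((score_net(x_tilde) - target) ** 2).sum(dim=-1).mean()
+```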
+
+## 4 Method
+
+### 4.1 The minimum-maximum-probability query strategy
+
+Our proposed query strategy is based on the joint distribution of two key probability distributions used in existing works. The first is the prediction distribution of the task learner, i.e., $p\left( {y = \widehat{y} \mid G}\right)$ (abbreviated as $p\left( {\widehat{y} \mid G}\right)$ ), which is widely used to define different uncertainty metrics [7, 8, 16]. The second is the distribution of the currently labeled pool ${\mathcal{D}}_{L}$ , denoted as ${p}_{L}\left( G\right)$ . As shown in [11, 12, 17, 21], ${p}_{L}\left( G\right)$ is useful for identifying samples that are dissimilar to the labeled ones, and annotating these samples effectively increases data diversity. Inspired by these works, we propose to model the joint distribution of $p\left( {y = \widehat{y} \mid G}\right)$ and ${p}_{L}\left( G\right)$ .
+
+Formally, let $q\left( {G,\widehat{\mathbf{y}}}\right)$ denote the joint distribution:
+
+$$
+q\left( {G,\widehat{\mathbf{y}}}\right) \triangleq p\left( {\widehat{\mathbf{y}} \mid G}\right) {p}_{L}\left( G\right) , \tag{3}
+$$
+
+where we use the boldface $\widehat{\mathbf{y}}$ because we may be interested in more than one property (task). Note that $\widehat{\mathbf{y}}$ is a random variable following $p\left( {\widehat{\mathbf{y}} \mid G}\right)$ , not the ground-truth label of $G$ .
+
+Then, we perform active selection by first maximizing $q\left( {G,\widehat{\mathbf{y}}}\right)$ via varying $\widehat{\mathbf{y}}$ for each single $G$ , and then selecting a batch of $G$ that minimizes the obtained $\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {G,\widehat{\mathbf{y}}}\right)$ . Denote the selected batch as $\mathcal{B} = \left\{ {{G}_{1},\cdots ,{G}_{b}}\right\}$ . Our strategy is formalized as:
+
+$$
+\mathcal{B} = \underset{{G}_{1},\cdots ,{G}_{b} \in {\mathcal{D}}_{U}}{\arg \min }\left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {{G}_{1},\widehat{\mathbf{y}}}\right) ,\cdots ,\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {{G}_{b},\widehat{\mathbf{y}}}\right) }\right) . \tag{4}
+$$
+
+We name our strategy Minimum-Maximum-Probability Querying (MMPQ). The whole active learning process with this MMPQ strategy is summarized in Appx. A.4.
+
+#### 4.1.1 MMPQ as a tuning-free hybrid strategy
+
+Here we show that MMPQ naturally captures both uncertainty of the task learner and diversity w.r.t. the whole data space in a tuning-free manner. First, from Eqn. (4) and Eqn. (3), we can see that the selection score of MMPQ is:
+
+$$
+\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {G,\widehat{\mathbf{y}}}\right) = \left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}p\left( {\widehat{\mathbf{y}} \mid G}\right) }\right) {p}_{L}\left( G\right) . \tag{5}
+$$
+
+Then, let ${p}^{M} = \mathop{\max }\limits_{\widehat{\mathbf{y}}}p\left( {\widehat{\mathbf{y}} \mid G}\right)$ ; it can be seen that the MMPQ strategy essentially selects data with smaller ${p}^{M}$ and smaller ${p}_{L}\left( G\right)$ .
+
+- Uncertainty. Smaller ${p}^{M}$ corresponds to samples that the task learner is less confident about. Specifically, let ${\widehat{\mathbf{y}}}^{ * } = \left( {{\widehat{y}}_{1}^{ * },\cdots ,{\widehat{y}}_{n}^{ * }}\right)$ denote the prediction that achieves ${p}^{M}$ (i.e., ${p}^{M} = p\left( {{\widehat{\mathbf{y}}}^{ * } \mid G}\right)$ ), and ${\widehat{\mathbf{y}}}^{\prime } = \left( {{\widehat{y}}_{1}^{\prime },\cdots ,{\widehat{y}}_{n}^{\prime }}\right)$ denote any other prediction different from ${\widehat{\mathbf{y}}}^{ * }$ . Since $p\left( {{\widehat{\mathbf{y}}}^{ * } \mid G}\right) + \mathop{\sum }\limits_{{{\widehat{\mathbf{y}}}^{\prime } \in \{ 0,1{\} }^{n},{\widehat{\mathbf{y}}}^{\prime } \neq {\widehat{\mathbf{y}}}^{ * }}}p\left( {{\widehat{\mathbf{y}}}^{\prime } \mid G}\right) = 1$ , a smaller ${p}^{M}$ means that ${p}^{M}$ is closer to the probabilities of the second-largest (and all other) predictions, implying that the task learner is more uncertain about its prediction on molecule $G$ .
+
+- Diversity. Smaller ${p}_{L}\left( G\right)$ means that $G$ lies in low-density regions of the distribution of the currently labeled data, and hence is dissimilar to the labeled data. Thus, querying samples with small ${p}_{L}\left( G\right)$ increases the diversity of the obtained labeled pool ${\mathcal{D}}_{L}^{t}$ [11, 12, 17, 21].
+
+Based on the above reasoning, the samples with the lowest selection scores (i.e., those taken by the arg min operation in Eqn. (4)) are naturally those the model is most uncertain about, while at the same time being able to increase data diversity. As such, MMPQ does not need a hyperparameter to trade off between uncertainty and diversity.
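+As a toy illustration of this behavior (with hypothetical numbers, not taken from our experiments), consider three unlabeled molecules scored by Eqn. (5):
+
+```python
+# Hypothetical (p_M, p_L) pairs: model confidence and labeled-pool density.
+candidates = {"G1": (0.95, 0.80),   # confident and similar to labeled data
+              "G2": (0.55, 0.80),   # uncertain but similar to labeled data
+              "G3": (0.55, 0.05)}   # uncertain AND dissimilar to labeled data
+scores = {g: p_m * p_l for g, (p_m, p_l) in candidates.items()}
+print(min(scores, key=scores.get))  # G3: lowest score, hence queried first
+```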
+
+#### 4.1.2 Implementation of MMPQ
+
+Since MMPQ is based on the value of $q\left( {G,\widehat{\mathbf{y}}}\right)$ , we need to model $q\left( {G,\widehat{\mathbf{y}}}\right)$ with an explicit deep generative model. In particular, we instantiate an Energy-Based Model (EBM) using a neural network, since EBMs have been shown to be quite expressive and stable in distribution modelling [39, 40]. Formally, $q\left( {G,\widehat{\mathbf{y}}}\right)$ is modelled by
+
+$$
+q\left( {G,\widehat{\mathbf{y}}}\right) = \frac{\exp \left( {-E\left( {G,\widehat{\mathbf{y}}}\right) }\right) }{Z}, \tag{6}
+$$
+
+
+
+Figure 1: Model design and data flow. Inputs, outputs and the objective corresponding to the labeled pool are colored in blue; those corresponding to the unlabeled pool are in red.
+
+where $E\left( {G,\widehat{\mathbf{y}}}\right)$ is the energy value given by the EBM, and $Z$ is a normalizing constant.
+
+In this subsection, we focus on how to implement MMPQ with the EBM, and thus here we assume that the EBM is already trained and fixed. Model design and training of the EBM will be presented later in Sec. 4.2.
+
+From Eqn. (6), we have
+
+$$
+\underset{G}{\arg \min }\left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {G,\widehat{\mathbf{y}}}\right) }\right) = \underset{G}{\arg \min }\left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}\left( {\log q\left( {G,\widehat{\mathbf{y}}}\right) + \log Z}\right) }\right) = \underset{G}{\arg \min }\left( {-\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right) }\right) = \underset{G}{\arg \max }\left( {\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right) }\right) . \tag{7}
+$$
+
+This reveals that we can implement MMPQ based on the learned energy values, without the need to calculate the normalizing constant $Z$ .
+
+One may argue that $\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right)$ would be difficult to compute for large $n$ , since it involves all ${2}^{n}$ possible combinations of $\left( {{\widehat{y}}_{1},\cdots ,{\widehat{y}}_{n}}\right)$ . We show in Appx. A.2 that, by leveraging the conditional independence assumption of labels [41], we can compute $\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right)$ in a task-wise manner.
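+A minimal sketch of this selection rule is given below, assuming the energies $E(G,\widehat{\mathbf{y}})$ have already been evaluated for every unlabeled molecule and every candidate $\widehat{\mathbf{y}}$ (the tensor layout is our own convention, not prescribed by the method):
+
+```python
+import torch
+
+def mmpq_select(energies, b):
+    """MMPQ batch selection via Eqn. (7); a sketch.
+
+    energies: tensor of shape (N, C), where N is the number of unlabeled
+        molecules and C the number of candidate predictions y_hat,
+        holding the learned energies E(G, y_hat).
+    b: query batch size.
+    Returns indices of the b molecules with the largest min_y E(G, y_hat),
+    i.e., the smallest selection score max_y q(G, y_hat).
+    """
+    min_energy = energies.min(dim=1).values        # min over y_hat per molecule
+    return torch.topk(min_energy, k=b).indices
+```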
+
+### 4.2 Model design and training of the EBM
+
+#### 4.2.1 Model design
+
+Designing an EBM for learning $q\left( {G,\widehat{\mathbf{y}}}\right)$ is not trivial, since the two variables have different data structures: $G$ is an attributed graph, while $\widehat{\mathbf{y}}$ is a vector. Moreover, learning EBMs for attributed graphs is itself a challenging open problem, due to their non-Euclidean and discrete nature [32, 42].
+
+To address the above issues, we propose to embed the molecule graphs $G$ into a learned embedding space, and then build the EBM on $\widehat{\mathbf{y}}$ and the embeddings $\mathbf{z}$ (see Fig. 1). Inspired by Sinha et al. [21], we learn the space by training an Auto-Encoder (AE) to reconstruct its inputs. However, due to graph isomorphism, directly reconstructing molecule graphs is difficult [22, 43, 44]. We thus propose to train the AE to reconstruct the molecules' SMILES strings [45], as shown in Fig. 1. SMILES is an expert-defined sequence representation of molecules, where sub-strings correspond to chemically meaningful substructures (e.g., functional groups). Such a sequence-based reconstruction task enables the auto-encoder to learn molecule embeddings without struggling to reconstruct graphs.
+
+Formally, let $\operatorname{Enc}\left( \cdot \right)$ and $\operatorname{Dec}\left( \cdot \right)$ denote the encoder and decoder respectively, and let $\operatorname{Sml}\left( \cdot \right)$ denote the operation of retrieving the SMILES string of a molecule (which can be easily pre-computed using open-source cheminformatics libraries). Then, for a molecule $G$ , the ground-truth and reconstructed SMILES strings are
+
+$$
+S \triangleq \operatorname{Sml}\left( G\right) ,\;\widehat{S} \triangleq \operatorname{Dec}\left( {\operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right) }\right) . \tag{8}
+$$
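+For concreteness, $\operatorname{Sml}\left( \cdot \right)$ can be implemented with the RDKit library mentioned in Appx. A.6 (a minimal sketch; the exact canonicalization settings are an assumption):
+
+```python
+from rdkit import Chem
+
+def sml(mol):
+    """The Sml(.) operation: retrieve the canonical SMILES string of a molecule."""
+    return Chem.MolToSmiles(mol)
+
+# Example: round-trip a molecule through its SMILES representation.
+aspirin = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
+print(sml(aspirin))  # canonical SMILES of aspirin
+```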
+
+For learning high-quality embeddings, we use both labeled and unlabeled data to train the AE:
+
+$$
+{L}_{\mathrm{{rec}}} = {\mathbb{E}}_{G \in {\mathcal{D}}_{L}}\left\lbrack {d\left( {S,\widehat{S}}\right) }\right\rbrack + {\mathbb{E}}_{{G}^{\prime } \in {\mathcal{D}}_{U}}\left\lbrack {d\left( {{S}^{\prime },{\widehat{S}}^{\prime }}\right) }\right\rbrack , \tag{9}
+$$
+
+where $d\left( {\cdot , \cdot }\right)$ is a distance between sequence pairs.
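+One plausible instantiation of $d\left( {\cdot , \cdot }\right)$ for tokenized SMILES is a token-level cross-entropy between the decoder outputs and the ground-truth tokens (an assumption on our part; the paper does not pin down the distance):
+
+```python
+import torch.nn.functional as F
+
+def reconstruction_loss(logits, target_ids, pad_id=0):
+    """Token-level cross-entropy as a stand-in for d(S, S_hat); a sketch.
+
+    logits: decoder outputs of shape (B, L, vocab_size).
+    target_ids: tokenized ground-truth SMILES, long tensor of shape (B, L).
+    pad_id: padding token index, ignored in the loss (assumed to be 0).
+    """
+    # cross_entropy expects class scores in dimension 1.
+    return F.cross_entropy(logits.transpose(1, 2), target_ids, ignore_index=pad_id)
+```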
+
+In the rest of this paper, we take $\mathbf{z} \triangleq \operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right)$ as a proxy of $G$ where needed, and use $\mathbf{x}$ to denote the tuple $\left( {\mathbf{z},\widehat{\mathbf{y}}}\right)$ , which is implemented by concatenating $\mathbf{z}$ and $\widehat{\mathbf{y}}$ .
+
+Following previous works [39, 40], we instantiate the EBM as a "score net" ${s}_{\theta }\left( \mathbf{x}\right)$ , which learns the score of the target distribution $q\left( \mathbf{x}\right)$ , i.e., ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ . After ${s}_{\theta }\left( \mathbf{x}\right)$ is trained, we use a summation to approximate the line integral of ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ , which yields the energy values (see Appx. A.3). An alternative choice is to approximate the energy function ${E}_{\theta }\left( \mathbf{x}\right)$ directly, which, however, is more difficult than modeling the score, as experimentally shown in Sec. 5.3.3.
+
+#### 4.2.2 Model training
+
+We train the EBM ${s}_{\theta }\left( \mathbf{x}\right)$ via denoising score matching. With the noise ${p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right) = \mathcal{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I}\right)$ , we have ${\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right) = - \frac{\widetilde{\mathbf{x}} - \mathbf{x}}{{\sigma }^{2}}$ [39]. Then, the DSM objective is:
+
+$$
+{L}_{\mathrm{{DSM}}} = {\mathbb{E}}_{\substack{\mathbf{x} \in {\mathcal{D}}_{L} \\ \widetilde{\mathbf{x}} \sim \mathcal{N}\left( \widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I\right)}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( \widetilde{\mathbf{x}}\right) + \frac{\widetilde{\mathbf{x}} - \mathbf{x}}{{\sigma }^{2}}\end{Vmatrix}}_{2}^{2}\right\rbrack , \tag{10}
+$$
+
+where we slightly abuse the notation, using $\mathbf{x} \in {\mathcal{D}}_{L}$ to denote $\mathbf{x} \in \left\{ {\left( {\operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right) ,\widehat{\mathbf{y}}}\right) \mid G \in {\mathcal{D}}_{L}}\right\}$ .
+
+Note that the second term in the target distribution (Eqn. (3)) is the density of the labeled data only. Therefore, in Eqn. (10), we calculate ${L}_{\mathrm{{DSM}}}$ only on the labeled pool ${\mathcal{D}}_{L}$ (cf. the reconstruction objective in Eqn. (9)).
+
+One challenge of calculating ${L}_{\mathrm{{DSM}}}$ is that it requires $\left( {G,\widehat{\mathbf{y}}}\right)$ pairs i.i.d. sampled from $q\left( {G,\widehat{\mathbf{y}}}\right)$ , but we do not have such samples at hand. To address this challenge, we propose a two-step sampling method: first randomly pick $G$ from ${\mathcal{D}}_{L}$ ; then draw a sample $\widehat{\mathbf{y}} = \left( {{\widehat{y}}_{1},\cdots ,{\widehat{y}}_{n}}\right)$ from $p\left( {\widehat{\mathbf{y}} \mid G}\right)$ , which can be implemented by drawing ${\widehat{y}}_{i} \sim \operatorname{Ber}\left( {h{\left( G\right) }_{i}}\right)$ for each $i$ (under the conditional-independence assumption of labels).
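+A sketch of this two-step sampling is shown below; the interfaces of `encoder` and `task_learner` are assumptions matching the notation of Sec. 3:
+
+```python
+import torch
+
+def sample_x(task_learner, encoder, graphs, smiles_ids):
+    """Two-step sampling of x = (z, y_hat) pairs from q(G, y_hat); a sketch.
+
+    graphs: a batch of labeled molecule graphs (picked randomly from D_L).
+    smiles_ids: the corresponding tokenized SMILES strings.
+    """
+    with torch.no_grad():
+        p = task_learner(graphs)        # Bernoulli parameters h(G)_i, shape (B, n)
+    z = encoder(smiles_ids)             # molecule embeddings, shape (B, d)
+    y_hat = torch.bernoulli(p)          # y_hat_i ~ Ber(h(G)_i) for all i
+    return torch.cat([z, y_hat], dim=-1)  # concatenate z and y_hat into x
+```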
+
+The EBM and the AE are jointly trained via
+
+$$
+{L}_{\text{joint }} = {L}_{\mathrm{{DSM}}} + {L}_{\text{rec }}. \tag{11}
+$$
+
+Pseudo code of the model training process is summarized in Appx. A.4.
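+Putting the pieces together, one joint training step of Eqn. (11) might look as follows; this is a sketch reusing the `dsm_loss`, `reconstruction_loss` and `sample_x` helpers from the earlier sketches, and all interfaces are assumptions:
+
+```python
+def joint_training_step(score_net, encoder, decoder, task_learner,
+                        labeled, unlabeled, optimizer, sigma=10.0):
+    """One optimization step of L_joint = L_DSM + L_rec (Eqn. (11)); a sketch.
+
+    labeled / unlabeled: batches of (graphs, smiles_ids) from D_L / D_U;
+    the task learner h is already trained and kept fixed (cf. Alg. 2).
+    """
+    # L_DSM: computed on the labeled pool only (cf. Eqn. (10)).
+    x = sample_x(task_learner, encoder, labeled[0], labeled[1])
+    loss = dsm_loss(score_net, x, sigma)
+    # L_rec: computed on both the labeled and the unlabeled pool (cf. Eqn. (9)).
+    for graphs, smiles_ids in (labeled, unlabeled):
+        logits = decoder(encoder(smiles_ids))       # reconstruct the SMILES tokens
+        loss = loss + reconstruction_loss(logits, smiles_ids)
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+```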
+
+## 5 Experiments
+
+### 5.1 Experiment setup
+
+We run experiments under the batch-mode pool-based AL setting (elaborated in Appx. A.1). The labeled pool is initialized by randomly selecting 10% of the samples of the entire training set; the initial unlabeled pool is the remaining 90%. Then 10 AL rounds are performed; in each round, an unlabeled batch of 4% of the entire training set is queried, so the total annotation budget is 50% of the training set. We use the BACE, BBBP, HIV and SIDER datasets from the widely used MoleculeNet benchmark [5] (also included in the Open Graph Benchmark [46]). Statistics and detailed descriptions of the datasets are in Appx. A.5. Following [4], we use scaffold split, with train/val/test $= {80}\% /{10}\% /{10}\%$ . We use AUROC as the performance metric, as suggested in [5]. Please refer to Appx. A.6 for implementation details.
+
+### 5.2 Active learning performance
+
+We compare MMPQ against the following 8 baselines, with $\mathbf{U},\mathbf{D}$ and $\mathbf{H}$ denoting Uncertainty-based, Diversity-based and Hybrid methods respectively: Random (random selection), Entropy (U) (selecting samples with the largest prediction entropy), MC-Dropout (U) [8], CoreSet (D) [9], ASGN (D) [47], BADGE (H) [10], WAAL (H) [12], EADA (H) [13]. Among them, ASGN is the only existing method that investigates AL in molecular property prediction. BADGE, WAAL and EADA are representative hybrid methods and represent the state of the art on many image classification datasets. We note that there are other hybrid methods [11, 14, 15], but their code is not released or only partially released, and we failed to reproduce their results; hence we do not include them for comparison.
+
+
+
+Figure 2: Active learning performance of MMPQ (ours) and baseline hybrid methods. "Round 0" corresponds to the performance on initial labeled pool.
+
+
+
+Figure 3: Active learning performance of MMPQ (ours) and uncertainty-based or diversity-based methods. "Round 0" corresponds to the performance on initial labeled pool.
+
+To avoid cluttered presentation, we show results of baseline hybrid methods in Fig. 2, and those of uncertainty-based or diversity-based methods in Fig. 3. In both figures, we include results of our MMPQ and the Random baseline.
+
+Results: From the figures, we can see that our MMPQ outperforms the baselines on all 4 datasets. Specifically, on HIV, MMPQ achieves 0.7302 AUROC in the 3rd AL round (using only 22% of the annotations of the entire training set), which is very close to the performance of using 100% of the annotations (0.7344). This also explains why the performance of MMPQ almost saturates after the 3rd round. Furthermore, the performance of the hybrid methods requiring trade-off hyperparameters, i.e., WAAL and EADA, is not stable. In particular, though WAAL achieves performance comparable to our MMPQ on BACE, it performs quite unsatisfactorily on HIV. Similarly, EADA performs well on SIDER but is the worst baseline on HIV. By contrast, our method achieves consistently superior performance. One may note that the performance of WAAL and EADA at round 0 is quite different from that of the other methods. The reason is that, in the other methods, the task learner is trained only with the classification loss on the currently labeled data, whereas in WAAL and EADA the task learner is additionally trained with an auxiliary loss (i.e., an adversarial loss in WAAL, and a free-energy alignment loss in EADA). Therefore, even though the training data at round 0 are the same across all methods, WAAL and EADA can have different performance.
+
+### 5.3 Ablation studies
+
+Here we conduct ablative experiments on HIV, which is the largest dataset used (see Tab. 1).
+
+#### 5.3.1 Uncertainty or Diversity Only
+
+In Sec. 4.1.1, we show that our MMPQ strategy captures both uncertainty and diversity through the two terms in Eqn. (5). Here we ablatively study the effectiveness of the two terms. Since we have only 1 target property on HIV, we use $\widehat{y}$ instead of $\widehat{\mathbf{y}}$ . Specifically, under the setup described in Sec. 5.1, we compare our MMPQ with two other strategies based on the trained EBM; a sketch computing the quantities used by both strategies follows this list:
+
+- The U.O. strategy that considers Uncertainty Only: querying data with minimum ${p}^{M} = \mathop{\max }\limits_{\widehat{y}}p\left( {\widehat{y} \mid G}\right)$ . Let ${\widehat{y}}^{ * }$ denote the predicted label that achieves ${p}^{M}$ . Then, based on the learned EBM, ${p}^{M}$ can be calculated by:
+
+$$
+{p}^{M} = \frac{\exp \left( {-E\left( {G,{\widehat{y}}^{ * }}\right) }\right) }{\mathop{\sum }\limits_{{\widehat{y} \in \{ 0,1\} }}\exp \left( {-E\left( {G,\widehat{y}}\right) }\right) }. \tag{12}
+$$
+
+- The D.O. strategy that considers Diversity Only: querying those with minimum ${p}_{L}\left( G\right)$ , which satisfies
+
+$$
+{p}_{L}\left( G\right) \propto \mathop{\sum }\limits_{\widehat{y} \in \{ 0,1\}}\exp \left( {-E\left( {G,\widehat{y}}\right) }\right) . \tag{13}
+$$
+
+
+
+Figure 4: (a) AL performance of the MMPQ, U.O. and D.O. strategies. (b) Mean ground-truth-label loss of data queried by the MMPQ or D.O. strategy. (c) Mean average Tanimoto similarity of data queried by the MMPQ or U.O. strategy.
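+As promised above, both quantities can be read off from the learned energies; below is a minimal sketch for the binary case $\widehat{y} \in \{0, 1\}$ (the tensor layout is our own convention):
+
+```python
+import torch
+
+def uncertainty_and_diversity(energies):
+    """Recover p^M (Eqn. (12)) and an unnormalized p_L(G) (Eqn. (13)); a sketch.
+
+    energies: tensor of shape (N, 2) holding E(G, 0) and E(G, 1)
+        for each of N unlabeled molecules.
+    """
+    weights = torch.exp(-energies)            # exp(-E(G, y_hat))
+    p_l = weights.sum(dim=1)                  # proportional to p_L(G), Eqn. (13)
+    p_m = weights.max(dim=1).values / p_l     # max_y p(y_hat | G), Eqn. (12)
+    return p_m, p_l
+```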
+
+In Fig. 4 (a), as AL proceeds, the performance of the U.O. strategy rises more slowly than that of MMPQ or the D.O. strategy, though the final performance of U.O. (at round 10) is as good as that of MMPQ. On the other hand, the D.O. strategy reaches a peak very quickly (i.e., at the 2nd AL round), but its performance then degrades as more data are annotated for training. One possible reason for this degradation is that the data queried in later rounds cannot provide the task learner with useful information about the learning task; adding these data to the training pool may lead to overfitting, since more data mean more training iterations. This shows that uncertainty and diversity are complementary: diversity is important in early AL stages, while uncertainty is critical in later stages. Interestingly, this corroborates the finding in [48].
+
+We then dig deeper into the advantages of the MMPQ strategy by examining how the two terms in Eqn. (5) affect the queried data.
+
+For studying the effectiveness of the uncertainty-based term $\mathop{\max }\limits_{\widehat{\mathbf{y}}}p\left( {\widehat{\mathbf{y}} \mid G}\right)$ , we examine whether the queried data of the MMPQ strategy have higher uncertainty than those of the D.O. strategy (since MMPQ and D.O. only differ in this term). Note that, since HIV has only 1 property of interest, this term becomes $p\left( {\widehat{y} \mid G}\right)$ . For measuring uncertainty, distribution-based metrics such as entropy and classification margin are often used. However, for binary classification (as in our case), these metrics are all equivalent to selecting the samples with the smallest ${p}^{M}$ . Therefore, instead of distribution-based metrics, we adopt the ground-truth-label loss, as used in [30]. For the $t$ -th AL round, we calculate the mean loss of the data in the current labeled pool ${\mathcal{D}}_{L}^{t}$ . As shown in Fig. 4 (b), compared with the D.O. strategy, the queries of our MMPQ have larger ground-truth-label loss, suggesting larger uncertainty of the task learner.
+
+For the diversity-based term ${p}_{L}\left( G\right)$ , we examine whether queries of MMPQ have smaller chemical similarity (i.e., larger diversity) than those of the U.O. strategy (since MMPQ and U.O. only differ in this term). We adopt the Tanimoto similarity [49], a widely used expert-defined molecular similarity metric. Formally, letting ${T}_{ij}$ denote the Tanimoto similarity between molecules ${G}_{i}$ and ${G}_{j}$ , we calculate the mean Average Similarity (mAS) among molecules in ${\mathcal{D}}_{L}^{t}$ :
+
+$$
+{mAS} = \frac{1}{{N}_{L}^{t}\left( {{N}_{L}^{t} - 1}\right) }\mathop{\sum }\limits_{{{G}_{i},{G}_{j} \in {\mathcal{D}}_{L}^{t}, j \neq i}}{T}_{ij}. \tag{14}
+$$
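+The mAS metric can be computed, e.g., with RDKit fingerprints; the fingerprint type and parameters below are assumptions (see [49] for alternatives):
+
+```python
+from rdkit import Chem, DataStructs
+from rdkit.Chem import AllChem
+
+def mean_average_similarity(smiles_list):
+    """Mean average Tanimoto similarity (mAS, Eqn. (14)) over a pool; a sketch
+    using Morgan fingerprints as a stand-in for the fingerprints in [49]."""
+    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
+           for s in smiles_list]
+    n = len(fps)
+    total = sum(DataStructs.TanimotoSimilarity(fps[i], fps[j])
+                for i in range(n) for j in range(n) if i != j)
+    return total / (n * (n - 1))
+```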
+
+Fig. 4 (c) shows that queries of MMPQ are less chemically similar than those of U.O., implying larger diversity.
+
+#### 5.3.2 Robustness of Energy Calculation
+
+In this part, we investigate the robustness of MMPQ w.r.t. how the energy is calculated. Specifically, we consider the choice of the zero-energy point and the number of points for approximating the integral (i.e., $K$ in Eqn. (18)).
+
+
+
+Figure 5: Active learning performance of different settings of $\left( {{\mathbf{z}}_{0},{\widehat{y}}_{0}, K}\right)$ .
+
+Figure 6: DSM loss of modeling score or energy.
+
+In the above experiments, we set the zero-energy point as $\left( {{\mathbf{z}}_{0} = {\overline{\mathbf{z}}}_{U},{\widehat{y}}_{0} = 0}\right)$ , where ${\overline{\mathbf{z}}}_{U}$ is the mean embedding of the unlabeled pool, and let $K = {100}$ . We name this setting the "default setting", and denote it by the triple $\left( {{\overline{\mathbf{z}}}_{U},0,{100}}\right)$ .
+
+Then, based on $\left( {{\overline{\mathbf{z}}}_{U},0,{100}}\right)$ , we vary one of the three hyperparameters while keeping the other two unchanged, and run AL experiments under the setup described in Sec. 5.1. Specifically, we set (1) ${\mathbf{z}}_{0} \in \left\{ {{\overline{\mathbf{z}}}_{L},{\overline{\mathbf{z}}}_{F}}\right\}$ , where ${\overline{\mathbf{z}}}_{L}$ and ${\overline{\mathbf{z}}}_{F}$ are the mean embeddings of the labeled pool and of the full training set, respectively; (2) ${\widehat{y}}_{0} = 1$ ; (3) $K \in \{ {50},{500}\}$ .
+
+Fig. 5 shows the AL performance of the above settings and the default one. We can see that the AL performance of the different settings is similar, which demonstrates that the MMPQ strategy is robust to these hyperparameters.
+
+#### 5.3.3 Implementing EBM by Modeling Energy
+
+As introduced in Sec. 4.2, we instantiate the EBM as a score net that learns the score of the true distribution $q\left( {G,\widehat{\mathbf{y}}}\right)$ . An alternative is to implement the EBM as an energy net that models the energy function.
+
+We try this alternative in our experiments, but find that the training process fails to converge. Specifically, the energy net is also built on the embedding space of the AE, and has the same architecture as the score net, except that the final layer has an output dimension of 1 and a ReLU activation (since the energy is a non-negative scalar). The training objective of the energy net (denoted as ${E}_{\theta }$ ) is
+
+$$
+{\mathbb{E}}_{\substack{\mathbf{x} \in {\mathcal{D}}_{L} \\ \widetilde{\mathbf{x}} \sim \mathcal{N}\left( \widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I\right)}}\left\lbrack {\begin{Vmatrix}-{\nabla }_{\widetilde{\mathbf{x}}}{E}_{\theta }\left( \widetilde{\mathbf{x}}\right) + \frac{\widetilde{\mathbf{x}} - \mathbf{x}}{{\sigma }^{2}}\end{Vmatrix}}_{2}^{2}\right\rbrack . \tag{15}
+$$
+
+Fig. 6 shows the loss curve (on a log scale) under the best-tuned hyperparameters (i.e., those yielding the lowest loss). For comparison, the loss curve on HIV of the score net used in the MMPQ experiment in Sec. 5.2 is also given. We can see that, even with the best-tuned hyperparameters, the training process of ${E}_{\theta }$ does not converge well.
+
+## 6 Conclusion
+
+We propose Minimum Maximum Probability Querying (MMPQ), a hybrid active learning method for molecular property prediction that removes the need to manually trade off between uncertainty and diversity. The strategy is based on an EBM that models the joint distribution of the labeled data and the task learner's prediction. The EBM is built in an embedding space learned by an auto-encoder that reconstructs molecules' SMILES strings, and we propose to train the EBM via denoising score matching. Once the EBM is trained, MMPQ selects data according to one single selection criterion that naturally captures both the uncertainty of the task learner and the diversity of the queried data.
+
+References
+
+[1] Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, et al. Analyzing learned molecular representations for property prediction. Journal of chemical information and modeling, 59 (8):3370-3388,2019. 1,2
+
+[2] Gregory Sliwoski, Sandeepkumar Kothiwale, Jens Meiler, and Edward W Lowe. Computational methods in drug discovery. Pharmacological reviews, 66(1):334-395, 2014. 1, 2
+
+[3] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263-1272. PMLR, 2017. 1, 2
+
+[4] W Hu, B Liu, J Gomes, M Zitnik, P Liang, V Pande, and J Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations, 2020. 2, 3, 6, 15
+
+[5] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemical science, 9(2):513-530, 2018. 1, 2, 6, 14, 15
+
+[6] Andreas Mayr, Günter Klambauer, Thomas Unterthiner, Marvin Steijaert, Jörg K Wegner, Hugo Ceulemans, Djork-Arné Clevert, and Sepp Hochreiter. Large-scale comparison of machine learning methods for drug target prediction on chembl. Chemical science, 9(24):5441-5451, 2018.1
+
+[7] Burr Settles. Active learning literature survey. 2009. 1, 2, 3, 4, 13
+
+[8] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR, 2017. 1, 2, 4, 6
+
+[9] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations, 2018. 1, 2, 6
+
+[10] Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. In International Conference on Learning Representations, 2019. 1, 2, 3, 6
+
+[11] Kwanyoung Kim, Dongwon Park, Kwang In Kim, and Se Young Chun. Task-aware variational adversarial active learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8166-8175, 2021. 1, 2, 3, 4, 6
+
+[12] Changjian Shui, Fan Zhou, Christian Gagné, and Boyu Wang. Deep active learning: Unified and principled method for query and training. In International Conference on Artificial Intelligence and Statistics, pages 1308-1318. PMLR, 2020. 1, 2, 4, 6
+
+[13] Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu, Xinjing Cheng, and Guoren Wang. Active learning for domain adaptation: An energy-based approach. AAAI conference on artificial intelligence, 2022. 1, 2, 6
+
+[14] Beichen Zhang, Liang Li, Shijie Yang, Shuhui Wang, Zheng-Jun Zha, and Qingming Huang. State-relabeling adversarial active learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8753-8762. IEEE Computer Society, 2020. 3, 6
+
+[15] Shuo Wang, Yuexiang Li, Kai Ma, Ruhui Ma, Haibing Guan, and Yefeng Zheng. Dual adversarial network for deep active learning. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIV 16, pages 680-696. Springer, 2020. 1, 2, 3, 6
+
+[16] Jongwon Choi, Kwang Moo Yi, Jihoon Kim, Jinho Choo, Byoungjip Kim, Jinyeop Chang, Youngjune Gwon, and Hyung Jin Chang. Vab-al: incorporating class imbalance and difficulty with variational bayes for active learning. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. 2, 4
+
+[17] Daniel Gissin and Shai Shalev-Shwartz. Discriminative active learning. arXiv preprint arXiv:1907.06347, 2019. 2, 4
+
+[18] Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-based learning. Predicting structured data, 1(0), 2006. 2, 3
+
+[19] Yang Song and Diederik P Kingma. How to train your energy-based models. arXiv preprint arXiv:2101.03288, 2021. 2
+
+[20] Aapo Hyvärinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005. 2, 3
+
+[21] Samrath Sinha, Sayna Ebrahimi, and Trevor Darrell. Variational adversarial active learning. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 5971-5980. IEEE Computer Society, 2019. 2, 4, 5
+
+[22] Orion Dollar, Nisarg Joshi, David AC Beck, and Jim Pfaendtner. Attention-based generative models for de novo molecular design. Chemical Science, 2021. 2, 5, 15
+
+[23] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661-1674, 2011. 2, 3
+
+[24] Qiang Liu, Jason Lee, and Michael Jordan. A kernelized stein discrepancy for goodness-of-fit tests. In International conference on machine learning, pages 276-284. PMLR, 2016. 2
+
+[25] Pierre Hohenberg and Walter Kohn. Inhomogeneous electron gas. Physical review, 136(3B): B864, 1964. 2
+
+[26] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations, 2016. 2
+
+[27] Sheng Wang, Yuzhi Guo, Yuhong Wang, Hongmao Sun, and Junzhou Huang. Smiles-bert: Large scale unsupervised pre-training for molecular property prediction. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, 2019. 2
+
+[28] David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of chemical information and modeling, 50(5):742-754, 2010. 2
+
+[29] William H Beluch, Tim Genewein, Andreas Nürnberger, and Jan M Köhler. The power of ensembles for active learning in image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9368-9377, 2018. 2
+
+[30] Donggeun Yoo and In So Kweon. Learning loss for active learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 93-102, 2019. 2, 8
+
+[31] Tijmen Tieleman. Training restricted boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th international conference on Machine learning, pages 1064-1071, 2008. 3
+
+[32] Meng Liu, Keqiang Yan, Bora Oztekin, and Shuiwang Ji. Graphebm: Molecular graph generation with energy-based models. arXiv preprint arXiv:2102.00546, 2021.3, 5
+
+[33] Jaehyeong Jo, Seul Lee, and Sung Ju Hwang. Score-based generative modeling of graphs via the system of stochastic differential equations. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 10362-10383. PMLR, 17-23 Jul 2022. 3
+
+[34] Shitong Luo, Chence Shi, Minkai Xu, and Jian Tang. Predicting molecular conformation via dynamic graph score matching. Advances in Neural Information Processing Systems, 34, 2021. 3
+
+[35] Chence Shi, Shitong Luo, Minkai Xu, and Jian Tang. Learning gradient fields for molecular conformation generation. In International Conference on Machine Learning, pages 9558-9568. PMLR, 2021. 3
+
+[36] Yaqing Wang, Abulikemu Abuduweili, Quanming Yao, and Dejing Dou. Property-aware relation networks for few-shot molecular property prediction. Advances in Neural Information Processing Systems, 34, 2021. 3, 15
+
+[37] Zaixi Zhang, Qi Liu, Hao Wang, Chengqiang Lu, and Chee-Kong Lee. Motif-based graph self-supervised learning for molecular property prediction. Advances in Neural Information Processing Systems, 34, 2021. 15
+
+[38] Shengchao Liu, Meng Qu, Zuobai Zhang, Huiyu Cai, and Jian Tang. Structured multi-task learning for molecular property prediction. In International Conference on Artificial Intelligence and Statistics, pages 8906-8920. PMLR, 2022. 3
+
+[39] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019. 3, 4, 6
+
+[40] Yang Song, Sahaj Garg, Jiaxin Shi, and Stefano Ermon. Sliced score matching: A scalable approach to density and score estimation. In Uncertainty in Artificial Intelligence, pages 574-584. PMLR, 2020. 4, 6
+
+[41] Haoran Wang, Weitang Liu, Alex Bocchieri, and Yixuan Li. Can multi-label classification networks know what they don't know? Advances in Neural Information Processing Systems, 34, 2021. 5
+
+[42] Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, and Stefano Ermon. Permutation invariant graph generation via score-based generative modeling. In International Conference on Artificial Intelligence and Statistics, pages 4474-4484. PMLR, 2020. 5
+
+[43] Jaechang Lim, Seongok Ryu, Jin Woo Kim, and Woo Youn Kim. Molecular generative model based on conditional variational autoencoder for de novo molecular design. Journal of cheminformatics, 10(1):1-9, 2018. 5
+
+[44] Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268-276, 2018. 5
+
+[45] David Weininger. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28 (1):31-36,1988. 5
+
+[46] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020. 6
+
+[47] Zhongkai Hao, Chengqiang Lu, Zhenya Huang, Hao Wang, Zheyuan Hu, Qi Liu, Enhong Chen, and Cheekong Lee. Asgn: An active semi-supervised graph neural network for molecular property prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 731-752, 2020. 6
+
+[48] Yao Zhang et al. Bayesian semi-supervised learning for uncertainty-calibrated prediction of molecular properties and active learning. Chemical science, 10(35):8154-8163, 2019. 8
+
+[49] Dávid Bajusz, Anita Rácz, and Károly Héberger. Why is tanimoto index an appropriate choice for fingerprint-based similarity calculations? Journal of cheminformatics, 7(1):1-13, 2015. 8
+
+[50] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4-24, 2020. 13
+
+[51] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017. 15
+
+[52] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, pages 807-814. Omnipress, 2010. 15
+
+[53] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. 15
+
+## A Appendix
+
+### A.1 Pool-based batch-mode active learning setting
+
+In this setting, we are given an initial pool of labeled data ${\mathcal{D}}_{L}^{0}$ of size ${N}_{L}^{0}$ , and an unlabeled pool ${\mathcal{D}}_{U}^{0}$ of size ${N}_{U}^{0} > {N}_{L}^{0}$ . Our goal is to design an AL algorithm that performs $T$ rounds of querying [7]. In the $t$ -th round $\left( {1 \leq t \leq T}\right)$ , a batch of $b$ samples, denoted as ${\mathcal{B}}^{t}$ , is selected from ${\mathcal{D}}_{U}^{t - 1}$ . Then, an oracle (e.g., in our case, a chemist) annotates the queried samples, which are then moved from the unlabeled pool to the labeled pool. Formally, let ${\mathcal{B}}_{\text{anno }}^{t}$ denote the annotated batch; the pools are updated by ${\mathcal{D}}_{L}^{t} = {\mathcal{D}}_{L}^{t - 1} \cup {\mathcal{B}}_{\text{anno }}^{t}$ and ${\mathcal{D}}_{U}^{t} = {\mathcal{D}}_{U}^{t - 1} \smallsetminus {\mathcal{B}}^{t}$ ; accordingly, ${N}_{L}^{t} = {N}_{L}^{t - 1} + b$ and ${N}_{U}^{t} = {N}_{U}^{t - 1} - b$ . The obtained ${\mathcal{D}}_{L}^{t}$ is then used to train a task learner $h\left( \cdot \right)$ , the model that performs the target learning task at hand (e.g., molecular property prediction in our case). In this paper, the task learner is a Graph Neural Network (GNN) [50].
+
+Note that, the union of ${\mathcal{D}}_{L}^{t}$ and ${\mathcal{D}}_{U}^{t}$ is always the whole training set, i.e., ${\mathcal{D}}_{\text{train }} = {\mathcal{D}}_{L}^{t} \cup {\mathcal{D}}_{U}^{t},\forall t \geq 0$ . Aside from ${\mathcal{D}}_{\text{train }}$ , we also have a validation set ${\mathcal{D}}_{\text{val }}$ and a test set ${\mathcal{D}}_{\text{test }}$ , which are held-out and disjoint from ${\mathcal{D}}_{\text{train }}$ , for performing model selection and evaluation on the task learner. For brevity, we omit the round index $t$ unless necessary.
+
+### A.2 Calculating minimum energy for large $n$
+
+As mentioned in Sec. 4.1.2, we can calculate $\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right)$ in a task-wise manner, under the assumption that the $n$ labels are conditionally independent. Here we elaborate on the calculation.
+
+In multi-label classification, the $n$ labels are often assumed to be independent given the input. In our case, this is formulated as $p\left( {\widehat{\mathbf{y}} \mid G}\right) = \mathop{\prod }\limits_{{i = 1}}^{n}p\left( {{\widehat{y}}_{i} \mid G}\right)$ . Thus, we have
+
+$$
+\begin{aligned}
+\underset{\widehat{\mathbf{y}}}{\arg \min } E\left( {G,\widehat{\mathbf{y}}}\right) &= \underset{\widehat{\mathbf{y}}}{\arg \max } q\left( {G,\widehat{\mathbf{y}}}\right) \\
+&= \underset{\widehat{\mathbf{y}}}{\arg \max } p\left( {{\widehat{y}}_{1},\cdots ,{\widehat{y}}_{n} \mid G}\right) {p}_{L}\left( G\right) \\
+&= \underset{\widehat{\mathbf{y}}}{\arg \max }\left( {\mathop{\prod }\limits_{{i = 1}}^{n}p\left( {{\widehat{y}}_{i} \mid G}\right) }\right) \\
+&= \left( {\underset{{\widehat{y}}_{1}}{\arg \max }p\left( {{\widehat{y}}_{1} \mid G}\right) ,\cdots ,\underset{{\widehat{y}}_{n}}{\arg \max }p\left( {{\widehat{y}}_{n} \mid G}\right) }\right) ,
+\end{aligned} \tag{16}
+$$
+
+where the third equality holds because ${p}_{L}\left( G\right)$ does not depend on $\widehat{\mathbf{y}}$ .
+
+This shows that we can calculate $\arg \mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right)$ by simply taking $\left( {\arg \mathop{\max }\limits_{{\widehat{y}}_{1}}p\left( {{\widehat{y}}_{1} \mid G}\right) ,\cdots ,\arg \mathop{\max }\limits_{{\widehat{y}}_{n}}p\left( {{\widehat{y}}_{n} \mid G}\right) }\right)$ , without the need of calculating the energy for all ${2}^{n}$ possible combinations of $\left( {{\widehat{y}}_{1},\cdots ,{\widehat{y}}_{n}}\right)$ .
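+In code, this task-wise computation amounts to thresholding each Bernoulli parameter (a minimal sketch under the notation of Sec. 3):
+
+```python
+import torch
+
+def argmin_energy_prediction(p):
+    """Task-wise arg min_y E(G, y) under conditional independence (Eqn. (16)).
+
+    p: tensor of shape (B, n) with p[i, j] = h(G_i)_j = p(y_j = 1 | G_i).
+    Returns the most likely prediction per task, i.e., y_j = 1 iff p > 0.5.
+    """
+    return (p > 0.5).float()
+```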
+
+### A.3 Energy calculation
+
+Our EBM models the score ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ instead of the energy $E\left( \mathbf{x}\right)$ . To obtain the energy value $E\left( \mathbf{x}\right)$ , we use a summation to approximate the line integral of ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ after training the EBM.
+
+Specifically, the energy of any point $\mathbf{x}$ can be calculated through the following line integral:
+
+$$
+E\left( \mathbf{x}\right) = E\left( {\mathbf{x}}_{0}\right) - {\int }_{P}{\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right) \cdot \mathrm{d}\mathbf{x}, \tag{17}
+$$
+
+where $\cdot$ denotes the vector inner product, and the minus sign follows from $\log q\left( \mathbf{x}\right) = - E\left( \mathbf{x}\right) - \log Z$ (Eqn. (6)). The two terms on the right-hand side are explained as follows.
+
+The first term $E\left( {\mathbf{x}}_{0}\right)$ is the energy of an arbitrarily chosen reference point ${\mathbf{x}}_{0}$ . For selecting queries, we only need the relative energy value. Therefore, without loss of generality, we can take any reference point ${\mathbf{x}}_{0}$ as the zero-energy point, i.e., letting $E\left( {\mathbf{x}}_{0}\right) = 0$ .
+
+The second term is an integral along a path $P$ from ${\mathbf{x}}_{0}$ to $\mathbf{x}$ . Since the true score ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ is a conservative vector field, the integral result does not depend on the choice of $P$ (assuming that the EBM approximates ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ well). We thus calculate the integral along the directed line segment from ${\mathbf{x}}_{0}$ to $\mathbf{x}$ , denoted as ${\int }_{{\mathbf{x}}_{0}}^{\mathbf{x}}$ .
+
+Finally, the (relative) energy of $\mathbf{x}$ can be calculated by
+
+$$
+E\left( \mathbf{x}\right) = - {\int }_{{\mathbf{x}}_{0}}^{\mathbf{x}}{\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right) \cdot \mathrm{d}\mathbf{x} \approx - \mathop{\sum }\limits_{{k = 0}}^{K}\left( {{\mathbf{x}}_{k + 1} - {\mathbf{x}}_{k}}\right) \cdot {\nabla }_{\mathbf{x}}\log q\left( {\mathbf{x}}_{k + 1}\right) = - \mathop{\sum }\limits_{{k = 0}}^{K}\left( {{\mathbf{x}}_{k + 1} - {\mathbf{x}}_{k}}\right) \cdot {s}_{\theta }\left( {\mathbf{x}}_{k + 1}\right) , \tag{18}
+$$
+
+where $\left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{K}}\right\}$ denotes $K$ points evenly distributed along the directed line segment, and ${\mathbf{x}}_{K + 1} \triangleq \mathbf{x}$ .
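+A sketch of this approximation is given below, assuming a trained `score_net` $s_{\theta}$ that operates on batches of points (whether to batch the evaluation is an implementation detail):
+
+```python
+import torch
+
+def relative_energy(score_net, x, x0, K=100):
+    """Relative energy E(x) - E(x0) via the line integral of Eqn. (18); a sketch.
+
+    x, x0: points in the (z, y_hat) space, shape (D,); x0 is the zero-energy point.
+    K: number of interior points along the segment from x0 to x.
+    """
+    ts = torch.linspace(0.0, 1.0, K + 2)      # endpoints x_0 = x0 and x_{K+1} = x
+    points = x0 + ts[:, None] * (x - x0)      # (K + 2, D) points on the segment
+    deltas = points[1:] - points[:-1]         # x_{k+1} - x_k
+    scores = score_net(points[1:])            # s_theta(x_{k+1})
+    return -(deltas * scores).sum()           # minus sign: see Eqn. (17)/(18)
+```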
+
+### A.4 Pseudo codes
+
+Alg. 1 and Alg. 2 show the pseudo codes of MMPQ and the model training process, respectively. Notations used can be found in Appx. A.1.
+
+Algorithm 1 Active learning with MMPQ
+
+---
+
+Input: Initial labeled pool ${\mathcal{D}}_{L}^{0}$ , unlabeled pool ${\mathcal{D}}_{U}^{0}$ , number of rounds $T$ , number of queries per round $b$ , EBM, task learner
+
+ 1: Train task learner with ${\mathcal{D}}_{L}^{0}$ , perform model selection using ${\mathcal{D}}_{\text{val }}$ , and test the learner on ${\mathcal{D}}_{\text{test }}$
+
+ 2: for $t \in \{ 1,\cdots , T\}$ do
+
+ 3:     Train the EBM (Sec. 4.2.2)
+
+ 4:     // perform active selection
+
+ 5:     Select a batch of $b$ samples ${\mathcal{B}}^{t}$ according to Eqn. (4)
+
+ 6:     Annotate ${\mathcal{B}}^{t}$ and obtain ${\mathcal{B}}_{\text{anno }}^{t}$
+
+ 7:     ${\mathcal{D}}_{L}^{t} = {\mathcal{D}}_{L}^{t - 1} \cup {\mathcal{B}}_{\text{anno }}^{t},\;{\mathcal{D}}_{U}^{t} = {\mathcal{D}}_{U}^{t - 1} \smallsetminus {\mathcal{B}}^{t}$
+
+ 8:     // train and test task learner
+
+ 9:     Train task learner with ${\mathcal{D}}_{L}^{t}$ , perform model selection using ${\mathcal{D}}_{\text{val }}$ , and test the learner on ${\mathcal{D}}_{\text{test }}$
+
+10: end for
+
+Output: Performance on ${\mathcal{D}}_{\text{test }}$ for $t \in \{ 0,\cdots , T\}$
+
+---
+
+Algorithm 2 Training process
+
+---
+
+Input: EBM ${s}_{\theta }$ , AE encoder Enc and decoder Dec, task learner $h$ (trained and fixed)
+
+ 1: while not converged do
+
+ 2:     Randomly sample $G$ from ${\mathcal{D}}_{L}$ and ${G}^{\prime }$ from ${\mathcal{D}}_{U}$
+
+ 3:     Sample $\widehat{\mathbf{y}}$ by drawing ${\widehat{y}}_{i} \sim \operatorname{Ber}\left( {h{\left( G\right) }_{i}}\right)$ for each $i$
+
+ 4:     $\mathbf{x} = \left( {\operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right) ,\widehat{\mathbf{y}}}\right)$
+
+ 5:     Calculate ${L}_{\text{joint }}$ with Eqn. (9), (10), (11)
+
+ 6:     Update ${s}_{\theta }$ , Enc and Dec with ${L}_{\text{joint }}$
+
+ 7: end while
+
+Output: Trained EBM and AE
+
+---
+
+### A.5 Dataset information
+
+We use the BACE, BBBP, HIV and SIDER datasets from MoleculeNet [5]. Here we give a brief introduction to these datasets:
+
+- BACE: Human $\beta$ -secretase 1 (a.k.a. BACE-1) is an enzyme in the human body. It has been found that inhibition of BACE-1 can slow the development of Alzheimer's disease. The BACE dataset contains experimentally measured (binarized) binding results (i.e., effectiveness in inhibition) for 1,513 candidate inhibitors of BACE-1.
+
+- BBBP: The Blood-Brain Barrier (BBB) is a highly selective semipermeable border that prevents solutes in the blood from non-selectively crossing into the extracellular fluid of the central nervous system. When designing drugs for brain disorders, one major challenge is to ensure that the obtained drug is able to pass through the BBB. The BBBP dataset provides binary labels for 2,039 molecules on their ability to permeate the BBB.
+
+- HIV: The HIV dataset provides the experimentally tested ability to inhibit HIV replication for 41,127 molecules.
+
+- SIDER: SIDER is an abbreviation of Side Effect Resource, which is a database of marketed drugs and their adverse drug reactions (i.e., side effects). The SIDER dataset contains the results of 1,427 drugs on 27 kinds of side effects.
+
+Statistics of the datasets are in Tab. 1.
+
+Table 1: Statistics of used datasets
+
+| | BACE | SIDER | BBBP | HIV |
+| --- | --- | --- | --- | --- |
+| Number of Tasks | 1 | 27 | 1 | 1 |
+| Number of Molecules | 1,513 | 1,427 | 2,039 | 41,127 |
+
+
+We note that the TOX21, TOXCAST, MUV and PCBA datasets in the MoleculeNet benchmark [5] are also often used. However, we find that they contain a large number of molecules whose properties are not fully provided by the dataset creators.
+
+To see this, for each property of the 4 datasets, we show in Fig. 7 the numbers of molecules whose ground-truth labels are provided or not. We can see that there are a large number of label-not-provided molecules for each property that needs to be predicted. This violates the general assumption in AL that the oracle can correctly annotate each query. Thus, we do not run experiments on these datasets.
+
+
+
+Figure 7: Ratios of molecules whose ground truth properties are provided or not. The x-axis ticks correspond to the index of properties. We sort the properties by the ratio of label-not-provided molecules in ascending order.
+
+### A.6 Implementation details
+
+The AE for reconstructing the SMILES strings is a Transformer [22, 51], which has been shown to be effective in sequence modeling. The EBM is a 5-layer Multi-Layer Perceptron with residual connections, the ReLU activation function [52], and Layer Normalization [53] between hidden layers. We use the RDKit library ${}^{1}$ to pre-generate SMILES strings, and follow [22] to tokenize the strings. We instantiate the task learner with a 5-layer GINE architecture [4], which is widely used for molecular property prediction [4, 36, 37].
+
+In each AL round, the auto-encoder and EBM are jointly trained with a batch size of 128. The learning rates are 2e-4 for the auto-encoder and 1e-3 for the EBM. The standard deviation of the Gaussian noise used in DSM is 10. The criterion for training convergence is that 10 batches have a DSM loss (see Eqn. (10)) no larger than 0.015, a threshold obtained in our pilot experiments. The maximum number of training epochs is 2500, and the optimizer is Adam. The task learner is trained for 50 epochs with a batch size of 128, a learning rate of $1\mathrm{e} - 3$ , and the Adam optimizer. Note that the task learner is re-initialized before it is trained and tested. For calculating the (relative) energy, we use $K = {100}$ points for the approximation (see Eqn. (18)), and set the zero-energy point as $\left( {{\mathbf{z}}_{0} = {\overline{\mathbf{z}}}_{U},{\widehat{y}}_{0} = 0}\right)$ , where ${\overline{\mathbf{z}}}_{U}$ is the mean embedding of the unlabeled pool:
+
+$$
+{\overline{\mathbf{z}}}_{\mathrm{U}} = \frac{1}{{N}_{U}}\mathop{\sum }\limits_{{G \in {\mathcal{D}}_{U}}}\operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right) . \tag{19}
+$$
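+
+As a small illustration of Eqn. (19), the zero-energy anchor $\left( {\mathbf{z}}_{0},{\widehat{y}}_{0}\right)$ can be assembled as follows (a sketch; `enc` and `smiles_of` are the hypothetical helpers from the training-loop sketch above):
+
+```python
+import torch
+
+def zero_energy_point(unlabeled_graphs, enc):
+    # Mean embedding of the unlabeled pool (Eqn. (19)), paired with y_hat_0 = 0.
+    z_bar = torch.stack([enc(smiles_of(G)) for G in unlabeled_graphs]).mean(dim=0)
+    return torch.cat([z_bar, torch.zeros(1)], dim=-1)
+```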
+
+---
+
+${}^{1}$ https://www.rdkit.org/
+
+---
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/dnRSxTNIvjK/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/dnRSxTNIvjK/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..993e5bc3a5550441c1ddc7d9b86a214df998c82f
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/dnRSxTNIvjK/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,269 @@
+§ JOINTLY MODELLING UNCERTAINTY AND DIVERSITY FOR ACTIVE MOLECULAR PROPERTY PREDICTION
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Molecular property prediction is a fundamental task in AI-driven drug discovery. Deep learning has achieved great success in this task, but relies heavily on abundant annotated data. However, annotating molecules is particularly costly because it often requires lab experiments conducted by experts. Active Learning (AL) tackles this issue by querying (i.e., selecting) the most valuable samples to annotate, according to two criteria: uncertainty of the model and diversity of the data. Combining both criteria (a.k.a. hybrid AL) generally leads to better performance than using a single criterion. However, existing best hybrid methods rely on trade-off hyperparameters for balancing uncertainty and diversity, which must be carefully tuned for each experimental setting, incurring substantial annotation and time costs. In this paper, we propose a novel AL method that jointly models uncertainty and diversity without such trade-off hyperparameters. Specifically, we model the joint distribution of the labeled data and the model prediction. Based on this distribution, we introduce a Minimum Maximum Probability Querying (MMPQ) strategy, in which a single selection score naturally captures both how uncertain the model is about its prediction and how dissimilar the sample is to the currently labeled data. To model the joint distribution, we adapt energy-based models to the non-Euclidean molecular graph data by learning chemically-meaningful embedding vectors as a proxy for the graphs. Extensive experiments on various benchmark datasets show that our method achieves superior AL performance, outperforming existing methods by a large margin. We also conduct ablation studies to verify the different design choices of our approach.
+
+§ 1 INTRODUCTION
+
+AI-driven drug discovery is an important application of data mining and machine learning. In the drug discovery pipeline, a fundamental step is to use computational methods to predict the molecular properties (e.g., toxicity and binding specificity) of candidate compounds [1, 2]. Recently, deep learning models have achieved great success in molecular property prediction [1, 3-5], but their high performance relies on a large amount of annotation. However, annotating molecules is particularly time-consuming and costly, since it often requires lab experiments or complex theoretical computation $\left\lbrack {3,6}\right\rbrack$ .
+
+One promising way to alleviate this problem is Active Learning (AL) [7], which aims at finding a strategy for iteratively querying (i.e., selecting) the most valuable data samples to annotate, so as to maximize model performance under a low annotation budget. AL strategies query samples mainly based on two criteria: uncertainty of the model [8], and diversity of the queried data [9]. Strategies taking both criteria into account (a.k.a. hybrid strategies) have recently been shown to outperform methods based on only uncertainty or diversity in many learning tasks [10-12]. Existing best hybrid methods generally rely on trade-off hyperparameters for balancing uncertainty and diversity [11-15]. For example, WAAL [12] requires manually-tuned coefficients to obtain a weighted sum of its uncertainty and diversity terms. EADA [13] relies on two selection ratios for its two-step selection process. These trade-off hyperparameters are crucial to the AL performance and hence need to be carefully tuned for each experiment setting.
+
+However, tuning trade-off hyperparameters can cause substantial inefficiency in AL. For one thing, since these hyperparameters have a large influence on the outcome of the corresponding AL strategies, the selected samples under different choices of the hyperparameters often vary a lot, and thus the total annotation cost needed for tuning will greatly exceed the budget. For another, the tuning process can take a long time, since each AL experiment iterates between query selection and model (re)training for several rounds.
+
+In this paper, we propose a novel AL strategy that naturally takes into account uncertainty and diversity without the need for trade-off hyperparameters. Our strategy is based on a joint distribution $q\left( {x,y}\right) \triangleq p\left( {y \mid x}\right) p\left( x\right)$ , which contains information about both uncertainty and diversity: $p\left( {y \mid x}\right)$ is the prediction distribution of the model (with input $x$ and prediction $y$ ), which is widely used to define uncertainty metrics [7, 8, 16]; $p\left( x\right)$ is the density of the currently annotated data, which has been shown to be useful for identifying samples that can effectively increase data diversity [11, 12, 17].
+
+Specifically, our strategy operates by first maximizing $q\left( {x,y}\right)$ via varying $y$ , and then minimizing $\mathop{\max }\limits_{y}q\left( {x,y}\right)$ via varying $x$ . We thus name our strategy Minimum Maximum Probability Querying (MMPQ). Importantly, we show that the selection score of MMPQ can be viewed as the product of two terms - the first term leads to samples on which the model has low prediction confidence, while the second favors samples that are dissimilar to labeled data. In this way, the selected samples are naturally those that the model is most uncertain about, while at the same time being able to increase the data diversity.
+
+For modelling the joint distribution, we propose to use an Energy-Based Model (EBM) [18, 19], since it can explicitly output the desired probability value. For training the EBM, we need to tackle one key challenge in our setting: the variable $x$ in the joint distribution has a non-Euclidean data structure (i.e., a molecule graph), which renders the commonly-used EBM training scheme inapplicable [19, 20]. To address this challenge, we take a learned embedding vector $z$ as a proxy of the non-Euclidean input $x$ , which allows us to train the EBM on $z$ and $y$ with the commonly-used EBM training scheme. Specifically, inspired by [21, 22], we learn the embeddings by training an auto-encoder to reconstruct the input SMILES strings, an expert-defined sequence representation of molecules. The EBM is trained by Denoising Score Matching (DSM) [20, 23], i.e., by learning the "Stein score" [24] of $q\left( {x,y}\right)$ ; DSM has been shown to be an efficient and robust EBM training scheme.
+
+To evaluate our MMPQ strategy, we apply it to actively train a commonly-used Graph Neural Network (GNN) [4] on various benchmark datasets of molecular property prediction. Extensive results show that MMPQ enables the GNN to achieve high performance with limited annotation budget, significantly outperforming other competitive AL methods. In addition, we conduct ablation studies to verify different design choices of our method. In particular, we show that the uncertainty and diversity terms make complementary contributions to the good performance: the diversity term is important in early iterations of AL, while the uncertainty term is essential in later iterations. Anonymized code is available at https://anonymous.4open.science/r/MMPQ-5EBD/.
+
+§ 2 RELATED WORKS
+
+Molecular property prediction is a critical step in drug discovery $\left\lbrack {1,2,5}\right\rbrack$ . Traditional methods (e.g., those based on density functional theory [25]) are too slow to be applied in practice. To resolve this problem, deep learning methods $\left\lbrack {1,3,4,{26},{27}}\right\rbrack$ have been widely proposed, which can be categorized into two types: (1) descriptor-based methods [26, 27] that represent the input molecules as expert-crafted molecular descriptors (e.g., fingerprints [28]), and (2) GNN-based methods [1,3,4] that directly take molecule graphs as input. As found in [1, 5], GNNs generally outperform descriptor-based methods, and thus this work focuses on GNN-based molecular property prediction.
+
+Active learning improves annotation efficiency by iteratively querying samples based on two criteria: uncertainty of the task learner $\left\lbrack {8,{16},{29},{30}}\right\rbrack$ , and/or diversity of queried data $\left\lbrack {9,{17},{21}}\right\rbrack$ . Uncertainty-based methods define various uncertainty metrics for querying data $\left\lbrack {8,{10},{16},{30}}\right\rbrack$ , while diversity-based methods aim to find a representative subset of the whole dataset by querying diverse samples [9, 21]. Compared to using only uncertainty or diversity, recent works find that combining the two criteria (a.k.a. hybrid methods) leads to better performance [10, 11, 13-15]. However, existing hybrid approaches generally need to balance uncertainty and diversity via some trade-off hyperparameters. For example, WAAL [12] uses manually-tuned coefficients to weight its uncertainty term and diversity term. EADA [13] adopts a two-stage querying approach, where each stage requires a prefixed selection ratio. We note that EADA also trains an EBM for active selection. Our method differs from EADA mainly in 3 aspects. First, the motivation of EADA to adopt EBMs is to identify out-of-distribution samples, while we use EBMs for modelling the distribution capturing both uncertainty and diversity. Second, they train their EBM via contrastive divergence [31], while we use denoising score matching [20]. Third, they need two separate selection steps with different selection scores, and require two hyperparameters to trade off between uncertainty and diversity, while we only have one single selection score that naturally captures both uncertainty and diversity.
+
+Apart from the above, some other hybrid methods rely on trade-off hyperparameters during their model training process [11, 14, 15]. We note that there is an existing hybrid strategy, BADGE [10], that is also free from trade-off hyperparameters like ours. However, BADGE [10] assumes that the task learner's prediction is a faithful proxy of the ground-truth label. This may not hold in early AL iterations on (typically small) molecule datasets, since the task learner would be inaccurate due to limited training data.
+
+Energy-based models are a class of powerful methods for explicit generative modeling. Recently, some works [32-35] have leveraged EBMs for modelling molecular data. To tackle difficulties caused by the discrete nature of molecule graphs, [32] leverages a dequantization technique, and [33] designs a diffusion process based on stochastic differential equations. Different from [32, 33], we propose to train our EBM on a continuous embedding space of molecules. On the other hand, [34, 35] focus on molecule conformation generation, which is essentially a continuous problem, since the conformation of a molecule is represented by the 3D spatial coordinates of its atoms.
+
+§ 3 PRELIMINARIES
+
+Problem setting. We consider batch-mode pool-based active learning [7], a practical AL setting for deep models. In each AL round, a batch of samples from the unlabeled pool ${\mathcal{D}}_{U}$ are queried according to a strategy, annotated by an oracle (e.g., a chemist), and added to the labeled pool ${\mathcal{D}}_{L}$ . The updated ${\mathcal{D}}_{L}$ is then used to train the task learner. A more formal description of this setting is in Appx. A.1.
+
+Notations. A molecule is represented as a graph $G = \left( {V,E}\right)$ , with nodes $V$ and edges $E$ corresponding to atoms and chemical bonds. As in [4,36-38], we are interested in $n$ binary molecular properties (e.g., toxicity), which are denoted by a label vector $\mathbf{y} = \left( {{y}_{1},\cdots ,{y}_{n}}\right) \in \{ 0,1{\} }^{n}$ , where ${y}_{i} = 1$ or 0 means the molecule has the $i$ -th property or not. A task learner $h\left( \cdot \right)$ is trained to predict the properties. The $i$ -th output of the task learner, $h{\left( G\right) }_{i}$ , specifies a distribution $p\left( {{y}_{i} \mid G}\right)$ over the predicted label of the $i$ -th property of $G$ , which is essentially a Bernoulli distribution with success probability $h{\left( G\right) }_{i}$ (denoted as $\operatorname{Ber}\left( {h{\left( G\right) }_{i}}\right)$ ).
+
+Energy-Based Models. EBMs [18] specify probability density or mass functions as follows:
+
+$$
+{p}_{\theta }\left( \mathbf{x}\right) = \frac{\exp \left( {-{E}_{\theta }\left( \mathbf{x}\right) }\right) }{{Z}_{\theta }}, \tag{1}
+$$
+
+where $\mathbf{x} \in {\mathbb{R}}^{D}$ is a random sample, ${E}_{\theta }\left( \mathbf{x}\right)$ is the energy function with learnable parameters $\theta$ , and ${Z}_{\theta } = \int \exp \left( {-{E}_{\theta }\left( \mathbf{x}\right) }\right) \mathrm{d}\mathbf{x}$ is a normalizing constant. By learning $\theta$ , we can use an EBM to approximate a real data distribution, i.e., ${p}_{\theta } \approx {p}_{\text{ data }}$ .
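+
+As a concrete illustration of Eqn. (1) (our sketch, not the architecture used in this paper), an energy function can be any scalar-valued network, and the model exposes only unnormalized log-probabilities, since ${Z}_{\theta }$ is intractable in general:
+
+```python
+import torch
+import torch.nn as nn
+
+class EnergyNet(nn.Module):
+    # E_theta: R^D -> R; exp(-E_theta(x)) / Z_theta defines the EBM density.
+    def __init__(self, dim, hidden=256):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(dim, hidden), nn.ReLU(),
+            nn.Linear(hidden, hidden), nn.ReLU(),
+            nn.Linear(hidden, 1),
+        )
+
+    def unnormalized_log_prob(self, x):
+        # log p_theta(x) + log Z_theta = -E_theta(x); Z_theta is left implicit.
+        return -self.net(x).squeeze(-1)
+```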
+
+Denoising Score Matching. DSM [23, 39] is an efficient approach for training EBMs. Here, the "(Stein) score" of a distribution $f\left( \mathbf{x}\right)$ is defined as the log-probability’s first-order gradient function w.r.t. $\mathbf{x}$ , i.e., ${\nabla }_{\mathbf{x}}\log f\left( \mathbf{x}\right)$ . DSM first disturbs ${p}_{\text{ data }}\left( \mathbf{x}\right)$ with a pre-defined noise distribution ${p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right)$ , and then trains the EBM via
+
+$$
+{\mathbb{E}}_{\begin{matrix} {\mathbf{x} \sim {p}_{\text{ data }}\left( \mathbf{x}\right) } \\ {\widetilde{\mathbf{x}} \sim {p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right) } \end{matrix}}\left\lbrack {\frac{1}{2}{\begin{Vmatrix}{\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{\theta }\left( \widetilde{\mathbf{x}}\right) - {\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{N}\left( \widetilde{\mathbf{x}} \mid \mathbf{x}\right) \end{Vmatrix}}_{2}^{2}}\right\rbrack . \tag{2}
+$$
+
+With a properly chosen ${p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right)$ , the noise score ${\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right)$ can be obtained in closed form [39].
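+
+For the Gaussian kernel used later in this paper, this score has a simple closed form (a short derivation added for completeness):
+
+$$
+\log \mathcal{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I}\right) = - \frac{{\begin{Vmatrix}\widetilde{\mathbf{x}} - \mathbf{x}\end{Vmatrix}}_{2}^{2}}{2{\sigma }^{2}} + \mathrm{const}\; \Rightarrow \;{\nabla }_{\widetilde{\mathbf{x}}}\log \mathcal{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I}\right) = - \frac{\widetilde{\mathbf{x}} - \mathbf{x}}{{\sigma }^{2}},
+$$
+
+which is exactly the regression target appearing in Eqn. (10).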
+
+§ 4 METHOD
+
+§ 4.1 THE MINIMUM-MAXIMUM-PROBABILITY QUERY STRATEGY
+
+Our proposed query strategy is based on the joint distribution of two key probability distributions used in existing works. The first is the prediction distribution of the task learner, i.e., $p\left( {y = \widehat{y} \mid G}\right)$ (abbreviated as $p\left( {\widehat{y} \mid G}\right)$ ), which is widely used to define different uncertainty metrics [7,8,16]. The second is the distribution of the currently labeled pool ${\mathcal{D}}_{L}$ , denoted ${p}_{L}\left( G\right)$ . As shown in [11, 12, 17, 21], ${p}_{L}\left( G\right)$ is useful for identifying samples that are dissimilar to the labeled ones, and annotating these samples effectively increases data diversity. Inspired by these works, we propose to model the joint distribution of $p\left( {y = \widehat{y} \mid G}\right)$ and ${p}_{L}\left( G\right)$ .
+
+Formally, let $q\left( {G,\widehat{\mathbf{y}}}\right)$ denote the joint distribution:
+
+$$
+q\left( {G,\widehat{\mathbf{y}}}\right) \triangleq p\left( {\widehat{\mathbf{y}} \mid G}\right) {p}_{L}\left( G\right) , \tag{3}
+$$
+
+where we use the boldface $\widehat{\mathbf{y}}$ because we may be interested in more than one task. Note that $\widehat{\mathbf{y}}$ is a random variable following $p\left( {\widehat{\mathbf{y}} \mid G}\right)$ , not the ground-truth label of $G$ .
+
+Then, we perform active selection by first maximizing $q\left( {G,\widehat{\mathbf{y}}}\right)$ via varying $\widehat{\mathbf{y}}$ for each single $G$ , and then selecting a batch of $G$ that minimizes the obtained $\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {G,\widehat{\mathbf{y}}}\right)$ . Denote the selected batch as $\mathcal{B} = \left\{ {{G}_{1},\cdots ,{G}_{b}}\right\}$ . Our strategy is formalized as:
+
+$$
+\mathcal{B} = \underset{{G}_{1},\cdots ,{G}_{b} \in {\mathcal{D}}_{U}}{\arg \min }\left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {{G}_{1},\widehat{\mathbf{y}}}\right) ,\cdots ,\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {{G}_{b},\widehat{\mathbf{y}}}\right) }\right) . \tag{4}
+$$
+
+We name our strategy Minimum-Maximum-Probability Querying (MMPQ). The whole active learning process with this MMPQ strategy is summarized in Appx. A.4.
+
+§ 4.1.1 MMPQ AS A TUNING-FREE HYBRID STRATEGY
+
+Here we show that MMPQ naturally captures both uncertainty of the task learner and diversity w.r.t. the whole data space in a tuning-free manner. First, from Eqn. (4) and Eqn. (3), we can see that the selection score of MMPQ is:
+
+$$
+\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {G,\widehat{\mathbf{y}}}\right) = \left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}p\left( {\widehat{\mathbf{y}} \mid G}\right) }\right) {p}_{L}\left( G\right) . \tag{5}
+$$
+
+Then, let ${p}^{M} = \mathop{\max }\limits_{\widehat{\mathbf{y}}}p\left( {\widehat{\mathbf{y}} \mid G}\right)$ , and it can be seen that the MMPQ strategy essentially selects data with smaller ${p}^{M}$ and smaller ${p}_{L}\left( G\right)$ .
+
+ * Uncertainty. Smaller ${p}^{M}$ corresponds to samples that the task learner is less confident about. Specifically, let ${\widehat{\mathbf{y}}}^{ * } = \left( {{\widehat{y}}_{1}^{ * },\cdots ,{\widehat{y}}_{n}^{ * }}\right)$ denote the prediction that achieves ${p}^{M}$ (i.e., ${p}^{M} = p\left( {{\widehat{\mathbf{y}}}^{ * } \mid G}\right)$ ), and ${\widehat{\mathbf{y}}}^{\prime } = \left( {{\widehat{y}}_{1}^{\prime },\cdots ,{\widehat{y}}_{n}^{\prime }}\right)$ denote any other prediction that is different from ${\widehat{\mathbf{y}}}^{ * }$ . Note that, since $p\left( {{\widehat{\mathbf{y}}}^{ * } \mid G}\right) + \mathop{\sum }\limits_{{{\widehat{\mathbf{y}}}^{\prime } \in \{ 0,1{\} }^{n},{\widehat{\mathbf{y}}}^{\prime } \neq {\widehat{\mathbf{y}}}^{ * }}}p\left( {{\widehat{\mathbf{y}}}^{\prime } \mid G}\right) = 1$ , a smaller ${p}^{M}$ means that ${p}^{M}$ is closer to the probabilities of the second-largest (and all other) predictions, implying that the task learner is more uncertain about its prediction on molecule $G$ .
+
+ * Diversity. Smaller ${p}_{L}\left( G\right)$ means that $G$ lies in low-density regions of the distribution of currently labeled data, and hence is dissimilar to the labeled data. Thus, querying those with small ${p}_{L}\left( G\right)$ increases the diversity of the obtained labeled pool ${\mathcal{D}}_{L}^{t}\left\lbrack {{11},{12},{17},{21}}\right\rbrack$ .
+
+Based on the above reasoning, samples with the lowest selection scores (i.e., those taken by the arg min operation in Eqn. (4)) are naturally those the model is most uncertain about, while at the same time being able to increase data diversity. As such, MMPQ does not need a hyperparameter to trade off uncertainty against diversity.
+
+§ 4.1.2 IMPLEMENTATION OF MMPQ
+
+Since MMPQ is based on the value of $q\left( {G,\widehat{\mathbf{y}}}\right)$ , we need to model $q\left( {G,\widehat{\mathbf{y}}}\right)$ with an explicit deep generative model. In particular, we instantiate an Energy-Based Model (EBM) using a neural network, since EBMs have been shown to be quite expressive and stable in distribution modelling [39, 40]. Formally, $q\left( {G,\widehat{\mathbf{y}}}\right)$ is modelled by
+
+$$
+q\left( {G,\widehat{\mathbf{y}}}\right) = \frac{\exp \left( {-E\left( {G,\widehat{\mathbf{y}}}\right) }\right) }{Z}, \tag{6}
+$$
+
+Figure 1: Model design and data flow. Inputs and outputs of the labeled pool and the corresponding objective are colored in blue; those corresponding to the unlabeled pool are in red.
+
+where $E\left( {G,\widehat{\mathbf{y}}}\right)$ is the energy value given by the EBM, and $Z$ is a normalizing constant.
+
+In this subsection, we focus on how to implement MMPQ with the EBM, and thus here we assume that the EBM is already trained and fixed. Model design and training of the EBM will be presented later in Sec. 4.2.
+
+From Eqn. (6), we have
+
+$$
+\underset{G}{\arg \min }\left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}q\left( {G,\widehat{\mathbf{y}}}\right) }\right) \tag{7}
+$$
+
+$$
+= \underset{G}{\arg \min }\left( {\mathop{\max }\limits_{\widehat{\mathbf{y}}}\left( {\log q\left( {G,\widehat{\mathbf{y}}}\right) + \log Z}\right) }\right) = \underset{G}{\arg \min }\left( {-\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right) }\right) = \underset{G}{\arg \max }\left( {\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right) }\right) .
+$$
+
+This reveals that we can implement MMPQ based on the learned energy values, without the need to calculate the normalizing constant $Z$ .
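+
+As an illustration, assuming a trained energy function `energy(z, y)` (a hypothetical callable; the paper computes energies from the score net as described in Appx. A.3) and a single binary task, the selection reduces to ranking unlabeled molecules by their minimum energy:
+
+```python
+import torch
+
+def mmpq_select(z_pool, energy, batch_size):
+    # z_pool: (N, d) embeddings of the unlabeled pool; energy(z, y) -> (N,)
+    # energies for a fixed label value y (hypothetical callable).
+    e0 = energy(z_pool, torch.zeros(len(z_pool)))
+    e1 = energy(z_pool, torch.ones(len(z_pool)))
+    min_energy = torch.minimum(e0, e1)     # min over y in {0, 1} of E(G, y)
+    # arg max of the minimum energy == arg min of the maximum probability (Eqn. (7)).
+    return torch.topk(min_energy, batch_size).indices
+```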
+
+One may argue that $\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right)$ would be difficult to compute for large $n$ , since it involves all ${2}^{n}$ possible combinations of $\left( {{\widehat{y}}_{1},\cdots ,{\widehat{y}}_{n}}\right)$ . We show in Appx. A.2 that, by leveraging the conditional independence assumption of labels [41], we can compute $\mathop{\min }\limits_{\widehat{\mathbf{y}}}E\left( {G,\widehat{\mathbf{y}}}\right)$ in a task-wise manner.
+
+§ 4.2 MODEL DESIGN AND TRAINING OF THE EBM
+
+§ 4.2.1 MODEL DESIGN
+
+Designing an EBM for learning $q\left( {G,\widehat{\mathbf{y}}}\right)$ is not trivial, since the two variables have different data structures: $G$ is an attributed graph, while $\widehat{\mathbf{y}}$ is a vector. Moreover, learning EBMs for attributed graphs is itself a challenging open problem, due to their non-Euclidean and discrete nature [32, 42].
+
+To address the above issues, we propose to embed the molecule graphs $G$ into a learned embedding space, and then build the EBM on $\widehat{\mathbf{y}}$ and the embeddings $\mathbf{z}$ (see Fig. 1). Inspired by Sinha et al. [21], we learn the space by training an Auto-Encoder (AE) to reconstruct its inputs. However, due to graph isomorphism, directly reconstructing molecule graphs is difficult [22, 43, 44]. We thus propose to train the AE to reconstruct the molecules' SMILES strings [45], as shown in Fig. 1. SMILES is an expert-defined sequence representation of molecules, where sub-strings correspond to chemically-meaningful substructures (e.g., functional groups). Such a sequence-based reconstruction task enables the auto-encoder to learn molecule embeddings without struggling to reconstruct graphs.
+
+Formally, let $\operatorname{Enc}\left( \cdot \right)$ and $\operatorname{Dec}\left( \cdot \right)$ denote the encoder and decoder respectively, and let $\operatorname{Sml}\left( \cdot \right)$ denote the operation of retrieving the SMILES string of a molecule (which can be easily pre-computed using open-source cheminformatics libraries). Then, for a molecule $G$ , the ground-truth and the reconstructed SMILES strings are
+
+$$
+S \triangleq \operatorname{Sml}\left( G\right) ,\;\widehat{S} \triangleq \operatorname{Dec}\left( {\operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right) }\right) . \tag{8}
+$$
+
+For learning high-quality embeddings, we use both labeled and unlabeled data to train the AE:
+
+$$
+{L}_{\mathrm{{rec}}} = {\mathbb{E}}_{G \in {\mathcal{D}}_{L}}\left\lbrack {d\left( {S,\widehat{S}}\right) }\right\rbrack + {\mathbb{E}}_{{G}^{\prime } \in {\mathcal{D}}_{U}}\left\lbrack {d\left( {{S}^{\prime },{\widehat{S}}^{\prime }}\right) }\right\rbrack , \tag{9}
+$$
+
+where $d\left( {\cdot , \cdot }\right)$ is a distance between sequence pairs.
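+
+For instance, if Dec outputs per-position token logits, a natural instantiation of $d\left( {\cdot , \cdot }\right)$ is the token-level cross-entropy (our assumption here, since the concrete choice of $d$ is not pinned down in this section):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def sequence_distance(logits, target_tokens):
+    # logits: (L, V) per-position token logits from Dec;
+    # target_tokens: (L,) integer token ids of the ground-truth SMILES string.
+    # Token-level cross-entropy is one natural choice for d(S, S_hat).
+    return F.cross_entropy(logits, target_tokens)
+```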
+
+In the rest of this paper, we take $\mathbf{z} \triangleq \operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right)$ as a proxy of $G$ in some cases, and use $\mathbf{x}$ to denote the tuple $\left( {\mathbf{z},\widehat{\mathbf{y}}}\right)$ , which is implemented by concatenating $\mathbf{z}$ and $\widehat{\mathbf{y}}$ .
+
+Following previous works [39,40], we instantiate the EBM as a "score net" ${s}_{\theta }\left( \mathbf{x}\right)$ , which learns the score of the target distribution $q\left( \mathbf{x}\right)$ , i.e., ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ . Once ${s}_{\theta }\left( \mathbf{x}\right)$ is trained, we use a summation to approximate the integral of ${\nabla }_{\mathbf{x}}\log q\left( \mathbf{x}\right)$ (see Appx. A.3). An alternative choice is to approximate the energy function ${E}_{\theta }\left( \mathbf{x}\right)$ directly, which however is more difficult than modeling the score, as experimentally shown in Sec. 5.3.3.
+
+§ 4.2.2 MODEL TRAINING
+
+We train the EBM ${s}_{\theta }\left( \mathbf{x}\right)$ via denoising score matching. With the noise ${p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right) = \mathcal{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I}\right)$ , we have ${\nabla }_{\widetilde{\mathbf{x}}}\log {p}_{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x}}\right) = - \frac{\widetilde{\mathbf{x}} - \mathbf{x}}{{\sigma }^{2}}$ [39]. Then, the DSM objective is:
+
+$$
+{L}_{\mathrm{{DSM}}} = {\mathbb{E}}_{\begin{matrix} {\mathbf{x} \in {\mathcal{D}}_{L}} \\ {\widetilde{\mathbf{x}} \sim \mathcal{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I}\right) } \end{matrix}}\left\lbrack {\begin{Vmatrix}{s}_{\theta }\left( \widetilde{\mathbf{x}}\right) + \frac{\widetilde{\mathbf{x}} - \mathbf{x}}{{\sigma }^{2}}\end{Vmatrix}}_{2}^{2}\right\rbrack , \tag{10}
+$$
+
+where we slightly abuse the notation, using $\mathbf{x} \in {\mathcal{D}}_{L}$ to denote $\mathbf{x} \in \left\{ {\left( {\operatorname{Enc}\left( {\operatorname{Sml}\left( G\right) }\right) ,\widehat{\mathbf{y}}}\right) \mid G \in {\mathcal{D}}_{L}}\right\}$ .
+
+Note that the second term in the target distribution (Eqn. (3)) is the density of the labeled data only. Therefore, in Eqn. (10), we calculate ${L}_{\mathrm{{DSM}}}$ only on the labeled pool ${\mathcal{D}}_{L}$ (cf. the reconstruction objective in Eqn. (9)).
+
+One challenge in calculating ${L}_{\mathrm{{DSM}}}$ is that it requires $\left( {G,\widehat{\mathbf{y}}}\right)$ pairs sampled i.i.d. from $q\left( {G,\widehat{\mathbf{y}}}\right)$ , but we do not have such samples at hand. To address this challenge, we propose a two-step sampling method: first randomly pick $G$ from ${\mathcal{D}}_{L}$ ; then draw a sample $\widehat{\mathbf{y}} = \left( {{\widehat{y}}_{1},\cdots ,{\widehat{y}}_{n}}\right)$ from $p\left( {\widehat{\mathbf{y}} \mid G}\right)$ , which can be implemented by drawing ${\widehat{y}}_{i} \sim \operatorname{Ber}\left( {h{\left( G\right) }_{i}}\right)$ for each $i$ (under the conditional-independence assumption of labels).
+
+The EBM and the AE are jointly trained via
+
+$$
+{L}_{\text{ joint }} = {L}_{\mathrm{{DSM}}} + {L}_{\text{ rec }}. \tag{11}
+$$
+
+Pseudocode for the model training process is summarized in Appx. A.4.
+
+§ 5 EXPERIMENTS
+
+§ 5.1 EXPERIMENT SETUP
+
+We run experiments under the batch-mode pool-based AL setting (elaborated in Appx. A.1). The labeled pool is initialized by randomly selecting ${10}\%$ of the entire training set; the initial unlabeled pool is the remaining ${90}\%$ . Then 10 AL rounds are performed; in each round, an unlabeled batch of 4% of the entire training set is queried, so the total annotation budget is 50% of the training set. We use the BACE, BBBP, HIV and SIDER datasets from the widely-used MoleculeNet benchmark [5] (also included in the Open Graph Benchmark [46]). Statistics and detailed descriptions of the datasets are in Appx. A.5. Following [4], we use scaffold split, with train/val/test $= {80}\% /{10}\% /{10}\%$ . We use AUROC as the performance metric, as suggested in [5]. Please refer to Appx. A.6 for implementation details.
+
+§ 5.2 ACTIVE LEARNING PERFORMANCE
+
+We compare MMPQ against the following 8 baselines, with $\mathbf{U},\mathbf{D}$ and $\mathbf{H}$ denoting Uncertainty-based, Diversity-based and Hybrid methods respectively: Random (random selection), Entropy (U) (selecting samples with the largest prediction entropy), MC-Dropout (U) [8], CoreSet (D) [9], ASGN (D) [47], BADGE (H) [10], WAAL (H) [12], EADA (H) [13]. Among them, ASGN is the only existing method that investigates AL in molecular property prediction. BADGE, WAAL and EADA are representative hybrid methods and represent the state of the art on many image classification datasets. We note that there are other hybrid methods [11, 14, 15], but their code is either unreleased or only partially released; we were unable to reproduce their results and hence do not include them in the comparison.
+
+Figure 2: Active learning performance of MMPQ (ours) and baseline hybrid methods. "Round 0" corresponds to the performance on initial labeled pool.
+
+Figure 3: Active learning performance of MMPQ (ours) and uncertainty-based or diversity-based methods. "Round 0" corresponds to the performance on initial labeled pool.
+
+To avoid cluttered presentation, we show results of baseline hybrid methods in Fig. 2, and those of uncertainty-based or diversity-based methods in Fig. 3. In both figures, we include results of our MMPQ and the Random baseline.
+
+Results: From the figures we can see that MMPQ outperforms the baselines on all 4 datasets. Specifically, on HIV, MMPQ achieves 0.7302 AUROC in the 3rd AL round (using only 22% of the annotations of the entire training set), which is very close to the performance obtained with ${100}\%$ of the annotations (0.7344). This also explains why the performance of MMPQ almost saturates after the 3rd round. Furthermore, the performance of the hybrid methods requiring trade-off hyperparameters, i.e., WAAL and EADA, is not stable. In particular, although WAAL achieves performance comparable to MMPQ on BACE, it performs quite unsatisfactorily on HIV. Similarly, EADA performs well on SIDER but is the worst baseline on HIV. By contrast, our method achieves consistently superior performance. One may note that the performance of WAAL and EADA at round 0 is quite different from that of the other methods. The reason is that, in the other methods, the task learner is trained only with the classification loss on the currently labeled data, whereas WAAL and EADA additionally train the task learner with an auxiliary loss (an adversarial loss in WAAL, and a free-energy alignment loss in EADA). Therefore, even though the training data at round 0 are the same across all methods, WAAL and EADA can perform differently.
+
+§ 5.3 ABLATION STUDIES
+
+Here we conduct ablative experiments on HIV, which is the largest dataset used (see Tab. 1).
+
+§ 5.3.1 UNCERTAINTY OR DIVERSITY ONLY
+
+In Sec. 4.1.1, we show that our MMPQ strategy captures both uncertainty and diversity through the two terms in Eqn. (5). Here we ablatively study the effectiveness of these two terms. Since HIV has only one target property, we write $\widehat{y}$ instead of $\widehat{\mathbf{y}}$ . Specifically, under the setup described in Sec. 5.1, we compare MMPQ with two other strategies based on the trained EBM:
+
+ * The U.O. strategy that considers Uncertainty Only: querying data with minimum ${p}^{M} = \mathop{\max }\limits_{\widehat{y}}p\left( {\widehat{y} \mid G}\right)$ . Let ${\widehat{y}}^{ * }$ denote the predicted label that achieves ${p}^{M}$ . Then, based on the learned EBM, ${p}^{M}$ can be calculated by:
+
+$$
+{p}^{M} = \frac{\exp \left( {-E\left( {G,{\widehat{y}}^{ * }}\right) }\right) }{\mathop{\sum }\limits_{{\widehat{y} \in \{ 0,1\} }}\exp \left( {-E\left( {G,\widehat{y}}\right) }\right) }. \tag{12}
+$$
+
+ * The D.O. strategy that considers Diversity Only: querying those with minimum ${p}_{L}\left( G\right)$ , which satisfies
+
+$$
+{p}_{L}\left( G\right) \propto \mathop{\sum }\limits_{{\widehat{y} \in \{ 0,1\} }}\exp \left( {-E\left( {G,\widehat{y}}\right) }\right) . \tag{13}
+$$
+
+Figure 4: (a) AL performance of MMPQ, U.O. and D.O. strategies. (b) Mean ground-truth-label loss of data queried by MMPQ or the D.O. strategy. (c) Mean average Tanimoto similarity of data queried by MMPQ or the U.O. strategy.
+
+In Fig. 4 (a), as AL proceeds, the performance of the U.O. strategy rises more slowly than that of MMPQ or the D.O. strategy, though the final performance of U.O. (at round 10) is as good as that of MMPQ. On the other hand, the D.O. strategy reaches a peak very quickly (at the 2nd AL round), but its performance then degrades as more data are annotated for training. One possible reason for this degradation is that the data queried in later rounds do not provide the task learner with useful information about the learning task; adding them to the training pool may lead to overfitting, since more data means more training iterations. This shows that uncertainty and diversity are complementary: diversity is important in early AL stages, while uncertainty is critical in later stages. Interestingly, this corroborates the finding in [48].
+
+Furthermore, we dig deeper into the advantages of the MMPQ strategy by examining how the two terms in Eqn. (5) affect the queried data.
+
+For studying the effectiveness of the uncertainty-based term $\mathop{\max }\limits_{\widehat{\mathbf{y}}}p\left( {\widehat{\mathbf{y}} \mid G}\right)$ , we examine whether the queried data of the MMPQ strategy have higher uncertainty than those of the D.O. strategy (since MMPQ and D.O. only differ in this term). Note that, since HIV has only 1 property of interest, this term becomes $p\left( {\widehat{y} \mid G}\right)$ . For measuring uncertainty, distribution-based metrics such as entropy and classification margin are often used. However, for binary classification (as in our case), these metrics are equivalent to selecting the samples with the smallest $p\left( {\widehat{y} \mid G}\right)$ . Therefore, instead of distribution-based metrics, we adopt the ground-truth-label loss, as used in [30]. For the $t$ -th AL round, we calculate the mean loss over the data in ${\mathcal{D}}_{L}^{t}$ . As shown in Fig. 4 (b), compared with the D.O. strategy, queries of our MMPQ have larger ground-truth-label loss, suggesting larger uncertainty of the task learner.
+
+For the diversity-based term ${p}_{L}\left( G\right)$ , we examine whether the queries of MMPQ have smaller chemical similarity (i.e., larger diversity) than those of the U.O. strategy (since MMPQ and U.O. only differ in this term). We adopt the Tanimoto similarity [49], a widely used expert-defined molecular similarity metric. Formally, letting ${T}_{ij}$ denote the Tanimoto similarity between molecules ${G}_{i}$ and ${G}_{j}$ , we calculate the mean Average Similarity (mAS) among molecules in ${\mathcal{D}}_{L}^{t}$ :
+
+$$
+{mAS} = \frac{1}{{N}_{L}^{t}\left( {{N}_{L}^{t} - 1}\right) }\mathop{\sum }\limits_{{{G}_{i},{G}_{j} \in {\mathcal{D}}_{L}^{t},j \neq i}}{T}_{ij}. \tag{14}
+$$
+
+Fig. 4 (c) shows that queries of MMPQ are less chemically similar than those of U.O., implying larger diversity.
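+
+Eqn. (14) can be computed directly with RDKit; the sketch below uses Morgan fingerprints as the molecular representation, which is a common convention but our own assumption here:
+
+```python
+from itertools import combinations
+from rdkit import Chem, DataStructs
+from rdkit.Chem import AllChem
+
+def mean_average_similarity(smiles_list):
+    # Tanimoto similarity between Morgan fingerprints (radius 2, 2048 bits);
+    # the fingerprint choice is our assumption, not specified in the paper.
+    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, 2048)
+           for s in smiles_list]
+    sims = [DataStructs.TanimotoSimilarity(a, b) for a, b in combinations(fps, 2)]
+    # Each unordered pair is counted once; Eqn. (14) averages over ordered
+    # pairs, which gives the same value by symmetry.
+    return sum(sims) / len(sims)
+```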
+
+§ 5.3.2 ROBUSTNESS OF ENERGY CALCULATION
+
+In this part, we investigate the robustness of MMPQ w.r.t. how energy is calculated. Specifically, we consider the choice of zero-energy point, and the number of points for approximating the integral (i.e., $K$ in Eqn. (18)).
+
+Figure 5: Active learning performance of different settings of $\left( {{\mathbf{z}}_{0},{\widehat{y}}_{0},K}\right)$ .
+
+Figure 6: DSM loss of modeling score or energy.
+
+In the above experiments, we set the zero-energy point as $\left( {{\mathbf{z}}_{0} = {\overline{\mathbf{z}}}_{U},{\widehat{y}}_{0} = 0}\right)$ , where ${\overline{\mathbf{z}}}_{U}$ is the mean embedding of the unlabeled pool, and let $K = {100}$ . We name this setting the "default setting", and denote it by the triple $\left( {{\overline{\mathbf{z}}}_{U},0,{100}}\right)$ .
+
+Then, based on $\left( {{\overline{\mathbf{z}}}_{U},0,{100}}\right)$ , we vary one of the three hyperparameters while keeping the other two unchanged, and run AL experiments under the setup described in Sec. 5.1. Specifically, we set (1) ${\mathbf{z}}_{0} \in \left\{ {{\overline{\mathbf{z}}}_{L},{\overline{\mathbf{z}}}_{F}}\right\}$ , where ${\overline{\mathbf{z}}}_{L}$ and ${\overline{\mathbf{z}}}_{F}$ are the mean embedding of the Labeled pool and that of the Full training set; (2) ${\widehat{y}}_{0} = 1$ ; (3) $K \in \{ {50},{500}\}$ .
+
+Fig. 5 shows the AL performance of the above settings and the default one. We can see that the AL performance of the different settings is similar, which demonstrates that the MMPQ strategy is robust to these hyperparameters.
+
+§ 5.3.3 IMPLEMENTING EBM BY MODELING ENERGY
+
+As introduced in Sec. 4.2, we instantiate the EBM as a score net that learns the score of the true distribution $q\left( {G,\widehat{\mathbf{y}}}\right)$ . An alternative is to implement the EBM as an energy net that models the energy function.
+
+We try this alternative in our experiments, but find that the training process fails to converge. Specifically, the energy net is also built on the embedding space of the AE and has the same architecture as the score net, except that its final layer has an output dimension of 1 and a ReLU activation (since the energy is a non-negative scalar). The training objective of the energy net (denoted ${E}_{\theta }$ ) is
+
+$$
+{\mathbb{E}}_{\widetilde{\mathbf{x}} \sim \mathcal{N}\left( {\widetilde{\mathbf{x}} \mid \mathbf{x},{\sigma }^{2}I}\right) }\left\lbrack {\begin{Vmatrix}-{\nabla }_{\widetilde{\mathbf{x}}}{E}_{\theta }\left( \widetilde{\mathbf{x}}\right) + \frac{\widetilde{\mathbf{x}} - \mathbf{x}}{{\sigma }^{2}}\end{Vmatrix}}_{2}^{2}\right\rbrack . \tag{15}
+$$
+
+Fig. 6 shows the loss curve (on a log scale) under the best tuned hyperparameters (i.e., those yielding the lowest loss). For comparison, the loss curve on HIV of the score net used in the MMPQ experiment in Sec. 5.2 is also given. We can see that, even with the best tuned hyperparameters, the training of ${E}_{\theta }$ does not converge well.
+
+§ 6 CONCLUSION
+
+We propose Minimum Maximum Probability Querying (MMPQ), a hybrid active learning method for molecular property prediction that requires no manual trade-off between uncertainty and diversity. The strategy is based on an EBM that models the joint distribution of the labeled data and the task learner's predictions. The EBM is built on an embedding space learned by an auto-encoder that reconstructs molecules' SMILES strings, and we propose to train the EBM via denoising score matching. Once the EBM is trained, MMPQ selects data according to a single selection criterion that naturally captures both the uncertainty of the task learner and diversity in the data space.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/fXjoyFXw3G/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/fXjoyFXw3G/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..d1c485066390b29cb6b5db001d083c4076a3bb3d
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/fXjoyFXw3G/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,504 @@
+# DAMNETS: A Deep Autoregressive Model for Generating Markovian Network Time Series
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+Generative models for network time series (also known as dynamic graphs) have tremendous potential in fields such as epidemiology, biology and economics, where complex graph-based dynamics are core objects of study. Designing flexible and scalable generative models is a very challenging task due to the high dimensionality of the data, as well as the need to represent temporal dependencies and marginal network structure. Here we introduce DAMNETS, a scalable deep generative model for network time series. DAMNETS outperforms competing methods on all of our measures of sample quality, over real and synthetic data sets.
+
+## 1 Introduction
+
+Temporal networks (also known as dynamic graphs) arise naturally in many fields of study, such as the spread of disease [1], molecular interaction networks [2], interbank liability networks [3], and online social [4] and citation networks [5]. Accurate data-driven generative modelling of these processes could have a profound, wide-reaching impact, for example in simulating the trajectories of future pandemics or financial contagion risk in economic crash scenarios.
+
+In contrast to generating static networks (i.e., networks that do not evolve over time), generating time series of networks has received relatively little attention in the literature. While static networks usually include complex dependencies, network time series additionally contain complex dependencies across time. As an example, in a time series of social contact networks, the interest may lie in replicating not only the degree distribution but also the clustering behaviour, to capture the interplay between these summary statistics over different times of the day. This complexity is further exacerbated by the high-dimensional nature of network time series; a dataset of $N$ network time series, each on $n$ nodes and of length $T$ , has size $N \times T \times {n}^{2}$ . Building a generative model that faithfully replicates both network topology and dependence between graph snapshots is an extremely challenging task.
+
+Data-driven generative models of other types of sequential data, such as natural language, commonly follow an encoder-decoder structure, e.g. Sequence2Sequence [6] and Transformer [7] models. We combine ideas from the static network generation and sequence modelling literatures in DAMNETS, an efficient, high-quality generator for Markovian network time series. We leverage the insight that the delta matrix, i.e., the difference between subsequent adjacency matrices, is very sparse for most networks of interest. We therefore propose to use a GNN to encode the current state of the network, and an efficient sparse adjacency matrix sampler to generate delta matrices conditioned on the node embeddings computed by the GNN, thereby constructing the next network in the time series.
+
+In this paper, we restrict our attention to time series ${G}_{0},{G}_{1},\ldots ,{G}_{T}$ of simple, undirected, labelled graphs on a fixed node set $V = \{ 1,\ldots , n\}$ with edge set ${E}_{t} \subseteq \{ \left( {i, j}\right) : i, j \in V\}$ . An element of the sequence ${G}_{t} = \left( {V,{E}_{t}}\right)$ has a random edge set ${E}_{t}$ drawn from a time-dependent probability distribution ${p}_{t}\left( {V \times V}\right)$ over the set of node pairs on $V$ , and emits adjacency matrix ${A}^{\left( t\right) }$ .
+
+The remainder of this paper is structured as follows. Section 2 is a review of related work. Section 3 introduces the DAMNETS algorithmic pipeline. Section 4 details the outputs of numerical experiments for representative generative models from the network literature as well as real world networks. Section 5 summarises our main findings and proposes future avenues of investigation. The DAMNETS code is available at this link.
+
+## 2 Related Work
+
+### 2.1 Static Network Generation
+
+Static graph generation involves learning a probability distribution $p\left( G\right)$ over an observed set of networks. Recently, several machine learning approaches have shown good performance on generating arbitrary sets of networks, including DeepGMG [8], GraphRNN [9], GRAN [10] and BiGG [11]. Our paper continues this progression to the network time series setting.
+
+BiGG. BiGG is a scalable model for generating static networks, which we briefly introduce here as our approach shares some similarities with it. Popular frameworks such as GraphRNN, GRAN and BiGG all employ the following high-level pattern for sampling the adjacency matrix: they sample each row of the adjacency matrix one at a time, using a row-wise auto-regressive model to capture the topological structure of the sampled graph and a second auto-regressive model to capture within-row edge-level correlations. GraphRNN uses a hierarchical RNN structure, GRAN uses a graph neural network with a conditional mixture-of-Bernoulli likelihood, and BiGG uses a binary-tree structure, which is particularly suited to sparse graphs.
+
+The major innovation introduced in BiGG is an improvement upon the naive $O\left( n\right)$ time complexity for sampling a row of the adjacency matrix. Instead of sampling each of the $n$ entries with a linear-time autoregressive model (such as an RNN), the authors propose to sample each row using a binary tree. Each node $u$ is associated with a random binary tree ${\mathcal{T}}_{u}$ , constructed as follows. Each tree node $k$ corresponds to an interval of graph nodes $\left\lbrack {{v}_{l},{v}_{r}}\right\rbrack$ . The process starts from the root $\left\lbrack {1, n}\right\rbrack$ and terminates at leaf nodes $\left\lbrack {v, v}\right\rbrack$ . At each decision step the model decides whether the tree node has a left child (lch), with probability $p\left( {\operatorname{lch}\left( k\right) }\right)$ , and a right child (rch), with probability $p\left( {\operatorname{rch}\left( k\right) }\right)$ , and if so descends further down the tree until it reaches a leaf node. The probability of this tree being a particular realisation ${\mathcal{T}}_{u} = {\tau }_{u}$ is thus
+
+$$
+p\left( {\tau }_{u}\right) = \mathop{\prod }\limits_{{k \in {\tau }_{u}}}p\left( {\operatorname{lch}\left( k\right) }\right) p\left( {\operatorname{rch}\left( k\right) }\right) . \tag{1}
+$$
+
+The tree ${\tau }_{u}$ is then represented as a row vector of length $n$ of an adjacency matrix, with position $v$ having entry 1 if ${\tau }_{u}$ contains the leaf $\left\lbrack {v, v}\right\rbrack$ , and 0 otherwise. The algorithmic advantage stems from setting all entries in the left half-interval $\left\lbrack {{v}_{l},\left\lfloor {\left( {{v}_{l} + {v}_{r}}\right) /2}\right\rfloor }\right\rbrack$ of row $u$ to 0 as soon as the model decides at tree node $k = \left\lbrack {{v}_{l},{v}_{r}}\right\rbrack$ not to generate the left child (and similarly for the right half-interval if the right child is not generated). Thus for a node $u$ , the corresponding row of the adjacency matrix can be sampled in $O\left( \left| {\mathcal{T}}_{u}\right| \right)$ decision steps. Since $\left| {\mathcal{N}}_{u}\right|$ , the size of the graph neighbourhood of $u$ , equals the number of leaf nodes and $\log n$ is the maximum depth of the binary tree, the upper bound $\left| {\mathcal{T}}_{u}\right| \leq \left| {\mathcal{N}}_{u}\right| \log n$ follows. Moreover, significantly larger time savings can be made in practice if the model decides not to descend further into the tree at the upper levels.
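+
+A minimal sketch of this interval-halving recursion is given below; it ignores the state variables introduced in the next paragraph, and `p_lch`/`p_rch` stand in for the learned child probabilities:
+
+```python
+import random
+
+def sample_row(lo, hi, p_lch, p_rch, row):
+    # Recursively sample the entries row[lo..hi] (inclusive) of one adjacency
+    # row. Reaching a leaf [v, v] sets row[v] = 1; skipped subtrees stay 0.
+    if lo == hi:
+        row[lo] = 1
+        return
+    mid = (lo + hi) // 2
+    if random.random() < p_lch(lo, hi):   # descend into left child [lo, mid]
+        sample_row(lo, mid, p_lch, p_rch, row)
+    if random.random() < p_rch(lo, hi):   # descend into right child [mid+1, hi]
+        sample_row(mid + 1, hi, p_lch, p_rch, row)
+```
+
+A full row is then sampled via `row = [0] * n; sample_row(0, n - 1, p_lch, p_rch, row)`; whole zero sub-intervals are produced for free whenever a child is not generated.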
+
+To include dependence between entries within the row of the adjacency matrix, BiGG augments the process to produce state variables that track the decisions made, both above and below in the tree. At each tree node $k$ , one always decides first whether to generate the left child conditionally on the state of the tree above, which is denoted ${h}_{u}^{top}\left( k\right)$ , with the decision sampled from $p\left( {\operatorname{lch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) }\right)$ . If the model decides to descend into the left child, the entire left subtree is generated before returning to $k$ and deciding whether to generate the right child. The generated left subtree is summarised by a bottom-up state variable, denoted ${h}_{u}^{\text{bot }}\left( k\right)$ , which is used to decide whether to sample a right child (rch). The model for ${\mathcal{T}}_{u}$ therefore becomes
+
+$$
+p\left( {\mathcal{T}}_{u}\right) = \mathop{\prod }\limits_{{k \in {\mathcal{T}}_{u}}}p\left( {\operatorname{lch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) }\right) p\left( {\operatorname{rch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) ,{h}_{u}^{bot}\left( {\operatorname{lch}\left( k\right) }\right) }\right) , \tag{2}
+$$
+
+where the exact equations for ${h}_{u}^{top}$ and ${h}_{u}^{bot}$ are given in Algorithm 2. The child probabilities are finally produced by two MLPs, ${\mathrm{{MLP}}}_{x} : {\mathbb{R}}^{F} \rightarrow \mathbb{R}$ for $x = L, R$ :
+
+$$
+p\left( {\operatorname{lch}\left( k\right) \mid {h}_{u}^{\text{top }}\left( k\right) }\right) = \operatorname{Bernoulli}\left( {{\operatorname{MLP}}_{L}\left( {{h}_{u}^{\text{top }}\left( k\right) }\right) }\right) , \tag{3}
+$$
+
+$$
+p\left( {\operatorname{rch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) ,{h}_{u}^{bot}\left( {\operatorname{lch}\left( k\right) }\right) }\right) = \operatorname{Bernoulli}\left( {{\operatorname{MLP}}_{R}\left( {{h}_{u}^{top}\left( k\right) ,{h}_{u}^{bot}\left( {\operatorname{lch}\left( k\right) }\right) }\right) }\right) . \tag{4}
+$$
+
+### 2.2 Network Time Series (NTS) Generation
+
+There are classical models for generating time series of networks designed to capture a specific set of NTS characteristics, such as the forest fire process [5], which can produce power-law degree distributions and a shrinking effective diameter (i.e., the largest shortest-path length in the graph). These classical models, while very effective at re-creating certain types of behaviour, are not data-driven and require the network to obey a pre-defined set of characteristics to be effective. Approaches attempting to generate arbitrary network time series have appeared in the machine learning literature, such as the TagGen model [12], which uses a self-attention mechanism to learn from temporal random walks on an NTS, from which new NTSs are subsequently generated. Another recent algorithm is DYMOND [13], a simpler approach that models the arrival times of 3-node motifs and then samples these subgraphs to generate the NTS. It is important to note that both DYMOND and TagGen attempt to solve a slightly different problem from DAMNETS; they take as input a single time series ${G}_{0},\ldots ,{G}_{T}$ together with pre-defined network statistics, and aim to generate an entire time series whose network statistics are similar to those of this single realisation. Instead of specifying the network statistics of interest, DAMNETS aims to learn a probability distribution $p\left( {{G}_{t} \mid {G}_{t - 1}}\right)$ such that, given an arbitrary graph ${G}_{t - 1}$ (not in the training set), one can draw many samples of ${G}_{t}$ and reason about the future trajectory of the network. This requires a different set of evaluation metrics and datasets; see Section 4 for discussion.
+
+AGE. The approach most similar to our own is the Attention-Based Graph Evolution (AGE) model [14]. AGE uses a model very similar to a Transformer [7] (only omitting the positional encoding step), where a self-attention mechanism is applied to the rows of ${A}^{\left( t - 1\right) }$ to learn node embeddings, and a source-target attention module is applied sequentially to generate the rows of ${A}^{\left( t\right) }$ . AGE has two clear shortcomings. The first is that it does not explicitly account for graph connectivity, which is left to the attention mechanism to deduce. The second is that it does not capture edge-level correlations within the sampled rows. To give a simple example of why this matters, suppose we were considering an NTS where, in every graph snapshot, each node has exactly two neighbours; the model should have some mechanism to condition on the edges it has already sampled for a node so that it can stop once it has generated two edges. Furthermore, AGE operates directly between two adjacency matrices rather than generating only differences, which prevents it from exploiting sparsity and limits the scalability of the method. In contrast, DAMNETS explicitly utilises graph connectivity in the model pipeline and has the capacity to model edge correlations within rows of the adjacency matrix.
+
+## 3 DAMNETS Architecture
+
+Our goal is to learn a generative model $p\left( {\cdot \mid {G}_{t - 1}}\right)$ for the next network in an NTS, given a set of training network time series $\left\{ {{\left\{ {G}_{t}^{1}\right\} }_{t = 0}^{{T}_{1}},\ldots ,{\left\{ {G}_{t}^{N}\right\} }_{t = 0}^{{T}_{N}}}\right\}$ . Our model has a Markovian structure, and hence for generating ${G}_{t}$ all relevant information about the past is assumed to be contained in ${G}_{t - 1}$ .
+
+For a description of our model we first introduce the delta matrix ${\Delta }^{\left( t\right) } = {A}^{\left( t\right) } - {A}^{\left( t - 1\right) } \in \{ - 1,0,1{\} }^{n \times n}$ , whose entries are
+
+$$
+{\Delta }_{ij}^{\left( t\right) } = {A}_{ij}^{\left( t\right) } - {A}_{ij}^{\left( t - 1\right) } = \begin{cases} 1 & \Rightarrow \text{ add edge }\left( {i, j}\right) \\ 0 & \Rightarrow \text{ no change in }\left( {i, j}\right) \\ - 1 & \Rightarrow \text{ remove edge }\left( {i, j}\right) . \end{cases}
+$$
+
+When conditioned on ${A}^{\left( t - 1\right) }$ , each entry ${\Delta }_{ij}^{\left( t\right) }$ can only take two values, namely ${\Delta }_{ij}^{\left( t\right) }$ can only be 0 or 1 if ${A}_{ij}^{\left( t - 1\right) } = 0$ , and ${\Delta }_{ij}^{\left( t\right) }$ can only be -1 or 0 if ${A}_{ij}^{\left( t - 1\right) } = 1$ . Learning a generative model $p\left( {{\Delta }^{\left( t\right) } \mid {G}_{t - 1}}\right)$ is equivalent to learning $p\left( {{G}_{t} \mid {G}_{t - 1}}\right)$ . Thus, this model only has to learn to produce the temporal update, rather than to reproduce the current graph and apply the temporal update.
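+
+As a small illustration (our sketch, not the paper's code), the delta matrix and its sparsity can be handled directly with SciPy sparse matrices:
+
+```python
+import scipy.sparse as sp
+
+def delta_matrix(A_prev, A_curr):
+    # Entries lie in {-1, 0, 1}: +1 adds an edge, -1 removes one, 0 is unchanged.
+    # Subtracting sparse matrices keeps the result sparse, which is exactly the
+    # property DAMNETS exploits: only the few changed entries are generated.
+    return (A_curr - A_prev).tocoo()
+
+# Toy undirected example: snapshot t-1 has edge (0, 1); snapshot t has edge (1, 2).
+A_prev = sp.csr_matrix(([1, 1], ([0, 1], [1, 0])), shape=(3, 3))
+A_curr = sp.csr_matrix(([1, 1], ([1, 2], [2, 1])), shape=(3, 3))
+delta = delta_matrix(A_prev, A_curr)  # removes (0, 1), adds (1, 2)
+```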
+
+As we consider only undirected graphs, we only model the lower triangular part of the delta matrix. As our approach is an encoder-decoder framework, we first summarise the previous network ${G}_{t - 1}$ by computing node embeddings using a GNN as an encoder, then combine these with a modified version of the very efficient sparse graph sampler BiGG [11] to act as a decoder for the delta matrix.
+
+### 3.1 The Encoder
+
+The first step is to compute node embeddings for ${G}_{t - 1}$ using a GNN. We employ a Graph Attention Network (GAT) [15], although any GNN layer is applicable. We use ${GAT}\left( {X, A}\right)$ to denote the application of a GAT network to a graph with node feature matrix $X$ and adjacency matrix $A$ . In the absence of other node features we use the identity matrix as the node feature matrix (which corresponds to a one-hot encoding of the nodes). Node- or edge-level features, whenever available, can be incorporated into the pipeline. The embedding of ${G}_{t - 1}$ is given by
+
+$$
+{H}^{\left( t - 1\right) } = {GAT}\left( {X,{A}^{\left( t - 1\right) }}\right) , \tag{5}
+$$
+
+where $X \in {\mathbb{R}}^{n \times p}$ is the node feature matrix, and ${H}^{\left( t - 1\right) } \in {\mathbb{R}}^{n \times q}$ is the node embedding matrix.
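+
+For instance, a minimal encoder along these lines could be written with PyTorch Geometric's `GATConv` (the layer sizes here are illustrative, not the paper's configuration):
+
+```python
+import torch
+from torch_geometric.nn import GATConv
+
+class Encoder(torch.nn.Module):
+    # Two GAT layers mapping identity node features to the embeddings H^(t-1).
+    def __init__(self, n_nodes, hidden=64, out=64, heads=4):
+        super().__init__()
+        self.gat1 = GATConv(n_nodes, hidden, heads=heads)
+        self.gat2 = GATConv(hidden * heads, out, heads=1)
+
+    def forward(self, edge_index, n_nodes):
+        x = torch.eye(n_nodes)                   # one-hot node features
+        h = torch.relu(self.gat1(x, edge_index))
+        return self.gat2(h, edge_index)          # (n, q) node embedding matrix
+```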
+
+### 3.2 The Decoder
+
+Starting with the first node according to the given node ordering, conditioning gives
+
+$$
+p\left( \Delta \right) = p\left( {\left\{ {\Delta }_{u}\right\} }_{u \in V}\right) = \mathop{\prod }\limits_{{u \in V}}p\left( {{\Delta }_{u} \mid \left\{ {{\Delta }_{w} : w < u}\right\} }\right) .
+$$
+
+We sample each row of $\Delta$ using Algorithm 2, a modified version of the BiGG row sampling algorithm. We enhance the procedure to distinguish between a tree leaf corresponding to an edge addition and one corresponding to an edge deletion. If the left (resp. right) child at tree node $k$ is a leaf corresponding to entry ${\Delta }_{ij}^{\left( t\right) }$ , then instead of Eqn. (3) we sample the leaf using
+
+$$
+p\left( {\operatorname{lch}\left( k\right) \mid h}\right) = \left\{ \begin{array}{l} \text{ Bernoulli }\left( {{\mathrm{{MLP}}}_{ + }\left( h\right) }\right) \text{ if }{A}_{ij}^{\left( t - 1\right) } = 0, \\ \text{ Bernoulli }\left( {{\mathrm{{MLP}}}_{ - }\left( h\right) }\right) \text{ if }{A}_{ij}^{\left( t - 1\right) } = 1, \end{array}\right. \tag{6}
+$$
+
+where $h \in {\mathbb{R}}^{q}$ is the corresponding state variable. Each application of Algorithm 2 returns an embedding, namely ${g}_{u} = {h}_{u}^{\text{bot }}\left( \text{root }\right)$ , which depends on every entry in the row. As in the static setting, we apply an auto-regressive model across these row embeddings to capture dependencies between rows. The bottom-up embeddings of each tree have no other computational dependencies, so they can be efficiently pre-computed during training. We chose a standard Transformer self-attention layer [7] (which we call TFEncoder) with sinusoidal positional embeddings for this auto-regressive component; this provides similar representation power to the baseline model AGE. However, self-attention does not scale to very long sequences, so for very large graphs with many nodes this component could be replaced by either an LSTM or the Fenwick tree structure proposed in [16].
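+
+To make Equation (6) concrete, the snippet below sketches the leaf decision: conditioned on whether the corresponding entry of ${A}^{\left( t - 1\right) }$ is 0 or 1, the leaf probability comes from ${\mathrm{{MLP}}}_{+}$ (a candidate edge addition) or ${\mathrm{{MLP}}}_{-}$ (a candidate edge deletion). The two-layer MLPs, the helper names and the hidden size are assumptions made for illustration.
+
+```python
+import torch
+from torch import nn
+
+hidden = 256  # illustrative hidden size
+
+def make_mlp(h: int) -> nn.Module:
+    return nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, 1))
+
+mlp_add, mlp_del = make_mlp(hidden), make_mlp(hidden)   # MLP_+ and MLP_-
+
+def sample_leaf(h_state: torch.Tensor, a_prev_ij: int) -> int:
+    """Sample a leaf of the row tree for entry (i, j), in the spirit of Eq. (6).
+
+    If A^(t-1)_ij = 0 the only possible change is an edge addition,
+    otherwise the only possible change is an edge deletion.
+    """
+    mlp = mlp_add if a_prev_ij == 0 else mlp_del
+    p = torch.sigmoid(mlp(h_state))             # Bernoulli parameter
+    return int(torch.bernoulli(p).item())       # 1 = change this entry, 0 = leave it
+
+# Example: decide the fate of an entry that is currently absent (a_prev_ij = 0).
+change = sample_leaf(torch.randn(hidden), a_prev_ij=0)
+```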
+
+### 3.3 The DAMNETS model architecture
+
+Figure 1: An overview of our approach to generating Markovian transitions in a network time series. We learn a generative model of the lower triangular part of the delta matrix given the previous graph ${G}_{t - 1}$ . We then draw a sample ${\Delta }^{\left( t\right) }$ and add this to ${A}^{\left( t - 1\right) }$ to produce a sample ${G}_{t}$ .
+
+
+
+With the two key components of our model defined, we now explain how these models are combined to generate delta matrices given an input graph. As stated in Equation (5), we first compute node embeddings ${H}^{\left( t - 1\right) } \in {\mathbb{R}}^{n \times F}$ , with ${H}_{i}^{\left( t - 1\right) } \in {\mathbb{R}}^{F}$ representing the node embedding computed for node $i$ in ${G}_{t - 1}$ . When generating the row tree ${\mathcal{T}}_{u}$ for node $u$ (which corresponds to generating the row of the delta matrix for node $u$ ), we combine the node embedding from the previous network with the row-wise auto-regressive term ${h}_{u - 1}^{row}$ computed by TFEncoder via an MLP
+
+$$
+{h}_{u}^{top}\left( \text{ root }\right) = {\mathrm{{MLP}}}_{\text{cat }}\left( {{h}_{u - 1}^{\text{row }},{H}_{u}^{\left( t - 1\right) }}\right) . \tag{7}
+$$
+
+where ${\mathrm{{MLP}}}_{\text{cat }} : {\mathbb{R}}^{2F} \rightarrow {\mathbb{R}}^{F}$ . The full procedure is described in Algorithm 1, with a detailed version, Algorithm 2, in the SI, and is visualised in Figures 1 and 2. The model is trained via maximum likelihood over the entries of the delta matrix using gradient descent. The advantage of this framework is twofold. Firstly, the delta matrix is usually much sparser than the full adjacency matrix, allowing us to make good use of sparse sampling methods; this is a very natural assumption, as one does not expect most of the network to change at each timestep, but rather just a small subset of the edges. Secondly, differencing a time series makes learning easier: it is very common in traditional time series analysis to apply differencing transformations to data, as differencing may alleviate trends in the time series.
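+
+As a small illustration of the first point, the helper below is a minimal sketch (the function name and toy snapshots are assumptions) of how the lower-triangular delta matrix can be formed from two consecutive snapshots, showing how much sparser it typically is than the adjacency matrix itself.
+
+```python
+import numpy as np
+
+def lower_delta(adj_prev: np.ndarray, adj_next: np.ndarray) -> np.ndarray:
+    """Lower-triangular part of the delta matrix Delta^(t) = A^(t) - A^(t-1)."""
+    return np.tril(adj_next - adj_prev, k=-1)
+
+# Toy example: one edge added and one removed between consecutive snapshots.
+A_prev = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
+A_next = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]])
+delta = lower_delta(A_prev, A_next)
+print(delta)                                   # entries in {-1, 0, 1}
+print(np.count_nonzero(delta), "changed entries vs",
+      np.count_nonzero(np.tril(A_next, k=-1)), "edges in A^(t)")
+```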
+
+## 4 Experiments
+
+Evaluating a generative model typically follows this recipe: fit the generative model on the training data, draw samples from the model, and then compare the distribution of these samples
+
+
+
+Figure 2: A visualisation of the generation of the $u$ -th row of the delta matrix ${\Delta }^{\left( t\right) }$ using the DAMNETS model architecture. The nodes shown in red indicate the graph ${G}_{t - 1}$ . We use a GAT to compute node embeddings ${H}^{\left( t - 1\right) }$ for each node in ${G}_{t - 1}$ . Nodes shown in blue belong to the binary tree generated for each row; each tree is generated by combining the node embedding in the previous graph with an auto-regressive term computed using a Transformer (TF) Encoder across the rows of the delta matrix to produce ${h}_{u - 1}^{row}$ , which is used in Equation (7) to initialise the top-down descent of each tree.
+
+Algorithm 1: Algorithm for generating the delta matrix ${\Delta }^{\left( t\right) }$ using DAMNETS
+
+---
+
+Input: Input graph ${G}_{t - 1} = \left( {V,{E}_{t - 1}}\right)$ , node features $X$
+
+${H}^{\left( t - 1\right) } \leftarrow {GAT}\left( {X,{A}^{\left( t - 1\right) }}\right)$
+
+${h}_{0}^{row} \leftarrow \varnothing$
+
+for $u \leftarrow 1$ to $n$ do
+
+ Let $k = \{ 1,\ldots , u - 1\}$ be the root of tree ${\mathcal{T}}_{u}$ .
+
+ ${h}_{u}^{\text{top }}\left( k\right) = {\operatorname{MLP}}_{\text{cat }}\left( {{h}_{u - 1}^{\text{row }},{H}_{u}^{\left( t - 1\right) }}\right) .$
+
+ ${g}_{u},{\mathcal{N}}_{u} \leftarrow \operatorname{Recursive}\left( {u, k,{h}_{u}^{\text{top }}\left( k\right) }\right)$ /* Algorithm 2 */
+
+ /* Only non-zero indices are returned in ${\mathcal{N}}_{u}$ */
+
+ ${\Delta }_{u} \leftarrow$ Determine sign of entries using ${A}^{\left( t - 1\right) }$ and transform into a vector.
+
+ ${h}_{u}^{\text{row }} \leftarrow \operatorname{TFEncoder}\left( {{g}_{u};{g}_{1 : u - 1}}\right)$
+
+end
+
+Return ${\Delta }^{\left( t\right) }$ with rows ${\Delta }_{u}, u = 1,\ldots , n$ .
+
+---
+
+to some held-out test data using a statistical test or a metric on the space of probability distributions. For static graphs, there exist a number of graph kernels [17] from which a Maximum Mean Discrepancy (MMD) [18] type metric can be derived. However, these are very computationally costly (some scaling as $O\left( {n}^{4}\right)$ for a graph with $n$ nodes). It is therefore common to define a set of summary statistics over the graphs, such as the degree distribution or clustering coefficient distribution, and compare the distributions of these summary statistics computed over the sampled and test graphs.
+
+We adopt a similar approach applied to the marginal distributions of the network time series. We choose to compare six different network statistics, three local and three global (see [19] for a background on network statistics). Our three local properties are the degree distribution, clustering coefficient distribution and the eigenvalue distribution of the graph Laplacian as introduced in [10]. For each graph, we compute a histogram of these properties over the nodes in the graph, and use a Gaussian kernel with the total-variation metric to compute the MMD. Our three global measures are transitivity, assortativity and closeness centrality. Each of these metrics produces one scalar value per graph, and we again use a Gaussian kernel with the ${\ell }^{2}$ metric to compute the MMD.
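+
+The sketch below illustrates one way such an MMD can be computed for the degree distribution, using a Gaussian kernel on top of the total-variation distance between per-graph histograms; the bin count, bandwidth and the biased MMD estimator are assumptions for illustration rather than the exact settings used in our experiments.
+
+```python
+import numpy as np
+import networkx as nx
+
+def degree_histogram(G: nx.Graph, bins: int = 20) -> np.ndarray:
+    degs = [d for _, d in G.degree()]
+    hist, _ = np.histogram(degs, bins=bins, range=(0, bins))
+    return hist / max(hist.sum(), 1)           # normalised degree histogram
+
+def gaussian_tv_kernel(h1: np.ndarray, h2: np.ndarray, sigma: float = 1.0) -> float:
+    tv = 0.5 * np.abs(h1 - h2).sum()           # total-variation distance
+    return np.exp(-tv ** 2 / (2 * sigma ** 2))
+
+def mmd(graphs_x, graphs_y, sigma: float = 1.0) -> float:
+    """Biased MMD estimate between two graph samples for the degree statistic."""
+    hx = [degree_histogram(G) for G in graphs_x]
+    hy = [degree_histogram(G) for G in graphs_y]
+    k = lambda A, B: np.mean([[gaussian_tv_kernel(a, b, sigma) for b in B] for a in A])
+    return k(hx, hx) + k(hy, hy) - 2 * k(hx, hy)
+
+# Example: compare samples drawn from two different Erdos-Renyi distributions.
+test = [nx.gnp_random_graph(50, 0.1, seed=s) for s in range(10)]
+sampled = [nx.gnp_random_graph(50, 0.3, seed=s) for s in range(10)]
+print(mmd(test, sampled))
+```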
+
+For each time point $t$ and statistic $S\left( \cdot \right)$ , we compute ${\operatorname{MMD}}_{t}\left( {S\left( {G}_{t}^{\text{test }}\right) , S\left( {G}_{t}^{\text{sampled }}\right) }\right)$ , and use as the final metric the sum $\overline{\operatorname{MMD}}\left( S\right) = \mathop{\sum }\limits_{t}{\operatorname{MMD}}_{t}\left( {S\left( {G}_{t}^{\text{test }}\right) , S\left( {G}_{t}^{\text{sampled }}\right) }\right)$ . If the marginal distributions match exactly, $\overline{\mathrm{{MMD}}}\left( S\right)$ will equal 0, and smaller values indicate better agreement between the distributions. We display all $\overline{\mathrm{{MMD}}}$ scores to three significant figures. Comparing the marginal distributions alone does not suffice as a comparison metric, so we also provide summary plots of these network statistics through time to verify that the evolution of these statistics matches. In addition, we have designed several synthetic-data experiments to verify specific time-series properties observed in real-world networks which we would like to capture.
+
+A difficulty for graph generative model evaluation is that proper comparison of a network time series generator requires many realisations of this time series drawn from the same distribution to facilitate learning and subsequent comparison. Papers such as TagGen [12] and DYMOND [13] use datasets that comprise one realisation of a real-world temporal network, and aim simply to produce "surrogate" networks that closely resemble that single realisation. We aim to assess whether our model is able to generalise to new examples, in the sense that given a new graph ${G}_{t - 1}$ drawn from the same distribution as the training data, we can draw samples ${G}_{t} \sim p\left( {\cdot \mid {G}_{t - 1}}\right)$ . We are therefore unable to use the same data sets as these papers, and instead design a new experimental setup in line with our objective.
+
+Our general experimental framework is as follows: we are given a set of realisations $\left\{ {{\left\{ {G}_{t}^{1}\right\} }_{t = 0}^{{T}_{1}},\ldots ,{\left\{ {G}_{t}^{N}\right\} }_{t = 0}^{{T}_{N}}}\right\}$ . For DAMNETS and AGE, we split this up into a set of training time series and test time series, and fit each model on the training set, then evaluate the performance on the test set. As DYMOND and TagGen can only learn from one time series at a time and produce realisations from that specific time series, we instead train an instance of these models separately on each time series in the test set and sample one time series from each trained model. This might seem like a large advantage for these models, as they have direct access to the test set. However, our experimental results show that the aggregated behaviour of these samples does not match the underlying distribution well, suggesting these methods are not suitable for learning the true underlying process that a given sample was drawn from. Because DYMOND and TagGen have to be re-trained on every single time series, we provide two sets of results for some datasets, with a smaller dataset chosen such that DYMOND and TagGen converge within 24 hours.
+
+### 4.1 The Barabási-Albert Model
+
+The family of Barabási-Albert (B-A) models [20] was designed to capture the so-called scale-free property observed in many real world networks through a preferential attachment mechanism. Formally a scale-free network is one whose degree distribution follows a power-law; if $\deg \left( i\right)$ represents the degree of node $i$ in a random network model, then the network is scale free if $\mathbb{P}\left( {\deg \left( i\right) = d}\right) \propto \frac{1}{{d}^{\gamma }}$ , for some constant $\gamma \in \mathbb{R}$ . Degree distributions with a power-law tail have been observed in many real networks of interest, such as hyperlinks on the World-Wide Web or metabolic networks, although the ubiquity of power law degree distributions has been disputed [21].
+
+The B-A model has two integer parameters, the number of nodes $n$ and the number of edges $m$ to be added at each iteration. The network is initialised with $m$ initial connected nodes. At each iteration $t$ , a new node is added and is connected to $m$ existing nodes, with probability proportional to the current degree ${p}_{u} = \frac{\deg \left( u\right) }{\mathop{\sum }\limits_{{v \in V}}\deg \left( v\right) }$ . Here, the standard NetworkX [22] implementation is used. Constructing a B-A network in this way yields a network time series of length $T = n - m$ , where each graph ${G}_{t}$ is the graph after node $m + t$ has its first edges attached to it. Nodes with many existing connections (known as hubs) are likely to accumulate more links; this is the preferential attachment property which, in the B-A model, leads to a power-law degree distribution with scale parameter $\gamma = 3$ .
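+
+For reference, a minimal sketch of generating such a B-A network time series is given below. It mirrors the NetworkX preferential-attachment routine and records a snapshot each time a new node receives its first edges; the seed-graph convention and helper name are assumptions and may differ in detail from the data generation used for the experiments.
+
+```python
+import random
+import networkx as nx
+
+def ba_time_series(n, m, seed=None):
+    """Sketch: B-A preferential attachment, returning T = n - m snapshots."""
+    rng = random.Random(seed)
+    G = nx.empty_graph(m)                     # m initial nodes
+    targets = list(range(m))                  # first new node attaches to all of them
+    repeated = []                             # node list weighted by current degree
+    snapshots = []
+    for new in range(m, n):
+        G.add_edges_from((new, t) for t in targets)
+        repeated.extend(targets)
+        repeated.extend([new] * m)
+        snapshots.append(G.copy())            # G_t after node m + t gets its edges
+        chosen = set()
+        while len(chosen) < m:                # m distinct targets, degree-weighted
+            chosen.add(rng.choice(repeated))
+        targets = list(chosen)
+    return snapshots
+
+series = ba_time_series(n=100, m=4, seed=0)   # time series of length T = 96
+```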
+
+For the B-A experiments, we take $N = {200}$ time series with parameters $n = {100}$ and $m = 4$ , yielding time series of length $T = {96}$ . The results are displayed in Table 1 and Figure 3 . We see DAMNETS produces samples with orders of magnitude lower MMD than the baseline methods, and is the only model to correctly replicate the power law degree distribution.
+
+Table 1: The MMD on the B-A dataset for each network statistic. Lower is better.
+
+| Model | Degree | Clustering | Spectral | Transitivity | Assortativity | Closeness |
| DYMOND | 14.01 | 61.20 | 8.78 | 7.28 | 4.76 | 3.19 |
| TagGen | 16.33 | 16.55 | 2.29 | 2.06 | 23.95 | 0.10 |
| AGE | 15.08 | 25.15 | 9.45 | 3.42 | 6.37 | 2.36 |
| DAMNETS | $8{\mathrm{e}}^{-3}$ | 0.78 | 0.14 | 0.01 | 0.01 | $5{\mathrm{e}}^{-6}$ |
+
+
+
+Figure 3: Plots for the B-A model. Left: density against time; middle: transitivity against time; right: the average degree distribution of the final network ${G}_{T}$ produced by the models. Only DAMNETS correctly replicates the power law degree distribution.
+
+### 4.2 Bipartite Concentration
+
+Figure 4: A sample from the bipartite concentration model with 10 nodes in each partition, with an initial connection probability of $p = {0.2}$ and a concentration proportion ${p}^{con} = {0.3}$ . The highest degree node is shown in red; links concentrate on this node over time.
+
+
+
+This dataset is designed to simulate behaviour in rating systems where objects with many links tend to accumulate more recommendations [23]. For example, in a data set consisting of users and movies, movies with many existing recommendations are likely to accumulate more over time. The graph ${G}_{0}$ is initialised as a random bipartite graph with connection probability $p$ . At each timestep, we select the node in the right-hand partition with the most links (ties broken at random) and re-wire a proportion ${p}^{\text{con }}$ of the edges not incident to that node so that they attach to it.
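+
+A minimal sketch of this generator is given below; the helper name, the handling of ties and the treatment of re-wired edges that would duplicate an existing link are assumptions made for illustration.
+
+```python
+import random
+import networkx as nx
+
+def bipartite_concentration(n_each, p, p_con, T, seed=None):
+    """Sketch of the bipartite concentration process described above."""
+    rng = random.Random(seed)
+    G = nx.bipartite.random_graph(n_each, n_each, p, seed=seed)
+    right = range(n_each, 2 * n_each)
+    series = [G.copy()]
+    for _ in range(T):
+        max_deg = max(G.degree(v) for v in right)
+        hub = rng.choice([v for v in right if G.degree(v) == max_deg])  # ties at random
+        others = [e for e in G.edges() if hub not in e]  # edges not touching the hub
+        rng.shuffle(others)
+        for u, v in others[: int(p_con * len(others))]:
+            left_end = u if u < n_each else v            # keep the left endpoint
+            G.remove_edge(u, v)
+            if not G.has_edge(left_end, hub):            # re-attach it to the hub
+                G.add_edge(left_end, hub)
+        series.append(G.copy())
+    return series
+
+series = bipartite_concentration(n_each=10, p=0.2, p_con=0.3, T=10, seed=0)
+```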
+
+For the experiments we set $p = {0.5}$ and ${p}^{con} = {0.1}$ . For the smaller data set (S), we place 30 nodes in each partition (so $n = {60}$ ) and iterate for $T = {10}$ timesteps. For the larger dataset (L) we place 250 nodes in each partition $\left( {n = {500}}\right)$ and iterate for $T = {15}$ timesteps. To measure the extent to which the different generators replicate this bipartite structure, in addition to our standard summaries we also compute the mean Spectral Bipartivity (SB) [24] through time, which takes values in [0,1], with 0 indicating the network is not bipartite and 1 indicating the network is fully bipartite. The results are displayed in Table 2 and Figure 10. DAMNETS consistently outperforms all the baseline models across all summary statistics.
+
+Table 2: The MMD for each network statistic (lower is better) and Spectral Bipartivity (closer to 1 is better) across the small (S) and large (L) bipartite contraction test datasets.
+
+| Model | Deg. (S) | Deg. (L) | Clust. (S) | Clust. (L) | Spec. (S) | Spec. (L) | Trans. (S) | Trans. (L) | Assort. (S) | Assort. (L) | Closeness (S) | Closeness (L) | SB (S) | SB (L) |
| DYMOND | 1.06 | - | 9.55 | - | 0.12 | - | 1.67 | - | $9{e}^{-4}$ | - | 0.14 | - | 0.50 | - |
| TagGen | 0.81 | - | 1.73 | - | 0.29 | - | $5{e}^{-4}$ | - | 0.07 | - | $2{e}^{-4}$ | - | 0.56 | - |
| AGE | 0.92 | 2.75 | 9.46 | 15.3 | 0.13 | 0.25 | 1.48 | 3.71 | 0.72 | 4.81 | 0.16 | 0.36 | 0.55 | 0.52 |
| DAMNETS | 0.01 | $4{\mathrm{e}}^{-3}$ | 0.11 | $3{\mathrm{e}}^{-3}$ | 0.03 | $5{\mathrm{e}}^{-4}$ | $7{\mathrm{e}}^{-6}$ | $8{\mathrm{e}}^{-8}$ | $1{\mathrm{e}}^{-4}$ | $7{\mathrm{e}}^{-6}$ | $4{\mathrm{e}}^{-7}$ | $1{\mathrm{e}}^{-7}$ | 0.99 | 0.99 |
+
+
+
+
+Figure 5: Plots for the bipartite contraction model. Left: density against time; middle: transitivity against time; right: closeness against time. Only DAMNETS shows good performance in all statistics.
+
+### 4.3 Community Evolution and Decay
+
+Figure 6: A sample from the community decay model of length $T = 5$ on $V = \{ 1,\ldots ,{45}\}$ , with 15 nodes in each of the $Q = 3$ communities, connection probabilities ${p}_{\text{int }} = {0.7},{p}_{\text{ext }} = {0.005}$ , decay community $D = 3$ (coloured red) and decay proportion ${p}_{dec} = {0.2}$ .
+
+
+
+Our next network time series benchmark considers a dynamic community structure model. We initialise a three-community stochastic block model on $n$ nodes. At each time step, we re-wire a fixed proportion ${f}_{\text{dec }}$ of the edges internal to the third community (which we call the decay community), replacing each with a random outgoing edge to a node in one of the other communities. A sample from the model is shown in Figure 6, and a full description of the model is given in Appendix A.2.
+
+For the experiments we use intra-community connection probability ${p}_{\text{int }} = {0.9}$ , inter-community ${p}_{\text{ext }} = {0.01}$ , decay fraction ${f}_{\text{dec }} = {0.2}$ and iterate for $T = {20}$ timesteps. For the small (S) dataset we place 20 nodes in each community (for a total of $n = {60}$ nodes) and for the large (L) dataset we place 400 nodes in each community ( $n = {1200}$ in total). The non-decay communities should have constant density, and the decay community should have density decaying exponentially at rate ${f}_{dec}$ . The results are displayed in Table 3 and Figure 11. DAMNETS is the best performing model overall, although AGE also shows strong performance on this dataset.
+
+Table 3: The $\overline{\mathrm{{MMD}}}$ for each network statistic across the small (S) and large (L) community decay test datasets, with a (-) when the model did not converge within 24 hours. A lower MMD is better.
+
+| Model | Deg. (S) | Deg. (L) | Clust. (S) | Clust. (L) | Spec. (S) | Spec. (L) | Trans. (S) | Trans. (L) | Assort. (S) | Assort. (L) | Closeness (S) | Closeness (L) |
| DYMOND | 1.95 | - | 3.20 | - | 0.66 | - | 0.88 | - | 1.02 | - | 0.33 | - |
| TagGen | 10.99 | - | 2.91 | - | 2.18 | - | 0.26 | - | 2.37 | - | 1.04 | - |
| AGE | 0.15 | 0.17 | 2.00 | 2.06 | 0.43 | 0.42 | 0.02 | 0.03 | 0.07 | 0.06 | 0.01 | 0.03 |
| DAMNETS | 0.19 | 0.21 | 1.90 | 1.91 | 0.39 | 0.40 | 0.01 | 0.01 | 0.03 | 0.04 | 0.01 | 0.02 |
+
+
+
+Figure 7: The density of each community through time in the 3-community dataset.
+
+### 4.4 Correlation Networks
+
+This data set consists of financial correlation networks built from time series of asset prices from the Wharton CRSP database [25]. We consider a set of 49 liquid stocks from the US equity market, for which minute-level price data are available. We construct a graph by assigning each stock to a node. We then estimate the correlation matrix of their 5-minute returns each day, and threshold these correlations at 1 standard deviation in order to construct the edges (so stocks are connected by an edge if they are strongly correlated). The data set spans $N = {97}$ weeks, with each week giving a time series of length $T = 5$ .
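+
+The sketch below illustrates one plausible construction of a single daily graph from a matrix of 5-minute returns; in particular, reading "threshold at 1 standard deviation" as the mean plus one standard deviation of the off-diagonal correlations is an assumption, as is the helper name.
+
+```python
+import numpy as np
+import networkx as nx
+
+def correlation_network(returns: np.ndarray, n_std: float = 1.0) -> nx.Graph:
+    """Sketch: one daily graph from a (periods x assets) matrix of 5-minute returns."""
+    C = np.corrcoef(returns, rowvar=False)                 # asset-by-asset correlations
+    off_diag = C[~np.eye(C.shape[0], dtype=bool)]
+    threshold = off_diag.mean() + n_std * off_diag.std()   # "1 standard deviation"
+    A = (C > threshold) & ~np.eye(C.shape[0], dtype=bool)  # drop self-correlations
+    return nx.from_numpy_array(A.astype(int))
+
+# Example with synthetic returns for 49 assets over one trading day (78 5-minute bars).
+rng = np.random.default_rng(0)
+G = correlation_network(rng.normal(size=(78, 49)))
+```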
+
+One issue with this dataset is that correlations between financial instruments are known to be unstable over time (hence different realisations may not be drawn from the same distribution). To mitigate this we did not split the data chronologically, but rather drew the training and test splits randomly (which corresponds to selecting random weekly time series from the dataset). We repeat this procedure over 5 seeds and compute the average $\overline{\mathrm{{MMD}}}$ . The results are displayed in Table 4 and Figure 12. DAMNETS is the only model to show good performance across all statistics.
+
+Table 4: The MMD for each network statistic across the correlation test dataset. Lower is better.
+
+| Model | Degree | Clustering | Spectral | Transitivity | Assortativity | Closeness |
| DYMOND | 0.16 | 0.58 | 0.27 | 0.17 | 0.04 | 0.06 |
| TagGen | 0.95 | 0.56 | 0.85 | $4{\mathrm{e}}^{-3}$ | 0.08 | 0.48 |
| AGE | 0.14 | 1.07 | 0.31 | 0.26 | 0.08 | 0.10 |
| DAMNETS | 0.13 | 0.21 | 0.25 | 0.04 | 0.02 | 0.01 |
+
+
+
+Figure 8: The network statistics against time for the correlation dataset. DAMNETS is the only model that closely tracks the test distribution on all statistics.
+
+### 4.5 Ablation Study
+
+We see that DAMNETS outperforms all the baseline models on each dataset under consideration, in particular the AGE model, which is the most similar in that it also follows a Sequence2Sequence framework. DAMNETS differs from AGE in two major ways, namely the formulation in terms of the delta matrix and the model architecture adapted for sampling this sparse matrix. We provide an ablation study in Appendix B where we modify AGE to generate delta matrices, and also a version where we add positional encodings. We find that the delta matrix formulation significantly improves the performance of AGE, while positional encodings do not change the performance much, with neither variant of AGE able to match the performance of DAMNETS. This suggests that it is the combination of our re-formulation of the problem and a model architecture suited to sampling sparse delta matrices that provides such strong performance.
+
+## 5 Discussion and Conclusion
+
+DAMNETS provides a novel approach to generating network time series, with the ability to perform fine-grained edge-level conditioning while maintaining scalability by generating delta matrices rather than entire graphs and efficiently exploiting the sparsity of these matrices. We have shown through extensive experiments that DAMNETS is able to learn a variety of important network models that existing methods simply cannot. DAMNETS can learn to generate long time series, reproduce power-law degree distributions and bipartite structure, and maintains very strong performance on larger networks, while none of the baseline models are able to capture all of these properties.
+
+In future work, the Markovian assumption underlying DAMNETS could be relaxed to incorporate time series with long range dependencies, using techniques such as node memory introduced in the TGNN model [26]. The model could also be extended to handle graphs of varying size: node deletion could be performed by adding a step before the sampling of each row-tree wherein the model makes a decision about whether the node should persist to the current timestep. Node additions could be handled by allowing optional rows to be appended to the end of the delta matrix (and only sampling ones for these rows, as a new node could not have any edge deletions).
+
+## References
+
+[1] Naoki Masuda and Petter Holme. Temporal Network Epidemiology. 01 2017.
+
+[2] Teresa Przytycka and Yoo-Ah Kim. Network integration meets network dynamics. BMC biology, 8:48, 04 2010.
+
+[3] John Leventides, Kalliopi Loukaki, and Vassilios G. Papavassiliou. Simulating financial contagion dynamics in random interbank networks. Journal of Economic Behavior Organization, 158:500-525, 2019.
+
+[4] Jure Leskovec, Lars Backstrom, Ravi Kumar, and Andrew Tomkins. Microscopic evolution of social networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, page 462-470, New York, NY, USA, 2008. Association for Computing Machinery.
+
+[5] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, KDD '05, page 177-187, New York, NY, USA, 2005. Association for Computing Machinery.
+
+[6] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2015.
+
+[7] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
+
+[8] Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324, 2018.
+
+[9] Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models. 35th International Conference on Machine Learning, ICML 2018, 13:9072-9081, 2 2018.
+
+[10] Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Charlie Nash, William L Hamilton, David Duvenaud, Raquel Urtasun, and Richard Zemel. Efficient Graph Generation with Graph Recurrent Attention Networks. In NeurIPS, 2019.
+
+[11] Hanjun Dai, Azade Nazi, Yujia Li, Bo Dai, and Dale Schuurmans. Scalable deep generative modeling for sparse graphs. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020.
+
+[12] Dawei Zhou, Lecheng Zheng, Jiawei Han, and Jingrui He. A data-driven graph generative model for temporal interaction networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 401-411, 2020.
+
+[13] Giselle Zeno, Timothy La Fond, and Jennifer Neville. DYMOND: DYnamic MOtif-NoDes Network Generative Model. In Proceedings of the Web Conference 2021 (WWW '21), page 12, Ljubljana, Slovenia, 2021. ACM, New York, NY, USA.
+
+[14] Shuangfei Fan and Bert Huang. Attention-Based Graph Evolution. Advances in Knowledge Discovery and Data Mining, 12084:436, 2020.
+
+[15] Petar Veličković, Arantxa Casanova, Pietro Liò, Guillem Cucurull, Adriana Romero, and Yoshua Bengio. Graph attention networks. 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings, pages 1-12, 2018.
+
+[16] Hanjun Dai, Azade Nazi, Yujia Li, Bo Dai, and Dale Schuurmans. Scalable deep generative modeling for sparse graphs. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020.
+
+[17] Giannis Nikolentzos, Giannis Siglidis, and Michalis Vazirgiannis. Graph kernels: A survey. J. Artif. Int. Res., 72:943-1027, jan 2022.
+
+[18] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(25):723-773, 2012.
+
+[19] Mark Newman. Networks: an Introduction. Oxford University Press, second edition, 2018.
+
+[20] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1):47-97, 6 2001.
+
+[21] Aaron Clauset, Cosma Rohilla Shalizi, and M E J Newman. Power-Law Distributions in Empirical Data. SIAM Review, 51(4):661-703, 7 2009.
+
+[22] Aric A. Hagberg, Daniel A. Schult, and Pieter J. Swart. Exploring network structure, dynamics, and function using networkx. In Gaël Varoquaux, Travis Vaught, and Jarrod Millman, editors, Proceedings of the 7th Python in Science Conference, pages 11 - 15, Pasadena, CA USA, 2008.
+
+[23] Massimiliano Zanin, Pedro Cano, Javier M. Buldú, and Oscar Celma. Complex networks in recommendation systems. In Proceedings of the 2nd WSEAS International Conference on Computer Engineering and Applications, CEA'08, page 120-124, Stevens Point, Wisconsin, USA, 2008. World Scientific and Engineering Academy and Society (WSEAS).
+
+[24] Ernesto Estrada and Juan A. Rodríguez-Velázquez. Spectral measures of bipartivity in complex networks. Phys. Rev. E, 72:046105, Oct 2005.
+
+[25] The University of Chicago Booth School of Business. Center for Research in Security Prices (CRSP), 2022. Data retrieved from https://wrds-www.wharton.upenn.edu.
+
+[26] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. In ICML 2020 Workshop on Graph Representation Learning, 2020.
+
+[27] Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556-1566, Beijing, China, July 2015. Association for Computational Linguistics.
+
+[28] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, 11 1997.
+
+[29] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In International Conference on Learning Representations, 2020.
+
+[30] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015.
+
+[31] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
+
+[32] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché- Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019.
+
+## A Supplementary Information for DAMNETS: A Deep Autoregressive Model for Generating Markovian Network Time Series
+
+### A.1 The DAMNETS Row Generation Algorithm
+
+Algorithm 2: Algorithm for generating the ${u}^{th}$ row of the delta matrix
+
+Function Sample_Leaf(u, k, h):
+
+---
+
+ $e \leftarrow \left( {u, k}\right)$
+
+ if $e$ is Edge Addition then
+
+ has_leaf $\sim$ Bernoulli $\left( {{\mathrm{{MLP}}}_{ + }\left( h\right) }\right)$
+
+ else
+
+ has_leaf $\sim$ Bernoulli $\left( {{\mathrm{{MLP}}}_{ - }\left( h\right) }\right)$ /* Edge deletion */
+
+ end
+
+ if has_leaf then
+
+ return $\overrightarrow{1}, e$
+
+ else
+
+ return $\overrightarrow{0},\varnothing$
+
+ end
+
+End Function
+
+Function Recursive $\left( {u, k,{h}_{u}^{\text{top }}\left( k\right) }\right)$ :
+
+ if is_leaf(lch ${}_{u}\left( k\right)$ ) then
+
+ ${h}_{u}^{\text{bot }}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) ,{\mathcal{N}}_{u}^{k,\text{ left }} \leftarrow$ Sample_Leaf $\left( {u,{\operatorname{lch}}_{u}\left( k\right) ,{h}_{u}^{\text{top }}\left( k\right) }\right)$
+
+ else
+
+ has_left $\sim$ Bernoulli $\left( {{\mathrm{{MLP}}}_{L}\left( {{h}_{u}^{\text{top }}\left( k\right) }\right) }\right)$
+
+ if has_left then
+
+ ${h}_{u}^{top}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) \leftarrow \operatorname{LSTMCell}\left( {{h}_{u}^{top}\left( k\right) ,\operatorname{embed}\left( \operatorname{left}\right) }\right)$
+
+ ${h}_{u}^{\text{bot }}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) ,{\mathcal{N}}_{u}^{k,\text{ left }} \leftarrow$ Recursive $\left( {u,{\operatorname{lch}}_{u}\left( k\right) ,{h}_{u}^{\text{ top }}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) }\right)$
+
+ else
+
+ ${h}_{u}^{\text{bot }}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) ,{\mathcal{N}}_{u}^{k,\text{ left }} \leftarrow \overrightarrow{0},\varnothing$
+
+ end
+
+ end
+
+ ${\widehat{h}}_{u}^{\text{top }}\left( {\operatorname{rch}\left( k\right) }\right) \leftarrow {\operatorname{TreeCell}}^{\text{top }}\left( {{h}_{u}^{\text{bot }}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) ,{h}_{u}^{\text{top }}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) }\right)$
+
+ if is_leaf(rch ${}_{u}\left( k\right)$ ) then
+
+ ${h}_{u}^{\text{bot }}\left( {{\operatorname{rch}}_{u}\left( k\right) }\right) ,{\mathcal{N}}_{u}^{k,\text{ right }} \leftarrow$ Sample_Leaf $\left( {u,{\operatorname{rch}}_{u}\left( k\right) ,{\widehat{h}}_{u}^{\text{top }}\left( {{\operatorname{rch}}_{u}\left( k\right) }\right) }\right)$
+
+ else
+
+ has_right $\sim$ Bernoulli $\left( {{\operatorname{MLP}}_{L}\left( {{\widehat{h}}_{u}^{\text{top }}\left( {{\operatorname{rch}}_{u}\left( k\right) }\right) }\right) }\right)$
+
+ if has_right then
+
+ ${h}_{u}^{\text{top }}\left( {\operatorname{rch}\left( k\right) }\right) \leftarrow \operatorname{LSTMCell}\left( {{\widehat{h}}_{u}^{\text{top }}\left( {\operatorname{rch}\left( k\right) }\right) ,\operatorname{embed}\left( \operatorname{right}\right) }\right)$
+
+ ${h}_{u}^{\text{bot }}\left( {{\operatorname{rch}}_{u}\left( k\right) }\right) ,{\mathcal{N}}_{u}^{k,\text{ right }} \leftarrow$ Recursive $\left( {u,{\operatorname{rch}}_{u}\left( k\right) ,{h}_{u}^{\text{top }}\left( {{\operatorname{rch}}_{u}\left( k\right) }\right) }\right)$
+
+ else
+
+ ${h}_{u}^{\text{bot }}\left( {{\operatorname{rch}}_{u}\left( k\right) }\right) ,{\mathcal{N}}_{u}^{k,\text{ right }} \leftarrow \overrightarrow{0},\varnothing$
+
+ end
+
+ end
+
+ ${h}_{u}^{\text{bot }}\left( k\right) \leftarrow {\operatorname{TreeCell}}^{\text{bot }}\left( {{h}_{u}^{\text{bot }}\left( {{\operatorname{lch}}_{u}\left( k\right) }\right) ,{h}_{u}^{\text{bot }}\left( {{\operatorname{rch}}_{u}\left( k\right) }\right) }\right)$
+
+ ${\mathcal{N}}_{u}^{k} \leftarrow {\mathcal{N}}_{u}^{k,{left}} \cup {\mathcal{N}}_{u}^{k,{right}}$
+
+ return ${h}_{u}^{bot}\left( k\right) ,{\mathcal{N}}_{u}^{k}$
+
+ End Function
+
+---
+
+
+First we provide details for the DAMNETS row generation algorithm given in Algorithm 2. Here, TreeCell ${}^{bot}$ and TreeCell ${}^{top}$ are two TreeLSTM [27] cells, embed(left) and embed(right) are learned embeddings for the binary values "left" and "right", and LSTMCell is a standard LSTM [28]. The top-down cell summarises decisions made above tree node $k$ , and the bottom-up cell summarises lower levels of the tree (if they exist), where ${h}_{u}^{\text{bot }}\left( \varnothing \right) = 0$ . Notice that ${h}^{\text{bot }}$ is computed independently of ${h}^{top}$ .
+
+### A.2 The Community Decay Model
+
+The three community decay model is formally defined as follows. The initial network ${G}_{0} = \left( {V,{E}_{0}}\right)$ with node set $V = \{ 1,\ldots , n\}$ is equipped with a surjective community membership function $C : \{ 1,\ldots , n\} \rightarrow \{ 1,\ldots , Q\}$ that encodes which of the $Q$ communities a given node $i$ belongs to (a node can only belong to one community). Here we assume that the community memberships are known. The initial graph ${G}_{0}$ is then fully described by the interior (within community) and exterior (across communities) edge probabilities ${p}_{ij} \mathrel{\text{:=}} \mathbb{P}\left( {\left( {i, j}\right) \in {E}_{0}}\right)$ , given by
+
+$$
+{p}_{ij} = \left\{ \begin{array}{ll} {p}_{\text{int }} & \text{ if }C\left( i\right) = C\left( j\right) \\ {p}_{\text{ext }} & \text{ if }C\left( i\right) \neq C\left( j\right) . \end{array}\right. \tag{8}
+$$
+
+A network time series ${G}_{1},\ldots ,{G}_{T}$ is then constructed as follows; we fix a community $D \in$ $\{ 1,\ldots , Q\}$ as the decay community. We define the set of internal edges for community $D$ as
+
+$$
+{D}_{t}^{\text{int }} \mathrel{\text{:=}} \left\{ {\left( {i, j}\right) \in {E}_{t} \mid C\left( i\right) = C\left( j\right) = D}\right\} . \tag{9}
+$$
+
+At each iteration $t$ (i.e., time step), a fixed proportion ${f}_{\text{dec }}$ of the internal edges ${D}_{t}^{\text{int }}$ are replaced with external edges. This is achieved by selecting a random internal edge $\left( {i, j}\right)$ and removing it from the edge set ${E}_{t}$ , then selecting a node $u$ uniformly from $\{ i, j\}$ . We then select a random endpoint $k$ uniformly from $\left\{ {v \in V \mid C\left( v\right) \neq D,\left( {u, v}\right) \notin {E}_{t}}\right\}$ , the set of nodes not in community $D$ and not connected to $u$ , and finally add the edge $\left( {u, k}\right)$ to the edge set ${E}_{t}$ . We repeat this procedure $T$ times to generate our network time series.
+
+The model can be interpreted as starting with a network with $Q$ densely connected communities, decaying in time to have only $Q - 1$ clear communities; the decay community $D$ will appear as noise around those left unperturbed. A sample from the model can be seen in Figure 6; for ease of visualisation, each initial community has only 15 nodes.
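+
+A minimal sketch of this generator is given below; the helper name, the NetworkX stochastic block model initialisation and the guard against isolated re-wiring targets are illustrative assumptions rather than the exact data-generation code.
+
+```python
+import random
+import networkx as nx
+
+def community_decay(n_per, Q, p_int, p_ext, f_dec, T, D=None, seed=None):
+    """Sketch of the community decay generator described above."""
+    rng = random.Random(seed)
+    D = Q if D is None else D                              # decay community label
+    probs = [[p_int if i == j else p_ext for j in range(Q)] for i in range(Q)]
+    G = nx.stochastic_block_model([n_per] * Q, probs, seed=seed)
+    comm = {v: v // n_per + 1 for v in G.nodes()}           # membership C(v) in {1,...,Q}
+    series = [G.copy()]
+    for _ in range(T):
+        internal = [e for e in G.edges() if comm[e[0]] == comm[e[1]] == D]
+        for i, j in rng.sample(internal, int(f_dec * len(internal))):
+            G.remove_edge(i, j)
+            u = rng.choice([i, j])
+            candidates = [v for v in G.nodes()
+                          if comm[v] != D and not G.has_edge(u, v)]
+            if candidates:                                  # guard against no valid target
+                G.add_edge(u, rng.choice(candidates))
+        series.append(G.copy())
+    return series
+
+series = community_decay(n_per=15, Q=3, p_int=0.7, p_ext=0.005, f_dec=0.2, T=5, seed=0)
+```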
+
+### A.3 Graph Attention Networks
+
+For the encoder step in DAMNETS we compute node embeddings for ${G}_{t - 1}$ , using a GNN. We employ a Graph Attention Network (GAT) [15], although any GNN layer is applicable. Given node features ${X}_{1},\ldots ,{X}_{n},{X}_{i} \in {\mathbb{R}}^{F}$ , a GAT layer produces a new set of node features ${h}_{i} \in {\mathbb{R}}^{{F}^{\prime }}$ according to
+
+$$
+{h}_{i} = \sigma \left( {\mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}}}{\alpha }_{ij}W{X}_{j}}\right) , \tag{10}
+$$
+
+where $W \in {\mathbb{R}}^{{F}^{\prime } \times F}$ is a learnable weight matrix, $\sigma \left( \cdot \right)$ is a non-linear function applied element-wise, and ${\alpha }_{ij} \in \mathbb{R}$ are normalised attention coefficients computed as
+
+$$
+{e}_{ij} = a\left( {W{X}_{i}\parallel W{X}_{j}}\right) , \tag{11}
+$$
+
+$$
+{\alpha }_{ij} = \frac{\exp \left( {e}_{ij}\right) }{\mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}}}\exp \left( {e}_{ik}\right) }, \tag{12}
+$$
+
+where $\parallel$ represents the concatenation operation, and $a\left( \cdot \right)$ is a single layer MLP with the LeakyReLU activation function. These layers are stacked to produce a GAT network. GAT layers can also employ multi-head attention [7]. We write ${GAT}\left( {X, A}\right)$ to represent the application of a GAT network to a graph with node feature matrix $X$ and adjacency matrix $A$ .
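+
+For completeness, a small NumPy sketch of a single GAT layer following Equations (10)-(12) is given below; the choice of $\sigma = \tanh$ , the explicit LeakyReLU inside the attention computation and the toy example are illustrative assumptions.
+
+```python
+import numpy as np
+
+def leaky_relu(z, slope=0.2):
+    return np.where(z > 0, z, slope * z)
+
+def gat_layer(X, A, W, a):
+    """One GAT layer following Eqs. (10)-(12).
+
+    X: (n, F) node features; A: (n, n) adjacency with self-loops included;
+    W: (F', F) weight matrix; a: (2F',) attention vector (the single-layer MLP).
+    """
+    n, F_out = X.shape[0], W.shape[0]
+    WX = X @ W.T                                            # W X_j for every node
+    H = np.zeros((n, F_out))
+    for i in range(n):
+        nbrs = np.flatnonzero(A[i])                         # neighbourhood N_i
+        e = np.array([leaky_relu(a @ np.concatenate([WX[i], WX[j]]))
+                      for j in nbrs])                       # Eq. (11)
+        alpha = np.exp(e - e.max())
+        alpha /= alpha.sum()                                # Eq. (12)
+        H[i] = np.tanh(alpha @ WX[nbrs])                    # Eq. (10), sigma = tanh here
+    return H
+
+# Example on a 4-node path graph with self-loops.
+A = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
+rng = np.random.default_rng(0)
+H = gat_layer(np.eye(4), A, rng.normal(size=(8, 4)), rng.normal(size=16))
+```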
+
+## B Ablation Study
+
+In many of our experiments, DAMNETS outperforms the AGE model [14]. One may conjecture that it is the use of the delta matrix which is the main driver of this difference in performance. To assess this hypothesis we perform the following ablation study: we re-formulate the AGE model to generate delta matrices instead of entire adjacency matrices. Recall that AGE is a transformer model as described in [7], but without the positional encodings. The transformer is trained via maximum likelihood to generate rows of the adjacency matrix ${A}^{\left( t + 1\right) }$ from the rows of the previous adjacency ${A}^{\left( t\right) }$ using the standard Sequence2Sequence framework.
+
+We instead re-formulate AGE to generate delta matrices, which we call AGE-D. To ensure valid delta matrices, we propose to train a transformer to generate $\left| {\Delta }^{\left( t\right) }\right|$ row-wise from the rows of ${A}^{\left( t\right) }$ , then construct ${A}^{\left( t + 1\right) }$ as
+
+$$
+{A}^{\left( t + 1\right) } = \left( {{A}^{\left( t\right) } + \left| {\Delta }^{\left( t\right) }\right| }\right) \bmod 2 .
+$$
+
+Note that this always produces a valid adjacency matrix. We also include a variant with positional encodings on both the input and output rows, which we title AGE-DPE. We compare the performance of AGE and the two proposed variants on the BA and the Bipartite Contraction (L) datasets, as there was a particularly large gap in performance between DAMNETS and AGE on these datasets. We also include the MMD for DAMNETS again for ease of reference. The results are displayed in Tables 5 and 6.
+
+Table 5: The MMD for each network statistic on the BA dataset. Lower is better.
+
+| Model | Degree | Clustering | Spectral | Transitivity | Assortativity | Closeness |
| AGE | 15.08 | 25.15 | 9.45 | 3.42 | 6.37 | 2.36 |
| AGE-D | 0.76 | 2.45 | 0.69 | 0.51 | 4.52 | $2{e}^{-3}$ |
| AGE-DPE | 0.76 | 2.37 | 0.71 | 0.49 | 4.31 | $2{e}^{-3}$ |
| DAMNETS | $8{\mathrm{e}}^{-3}$ | 0.78 | 0.14 | 0.01 | 0.01 | $5{\mathrm{e}}^{-6}$ |
+
+Table 6: The first block shows the $\overline{\mathrm{{MMD}}}$ for each network statistic on the Bipartite Contraction (L) dataset, for which lower is better. The last column shows the spectral bipartivity, for which a value closer to 1 is better.
+
+| Model | Degree | Clustering | Spectral | Transitivity | Assortativity | Closeness | SB |
| AGE | 2.75 | 15.3 | 0.25 | 3.71 | 4.81 | 0.36 | 0.52 |
| AGE-D | 0.15 | 6.14 | 0.15 | 0.04 | 0.02 | $1{e}^{-2}$ | 0.85 |
| AGE-DPE | 0.13 | 6.07 | 0.17 | 0.03 | 0.02 | $1{e}^{-2}$ | 0.87 |
| DAMNETS | $4{\mathrm{e}}^{-3}$ | $3{\mathrm{e}}^{-3}$ | $5{\mathrm{e}}^{-4}$ | $8{\mathrm{e}}^{-8}$ | $7{\mathrm{e}}^{-6}$ | $1{\mathrm{e}}^{-7}$ | 0.99 |
+
+We see that re-formulating AGE to generate delta matrices significantly improves the performance of the method on these datasets, whilst adding the positional encodings provided little to no gain in performance. We also note that while AGE-D performs better than AGE, it still does not match the performance of DAMNETS on these datasets, indicating that it is the combination of our formulation of the problem and specific choice of architecture that leads to such strong performance.
+
+## C Experimental Details
+
+### C.1 Model Specification and Training Details
+
+We used a hidden size of $F = {256}$ for all experiments in this paper, as this was the default hidden size used in BiGG [16]. We used a single-layer GAT [15] for all experiments. The rationale for this is as follows: GNNs are known to suffer from an oversmoothing problem [29], whereby node embeddings all become similar when many GNN layers are stacked. This would be particularly problematic in our case, as the model would not be able to distinguish between different states in the Markov chain and would likely perform very poorly. We therefore chose to use a very simple model with one GNN layer. It is possible this could be improved upon. All the LSTM networks in the BiGG decoder used 2 layers.
+
+We used the Adam [30] optimiser for all experiments, with learning rate 0.001 and weight decay parameter 0.0005. We have not made an effort to optimise these parameters. We used early stopping based on the log-likelihood of the validation set, which comprised ${30}\%$ of the training data, chosen randomly. We used a batch size of 32 graphs (using gradient accumulation for the larger graphs to keep this consistent) and clipped gradients at a norm of 5. We found that training to 0 training loss was very harmful for out-of-sample performance, and that early stopping is necessary for good performance. All numerical results are averaged over five seeds.
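+
+The schematic below sketches this optimisation setup; `model`, the data loaders, the `evaluate` helper and the patience value are hypothetical placeholders, and only the optimiser settings, gradient clipping and early-stopping criterion reflect the description above.
+
+```python
+import torch
+
+def train(model, train_loader, val_loader, evaluate, max_epochs=1000, patience=10):
+    """Sketch of the optimisation setup described above (placeholder model/loaders)."""
+    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
+    best_val, waited = float("inf"), 0
+    for epoch in range(max_epochs):
+        model.train()
+        for batch in train_loader:                 # batches of 32 graph transitions
+            optimiser.zero_grad()
+            loss = -model.log_likelihood(batch)    # maximum likelihood on delta entries
+            loss.backward()
+            torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
+            optimiser.step()
+        val_nll = evaluate(model, val_loader)      # negative log-likelihood on 30% split
+        if val_nll < best_val:
+            best_val, waited = val_nll, 0
+        else:
+            waited += 1
+            if waited >= patience:                 # early stopping
+                break
+    return model
+```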
+
+We implemented the GAT using Torch Geometric [31], and used PyTorch [32] for the other deep learning functionality. We used Networkx [22] for processing the network data. We modified the original BiGG implementation to combine this with the encoder, which can be found at this link.
+
+### C.2 Baseline Model Information
+
+We used the publicly released versions of DYMOND and TagGen. Both of these had fatal errors in their implementation, which we have fixed and released as a part of our source code. There is no available code for AGE, so we implemented it using standard PyTorch Transformer modules. We used all the default hyperparameters given in the respective papers. For training AGE we also used early stopping with the same validation log-likelihood criterion, batch size and optimiser settings. As AGE is a Transformer model, we experimented with many "tricks" that are commonly used to train Transformers, such as warmup learning rates as described in [7], but found they did not improve the performance of the model.
+
+### C.3 Hardware and Running Time
+
+All the experiments in this paper were carried out on a single Nvidia GeForce RTX 3090 GPU with an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz. For the smaller experiments we capped the run time of DAMNETS and AGE at one hour, and 24 hours for the larger datasets (although in all cases both these models early stopped before the cap). DYMOND is also fast to run despite needing to be re-trained on each NTS due to its simplicity. TagGen required 24 hours to complete its experimental run on all datasets.
+
+
+## D Further Experimental Plots
+
+
+
+Figure 9: The first five plots show the mean and standard deviation of the network statistics computed through time for the B-A dataset. We see that DAMNETS produces samples that are very similar to the test set across all metrics, whereas the baseline methods fail to do so. The final plot shows the average degree distribution of the final network ${G}_{T}$ produced by the models. Only DAMNETS correctly replicates the power law degree distribution.
+
+
+
+Figure 10: The network statistics computed through time for the bipartite contraction model. We see that DAMNETS shows excellent performance on all statistics, whereas the other models are not able to learn the dynamics.
+
+
+
+Figure 11: Statistics computed through time on the test set for the three-community decay model. First two rows: the average network statistics computed across time. Final row: the density of each community through time. We see that both AGE and DAMNETS show strong performance on this model, while DYMOND performs poorly.
+
+
+
+Figure 12: The average network statistics computed through time for the test correlation networks. We see DAMNETS closely tracks the test distribution on all statistics other than density.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/fXjoyFXw3G/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/fXjoyFXw3G/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..afb0a30ef3f5d68abaeee57aac7a6957764046f8
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/fXjoyFXw3G/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,307 @@
+§ DAMNETS: A DEEP AUTOREGRESSIVE MODEL FOR GENERATING MARKOVIAN NETWORK TIME SERIES
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Generative models for network time series (also known as dynamic graphs) have tremendous potential in fields such as epidemiology, biology and economics, where complex graph-based dynamics are core objects of study. Designing flexible and scalable generative models is a very challenging task due to the high dimensionality of the data, as well as the need to represent temporal dependencies and marginal network structure. Here we introduce DAMNETS, a scalable deep generative model for network time series. DAMNETS outperforms competing methods on all of our measures of sample quality, over real and synthetic data sets.
+
+§ 1 INTRODUCTION
+
+Temporal networks (also known as dynamic graphs) arise naturally in many fields of study such as the spread of disease [1], molecular interaction networks [2], interbank liability networks [3] and online social [4] and citation networks [5]. Accurate data-driven generative modelling of these processes could have a profound, wide-reaching impact, for example in simulating the trajectories of future pandemics or financial contagion risk in economic crash scenarios.
+
+In contrast to generating static networks (i.e., networks that do not evolve over time), generating time series of networks has received relatively little attention in the literature. While static networks usually include complex dependencies, network time series also contain complex dependencies across time. As an example, in a time series of social contact networks, the interest may lie in replicating not only the degree distribution but also the clustering behaviour, to capture the interplay between these summary statistics over different times of the day. This complexity is further exacerbated by the high-dimensional nature of network time series; a dataset with $N$ network time series on $n$ nodes each, and of length $T$ each, has size $N \times T \times {n}^{2}$ . Building a generative model that faithfully replicates both network topology and dependence between graph snapshots is an extremely challenging task.
+
+Data-driven generative models of other types of sequential data, such as natural language, commonly follow an encoder-decoder structure, e.g. Sequence2Sequence [6] and Transformer [7] models. We combine ideas from the static network generation and sequence modelling literatures in DAMNETS, an efficient and high quality generator for Markovian network time series. We leverage the insight that the delta matrix, that is, the difference between consecutive adjacency matrices, is very sparse for most networks of interest. We therefore propose to use a GNN to encode the current state of the network, and utilise an efficient sparse adjacency matrix sampler to generate delta matrices conditioned on the node embeddings computed by the GNN to construct the next network in the time series.
+
+In this paper, we restrict our attention to time series ${G}_{0},{G}_{1},\ldots ,{G}_{T}$ of simple, undirected, labelled graphs on a fixed node set $V = \{ 1,\ldots ,n\}$ with edge set ${E}_{t} \subseteq \{ \left( {i,j}\right) : i,j \in V\}$ . An element of the sequence ${G}_{t} = \left( {V,{E}_{t}}\right)$ has a random edge set ${E}_{t}$ drawn from a time-dependent probability distribution ${p}_{t}\left( {V \times V}\right)$ over the set of node pairs on $V$ , and emits adjacency matrix ${A}^{\left( t\right) }$ .
+
+The remainder of this paper is structured as follows. Section 2 is a review of related work. Section 3 introduces the DAMNETS algorithmic pipeline. Section 4 details the outputs of numerical experiments for representative generative models from the network literature as well as real world networks. Section 5 summarises our main findings and proposes future avenues of investigation. The DAMNETS code is available at this link.
+
+§ 2 RELATED WORK
+
+§ 2.1 STATIC NETWORK GENERATION
+
+Static graph generation involves learning a probability distribution $p\left( G\right)$ over an observed set of networks. Recently, several machine learning approaches have shown good performance on generating arbitrary sets of networks, including DeepGMG [8], GraphRNN [9], GRAN [10] and BiGG [11]. Our paper continues this progression to the network time series setting.
+
+BiGG. BiGG is a scalable model for generating static networks that we will introduce briefly here, as our approach shares some similarities. Popular frameworks such as GraphRNN, GRAN and BiGG all employ the following high-level pattern for sampling the adjacency matrix; they sample each row of the adjacency matrix one at a time, using a row-wise auto-regressive model to capture the topological structure of the sampled graph and a second auto-regressive model to capture within-row edge-level correlations. GraphRNN uses a hierarchical RNN structure, GRAN uses a graph neural network with a conditional mixture of Bernoulli likelihood and BiGG uses a binary tree type structure, which is particularly suited to sparse graphs.
+
+The major innovation introduced in BiGG is an improvement upon the naive $O\left( n\right)$ time complexity for sampling a row of the adjacency matrix. Instead of sampling each of the $n$ entries using a linear-time autoregressive model (such as a RNN), the authors propose to sample each row using a binary tree. Each node $u$ is associated with a random binary tree ${\mathcal{T}}_{u}$ which is constructed as follows. Each tree node $k$ corresponds to an interval of graph nodes $\left\lbrack {{v}_{l},{v}_{r}}\right\rbrack$ . The process starts from the root $\left\lbrack {1,n}\right\rbrack$ and terminates at leaf nodes $\left\lbrack {v,v}\right\rbrack$ . At each decision step the model decides whether the tree has a left child (lch), with probability $p\left( {\operatorname{lch}\left( k\right) }\right)$ , and right child (rch), with probability $p\left( {\operatorname{rch}\left( k\right) }\right)$ , and if so descends further down the tree until it reaches a leaf node. The probability of this tree being a particular realisation ${\mathcal{T}}_{u} = {\tau }_{u}$ is thus
+
+$$
+p\left( {\tau }_{u}\right) = \mathop{\prod }\limits_{{k \in {\tau }_{u}}}p\left( {\operatorname{lch}\left( k\right) }\right) p\left( {\operatorname{rch}\left( k\right) }\right) . \tag{1}
+$$
+
+The tree ${\tau }_{u}$ is then represented as a row vector of length $n$ of an adjacency matrix, with position $v$ having entry 1 if ${\tau }_{u}$ contains the leaf $\left\lbrack {v,v}\right\rbrack$ , and 0 otherwise. The algorithmic advantage stems from setting all entries in the left half of the interval $\left\lbrack {{v}_{l},{v}_{r}}\right\rbrack$ to 0 in row $u$ as soon as the left child of tree node $k = \left\lbrack {{v}_{l},{v}_{r}}\right\rbrack$ is not generated (and similarly for the right half if a right child is not generated). Thus for a node $u$ , the corresponding row of the adjacency matrix can be sampled in $O\left( \left| {\mathcal{T}}_{u}\right| \right)$ decision steps. Since $\left| {\mathcal{N}}_{u}\right|$ , the size of the graph neighbourhood of $u$ , equals the number of leaf nodes and $\log n$ is the maximum depth of the binary tree, the upper bound $\left| {\mathcal{T}}_{u}\right| \leq \left| {\mathcal{N}}_{u}\right| \log n$ follows. Moreover, significantly larger time savings can be made in practice if the model decides not to descend further into the tree in the upper levels.
+
+To include dependence between entries within the row of the adjacency matrix, BiGG augments the process to produce state variables that track the decisions made, both above and below in the tree. At each tree node $k$ , one always decides first whether to generate the left child conditionally on the state of the tree above, which is denoted ${h}_{u}^{top}\left( k\right)$ , with the decision sampled from $p\left( {\operatorname{lch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) }\right)$ . If the model decides to descend into the left child, the entire left subtree is generated before returning to $k$ and making a decision about whether to generate the right child. The left subtree that was generated is summarised by a bottom-up state variable, denoted ${h}_{u}^{\text{ bot }}\left( k\right)$ , and this is used to decide whether to sample a right child (rch) for the subtree. The model for ${\mathcal{T}}_{u}$ therefore becomes
+
+$$
+p\left( {\mathcal{T}}_{u}\right) = \mathop{\prod }\limits_{{k \in {\mathcal{T}}_{u}}}p\left( {\operatorname{lch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) }\right) p\left( {\operatorname{rch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) ,{h}_{u}^{bot}\left( {\operatorname{lch}\left( k\right) }\right) }\right) , \tag{2}
+$$
+
+where the exact equations for ${h}_{u}^{top}$ and ${h}_{u}^{bot}$ are given in Algorithm 2. The child probabilities are finally created via two MLPs, denoted ${\mathrm{{MLP}}}_{x} : {\mathbb{R}}^{F} \rightarrow \mathbb{R}$ for $x = L,R$ , via
+
+$$
+p\left( {\operatorname{lch}\left( k\right) \mid {h}_{u}^{\text{ top }}\left( k\right) }\right) = \operatorname{Bernoulli}\left( {{\operatorname{MLP}}_{L}\left( {{h}_{u}^{\text{ top }}\left( k\right) }\right) }\right) , \tag{3}
+$$
+
+$$
+p\left( {\operatorname{rch}\left( k\right) \mid {h}_{u}^{top}\left( k\right) ,{h}_{u}^{bot}\left( {\operatorname{lch}\left( k\right) }\right) }\right) = \operatorname{Bernoulli}\left( {{\operatorname{MLP}}_{R}\left( {{h}_{u}^{top}\left( k\right) ,{h}_{u}^{bot}\left( {\operatorname{lch}\left( k\right) }\right) }\right) }\right) . \tag{4}
+$$
+
+§ 2.2 NETWORK TIME SERIES (NTS) GENERATION
+
+There are classical models for generating time series of networks designed to capture a specific set of NTS characteristics, such as the forest fire process [5], which can produce power-law degree distributions and shrinking effective diameter (i.e., the largest shortest path length in the graph). These classical models, while very effective at re-creating certain types of behaviour, are not data-driven and require the network to obey a pre-defined set of characteristics to be effective. Approaches attempting to generate arbitrary network time series have appeared in the machine learning literature, such as the TagGen model [12], which uses a self-attention mechanism to learn from temporal random walks on a NTS, from which new NTSs are subsequently generated. Another recent algorithm is DYMOND [13], which is a simpler approach that models the arrival times of 3-node motifs, then samples these subgraphs to generate the NTS. It is important to note that both DYMOND and TagGen attempt to solve a slightly different problem to DAMNETS; they take as input a single time series ${G}_{0},\ldots ,{G}_{T}$ and pre-defined network statistics, and aim to generate an entire time series with these network statistics similar to this single realisation. Instead of specifying the network statistics of interest, DAMNETS aims to learn a probability distribution $p\left( {{G}_{t} \mid {G}_{t - 1}}\right)$ such that given an arbitrary graph ${G}_{t - 1}$ (not in the training set), one can draw many samples for ${G}_{t}$ and reason about the future trajectory of the network. This requires a different set of evaluation metrics and datasets, see Section 4 for discussion.
+
+AGE. The approach most similar to our own is the Attention-Based Graph Evolution (AGE) model [14]. AGE uses a model very similar to a Transformer [7] (only omitting the positional encoding step), where a self-attention mechanism is applied to the rows of ${A}^{\left( t - 1\right) }$ to learn node embeddings, and a source-target attention module is sequentially applied to generate the rows of ${A}^{\left( t\right) }$ . AGE has two clear shortcomings; the first is that it does not explicitly account for graph connectivity, which is left to the attention mechanism to deduce. The second is that it does not capture edge-level correlations on the sampled rows. To give a simple example of why this is important, suppose we were considering a NTS where in every graph snapshot each node has exactly two neighbours; the model should have some mechanism to condition on the edges it has sampled for a node so that it can stop once it has generated two edges. Furthermore, AGE operates directly between two adjacency matrices rather than generating only differences, which does not allow it to utilise sparsity, limiting the scalability of the method. In contrast, DAMNETS explicitly utilises graph connectivity in the model pipeline and has the capacity to model edge correlations within rows of the adjacency matrix.
+
+§ 3 DAMNETS ARCHITECTURE
+
+Our goal is to learn a generative model $p\left( {\cdot \mid {G}_{t - 1}}\right)$ for the next network in a NTS, given a set of training network time series $\left\{ {{\left\{ {G}_{t}^{1}\right\} }_{t = 0}^{{T}_{1}},\ldots ,{\left\{ {G}_{t}^{N}\right\} }_{t = 0}^{{T}_{N}}}\right\}$ . Our model has a Markovian structure and hence, for generating ${G}_{t}$ , all relevant information about the past is assumed to be contained in ${G}_{t - 1}$ .
+
+For a description of our model we first introduce the delta matrix ${\Delta }^{\left( t\right) } \in \{ - 1,0,1{\} }^{n \times n}$ defined as
+
+$$
+{\Delta }_{ij}^{\left( t\right) } = {A}_{ij}^{\left( t\right) } - {A}_{ij}^{\left( t - 1\right) } = \begin{cases} 1 & \Rightarrow \text{ add edge }\left( {i,j}\right) \\ 0 & \Rightarrow \text{ no change in }\left( {i,j}\right) \\ - 1 & \Rightarrow \text{ remove edge }\left( {i,j}\right) . \end{cases}
+$$
+
+When conditioned on ${A}^{\left( t - 1\right) }$ , each entry ${\Delta }_{ij}^{\left( t\right) }$ can only take two values, namely ${\Delta }_{ij}^{\left( t\right) }$ can only be 0 or 1 if ${A}_{ij}^{\left( t - 1\right) } = 0$ , and ${\Delta }_{ij}^{\left( t\right) }$ can only be -1 or 0 if ${A}_{ij}^{\left( t - 1\right) } = 1$ . Learning a generative model $p\left( {{\Delta }^{\left( t\right) } \mid {G}_{t - 1}}\right)$ is equivalent to learning $p\left( {{G}_{t} \mid {G}_{t - 1}}\right)$ . Thus, this model only has to learn to produce the temporal update, rather than to reproduce the current graph and apply the temporal update.
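+
+To make this bookkeeping concrete, the following minimal NumPy sketch (the function names are ours, not part of the model) computes a delta matrix from two adjacency snapshots and recovers ${G}_{t}$ from ${G}_{t-1}$ and a sampled delta.
+
+```python
+import numpy as np
+
+def delta_matrix(A_prev, A_next):
+    """Entry-wise difference Delta^(t) = A^(t) - A^(t-1), with values in {-1, 0, 1}."""
+    return A_next - A_prev
+
+def apply_delta(A_prev, delta):
+    """Recover A^(t) from A^(t-1) and a (sampled) delta matrix."""
+    return A_prev + delta
+
+# Toy 3-node undirected example: edge (0,1) is removed and edge (0,2) is added.
+A_prev = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
+A_next = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
+delta = delta_matrix(A_prev, A_next)
+assert np.array_equal(apply_delta(A_prev, delta), A_next)
+```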
+
+As we consider only undirected graphs, we only model the lower triangular part of the delta matrix. As our approach is an encoder-decoder framework, we first summarise the previous network ${G}_{t - 1}$ by computing node embeddings using a GNN as an encoder, then combine these with a modified version of the very efficient sparse graph sampler BiGG [11] to act as a decoder for the delta matrix.
+
+§ 3.1 THE ENCODER
+
+The first step is to compute node embeddings for ${G}_{t - 1}$ , using a GNN. We employ a Graph Attention Network (GAT) [15], although any GNN layer is applicable. We use ${GAT}\left( {X,A}\right)$ to represent the application of a GAT network to a graph with node feature matrix $X$ and adjacency matrix $A$ , and in the absence of other node features we use the identity matrix as node features (which here corresponds to a one-hot encoding of the nodes). Node or edge-level features, whenever available, can be incorporated into the pipeline. The embedding of ${G}_{t - 1}$ is given by
+
+$$
+{H}^{\left( t - 1\right) } = {GAT}\left( {X,{A}^{\left( t - 1\right) }}\right) , \tag{5}
+$$
+
+where $X \in {\mathbb{R}}^{n \times p}$ is the node feature matrix, and ${H}^{\left( t - 1\right) } \in {\mathbb{R}}^{n \times q}$ is the node embedding matrix.
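+
+As an illustration, a two-layer encoder along these lines could be sketched with PyTorch Geometric's `GATConv`; the layer sizes and number of attention heads below are our own choices rather than those of the paper.
+
+```python
+import torch
+from torch_geometric.nn import GATConv
+from torch_geometric.utils import dense_to_sparse
+
+class Encoder(torch.nn.Module):
+    def __init__(self, n_nodes, hidden_dim=32, out_dim=32, heads=4):
+        super().__init__()
+        self.gat1 = GATConv(n_nodes, hidden_dim, heads=heads)
+        self.gat2 = GATConv(hidden_dim * heads, out_dim, heads=1)
+
+    def forward(self, A_prev):
+        # A_prev: dense (n x n) adjacency tensor of the previous graph G_{t-1}
+        edge_index, _ = dense_to_sparse(A_prev)
+        x = torch.eye(A_prev.size(0))            # identity features = one-hot node encoding
+        h = torch.relu(self.gat1(x, edge_index))
+        return self.gat2(h, edge_index)          # H^(t-1): one embedding per node
+```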
+
+§ 3.2 THE DECODER
+
+Starting with the first node according to the given node ordering and conditioning each row on the previously generated rows gives
+
+$$
+p\left( \Delta \right) = p\left( {\left\{ {\Delta }_{u}\right\} }_{u \in V}\right) = \mathop{\prod }\limits_{{u \in V}}p\left( {{\Delta }_{u} \mid \left\{ {{\Delta }_{w} : w < u}\right\} }\right) .
+$$
+
+We sample each row of $\Delta$ using Algorithm 2, a modified version of the BiGG row sampling algorithm. We enhance the procedure, allowing it to distinguish between a tree leaf which would be an edge addition and a tree leaf which would be an edge deletion. If the left (resp. right) child at level $k$ is a leaf node corresponding to entry ${\Delta }_{ij}^{\left( t\right) }$ , instead of (3) we sample the leaf node using
+
+$$
+p\left( {\operatorname{lch}\left( k\right) \mid h}\right) = \left\{ \begin{array}{l} \text{ Bernoulli }\left( {{\mathrm{{MLP}}}_{ + }\left( h\right) }\right) \text{ if }{A}_{ij}^{\left( t - 1\right) } = 0, \\ \text{ Bernoulli }\left( {{\mathrm{{MLP}}}_{ - }\left( h\right) }\right) \text{ if }{A}_{ij}^{\left( t - 1\right) } = 1, \end{array}\right. \tag{6}
+$$
+
+where $h \in {\mathbb{R}}^{q}$ is the corresponding state variable. Each application of Algorithm 2 returns an embedding, namely ${g}_{u} = {h}_{u}^{\text{ bot }}\left( \text{ root }\right)$ , which depends on every entry in the row. As is done in the static setting, we apply an auto-regressive model across these row embeddings to capture dependencies between rows. The bottom-up embeddings of each tree have no other computational dependencies, so they can be efficiently pre-computed during training. We chose to use a standard Transformer self-attention layer [7] (which we call TFEncoder) with sinusoidal positional embedding for this auto-regressive component; this was chosen to provide similar representation power to the baseline model AGE. Self-attention does not scale to very long sequences, however, so for very large graphs with many nodes, this component could be replaced by either an LSTM or the Fenwick tree structure proposed in [16].
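+
+For illustration, Equation (6) amounts to choosing between two Bernoulli heads depending on whether the entry could be an edge addition or deletion; a minimal PyTorch sketch (the module and layer sizes are our own) is:
+
+```python
+import torch
+from torch import nn
+
+class LeafSampler(nn.Module):
+    """Sketch of Equation (6): one MLP head for potential additions, one for deletions."""
+    def __init__(self, state_dim):
+        super().__init__()
+        self.mlp_add = nn.Sequential(nn.Linear(state_dim, state_dim), nn.ReLU(), nn.Linear(state_dim, 1))
+        self.mlp_del = nn.Sequential(nn.Linear(state_dim, state_dim), nn.ReLU(), nn.Linear(state_dim, 1))
+
+    def forward(self, h, edge_in_prev_graph):
+        # Pick the head according to A_ij^(t-1); the Bernoulli decides whether the leaf is generated.
+        logit = self.mlp_del(h) if edge_in_prev_graph else self.mlp_add(h)
+        prob = torch.sigmoid(logit)
+        return torch.bernoulli(prob), prob
+```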
+
+§ 3.3 THE DAMNETS MODEL ARCHITECTURE
+
+Figure 1: An overview of our approach to generating Markovian transitions in a network time series. We learn a generative model of the lower triangular part of the delta matrix given the previous graph ${G}_{t - 1}$ . We then draw a sample ${\Delta }^{\left( t\right) }$ and add this to ${A}^{\left( t - 1\right) }$ to produce a sample ${G}_{t}$ .
+
+
+With the two key components of our model defined, we now explain how these models are combined to generate delta matrices given an input graph. As stated in Equation (5), we first compute node embeddings ${H}^{\left( t - 1\right) } \in {\mathbb{R}}^{n \times F}$ , with ${H}_{i}^{\left( t - 1\right) } \in {\mathbb{R}}^{F}$ representing the node embedding computed for node $i$ in ${G}_{t - 1}$ . When generating the row tree ${\mathcal{T}}_{u}$ for node $u$ (which corresponds to generating the row of the delta matrix for node $u$ ), we combine the node embedding from the previous network with the row-wise auto-regressive term ${h}_{u - 1}^{row}$ computed by TFEncoder via an MLP:
+
+$$
+{h}_{u}^{top}\left( \text{ root }\right) = {\mathrm{{MLP}}}_{\text{ cat }}\left( {{h}_{u - 1}^{\text{ row }},{H}_{u}^{\left( t - 1\right) }}\right) , \tag{7}
+$$
+
+where ${\mathrm{{MLP}}}_{\text{ cat }} : {\mathbb{R}}^{2F} \rightarrow {\mathbb{R}}^{F}$ . The full procedure is described in Algorithm 1, with a detailed version, Algorithm 2, in the SI, and is visualised in Figures 1 and 2. The model is trained via maximum likelihood over the entries of the delta matrix using gradient descent. The advantage of this framework is twofold. First, the delta matrix is usually much sparser than the full adjacency matrix, allowing us to make good use of sparse sampling methods; this is a very natural assumption, as one does not expect most of the network to change at each timestep, but rather just a small subset of the edges. Second, differencing a time series makes learning easier: it is very common in traditional time series analysis to perform differencing transformations on data, as differencing may alleviate trends in the time series.
+
+§ 4 EXPERIMENTS
+
+Evaluating a generative model typically follows this recipe: fit the generative model on the training data, draw samples from the model and then compare the distribution of these samples
+
+
+Figure 2: A visualisation of the generation of the $u$ -th row of the delta matrix ${\Delta }^{\left( t\right) }$ using the DAMNETS model architecture. The nodes shown in red indicate the graph ${G}_{t - 1}$ . We use a GAT to compute node embeddings ${H}^{\left( t - 1\right) }$ for each node in ${G}_{t - 1}$ . Nodes shown in blue belong to the binary tree generated for each row; each tree is generated by combining the node embedding in the previous graph with an auto-regressive term computed using a Transformer (TF) Encoder across the rows of the delta matrix to produce ${h}_{u - 1}^{row}$ , which is used in Equation (7) to initialise the top-down descent of each tree.
+
+Algorithm 1: Algorithm for generating the delta matrix ${\Delta }^{\left( t\right) }$ using DAMNETS
+
+Input: Input graph ${G}_{t - 1} = \left( {V,{E}_{t - 1}}\right)$ , node features $X$
+
+${H}^{\left( t - 1\right) } \leftarrow {GAT}\left( {X,{A}^{\left( t - 1\right) }}\right)$
+
+${h}_{0}^{row} \leftarrow \varnothing$
+
+for $u \leftarrow 1$ to $n$ do
+
+ Let $k = \{ 1,\ldots ,u - 1\}$ be the root of tree ${\mathcal{T}}_{u}$ .
+
+ ${h}_{u}^{\text{ top }}\left( k\right) = {\operatorname{MLP}}_{\text{ cat }}\left( {{h}_{u - 1}^{\text{ row }},{H}_{u}^{\left( t - 1\right) }}\right) .$
+
+ ${g}_{u},{\mathcal{N}}_{u} \leftarrow \operatorname{Recursive}\left( {u,k,{h}_{u}^{\text{ top }}\left( k\right) }\right)$ /* Algorithm 2 */
+
+ /* Only non-zero indices are returned in ${\mathcal{N}}_{u}$ */
+
+ ${\Delta }_{u} \leftarrow$ Determine sign of entries using ${A}^{\left( t - 1\right) }$ and transform into a vector.
+
+ ${h}_{u}^{\text{ row }} \leftarrow \operatorname{TFEncoder}\left( {{g}_{u};{g}_{1 : u - 1}}\right)$
+
+end
+
+Return ${\Delta }^{\left( t\right) }$ with rows ${\Delta }_{u},u = 1,\ldots ,n$ .
+
+to some held-out test data using some kind of statistical test or metric on the space of probability distributions. For static graphs, there exists a number of graph kernels [17] from which a Maximum Mean Discrepancy (MMD) [18] type metric can be derived. However, these are computationally very costly (some scaling as $O\left( {n}^{4}\right)$ for a graph with $n$ nodes). It is therefore common to define a set of summary statistics over the graphs, such as the degree distribution or clustering coefficient distribution, and compare the distributions of these summary statistics computed over the sampled and test graphs.
+
+We adopt a similar approach applied to the marginal distributions of the network time series. We choose to compare six different network statistics, three local and three global (see [19] for a background on network statistics). Our three local properties are the degree distribution, clustering coefficient distribution and the eigenvalue distribution of the graph Laplacian as introduced in [10]. For each graph, we compute a histogram of these properties over the nodes in the graph, and use a Gaussian kernel with the total-variation metric to compute the MMD. Our three global measures are transitivity, assortativity and closeness centrality. Each of these metrics produces one scalar value per graph, and we again use a Gaussian kernel with the ${\ell }^{2}$ metric to compute the MMD.
+
+For each time point $t$ and statistic $S\left( \cdot \right)$ , we compute ${\operatorname{MMD}}_{t}\left( {S\left( {G}_{t}^{\text{ test }}\right) ,S\left( {G}_{t}^{\text{ sampled }}\right) }\right)$ , and use as final metric the sum $\overline{\operatorname{MMD}}\left( S\right) = \mathop{\sum }\limits_{t}{\operatorname{MMD}}_{t}\left( {S\left( {G}_{t}^{\text{ test }}\right) ,S\left( {G}_{t}^{\text{ sampled }}\right) }\right)$ . If the marginal distributions match exactly, $\overline{\mathrm{{MMD}}}\left( S\right)$ will equal 0, and smaller values indicate better agreement between the distributions. We display all $\overline{\mathrm{{MMD}}}$ scores to three significant figures. Comparing the marginal distributions alone does not suffice as a comparison metric, so we also provide summary plots of these network statistics through time to verify that the evolution of these statistics matches. In addition, we have designed several synthetic-data experiments to verify specific time-series properties observed in real-world networks which we would like to capture.
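+
+A hedged sketch of this evaluation for one local statistic follows; the bin count, kernel bandwidth, and helper names are our own choices, not the paper's.
+
+```python
+import numpy as np
+
+def degree_histogram(G, n_bins=20):
+    """Normalised degree histogram of a (networkx) graph, the 'sample' for a local statistic."""
+    degrees = [d for _, d in G.degree()]
+    hist, _ = np.histogram(degrees, bins=n_bins, range=(0, n_bins))
+    return hist / max(hist.sum(), 1)
+
+def gaussian_tv_mmd(samples_a, samples_b, sigma=1.0):
+    """MMD with a Gaussian kernel on the total-variation distance between histograms."""
+    def k(p, q):
+        tv = 0.5 * np.abs(p - q).sum()
+        return np.exp(-(tv ** 2) / (2 * sigma ** 2))
+    kaa = np.mean([k(p, q) for p in samples_a for q in samples_a])
+    kbb = np.mean([k(p, q) for p in samples_b for q in samples_b])
+    kab = np.mean([k(p, q) for p in samples_a for q in samples_b])
+    return kaa + kbb - 2 * kab
+
+# e.g. MMD_t between degree histograms of test and sampled graphs at one time point:
+# mmd_t = gaussian_tv_mmd([degree_histogram(G) for G in test_graphs_t],
+#                         [degree_histogram(G) for G in sampled_graphs_t])
+```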
+
+A difficulty for graph generative model evaluation is that proper comparison of a network time series generator requires many realisations of this time series drawn from the same distribution to facilitate learning and subsequent comparison. Papers such as TagGen [12] and DYMOND [13] utilise datasets that consist of one realisation of a real-world temporal network, and aim to simply produce "surrogate" networks that closely resemble that single realisation. We aim to assess whether our model is able to generalise to new examples, in the sense that given a new graph ${G}_{t - 1}$ drawn from the same distribution as the training data, we can draw samples ${G}_{t} \sim p\left( {\cdot \mid {G}_{t - 1}}\right)$ . We are therefore unable to use the same datasets as these papers, and instead design a new experimental setup in line with our objective.
+
+Our general experimental framework is as follows: we are given a set of realisations $\left\{ {{\left\{ {G}_{t}^{1}\right\} }_{t = 0}^{{T}_{1}},\ldots ,{\left\{ {G}_{t}^{N}\right\} }_{t = 0}^{{T}_{N}}}\right\}$ . For DAMNETS and AGE, we split this up into a set of training time series and test time series, fit each model on the training set, and then evaluate the performance on the test set. As DYMOND and TagGen can only learn from one time series at a time and produce realisations from that specific time series, we instead train an instance of these models separately on each time series in the test set and sample one time series from each trained model. This might seem like a large advantage for these models, as they have direct access to the test set. However, our experimental results show that the aggregated behaviour of these samples does not match the underlying distribution well, suggesting these methods are not suitable for learning the true underlying process that a given sample was drawn from. Because DYMOND and TagGen have to be re-trained on every single time series, we provide two sets of results for some datasets, with a smaller dataset chosen such that DYMOND and TagGen converge within 24 hours.
+
+§ 4.1 THE BARABÁSI-ALBERT MODEL
+
+The family of Barabási-Albert (B-A) models [20] was designed to capture the so-called scale-free property observed in many real world networks through a preferential attachment mechanism. Formally a scale-free network is one whose degree distribution follows a power-law; if $\deg \left( i\right)$ represents the degree of node $i$ in a random network model, then the network is scale free if $\mathbb{P}\left( {\deg \left( i\right) = d}\right) \propto \frac{1}{{d}^{\gamma }}$ , for some constant $\gamma \in \mathbb{R}$ . Degree distributions with a power-law tail have been observed in many real networks of interest, such as hyperlinks on the World-Wide Web or metabolic networks, although the ubiquity of power law degree distributions has been disputed [21].
+
+The B-A model has two integer parameters, the number of nodes $n$ and the number of edges $m$ to be added at each iteration. The network is initialised with $m$ initial connected nodes. At each iteration $t$ , a new node is added and is connected to $m$ existing nodes, with probability proportional to the current degree ${p}_{u} = \frac{\deg \left( u\right) }{\mathop{\sum }\limits_{{v \in V}}\deg \left( v\right) }$ . Here, the standard NetworkX [22] implementation is used. Constructing a B-A network in this way yields a network time series of length $T = n - m$ , where each graph ${G}_{t}$ is the graph after node $m + t$ has had its first edges attached to it. Nodes with many existing connections (known as hubs) will likely accumulate more links; this is the preferential attachment property which, in the B-A model, leads to a power-law degree distribution with scale parameter $\gamma = 3$ .
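+
+A hedged re-implementation of this construction is sketched below; it mirrors the standard preferential-attachment loop and records a snapshot after each node arrival, which is our reading of the snapshot convention described above.
+
+```python
+import random
+import networkx as nx
+
+def ba_time_series(n=100, m=4, seed=0):
+    """Preferential attachment with snapshots: G_t is the graph after node m + t
+    has received its m edges, giving a series of length T = n - m."""
+    rng = random.Random(seed)
+    G = nx.empty_graph(m)
+    targets = list(range(m))      # the first new node connects to all seed nodes
+    repeated_nodes = []           # node list repeated proportionally to degree
+    snapshots = []
+    for new_node in range(m, n):
+        G.add_edges_from((new_node, t) for t in targets)
+        repeated_nodes.extend(targets)
+        repeated_nodes.extend([new_node] * m)
+        snapshots.append(G.copy())
+        # choose m distinct targets with probability proportional to their degree
+        targets = set()
+        while len(targets) < m:
+            targets.add(rng.choice(repeated_nodes))
+        targets = list(targets)
+    return snapshots
+```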
+
+For the B-A experiments, we take $N = {200}$ time series with parameters $n = {100}$ and $m = 4$ , yielding time series of length $T = {96}$ . The results are displayed in Table 1 and Figure 3. We see that DAMNETS produces samples with MMD scores orders of magnitude lower than the baseline methods, and it is the only model to correctly replicate the power-law degree distribution.
+
+Table 1: The MMD on the B-A dataset for each network statistic. Lower is better.
+
+| Model | Degree | Clustering | Spectral | Transitivity | Assortativity | Closeness |
+| DYMOND | 14.01 | 61.20 | 8.78 | 7.28 | 4.76 | 3.19 |
+| TagGen | 16.33 | 16.55 | 2.29 | 2.06 | 23.95 | 0.10 |
+| AGE | 15.08 | 25.15 | 9.45 | 3.42 | 6.37 | 2.36 |
+| DAMNETS | $8{\mathrm{e}}^{-3}$ | 0.78 | 0.14 | 0.01 | 0.01 | $5{\mathrm{e}}^{-6}$ |
+
+
+Figure 3: Plots for the B-A model. Left: density against time; middle: transitivity against time; right: the average degree distribution of the final network ${G}_{T}$ produced by the models. Only DAMNETS correctly replicates the power law degree distribution.
+
+§ 4.2 BIPARTITE CONCENTRATION
+
+Figure 4: A sample from the bipartite concentration model with 10 nodes in each partition, with an initial connection probability of $p = {0.2}$ and a concentration proportion ${p}^{con} = {0.3}$ . The highest degree node is shown in red; links concentrate on this node over time.
+
+
+This dataset is designed to simulate behaviour in rating systems where objects with many links tend to accumulate more recommendations [23]. For example, in a dataset consisting of users and movies, movies with many existing recommendations are likely to accumulate more over time. The graph ${G}_{0}$ is initialised as a random bipartite graph with connection probability $p$ . At each timestep, we select the node in the right-hand partition with the most links (ties broken at random) and re-wire a proportion ${p}^{\text{ con }}$ of the edges not incident to that node so that they attach to it.
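+
+A hedged sketch of this generator follows; the exact re-wiring rule is our interpretation of the description above, and the parameter values follow Figure 4.
+
+```python
+import random
+import networkx as nx
+
+def bipartite_concentration_series(n_per_side=10, p=0.2, p_con=0.3, T=10, seed=0):
+    """Random bipartite start; each step, a fraction p_con of the edges not touching the
+    current highest-degree right-hand node is re-wired onto that node."""
+    rng = random.Random(seed)
+    left, right = list(range(n_per_side)), list(range(n_per_side, 2 * n_per_side))
+    left_set = set(left)
+    G = nx.Graph()
+    G.add_nodes_from(left)
+    G.add_nodes_from(right)
+    G.add_edges_from((u, v) for u in left for v in right if rng.random() < p)
+    series = [G.copy()]
+    for _ in range(T):
+        hub = max(right, key=lambda v: (G.degree(v), rng.random()))   # ties broken at random
+        candidates = [e for e in G.edges() if hub not in e]
+        for u, v in rng.sample(candidates, int(p_con * len(candidates))):
+            G.remove_edge(u, v)
+            G.add_edge(u if u in left_set else v, hub)                # keep the left endpoint
+        series.append(G.copy())
+    return series
+```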
+
+For the experiments we set $p = {0.5}$ and ${p}^{con} = {0.1}$ . For the smaller data set (S), we place 30 nodes in each partition (so $n = {60}$ ) and iterate for $T = {10}$ timesteps. For the larger dataset (L) we place 250 nodes in each partition $\left( {n = {500}}\right)$ and iterate for $T = {15}$ timesteps. To measure the extent to which the different generators replicate this bipartite structure, in addition to our standard summaries we also compute the mean Spectral Bipartivity (SB) [24] through time, which takes values in [0,1], with 0 indicating the network is not bipartite and 1 indicating the network is fully bipartite. The results are displayed in Table 2 and Figure 10. DAMNETS consistently outperforms all the baseline models across all summary statistics.
+
+Table 2: The MMD for each network statistic (lower is better) and Spectral Bipartivity (closer to 1 is better) across the small (S) and large (L) bipartite concentration test datasets.
+
+| Model | Deg. (S) | Deg. (L) | Clust. (S) | Clust. (L) | Spec. (S) | Spec. (L) | Trans. (S) | Trans. (L) | Assort. (S) | Assort. (L) | Closeness (S) | Closeness (L) | SB (S) | SB (L) |
+| DYMOND | 1.06 | - | 9.55 | - | 0.12 | - | 1.67 | - | $9{e}^{-4}$ | - | 0.14 | - | 0.50 | - |
+| TagGen | 0.81 | - | 1.73 | - | 0.29 | - | $5{e}^{-4}$ | - | 0.07 | - | $2{e}^{-4}$ | - | 0.56 | - |
+| AGE | 0.92 | 2.75 | 9.46 | 15.3 | 0.13 | 0.25 | 1.48 | 3.71 | 0.72 | 4.81 | 0.16 | 0.36 | 0.55 | 0.52 |
+| DAMNETS | 0.01 | $4{\mathrm{e}}^{-3}$ | 0.11 | $3{\mathrm{e}}^{-3}$ | 0.03 | $5{\mathrm{e}}^{-4}$ | $7{\mathrm{e}}^{-6}$ | $8{\mathrm{e}}^{-8}$ | $1{\mathrm{e}}^{-4}$ | $7{\mathrm{e}}^{-6}$ | $4{\mathrm{e}}^{-7}$ | $1{\mathrm{e}}^{-7}$ | 0.99 | 0.99 |
+
+
+Figure 5: Plots for the bipartite concentration model. Left: density against time; middle: transitivity against time; right: closeness against time. Only DAMNETS shows good performance in all statistics.
+
+§ 4.3 COMMUNITY EVOLUTION AND DECAY
+
+Figure 6: A sample from the community decay model of length $T = 5$ on $V = \{ 1,\ldots ,{45}\}$ , with 15 nodes in each of the $Q = 3$ communities, connection probabilities ${p}_{\text{ int }} = {0.7},{p}_{\text{ ext }} = {0.005}$ , decay community $D = 3$ (coloured red) and decay proportion ${p}_{dec} = {0.2}$ .
+
+
+Our next network time series benchmark considers a dynamic community structure model. We initialise a three-community stochastic block model on $n$ nodes. At each time step, we re-wire a fixed proportion ${f}_{\text{ dec }}$ of the edges within the third community (which we call the decay community), replacing each with a random outgoing edge to a node in one of the other communities. A sample from the model is shown in Figure 6, and a full description of the model is given in Appendix A.2.
+
+For the experiments we use intra-community connection probability ${p}_{\text{ int }} = {0.9}$ , inter-community connection probability ${p}_{\text{ ext }} = {0.01}$ , decay fraction ${f}_{\text{ dec }} = {0.2}$ , and iterate for $T = {20}$ timesteps. For the small (S) dataset we place 20 nodes in each community (for a total of $n = {60}$ nodes) and for the large (L) dataset we place 400 nodes in each community ( $n = {1200}$ in total). The non-decay communities should have constant density, and the decay community should have density decaying exponentially at rate ${f}_{dec}$ . The results are displayed in Table 3 and Figure 11. DAMNETS is the best performing model overall, although AGE also shows strong performance on this dataset.
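+
+A hedged sketch of the data-generating process as we read it is given below; the full specification is in Appendix A.2, and the helper names and details here are our own.
+
+```python
+import random
+import networkx as nx
+
+def community_decay_series(nodes_per_block=20, p_int=0.9, p_ext=0.01, f_dec=0.2, T=20, seed=0):
+    """Three-community SBM start; each step, a fraction f_dec of the decay
+    community's internal edges is re-wired to nodes in the other communities."""
+    rng = random.Random(seed)
+    sizes = [nodes_per_block] * 3
+    probs = [[p_int if i == j else p_ext for j in range(3)] for i in range(3)]
+    G = nx.Graph(nx.stochastic_block_model(sizes, probs, seed=seed))
+    decay = set(range(2 * nodes_per_block, 3 * nodes_per_block))
+    others = [v for v in G if v not in decay]
+    series = [G.copy()]
+    for _ in range(T):
+        internal = [(u, v) for u, v in G.edges() if u in decay and v in decay]
+        for u, v in rng.sample(internal, int(f_dec * len(internal))):
+            G.remove_edge(u, v)
+            G.add_edge(u, rng.choice(others))    # random outgoing replacement edge
+        series.append(G.copy())
+    return series
+```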
+
+Table 3: The $\overline{\mathrm{{MMD}}}$ for each network statistic across the small (S) and large (L) community decay test datasets, with a (-) when the model did not converge within 24 hours. A lower MMD is better.
+
+| Model | Deg. (S) | Deg. (L) | Clust. (S) | Clust. (L) | Spec. (S) | Spec. (L) | Trans. (S) | Trans. (L) | Assort. (S) | Assort. (L) | Closeness (S) | Closeness (L) |
+| DYMOND | 1.95 | - | 3.20 | - | 0.66 | - | 0.88 | - | 1.02 | - | 0.33 | - |
+| TagGen | 10.99 | - | 2.91 | - | 2.18 | - | 0.26 | - | 2.37 | - | 1.04 | - |
+| AGE | 0.15 | 0.17 | 2.00 | 2.06 | 0.43 | 0.42 | 0.02 | 0.03 | 0.07 | 0.06 | 0.01 | 0.03 |
+| DAMNETS | 0.19 | 0.21 | 1.90 | 1.91 | 0.39 | 0.40 | 0.01 | 0.01 | 0.03 | 0.04 | 0.01 | 0.02 |
+
+
+Figure 7: The density of each community through time in the 3-community dataset.
+
+§ 4.4 CORRELATION NETWORKS
+
+This dataset consists of financial correlation networks built from time series of asset prices from the Wharton CRSP database [25]. We consider a set of 49 liquid stocks from the US equity market, for which minute-by-minute price data are available. We construct a graph by assigning each stock to a node. We then estimate the correlation matrix of their 5-minute returns each day, and threshold these correlations at 1 standard deviation in order to construct the edges (so stocks are connected by an edge if they are strongly correlated). The dataset spans $N = {97}$ weeks, with each week giving a time series of length $T = 5$ .
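+
+The construction of one daily graph could be sketched as follows; the exact thresholding rule ("at 1 standard deviation") is our reading, namely correlations more than one standard deviation above the mean off-diagonal correlation.
+
+```python
+import numpy as np
+import networkx as nx
+
+def correlation_network(returns, n_std=1.0):
+    """Build one daily graph from an (intervals x assets) matrix of 5-minute returns:
+    assets are connected if their correlation is unusually strong."""
+    C = np.corrcoef(returns, rowvar=False)
+    off_diag = C[~np.eye(C.shape[0], dtype=bool)]
+    threshold = off_diag.mean() + n_std * off_diag.std()
+    A = (C > threshold).astype(int)
+    np.fill_diagonal(A, 0)
+    return nx.from_numpy_array(A)
+```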
+
+One issue with this dataset is that correlations between financial instruments are known to be unstable over time (hence different realisations may not be drawn from the same distribution). To mitigate this we did not split the data chronologically, but rather drew the training and test splits randomly (which corresponds to selecting random weekly time series from the dataset). We repeat this procedure over 5 seeds and compute the average $\overline{\mathrm{{MMD}}}$ . The results are displayed in Table 4 and Figure 12. DAMNETS is the only model to show good performance across all statistics.
+
+Table 4: The MMD for each network statistic across the correlation test dataset. Lower is better.
+
+| Model | Degree | Clustering | Spectral | Transitivity | Assortativity | Closeness |
+| DYMOND | 0.16 | 0.58 | 0.27 | 0.17 | 0.04 | 0.06 |
+| TagGen | 0.95 | 0.56 | 0.85 | $4{\mathrm{e}}^{-3}$ | 0.08 | 0.48 |
+| AGE | 0.14 | 1.07 | 0.31 | 0.26 | 0.08 | 0.10 |
+| DAMNETS | 0.13 | 0.21 | 0.25 | 0.04 | 0.02 | 0.01 |
+
+
+Figure 8: The network statistics against time for the correlation dataset. DAMNETS is the only model that closely tracks the test distribution on all statistics.
+
+§ 4.5 ABLATION STUDY
+
+We see that DAMNETS outperforms all the baseline models on each dataset under consideration, in particular the AGE model, which is the most similar in that it also follows a sequence-to-sequence framework. DAMNETS differs from AGE in two major ways, namely the formulation in terms of the delta matrix and the model architecture adapted for sampling this sparse matrix. We provide an ablation study in Appendix B where we modify AGE to generate delta matrices, and also a version where we add positional encodings. We find that the delta matrix formulation significantly improves the performance of AGE, while positional encodings do not change the performance much, with neither variant of AGE able to match the performance of DAMNETS. This suggests that it is our re-formulation of the problem, combined with a model architecture suited to sampling sparse delta matrices, that provides such strong performance.
+
+§ 5 DISCUSSION AND CONCLUSION
+
+DAMNETS provides a novel approach to generating network time series, offering fine-grained edge-level conditioning while maintaining scalability by generating delta matrices rather than entire graphs and efficiently utilising the sparsity of these matrices. We have shown through extensive experiments that DAMNETS is able to learn a variety of important network models that existing methods simply cannot. DAMNETS can learn to generate long time series, reproduce power-law degree distributions and bipartite structure, and maintains very strong performance on larger networks, while none of the baseline models are able to capture all of these properties.
+
+In future work, the Markovian assumption underlying DAMNETS could be relaxed to incorporate time series with long-range dependencies, using techniques such as the node memory introduced in the TGNN model [26]. The model could also be extended to handle graphs of varying size: node deletion could be performed by adding a step before the sampling of each row-tree wherein the model decides whether the node should persist to the current timestep. Node additions could be handled by allowing optional rows to be appended to the end of the delta matrix (and only sampling additions for these rows, as a new node cannot have any edge deletions).
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/fe1DEN1nds/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/fe1DEN1nds/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..e6d24e8111d61739ff0bd9418f96dc32ceef33d7
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/fe1DEN1nds/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,322 @@
+# Gradual Weisfeiler-Leman: Slow and Steady Wins the Race
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+The classical Weisfeiler-Leman algorithm, also known as color refinement, is fundamental for graph learning and central for successful graph kernels and graph neural networks. Originally developed for graph isomorphism testing, the algorithm iteratively refines vertex colors. On many datasets, the stable coloring is reached after a few iterations and the optimal number of iterations for machine learning tasks is typically even lower. This suggests that the colors diverge too fast, defining a similarity that is too coarse. We generalize the concept of color refinement and propose a framework for gradual neighborhood refinement, which allows a slower convergence to the stable coloring and thus provides a more fine-grained refinement hierarchy and vertex similarity. We assign new colors by clustering vertex neighborhoods, replacing the original injective color assignment function. Our approach is used to derive new variants of existing graph kernels and to approximate the graph edit distance via optimal assignments regarding vertex similarity. We show that in both tasks, our method outperforms the original color refinement with only a moderate increase in running time, advancing the state of the art.
+
+## 1 Introduction
+
+The (1-dimensional) Weisfeiler-Leman algorithm, also referred to as color refinement, iteratively refines vertex colors by encoding colors of neighbors and was originally developed as a heuristic for the graph isomorphism problem. Although it cannot distinguish some non-isomorphic graph pairs, for example strongly regular graphs, it succeeds in many cases. It is widely used as a sub-routine in isomorphism algorithms today to reduce ambiguities that have to be resolved by backtracking search [1]. It has also gained high popularity in graph learning, where the technique is used to define graph kernels [2-5] and to formalize the expressivity of graph neural networks, see the recent surveys [6, 7]. Graph kernels based on Weisfeiler-Leman refinement provide remarkable predictive performance while being computationally highly efficient. The original Weisfeiler-Leman subtree kernel [2] and its variants and extensions, e.g., [3-5], provide state-of-the-art classification accuracy on many datasets and are widely used baselines. The update scheme of the Weisfeiler-Leman algorithm is similar to the idea of neighborhood aggregation in graph neural networks (GNNs). It has been shown that (i) the expressive power of GNNs is limited by the Weisfeiler-Leman algorithm, and (ii) that GNN architectures exist that reach this expressive power [8, 9].
+
+As a consequence of its original application, the Weisfeiler-Leman algorithm assigns discrete colors and does not allow distinguishing minor from major differences in vertex neighborhoods but considers two colors as either the same or different. Most Weisfeiler-Leman graph kernels match vertex colors of the first few refinement steps by equality, which can be considered too rigid, since these colors encode complex neighborhood structures. In machine learning tasks, a more fine-grained differentiation appears promising. Data is often noisy, which in graphs can show, for example, as small differences in vertex degree. Such differences get picked up by the refinement strategy of the Weisfeiler-Leman algorithm and cannot be distinguished from significant differences.
+
+We address this problem by providing a different approach to the refinement step of the Weisfeiler-Leman algorithm: We replace the injective relabeling function with a non-injective one to gain a more gradual refinement of colors. This allows us to obtain a finer vertex similarity measure that can distinguish between large and small changes in vertex neighborhoods with increasing radius. We characterize the set of functions that, while not necessarily injective, guarantee that the stable coloring of the original Weisfeiler-Leman algorithm is reached after a possibly higher number of iterations. Thus, our approach preserves the expressive power of the Weisfeiler-Leman algorithm. We discuss possible realizations of such a function and use $k$ -means clustering in our experimental evaluation as an exemplary choice.
+
+## Our Contribution.
+
+1. We propose refining, neighborhood preserving (renep) functions, which generalize the concept of color refinement. This family of functions leads to the coarsest stable coloring while only incorporating direct neighborhoods.
+
+2. We show the connections of our approach to the original Weisfeiler-Leman algorithm, as well as other vertex refinement strategies.
+
+3. We propose two new graph kernels based on renep functions that outperform state-of-the-art kernels on synthetic and real-world datasets, with only a moderate increase in running time.
+
+4. We apply our new approach for approximating the graph edit distance via bipartite graph matching and show that it outperforms state-of-the-art heuristics.
+
+## 2 Related Work
+
+Various graph kernels based on the standard Weisfeiler-Leman refinement have been proposed [2-5]. Recent comprehensive experimental evaluations confirm their high classification accuracy on many real-world datasets $\left\lbrack {{10},{11}}\right\rbrack$ . These approaches implicitly match colors by equality, which can be considered as too rigid, since colors encode unfolding trees representing complex neighborhood structures. Some recent works address this problem: Yanardag and Vishwanathan [12] introduced similarities between colors using techniques inspired by natural language processing, that were subsequently refined by Narayanan et al. [13]. Schulz et al. [14] define a distance function between colors by comparing the associated unfolding trees using a tree edit distance. Based on this distance the colors are clustered to obtain a new graph kernel. Although the tree edit distance is polynomial-time computable, the running time of the algorithm is very high. A kernel based on the Wasserstein distance of sets of unfolding trees was proposed by Fang et al. [15]. The vertices of the graphs are embedded into ${\ell }_{1}$ space using an approximation of the tree edit distance between their unfolding trees. A graph can then be seen as a distribution over those embeddings. While the function proposed is not guaranteed to be positive semi-definite, the method showed results similar to and in some cases exceeding state-of-the-art techniques. The running time, however, is still very high and the method is only feasible for unfolding trees of small height.
+
+These approaches define similarities between Weisfeiler-Leman colors and the associated unfolding trees. Our approach, in contrast, alters the Weisfeiler-Leman refinement procedure itself and does not rely on computationally expensive matching of unfolding trees.
+
+## 3 Preliminaries
+
+In this section we provide the definitions necessary to understand our new vertex refinement algorithm. We first give a short introduction to graphs and the original Weisfeiler-Leman algorithm, before we cover graph kernels.
+
+Graph Theory. A graph $G = \left( {V, E,\mu ,\nu }\right)$ consists of a set of vertices $V$ , denoted by $V\left( G\right)$ , a set of edges $E\left( G\right) = E \subseteq V \times V$ between the vertices, a labeling function for the vertices $\mu : V \rightarrow L$ , and a labeling function for the edges $\nu : E \rightarrow L$ . We discuss only undirected graphs and denote an edge between $u$ and $v$ by ${uv}$ . The set of neighbors of a vertex $v \in V$ is denoted by $N\left( v\right) = \{ u \mid {uv} \in E\}$ . The set $L$ contains categorical labels. A (rooted) tree $T$ is a simple (no self-loops or multi-edges), connected graph without cycles and with a designated root node $r$ . A tree ${T}^{\prime }$ is a subtree of a tree $T$ , denoted by ${T}^{\prime } \subseteq T$ , iff $V\left( {T}^{\prime }\right) \subseteq V\left( T\right)$ . The root of ${T}^{\prime }$ is the node closest to the root in $T$ . A partitioning $\pi$ of a set $S$ is a set $\left\{ {{S}_{1},\ldots ,{S}_{n}}\right\}$ of non-empty subsets of $S$ , such that $\forall i, j \in \{ 1,\ldots , n\} , i \neq j : {S}_{i} \cap {S}_{j} = \varnothing$ and $\mathop{\bigcup }\limits_{{i = 1}}^{n}{S}_{i} = S$ . For $s \in S$ we denote by $\pi \left( s\right)$ the unique identifier of the subset containing $s$ . For ${s}_{1},{s}_{2} \in S$ with $\pi \left( {s}_{1}\right) = \pi \left( {s}_{2}\right)$ , we also write ${s}_{1}{ \approx }_{\pi }{s}_{2}$ .
+
+
+
+Figure 1: Initial coloring and results of the first three iterations of the Weisfeiler-Leman algorithm. To use fewer colors in this example, vertices with a unique color do not get a new color. The color hierarchy shows the development of the colors over the refinement iterations.
+
+A vertex coloring $c : V\left( G\right) \rightarrow {\mathbb{N}}_{0}$ of a graph $G$ is a function assigning each vertex a color. A coloring $c$ can be interpreted as a partitioning ${\pi }_{c}$ of $V\left( G\right)$ with $v{ \approx }_{{\pi }_{c}}w \Leftrightarrow c\left( v\right) = c\left( w\right)$ for all $v, w$ in $V\left( G\right)$ . When it is clear from the context, we use colorings and their corresponding partitions interchangeably. A coloring $\pi$ is a refinement of (or refines) a coloring ${\pi }^{\prime }$ , iff ${s}_{1}{ \approx }_{\pi }{s}_{2} \Rightarrow {s}_{1}{ \approx }_{{\pi }^{\prime }}{s}_{2}$ for all ${s}_{1},{s}_{2}$ in $S$ . We denote this by $\pi \preccurlyeq {\pi }^{\prime }$ and write $\pi \equiv {\pi }^{\prime }$ if $\pi \preccurlyeq {\pi }^{\prime }$ and ${\pi }^{\prime } \preccurlyeq \pi$ . If $\pi \preccurlyeq {\pi }^{\prime }$ and $\pi ≢ {\pi }^{\prime }$ , we say that $\pi$ is a strict refinement of ${\pi }^{\prime }$ , written $\pi \prec {\pi }^{\prime }$ . The refinement relation defines a partial ordering on the colorings.
+
+Color Hierarchy. We consider a sequence of vertex colorings $\left( {{\pi }_{0},{\pi }_{1},\ldots ,{\pi }_{h}}\right)$ with ${\pi }_{h} \preccurlyeq \cdots \preccurlyeq {\pi }_{0}$ and assume that the colors assigned by ${\pi }_{i}$ and ${\pi }_{j}$ are distinct unless $i = j$ or the associated vertex sets are equal, i.e., $\forall {\pi }_{i},{\pi }_{j} : {\pi }_{i}\left( v\right) = {\pi }_{j}\left( v\right) \Rightarrow \left\{ {w \in V\left( G\right) \mid {\pi }_{i}\left( w\right) = {\pi }_{i}\left( v\right) }\right\} = \left\{ {w \in V\left( G\right) \mid {\pi }_{j}\left( w\right) = {\pi }_{j}\left( v\right) }\right\}$ . We can interpret such a sequence of colorings as a color hierarchy, i.e., a tree ${\mathcal{T}}_{h}$ that contains a node for each color $c \in \left\{ {{\pi }_{i}\left( v\right) \mid i \in \{ 0,\ldots , h\} \land v \in V\left( G\right) }\right\}$ and an edge $\left( {c, d}\right)$ iff $\exists v \in V\left( G\right) : {\pi }_{i}\left( v\right) = c \land {\pi }_{i + 1}\left( v\right) = d$ . We associate each tree node with the set of vertices of $G$ having that color. Here, we assume that the initial coloring is uniform corresponding to the trivial vertex partitioning. If this is not the case, we add an artificial root node and connect it to the initial colors. Likewise we insert the coloring ${\pi }_{0} = \{ V\left( G\right) \}$ as first element in the sequence of vertex colorings. An example color hierarchy is given in Figure 1.
+
+Using this color hierarchy we can derive multiple colorings on the vertices: Choosing exactly one color on every path from the leaves to the root (or only the root), always leads to a valid coloring. The finest coloring is induced by the colors representing the leaves of the tree. Given a color hierarchy $T$ , we denote this coloring (which is equal to ${\pi }_{h}$ ) by ${\pi }_{T}$ .
+
+Weisfeiler-Leman Color Refinement. The 1-dimensional Weisfeiler-Leman (WL) algorithm or color refinement [16] starts with a coloring ${c}_{0}$ , where all vertices have a color representing their label (or a uniform coloring in case of unlabeled vertices). In iteration $i$ , the coloring ${c}_{i}$ is obtained by assigning each vertex $v$ in $V\left( G\right)$ a new color according to the colors of its neighbors, i.e.,
+
+$$
+{c}_{i + 1}\left( v\right) = h\left( {{c}_{i}\left( v\right) ,\left\{ \left\{ {{c}_{i}\left( u\right) \mid u \in N\left( v\right) }\right\} \right\} }\right) ,
+$$
+
+where $h : {\mathbb{N}}_{0} \times {\mathbb{N}}_{0}^{{\mathbb{N}}_{0}} \rightarrow {\mathbb{N}}_{0}$ is an injective function. Figure 1 depicts the first iterations of the algorithm for an example graph.
+
+After enough iterations the number of different colors will no longer change and this resulting coloring is called the coarsest stable coloring. The coarsest stable coloring is unique and always reached after at most $\left| {V\left( G\right) }\right| - 1$ iterations. This trivial upper bound on the number of iterations is tight [17]. In practice, however, Weisfeiler-Leman refinement converges much faster (see Appendix B).
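+
+For reference, one refinement iteration can be written in a few lines of Python; this is a sketch in which the injective function $h$ is realised by handing out fresh integer colors per signature.
+
+```python
+def wl_iteration(colors, adjacency):
+    """One WL step: relabel each vertex by (old color, sorted multiset of neighbor colors)."""
+    signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
+                  for v in adjacency}
+    relabel, new_colors = {}, {}
+    for v, sig in signatures.items():
+        if sig not in relabel:              # injective map from signatures to fresh colors
+            relabel[sig] = len(relabel)
+        new_colors[v] = relabel[sig]
+    return new_colors
+
+# Path on three vertices with a uniform initial coloring:
+adjacency = {0: [1], 1: [0, 2], 2: [1]}
+print(wl_iteration({v: 0 for v in adjacency}, adjacency))  # endpoints and centre get distinct colors
+```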
+
+Graph Kernels and the Weisfeiler-Leman Subtree Kernel. A kernel on $X$ is a function $k : X \times X \rightarrow \mathbb{R}$ , so that there exists a Hilbert space $\mathcal{H}$ and a mapping $\phi : X \rightarrow \mathcal{H}$ with $k\left( {x, y}\right) = \langle \phi \left( x\right) ,\phi \left( y\right) \rangle$ for all $x, y$ in $X$ , where $\langle \cdot , \cdot \rangle$ is the inner product of $\mathcal{H}$ . A graph kernel is a kernel on graphs, i.e., $X$ is the set of all graphs.
+
+The Weisfeiler-Leman subtree kernel [2] with height $h$ is defined as
+
+$$
+{k}_{ST}^{h}\left( {{G}_{1},{G}_{2}}\right) = \mathop{\sum }\limits_{{i = 0}}^{h}\mathop{\sum }\limits_{{u \in V\left( {G}_{1}\right) }}\mathop{\sum }\limits_{{v \in V\left( {G}_{2}\right) }}\delta \left( {{c}_{i}\left( u\right) ,{c}_{i}\left( v\right) }\right) , \tag{1}
+$$
+
+where $\delta$ is the Dirac kernel (1, iff ${c}_{i}\left( u\right)$ and ${c}_{i}\left( v\right)$ are equal, and 0 otherwise). It counts the number of vertices with common colors in the two graphs up to the given bound on the number of Weisfeiler-Leman iterations.
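+
+Assuming the colorings of both graphs share one global color namespace (as when refinement is run on their disjoint union), Equation (1) reduces to a histogram product per iteration; a small sketch:
+
+```python
+from collections import Counter
+
+def wl_subtree_kernel(colorings_g1, colorings_g2):
+    """Equation (1): per iteration, count vertex pairs (u, v) with c_i(u) = c_i(v)."""
+    value = 0
+    for c1, c2 in zip(colorings_g1, colorings_g2):     # one {vertex: color} dict per iteration
+        h1, h2 = Counter(c1.values()), Counter(c2.values())
+        value += sum(h1[c] * h2[c] for c in h1)
+    return value
+```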
+
+
+
+Figure 2: Initial coloring and results of the first iteration using WL and GWL refinement. We assume that the update function of GWL is a clustering algorithm, producing two clusters per old color. Vertices colored gray and yellow by WL are put into the same cluster, as well as green and light blue ones, as their neighbor color multisets only differ by one element each.
+
+## 4 Gradual Weisfeiler-Leman Refinement
+
+As a different approach to the refinement step of the Weisfeiler-Leman algorithm, we essentially replace the injective relabeling function with a non-injective one. We do this by allowing vertices with differing neighbor color multisets to be assigned the same color under some conditions. Through this, the number of colors per iteration can be limited, allowing us to obtain a more gradual refinement of colors. To reach the same stable coloring as the original Weisfeiler-Leman algorithm, the function has to ensure that vertices with differing colors in one iteration will get differing colors in future iterations and that in each iteration at least one color is split up, if possible.
+
+We first define the property necessary to reach the stable coloring of the original Weisfeiler-Leman algorithm and discuss connections to the original as well as other vertex refinement algorithms. Then we provide a realization of such a function by means of clustering, which is used in our experimental evaluation. Figure 2 illustrates our idea. It depicts the initial coloring, the result of the first iteration of WL and a possible result of the first iteration of the gradual Weisfeiler-Leman refinement (GWL), when restricting the maximum number of new colors to 2 by clustering the neighbor color multisets.
+
+Update Functions. Using the same approach as the Weisfeiler-Leman algorithm, the color of a vertex is updated iteratively according to the colors of its neighbors. Let ${\mathcal{T}}_{i}$ denote a color hierarchy belonging to $G$ and ${n}_{i}\left( v\right) = \left\{ \left\{ {{\pi }_{{\mathcal{T}}_{i}}\left( x\right) \mid x \in N\left( v\right) }\right\} \right\}$ the neighbor color multiset of $v$ in iteration $i$ . We use a similar update strategy, but generalize it using a special type of function:
+
+$$
+\forall v \in V\left( G\right) : {c}_{i + 1}\left( v\right) = {\pi }_{{\mathcal{T}}_{i + 1}}\left( v\right) \text{, with }{\mathcal{T}}_{i + 1} = f\left( {G,{\mathcal{T}}_{i}}\right) ,
+$$
+
+where $f$ is a refining, neighborhood preserving function.
+
+A refining, neighborhood preserving (renep) function $f$ maps a pair $\left( {G,{\mathcal{T}}_{i}}\right)$ to a tree ${\mathcal{T}}_{i + 1}$ , such that
+
+1. ${\mathcal{T}}_{i} \subseteq {\mathcal{T}}_{i + 1}$
+
+2. ${\mathcal{T}}_{i} = {\mathcal{T}}_{i + 1}$ , iff $\forall v, w \in V\left( G\right) .v{ \approx }_{{\pi }_{{\mathcal{T}}_{i}}}w \Rightarrow {n}_{i}\left( v\right) = {n}_{i}\left( w\right)$
+
+3. ${\mathcal{T}}_{i} \subsetneq {\mathcal{T}}_{i + 1} \Rightarrow {\pi }_{{\mathcal{T}}_{i + 1}} \prec {\pi }_{{\mathcal{T}}_{i}}$
+
+4. $\forall v, w \in V\left( G\right) : \left( {v{ \approx }_{{\pi }_{{\mathcal{T}}_{i}}}w \land {n}_{i}\left( v\right) = {n}_{i}\left( w\right) }\right) \Rightarrow v{ \approx }_{{\pi }_{{\mathcal{T}}_{i + 1}}}w$
+
+The conditions ensure that the coloring ${\pi }_{{\mathcal{T}}_{i + 1}}$ is a strict refinement of ${\pi }_{{\mathcal{T}}_{i}}$ if there exists a strict refinement: Condition 1 ensures that the new coloring is a refinement of the old one. Condition 2 ensures that the tree (and in turn the coloring) only stays the same iff the stable coloring is reached, while condition 3 ensures that, if the trees are not equal, ${\pi }_{{\mathcal{T}}_{i + 1}}$ is a strict refinement of ${\pi }_{{\mathcal{T}}_{i}}$ . Without this condition it would be possible to obtain a tree that fulfills condition 1 but does not strictly refine the coloring (for example by adding one child to each leaf). Condition 4 ensures that vertices that are indistinguishable regarding their color and their neighbor color multiset get the same color (as in the original Weisfeiler-Leman algorithm).
+
+We call this new approach gradual Weisfeiler-Leman refinement (GWL refinement). Since $f$ is a renep function, it is assured that at least one color is split into at least two new colors, if the stable coloring is not yet reached. This property and its implications are explored in the following section.
+
+Usually, the refinement is computed simultaneously for multiple graphs. This can be realized by using the disjoint union of all graphs as input. Note that this will have an influence on the function $f$ , since refinements might differ based on the vertices involved. This is a typical case of transductive learning, because the algorithm has to run on all graphs and if a new graph is encountered, the algorithm has to run again on the enlarged graph.
+
+### 4.1 Equivalence of the Stable Colorings
+
+The gradual color refinement will never assign two vertices the same color, if their colors differed in the previous iteration, since we require the coloring to be a refinement of the previous one. We can show that the stable coloring obtained by GWL refinement using any renep function is equal to the unique coarsest stable coloring, which is obtained by the original Weisfeiler-Leman algorithm.
+
+Theorem 1 ([18], Proposition 3). For every coloring $\pi$ of $V\left( G\right)$ , there is a unique coarsest stable coloring $p$ that refines $\pi$ .
+
+This means GWL with any renep function, should it reach a coarsest stable coloring, will reach this unique coarsest stable coloring. It remains to show that GWL will reach a coarsest stable coloring.
+
+Theorem 2. For all $G$ the ${GWL}$ refinement using any renep function will find the unique coarsest stable coloring of $V\left( G\right)$ .
+
+Proof. Let ${\pi }_{\mathcal{T}} = \left\{ {{p}_{1},\ldots ,{p}_{n}}\right\}$ be the stable coloring obtained from GWL on the initial coloring ${\pi }_{0}$ . Assume there exists another stable coloring ${\pi }^{\prime } = \left\{ {{p}_{1}^{\prime },\ldots ,{p}_{m}^{\prime }}\right\}$ with ${\pi }_{\mathcal{T}} \prec {\pi }^{\prime } \preccurlyeq {\pi }_{0}$ , so $m < n$ . Then $\exists v, w \in V\left( G\right) : \left( {v{ \approx }_{{\pi }^{\prime }}w \land v{ ≉ }_{{\pi }_{\mathcal{T}}}w}\right)$ and, since condition 4 applies, $n\left( v\right) \neq n\left( w\right)$ , which contradicts the assumption that ${\pi }^{\prime }$ is stable.
+
+The original Weisfeiler-Leman refinement can be realized by using the renep function with $\Leftrightarrow$ instead of $\Rightarrow$ in condition 4. This ensures that vertices get assigned the same color iff they previously had the same color and their neighborhood color multisets do not differ. Since this procedure splits all colors that can be split up, it is the fastest converging possible renep function (because only the direct neighborhood is considered). A trivial upper bound for the maximum number of Weisfeiler-Leman iterations needed is $\left| {V\left( G\right) }\right| - 1$ and there are infinitely many graphs on which this number of iterations is required for convergence [17]. We obtain the same upper bound for GWL.
+
+Theorem 3. The maximum number of iterations needed to reach the stable coloring using GWL refinement is $\left| {V\left( G\right) }\right| - 1$ .
+
+Proof. The function we consider is a renep function. It follows that, prior to reaching the stable coloring, at least one color is split into at least two new colors in every iteration. Since vertices that had different colors at any step will also have different colors in the following iterations, the number of colors increases in every step. Hence, after at most $\left| {V\left( G\right) }\right| - 1$ steps, each vertex has a unique color, which is a stable coloring.
+
+Sequential Weisfeiler-Leman. For optimizing the running time of the Weisfeiler-Leman algorithm, sequential refinement strategies have been proposed [18-20], which lead to the same stable coloring as the original WL. Our presentation follows Berkholz et al. [18], who provide implementation details and a thorough complexity analysis. Sequential WL manages a stack containing the colors that still have to be processed. All initial colors are added to this stack. In each step, the next color $c$ from the stack is used to refine the current coloring $\pi$ (and generate a new coloring ${\pi }^{\prime }$ ) using the following update strategy: $\forall v, w \in V\left( G\right) : v{ \approx }_{{\pi }^{\prime }}w \Leftrightarrow \left| \left\{ {x \mid x \in N\left( v\right) \land \pi \left( x\right) = c}\right\} \right| = \left| \left\{ {x \mid x \in N\left( w\right) \land \pi \left( x\right) = c}\right\} \right| \land v{ \approx }_{\pi }w$ . Note that ${\pi }^{\prime } \prec \pi$ is not guaranteed. For colors that are split, all new colors are added to the stack with the exception of the largest color class. This is shown to be sufficient for generating the coarsest stable coloring [18].
+
+Sequential Weisfeiler-Leman can be realized by our GWL with the restriction, that in sequential WL, some refinement operations might not produce strict refinements. We need to skip these in our approach (since renep functions have to produce strict refinements as long as the coloring is not stable).
+
+The renep function has to fulfill $\forall v, w \in V\left( G\right) : v{ \approx }_{{\pi }_{{\mathcal{T}}_{i + 1}}}w \Leftrightarrow \left| \left\{ {x \mid x \in N\left( v\right) \land {\pi }_{{\mathcal{T}}_{i}}\left( x\right) = c}\right\} \right| = \left| \left\{ {x \mid x \in N\left( w\right) \land {\pi }_{{\mathcal{T}}_{i}}\left( x\right) = c}\right\} \right| \land v{ \approx }_{{\pi }_{{\mathcal{T}}_{i}}}w$ , where $c$ is the next color in the stack that produces a strict refinement.
+
+### 4.2 Running Time
+
+The running time of the gradual Weisfeiler-Leman refinement depends on the cost of the update function used.
+
+Theorem 4. The running time for the gradual Weisfeiler-Leman refinement is $O\left( {i \cdot {t}_{u}\left( \left| {V\left( G\right) }\right| \right) }\right)$ , where $i$ is the number of iterations and ${t}_{u}\left( n\right)$ is the time needed to compute the renep function for $n$ elements.
+
+The update function used in the original Weisfeiler-Leman refinement can be computed in time $O\left( {\left| {V\left( G\right) }\right| + \left| {E\left( G\right) }\right| }\right)$ in the worst-case by sorting the neighbor color multisets using bucket sort [2].
+
+### 4.3 Discussion of Suitable Update Functions
+
+The update function of the original Weisfeiler-Leman refinement provides a fast way to reach the stable coloring, but in machine learning tasks a more fine-grained vertex similarity is needed. A suitable update function restricts the number of new colors to a manageable amount, while still fulfilling the requirements of a renep function. Clustering the neighborhood multisets of the vertices, and letting the clusters imply the new colors, is an intuitive way to restrict the number of colors per iteration and assign similar neighborhoods the same new color. We discuss how to realize a renep function using clustering.
+
+Whether two vertices that currently have the same color will be assigned the same color in the next step depends on two factors: If they have the same neighbor color multiset, they have to remain in one color group. If their neighbor color multisets differ, however, the renep function can decide to either separate them or not (provided that at least one color is split so that condition 3 is fulfilled). We propose clustering the neighbor color multisets separately for each old color and letting the clusters imply new colors. If a clustering function guarantees to produce at least two clusters for inputs with at least two distinct objects, we obtain a renep function.
+
+Although various clustering algorithms are available, we identified $k$ -means as a convenient choice because of its efficiency and the ability to control the number of clusters. In order to apply $k$ -means to multisets of colors, we represent them as (sparse) vectors, where each entry counts the number of neighbors with a specific color. The above method using $k$ -means clustering with $k > 1$ satisfies the requirements of a renep function. Of course, if the number of elements to cluster is less than or equal to $k$ , the clustering can be omitted and each element can be assigned its own cluster. The number of clusters in iteration $i$ is bounded by $\left| L\right| \cdot {k}^{i}$ , since each color can split into at most $k$ new colors in each iteration and the initial coloring has at most $\left| L\right|$ colors.
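+
+A sketch of one such gradual refinement step is given below; it is our own illustration, with scikit-learn's KMeans standing in for the clustering routine and simplified tie handling.
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+def gwl_iteration(colors, adjacency, k=2, seed=0):
+    """One gradual WL step: per current color class, cluster the neighbor-color
+    count vectors with k-means and let the clusters define the refined colors."""
+    n_colors = max(colors.values()) + 1
+    new_colors, next_color = {}, 0
+    for c in range(n_colors):
+        members = [v for v in adjacency if colors[v] == c]
+        if not members:
+            continue
+        X = np.zeros((len(members), n_colors))          # sparse in practice, dense here
+        for i, v in enumerate(members):
+            for u in adjacency[v]:
+                X[i, colors[u]] += 1
+        distinct = sorted({tuple(row) for row in X})
+        if len(distinct) <= k:                           # few distinct multisets: one cluster each
+            ids = {sig: j for j, sig in enumerate(distinct)}
+            labels = [ids[tuple(row)] for row in X]
+        else:
+            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
+        for v, lab in zip(members, labels):
+            new_colors[v] = next_color + int(lab)
+        next_color += int(max(labels)) + 1
+    return new_colors
+```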
+
+## 5 Applications
+
+The gradual Weisfeiler-Leman refinement provides a more fine-grained approach to capturing vertex similarity: the longer it takes until two vertices are assigned different colors, the more similar they are considered. This makes the approach applicable not only to vertex classification, but also in graph kernels and as a vertex similarity measure for graph matching. We further describe these possible applications in the following and evaluate them against the state-of-the-art methods in Section 6.
+
+Graph Kernels. The idea of the GWL subtree kernel is essentially the same as the Weisfeiler-Leman subtree kernel [2], but instead of using the original Weisfeiler-Leman algorithm, the GWL algorithm is used to generate the features. We use the definition given in Equation (1) replacing the Weisfeiler-Leman colorings with the coloring from the GWL algorithm. The Weisfeiler-Leman optimal assignment kernel [5] is obtained from an optimal assignment between the vertices of two graphs regarding a vertex similarity obtained from a color hierarchy. We replace the Weisfeiler-Leman color hierarchy used originally by the one from our gradual refinement. We evaluate the performance of our newly proposed kernels in Section 6.
+
+Tree Metrics for Approximating the Graph Edit Distance. The Weisfeiler-Leman refinement produces a color hierarchy, see Figure 1, which can be interpreted as a tree defining a metric on the vertices [21]. This tree metric can be used in bipartite graph matching to find an optimal assignment between the vertices of the two graphs in linear time. Finding a vertex assignment is a commonly used strategy for obtaining an upper bound on the graph edit distance, a general distance measure for graphs. The upper bound is computed by deriving a (sub-optimal) edit path from the vertex assignment. We use the same approach as Kriege et al. [21], but again replace the original Weisfeiler-Leman refinement with our gradual one. This means that, instead of using the color hierarchy computed by the Weisfeiler-Leman refinement, we use the color hierarchy generated by our approach as the underlying tree metric. We evaluate this approximation of the graph edit distance regarding its accuracy in $k$ -NN classification against the state of the art and the original approach.
+
+## 6 Experimental Evaluation
+
+We evaluate the proposed approach regarding its applicability in graph kernels, as the gradual Weisfeiler-Leman subtree kernel (GWL) and the gradual Weisfeiler-Leman optimal assignment kernel (GWLOA), as well as its usefulness as a tree metric for approximating the graph edit distance. Specifically, we address the following research questions:
+
+Q1 Can our kernels compete with state-of-the-art methods regarding classification accuracy on real-world and synthetic datasets?
+
+Q2 Which refinement speed is appropriate and are there dataset-specific differences?
+
+Q3 How do our kernels compare to the state-of-the-art methods in terms of running time?
+
+Q4 Is the vertex similarity obtained from GWL refinement suitable for approximating the graph edit distance?
+
+We compare to the Weisfeiler-Leman subtree kernel (WLST) [2], the Weisfeiler-Leman optimal assignment kernel (WLOA) [5], as well as RWL* [14], the approximation of the relaxed Weisfeiler-Leman subtree kernel, and the deep Weisfeiler-Leman kernel (DWL) [12]. We do not compare to [4], since the kernel showed results similar to the WLOA kernel. We compare the graph edit distance approximation using our tree metric GWLT to the original approach Lin [21] and state-of-the-art method BGM [22].
+
+### 6.1 Setup
+
+As discussed in Section 4.3 we used $k$ -means clustering in our new approach. If fewer than $k$ distinct vectors were present for any color in the clustering step, each distinct vector got its own cluster. We implemented our GWL, GWLOA and also the original WLST and WLOA [5] in Java. We used nested cross-validation with $\{ 0,\ldots ,{10}\}$ iterations for WLST, WLOA, GWL and GWLOA and $k$ -means with $k \in \{ 2,4,8,{16}\}$ .
+
+We used the RWL* Python implementation provided by the authors. Note that in contrast to the other approaches, this implementation uses multi-threading. We again used nested cross-validation for evaluation, with unfolding trees of depth in $\{ 1,\ldots ,4\}$ and default values for the other parameters. We used the DWL Python implementation provided by the authors. The parameters window size $w$ and dimension $d$ were set to 25, since they generally worked best out of the combinations from $d, w \in \{ 5,{25},{50}\}$ and no defaults were given. We used the default settings for the other parameters and varied the number of iterations for the Weisfeiler-Leman algorithm from $\{ 1,\ldots ,{10}\}$ , again choosing the best value with nested cross-validation. The running time experiments were conducted on an Intel Xeon Gold 6130 machine at ${2.1}\mathrm{{GHz}}$ with ${96}\mathrm{{GB}}$ RAM. For approximation of the graph edit distance, we used the Java implementation of Lin provided by the authors and implemented our approach GWLT, as well as BGM, also in Java for a fair comparison.
+
+Extension to Edge Labels. The original Weisfeiler-Leman algorithm can be extended to respect edge labels by updating the colors according to ${c}_{i + 1}\left( v\right) = h\left( {{c}_{i}\left( v\right) ,\left\{ {\left( {l\left( {u, v}\right) ,{c}_{i}\left( u\right) }\right) \mid u \in N\left( v\right) }\right\} }\right)$ . All kernels used in the comparison use a similar strategy to incorporate edge labels if present.
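+As an illustration, the following minimal sketch performs one such edge-label-aware update step. The data layout (`adj` for adjacency lists, `edge_label` keyed by vertex pairs) is our own assumption, and the injective function $h$ is emulated by assigning a fresh integer to every unseen signature.
+
+```python
+# Minimal sketch of one edge-label-aware Weisfeiler-Leman update (hypothetical data layout):
+# `adj` maps each vertex to its neighbors, `edge_label[(u, v)]` holds the label of edge uv,
+# and `color` is the current vertex coloring.
+
+def wl_edge_label_step(adj, edge_label, color):
+    table = {}      # signature -> new color, making the relabeling injective
+    new_color = {}
+    for v, neighbors in adj.items():
+        multiset = sorted((edge_label[(u, v)] if (u, v) in edge_label else edge_label[(v, u)],
+                           color[u]) for u in neighbors)
+        signature = (color[v], tuple(multiset))
+        new_color[v] = table.setdefault(signature, len(table))
+    return new_color
+
+adj = {0: [1, 2], 1: [0], 2: [0]}
+edge_label = {(0, 1): "single", (0, 2): "double"}
+print(wl_edge_label_step(adj, edge_label, {v: 0 for v in adj}))  # {0: 0, 1: 1, 2: 2}
+```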
+
+Datasets. We used several real-world datasets from the TUDataset [23] and the EGO-Nets datasets [14] for our experiments. See Appendices A and E for an overview of the datasets, as
+
+Table 1: Average classification accuracy and standard deviation (highest accuracies marked in bold).
+
+| Kernel | PTC_FM | KKI | EGO-1 | EGO-2 | EGO-3 | EGO-4 |
| WLST | ${64.16} \pm {1.30}$ | ${49.97} \pm {2.88}$ | ${51.30} \pm {2.42}$ | ${57.15} \pm {1.61}$ | ${56.15} \pm {1.67}$ | ${53.40} \pm {1.77}$ |
| DWL | ${64.18} \pm {1.46}$ | ${50.93} \pm {2.87}$ | ${55.80} \pm {1.35}$ | ${56.50} \pm {1.64}$ | ${55.90} \pm {1.64}$ | ${53.25} \pm {2.81}$ |
| RWL* | ${62.43} \pm {1.46}$ | ${46.54} \pm {4.03}$ | ${65.60} \pm {2.74}$ | ${70.20} \pm {1.36}$ | 67.60 ±1.07 | ${74.25} \pm {2.12}$ |
| WLOA | ${62.34} \pm {1.39}$ | ${48.72} \pm {4.05}$ | ${55.95} \pm {1.11}$ | ${60.30} \pm {2.00}$ | ${54.25} \pm {1.35}$ | ${52.30} \pm {2.29}$ |
| GWL | ${62.61} \pm {1.94}$ | $\mathbf{{57.79}} \pm {3.95}$ | ${67.95} \pm {2.05}$ | $\mathbf{{73.65}} \pm {1.86}$ | ${65.45} \pm {1.88}$ | 77.45 ±1.97 |
| GWLOA | $\mathbf{{64.58}} \pm {1.77}$ | ${47.47} \pm {2.41}$ | $\mathbf{{69.80}} \pm {1.65}$ | ${72.40} \pm {2.52}$ | ${67.45} \pm {1.69}$ | ${75.35} \pm {1.67}$ |
| Kernel | COLLAB | DD | IMDB-B | MSRC_9 | NCI1 | REDDIT-B |
| WLST | 78.98 ±0.22 | ${79.00} \pm {0.52}$ | ${72.01} \pm {0.80}$ | ${90.13} \pm {0.75}$ | ${85.96} \pm {0.18}$ | ${80.81} \pm {0.52}$ |
| DWL | ${78.93} \pm {0.18}$ | ${78.92} \pm {0.40}$ | ${72.36} \pm {0.56}$ | ${90.50} \pm {0.76}$ | ${85.68} \pm {0.18}$ | ${80.83} \pm {0.40}$ |
| RWL* | 77.94 ±0.38 | ${77.52} \pm {0.65}$ | ${72.96} \pm {0.86}$ | ${88.86} \pm {0.89}$ | ${79.45} \pm {0.32}$ | 77.69 ±0.31 |
| WLOA | ${80.81} \pm {0.22}$ | $\mathbf{{79.44}} \pm {0.31}$ | ${72.60} \pm {0.89}$ | ${90.68} \pm {0.92}$ | $\mathbf{{86.29}} \pm {0.13}$ | ${89.40} \pm {0.14}$ |
| GWL | ${80.62} \pm {0.33}$ | ${79.00} \pm {0.81}$ | 73.66 ±1.25 | ${88.32} \pm {1.20}$ | ${85.33} \pm {0.35}$ | ${86.46} \pm {0.35}$ |
| GWLOA | 81.30 ±0.29 | ${78.49} \pm {0.57}$ | ${72.88} \pm {0.79}$ | $\mathbf{{91.27}} \pm {1.06}$ | ${85.36} \pm {0.36}$ | 89.98 $\pm {0.34}$ |
+
+well as additional synthetic datasets and corresponding results. We selected these datasets as they cover a wide range of applications, consisting of both molecule datasets and graphs derived from social networks. See Appendix B for the number of Weisfeiler-Leman iterations needed to reach the stable coloring for each dataset.
+
+### 6.2 Results
+
+In the following, we present the classification accuracy, as well as running time, of the different kernel methods. We investigate the parameter selection for our algorithm and discuss the application of our approach for approximating the graph edit distance.
+
+Q1: Classification Accuracy. Table 1 shows the classification accuracy of the different kernels. While on some datasets our new approaches do not outcompete all state-of-the-art methods, they are more accurate in most cases, in some cases even with a large margin over the second-best method (for example on KKI, EGO-1 or EGO-4). While RWL* is better than our approaches on some datasets, the running time of this method is much higher, cf. Q3. WLOA also produces very good results on many datasets, but cannot compete on the EGO-Nets and synthetic datasets (see Appendix E). For the molecular graphs (PTC_FM, NCI1) we see no significant improvements, which can be explained by their small vertex degrees and the sensitivity of molecular properties to small structural changes. Overall, our method provides the highest accuracy on 9 of 12 datasets and is close to the best accuracy for the others.
+
+Q2: Parameter Selection. For GWL and GWLOA two parameters have to be chosen: the number of iterations and the number $k$ of clusters in $k$ -means. We investigate which choices lead to the best classification accuracy. Figure 3 shows the number of times a specific parameter combination was selected because it provided the best accuracy on the test set. Here, we only show the parameter selection for some of the datasets. The results for the other datasets, as well as the parameter selection for WLST and WLOA, can be found in Appendix C. We can see that for GWL and most datasets the best $k$ is in $\{ 2,4,8\}$ and on those datasets the classification accuracy of GWL exceeds that of WLST. On datasets on which GWL performed worse than WLST, the best choice of parameters is not clear, and it seems that a larger $k$ might be beneficial for improving the classification accuracy. Similar tendencies can be observed for GWLOA.
+
+Q3: Running Time. Figure 4 shows the time needed for computing the feature vectors using the different kernels (for results on the other datasets and the influence of the parameter $k$ on running time see Appendix D and F). RWL* and DWL are much slower than the other kernels, while only RWL* leads to minor improvements in classification accuracy on a few datasets. While our approach is only slightly slower than WLST/WLOA, it yields substantial improvements in classification accuracy on most datasets, cf. Q1.
+
+
+
+Figure 3: Number of times a parameter combination of GWL and GWLOA was selected from $k \in \{ 2,4,8,{16}\}$ and $\#$ iterations $\in \{ 0,\ldots ,{10}\}$ based on the accuracy achieved on the test set.
+
+
+
+Figure 4: Running time in milliseconds for computing the feature vectors for all graphs of a dataset using the different methods. Note that RWL* uses multi-threading, while the other methods do not. Missing values for RWL* and DWL in the larger datasets are due to timeout.
+
+Q4: Approximating the Graph Edit Distance. Table 2 compares the classification accuracy of our approach when approximating the graph edit distance to the original and another state-of-the-art method based on bipartite graph matching. Our approach clearly outcompetes both methods on all datasets.
+
+## 7 Conclusions
+
+We proposed a general framework for iterative vertex refinement generalizing the popular Weisfeiler-Leman algorithm and discussed connections to other vertex refinement strategies. Based on this, we proposed two new graph kernels and showed that they outperform the original Weisfeiler-Leman subtree kernel and similar state-of-the-art approaches in terms of classification accuracy in almost all cases, while keeping the running time much lower than that of comparable methods. We also investigated the application of our method to approximating the graph edit distance, where we again outperformed the state-of-the-art methods.
+
+In further research it might be interesting to systematically compare our approach to graph neural networks, since their message passing scheme is similar to the update strategy of the WL algorithm. Moreover, other renep functions can be explored, for example, by using other clustering strategies, or by developing new concepts for inexact neighborhood comparison.
+
+Table 2: Average classification accuracy and standard deviation (highest accuracies marked in bold).
+
+| Method | PTC_FM | MSRC_9 | KKI | EGO-1 | EGO-2 | EGO-3 | EGO-4 |
| BGM | ${60.14} \pm {1.50}$ | ${72.13} \pm {1.28}$ | ${43.89} \pm {1.27}$ | ${44.75} \pm {1.05}$ | ${42.05} \pm {1.25}$ | out of time | out of time |
| Lin | ${62.38} \pm {1.08}$ | ${81.36} \pm {0.64}$ | ${55.18} \pm {2.44}$ | ${40.40} \pm {1.17}$ | ${31.65} \pm {1.07}$ | ${26.60} \pm {0.94}$ | ${36.55} \pm {1.72}$ |
| GWLT | ${63.19} \pm {0.11}$ | $\mathbf{{85.97}} \pm {0.59}$ | ${55.18} \pm {2.44}$ | $\mathbf{{56.20}} \pm {1.42}$ | ${47.90}$ | ${36.40} \pm {1.04}$ | ${47.90} \pm {1.32}$ |
+
+## References
+
+[1] Brendan D. McKay and Adolfo Piperno. Practical graph isomorphism, II. J. Symb. Comput., 60:94-112, 2014. doi: 10.1016/j.jsc.2013.09.003. 1
+
+[2] Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler-Lehman graph kernels. J. Mach. Learn. Res., 12:2539-2561, 2011. URL http://dl.acm.org/citation.cfm?id=2078187. 1, 2, 3, 6, 7
+
+[3] Matteo Togninalli, M. Elisabetta Ghisu, Felipe Llinares-López, Bastian Rieck, and Karsten M. Borgwardt. Wasserstein Weisfeiler-Lehman graph kernels. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 6436-6446, 2019. 1
+
+[4] Dai Hai Nguyen, Canh Hao Nguyen, and Hiroshi Mamitsuka. Learning subtree pattern importance for Weisfeiler-Lehman based graph kernels. Mach. Learn., 110(7):1585-1607, 2021. 7
+
+[5] Nils M. Kriege, Pierre-Louis Giscard, and Richard C. Wilson. On valid optimal assignment kernels and applications to graph classification. In Advances in Neural Information Processing Systems, pages 1615-1623, 2016. 1, 2, 6, 7
+
+[6] Christopher Morris, Matthias Fey, and Nils M. Kriege. The power of the Weisfeiler-Leman algorithm for machine learning with graphs. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pages 4543-4550. ijcai.org, 2021. doi: 10.24963/ijcai.2021/618. 1
+
+[7] Christopher Morris, Yaron Lipman, Haggai Maron, Bastian Rieck, Nils M. Kriege, Martin Grohe, Matthias Fey, and Karsten M. Borgwardt. Weisfeiler and Leman go machine learning: The story so far. CoRR, abs/2112.09992, 2021. 1
+
+[8] Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 4602-4609. AAAI Press, 2019. doi: 10.1609/aaai.v33i01.33014602. 1
+
+[9] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations, ICLR. OpenReview.net, 2019. 1
+
+[10] Nils M. Kriege, Fredrik D. Johansson, and Christopher Morris. A survey on graph kernels. Applied Network Science, 5(1):6, 2020. doi: 10.1007/s41109-019-0195-3. 2
+
+[11] Karsten M. Borgwardt, M. Elisabetta Ghisu, Felipe Llinares-López, Leslie O'Bray, and Bastian Rieck. Graph kernels: State-of-the-art and future challenges. Found. Trends Mach. Learn., 13 (5-6), 2020. doi: 10.1561/2200000076. 2
+
+[12] Pinar Yanardag and S. V. N. Vishwanathan. Deep graph kernels. In Longbing Cao, Chengqi Zhang, Thorsten Joachims, Geoffrey I. Webb, Dragos D. Margineantu, and Graham Williams, editors, Proceedings of the 21 th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015, pages 1365-1374. ACM, 2015. doi: 10.1145/2783258.2783417. 2, 7
+
+[13] Annamalai Narayanan, Mahinthan Chandramohan, Lihui Chen, Yang Liu, and Santhoshkumar Saminathan. subgraph2vec: Learning distributed representations of rooted sub-graphs from large graphs. CoRR, abs/1606.08928, 2016. URL http://arxiv.org/abs/1606.08928. 2
+
+[14] Till Hendrik Schulz, Tamás Horváth, Pascal Welke, and Stefan Wrobel. A generalized Weisfeiler-Lehman graph kernel. Mach Learn, 2022. 2, 7, 12
+
+[15] Zhongxi Fang, Jianming Huang, Xun Su, and Hiroyuki Kasai. Wasserstein graph distance based on ${l}_{1}$ -approximated tree edit distance between Weisfeiler-Lehman subtrees. 2022. doi: 10.48550/ARXIV.2207.04216. URL https://arxiv.org/abs/2207.04216. 2
+
+[16] Martin Grohe, Kristian Kersting, Martin Mladenov, and Pascal Schweitzer. Color Refinement and Its Applications. In An Introduction to Lifted Probabilistic Inference. The MIT Press, 08 2021. ISBN 9780262365598. 3
+
+[17] Sandra Kiefer and Brendan D. McKay. The iteration number of colour refinement. In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, 47th International Colloquium on Automata, Languages, and Programming, ICALP 2020, July 8-11, 2020, Saarbrücken, Germany (Virtual Conference), volume 168 of LIPIcs, pages 73:1-73:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. doi: 10.4230/LIPIcs.ICALP.2020.73. URL https://doi.org/10.4230/LIPIcs.ICALP.2020.73. 3, 5
+
+[18] Christoph Berkholz, Paul S. Bonsma, and Martin Grohe. Tight lower and upper bounds for the complexity of canonical colour refinement. Theory Comput. Syst., 60(4):581-614, 2017. doi: 10.1007/s00224-016-9686-0. URL https://doi.org/10.1007/s00224-016-9686-0. 5
+
+[19] Robert Paige and Robert Endre Tarjan. Three partition refinement algorithms. SIAM J. Comput., 16(6):973-989, 1987. doi: 10.1137/0216062. URL https://doi.org/10.1137/0216062.
+
+[20] Tommi A. Junttila and Petteri Kaski. Engineering an efficient canonical labeling tool for large and sparse graphs. In Proceedings of the Ninth Workshop on Algorithm Engineering and Experiments, ALENEX 2007, New Orleans, Louisiana, USA, January 6, 2007. SIAM, 2007. doi: 10.1137/1.9781611972870.13. URL https://doi.org/10.1137/1.9781611972870.13. 5
+
+[21] Nils M. Kriege, Pierre-Louis Giscard, Franka Bause, and Richard C. Wilson. Computing optimal assignments in linear time for approximate graph matching. In ICDM, pages 349-358, 2019. 7
+
+[22] Kaspar Riesen and Horst Bunke. Approximate graph edit distance computation by means of bipartite graph matching. Image Vision Comput., 27(7):950-959, 2009. 7
+
+[23] Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. TUDataset: A collection of benchmark datasets for learning with graphs. In ICML 2020 Workshop on Graph Representation Learning and Beyond, GRL+, 2020. URL http://www.graphlearning.io. 7, 12
+
+Table 3: Datasets with discrete vertex and edge labels and their statistics [23]. The EGO-Nets datasets [14] are unlabeled.
+
+| Name | Graphs | Classes | avg $\left| V\right|$ | avg $\left| E\right|$ | $\left| {L}_{V}\right|$ | $\left| {L}_{E}\right|$ |
| KKI | 83 | 2 | 26.96 | 48.42 | 190 | - |
| PTC_FM | 349 | 2 | 14.11 | 14.48 | 18 | 4 |
| COLLAB | 5000 | 3 | 74.49 | 2457.78 | - | - |
| DD | 1178 | 2 | 284.32 | 715.66 | 82 | - |
| IMDB-BINARY | 1000 | 2 | 19.77 | 96.53 | - | - |
| MSRC_9 | 221 | 2 | 40.58 | 97.94 | 10 | - |
| NCI1 | 4110 | 2 | 29.87 | 32.30 | 37 | - |
| REDDIT-BINARY | 2000 | 2 | 429.63 | 497.75 | - | - |
| EGO-1 | 200 | 4 | 138.97 | 593.53 | - | - |
| EGO-2 | 200 | 4 | 178.55 | 1444.86 | - | - |
| EGO-3 | 200 | 4 | 220.01 | 2613.49 | - | - |
| EGO-4 | 200 | 4 | 259.78 | 4135.80 | - | - |
+
+Table 4: Number of iterations needed to reach the stable coloring.
+
+| Dataset | KKI | PTC_FM | COLLAB | DD | IMDB-B | MSRC_9 |
| WL | 3 | 13 | - | - | 3 | 3 |
| Dataset | NCI1 | REDDIT-B | EGO-1 | EGO-2 | EGO-3 | EGO-4 |
| WL | 39 | - | 5 | 4 | 4 | 5 |
+
+## A Datasets
+
+We used several real-world datasets from the TUDataset [23], the EGO-Nets datasets [14], as well as synthetic datasets for our experiments. See Table 3 for an overview of the real-world datasets. We selected these datasets as they cover a wide range of applications, consisting of both molecule datasets and graphs derived from social networks.
+
+The synthetic datasets were generated using the block graph generation method [14]. We generated 9 synthetic datasets with two classes and 200 graphs in each class. For each dataset we first generated two seed graphs (one per class) with 16 vertices, both constructed from a tree by appending a single edge so that their sets of vertex degrees are equal. For the dataset graphs, each vertex of the seed graph was replaced by 8 vertices. Vertices generated from the same seed vertex, as well as from adjacent seed vertices, are connected with probability $p$ ; then $m$ noise edges are added randomly. We investigated the two cases $p = {1.0}$ and $m \in \{ 0,{10},{20},{50},{100}\}$ , and $p \in \{ {1.0},{0.8},{0.6},{0.4},{0.2}\}$ and $m = 0$ . We denote the datasets by $S\_p\_m$ .
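+The sketch below illustrates our reading of this block expansion; it is an illustration only, not the generator used in [14], and the names `expand_seed_graph` and `block_size` are introduced here for convenience.
+
+```python
+import itertools
+import random
+
+# Illustrative sketch of the block expansion described above: every seed vertex becomes a
+# block of `block_size` vertices; vertex pairs within a block and between blocks of adjacent
+# seed vertices are connected with probability p; finally m random noise edges are added.
+
+def expand_seed_graph(seed_edges, num_seed_vertices, block_size=8, p=1.0, m=0, rng=random):
+    n = num_seed_vertices * block_size
+    block = lambda s: range(s * block_size, (s + 1) * block_size)
+    candidate_pairs = []
+    for s in range(num_seed_vertices):
+        candidate_pairs += itertools.combinations(block(s), 2)
+    for s, t in seed_edges:
+        candidate_pairs += itertools.product(block(s), block(t))
+    edges = set()
+    for u, v in candidate_pairs:
+        if rng.random() < p:
+            edges.add((min(u, v), max(u, v)))
+    while m > 0:  # add m noise edges between previously unconnected vertex pairs
+        u, v = rng.sample(range(n), 2)
+        e = (min(u, v), max(u, v))
+        if e not in edges:
+            edges.add(e)
+            m -= 1
+    return edges
+
+# Toy example with a 4-vertex seed graph (a path plus one extra edge) instead of 16 vertices:
+print(len(expand_seed_graph([(0, 1), (1, 2), (2, 3), (0, 2)], 4, block_size=8, p=1.0, m=10)))
+```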
+
+## B Number of Weisfeiler-Leman Iterations
+
+We investigate the number of WL iterations needed to reach the stable coloring on the various datasets. On the datasets with - entries (see Table 4), our algorithm, which checks whether the stable coloring has been reached, did not finish in reasonable time due to the size of the datasets and graphs. It can be seen that on most datasets the number of iterations needed to reach the stable coloring is very low. This means that after a few iterations we do not gain any new information when using, for example, the traditional WLST kernel.
+
+## C Parameter Selection - Further Results
+
+Figures 5 and 6 show the parameter selection for the remaining datasets for GWL and GWLOA. In most datasets the choice is restricted to two or three values. For some of the datasets, the best choice seems to include $k = {16}$ . This might indicate that a larger $k$ could be beneficial for increasing the accuracy.
+
+
+
+Figure 5: With $k \in \{ 2,4,8,{16}\}$ and $\#$ iterations $\in \{ 0,\ldots ,{10}\}$ , we show the number of times a specific parameter combination for GWL was selected as it provided the best accuracy for the test set.
+
+
+
+Figure 6: With $k \in \{ 2,4,8,{16}\}$ and $\#$ iterations $\in \{ 0,\ldots ,{10}\}$ , we show the number of times a specific parameter combination for GWLOA was selected as it provided the best accuracy for the test set.
+
+Figure 7 shows the parameter selection for WLST and WLOA. There is only one parameter, the number of WL iterations, for both kernels. We can see that indeed on most datasets only a few iterations are needed to reach the best possible accuracy. On datasets such as EGO-2, EGO-4 or NCI1, however, this is not the case. For NCI1 we can assume that we still gain information through more iterations, since the stable coloring has not yet been reached. For the EGO datasets this is surprising, since the stable coloring is reached after 4 (5) iterations; however, the classification accuracy reached there is still low.
+
+
+
+Figure 7: With $\#$ iterations $\in \{ 0,\ldots ,{10}\}$ , we show the number of times a specific parameter combination for WLST and WLOA was selected as it provided the best accuracy for the test set.
+
+
+
+Figure 8: Running time in milliseconds for computing the feature vectors using the different methods. Note that RWL* uses multi-threading, while the other methods do not. Missing values for RWL* and DWL in the larger datasets are due to timeout.
+
+Table 5: Average classification accuracy and standard deviation on the synthetic datasets.
+
+| Kernel | S_1_0 | S_1_10 | S_1_20 | S_1_50 | S_1_100 | S_0.8_0 | S_0.6_0 | S_0.4_0 | S_0.2_0 |
| WLST | 100.00±0.00 | ${98.68} \pm {0.34}$ | ${61.93} \pm {1.08}$ | ${54.55} \pm {0.68}$ | ${49.78} \pm {1.18}$ | ${50.65} \pm {1.57}$ | ${48.10} \pm {1.10}$ | ${51.98} \pm {1.40}$ | ${42.65} \pm {1.80}$ |
| DWL | 100.00±0.00 | ${98.70} \pm {0.31}$ | ${62.10} \pm {0.98}$ | ${43.83} \pm {1.66}$ | ${51.40} \pm {0.94}$ | ${49.80} \pm {2.01}$ | ${46.88} \pm {1.89}$ | ${49.05} \pm {1.98}$ | ${42.85} \pm {1.98}$ |
| RWL* | 100.00±0.00 | 100.00±0.00 | ${100.00} \pm {0.00}$ | out of time | 100.00±0.00 | 100.00±0.00 | $\mathbf{{99.35}} \pm {0.17}$ | $\mathbf{{81.93}} \pm {1.00}$ | 56.33 $\pm$ 2.48 |
| WLOA | 100.00±0.00 | ${97.65} \pm {0.44}$ | ${60.85} \pm {1.65}$ | ${47.50} \pm {1.83}$ | ${50.23} \pm {1.24}$ | ${49.45} \pm {1.47}$ | ${48.08} \pm {1.92}$ | ${43.23} \pm {1.21}$ | ${50.53} \pm {1.92}$ |
| GWL | 100.00±0.00 | ${100.00} \pm {0.00}$ | ${100.00} \pm {0.00}$ | ${100.00} \pm {0.00}$ | ${100.00} \pm {0.00}$ | ${100.00} \pm {0.00}$ | ${92.20} \pm {0.97}$ | ${72.53} \pm {1.78}$ | ${53.58} \pm {1.71}$ |
| GWLOA | ${100.00} \pm {0.00}$ | 100.00±0.00 | ${100.00} \pm {0.00}$ | 100.00±0.00 | ${100.00} \pm {0.00}$ | 100.00±0.00 | ${94.95} \pm {0.59}$ | ${72.03} \pm {1.66}$ | ${50.90} \pm {2.63}$ |
+
+## D Running Time - Further Results
+
+Figure 8 shows the running time results for the remaining datasets. While the running time of our approach exceeds that of DWL on some of the larger datasets, it improves the classification accuracy considerably.
+
+## E Results on Synthetic Datasets
+
+Table 5 shows the classification accuracy of the different methods on the synthetic datasets, with the best accuracy for each dataset marked in bold. We can see that, while all kernels learn the noise-free datasets perfectly, neither WLST, WLOA nor DWL can cope with the noise in the other datasets, and their accuracy deteriorates with increasing noise. Although the decrease in accuracy with decreasing edge probability is slightly worse for our approach than for RWL*, our approach has a much lower running time.
+
+## F Influence of Parameter $k$ on Running Time
+
+We investigate the effect of the choice of $k$ on the running time. Figure 9 shows the time needed for computing the feature vectors using our kernels with $k \in \{ 2,4,8,{16}\}$ . The difference in running time between GWL and GWLOA is only marginal on most datasets; only on KKI and NCI1 can a larger difference be seen. As expected, the running time of both kernels increases with increasing $k$ . Interestingly, for larger $k$ the running time hardly increases anymore after a certain number of iterations; this might be because the stable partitioning has been reached by then.
+
+
+
+Figure 9: Running time in milliseconds for computing the feature vectors using the different values for parameter $k$ on our newly proposed methods.
+
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/fe1DEN1nds/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/fe1DEN1nds/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..64b4752111aaf6f41b6597bd0a66877739e2c6db
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/fe1DEN1nds/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,269 @@
+§ GRADUAL WEISFEILER-LEMAN: SLOW AND STEADY WINS THE RACE
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+The classical Weisfeiler-Leman algorithm aka color refinement is fundamental for graph learning and central for successful graph kernels and graph neural networks. Originally developed for graph isomorphism testing, the algorithm iteratively refines vertex colors. On many datasets, the stable coloring is reached after a few iterations and the optimal number of iterations for machine learning tasks is typically even lower. This suggests that the colors diverge too fast, defining a similarity that is too coarse. We generalize the concept of color refinement and propose a framework for gradual neighborhood refinement, which allows a slower convergence to the stable coloring and thus provides a more fine-grained refinement hierarchy and vertex similarity. We assign new colors by clustering vertex neighborhoods, replacing the original injective color assignment function. Our approach is used to derive new variants of existing graph kernels and to approximate the graph edit distance via optimal assignments regarding vertex similarity. We show that in both tasks, our method outperforms the original color refinement with only moderate increase in running time advancing the state of the art.
+
+§ 1 INTRODUCTION
+
+The (1-dimensional) Weisfeiler-Leman algorithm, also referred to as color refinement, iteratively refines vertex colors by encoding colors of neighbors and was originally developed as a heuristic for the graph isomorphism problem. Although it cannot distinguish some non-isomorphic graph pairs, for example strongly regular graphs, it succeeds in many cases. It is widely used as a sub-routine in isomorphism algorithms today to reduce ambiguities that have to be resolved by backtracking search [1]. It has also gained high popularity in graph learning, where the technique is used to define graph kernels [2-5] and to formalize the expressivity of graph neural networks, see the recent surveys [6, 7]. Graph kernels based on Weisfeiler-Leman refinement provide remarkable predictive performance while being computationally highly efficient. The original Weisfeiler-Leman subtree kernel [2] and its variants and extensions, e.g., [3-5], provide state-of-the-art classification accuracy on many datasets and are widely used baselines. The update scheme of the Weisfeiler-Leman algorithm is similar to the idea of neighborhood aggregation in graph neural networks (GNNs). It has been shown that (i) the expressive power of GNNs is limited by the Weisfeiler-Leman algorithm, and (ii) that GNN architectures exist that reach this expressive power [8, 9].
+
+As a consequence of its original application, the Weisfeiler-Leman algorithm assigns discrete colors and does not distinguish between minor and major differences in vertex neighborhoods but considers two colors as either the same or different. Most Weisfeiler-Leman graph kernels match vertex colors of the first few refinement steps by equality, which can be considered too rigid, since these colors encode complex neighborhood structures. In machine learning tasks, a more fine-grained differentiation appears promising. Data is often noisy, which in graphs can manifest, for example, as small differences in vertex degree. Such differences are picked up by the refinement strategy of the Weisfeiler-Leman algorithm and cannot be distinguished from significant differences.
+
+We address this problem by providing a different approach to the refinement step of the Weisfeiler-Leman algorithm: we replace the injective relabeling function with a non-injective one to obtain a more gradual refinement of colors. This yields a finer vertex similarity measure that can distinguish between large and small changes in vertex neighborhoods with increasing radius. We characterize the set of functions that, while not necessarily injective, guarantee that the stable coloring of the original Weisfeiler-Leman algorithm is reached after a possibly higher number of iterations. Thus, our approach preserves the expressive power of the Weisfeiler-Leman algorithm. We discuss possible realizations of such a function and use $k$ -means clustering as an exemplary one in our experimental evaluation.
+
+§ OUR CONTRIBUTION.
+
+1. We propose refining, neighborhood preserving (renep) functions, which generalize the concept of color refinement. This family of functions leads to the coarsest stable coloring while only incorporating direct neighborhoods.
+
+2. We show the connections of our approach to the original Weisfeiler-Leman algorithm, as well as other vertex refinement strategies.
+
+3. We propose two new graph kernels based on renep functions, that outperform state-of-the-art kernels on synthetic and real-world datasets, with only moderate increase in running time.
+
+4. We apply our new approach for approximating the graph edit distance via bipartite graph matching and show that it outperforms state-of-the-art heuristics.
+
+§ 2 RELATED WORK
+
+Various graph kernels based on the standard Weisfeiler-Leman refinement have been proposed [2-5]. Recent comprehensive experimental evaluations confirm their high classification accuracy on many real-world datasets $\left\lbrack {{10},{11}}\right\rbrack$ . These approaches implicitly match colors by equality, which can be considered as too rigid, since colors encode unfolding trees representing complex neighborhood structures. Some recent works address this problem: Yanardag and Vishwanathan [12] introduced similarities between colors using techniques inspired by natural language processing, that were subsequently refined by Narayanan et al. [13]. Schulz et al. [14] define a distance function between colors by comparing the associated unfolding trees using a tree edit distance. Based on this distance the colors are clustered to obtain a new graph kernel. Although the tree edit distance is polynomial-time computable, the running time of the algorithm is very high. A kernel based on the Wasserstein distance of sets of unfolding trees was proposed by Fang et al. [15]. The vertices of the graphs are embedded into ${\ell }_{1}$ space using an approximation of the tree edit distance between their unfolding trees. A graph can then be seen as a distribution over those embeddings. While the function proposed is not guaranteed to be positive semi-definite, the method showed results similar to and in some cases exceeding state-of-the-art techniques. The running time, however, is still very high and the method is only feasible for unfolding trees of small height.
+
+These approaches define similarities between Weisfeiler-Leman colors and the associated unfolding trees. Our approach, in contrast, alters the Weisfeiler-Leman refinement procedure itself and does not rely on computationally expensive matching of unfolding trees.
+
+§ 3 PRELIMINARIES
+
+In this section we provide the definitions necessary to understand our new vertex refinement algorithm. We first give a short introduction to graphs and the original Weisfeiler-Leman algorithm, before we cover graph kernels.
+
+Graph Theory. A graph $G = \left( {V,E,\mu ,\nu }\right)$ consists of a set of vertices $V$ , denoted by $V\left( G\right)$ , a set of edges $E\left( G\right) = E \subseteq V \times V$ between the vertices, a labeling function for the vertices $\mu : V \rightarrow L$ , and a labeling function for the edges $\nu : E \rightarrow L$ . We discuss only undirected graphs and denote an edge between $u$ and $v$ by ${uv}$ . The set of neighbors of a vertex $v \in V$ is denoted by $N\left( v\right) = \{ u \mid {uv} \in E\}$ . The set $L$ contains categorical labels. A (rooted) tree $T$ is a simple (no self-loops or multi-edges), connected graph without cycles and with a designated root node $r$ . A tree ${T}^{\prime }$ is a subtree of a tree $T$ , denoted by ${T}^{\prime } \subseteq T$ , iff $V\left( {T}^{\prime }\right) \subseteq V\left( T\right)$ . The root of ${T}^{\prime }$ is the node closest to the root in $T$ . A partitioning $\pi$ of a set $S$ is a set $\left\{ {{S}_{1},\ldots ,{S}_{n}}\right\}$ of non-empty subsets of $S$ , such that $\forall i,j \in \{ 1,\ldots ,n\} ,i \neq j : {S}_{i} \cap {S}_{j} = \varnothing$ and $\mathop{\bigcup }\limits_{{i = 1}}^{n}{S}_{i} = S$ . For $s \in S$ we denote by $\pi \left( s\right)$ the unique identifier of the subset containing $s$ . For ${s}_{1},{s}_{2} \in S$ with $\pi \left( {s}_{1}\right) = \pi \left( {s}_{2}\right)$ , we also write ${s}_{1}{ \approx }_{\pi }{s}_{2}$ .
+
+ < g r a p h i c s >
+
+Figure 1: Initial coloring and results of the first three iterations of the Weisfeiler-Leman algorithm. To use fewer colors in this example, vertices with a unique color do not get a new color. The color hierarchy shows the development of the colors over the refinement iterations.
+
+A vertex coloring $c : V\left( G\right) \rightarrow {\mathbb{N}}_{0}$ of a graph $G$ is a function assigning each vertex a color. A coloring $c$ can be interpreted as a partitioning ${\pi }_{c}$ of $V\left( G\right)$ with $v{ \approx }_{{\pi }_{c}}w \Leftrightarrow c\left( v\right) = c\left( w\right)$ for all $v,w$ in $V\left( G\right)$ . When it is clear from the context, we use colorings and their corresponding partitions interchangeably. A coloring $\pi$ is a refinement of (or refines) a coloring ${\pi }^{\prime }$ , iff ${s}_{1}{ \approx }_{\pi }{s}_{2} \Rightarrow {s}_{1}{ \approx }_{{\pi }^{\prime }}{s}_{2}$ for all ${s}_{1},{s}_{2}$ in $S$ . We denote this by $\pi \preccurlyeq {\pi }^{\prime }$ and write $\pi \equiv {\pi }^{\prime }$ if $\pi \preccurlyeq {\pi }^{\prime }$ and ${\pi }^{\prime } \preccurlyeq \pi$ . If $\pi \preccurlyeq {\pi }^{\prime }$ and $\pi ≢ {\pi }^{\prime }$ , we say that $\pi$ is a strict refinement of ${\pi }^{\prime }$ , written $\pi \prec {\pi }^{\prime }$ . The refinement relation defines a partial ordering on the colorings.
+
+Color Hierarchy. We consider a sequence of vertex colorings $\left( {{\pi }_{0},{\pi }_{1},\ldots ,{\pi }_{h}}\right)$ with ${\pi }_{h} \preccurlyeq \cdots \preccurlyeq {\pi }_{0}$ and assume that the colors assigned by ${\pi }_{i}$ and ${\pi }_{j}$ are distinct unless $i = j$ or the associated vertex sets are equal, i.e., $\forall {\pi }_{i},{\pi }_{j} : {\pi }_{i}\left( v\right) = {\pi }_{j}\left( v\right) \Rightarrow \left\{ {w \in V\left( G\right) \mid {\pi }_{i}\left( w\right) = {\pi }_{i}\left( v\right) }\right\} = \{ w \in V\left( G\right) \mid$ $\left. {{\pi }_{j}\left( w\right) = {\pi }_{j}\left( v\right) }\right\}$ . We can interpret such a sequence of colorings as a color hierarchy, i.e., a tree ${\mathcal{T}}_{h}$ that contains a node for each color $c \in \left\{ {{\pi }_{i}\left( v\right) \mid i \in \{ 0,\ldots ,h\} \land v \in V\left( G\right) }\right\}$ and an edge(c, d)iff $\exists v \in V\left( G\right) : {\pi }_{i}\left( v\right) = c \land {\pi }_{i + 1}\left( v\right) = d$ . We associate each tree node with the set of vertices of $G$ having that color. Here, we assume that the initial coloring is uniform corresponding to the trivial vertex partitioning. If this is not the case, we add an artificial root node and connect it to the initial colors. Likewise we insert the coloring ${\pi }_{0} = \{ V\left( G\right) \}$ as first element in the sequence of vertex colorings. An example color hierarchy is given in Figure 1.
+
+Using this color hierarchy we can derive multiple colorings on the vertices: Choosing exactly one color on every path from the leaves to the root (or only the root), always leads to a valid coloring. The finest coloring is induced by the colors representing the leaves of the tree. Given a color hierarchy $T$ , we denote this coloring (which is equal to ${\pi }_{h}$ ) by ${\pi }_{T}$ .
+
+Weisfeiler-Leman Color Refinement. The 1-dimensional Weisfeiler-Leman (WL) algorithm or color refinement [16] starts with a coloring ${c}_{0}$ , where all vertices have a color representing their label (or a uniform coloring in case of unlabeled vertices). In iteration $i$ , the coloring ${c}_{i}$ is obtained by assigning each vertex $v$ in $V\left( G\right)$ a new color according to the colors of its neighbors, i.e.,
+
+$$
+{c}_{i + 1}\left( v\right) = h\left( {{c}_{i}\left( v\right) ,\left\{ \left\{ {{c}_{i}\left( u\right) \mid u \in N\left( v\right) }\right\} \right\} }\right) ,
+$$
+
+where $h : {\mathbb{N}}_{0} \times {\mathbb{N}}_{0}^{{\mathbb{N}}_{0}} \rightarrow {\mathbb{N}}_{0}$ is an injective function. Figure 1 depicts the first iterations of the algorithm for an example graph.
+
+After enough iterations the number of different colors will no longer change and this resulting coloring is called the coarsest stable coloring. The coarsest stable coloring is unique and always reached after at most $\left| {V\left( G\right) }\right| - 1$ iterations. This trivial upper bound on the number of iterations is tight [17]. In practice, however, Weisfeiler-Leman refinement converges much faster (see Appendix B).
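+For concreteness, the following is a minimal sketch of this refinement loop. The dictionary-based data layout is an assumption made for illustration, and the injective function $h$ is emulated by interning signatures.
+
+```python
+# Minimal sketch of Weisfeiler-Leman color refinement, run until the coloring is stable.
+# `adj` maps each vertex to its neighbors; `labels` gives the initial vertex colors.
+
+def weisfeiler_leman(adj, labels):
+    color = dict(labels)
+    while True:
+        table, new_color = {}, {}
+        for v, neighbors in adj.items():
+            signature = (color[v], tuple(sorted(color[u] for u in neighbors)))
+            new_color[v] = table.setdefault(signature, len(table))
+        if len(set(new_color.values())) == len(set(color.values())):
+            return color            # the number of colors no longer changes: stable coloring
+        color = new_color
+
+# A path on four unlabeled vertices: the end vertices and the middle vertices end up
+# in different color classes.
+adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
+print(weisfeiler_leman(adj, {v: 0 for v in adj}))  # {0: 0, 1: 1, 2: 1, 3: 0}
+```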
+
+Graph Kernels and the Weisfeiler-Leman Subtree Kernel. A kernel on $X$ is a function $k : X \times$ $X \rightarrow \mathbb{R}$ , so that there exist a Hilbert space $\mathcal{H}$ and a mapping $\phi : X \rightarrow \mathcal{H}$ with $k\left( {x,y}\right) = \langle \phi \left( x\right) ,\phi \left( y\right) \rangle$ for all $x,y$ in $X$ , where $\langle \cdot , \cdot \rangle$ is the inner product of $\mathcal{H}$ . A graph kernel is a kernel on graphs, i.e., $X$ is the set of all graphs.
+
+The Weisfeiler-Leman subtree kernel [2] with height $h$ is defined as
+
+$$
+{k}_{ST}^{h}\left( {{G}_{1},{G}_{2}}\right) = \mathop{\sum }\limits_{{i = 0}}^{h}\mathop{\sum }\limits_{{u \in V\left( {G}_{1}\right) }}\mathop{\sum }\limits_{{v \in V\left( {G}_{2}\right) }}\delta \left( {{c}_{i}\left( u\right) ,{c}_{i}\left( v\right) }\right) , \tag{1}
+$$
+
+where $\delta$ is the Dirac kernel (1, iff ${c}_{i}\left( u\right)$ and ${c}_{i}\left( v\right)$ are equal, and 0 otherwise). It counts the number of vertices with common colors in the two graphs up to the given bound on the number of Weisfeiler-Leman iterations.
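+Since the double sum over the Dirac kernel equals, per iteration, the dot product of the two graphs' color histograms, Equation (1) can be evaluated as sketched below. This is an illustration only; it assumes the colorings of both graphs were computed with a shared color table so that equal colors carry equal names.
+
+```python
+from collections import Counter
+
+# Minimal sketch of the Weisfeiler-Leman subtree kernel of Equation (1): the arguments are
+# the lists [c_0, ..., c_h] of vertex colorings of the two graphs (colors shared across graphs).
+
+def wl_subtree_kernel(colorings_g1, colorings_g2):
+    value = 0
+    for c1, c2 in zip(colorings_g1, colorings_g2):
+        h1, h2 = Counter(c1.values()), Counter(c2.values())
+        value += sum(h1[color] * h2[color] for color in h1)
+    return value
+
+# Two toy graphs with h = 1: per-iteration colorings given explicitly.
+g1 = [{0: "a", 1: "a"}, {0: "x", 1: "y"}]
+g2 = [{0: "a", 1: "a", 2: "a"}, {0: "x", 1: "x", 2: "y"}]
+print(wl_subtree_kernel(g1, g2))  # 2*3 (iteration 0) + 1*2 + 1*1 (iteration 1) = 9
+```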
+
+ < g r a p h i c s >
+
+Figure 2: Initial coloring and results of the first iteration using WL and GWL refinement. We assume that the update function of GWL is a clustering algorithm, producing two clusters per old color. Vertices colored gray and yellow by WL are put into the same cluster, as well as green and light blue ones, as their neighbor color multisets only differ by one element each.
+
+§ 4 GRADUAL WEISFEILER-LEMAN REFINEMENT
+
+As a different approach to the refinement step of the Weisfeiler-Leman algorithm, we essentially replace the injective relabeling function with a non-injective one. We do this by allowing vertices with differing neighbor color multisets to be assigned the same color under some conditions. Through this, the number of colors per iteration can be limited, allowing to obtain a more gradual refinement of colors. To reach the same stable coloring as the original Weisfeiler-Leman algorithm, the function has to assure that vertices with differing colors in one iteration will get differing colors in future iterations and that in each iteration at least one color is split up, if possible.
+
+We first define the property necessary to reach the stable coloring of the original Weisfeiler-Leman algorithm and discuss connections to the original as well as other vertex refinement algorithms. Then we provide a realization of such a function by means of clustering, which is used in our experimental evaluation. Figure 2 illustrates our idea. It depicts the initial coloring, the result of the first iteration of WL and a possible result of the first iteration of the gradual Weisfeiler-Leman refinement (GWL), when restricting the maximum number of new colors to 2 by clustering the neighbor color multisets.
+
+Update Functions. Using the same approach as the Weisfeiler-Leman algorithm, the color of a vertex is updated iteratively according to the colors of its neighbors. Let ${\mathcal{T}}_{i}$ denote a color hierarchy belonging to $G$ and ${n}_{i}\left( v\right) = \left\{ \left\{ {{\pi }_{{\mathcal{T}}_{i}}\left( x\right) \mid x \in N\left( v\right) }\right\} \right\}$ the neighbor color multiset of $v$ in iteration $i$ . We use a similar update strategy, but generalize it using a special type of function:
+
+$$
+\forall v \in V\left( G\right) : {c}_{i + 1}\left( v\right) = {\pi }_{{\mathcal{T}}_{i + 1}}\left( v\right) \text{ , with }{\mathcal{T}}_{i + 1} = f\left( {G,{\mathcal{T}}_{i}}\right) ,
+$$
+
+where $f$ is a refining, neighborhood preserving function.
+
+A refining, neighborhood preserving (renep) function $f$ maps a pair $\left( {G,{\mathcal{T}}_{i}}\right)$ to a tree ${\mathcal{T}}_{i + 1}$ , such that
+
+1. ${\mathcal{T}}_{i} \subseteq {\mathcal{T}}_{i + 1}$
+
+2. ${\mathcal{T}}_{i} = {\mathcal{T}}_{i + 1}$ , iff $\forall v,w \in V\left( G\right) .v{ \approx }_{{\pi }_{{\mathcal{T}}_{i}}}w \Rightarrow {n}_{i}\left( v\right) = {n}_{i}\left( w\right)$
+
+3. ${\mathcal{T}}_{i} \subsetneq {\mathcal{T}}_{i + 1} \Rightarrow {\pi }_{{\mathcal{T}}_{i + 1}} \prec {\pi }_{{\mathcal{T}}_{i}}$
+
+4. $\forall v,w \in V\left( G\right) : \left( {v{ \approx }_{{\pi }_{{\mathcal{T}}_{i}}}w \land {n}_{i}\left( v\right) = {n}_{i}\left( w\right) }\right) \Rightarrow v{ \approx }_{{\pi }_{{\mathcal{T}}_{i + 1}}}w$
+
+The conditions assure that the coloring ${\pi }_{{\mathcal{T}}_{i + 1}}$ is a strict refinement of ${\pi }_{{\mathcal{T}}_{i}}$ if there exists a strict refinement: Condition 1 assures that the new coloring is a refinement of the old one. Condition 2 assures that the tree (and in turn the coloring) only stays the same iff the stable coloring is reached, while condition 3 assures that, if the trees are not equal, ${\pi }_{{\mathcal{T}}_{i + 1}}$ is a strict refinement of ${\pi }_{{\mathcal{T}}_{i}}$ . Without this condition it would be possible to obtain a tree that fulfills condition 1 but does not strictly refine the coloring (for example by adding one child to each leaf). Condition 4 assures that vertices that are indistinguishable regarding their color and their neighbor color multiset get the same color (as in the original Weisfeiler-Leman algorithm).
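+As an illustration (not part of the original formalization), the sketch below checks a single refinement step against these conditions, stated directly on the colorings rather than on the color-hierarchy trees; condition 3 is then implied by condition 1 together with the two partitions being different.
+
+```python
+from collections import Counter
+
+# Illustrative check whether a step from coloring `old` to coloring `new` on graph `adj`
+# is consistent with the renep conditions (formulated on colorings instead of trees).
+
+def neighbor_multiset(adj, coloring, v):
+    return frozenset(Counter(coloring[u] for u in adj[v]).items())
+
+def check_renep_step(adj, old, new):
+    refines = all(new[v] != new[w] or old[v] == old[w] for v in adj for w in adj)  # condition 1
+    same_partition = all((new[v] == new[w]) == (old[v] == old[w]) for v in adj for w in adj)
+    stable = all(old[v] != old[w] or neighbor_multiset(adj, old, v) == neighbor_multiset(adj, old, w)
+                 for v in adj for w in adj)
+    preserving = all(not (old[v] == old[w] and
+                          neighbor_multiset(adj, old, v) == neighbor_multiset(adj, old, w))
+                     or new[v] == new[w] for v in adj for w in adj)                # condition 4
+    return refines and (same_partition == stable) and preserving                  # conditions 2/3
+
+adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
+print(check_renep_step(adj, {v: 0 for v in adj}, {0: 0, 1: 1, 2: 1, 3: 0}))  # True
+```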
+
+We call this new approach gradual Weisfeiler-Leman refinement (GWL refinement). Since $f$ is a renep function, it is assured that at least one color is split into at least two new colors, if the stable coloring is not yet reached. This property and its implications are explored in the following section.
+
+Usually, the refinement is computed simultaneously for multiple graphs. This can be realized by using the disjoint union of all graphs as input. Note that this will have an influence on the function $f$ , since refinements might differ based on the vertices involved. This is a typical case of transductive learning, because the algorithm has to run on all graphs and if a new graph is encountered, the algorithm has to run again on the enlarged graph.
+
+§ 4.1 EQUIVALENCE OF THE STABLE COLORINGS
+
+The gradual color refinement will never assign two vertices the same color, if their colors differed in the previous iteration, since we require the coloring to be a refinement of the previous one. We can show that the stable coloring obtained by GWL refinement using any renep function is equal to the unique coarsest stable coloring, which is obtained by the original Weisfeiler-Leman algorithm.
+
+Theorem 1 ([18], Proposition 3). For every coloring $\pi$ of $V\left( G\right)$ , there is a unique coarsest stable coloring $p$ that refines $\pi$ .
+
+This means GWL with any renep function, should it reach a coarsest stable coloring, will reach this unique coarsest stable coloring. It remains to show that GWL will reach a coarsest stable coloring.
+
+Theorem 2. For all $G$ the ${GWL}$ refinement using any renep function will find the unique coarsest stable coloring of $V\left( G\right)$ .
+
+Proof. Let ${\pi }_{\mathcal{T}} = \left\{ {{p}_{1},\ldots ,{p}_{n}}\right\}$ be the stable coloring obtained from GWL on the initial coloring ${\pi }_{0}$ . Assume there exists another stable coloring ${\pi }^{\prime } = \left\{ {{p}_{1}^{\prime },\ldots ,{p}_{m}^{\prime }}\right\}$ with ${\pi }_{\mathcal{T}} \prec {\pi }^{\prime } \preccurlyeq {\pi }_{0}$ , so $m < n$ . Then $\exists v,w \in V\left( G\right) .\left( {v{ \approx }_{{\pi }^{\prime }}w \land v{ \not\approx }_{{\pi }_{\mathcal{T}}}w}\right)$ and, since condition 4 applies, $n\left( v\right) \neq n\left( w\right)$ , which contradicts the assumption that ${\pi }^{\prime }$ is stable.
+
+The original Weisfeiler-Leman refinement can be realized by using the renep function with $\Leftrightarrow$ instead of $\Rightarrow$ in condition 4. This ensures that vertices get assigned the same color iff they previously had the same color and their neighbor color multisets do not differ. Since this procedure splits all colors that can be split up, it is the fastest converging renep function possible (because only the direct neighborhood is considered). A trivial upper bound for the maximum number of Weisfeiler-Leman iterations needed is $\left| {V\left( G\right) }\right| - 1$ and there are infinitely many graphs on which this number of iterations is required for convergence [17]. We obtain the same upper bound for GWL.
+
+Theorem 3. The maximum number of iterations needed to reach the stable coloring using GWL refinement is $\left| {V\left( G\right) }\right| - 1$ .
+
+Proof. The function we consider is a renep function. It follows that, prior to reaching the stable coloring, at least one color is split into at least two new colors in every iteration. Since vertices that had different colors at any step will also have different colors in the following iterations, the number of colors increases in every step. Hence, after at most $\left| {V\left( G\right) }\right| - 1$ steps, each vertex has a unique color, which is a stable coloring.
+
+Sequential Weisfeiler-Leman. For optimizing the running time of the Weisfeiler-Leman algorithm, sequential refinement strategies have been proposed [18-20], which lead to the same stable coloring as the original WL. Our presentation follows Berkholz et al. [18], who provide implementation details and a thorough complexity analysis. Sequential WL manages a stack containing the colors that still have to be processed. All initial colors are added to this stack. In each step, the next color $c$ from the stack is used to refine the current coloring $\pi$ (and generate a new coloring ${\pi }^{\prime }$ ) using the following update strategy: $\forall v,w \in V\left( G\right) : v{ \approx }_{{\pi }^{\prime }}w \Leftrightarrow \left| \left\{ {x \mid x \in N\left( v\right) \land \pi \left( x\right) = c}\right\} \right| =$ $\left| \left\{ {x \mid x \in N\left( w\right) \land \pi \left( x\right) = c}\right\} \right| \land v{ \approx }_{\pi }w$ . Note that ${\pi }^{\prime } \prec \pi$ is not guaranteed. For colors that are split, all new colors are added to the stack with exception of the largest color class. This is shown to be sufficient for generating the coarsest stable coloring [18].
+
+Sequential Weisfeiler-Leman can be realized by our GWL with the restriction that, in sequential WL, some refinement operations might not produce strict refinements. We need to skip these in our approach (since renep functions have to produce strict refinements as long as the coloring is not stable).
+
+The renep function has to fulfill $\forall v,w \in V\left( G\right) : v{ \approx }_{{\pi }_{{\mathcal{T}}_{i + 1}}}w \Leftrightarrow \left| \left\{ {x \mid x \in N\left( v\right) \land {\pi }_{{\mathcal{T}}_{i}}\left( x\right) = c}\right\} \right| = \left| \left\{ {x \mid x \in N\left( w\right) \land {\pi }_{{\mathcal{T}}_{i}}\left( x\right) = c}\right\} \right| \land v{ \approx }_{{\pi }_{{\mathcal{T}}_{i}}}w$ , where $c$ is the next color in the stack that produces a strict refinement.
+
+§ 4.2 RUNNING TIME
+
+The running time of the gradual Weisfeiler-Leman refinement depends on the cost of the update function used.
+
+Theorem 4. The running time for the gradual Weisfeiler-Leman refinement is $O\left( {i \cdot {t}_{u}\left( \left| {V\left( G\right) }\right| \right) }\right)$ , where $i$ is the number of iterations and ${t}_{u}\left( n\right)$ is the time needed to compute the renep function for $n$ elements.
+
+The update function used in the original Weisfeiler-Leman refinement can be computed in time $O\left( {\left| {V\left( G\right) }\right| + \left| {E\left( G\right) }\right| }\right)$ in the worst-case by sorting the neighbor color multisets using bucket sort [2].
+
+§ 4.3 DISCUSSION OF SUITABLE UPDATE FUNCTIONS
+
+The update function of the original Weisfeiler-Leman refinement provides a fast way to reach the stable coloring, but in machine learning tasks a more fine-grained vertex similarity is needed. A suitable update function restricts the number of new colors to a manageable amount, while still fulfilling the requirements of a renep function. Clustering the neighbor color multisets of the vertices, and letting the clusters imply the new colors, is an intuitive way to restrict the number of colors per iteration and to assign similar neighborhoods the same new color. We discuss how to realize a renep function using clustering.
+
+Whether two vertices, that currently have the same color, will be assigned the same color in the next step, depends on two factors: If they have the same neighbor color multiset, they have to remain in one color group. If their neighbor color multisets differ, however, the renep function can decide to either separate them or not (provided any new colors are generated to fulfill condition 3). We propose clustering the neighbor color multisets separately for each old color and let the clusters imply new colors. If a clustering function guarantees to produce at least two clusters for inputs with at least two distinct objects, we obtain a renep function.
+
+Although various clustering algorithms are available, we identified $k$ -means as a convenient choice because of its efficiency and controllability of the number of clusters. In order to apply $k$ -means to multisets of colors, we represent them as (sparse) vectors, where each entry counts the number of neighbors with a specific color. The above method using $k$ -means clustering with $k > 1$ satisfies the requirements of a renep function. Of course, if the number of elements to cluster is less than or equal to $k$ , the clustering can be omitted and each element can be assigned its own cluster. The number of clusters in iteration $i$ is bounded by $\left| L\right| \cdot {k}^{i}$ , since each color can split into at most $k$ new colors in each iteration and the initial coloring has at most $\left| L\right|$ colors.
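+A minimal sketch of one such gradual refinement step is given below. It follows the description above but is not the authors' Java implementation; scikit-learn's `KMeans` is used as the clustering routine (an assumption), and the names `gwl_step`, `adj` and `num_colors` are introduced only for illustration.
+
+```python
+import numpy as np
+from collections import Counter, defaultdict
+from sklearn.cluster import KMeans
+
+# Illustrative sketch of one gradual Weisfeiler-Leman step: for every current color class,
+# the neighbor color count vectors are clustered with k-means; if a class contains at most
+# k distinct vectors, each distinct vector simply becomes its own cluster.
+
+def gwl_step(adj, color, k, num_colors):
+    groups = defaultdict(list)
+    for v in adj:
+        groups[color[v]].append(v)
+    new_color, next_color = {}, 0
+    for vertices in groups.values():
+        # Neighbor color multisets as dense count vectors over the colors 0..num_colors-1.
+        vectors = np.zeros((len(vertices), num_colors))
+        for i, v in enumerate(vertices):
+            for c, cnt in Counter(color[u] for u in adj[v]).items():
+                vectors[i, c] = cnt
+        distinct = sorted({tuple(row) for row in vectors})
+        if len(distinct) <= k:
+            labels = [distinct.index(tuple(row)) for row in vectors]
+        else:
+            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
+        remap = {}  # make the new colors of this class contiguous and globally unique
+        for v, lab in zip(vertices, labels):
+            new_color[v] = next_color + remap.setdefault(int(lab), len(remap))
+        next_color += len(remap)
+    return new_color
+
+# One step on a path graph with a uniform initial coloring and k = 2:
+adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
+print(gwl_step(adj, {v: 0 for v in adj}, k=2, num_colors=1))  # {0: 0, 1: 1, 2: 1, 3: 0}
+```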
+
+§ 5 APPLICATIONS
+
+The gradual Weisfeiler-Leman refinement provides a more fine-grained approach to capturing vertex similarity: two vertices are considered the more similar the longer it takes until they are assigned different colors. This makes the approach applicable not only to vertex classification, but also in graph kernels and as a vertex similarity measure for graph matching. We describe these possible applications in the following and evaluate them against the state-of-the-art methods in Section 6.
+
+Graph Kernels. The idea of the GWL subtree kernel is essentially the same as the Weisfeiler-Leman subtree kernel [2], but instead of using the original Weisfeiler-Leman algorithm, the GWL algorithm is used to generate the features. We use the definition given in Equation (1) replacing the Weisfeiler-Leman colorings with the coloring from the GWL algorithm. The Weisfeiler-Leman optimal assignment kernel [5] is obtained from an optimal assignment between the vertices of two graphs regarding a vertex similarity obtained from a color hierarchy. We replace the Weisfeiler-Leman color hierarchy used originally by the one from our gradual refinement. We evaluate the performance of our newly proposed kernels in Section 6.
+
+Tree Metrics for Approximating the Graph Edit Distance. The Weisfeiler-Leman refinement produces a color hierarchy, see Figure 1, which can be interpreted as a tree defining a metric on the vertices [21]. This tree metric can be used in bipartite graph matching to find an optimal assignment between the vertices of the two graphs in linear time. Finding a vertex assignment is a commonly used strategy for obtaining an upper bound on the graph edit distance, a general distance measure for graphs. The upper bound is computed by deriving a (sub-optimal) edit path from the vertex assignment. We use the same approach as Kriege et al. [21], but again replace the original Weisfeiler-Leman refinement with our gradual one. This means, instead of using the color hierarchy computed by the Weisfeiler-Leman refinement, we use the color hierarchy generated by our approach as the underlying tree metric. We evaluate this approximation of the graph edit distance regarding its accuracy in $k$-NN classification against the state-of-the-art and the original approach.
+
+§ 6 EXPERIMENTAL EVALUATION
+
+We evaluate the proposed approach regarding its applicability in graph kernels, as the gradual Weisfeiler-Leman subtree kernel (GWL) and the gradual Weisfeiler-Leman optimal assignment kernel (GWLOA), as well as its usefulness as a tree metric for approximating the graph edit distance. Specifically, we address the following research questions:
+
+Q1 Can our kernels compete with state-of-the-art methods regarding classification accuracy on real-world and synthetic datasets?
+
+Q2 Which refinement speed is appropriate and are there dataset-specific differences?
+
+Q3 How do our kernels compare to the state-of-the-art methods in terms of running time?
+
+Q4 Is the vertex similarity obtained from GWL refinement suitable for approximating the graph edit distance?
+
+We compare to the Weisfeiler-Leman subtree kernel (WLST) [2], the Weisfeiler-Leman optimal assignment kernel (WLOA) [5], as well as RWL* [14], the approximation of the relaxed Weisfeiler-Leman subtree kernel, and the deep Weisfeiler-Leman kernel (DWL) [12]. We do not compare to [4], since the kernel showed results similar to the WLOA kernel. We compare the graph edit distance approximation using our tree metric GWLT to the original approach Lin [21] and state-of-the-art method BGM [22].
+
+§ 6.1 SETUP
+
+As discussed in Section 4.3, we used $k$ -means clustering in our new approach. If for any color fewer than $k$ different vectors were present in the clustering step, each distinct vector got its own cluster. We implemented our GWL, GWLOA and also the original WLST and WLOA [5] in Java. We used nested cross-validation with $\{ 0,\ldots ,{10}\}$ iterations for WLST, WLOA, GWL and GWLOA, and $k$ -means with $k \in \{ 2,4,8,{16}\}$ .
+
+We used the RWL* Python implementation provided by the authors. Note that in contrast to the other approaches, this implementation uses multi-threading. We again used nested cross-validation for evaluation, with unfolding trees of depth in $\{ 1,\ldots ,4\}$ and default values for the other parameters. We used the DWL Python implementation provided by the authors. The parameters window size $w$ and dimension $d$ were set to 25, since they generally worked best out of the combinations from $d,w \in \{ 5,{25},{50}\}$ and no defaults were given. We used the default settings for the other parameters and varied the number of iterations for the Weisfeiler-Leman algorithm from $\{ 1,\ldots ,{10}\}$ , again choosing the best value with nested cross-validation. The running time experiments were conducted on an Intel Xeon Gold 6130 machine at ${2.1}\mathrm{{GHz}}$ with ${96}\mathrm{{GB}}$ RAM. For approximation of the graph edit distance, we used the Java implementation of Lin provided by the authors and implemented our approach GWLT, as well as BGM, also in Java for a fair comparison.
+
+Extension to Edge Labels. The original Weisfeiler-Leman algorithm can be extended to respect edge labels by updating the colors according to ${c}_{i + 1}\left( v\right) = h\left( {{c}_{i}\left( v\right) ,\left\{ {\left( {l\left( {u,v}\right) ,{c}_{i}\left( u\right) }\right) \mid u \in N\left( v\right) }\right\} }\right)$ . All kernels used in the comparison use a similar strategy to incorporate edge labels if present.
+
+Datasets. We used several real-world datasets from the TUDataset [23] and the EGO-Nets datasets [14] for our experiments. See Appendices A and E for an overview of the datasets, as
+
+Table 1: Average classification accuracy and standard deviation (highest accuracies marked in bold).
+
+| Kernel | PTC_FM | KKI | EGO-1 | EGO-2 | EGO-3 | EGO-4 |
+| WLST | $64.16 \pm 1.30$ | $49.97 \pm 2.88$ | $51.30 \pm 2.42$ | $57.15 \pm 1.61$ | $56.15 \pm 1.67$ | $53.40 \pm 1.77$ |
+| DWL | $64.18 \pm 1.46$ | $50.93 \pm 2.87$ | $55.80 \pm 1.35$ | $56.50 \pm 1.64$ | $55.90 \pm 1.64$ | $53.25 \pm 2.81$ |
+| RWL* | $62.43 \pm 1.46$ | $46.54 \pm 4.03$ | $65.60 \pm 2.74$ | $70.20 \pm 1.36$ | $67.60 \pm 1.07$ | $74.25 \pm 2.12$ |
+| WLOA | $62.34 \pm 1.39$ | $48.72 \pm 4.05$ | $55.95 \pm 1.11$ | $60.30 \pm 2.00$ | $54.25 \pm 1.35$ | $52.30 \pm 2.29$ |
+| GWL | $62.61 \pm 1.94$ | $\mathbf{57.79} \pm 3.95$ | $67.95 \pm 2.05$ | $\mathbf{73.65} \pm 1.86$ | $65.45 \pm 1.88$ | $77.45 \pm 1.97$ |
+| GWLOA | $\mathbf{64.58} \pm 1.77$ | $47.47 \pm 2.41$ | $\mathbf{69.80} \pm 1.65$ | $72.40 \pm 2.52$ | $67.45 \pm 1.69$ | $75.35 \pm 1.67$ |
+
+| Kernel | COLLAB | DD | IMDB-B | MSRC_9 | NCI1 | REDDIT-B |
+| WLST | $78.98 \pm 0.22$ | $79.00 \pm 0.52$ | $72.01 \pm 0.80$ | $90.13 \pm 0.75$ | $85.96 \pm 0.18$ | $80.81 \pm 0.52$ |
+| DWL | $78.93 \pm 0.18$ | $78.92 \pm 0.40$ | $72.36 \pm 0.56$ | $90.50 \pm 0.76$ | $85.68 \pm 0.18$ | $80.83 \pm 0.40$ |
+| RWL* | $77.94 \pm 0.38$ | $77.52 \pm 0.65$ | $72.96 \pm 0.86$ | $88.86 \pm 0.89$ | $79.45 \pm 0.32$ | $77.69 \pm 0.31$ |
+| WLOA | $80.81 \pm 0.22$ | $\mathbf{79.44} \pm 0.31$ | $72.60 \pm 0.89$ | $90.68 \pm 0.92$ | $\mathbf{86.29} \pm 0.13$ | $89.40 \pm 0.14$ |
+| GWL | $80.62 \pm 0.33$ | $79.00 \pm 0.81$ | $73.66 \pm 1.25$ | $88.32 \pm 1.20$ | $85.33 \pm 0.35$ | $86.46 \pm 0.35$ |
+| GWLOA | $81.30 \pm 0.29$ | $78.49 \pm 0.57$ | $72.88 \pm 0.79$ | $\mathbf{91.27} \pm 1.06$ | $85.36 \pm 0.36$ | $89.98 \pm 0.34$ |
+
+We selected these datasets as they cover a wide range of applications, consisting of both molecule datasets and graphs derived from social networks. See Appendix B for the number of Weisfeiler-Leman iterations needed to reach the stable coloring for each dataset.
+
+§ 6.2 RESULTS
+
+In the following, we present the classification accuracy, as well as running time, of the different kernel methods. We investigate the parameter selection for our algorithm and discuss the application of our approach for approximating the graph edit distance.
+
+Q1: Classification Accuracy. Table 1 shows the classification accuracy of the different kernels. While our new approaches do not outperform all state-of-the-art methods on every dataset, they are more accurate in most cases, sometimes by a large margin over the second-best method (for example on KKI, EGO-1 or EGO-4). While RWL* is better than our approaches on some datasets, its running time is much higher, cf. Q3. WLOA also produces very good results on many datasets, but cannot compete on the EGO-Nets and synthetic datasets (see Appendix E). For the molecular graphs (PTC_FM, NCI1) we see no significant improvements, which can be explained by their small vertex degrees and the sensitivity of molecular properties to small structural changes. Overall, our methods provide the highest accuracy on 9 of 12 datasets and are close to the best accuracy on the others.
+
+Q2: Parameter Selection. For GWL and GWLOA, two parameters have to be chosen: the number of iterations and the number $k$ of clusters in $k$-means. We investigate which choices lead to the best classification accuracy. Figure 3 shows how often a specific parameter combination was selected because it provided the best accuracy on the test set. Here, we only show the parameter selection for some of the datasets. The results for the other datasets, as well as the parameter selection for WLST and WLOA, can be found in Appendix C. We can see that for GWL and most datasets the best $k$ lies in $\{2,4,8\}$, and on those datasets the classification accuracy of GWL exceeds that of WLST. On datasets on which GWL performed worse than WLST, the best parameter choice is not clear, and it seems that a larger $k$ might be beneficial for improving the classification accuracy. Similar tendencies can be observed for GWLOA.
+
+Q3: Running Time. Figure 4 shows the time needed to compute the feature vectors using the different kernels (for results on the other datasets and the influence of the parameter $k$ on the running time, see Appendix D and F). RWL* and DWL are much slower than the other kernels, while only RWL* leads to minor improvements in classification accuracy on a few datasets. While our approach is only slightly slower than WLST/WLOA, it yields large improvements in classification accuracy on most datasets, cf. Q1.
+
+
+Figure 3: Number of times a parameter combination of GWL and GWLOA was selected from $k \in \{ 2,4,8,{16}\}$ and $\#$ iterations $\in \{ 0,\ldots ,{10}\}$ based on the accuracy achieved on the test set.
+
+
+Figure 4: Running time in milliseconds for computing the feature vectors for all graphs of a dataset using the different methods. Note that RWL* uses multi-threading, while the other methods do not. Missing values for RWL* and DWL in the larger datasets are due to timeout.
+
+Q4: Approximating the Graph Edit Distance. Table 2 compares the classification accuracy of our approach when approximating the graph edit distance to that of the original method and another state-of-the-art method based on bipartite graph matching. Our approach clearly outcompetes both methods on all datasets.
+
+§ 7 CONCLUSIONS
+
+We proposed a general framework for iterative vertex refinement generalizing the popular Weisfeiler-Leman algorithm and discussed connections to other vertex refinement strategies. Based on this, we proposed two new graph kernels and showed that they outperform the original Weisfeiler-Leman subtree kernel and similar state-of-the-art approaches in terms of classification accuracy in almost all cases, while keeping the running time much lower than comparable methods. We also investigated the application of our method to approximating the graph edit distance, where we again outperformed the state-of-the-art methods.
+
+In further research, it might be interesting to systematically compare our approach to graph neural networks, since their message passing scheme is similar to the update strategy of the WL algorithm. Moreover, other refinement functions can be explored, for example, by using other clustering strategies, or by developing new concepts for inexact neighborhood comparison.
+
+Table 2: Average classification accuracy and standard deviation (highest accuracies marked in bold).
+
+| Method | PTC_FM | MSRC_9 | KKI | EGO-1 | EGO-2 | EGO-3 | EGO-4 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| BGM | 60.14 ±1.50 | 72.13 ±1.28 | 43.89 ±1.27 | 44.75 ±1.05 | 42.05 ±1.25 | out of time | out of time |
+| Lin | 62.38 ±1.08 | 81.36 ±0.64 | **55.18** ±2.44 | 40.40 ±1.17 | 31.65 ±1.07 | 26.60 ±0.94 | 36.55 ±1.72 |
+| GWLT | **63.19** ±0.11 | **85.97** ±0.59 | **55.18** ±2.44 | **56.20** ±1.42 | **47.90** | **36.40** ±1.04 | **47.90** ±1.32 |
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/hM5UIWqZ7d/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/hM5UIWqZ7d/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b17c2fdf9df2aa06cea7bfc457353c47f8e06226
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/hM5UIWqZ7d/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,115 @@
+§ PYTORCH-GEOMETRIC EDGE – A LIBRARY FOR LEARNING REPRESENTATIONS OF GRAPH EDGES
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+Machine learning on graphs (GraphML) has been successfully deployed in a wide variety of problem areas, as many real-world datasets are inherently relational. However, both research and industrial applications require a solid, robust, and well-designed code base. In recent years, frameworks and libraries, such as PyTorch-Geometric (PyG) or Deep Graph Library (DGL), have been developed and become first-choice solutions for implementing and evaluating GraphML models. These frameworks are designed so that one can solve any graph-related task, including node- and graph-centric approaches (e.g., node classification, graph regression). However, there are no edge-centric models implemented, and edge-based tasks are often limited to link prediction. In this extended abstract, we introduce PyTorch-Geometric Edge (PyGE), a deep learning library that focuses on models for learning vector representations of edges. As the name suggests, it is built upon the PyG library and implements edge-oriented ML models, including simple baselines and graph neural networks, as well as corresponding datasets, data transformations, and evaluation mechanisms. The main goal of the presented library is to make edge representation learning more accessible for both researchers and industrial applications, simultaneously accelerating the development of the aforementioned methods, datasets and benchmarks.
+
+§ 1 INTRODUCTION
+
+Nowadays, one of the most prominent research areas in machine learning is representation learning. Solving classification, regression, or clustering tasks by means of popular machine learning models, like decision trees, SVMs, logistic regression, linear regression, or feed-forward neural networks, requires the presence of object features in the form of real-valued number vectors (also called embeddings, or representation vectors). Representation learning aims at finding algorithms and models that can extract such numeric features from arbitrary objects (images, texts, or graphs) in an automated and reliable way. In terms of machine learning on graphs (GraphML), these models / algorithms are called graph representation learning (GRL) methods. In recent years, GRL methods have been successfully deployed in a wide variety of domains, including social networks, financial networks, and computational chemistry [1-4].
+
+This wide adoption of graph-based models led to the creation of publicly available implementations, often in the form of frameworks or libraries with standardized APIs, which describe data formats, model building blocks, and scalable parameter optimization techniques. First-choice solutions are currently frameworks like PyTorch-Geometric (PyG) [5] or the Deep Graph Library (DGL) [6]. They include most of the existing graph neural networks and some traditional models, as well as datasets, preprocessing transformations, and basic evaluation mechanisms. This simplifies both production-ready model development and conducting GraphML research.
+
+The implemented design choices allow solving any graph-related task (e.g., node classification, graph regression). Nevertheless, the main focus in these libraries is on node- and graph-centric models and tasks, whereas edge-based tasks are often limited to link prediction.
+
+Present work. We aim to fill the gap for edge-centric GRL models and tasks. In this extended abstract, we introduce PyTorch-Geometric Edge (PyGE), a deep learning library focused on models for learning vector representations of graph edges. We build upon the PyTorch-Geometric (PyG) library and provide implementations of: (1) edge-centric models, including simple baselines and graph neural networks, (2) edge-based GNN layers, (3) datasets and corresponding preprocessing functions (in a PyTorch- and PyG-compliant format), and (4) evaluation mechanisms for edge tasks. PyGE should make edge representation learning more accessible for both researchers and industrial applications, simultaneously accelerating the development of edge-centric methods, datasets and benchmarks. Disclaimer: Please note that the introduced library is still under active development. We provide a summary of our planned work in Section 4.
+
+Contributions. We summarize our contributions as follows: (C1) We publicly release ${}^{1}$ PyTorch-Geometric Edge, the first deep learning library for edge representation learning. (C2) We implement a subset of available edge-based models, graph neural network layers, datasets, and corresponding data transformations.
+
+§ 2 PRELIMINARIES
+
+We start by introducing definitions for basic concepts covered in our presented library and explore the current state of node and edge embedding approaches, as well as GraphML software.
+
+Graph. A graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ describes a set of nodes $\mathcal{V}$ that are connected (pairwise) by a set of edges $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. An attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X},{\mathbf{X}}^{\text{ edge }}}\right)$ extends this definition by a set of node attributes: $\mathbf{X} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times {d}_{\text{ node }}}$, and optionally also edge attributes: ${\mathbf{X}}^{\text{ edge }} \in {\mathbb{R}}^{\left| \mathcal{E}\right| \times {d}_{\text{ edge }}}$.
+
+Edge representation learning. The goal is to find a function ${f}_{\theta } : \mathcal{E} \rightarrow {\mathbb{R}}^{{d}_{\text{ edge }}}$ that maps an edge ${e}_{\left( u,v\right) } \in \mathcal{E}$ into a low-dimensional $\left( {{d}_{\text{ edge }} \ll \dim \left( \mathcal{E}\right) }\right)$ vector representation (embedding) ${\mathbf{z}}_{uv}$ that preserves selected properties of the edge (e.g., features or local structural neighborhood information).
+
+Edge-based tasks. Evaluation tasks for edge embeddings include: (1) link prediction - a binary classification problem on the existence (or future appearance) of an edge; (2) edge classification - label/type prediction for an existing edge (e.g., the kind of social network relation); (3) edge regression - prediction of a numerical edge feature (e.g., bond strength in a molecule).
+
+Node representation learning methods. Early approaches were built around the transductive setting with an enormous trainable lookup-embedding matrix, whose rows denote representation vectors for each node. The optimization process would preserve structural node information. For instance, DeepWalk [7], and its successor Node2vec [8] use the Skipgram [9] objective to model random walk-based co-occurrence probabilities. TADW [10] extended this approach to attributed graphs and reformulated the model as a matrix factorization problem. Other early approaches include: LINE [11], SDNE [12], or FSCNMF [13]. Recent methods are based on Graph Neural Networks (GNNs) - trainable functions that transform feature vectors of a node and its neighbors to a new embedding vector (inductive setting). These functions can be stacked to create a deep (graph) neural network. The most popular ideas include: a graph reformulation of the convolution operator (GCN [14]), neighborhood sampling and aggregation of sampled features (GraphSAGE [15]), attention mechanism over graph structure (GAT [16]) or modeling injective functions (GIN [17]).
+
+Edge representation learning methods. This area is still underdeveloped, i.e., only a handful of proposed models and algorithms exist. Most early approaches are node-based transformations, i.e., the edge embedding ${\mathbf{z}}_{uv}$ is computed from two node embeddings ${\mathbf{z}}_{u}$ and ${\mathbf{z}}_{v}$. There are simple non-trainable binary operators [8], such as the average $\left( {{\mathbf{z}}_{uv} = \frac{{\mathbf{z}}_{u} + {\mathbf{z}}_{v}}{2}}\right)$, the Hadamard product $\left( {{\mathbf{z}}_{uv} = {\mathbf{z}}_{u} * {\mathbf{z}}_{v}}\right)$, or the weighted L1 $\left( {{\mathbf{z}}_{uv} = \left| {{\mathbf{z}}_{u} - {\mathbf{z}}_{v}}\right| }\right)$ and L2 $\left( {{\mathbf{z}}_{uv} = {\left| {\mathbf{z}}_{u} - {\mathbf{z}}_{v}\right| }^{2}}\right)$ operators. NRIM [18] proposes trainable transformations as two kinds of neural network layers: node2edge $\left( {{\mathbf{z}}_{uv} = {f}_{\theta }\left( \left\lbrack {{\mathbf{z}}_{u},{\mathbf{z}}_{v},{\mathbf{x}}_{uv}^{\text{ edge }}}\right\rbrack \right) }\right)$ and edge2node $\left( {{\mathbf{z}}_{u} = {f}_{\omega }\left( \left\lbrack {\mathop{\sum }\limits_{{v \in \mathcal{N}\left( u\right) }}{\mathbf{z}}_{uv},{\mathbf{x}}_{u}}\right\rbrack \right) }\right)$. Another group of edge embedding methods learns the edge embeddings directly, i.e., without an intermediate node embedding step. Line2vec [19] utilizes a line graph transformation (converting nodes into edges and vice versa), applies a custom edge weighting method and runs Node2vec on the line graph. The loss function extends the Skipgram loss with a so-called collective homophily loss (to ensure closeness of neighboring edges in the embedding space). This method is inherently transductive (due to Node2vec) and completely ignores any attributes. These problems are addressed by AttrE2vec [20]. It samples a fixed number of uniform random walks from the two edge neighborhoods $(\mathcal{N}\left( u\right)$, $\mathcal{N}\left( v\right)$) and aggregates feature vectors of encountered edges (using average, exponential decay, or recurrent neural networks) into summary vectors ${\mathbf{S}}_{u},{\mathbf{S}}_{v}$, respectively. An MLP encoder network with a self-attention-like mechanism transforms the summary vectors and the edge features into the final edge embedding. AttrE2vec is trained using a contrastive cosine learning objective and a feature reconstruction loss. PairE [21] utilizes two kinds of edge feature aggregations: (1) concatenated node features (self features), (2) concatenation of averaged neighbor features for both nodes (agg features). An MLP encoder with skip-connections transforms these two vectors into the edge embedding. Two shallow decoders reconstruct the feature probability distribution. The resulting PairE autoencoder is trained using the sum of the KL-divergences of the self and agg features. Other methods include EGNN [22], ConPI [23] and Edge2vec [24].
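+The non-trainable node-pair operators above can be stated in a few lines; the following sketch (our own illustration using PyTorch tensors, not code from any of the cited methods) applies them to two node embeddings.
+
+```python
+import torch
+
+def edge_embedding(z_u: torch.Tensor, z_v: torch.Tensor, op: str = "hadamard") -> torch.Tensor:
+    """Non-trainable node-pair operators for deriving an edge embedding z_uv."""
+    if op == "average":
+        return (z_u + z_v) / 2
+    if op == "hadamard":
+        return z_u * z_v
+    if op == "l1":   # weighted L1
+        return (z_u - z_v).abs()
+    if op == "l2":   # weighted L2
+        return (z_u - z_v).pow(2)
+    raise ValueError(f"unknown operator: {op}")
+```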
+
+${}^{1}$ The link to the repository will be included in the final version and is now omitted due to double-blind policy. We include an anonymized version of our library in the attachments on OpenReview.
+
+GraphML software. The backbone of all modern deep learning frameworks is formed by tools for automatic differentiation, such as Tensorflow [25] or PyTorch [26]. GraphML libraries are mostly built upon these tools, e.g., PyG uses PyTorch, GEM [27] and DynGEM [28] use Tensorflow, DGL can be used both with Tensorflow and PyTorch, whereas some, like KarateClub [29], use a custom backend. All of these libraries are focused on node- and graph-centric models. Our proposed PyTorch-Geometric Edge library is the first one that focuses on edge-centric models and layers. It adapts the PyG library API and uses PyTorch as its backend.
+
+§ 3 PYTORCH-GEOMETRIC EDGE
+
+Relation to PyG. Our proposed PyGE library re-uses the API and data format implemented in PyTorch-Geometric. The graph is stored as a Data() object with edges in the form of a sparse COO matrix (edge_index). Other fields include: x (node attributes), edge_attr (edge attributes), y (node/edge labels). We also keep a similar layout of the library package structure, i.e., we have a module for datasets, models, neural network layers (nn), data transformations (transforms) and data samplers (samplers). The forward() method in all implemented models/layers accepts two parameters: x (node or edge features) and edge_index (adjacency matrix). Hence, the implemented models/layers can be integrated with other PyG models/layers and vice versa (we show that in the examples/ folder in the repository). The same applies to the datasets.
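+The snippet below is an illustrative sketch (our own, not the actual PyGE API) of how this data format and calling convention look in practice: a PyG Data() object is built, and a toy edge-centric layer named ToyEdgeLayer (a hypothetical placeholder, not a class shipped with PyGE) follows the forward(x, edge_index) signature described above.
+
+```python
+import torch
+from torch_geometric.data import Data
+
+# A toy graph in PyG's format: 3 nodes, 4 directed edge entries in COO form.
+edge_index = torch.tensor([[0, 1, 1, 2],
+                           [1, 0, 2, 1]], dtype=torch.long)
+x = torch.randn(3, 16)           # node attributes
+edge_attr = torch.randn(4, 4)    # edge attributes
+data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
+
+class ToyEdgeLayer(torch.nn.Module):
+    """Hypothetical edge-centric layer following the forward(x, edge_index) convention."""
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        self.lin = torch.nn.Linear(2 * in_dim, out_dim)
+
+    def forward(self, x, edge_index):
+        src, dst = edge_index
+        # concatenate endpoint features to obtain one vector per edge
+        return self.lin(torch.cat([x[src], x[dst]], dim=-1))
+
+z_edges = ToyEdgeLayer(16, 32)(data.x, data.edge_index)  # shape: [num_edges, 32]
+```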
+
+§ 3.1 CURRENT STATE OF IMPLEMENTATION
+
+We now show the current state of the library and what is already implemented. Please refer to Section 4 where we explain our future plans.
+
+Datasets. We currently include 5 datasets (Cora, PubMed, KarateClub, Dolphin and Cuneiform) that were originally used for evaluation of the implemented methods. We summarize their statistics in Table 1. Note that most of them also require preprocessing steps (see AttrE2vec [20] for details) for the edge classification evaluation; we implement appropriate data transformations.
+
+Table 1: Summary of included datasets. The $*$ symbol denotes the number of edge classes after applying an appropriate data transformation.
+
+| Name | $\left| \mathcal{V}\right|$ | $\left| \mathcal{E}\right|$ | ${d}_{\text{node}}$ | ${d}_{\text{edge}}$ | classes |
+| --- | --- | --- | --- | --- | --- |
+| KarateClub [30] | 34 | 156 | - | - | 4* |
+| Dolphin [31] | 62 | 318 | - | - | 5* |
+| Cora [32] | 2 708 | 10 556 | 1 433 | - | 8* |
+| PubMed [33] | 19 717 | 88 648 | 500 | - | 4* |
+| Cuneiform [34] | 5 680 | 23 922 | 3 | 2 | 2 |
+
+Models and layers. We implement most of the edge representation learning methods discussed in Section 2 into our proposed PyGE library (see: Table 2). Nevertheless, more of them will be implemented in future versions.
+
+Table 2: Models and layers implemented in PyGE.
+
+| Method | Type | Inductive | Attributed | Characteristics |
+| --- | --- | --- | --- | --- |
+| Node pair operator [8] | layer | ✓ | ✘ | non-trainable |
+| node2edge [18] | layer | ✓ | ✓ | trainable |
+| Line2vec [19] | model | ✘ | ✘ | line graph, random-walk |
+| AttrE2vec [20] | model | ✓ | ✓ | contrastive, AE, random-walk |
+| PairE [21] | model | ✓ | ✓ | AE, KL-div |
+
+Embedding evaluation. We implement a ready-to-use edge classification evaluator class, which takes edge embeddings and edge labels, applies a logistic regression classifier and returns typical classification metrics, like ROC-AUC, F1 or accuracy. This is a widely adopted technique in unsupervised learning, called the linear evaluation protocol [35].
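+A minimal sketch of this evaluation step (our own illustration with scikit-learn, not the evaluator class itself) could look as follows; the function name and the exact metric set are assumptions.
+
+```python
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
+
+def linear_evaluation(z_train, y_train, z_test, y_test):
+    """Linear evaluation protocol: fit logistic regression on frozen edge
+    embeddings and report standard classification metrics."""
+    clf = LogisticRegression(max_iter=1000).fit(z_train, y_train)
+    pred = clf.predict(z_test)
+    metrics = {
+        "accuracy": accuracy_score(y_test, pred),
+        "f1_macro": f1_score(y_test, pred, average="macro"),
+    }
+    if len(np.unique(y_test)) == 2:  # ROC-AUC only in the binary case
+        metrics["roc_auc"] = roc_auc_score(y_test, clf.predict_proba(z_test)[:, 1])
+    return metrics
+```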
+
+Example usage. In the repository, we provide an end-to-end script showing the usage of a given model/layer. Every script: (1) loads a dataset and applies the required data transformations (preprocessing), (2) prepares the data split of edges into train and test sets, (3) builds a model, (4) trains the model for a certain number of epochs, (5) evaluates the learned edge embeddings. We also provide an example script in this extended abstract (see Section A).
+
+§ 3.2 MAINTENANCE
+
+An open-source library requires continuous maintenance. We host our code base on GitHub, which allows us to track all development progress and user-generated issues. We will build library releases and announce them on GitHub, and later host them on the Python Package Index (PyPI) so that users can simply run pip install torch-geometric-edge to install our library. We use the MIT license to give potential users, researchers, and industrial adopters a good user experience without worrying about the rights to use or modify our code base. Another aspect of software development and maintenance is Continuous Integration. We use the GitHub Actions module to automatically execute code quality checks and unit tests with every pull request to our library. This ensures that no change will break existing functionality or lower our assumed code quality.
+
+§ 4 SUMMARY AND ROADMAP
+
+In this extended abstract, we presented an initial version of PyTorch-Geometric Edge, the first deep learning library that focuses on representation learning for graph edges. We provided information about currently implemented models/layers and datasets. Our roadmap is extensive and includes: (I) preparation of complete documentation (right now, we rely on code quality checks and example scripts showing how to use particular models/layers), (II) addition of more datasets (e.g., the Enron Email Dataset ${}^{2}$, FF-TW-YT ${}^{3}$, among others), (III) implementation of the other mentioned edge-centric models (and a continuous extension of the literature review to find new methods), (IV) addition of more edge evaluation schemes, and (V) an extensive benchmark, in the full paper, of all implemented models compared on different downstream tasks; moreover, we want to provide the entire reproducible experimental pipeline and pretrained models. Given this amount of incoming work, we encourage readers interested in edge representation learning to contact the authors and contribute to our library. We are convinced that edge representation learning can be widely adopted in networked tasks, like message classification in social networks or connection/attack classification in cybersecurity applications, to name only a few.
+
+${}^{2}$ https://www.cs.cmu.edu/~enron/
+
+${}^{3}$ http://multilayer.it.uu.se/datasets.html
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/ha9hPpthvQ/Initial_manuscript_md/Initial_manuscript.md b/papers/LOG/LOG 2022/LOG 2022 Conference/ha9hPpthvQ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..7847291cfd22fb5afe6ba8bc1bd5bc057373abeb
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/ha9hPpthvQ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,415 @@
+# Towards Efficient and Expressive GNNs for Graph Classification via Subgraph-aware Weisfeiler-Lehman
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+## Abstract
+
+The expressive power of GNNs is upper-bounded by Weisfeiler-Lehman (WL) test. To achieve GNNs with high expressiveness, researchers resort to subgraph-based GNNs (WL/GNN on subgraphs), deploying GNNs on subgraphs centered around each node to encode subgraphs instead of rooted subtrees like WL. However, deploying multiple GNNs on subgraphs suffers from much higher computational cost than deploying a single GNN on the whole graph, limiting its application to large-size graphs. In this paper, we propose a novel paradigm, namely Subgraph-aware WL (SaWL), to obtain graph representation that reaches subgraph-level expressiveness with a single GNN. We prove that SaWL has beyond-WL capability for graph isomorphism test, and propose a fast implementation for it. To generalize SaWL to graphs with continuous node features, we propose a neural version named Subgraph-aware GNN (SaGNN) to learn graph representation. Both SaWL and SaGNN are more expressive than 1-WL while having similar computational cost to 1-WL/GNN, without causing exponentially higher complexity like other more expressive GNNs. Experimental results on several benchmark datasets demonstrate that fast SaWL and SaGNN significantly outperform competitive baseline methods on the task of graph classification, while achieving high efficiency.
+
+## 1 Introduction
+
+Graph-structured data widely exist in the real world, and modeling graphs has become an important topic in the field of machine learning. Graph learning has widespread applications [1-3], and many valuable applications can be formulated as graph classification, e.g., molecular property prediction [4], drug toxicity prediction [5]. Graph classification aims to predict the label of the given graph by exploiting graph structure and feature information. Learning expressive representations of graphs is crucial for classifying graphs of different structural characteristics.
+
+Recently, Graph Neural Networks (GNNs) have achieved great success in graph classification tasks [6-8]. GNNs that follow a message passing scheme first iteratively aggregate neighbor information to update node representations, then pool node representations into graph-level representations [9]. Essentially, GNNs are parameterized generalizations of the 1-dimensional Weisfeiler-Lehman algorithm (1-WL) [10], which encodes each node by its rooted subtree pattern [11], as shown in Figure 1 (a). Despite the success of traditional message passing GNNs, the expressive power of GNNs is theoretically upper-bounded by 1-WL, which is known to have limited power in distinguishing many non-isomorphic graphs [12-14].
+
+To uplift the expressive power of GNNs, researchers adopt a paradigm of WL/GNN on subgraphs (Figure 1 (b)), which encodes rooted subgraphs instead of rooted subtrees as node representations [15-17]. Methods under this paradigm first extract rooted subgraphs (i.e., the subgraph induced by the nodes within $h$ hops of a center node), and then apply a GNN to each extracted subgraph. However, as GNNs are applied to subgraphs extracted from each node of the graph, the computational cost of these methods is much higher than that of traditional message passing GNNs, especially when the subgraphs have similar sizes to the whole graph. In this paper, we propose a novel paradigm of Subgraph-aware WL/GNN (SaWL), which reaches higher expressiveness than 1-WL with a single GNN (Figure 1 (c)). It first deploys WL/GNN on the full graph to obtain node representations, and then aggregates the nodes within each subgraph to achieve subgraph awareness. The proposed paradigm greatly reduces the computational cost of existing WL-on-subgraph methods, while achieving higher expressive power than 1-WL. Under this paradigm, we propose an algorithm as a fast implementation of SaWL, which consists of a WL encoder and a subgraph operator (S operator). We first apply a standard WL on the full graph to iteratively update each node label based on its current label and the labels of its neighbors [18]. After each iteration of WL, we use the S operator to encode the rooted subgraph of each node by aggregating the current labels of the nodes within the subgraph. The whole graph feature mapping at this iteration is then obtained by pooling the subgraph feature mappings. Finally, we concatenate the graph feature mappings at different iterations into a final graph feature mapping for graph classification. We then generalize SaWL to a neural version, Subgraph-aware GNN (SaGNN).
+
+
+
+Figure 1: (a) WL encodes nodes by rooted subtrees, which has limited expressiveness. (b) WL/GNN on Subgraphs paradigm extracts rooted subgraphs and applies GNNs on each rooted subgraph, which is computationally expensive. (c) Our Subgraph-aware WL/GNN applies WL/GNN on the full graph and then encodes rooted subgraphs by aggregating nodes within the subgraph. The proposed paradigm possesses higher expressive power than 1-WL while keeping the computational cost low.
+
+Compared to the paradigm of WL/GNN-on-subgraphs, the proposed Subgraph-aware WL/GNN does not need to copy a full $n$ -node graph into $n$ subgraphs (each rooted at a node) and run WL/GNN on each subgraph separately (thus the same node can have multiple representations when appearing in different subgraphs). Instead, Subgraph-aware WL/GNN only runs WL/GNN on the full graph and encodes subgraph information based on the "global" WL/GNN node representations. It encodes the subgraph information while avoiding the need to apply WL/GNN on each extracted subgraph respectively, which improves the expressiveness and keeps low computational cost at the same time.
+
+We evaluate the effectiveness of the proposed fast SaWL and SaGNN on graph classification tasks via several benchmark datasets, and then conduct the expressive power evaluation to verify the high distinguishing power of our methods. We further compare the running time of our methods with 4 other high expressive methods. The experimental results show that our methods have both high effectiveness and high efficiency.
+
+## 2 Preliminary
+
+### 2.1 Weisfeiler-Lehman and Feature Mapping
+
+Weisfeiler-Lehman (1-WL) [10] is one of the most widely used algorithms for tackling the graph isomorphism test for a broad class of graphs [19, 20]. Specifically, 1-WL proceeds in iterations denoted by $h$, and each iteration includes multiset determination, injective mapping and relabeling [18].
+
+Given two graphs $G$ and $H$, WL first aggregates the labels of the neighbor nodes of each node into a multiset ${M}_{v}^{h}$. For $h = 0$, ${M}_{v}^{0} = {l}_{v}^{0}$, and for $h > 0$, ${M}_{v}^{h} = \left\{ \left\{ {l}_{u}^{h - 1} \mid u \in \mathcal{N}\left( v\right) \right\} \right\}$, where ${l}_{v}^{h}$ is the label of node $v$ in the $h$-th iteration, $\mathcal{N}\left( v\right)$ denotes the neighbor nodes of $v$, and $\{\{ \cdot \}\}$ denotes a multiset. Note that a multiset is a generalized set that allows repeated elements [13]. Then, an injective function is required to update the label of each node, ${l}_{v}^{h} \mathrel{\text{:=}} \operatorname{HASH}\left( \left( {l}_{v}^{h - 1},{M}_{v}^{h}\right) \right)$. The procedure repeats until the multisets of node labels of the two graphs differ, or the number of iterations reaches a predetermined value. The feature mapping of the whole graph can be obtained after each iteration: we can use the multiset of node labels in the $h$-th iteration to represent the whole graph [18]. Although 1-WL works well in testing isomorphism on many graphs, its distinguishing power is limited [12, 21].
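+As a minimal illustration of these relabeling steps (our own Python sketch on a single graph; a full isomorphism test would share the relabeling table across both graphs and compare the resulting label multisets), one can write:
+
+```python
+def wl_refinement(labels, adj, num_iterations):
+    """Plain 1-WL relabeling on one graph.
+
+    labels: dict vertex -> initial label l_v^0
+    adj:    dict vertex -> list of neighbor vertices
+    Returns the labels after num_iterations rounds of refinement.
+    """
+    for _ in range(num_iterations):
+        # multiset determination: sorted neighbor labels per vertex
+        signatures = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
+                      for v in adj}
+        # injective relabeling via a lookup table (plays the role of HASH)
+        table = {}
+        labels = {v: table.setdefault(sig, len(table)) for v, sig in signatures.items()}
+    return labels
+```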
+
+### 2.2 Graph Neural Networks
+
+Traditional message passing Graph Neural Networks (GNNs) follow an aggregation and update scheme, which can be viewed as the neural implementation of the 1-WL [13, 22]. Each node aggregates the features of its neighbor nodes, combines them with its own features, and updates to a new representation:
+
+$$
+{\mathbf{h}}_{v}^{k} = \operatorname{UPDATE}\left( {{\mathbf{h}}_{v}^{k - 1},\operatorname{AGGREGATE}\left( {{\mathbf{h}}_{u}^{k - 1} \mid u \in \mathcal{N}\left( v\right) }\right) }\right) , \tag{1}
+$$
+
+where the UPDATE and AGGREGATE functions are implemented with neural networks. Then, the whole graph representation can be computed by a pooling/readout operation like sum [23-25]:
+
+$$
+{\mathbf{h}}^{k}\left( G\right) = \operatorname{READOUT}\left( {{\mathbf{h}}_{v}^{k} \mid v \in \mathcal{V}\left( G\right) }\right) . \tag{2}
+$$
+
+GNNs have been popular architectures for representation learning on graphs. However, it has been proved that the expressive power of message passing GNNs is upper bounded by the 1-WL algorithm [13, 14], which limits their performance on graph classification tasks.
+
+## 3 Subgraph-aware Weisfeiler-Lehman
+
+We propose a new paradigm of Subgraph-aware Weisfeiler-Lehman (SaWL), which exceeds the expressive power of 1-WL while keeping low computational complexity. The paradigm first iteratively applies WL/GNN to the original input graph. With the obtained node representations at each iteration, the paradigm encodes each rooted subgraph by hashing the node representations within its range. Then, the subgraph representations are pooled to obtain the whole graph representation.
+
+### 3.1 SaWL for Graph Classification
+
+SaWL consists of a WL encoder, a subgraph encoding operator (the S operator) and a graph feature mapping module. For a graph $G$, the WL encoder executes the standard WL steps described in Section 2.1, which outputs the updated node labels $\left\{ {{l}_{v}^{h} \mid v \in \mathcal{V}\left( G\right) }\right\}$, where ${l}_{v}^{h}$ is the label of node $v$ in the $h$-th iteration. The core of the proposed SaWL lies in the additional S operator, which encodes subgraph information from the results of each WL iteration. We describe the S operator in the following.
+
+S operator. We employ an injective hash function that acts on labels of nodes within the subgraph to encode the subgraph information into a subgraph feature mapping:
+
+$$
+{\phi }^{\left( h\right) }\left( {G}_{v}^{h}\right) = \operatorname{HASH}\left( \left\{ \left\{ {{l}_{u}^{h} \mid u \in \mathcal{V}\left( {G}_{v}^{h}\right) }\right\} \right\} \right) , \tag{3}
+$$
+
+where ${G}_{v}^{h}$ is the $h$ -hop rooted subgraph around node $v$ . The hash function can be designed freely. Essentially, the $\mathrm{S}$ operator encodes the multiset of node labels within ${G}_{v}^{h}$ (obtained by running $h$ iterations of WL on the full graph) into a subgraph representation.
+
+Graph Feature Mapping Module. With the subgraph feature mapping, an injective readout function is adopted to obtain the whole graph feature mapping in the $h$ -th iteration, i.e.,
+
+$$
+{\psi }^{\left( h\right) }\left( G\right) = \operatorname{READOUT}\left( {{\phi }^{\left( h\right) }\left( {G}_{v}^{h}\right) \mid v \in \mathcal{V}\left( G\right) }\right) . \tag{4}
+$$
+
+The readout function can be chosen freely. To retain the structural information at all iterations, the final graph feature mapping is obtained by concatenation, i.e., $\psi \left( G\right) = \operatorname{CONCAT}\left( {\psi }^{\left( 0\right) }\left( G\right) ,{\psi }^{\left( 1\right) }\left( G\right) ,\ldots ,{\psi }^{\left( H\right) }\left( G\right) \right)$, where $H$ is the maximum iteration number.
+
+
+
+Figure 2: Illustration of the fast SaWL. Colored numbers denote node labels. In (b), (c), (e) and (f), neighbor nodes are aggregated as multiset and compressed to updated labels (the same as 1-WL). In (d) and (g), the S operator encodes each rooted subgraph into a feature mapping. After the 2nd iteration, the feature mapping of ${G}_{{v}_{1}}^{2}$ is no longer equal to that of ${H}_{{u}_{1}}^{2}$ , so that graph $G$ and $H$ can be discriminated by SaWL (but not by 1-WL).
+
+Discussion. Compared to plain WL, which directly uses node labels at $h$ -th iteration to obtain the graph representation, SaWL additionally uses the multiset of labels of node $v$ ’s neighbors within $h$ -hop to enhance WL with subgraph information. To understand SaWL’s benefits over plain WL, from one point of view, SaWL encodes the node-subgraph-graph hierarchy instead of the node-graph hierarchy of WL, which better captures the hierarchical structural characteristics of the graph. From another point of view, plain WL encodes a node by its rooted subtree pattern, which can have repeated nodes. The repetitions of the same node are regarded as distinct nodes, and the actual number of nodes in the subtree pattern might be corrupted. The hash function in the $\mathrm{S}$ operator further characterizes the information of the actual number of nodes in the subgraph (which also equals the actual number of nodes in the subtree pattern, because the subgraph ${G}_{v}^{h}$ does not have repeated nodes).
+
+### 3.2 A Fast Implementation of SaWL
+
+To illustrate the idea of SaWL, we provide a particular implementation here, named fast SaWL. For the S operator, we design the HASH function as a counting mapping that counts the occurrences of the different node labels in the subgraph. Then, we adopt sum pooling as the READOUT function to obtain the whole graph feature mapping.
+
+Definition 1 (Counting mapping). Let ${\mathcal{L}}_{h} \subseteq \mathcal{L}$ denote the set of node labels that occur at least once in the $h$ -th iteration. ${\mathcal{L}}_{h} = \left( {{\ell }_{1}^{h},{\ell }_{2}^{h},\ldots ,{\ell }_{\left| {\mathcal{L}}_{h}\right| }^{h}}\right)$ and we assume that ${\mathcal{L}}_{h}$ is ordered. Assume ${G}_{v}^{h} \in \mathcal{G}$ , where $\mathcal{G}$ is the complete graph space. For each iteration $h$ , we define a counting mapping ${c}_{h} : \mathcal{G} \times {\mathcal{L}}_{h} \rightarrow \mathbb{N}$ , where ${c}_{h}\left( {{G}_{v}^{h},{\ell }_{i}^{h}}\right)$ is the number of the occurrences of the $i$ -th node label ${\ell }_{i}^{h}$ in subgraph ${G}_{v}^{h}$ at the $h$ -th iteration.
+
+With counting mapping, the feature mapping of the subgraph ${G}_{v}^{h}$ can be obtained by ${\phi }^{\left( h\right) }\left( {G}_{v}^{h}\right) =$ $\left( {{c}_{h}\left( {{G}_{v}^{h},{\ell }_{1}^{h}}\right) ,\ldots ,{c}_{h}\left( {{G}_{v}^{h},{\ell }_{\left| {\mathcal{L}}_{h}\right| }^{h}}\right) }\right)$ , where the value of the $i$ -th position of the vector represents the occurrence number of label ${\ell }_{i}^{h}$ in the $h$ -th iteration. Essentially, the $\mathrm{S}$ operator encodes subgraph by mapping the multiset of node labels within the subgraph to a vector, recording the occurrence number of each label. Then, the whole graph feature mapping is obtained by applying sum pooling to the subgraph feature mappings. Although the sum pooling is not an injective readout function, as we will show, it allows fast computation (acceleration) via an implementation trick.
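+For concreteness, the following is a straightforward (non-accelerated) Python sketch of our own of this counting-mapping construction; the helper names are ours, and the WL labels after $h$ iterations on the full graph are assumed to be given.
+
+```python
+from collections import Counter, deque
+
+def h_hop_nodes(adj, v, h):
+    """Vertices of the h-hop rooted subgraph G_v^h (BFS up to depth h)."""
+    seen, frontier = {v}, deque([(v, 0)])
+    while frontier:
+        u, d = frontier.popleft()
+        if d == h:
+            continue
+        for w in adj[u]:
+            if w not in seen:
+                seen.add(w)
+                frontier.append((w, d + 1))
+    return seen
+
+def fast_sawl_feature(labels, adj, h, label_order):
+    """Graph feature mapping psi^(h): sum of the per-subgraph counting mappings.
+
+    labels:      dict vertex -> WL label after h iterations on the full graph
+    adj:         dict vertex -> list of neighbor vertices
+    label_order: ordered label set L_h used to index the vector positions
+    """
+    total = Counter()
+    for v in adj:
+        total.update(labels[u] for u in h_hop_nodes(adj, v, h))
+    return [total[l] for l in label_order]
+```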
+
+Illustration. We illustrate the fast SaWL in Figure 2. Consider the two graphs $G$ and $H$, where colored numbers indicate node labels. The WL encoder of fast SaWL updates node labels in (b), (c), (e) and (f).
+
+S operator encodes rooted subgraphs, and we take two rooted subgraphs as examples in Figure 2(g). The feature mapping of the subgraph ${G}_{{v}_{1}}^{2}$ in the 2nd iteration is ${\phi }^{\left( 2\right) }\left( {G}_{{v}_{1}}^{2}\right) = \left( {3,2}\right)$ , which means the label 4 occurs three times and label 5 occurs twice in the subgraph. Then the subgraphs are pooled to obtain the graph feature mapping in the 2nd iteration, e.g., for graph $G,{\psi }^{\left( 2\right) }\left( G\right) = {\phi }^{\left( 2\right) }\left( {G}_{{v}_{1}}^{2}\right) +$ ${\phi }^{\left( 2\right) }\left( {G}_{{v}_{2}}^{2}\right) + \ldots + {\phi }^{\left( 2\right) }\left( {G}_{{v}_{6}}^{2}\right) = \left( {{20},{12}}\right)$ . And for graph $H,{\psi }^{\left( 2\right) }\left( H\right) = \left( {{16},{12}}\right)$ . Finally, the whole graph feature mappings are $\psi \left( G\right) = \left( {4,2,{12},8,{20},{12}}\right)$ , and $\psi \left( H\right) = \left( {4,2,{12},8,{16},{12}}\right)$ . The graph $G$ and $H$ cannot be discriminated by 1-WL, but they can be discriminated by our fast SaWL.
+
+Acceleration. In fast SaWL, the calculation of the S operator can be executed simultaneously with the WL encoder, which reduces the computational time. Since the subgraph feature mappings are summed as the whole graph feature mapping, the frequency of one node contributing to the whole graph feature mapping is equal to the number of occurrences of this node in all $h$ -hop rooted subgraphs. We use graph $H$ (adapted from Figure $2\left( \mathrm{f}\right)$ ) as an example. In Figure $3\left( \mathrm{a}\right)$ , each tuple(a, b) represents the feature mapping of the node's rooted subgraph. The whole graph feature mapping can be computed by summing all nodes’ feature mappings: ${\psi }^{\left( 2\right) }\left( H\right) =$ $\left( {2,2}\right) + \ldots + \left( {4,2}\right) + \ldots + \left( {2,2}\right) = \left( {{16},{12}}\right)$ . However, we can actually compute the whole graph feature mapping from a global perspective. E.g., node ${u}_{1}$ contributes to the 2-hop rooted subgraphs of nodes ${u}_{1},{u}_{2},{u}_{3},{u}_{4}$ . And the number of ${u}_{1}$ ’s contributions to the whole graph feature mapping is exactly the size of node ${u}_{1}$ ’s 2-hop rooted subgraph, i.e., $\left| {\mathcal{V}\left( {H}_{{u}_{1}}^{\left( 2\right) }\right) }\right| = 4$ . Similarly, we mark each node’s contribution number beside it in Figure 3(b). The whole graph feature mapping can be alternatively computed by summing the contribution numbers for each label dimension, i.e., ${\psi }^{\left( 2\right) }\left( H\right) = \left( {4 + 4 + 4 + 4,6 + 6}\right) = \left( {{16},{12}}\right)$ . The sizes of rooted subgraphs can be computed together in the multiset determination of WL run on the original graph by propagating node label and ID simultaneously. We present the steps of the accelerating version of the fast SaWL for graph classification in Algorithm 1 of the Appendix. We additionally detail how to use the accelerating version for graph isomorphism test in Appendix A.7.
+
+
+
+Figure 3: ${u}_{1}$ contributes to the feature mappings of rooted subgraphs of ${u}_{1},{u}_{2},{u}_{3},{u}_{4}$ . The contribution number equals the size of rooted subgraph ${H}_{{u}_{1}}^{\left( 2\right) }$ .
+
+### 3.3 The Expressive Power of SaWL
+
+We first analyze the expressive power of SaWL by comparing it with 1-WL. Whenever two graphs can be discriminated by 1-WL, they can be discriminated by SaWL as well.
+
+Proposition 1. Given two graphs $G$ and $H$ , if they can be distinguished by 1-WL, i.e., ${\phi }^{\left( h\right) }\left( G\right) \neq$ ${\phi }^{\left( h\right) }\left( H\right)$ , then they must be distinguished by the SaWL, i.e., ${\psi }^{\left( h\right) }\left( G\right) \neq {\psi }^{\left( h\right) }\left( H\right)$ .
+
+See Appendix A.2 for the proof. If a graph pair can be discriminated by 1-WL, the counting mappings of the whole graphs are different. There must then exist subgraphs with different counting mappings in the graph pair. Therefore, the final feature mappings of the two graphs obtained by SaWL are different.
+
+Proposition 2. We define the number of $h$-shortest neighbors of a node $v$ as ${s}_{v}^{h}$, i.e., the number of nodes whose shortest-path distance from the center node $v$ is exactly $h$. For graphs $G$ and $H$, if $\left\{ \left\{ {s}_{v}^{h} \mid v \in \mathcal{V}\left( G\right) \right\} \right\} \neq \left\{ \left\{ {s}_{u}^{h} \mid u \in \mathcal{V}\left( H\right) \right\} \right\}$, then the two graphs can be distinguished by the $h$-layer SaWL.
+
+From a global graph perspective, if the multisets of the numbers of $h$-shortest neighbors of the nodes in graphs $G$ and $H$ are different, there exist at least two subgraphs in the graphs with different encodings. Then, from a subgraph perspective, the multisets of subgraph encodings of the two graphs are different and they can be discriminated by SaWL. We provide a detailed explanation in Appendix A.3.
+
+Theorem 1. The expressive power of SaWL is higher than that of 1-WL in distinguishing graphs.
+
+As proved in Proposition 1, once two graphs can be discriminated by 1-WL, they must be discriminated by SaWL. There are also many graphs that can be discriminated by SaWL but not by 1-WL, e.g., graphs $G$ and $H$ in Figure 2; we provide more examples in Appendix A.4. To sum up, the expressive power of SaWL is strictly higher than that of 1-WL. According to recent research on subgraph GNNs [26], SaWL's $k$-hop subgraph selection and encoding scheme can be implemented by 3-order Invariant Graph Networks (3-IGNs), whose expressive power is bounded by 3-WL [27]. Thus, SaWL's expressive power is also bounded by 3-WL.
+
+### 3.4 Complexity
+
+We analyze the computational complexity of the fast SaWL and of the corresponding accelerating version. Consider a graph $G$ with $N$ nodes, average node degree $D$ and $M$ edges, where $M = {ND}$. We assume the average node number of the subgraphs is $n$. For the fast SaWL, the multiset determination, the label compression and the relabeling in the WL encoder take a total runtime of $O\left( {ND}\right)$ [18]. In the S operator, computing the feature mapping of one subgraph with $n$ nodes takes $O\left( n\right)$, and that of the $N$ subgraphs takes $O\left( {Nn}\right)$. In total, the time complexity is $O\left( {ND}\right) + O\left( {Nn}\right)$. For the accelerating version, the S operator can be executed simultaneously with the multiset determination of the WL encoder. Specifically, determining the label multisets and identity sets for all nodes takes $O\left( {ND}\right)$ operations, which can be accomplished simultaneously, and the identity sets can be maintained efficiently using a hash table. Therefore, the total time complexity of the accelerating version is $O\left( {ND}\right)$, which equals that of the 1-WL algorithm [18].
+
+## 4 Subgraph-aware Graph Neural Network
+
+In order to generalize SaWL to scenarios with continuous features, we propose a neural version of SaWL, namely Subgraph-aware GNN (SaGNN). Each component in the SaWL is replaced with a neural network in SaGNN.
+
+Model. The neural version SaGNN includes two components: the GNN encoder and the S operator. Any standard neural version of the 1-WL algorithm can be utilized as the GNN encoder. Given an input graph, the GNN encoder updates each node from its previous state and the representations of its neighbor nodes (Eq. 1). Specifically, we adopt GIN with $\epsilon = 0$ to obtain the node representations in the $k$-th layer, i.e., ${\mathbf{h}}_{v}^{\left( k\right) } = {\operatorname{MLP}}^{\left( k\right) }\left( {{\mathbf{h}}_{v}^{\left( k - 1\right) } + \mathop{\sum }\limits_{{u \in \mathcal{N}\left( v\right) }}{\mathbf{h}}_{u}^{\left( k - 1\right) }}\right)$, where $\mathcal{N}\left( v\right)$ denotes the neighbor nodes of node $v$, ${\mathbf{h}}_{v}^{\left( k\right) } \in {\mathbb{R}}^{{D}_{1}}$, and ${D}_{1}$ is the feature dimension. In each layer, node representations are updated by the GNN encoder applied to the full graph.
+
+With the updated node representations, the S operator in SaGNN is designed to further encode the $k$-hop subgraphs around each node, which provides extra expressive power beyond a plain GNN. An injective function is utilized to encode the subgraph information by aggregating the nodes within the subgraph (Eq. 3). In this paper, we adopt an MLP with SUM as the hash function, which is injective for countable feature spaces [13]. The representation of the subgraph around node $v$ is obtained by ${\mathbf{h}}_{s, v}^{\left( k\right) } = \operatorname{MLP}\left( {\mathop{\sum }\limits_{{q \in \mathcal{V}\left( {G}_{v}^{k}\right) }}{\mathbf{h}}_{q}^{\left( k\right) }}\right)$.
+
+Then, graph representations in the $k$-th layer are calculated with a readout (pooling) function (Eq. 4). In SaGNN, we adopt sum pooling as the readout function, i.e., ${\mathbf{H}}^{\left( k\right) }\left( G\right) = \operatorname{SUM}\left( {{\mathbf{h}}_{s, v}^{\left( k\right) } \mid v \in \mathcal{V}\left( G\right) }\right)$. Then the representations of graph $G$ in all layers are concatenated as the final graph representation, i.e., $\mathbf{H}\left( G\right) = \operatorname{CONCAT}\left( {{\mathbf{H}}^{\left( 1\right) }\left( G\right) ,{\mathbf{H}}^{\left( 2\right) }\left( G\right) ,\ldots ,{\mathbf{H}}^{\left( k\right) }\left( G\right) }\right)$, with $\mathbf{H}\left( G\right) \in {\mathbb{R}}^{{D}_{1} \cdot k}$.
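+The following is a minimal dense-adjacency PyTorch sketch of this architecture (our own illustration, not the reference implementation): a GIN-style encoder on the full graph, an S operator that sums node representations inside each rooted subgraph via a reachability mask, a sum readout, and concatenation over layers.
+
+```python
+import torch
+import torch.nn as nn
+
+class SaGNNSketch(nn.Module):
+    def __init__(self, in_dim, hid_dim, num_layers):
+        super().__init__()
+        dims = [in_dim] + [hid_dim] * num_layers
+        self.gin_mlps = nn.ModuleList(
+            nn.Sequential(nn.Linear(dims[k], hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))
+            for k in range(num_layers))
+        self.s_mlps = nn.ModuleList(
+            nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim))
+            for _ in range(num_layers))
+
+    def forward(self, x, adj):
+        # x: (N, in_dim) node features, adj: dense (N, N) 0/1 adjacency matrix
+        n = adj.size(0)
+        reach = torch.eye(n, device=adj.device)   # membership of the 0-hop subgraphs
+        h, graph_reprs = x, []
+        for gin_mlp, s_mlp in zip(self.gin_mlps, self.s_mlps):
+            # GIN update with eps = 0: h_v = MLP(h_v + sum of neighbor representations)
+            h = gin_mlp(h + adj @ h)
+            # grow the rooted subgraphs by one hop per layer
+            reach = ((reach + reach @ adj) > 0).float()
+            # S operator: sum node representations within each rooted subgraph, then MLP
+            h_sub = s_mlp(reach @ h)
+            # sum readout to a graph-level vector for this layer
+            graph_reprs.append(h_sub.sum(dim=0))
+        return torch.cat(graph_reprs, dim=-1)
+```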
+
+Discussion. Since SaGNN is the neural version of SaWL, and SaWL has been shown to be more expressive than 1-WL, the expressive power of SaGNN is higher than that of 1-WL. The computational complexity of SaGNN is the same as that of the fast SaWL (Section 3.4), which is $O\left( {{ND} + {Nn}}\right)$. Besides, both the proposed SaGNN and the existing methods of the WL-on-subgraph paradigm [15-17] intend to uplift GNNs by encoding subgraphs. However, methods of the WL-on-subgraph paradigm incur high computational cost by extracting rooted subgraphs and applying multiple GNNs. Instead, SaGNN encodes rooted subgraphs with node representations computed on the full graph, which keeps the computational cost low. We present a detailed comparison in Appendix A.5.
+
+## 5 Experiments
+
+In this section, we first evaluate the effectiveness of the proposed fast SaWL and SaGNN on graph classification tasks. Then we conduct experiments to verify the expressiveness of the methods. Besides, we compare the computation time of our methods with 1-WL and with methods of the WL-on-subgraph paradigm to verify the efficiency of our methods.
+
+Table 1: 10-Fold Cross Validation average test accuracy (%) on TU datasets.
+
+| Methods | MUTAG | PTC_MR | Mutagenicity | NCI1 | NCI109 |
+| --- | --- | --- | --- | --- | --- |
| SP kernel | ${87.28} \pm {0.55}$ | ${58.24} \pm {2.44}$ | ${71.63} \pm {2.19}$ | ${73.47} \pm {0.21}$ | ${73.07} \pm {0.11}$ |
| WL kernel | ${82.05} \pm {0.36}$ | ${57.97} \pm {0.49}$ | - | ${82.19} \pm {0.18}$ | ${82.46} \pm {0.24}$ |
| DGK | ${87.44} \pm {2.72}$ | ${60.08} \pm {2.55}$ | - | ${73.55} \pm {0.51}$ | ${73.26} \pm {0.26}$ |
| GCN | ${78.69} \pm {6.56}$ | ${66.73} \pm {4.65}$ | ${80.84} \pm {1.35}$ | ${78.39} \pm {1.79}$ | ${77.57} \pm {1.79}$ |
| GIN | ${81.51} \pm {8.47}$ | ${54.09} \pm {6.20}$ | ${77.70} \pm {2.50}$ | ${80.0} \pm {1.40}$ | ${70.20} \pm {3.21}$ |
| Diffpool | ${80.00} \pm {6.98}$ | ${57.14} \pm {7.11}$ | ${80.55} \pm {1.98}$ | ${78.88} \pm {3.05}$ | ${76.76} \pm {2.38}$ |
| SortPool | ${85.83} \pm {1.66}$ | ${58.59} \pm {2.47}$ | ${80.41} \pm {1.02}$ | ${74.44} \pm {0.47}$ | - |
| 1-2-3-GNN | ${86.10} \pm {0.0}$ | ${60.9} \pm {0.0}$ | - | ${76.2} \pm {0.0}$ | - |
| 3-hop GNN | ${87.56} \pm {0.72}$ | - | - | ${80.61} \pm {0.34}$ | - |
| Nested GIN | ${87.90} \pm {8.20}$ | ${54.1} \pm {7.70}$ | ${82.40} \pm {2.00}$ | ${78.60} \pm {2.30}$ | ${77.20} \pm {2.90}$ |
| GraphSNN | ${91.57} \pm {2.80}$ | ${66.70} \pm {3.70}$ | - | ${81.60} \pm {2.80}$ | - |
| fast SaWL | ${90.00} \pm {3.89}$ | ${70.33} \pm {5.32}$ | ${84.32} \pm {1.48}$ | ${84.45} \pm {0.66}$ | ${85.37} \pm {0.81}$ |
| SaGNN | ${88.81} \pm {5.21}$ | ${71.78} \pm {4.43}$ | ${84.13} \pm {1.31}$ | ${83.78} \pm {1.03}$ | ${83.35} \pm {0.56}$ |
+
+### 5.1 Datasets
+
+For the graph classification task, we evaluate fast SaWL and SaGNN on seven datasets, including TU datasets [28] and an Open Graph Benchmark (OGB) dataset [29]. Graphs in these datasets represent chemical molecules, nodes represent atoms, and edges represent chemical bonds. The TU datasets include MUTAG [30], PTC_MR [31], Mutagenicity [32], NCI1 [33] and NCI109 [33]. The task is binary classification, and the metric is classification accuracy. The task on the OGB dataset ogbg-molhiv is molecular property prediction; it is binary classification, and the metric is ROC-AUC. We provide statistics of the datasets in Appendix A.6. We evaluate the expressiveness of our methods on the EXP dataset [34], which is a synthetic dataset containing 600 pairs of graphs.
+
+### 5.2 Baselines
+
+In the graph classification experiments on the TU datasets, we adopt three graph kernel methods, several GNN methods based on the 1-WL, and several methods with higher expressive power than 1-WL as baselines. The graph kernel methods include the shortest path kernel [35], the WL subtree kernel [18] and the deep graph kernel [36]. The GNN methods based on the 1-WL include GCN [22], GIN [13], Diffpool [25], and SortPool [37]. For GCN, graph representations are obtained from the learned node representations by sum pooling. The higher expressive methods include 1-2-3-GNN [14], 3-hop GNN [17], Nested GNN [15] and GraphSNN [38]. On the OGB dataset, we compare with traditional message passing GNNs and with the higher expressive methods Deep LRP-1-3 [39], Nested GNN [15] and GIN-AK$^{+}$ [16]. Results of the baselines are obtained either from the original papers or from the released source code with the published experimental settings ("-" indicates that results are not available). For GCN and GIN, we search the number of model layers in $\{2,3,4,5\}$ and the hidden dimensions in $\{32,64,128\}$. For Nested GNN, we choose the best-performing Nested GIN as the baseline according to the results in the original paper. For the results on the datasets Mutagenicity, NCI1 and NCI109, we search the subgraph height in $\{2,3,4,5\}$ with 4 model layers.
+
+### 5.3 Experimental Setup
+
+In graph classification tasks, we adopt multilayer perceptrons (MLPs) with softmax as the classifier to predict the class label of the graph. On the TU datasets, we perform 10-fold cross-validation with 9 folds for training and 1 fold for testing; a 10% split of the training set is used for model selection [40]. We report the average and standard deviation (in percentage) of the test accuracy across the 10 folds. We train the models with batch size 32. On the OGB dataset, the experiments are conducted 10 times, and the average ROC-AUC scores are reported. We train the models with batch size 256. For all datasets, we implement the experiments with PyTorch and employ the Adam optimizer with a learning rate of 0.001 to optimize the model. For fast SaWL, we search the number of iterations in $\{2,3\}$. For SaGNN, we search the number of model layers in $\{2,3,4\}$. In the training process, we adopt an early stopping strategy with patience 30, and we report the test results at the epoch of best validation.
+
+Table 2: Performance evaluation on the OGB dataset (ogbg-molhiv).
+
+| Methods | Validation (AUC) | Test (AUC) |
+| --- | --- | --- |
| GCN [22] | ${82.04} \pm {1.41}$ | ${76.06} \pm {0.97}$ |
| GIN [13] | ${82.32} \pm {0.90}$ | ${75.58} \pm {1.40}$ |
| Deep LRP-1-3 [39] | ${81.31} \pm {0.88}$ | ${76.87} \pm {1.80}$ |
| Nested GNN [15] | ${83.17} \pm {1.99}$ | ${78.34} \pm {1.86}$ |
| GIN-AK+ [16] | - | ${79.61} \pm {1.19}$ |
| fast SaWL | ${79.13} \pm {0.69}$ | ${78.29} \pm {0.48}$ |
| SaGNN | ${81.06} \pm {1.14}$ | ${78.86} \pm {0.73}$ |
+
+Table 3: Results on EXP.
+
+| Model | Test Accuracy (%) |
+| --- | --- |
| GCN [22] | ${50.0} \pm {0.00}$ |
| GIN [13] | ${50.0} \pm {0.00}$ |
| GCN-RNI [34] | ${98.0} \pm {1.85}$ |
| PPGN [41] | ${100.0} \pm {0.00}$ |
| 3-GCN [14] | ${99.7} \pm {0.004}$ |
| Nested GNN [15] | ${99.9} \pm {0.26}$ |
| GIN-AK+ [16] | ${100.0} \pm {0.00}$ |
| fast SaWL | ${99.50} \pm {0.70}$ |
| SaGNN | ${99.67} \pm {0.70}$ |
+
+The experimental setup for the expressive power evaluation on the EXP dataset is kept the same as in [34]. We use Nvidia V100 GPUs to run the experiments.
+
+### 5.4 Effectiveness Evaluation
+
+Performance on the Graph Classification Task. Results of graph classification on the TU and OGB datasets are shown in Tables 1 and 2. Compared with graph kernel methods and traditional GNNs based on 1-WL, our fast SaWL and SaGNN gain strong improvements. In particular, both proposed methods achieve better performance than the WL subtree kernel and GIN, which demonstrates their higher discriminating power experimentally. This verifies that augmenting the subtree patterns obtained by WL/GNN with subgraph information is effective for the graph classification task. Compared with the methods that are more expressive than traditional message passing GNNs, i.e., 1-2-3-GNN, 3-hop GNN, Nested GNN and GraphSNN, our fast SaWL and SaGNN still perform better on most TU datasets. Notably, our fast SaWL achieves this progress with low computational cost. The proposed methods achieve comparable results to other highly expressive methods on the larger-scale OGB dataset. The results show that our methods achieve higher or comparable performance relative to methods with high computational cost. We adopt GIN as the GNN encoder in SaGNN. The improvements compared to GIN verify the effectiveness of the S operator, which provides additional subgraph information for graph classification. The neural version SaGNN achieves slightly lower performance than fast SaWL on some small-scale datasets, which may be because the neural model cannot be trained sufficiently with limited training data. On the larger-scale OGB dataset, with sufficient training data, SaGNN achieves better results than fast SaWL. In summary, fast SaWL and SaGNN achieve improvements over competitive baselines on the graph classification task.
+
+Expressive Power Evaluation. We first evaluate the expressiveness on the EXP dataset and then show cases of the graph isomorphism test in Appendix A.7. Results on EXP are shown in Table 3; some baseline results are taken from [34]. Each pair of graphs in EXP is non-isomorphic yet 1-WL indistinguishable, and the results of GCN and GIN confirm this. GCN-RNI [34], PPGN [41], 3-GCN [14], Nested GNN [15] and GIN-AK ${}^{ + }$ [16] are five baselines with higher expressive power, as well as higher computational cost, than 1-WL. Our fast SaWL and SaGNN consistently achieve very high accuracy and distinguish nearly all graph pairs. The results are comparable to the $k$ -GNNs [14, 41] and Nested GNN [15], which are considerably more computationally complex. This verifies the high expressive power of fast SaWL and SaGNN, consistent with the theoretical analysis.
+
+### 5.5 Efficiency Evaluation
+
+We compare the running time of the proposed methods with baselines to verify their efficiency in practice. Our fast SaWL has higher discriminating power than 1-WL, while its accelerating version has the same time complexity as 1-WL, as demonstrated in Section 3.4. We record the running time of fast SaWL and 1-WL for obtaining the feature mappings of all graphs in four datasets. The average running time over ten runs is shown in Table 4. The running time of fast SaWL is similar to that of 1-WL: the difference is less than 0.5 seconds on the TU datasets and less than 3 seconds on ogbg-molhiv, which contains 41127 graphs. We further conduct a t-test as a significance test. The p-value is 0.8413 > 0.05, which indicates no significant difference between the running times of fast SaWL and 1-WL for computing graph feature mappings. For SaGNN, we compare the running time with a representative method of the WL-on-subgraph paradigm, Nested GNN (NGNN) [15], in Figure 4. On the TU datasets, the running time of Nested GNN is more than three times that of SaGNN. On the ogbg-molhiv dataset (abbreviated as ogb in Figure 4), we compare both the per-epoch time and the whole training time. The running time of Nested GNN is more than ten times that of SaGNN in both cases; e.g., the average time of Nested GNN per epoch is ${134.91} \pm {21.30}$ seconds, while that of SaGNN is ${9.71} \pm {0.49}$ seconds. This comparison demonstrates that SaGNN is significantly more efficient than methods of the WL-on-subgraph paradigm.
+
+Table 4: Runtime Comparison of fast SaWL with 1-WL (seconds).
+
+| Methods | Mutagenicity | NCI1 | NCI109 | ogbg-molhiv |
+| --- | --- | --- | --- | --- |
+| 1-WL | ${4.90} \pm {0.23}$ | ${4.69} \pm {0.16}$ | ${4.73} \pm {0.20}$ | ${112.25} \pm {0.68}$ |
+| fast SaWL | ${4.99} \pm {0.22}$ | ${4.81} \pm {0.20}$ | ${4.96} \pm {0.20}$ | ${115.11} \pm {0.71}$ |
+
+
+
+Figure 4: Training Time Comparison of SaGNN with a Method of the WL-on-subgraph Paradigm.
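+As a small illustration of the significance test mentioned above, a two-sample t-test on per-run runtimes could be carried out as follows; the runtime lists below are placeholders for illustration, not the measured values, which are summarized in Table 4.
+
+```python
+from scipy import stats
+
+# Hypothetical per-run runtimes (seconds) of the graph feature-mapping computation.
+runtimes_1wl = [4.90, 4.72, 5.01, 4.85, 4.95, 4.88, 4.79, 4.93, 4.87, 4.91]
+runtimes_fast_sawl = [4.99, 4.81, 5.05, 4.92, 5.02, 4.95, 4.86, 5.00, 4.94, 4.98]
+
+# Two-sample t-test: a p-value above 0.05 indicates no significant difference
+# between the running times of the two algorithms.
+t_stat, p_value = stats.ttest_ind(runtimes_1wl, runtimes_fast_sawl)
+print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
+```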
+
+## 6 Related Works
+
+The expressiveness of graph neural networks is a key research topic in graph machine learning. Many approaches with higher expressive power than 1-WL have been proposed, including high-dimensional WL based [14, 41], feature augmentation based [34, 42], subgraph encoding based [15, 16, 43] and equivariant models [26, 27, 44]. We provide a brief review here. (1) It is natural to build GNNs on high-dimensional WL algorithms to obtain high expressive power, e.g., PPGN [41] based on higher-order graph networks, and $k$ -GNNs [14] based on the set $k$ -WL algorithm. However, high-dimensional WL algorithms require enumerating node tuples, which incurs high computational cost and limits scalability and generalization. (2) Random feature-based methods augment GNNs by adding random features as additional node features. E.g., GCN-RNI [34] enhances GNNs with random node initialization, and rGIN [42] concatenates random features with node features and then applies GIN on the combined features. However, random feature augmentation limits the generalization ability of these methods. (3) Many existing subgraph-based methods first extract subgraphs centered on each node of the graph, then apply GNNs on the extracted subgraphs. E.g., Nested GNN [15] applies a base GNN to the extracted subgraphs and then obtains the whole-graph representation by global pooling, and GNN-AK [16] likewise extracts subgraphs and applies multiple GNNs. $k$ -hop GNNs [17] aggregate each node with the information from its $k$ -hop neighborhood rather than only its direct neighbors, which can identify fundamental graph properties such as connectivity and triangle-freeness. These methods can be summarized as the WL-on-subgraph paradigm (Figure 1 (b)), and their computational cost is much higher than that of 1-WL, which limits their application to large-scale graphs. We provide more related works in Appendix A.8.
+
+## 7 Conclusion
+
+Traditional message passing graph neural networks (GNNs) are at most as powerful as the 1-WL algorithm. Since the representative power of a subgraph is higher than that of a subtree, methods of the WL-on-subgraph paradigm have been proposed to improve GNNs, but they bring expensive computational cost. In contrast, we propose the subgraph-aware WL (SaWL) paradigm in this paper, which uplifts GNNs while keeping the computational complexity low. Under this paradigm, we first present an algorithm named fast SaWL, in which an additional S operator encodes subgraph information on top of WL run on the full graph. We then present the neural version of SaWL, named SaGNN, which replaces the components of SaWL with neural networks. SaWL and SaGNN are proven to be more expressive than 1-WL and achieve significant improvements in the experiments.
+
+## References
+
+[1] Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. Graph neural networks for social recommendation. In The World Wide Web Conference, pages 417-426, 2019.
+
+[2] T Gaudelet, B Day, AR Jamasb, J Soman, C Regep, G Liu, et al. Utilising graph machine learning within drug discovery and development. arXiv preprint arXiv:2012.05716, 2020.
+
+[3] Alexey Strokach, David Becerra, Carles Corbi-Verge, Albert Perez-Riba, and Philip M Kim. Fast and flexible protein design using deep graph neural networks. Cell Systems, 11(4):402-411, 2020.
+
+[4] Zhongkai Hao, Chengqiang Lu, Zhenya Huang, Hao Wang, Zheyuan Hu, Qi Liu, Enhong Chen, and Cheekong Lee. ASGN: An active semi-supervised graph neural network for molecular property prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 731-752, 2020.
+
+[5] Lesong Wei, Xiucai Ye, Yuyang Xue, Tetsuya Sakurai, and Leyi Wei. ATSE: a peptide toxicity predictor by exploiting structural and evolutionary information based on graph neural network and attention mechanism. Briefings in Bioinformatics, 22(5):bbab041, 2021.
+
+[6] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4-24, 2020.
+
+[7] Sergi Abadal, Akshay Jain, Robert Guirado, Jorge López-Alonso, and Eduard Alarcón. Computing graph neural networks: A survey from algorithms to accelerators. ACM Computing Surveys (CSUR), 54(9):1-38, 2021.
+
+[8] Yu Zhou, Haixia Zheng, Xin Huang, Shufeng Hao, Dengao Li, and Jumin Zhao. Graph neural networks: Taxonomy, advances, and trends. ACM Transactions on Intelligent Systems and Technology (TIST), 13(1):1-54, 2022.
+
+[9] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning (ICML), pages 1263-1272. PMLR, 2017.
+
+[10] B. Y. Weisfeiler and A. A. Leman. A reduction of a graph to a canonical form and an algebra arising during this reduction (in Russian). 1968.
+
+[11] Brendan L Douglas. The Weisfeiler-Lehman method and graph isomorphism testing. arXiv preprint arXiv:1101.5211, 2011.
+
+[12] Ryoma Sato. A survey on the expressive power of graph neural networks. arXiv preprint arXiv:2003.04078, 2020.
+
+[13] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations (ICLR), 2018.
+
+[14] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In AAAI Conference on Artificial Intelligence, 2019.
+
+[15] Muhan Zhang and Pan Li. Nested graph neural networks. Advances in Neural Information Processing Systems, 34, 2021.
+
+[16] Lingxiao Zhao, Wei Jin, Leman Akoglu, and Neil Shah. From stars to subgraphs: Uplifting any GNN with local structure awareness. In International Conference on Learning Representations (ICLR), 2021.
+
+[17] Giannis Nikolentzos, George Dasoulas, and Michalis Vazirgiannis. k-hop graph neural networks. Neural Networks, 130:195-205, 2020.
+
+[18] Nino Shervashidze, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12(9), 2011.
+
+[19] Nils M Kriege, Fredrik D Johansson, and Christopher Morris. A survey on graph kernels. Applied Network Science, 5(1):1-42, 2020.
+
+[20] László Babai and Ludik Kucera. Canonical labelling of graphs in linear average time. In 20th Annual Symposium on Foundations of Computer Science (SFCS 1979), pages 39-46. IEEE, 1979.
+
+[21] Martin Grohe. Descriptive Complexity, Canonisation, and Definable Graph Structure Theory, volume 47. Cambridge University Press, 2017.
+
+[22] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.
+
+[23] Junhyun Lee, Inyeop Lee, and Jaewoo Kang. Self-attention graph pooling. In International Conference on Machine Learning (ICML), pages 3734-3743. PMLR, 2019.
+
+[24] Yao Ma, Suhang Wang, Charu C Aggarwal, and Jiliang Tang. Graph convolutional networks with eigenpooling. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 723-731, 2019.
+
+[25] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems (NIPS), pages 4800-4810, 2018.
+
+[26] Fabrizio Frasca, Beatrice Bevilacqua, Michael M Bronstein, and Haggai Maron. Understanding and extending subgraph GNNs by rethinking their symmetries. arXiv preprint arXiv:2206.11140, 2022.
+
+[27] Waïss Azizian and Marc Lelarge. Expressive power of invariant and equivariant graph neural networks. In International Conference on Learning Representations (ICLR), 2021.
+
+[28] Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. TUDataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020.
+
+[29] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020.
+
+[30] Asim Kumar Debnath, Rosa L Lopez de Compadre, Gargi Debnath, Alan J Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786-797, 1991.
+
+[31] Hannu Toivonen, Ashwin Srinivasan, Ross D King, Stefan Kramer, and Christoph Helma. Statistical evaluation of the predictive toxicology challenge 2000-2001. Bioinformatics, 19(10):1183-1193, 2003.
+
+[32] Jeroen Kazius, Ross McGuire, and Roberta Bursi. Derivation and validation of toxicophores for mutagenicity prediction. Journal of Medicinal Chemistry, 48(1):312-320, 2005.
+
+[33] Nikil Wale, Ian A Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347-375, 2008.
+
+[34] Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
+
+[35] Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In Fifth IEEE International Conference on Data Mining (ICDM), pages 8 pp. IEEE, 2005.
+
+[36] Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1365-1374, 2015.
+
+[37] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+
+[38] Asiri Wijesinghe and Qing Wang. A new perspective on "How graph neural networks go beyond Weisfeiler-Lehman?". In International Conference on Learning Representations, 2021.
+
+[39] Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? Advances in Neural Information Processing Systems, 33:10383-10395, 2020.
+
+[40] Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. In International Conference on Learning Representations (ICLR), 2019.
+
+[41] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 2156-2167, 2019.
+
+[42] Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural networks. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pages 333-341. SIAM, 2021.
+
+[43] Leonardo Cotta, Christopher Morris, and Bruno Ribeiro. Reconstruction for powerful graph representations. Advances in Neural Information Processing Systems, 34:1713-1726, 2021.
+
+[44] Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. In International Conference on Learning Representations, 2021.
+
+[45] Oliver Wieder, Stefan Kohlbacher, Mélaine Kuenemann, Arthur Garon, Pierre Ducrot, Thomas Seidel, and Thierry Langer. A compact review of molecular property prediction with graph neural networks. Drug Discovery Today: Technologies, 37:1-12, 2020.
+
+[46] S Vichy N Vishwanathan, Nicol N Schraudolph, Risi Kondor, and Karsten M Borgwardt. Graph kernels. Journal of Machine Learning Research, 11:1201-1242, 2010.
+
+[47] Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten Borgwardt. Efficient graphlet kernels for large graph comparison. In Artificial Intelligence and Statistics, pages 488-495, 2009.
+
+[48] Lanning Wei, Huan Zhao, Quanming Yao, and Zhiqiang He. Pooling architecture search for graph classification. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2091-2100, 2021.
+
+[49] Haoteng Tang, Guixiang Ma, Lifang He, Heng Huang, and Liang Zhan. CommPool: An interpretable graph pooling framework for hierarchical graph representation learning. Neural Networks, 143:669-677, 2021.
+
+## A Appendix
+
+### A.1 Acceleration of the fast SaWL
+
+The S operator in fast SaWL (Section 3.2) can be calculated simultaneously with the WL encoder, which leads to the accelerating version. The idea of the acceleration is illustrated in Figure 3: each node contributes to the feature mappings of $m$ rooted subgraphs, where $m$ equals the size of the rooted subgraph centered at that node. The sizes of the rooted subgraphs can be computed simultaneously with the multiset determination of the WL encoder. We present the steps of the accelerating version below.
+
+The accelerating version proceeds in iterations. Each iteration consists of five steps (Algorithm 1): multiset determination, multiset sorting, label compression, relabeling, and feature mapping computation. Specifically, given two graphs $G$ and ${G}^{\prime }$, for node $v$ the label is denoted as ${l}_{v}^{h}$ and the identity as $i{d}_{v}$. In step 1, we aggregate the labels and the identity sets of neighbor nodes respectively. Node labels of neighbor nodes are aggregated as a multiset ${M}_{v}^{h}$: for $h = 0$, ${M}_{v}^{0} = {l}_{v}^{0}$, and for $h > 0$, ${M}_{v}^{h} = \{\{ {l}_{u}^{h - 1} \mid u \in \mathcal{N}\left( v\right) \}\}$, where $\mathcal{N}\left( v\right)$ denotes the neighbor nodes of $v$ and $\{\{ \cdot \}\}$ denotes a multiset. Identity sets of neighbor nodes are aggregated and combined with the identity of the center node, which forms a new set ${t}_{v}^{h}$: for $h = 0$, ${t}_{v}^{0} = \left\{ i{d}_{v}\right\}$, and for $h > 0$, ${t}_{v}^{h} = \left\{ i{d}_{v}, i{d}_{w} \mid w \in {t}_{u}^{h - 1}, u \in \mathcal{N}\left( v\right) \right\}$. In step 2, each label multiset ${M}_{v}^{h}$ is sorted and converted to a string ${\mathcal{S}}_{v}^{h}$ with the prefix ${l}_{v}^{h - 1}$, which prepares for the label compression. In step 3, each string is compressed to a new label with an injective hash function $g : \Sigma^{*} \rightarrow \Sigma$. The mapping alphabet is shared across graphs, which guarantees a common feature space. In step 4, we relabel each node in graphs $G$ and ${G}^{\prime }$ as ${l}_{v}^{h} \mathrel{\text{:=}} g\left( {\mathcal{S}}_{v}^{h}\right)$.
+
+We assume the minimum label in $h$ -th iteration is ${l}_{m}$ . Then, from a global-graph perspective, the value of the $i$ -th position ( $i$ starts from 0 ) of the final graph feature mapping in layer $h$ is:
+
+$$
+{\psi }_{i}^{\left( h\right) }\left( G\right) = \mathop{\sum }\limits_{{{l}_{v}^{h} = {l}_{m} + i, v \in V}}\left| {t}_{v}^{h}\right| , \tag{5}
+$$
+
+which means summing the occurrences of label ${l}_{m} + i$ over all $h$ -hop subgraphs. The final graph feature mappings obtained by fast SaWL and by the accelerating version are equivalent. In the accelerating version, the feature mappings of subgraphs do not need to be calculated separately, which reduces the computational cost and speeds up the computation.
+
+Algorithm 1 Accelerating version of fast SaWL for Graph Classification
+
+---
+
+Input: Node Labels (features) $\mathbf{X}$ ; Adjacency Matrix $\mathbf{A}$
+
+for $h = 1$ to $H$ do
+
+1. Label multiset and identity set determination
+   - Aggregate the labels of neighbor nodes centered at each node $v$ in graph $G$ as a multiset ${M}_{v}^{h}$. For $h = 0$, ${M}_{v}^{0} = {l}_{v}^{0}$, and for $h > 0$, ${M}_{v}^{h} = \left\{ {l}_{u}^{h - 1} \mid u \in \mathcal{N}\left( v\right) \right\}$.
+   - Aggregate the identity sets of neighbor nodes centered at each node $v$ in graph $G$. The identity of node $v$ and the elements of the identity sets of its neighbor nodes compose the new identity set. For $h = 0$, ${t}_{v}^{0} = \{ i{d}_{v}\}$, and for $h > 0$, ${t}_{v}^{h} = \left\{ i{d}_{v}, i{d}_{w} \mid w \in {t}_{u}^{h - 1}, u \in \mathcal{N}\left( v\right) \right\}$.
+
+2. Sorting labels in each label multiset
+   - Sort the label elements of the label multiset in ascending order and concatenate them into a string ${\mathcal{S}}_{v}^{h}$. Add ${l}_{v}^{h - 1}$ as a prefix to ${\mathcal{S}}_{v}^{h}$.
+
+3. Label compression
+   - Map each string ${\mathcal{S}}_{v}^{h}$ to a compressed label using a hash function $g : \Sigma^{*} \rightarrow \Sigma$ such that $g\left( {\mathcal{S}}_{v}^{h}\right) = g\left( {\mathcal{S}}_{w}^{h}\right)$ if and only if ${\mathcal{S}}_{v}^{h} = {\mathcal{S}}_{w}^{h}$.
+
+4. Relabeling
+   - Set ${l}_{v}^{h} \mathrel{\text{:=}} g\left( {\mathcal{S}}_{v}^{h}\right)$ for all nodes in the graph.
+
+5. $i$-th position of the graph feature vector in layer $h$
+   - ${\psi }_{i}^{\left( h\right) }\left( G\right) = \mathop{\sum }\limits_{{{l}_{v}^{h} = {l}_{m} + i, v \in V}}\left| {t}_{v}^{h}\right|$.
+
+end for
+
+Output: Graph Feature Vector $\psi \left( G\right) = \left\lbrack {{\psi }^{\left( 0\right) }\left( G\right) ,\ldots ,{\psi }^{\left( H\right) }\left( G\right) }\right\rbrack$
+
+---
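+The following is a minimal Python sketch of Algorithm 1 under simplifying assumptions: graphs are given as adjacency lists, initial labels as a dictionary, and the hash function $g$ is realized with a dictionary that must be shared across graphs; it is meant to illustrate the steps, not to reproduce the authors' implementation.
+
+```python
+from collections import Counter
+
+def accelerated_fast_sawl(adj, labels, H, compress=None):
+    """Sketch of the accelerating version of fast SaWL (Algorithm 1).
+    adj:      dict {node: list of neighbor nodes}
+    labels:   dict {node: initial integer label}
+    H:        number of iterations
+    compress: shared hash table realizing g; reuse it across graphs so their
+              feature vectors live in a common space
+    Returns psi(G) as a Counter over (iteration, label) pairs (sparse Eq. 5)."""
+    if compress is None:
+        compress = {}
+    l = dict(labels)                     # current labels l_v^h
+    t = {v: {v} for v in adj}            # identity sets t_v^0 = {id_v}
+    psi = Counter()
+    for v in adj:                        # h = 0: each node counts once for its own label
+        psi[(0, l[v])] += 1
+    for h in range(1, H + 1):
+        new_l, new_t = {}, {}
+        for v in adj:
+            # Step 1: label multiset and identity set determination
+            multiset = tuple(sorted(l[u] for u in adj[v]))
+            new_t[v] = {v}.union(*(t[u] for u in adj[v]))
+            # Steps 2-3: prefixed sorted multiset, compressed to a new label
+            key = (l[v], multiset)
+            new_l[v] = compress.setdefault(key, len(compress))
+        l, t = new_l, new_t              # Step 4: relabeling
+        for v in adj:                    # Step 5: node v contributes |t_v^h| to label l_v^h
+            psi[(h, l[v])] += len(t[v])
+    return psi
+```
+
+Calling the function on two graphs with the same `compress` dictionary yields comparable feature vectors; if they differ, the graphs are distinguished.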
+
+### A.2 Proof of Proposition 1
+
+Proof. For graphs $G$ and $H$ , if they can be discriminated by 1-WL, there must exist a constant $h$ such that ${\phi }^{\left( h\right) }\left( G\right) \neq {\phi }^{\left( h\right) }\left( H\right)$ . Since ${\phi }^{\left( h\right) }\left( G\right) = \left( {{c}_{h}\left( {G,{\ell }_{1}^{h}}\right) ,\ldots ,{c}_{h}\left( {G,{\ell }_{\left| {\mathcal{L}}_{h}\right| }^{h}}\right) }\right)$ , there must exist an ${\ell }_{i}^{h}$ such that ${c}_{h}\left( {G,{\ell }_{i}^{h}}\right) \neq {c}_{h}\left( {H,{\ell }_{i}^{h}}\right)$ . Then there must be different subgraphs in the two graphs such that ${c}_{h}\left( {{G}_{v}^{h},{\ell }_{i}^{h}}\right) \neq {c}_{h}\left( {{H}_{u}^{h},{\ell }_{i}^{h}}\right)$ , where ${G}_{v}^{h}$ is the $h$ -hop subgraph around node $v$ of $G$ . As a result, the multisets of subgraph feature mappings of graphs $G$ and $H$ are not equal, i.e., $\left\{ {\phi \left( {G}_{v}^{h}\right) \mid v \in \mathcal{V}\left( G\right) }\right\} \neq \left\{ {\phi \left( {H}_{u}^{h}\right) \mid u \in \mathcal{V}\left( H\right) }\right\}$ . With the condition that READOUT is an injective function, we have $\operatorname{READOUT}\left( \left\{ {\phi \left( {G}_{v}^{h}\right) \mid v \in \mathcal{V}\left( G\right) }\right\} \right) \neq \operatorname{READOUT}\left( \left\{ {\phi \left( {H}_{u}^{h}\right) \mid u \in \mathcal{V}\left( H\right) }\right\} \right)$ , i.e., ${\psi }_{h}\left( G\right) \neq {\psi }_{h}\left( H\right)$ . In other words, graphs $G$ and $H$ can also be discriminated by SaWL.
+
+### A.3 Explanation of Proposition 2
+
+We further explain Proposition 2 in Section 3.3. For graphs $G$ and $H$ , if $\{\{ {s}_{v}^{h} \mid v \in \mathcal{V}\left( G\right) \}\} \neq \{\{ {s}_{u}^{h} \mid u \in \mathcal{V}\left( H\right) \}\}$ , then the two graphs can be distinguished by the $h$ -layer SaWL. Here ${s}_{v}^{h}$ is the number of nodes at exact shortest distance $h$ from node $v$ . When $h = 1$ , if the numbers of 1-hop neighbor nodes differ between $G$ and $H$ , 1-WL can discriminate the two graphs, i.e., ${\phi }^{\left( h\right) }\left( G\right) \neq {\phi }^{\left( h\right) }\left( H\right)$ ; by Proposition 1, SaWL can discriminate them as well. Assume the numbers of 1-hop neighbor nodes are the same; when $h = 2$ , the numbers of nodes at shortest distance 2 differ between $G$ and $H$ . Then the sizes of the 2-hop rooted subgraphs in $G$ and $H$ differ, which leads to different multisets of rooted subgraphs in the two graphs. With the injective readout function, the final graph feature mappings of $G$ and $H$ are different. Similarly, assume the numbers of $(h-1)$ -hop neighbor nodes in the two graphs are the same; if the numbers of nodes at shortest distance $h$ differ, this results in different multisets of rooted subgraphs and hence different graph feature mappings. Therefore, graphs $G$ and $H$ can be discriminated by SaWL.
+
+For a further intuitive understanding, we take the implemented algorithm of SaWL, i.e., fast SaWL, as an example. From the perspective of the accelerating version, the size of a rooted subgraph equals the contribution of its center node to the whole graph feature mapping (shown in Figure 3(b)). Therefore, different sizes of rooted subgraphs directly lead to different feature mappings for graphs $G$ and $H$ , and the graphs can be discriminated by fast SaWL.
+
+### A.4 Graph Examples
+
+In this subsection, we provide two classes of graphs that cannot be discriminated by WL [12] but can be discriminated by the proposed SaWL. Note that the labels of all nodes in Figure 5 are identical; SaWL discriminates the graphs using the graph structure alone, and additional node label information would only make the discrimination easier.
+
+The first class consists of $k$ -regular graphs of the same size (Figure 5(b)-(f)). The 6-node 2-regular graphs in Figure 5(b), the 8-node 3-regular graphs in Figure 5(c), the 12-node 4-regular graphs in Figure 5(d) and the two pairs of circulant graphs in Figures 5(e) and 5(f) can all be discriminated by a 2-layer SaWL. The green nodes are center nodes, and the grey nodes are 2-hop neighbors of the green nodes. Take Figure 5(c) as an example: the green node in the left graph has two 2-hop shortest neighbors (marked grey), whereas the green node in the right graph has three (the grey nodes in the right graph). According to Proposition 2 in Section 3.3, the left and right graphs can therefore be discriminated by SaWL with two layers.
+
+For a more intuitive understanding, we present the feature mappings of the graphs in Figure 5(c) under 1-WL and under our fast SaWL. We assume the initial label of each node is 0. For 1-WL, the multiset determination in the 1st and 2nd iterations yields $\left( 0,\{ 0,0,0\} \right) \rightarrow 1$ and $\left( 1,\{ 1,1,1\} \right) \rightarrow 2$ . The feature mappings of the left and right graphs after the 2nd iteration are equal, i.e., $\phi \left( {G}_{\text{left }}\right) = \phi \left( {G}_{\text{right }}\right) = \left( 8,8,8\right)$ . For our fast SaWL, the feature mapping of the left graph is $\psi \left( {G}_{\text{left }}\right) = \left( {8,{32},{52}}\right)$ , while that of the right graph is $\psi \left( {G}_{\text{right }}\right) = \left( {8,{32},{56}}\right)$ . The difference comes from the green node and its equivalent nodes: label 2 occurs 52 times across all rooted subgraphs of the left graph and 56 times across those of the right graph.
+
+The second class includes some non-regular non-isomorphic graphs, e.g., Figure 5(a). The two graphs are non-regular, yet WL cannot distinguish them. SaWL can discriminate the two graphs with three layers. We take the pink nodes as center nodes. In the left graph, the pink node has three 3-hop shortest neighbors, while in the right graph it has two (marked grey). Therefore, the two graphs can be distinguished by SaWL.
+
+
+
+Figure 5: Graph pairs that can be discriminated by SaWL, but not by WL.
+
+### A.5 Comparison with WL-on-subgraphs methods
+
+We discuss the relation of the proposed subgraph-aware WL paradigm (Figure 1(c)) to methods of the WL-on-subgraph paradigm (Figure 1(b)). Methods of the WL-on-subgraph paradigm usually extract a subgraph around each node of the graph and then apply a GNN to each extracted subgraph separately, such as Nested GNN [15], GNN-AK [16] and $k$ -hop GNN [17]. However, the computational complexity of this paradigm is much higher than that of our proposed subgraph-aware WL paradigm. For a graph $G$ with $N$ nodes, let $D$ denote the average node degree and $n$ the average number of nodes per subgraph. Extracting $k$ -hop subgraphs from each node takes $O\left( {k \cdot N \cdot D}\right)$ , and applying GNNs on all extracted subgraphs takes $O\left( {N \cdot n \cdot D}\right)$ ; in total, the computational cost is $O\left( {k \cdot N \cdot D + N \cdot n \cdot D}\right)$ . Compared to high-dimensional GNNs based on $k$ -WL, methods of the WL-on-subgraph paradigm reduce the computational cost, but the complexity is still much higher than that of 1-WL and of our proposed methods.
+
+Essentially, both the proposed methods of the subgraph-aware WL paradigm and the existing methods of the WL-on-subgraph paradigm aim to uplift GNNs by encoding subgraphs. However, the WL-on-subgraph methods apply GNNs on all extracted subgraphs separately, which brings high computational cost. In contrast, our subgraph-aware WL methods encode subgraphs while keeping the computational cost low (shown in Section 3.4).
+
+### A.6 Datasets
+
+We provide statistics of the datasets used in the graph classification tasks in Table 5. We adopt molecular datasets for evaluation, including the TU datasets and an OGB dataset. Nodes in these datasets denote atoms, and edges denote chemical bonds. For each dataset, we report the total number of graphs, the number of graphs with a positive ground-truth label, the average numbers of nodes and edges, and the number of node label types.
+
+Table 5: Statistics of datasets.
+
+| Dataset | #Graphs | #Positive | #Avg. Nodes | #Avg. Edges | #Node Label Types |
+| --- | --- | --- | --- | --- | --- |
+| MUTAG | 188 | 125 | 17.9 | 19.8 | 7 |
+| PTC_MR | 344 | 152 | 25.6 | 29.4 | 18 |
+| Mutagenicity | 4337 | 2401 | 30.3 | 30.8 | 13 |
+| NCI1 | 4110 | 2057 | 29.9 | 32.3 | 37 |
+| NCI109 | 4127 | 2079 | 29.6 | 32.1 | 38 |
+| ogbg-molhiv | 41127 | 1443 | 25.5 | 27.5 | 119 |
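+As a hedged sketch of how such benchmarks are commonly obtained (the paper only states that experiments are implemented in PyTorch, so the use of PyTorch Geometric and the OGB package here is an assumption), the TU datasets and ogbg-molhiv can be loaded as follows:
+
+```python
+from torch_geometric.datasets import TUDataset
+from ogb.graphproppred import PygGraphPropPredDataset
+
+# TU benchmark graphs (nodes: atoms, edges: bonds), e.g. NCI1 with 4110 graphs.
+nci1 = TUDataset(root="data/TU", name="NCI1")
+print(len(nci1), nci1.num_classes, nci1.num_node_features)
+
+# OGB molecular dataset (41127 graphs) with its standard split.
+molhiv = PygGraphPropPredDataset(name="ogbg-molhiv", root="data/ogb")
+split_idx = molhiv.get_idx_split()
+train_graphs = molhiv[split_idx["train"]]
+```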
+
+### A.7 The Accelerating Version for graph isomorphism test
+
+The accelerating version of fast SaWL provided in Appendix A.1 can be utilized for the graph isomorphism test, which has the same time complexity as 1-WL, but higher discriminating power than 1-WL. We first present the definition of the graph isomorphism test, and then we explain the steps and the termination condition of the accelerating version in the graph isomorphism test.
+
+Graph Isomorphism Test. For a graph $G$ , $\mathcal{V}\left( G\right)$ and $\mathcal{E}\left( G\right)$ denote its sets of nodes and edges respectively. Two graphs $G$ and $H$ are isomorphic if there exists a bijection $\xi : \mathcal{V}\left( G\right) \rightarrow \mathcal{V}\left( H\right)$ that preserves the edge relation, i.e., $\left( {u, v}\right) \in \mathcal{E}\left( G\right)$ if and only if $\left( {\xi \left( u\right) ,\xi \left( v\right) }\right) \in \mathcal{E}\left( H\right)$ for all $u, v \in \mathcal{V}\left( G\right)$ . Although the exact complexity of the graph isomorphism problem is still open, there are efficient graph isomorphism algorithms in practice [11].
+
+The Accelerating Version of Fast SaWL for Graph Isomorphism Test. When used for the graph isomorphism test, each iteration of the accelerating version consists of four steps, i.e., steps 1-4 of Algorithm 1. Given two graphs $G$ and $H$ , the accelerating version terminates after iteration $h$ if:
+
+$$
+\left\{ {\left( {{l}_{v}^{h},\left| {t}_{v}^{h}\right| }\right) \mid v \in \mathcal{V}\left( G\right) }\right\} \neq \left\{ {\left( {{l}_{u}^{h},\left| {t}_{u}^{h}\right| }\right) \mid u \in \mathcal{V}\left( H\right) }\right\} . \tag{6}
+$$
+
+${l}_{v}^{h}$ denotes the label of node $v$ in the $h$ -th iteration, and it represents a $h$ -height subtree pattern. ${t}_{v}^{h}$ denotes the set of the node identities (IDs). It contains node identities in the subtree pattern without repetition due to the uniqueness of the node identity. The termination condition implies that fast SaWL can determine that two graphs are non-isomorphic once the updated labels or the number of nodes in the subtree patterns are different.
+
+The terminating condition of 1-WL can be written as $\left\{ {{l}_{v}^{h} \mid v \in \mathcal{V}\left( G\right) }\right\} \neq \left\{ {{l}_{u}^{h} \mid u \in \mathcal{V}\left( H\right) }\right\}$ [18]. The terminating condition of the accelerating version of fast SaWL (Eq. 6) is stricter than that of 1-WL, since it adds a structural constraint. Therefore, once two graphs are determined non-isomorphic by the 1-WL algorithm, they must also be determined non-isomorphic by the proposed implementation. Moreover, many graph pairs that WL cannot discriminate can be determined to be non-isomorphic by fast SaWL (e.g., the graph pairs in Figure 5). To conclude, the discriminating power of SaWL is higher than that of 1-WL in the graph isomorphism test.
+
+Cases. We take the graph pair in Figure 5(c) as an example; the iteration process has been described in Appendix A.4. We denote the left graph as $G$ and the right graph as $H$ . After the 2nd iteration, for our fast SaWL, the multiset of graph $G$ is $\left\{ \left( {2,6}\right) ,\left( {2,7}\right) \mid v \in \mathcal{V}\left( G\right) \right\}$ and that of graph $H$ is $\left\{ \left( {2,7}\right) \mid u \in \mathcal{V}\left( H\right) \right\}$ , so $\left\{ \left( {2,6}\right) ,\left( {2,7}\right) \mid v \in \mathcal{V}\left( G\right) \right\} \neq \left\{ \left( {2,7}\right) \mid u \in \mathcal{V}\left( H\right) \right\}$ . The terminating condition is satisfied, and the two graphs are determined to be non-isomorphic. In contrast, for 1-WL, $\left\{ 2 \mid v \in \mathcal{V}\left( G\right) \right\} = \left\{ 2 \mid u \in \mathcal{V}\left( H\right) \right\}$ , so the two graphs cannot be discriminated. All graph pairs in Figure 5 can be discriminated by fast SaWL in this way.
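+The termination check of Eq. (6) amounts to comparing multisets of (label, identity-set size) pairs; a minimal sketch, assuming the per-node state has already been computed (e.g., by a routine like the one in Appendix A.1), is:
+
+```python
+from collections import Counter
+
+def non_isomorphic_certificate(l_G, t_G, l_H, t_H):
+    """Check the termination condition of Eq. (6) after some iteration h.
+    l_G, l_H: dicts mapping node -> current label l_v^h
+    t_G, t_H: dicts mapping node -> identity set t_v^h
+    Returns True if the multisets of (label, |t|) pairs differ, i.e. the
+    graphs are certified non-isomorphic by the accelerated fast SaWL."""
+    pairs_G = Counter((l_G[v], len(t_G[v])) for v in l_G)
+    pairs_H = Counter((l_H[u], len(t_H[u])) for u in l_H)
+    return pairs_G != pairs_H
+```
+
+For the pair in Figure 5(c), `pairs_G` would contain (2, 6) entries that are absent from `pairs_H`, so the function returns True.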
+
+### A.8 More Related Works
+
+We present more related works here, including graph kernel methods and traditional message passing GNNs based on the 1-WL algorithm.
+
+Graph Kernels. Graph classification is an important task with many valuable downstream applications, such as chemical molecular property prediction [45] and pharmaceutical drug research [2]. Graph classification aims to predict the labels of given graphs by utilizing graph structure and feature information. Historically, graph kernels have been the dominant approaches for graph classification. Graph kernels first decompose graphs into substructures, e.g., paths, graphlets, and subtrees; the kernel matrix of the graphs is then calculated by comparing the predefined substructures. Typical graph kernel methods include the shortest path kernel [35], the random walk graph kernel [46], the graphlet kernel [47], and the WL subtree kernel [18]. The kernel matrix is fed to a kernel machine to obtain the predicted graph labels. However, graph kernel methods are limited by their reliance on heuristic feature extraction.
+
+GNNs based on the 1-WL algorithm. Recently, Graph Neural Networks (GNNs) have become popular methods for graph classification and have achieved great success [37, 48]. These methods can be viewed as neural implementations of the 1-WL [13, 22], which first update node representations by aggregating neighbor nodes and then pool the nodes to obtain the graph representation. Many pooling strategies have been proposed for graph classification [23, 25, 49]. However, it has been proved that the expressive power of traditional GNNs based on 1-WL is at most that of 1-WL [13, 14], which limits the performance of GNN-pooling methods on the graph classification task.
\ No newline at end of file
diff --git a/papers/LOG/LOG 2022/LOG 2022 Conference/ha9hPpthvQ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/LOG/LOG 2022/LOG 2022 Conference/ha9hPpthvQ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6634014ca0700da03be05ce0e120ae4187029471
--- /dev/null
+++ b/papers/LOG/LOG 2022/LOG 2022 Conference/ha9hPpthvQ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,299 @@
+§ TOWARDS EFFICIENT AND EXPRESSIVE GNNS FOR GRAPH CLASSIFICATION VIA SUBGRAPH-AWARE WEISFEILER-LEHMAN
+
+Anonymous Author(s)
+
+Anonymous Affiliation
+
+Anonymous Email
+
+§ ABSTRACT
+
+The expressive power of GNNs is upper-bounded by Weisfeiler-Lehman (WL) test. To achieve GNNs with high expressiveness, researchers resort to subgraph-based GNNs (WL/GNN on subgraphs), deploying GNNs on subgraphs centered around each node to encode subgraphs instead of rooted subtrees like WL. However, deploying multiple GNNs on subgraphs suffers from much higher computational cost than deploying a single GNN on the whole graph, limiting its application to large-size graphs. In this paper, we propose a novel paradigm, namely Subgraph-aware WL (SaWL), to obtain graph representation that reaches subgraph-level expressiveness with a single GNN. We prove that SaWL has beyond-WL capability for graph isomorphism test, and propose a fast implementation for it. To generalize SaWL to graphs with continuous node features, we propose a neural version named Subgraph-aware GNN (SaGNN) to learn graph representation. Both SaWL and SaGNN are more expressive than 1-WL while having similar computational cost to 1-WL/GNN, without causing exponentially higher complexity like other more expressive GNNs. Experimental results on several benchmark datasets demonstrate that fast SaWL and SaGNN significantly outperform competitive baseline methods on the task of graph classification, while achieving high efficiency.
+
+§ 1 INTRODUCTION
+
+Graph-structured data widely exist in the real world, and modeling graphs has become an important topic in the field of machine learning. Graph learning has widespread applications [1-3], and many valuable applications can be formulated as graph classification, e.g., molecular property prediction [4], drug toxicity prediction [5]. Graph classification aims to predict the label of the given graph by exploiting graph structure and feature information. Learning expressive representations of graphs is crucial for classifying graphs of different structural characteristics.
+
+Recently, Graph Neural Networks (GNNs) have achieved great success in graph classification tasks [6-8]. GNNs that follow a message passing scheme first iteratively aggregate neighbor information to update node representations, then pool node representations into graph-level representations [9]. Essentially, GNNs are parameterized generalizations of the 1-dimensional Weisfeiler-Lehman algorithm (1-WL) [10], which encodes each node by its rooted subtree pattern [11], as shown in Figure 1 (a). Despite the success of traditional message passing GNNs, their expressive power is theoretically upper-bounded by 1-WL, which is known to have limited power in distinguishing many non-isomorphic graphs [12-14].
+
+To uplift the expressive power of GNNs, researchers adopt a paradigm of WL/GNN on subgraphs (Figure 1 (b)), which encodes rooted subgraphs instead of rooted subtrees as node representations [15-17]. Methods under this paradigm first extract rooted subgraphs (i.e., the subgraph induced by the neighbor nodes within $h$ hops of a center node), and then apply GNNs to each extracted subgraph separately. However, as GNNs are applied to subgraphs extracted from every node of the graph, the computational cost of these methods is much higher than that of traditional message passing GNNs, especially when the subgraphs are of similar size to the whole graph. In this paper, we propose a novel paradigm of Subgraph-aware WL/GNN (SaWL), which reaches higher expressiveness than 1-WL with a single GNN (Figure 1 (c)). It first deploys WL/GNN on the full graph to obtain node representations, and then aggregates the nodes within each subgraph to achieve subgraph awareness. The proposed paradigm greatly reduces the computational cost of existing WL-on-subgraph methods while achieving higher expressive power than 1-WL. Under the paradigm, we propose an algorithm as a fast implementation of SaWL, which consists of a WL encoder and a subgraph operator (S operator). We first apply a standard WL on the full graph to iteratively update each node label based on its current label and the labels of its neighbors [18]. After each iteration of WL, we use the S operator to encode the rooted subgraph of each node by aggregating the current labels of nodes within the subgraph. The whole graph feature mapping at this iteration is then obtained by pooling the subgraph feature mappings. Finally, we concatenate the graph feature mappings of different iterations into the final graph feature mapping for graph classification. We then generalize SaWL to a neural version, Subgraph-aware GNN (SaGNN).
+
+ < g r a p h i c s >
+
+Figure 1: (a) WL encodes nodes by rooted subtrees, which has limited expressiveness. (b) WL/GNN on Subgraphs paradigm extracts rooted subgraphs and applies GNNs on each rooted subgraph, which is computationally expensive. (c) Our Subgraph-aware WL/GNN applies WL/GNN on the full graph and then encodes rooted subgraphs by aggregating nodes within the subgraph. The proposed paradigm possesses higher expressive power than 1-WL while keeping the computational cost low.
+
+Compared to the paradigm of WL/GNN-on-subgraphs, the proposed Subgraph-aware WL/GNN does not need to copy a full $n$ -node graph into $n$ subgraphs (each rooted at a node) and run WL/GNN on each subgraph separately (so that the same node can have multiple representations when it appears in different subgraphs). Instead, Subgraph-aware WL/GNN only runs WL/GNN on the full graph and encodes subgraph information based on the "global" WL/GNN node representations. It encodes the subgraph information while avoiding the need to apply WL/GNN to each extracted subgraph separately, which improves expressiveness while keeping the computational cost low.
+
+We evaluate the effectiveness of the proposed fast SaWL and SaGNN on graph classification tasks via several benchmark datasets, and then conduct an expressive power evaluation to verify the high distinguishing power of our methods. We further compare the running time of our methods with other highly expressive methods. The experimental results show that our methods achieve both high effectiveness and high efficiency.
+
+§ 2 PRELIMINARY
+
+§ 2.1 WEISFEILER-LEHMAN AND FEATURE MAPPING
+
+Weisfeiler-Lehman (1-WL) [10] is one of the most widely used algorithms for the graph isomorphism test and succeeds on a broad class of graphs [19, 20]. Specifically, 1-WL proceeds in iterations denoted by $h$ , and each iteration includes multiset determination, injective mapping and relabeling [18].
+
+Given two graphs $G$ and $H$ , WL first aggregates the labels of neighbor nodes as a multiset ${M}_{v}^{h}$ . For $h = 0$ , ${M}_{v}^{0} = {l}_{v}^{0}$ , and for $h > 0$ , ${M}_{v}^{h} = \{\{ {l}_{u}^{h - 1} \mid u \in \mathcal{N}\left( v\right) \}\}$ , where ${l}_{v}^{h}$ is the label of node $v$ in the $h$ -th iteration, $\mathcal{N}\left( v\right)$ denotes the neighbor nodes of $v$ and $\{\{ \cdot \}\}$ denotes a multiset. Note that a multiset is a generalized set that allows repeated elements [13]. Then, an injective function updates the label of each node, ${l}_{v}^{h} \mathrel{\text{ := }} \operatorname{HASH}\left( {{l}_{v}^{h - 1},{M}_{v}^{h}}\right)$ . The procedure repeats until the multisets of node labels of the two graphs differ, or the number of iterations reaches a predetermined value. The feature mapping of the whole graph can be obtained after each iteration, using the multiset of node labels in the $h$ -th iteration to represent the whole graph [18]. Although 1-WL works well for testing isomorphism on many graphs, its distinguishing power is limited [12, 21].
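+A compact sketch of this relabeling procedure, as one simplified reading of the description above (graphs as adjacency lists; the injective HASH realized with a dictionary that must be shared across the graphs being compared):
+
+```python
+from collections import Counter
+
+def wl_relabel(adj, labels, num_iters, hash_table=None):
+    """Simplified 1-WL relabeling.
+    adj: dict {node: list of neighbors}; labels: dict {node: initial label}.
+    Returns the label histogram (whole-graph feature mapping) per iteration."""
+    if hash_table is None:
+        hash_table = {}
+    histograms = [Counter(labels.values())]
+    for _ in range(num_iters):
+        new_labels = {}
+        for v in adj:
+            # multiset determination + injective mapping of (own label, sorted neighbor labels)
+            key = (labels[v], tuple(sorted(labels[u] for u in adj[v])))
+            new_labels[v] = hash_table.setdefault(key, len(hash_table))
+        labels = new_labels                      # relabeling
+        histograms.append(Counter(labels.values()))
+    return histograms
+```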
+
+§ 2.2 GRAPH NEURAL NETWORKS
+
+Traditional message passing Graph Neural Networks (GNNs) follow an aggregation and update scheme, which can be viewed as a neural implementation of the 1-WL [13, 22]. Each node aggregates the features of its neighbor nodes, combines them with its own features, and updates them to a new representation:
+
+$$
+{\mathbf{h}}_{v}^{k} = \operatorname{UPDATE}\left( {{\mathbf{h}}_{v}^{k - 1},\operatorname{AGGREGATE}\left( {{\mathbf{h}}_{u}^{k - 1} \mid u \in \mathcal{N}\left( v\right) }\right) }\right) , \tag{1}
+$$
+
+where the UPDATE and AGGREGATE functions are implemented with neural networks. Then, the whole graph representation can be computed by a pooling/readout operation like sum [23-25]:
+
+$$
+{\mathbf{h}}^{k}\left( G\right) = \operatorname{READOUT}\left( {{\mathbf{h}}_{v}^{k} \mid v \in \mathcal{V}\left( G\right) }\right) . \tag{2}
+$$
+
+GNNs have been popular architectures for representation learning on graphs. However, it has been proved that the expressive power of message passing GNNs is upper bounded by the 1-WL algorithm $\left\lbrack {{13},{14}}\right\rbrack$ , which limits the performance on graph classification tasks.
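+A minimal dense-adjacency sketch of Eqs. (1)-(2) in PyTorch, with sum aggregation and sum readout (a GIN-style choice; the layer sizes and two-layer MLP are illustrative assumptions, not the paper's specification):
+
+```python
+import torch
+import torch.nn as nn
+
+class MessagePassingLayer(nn.Module):
+    """h_v^k = UPDATE(h_v^{k-1}, AGGREGATE({h_u^{k-1} : u in N(v)})) with sum
+    aggregation; A is a dense (N x N) adjacency matrix, H is (N x d)."""
+    def __init__(self, in_dim, out_dim):
+        super().__init__()
+        self.update = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
+                                    nn.Linear(out_dim, out_dim))
+
+    def forward(self, A, H):
+        aggregated = A @ H                 # sum of neighbor features
+        return self.update(H + aggregated)
+
+def readout(H):
+    """Whole-graph representation via sum pooling over node representations (Eq. 2)."""
+    return H.sum(dim=0)
+```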
+
+§ 3 SUBGRAPH-AWARE WEISFEILER-LEHMAN
+
+We propose a new paradigm of Subgraph-aware Weisfeiler-Lehman (SaWL), which exceeds the expressive power of 1-WL while keeping low computational complexity. The paradigm first iteratively applies WL/GNN to the original input graph. With the obtained node representations at each iteration, the paradigm encodes each rooted subgraph by hashing the node representations within its range. Then, the subgraph representations are pooled to obtain the whole graph representation.
+
+§ 3.1 SAWL FOR GRAPH CLASSIFICATION
+
+SaWL consists of a WL encoder, a subgraph encoding operator (the S operator) and a graph feature mapping module. For graph $G$ , the WL encoder executes normal WL steps described in section 2.1, which outputs the updated node labels $\left\{ {{l}_{v}^{h} \mid v \in \mathcal{V}\left( G\right) }\right\}$ , where ${l}_{v}^{h}$ is the label of node $v$ in the $h$ -th iteration. The core of the proposed SaWL lies in the additional S operator, which encodes subgraph information with the results of each WL iteration. We describe the S operator in the following.
+
+S operator. We employ an injective hash function that acts on labels of nodes within the subgraph to encode the subgraph information into a subgraph feature mapping:
+
+$$
+{\phi }^{\left( h\right) }\left( {G}_{v}^{h}\right) = \operatorname{HASH}\left( \left\{ \left\{ {{l}_{v}^{h} \mid v \in \mathcal{V}\left( {G}_{v}^{h}\right) }\right\} \right\} \right) , \tag{3}
+$$
+
+where ${G}_{v}^{h}$ is the $h$ -hop rooted subgraph around node $v$ . The hash function can be designed freely. Essentially, the $\mathrm{S}$ operator encodes the multiset of node labels within ${G}_{v}^{h}$ (obtained by running $h$ iterations of WL on the full graph) into a subgraph representation.
+
+Graph Feature Mapping Module. With the subgraph feature mapping, an injective readout function is adopted to obtain the whole graph feature mapping in the $h$ -th iteration, i.e.,
+
+$$
+{\psi }^{\left( h\right) }\left( G\right) = \operatorname{READOUT}\left( {{\phi }^{\left( h\right) }\left( {G}_{v}^{h}\right) \mid v \in \mathcal{V}\left( G\right) }\right) . \tag{4}
+$$
+
+The readout function can be chosen freely. To retain the structural information at all iterations, the final graph feature mapping is obtained by concatenation, i.e., $\psi \left( G\right) =$ CONCAT $\left( {{\mathbf{\psi }}^{\left( 0\right) }\left( G\right) ,{\mathbf{\psi }}^{\left( 1\right) }\left( G\right) ,\ldots ,{\mathbf{\psi }}^{\left( H\right) }\left( G\right) }\right)$ , where $H$ is the maximum iteration number.
+
+ < g r a p h i c s >
+
+Figure 2: Illustration of the fast SaWL. Colored numbers denote node labels. In (b), (c), (e) and (f), neighbor nodes are aggregated as multiset and compressed to updated labels (the same as 1-WL). In (d) and (g), the S operator encodes each rooted subgraph into a feature mapping. After the 2nd iteration, the feature mapping of ${G}_{{v}_{1}}^{2}$ is no longer equal to that of ${H}_{{u}_{1}}^{2}$ , so that graph $G$ and $H$ can be discriminated by SaWL (but not by 1-WL).
+
+Discussion. Compared to plain WL, which directly uses the node labels at the $h$ -th iteration to obtain the graph representation, SaWL additionally uses the multiset of labels of node $v$ ’s neighbors within $h$ hops to enhance WL with subgraph information. To understand SaWL’s benefits over plain WL, from one point of view, SaWL encodes a node-subgraph-graph hierarchy instead of the node-graph hierarchy of WL, which better captures the hierarchical structural characteristics of the graph. From another point of view, plain WL encodes a node by its rooted subtree pattern, which can contain repeated nodes; the repetitions of the same node are treated as distinct nodes, so the actual number of nodes in the subtree pattern is not preserved. The hash function in the S operator further captures the actual number of nodes in the subgraph (which also equals the actual number of nodes in the subtree pattern, because the subgraph ${G}_{v}^{h}$ does not contain repeated nodes).
+
+§ 3.2 A FAST IMPLEMENTATION OF SAWL
+
+To illustrate the idea of SaWL, we provide a particular implementation here named fast SaWL. For the $\mathrm{S}$ operator, we design HASH function as a counting mapping that counts the occurrence of different node labels in the subgraph. Then, we adopt sum pooling as the READOUT function to obtain the whole graph feature mapping.
+
+Definition 1 (Counting mapping). Let ${\mathcal{L}}_{h} \subseteq \mathcal{L}$ denote the set of node labels that occur at least once in the $h$ -th iteration. ${\mathcal{L}}_{h} = \left( {{\ell }_{1}^{h},{\ell }_{2}^{h},\ldots ,{\ell }_{\left| {\mathcal{L}}_{h}\right| }^{h}}\right)$ and we assume that ${\mathcal{L}}_{h}$ is ordered. Assume ${G}_{v}^{h} \in \mathcal{G}$ , where $\mathcal{G}$ is the complete graph space. For each iteration $h$ , we define a counting mapping ${c}_{h} : \mathcal{G} \times {\mathcal{L}}_{h} \rightarrow \mathbb{N}$ , where ${c}_{h}\left( {{G}_{v}^{h},{\ell }_{i}^{h}}\right)$ is the number of the occurrences of the $i$ -th node label ${\ell }_{i}^{h}$ in subgraph ${G}_{v}^{h}$ at the $h$ -th iteration.
+
+With counting mapping, the feature mapping of the subgraph ${G}_{v}^{h}$ can be obtained by ${\phi }^{\left( h\right) }\left( {G}_{v}^{h}\right) =$ $\left( {{c}_{h}\left( {{G}_{v}^{h},{\ell }_{1}^{h}}\right) ,\ldots ,{c}_{h}\left( {{G}_{v}^{h},{\ell }_{\left| {\mathcal{L}}_{h}\right| }^{h}}\right) }\right)$ , where the value of the $i$ -th position of the vector represents the occurrence number of label ${\ell }_{i}^{h}$ in the $h$ -th iteration. Essentially, the $\mathrm{S}$ operator encodes subgraph by mapping the multiset of node labels within the subgraph to a vector, recording the occurrence number of each label. Then, the whole graph feature mapping is obtained by applying sum pooling to the subgraph feature mappings. Although the sum pooling is not an injective readout function, as we will show, it allows fast computation (acceleration) via an implementation trick.
+
+Illustration. We illustrate the fast SaWL in Figure 2. Given two graphs $G$ and $H$ where colored numbers indicate node labels. The WL encoder of fast SaWL updates node labels in (b), (c), (e) and (f).
+
+S operator encodes rooted subgraphs, and we take two rooted subgraphs as examples in Figure 2(g). The feature mapping of the subgraph ${G}_{{v}_{1}}^{2}$ in the 2nd iteration is ${\phi }^{\left( 2\right) }\left( {G}_{{v}_{1}}^{2}\right) = \left( {3,2}\right)$ , which means the label 4 occurs three times and label 5 occurs twice in the subgraph. Then the subgraphs are pooled to obtain the graph feature mapping in the 2nd iteration, e.g., for graph $G,{\psi }^{\left( 2\right) }\left( G\right) = {\phi }^{\left( 2\right) }\left( {G}_{{v}_{1}}^{2}\right) +$ ${\phi }^{\left( 2\right) }\left( {G}_{{v}_{2}}^{2}\right) + \ldots + {\phi }^{\left( 2\right) }\left( {G}_{{v}_{6}}^{2}\right) = \left( {{20},{12}}\right)$ . And for graph $H,{\psi }^{\left( 2\right) }\left( H\right) = \left( {{16},{12}}\right)$ . Finally, the whole graph feature mappings are $\psi \left( G\right) = \left( {4,2,{12},8,{20},{12}}\right)$ , and $\psi \left( H\right) = \left( {4,2,{12},8,{16},{12}}\right)$ . The graph $G$ and $H$ cannot be discriminated by 1-WL, but they can be discriminated by our fast SaWL.
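+A small sketch of the counting-mapping S operator and the sum readout, assuming the node labels at iteration $h$ and the node sets of the $h$ -hop rooted subgraphs have already been computed (e.g., by breadth-first search); this is an illustration of the definition, not the authors' code:
+
+```python
+from collections import Counter
+
+def s_operator(subgraph_nodes, labels_h):
+    """Counting mapping: occurrences of each node label within one rooted subgraph."""
+    return Counter(labels_h[v] for v in subgraph_nodes)
+
+def graph_feature_mapping(rooted_subgraphs, labels_h):
+    """Sum pooling of the subgraph feature mappings (sparse vector as a Counter).
+    rooted_subgraphs: dict {v: set of nodes within h hops of v}."""
+    psi_h = Counter()
+    for nodes in rooted_subgraphs.values():
+        psi_h += s_operator(nodes, labels_h)
+    return psi_h
+```
+
+On the example of Figure 2(g), the counter for ${G}_{{v}_{1}}^{2}$ would be {4: 3, 5: 2}, matching the feature mapping (3, 2).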
+
+Acceleration. In fast SaWL, the calculation of the S operator can be executed simultaneously with the WL encoder, which reduces the computational time. Since the subgraph feature mappings are summed as the whole graph feature mapping, the frequency of one node contributing to the whole graph feature mapping is equal to the number of occurrences of this node in all $h$ -hop rooted subgraphs. We use graph $H$ (adapted from Figure $2\left( \mathrm{f}\right)$ ) as an example. In Figure $3\left( \mathrm{a}\right)$ , each tuple(a, b) represents the feature mapping of the node's rooted subgraph. The whole graph feature mapping can be computed by summing all nodes’ feature mappings: ${\psi }^{\left( 2\right) }\left( H\right) =$ $\left( {2,2}\right) + \ldots + \left( {4,2}\right) + \ldots + \left( {2,2}\right) = \left( {{16},{12}}\right)$ . However, we can actually compute the whole graph feature mapping from a global perspective. E.g., node ${u}_{1}$ contributes to the 2-hop rooted subgraphs of nodes ${u}_{1},{u}_{2},{u}_{3},{u}_{4}$ . And the number of ${u}_{1}$ ’s contributions to the whole graph feature mapping is exactly the size of node ${u}_{1}$ ’s 2-hop rooted subgraph, i.e., $\left| {\mathcal{V}\left( {H}_{{u}_{1}}^{\left( 2\right) }\right) }\right| = 4$ . Similarly, we mark each node’s contribution number beside it in Figure 3(b). The whole graph feature mapping can be alternatively computed by summing the contribution numbers for each label dimension, i.e., ${\psi }^{\left( 2\right) }\left( H\right) = \left( {4 + 4 + 4 + 4,6 + 6}\right) = \left( {{16},{12}}\right)$ . The sizes of rooted subgraphs can be computed together in the multiset determination of WL run on the original graph by propagating node label and ID simultaneously. We present the steps of the accelerating version of the fast SaWL for graph classification in Algorithm 1 of the Appendix. We additionally detail how to use the accelerating version for graph isomorphism test in Appendix A.7.
+
+ < g r a p h i c s >
+
+Figure 3: ${u}_{1}$ contributes to the feature mappings of rooted subgraphs of ${u}_{1},{u}_{2},{u}_{3},{u}_{4}$ . The contribution number equals the size of rooted subgraph ${H}_{{u}_{1}}^{\left( 2\right) }$ .
+
+§ 3.3 THE EXPRESSIVE POWER OF SAWL
+
+We first analyze the expressive power of SaWL by comparing it with 1-WL. Once the graphs can be discriminated by 1-WL, they can be discriminated by SaWL as well.
+
+Proposition 1. Given two graphs $G$ and $H$ , if they can be distinguished by 1-WL, i.e., ${\phi }^{\left( h\right) }\left( G\right) \neq$ ${\phi }^{\left( h\right) }\left( H\right)$ , then they must be distinguished by the SaWL, i.e., ${\psi }^{\left( h\right) }\left( G\right) \neq {\psi }^{\left( h\right) }\left( H\right)$ .
+
+See Appendix A.2 for the proof. If the graph pair can be discriminated by 1-WL, the counting mappings of the whole graphs are different. There must exist subgraphs with different counting mappings in the graph pair. Therefore, the final feature mappings of the two graphs obtained by SaWL are different.
+
+Proposition 2. We define the number of $h$ -shortest neighbors of each node as ${s}_{v}^{h}$ , which is the number of nodes with the exact shortest distance $h$ from the center node $v$ . For graphs $G$ and $H$ , if $\{\{ {s}_{v}^{h} \mid v \in \mathcal{V}\left( G\right) \}\} \neq \{\{ {s}_{u}^{h} \mid u \in \mathcal{V}\left( H\right) \}\}$ , then the two graphs can be distinguished by the $h$ -layer SaWL.
+
+From a global graph perspective, if the multisets of numbers of the $h$ -shortest neighbor of nodes in graph $G$ and $H$ are different, there exist at least two subgraphs in the graphs with different encodings. Then from a subgraph perspective, the multiset of subgraph encodings of the two graphs are different and they can be discriminated by SaWL. We provide a detailed explanation in the Appendix A.3.
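+To make the condition concrete, the multiset $\{\{ s_v^h \}\}$ can be computed with one breadth-first search per node; a minimal sketch (illustrative, not part of the method itself):
+
+```python
+from collections import Counter, deque
+
+def num_h_shortest_neighbors(adj, v, h):
+    """s_v^h: number of nodes whose shortest distance from v is exactly h."""
+    dist = {v: 0}
+    queue = deque([v])
+    while queue:
+        u = queue.popleft()
+        if dist[u] >= h:                 # no need to expand beyond distance h
+            continue
+        for w in adj[u]:
+            if w not in dist:
+                dist[w] = dist[u] + 1
+                queue.append(w)
+    return sum(1 for d in dist.values() if d == h)
+
+def shortest_neighbor_multiset(adj, h):
+    """Multiset {{ s_v^h | v in V(G) }} used in Proposition 2."""
+    return Counter(num_h_shortest_neighbors(adj, v, h) for v in adj)
+```
+
+If the multisets of two graphs differ for some $h$ , Proposition 2 implies the $h$ -layer SaWL can distinguish them.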
+
+Theorem 1. The expressive power of SaWL is higher than that of 1-WL in distinguishing graphs.
+
+As proved in Proposition 1, once two graphs can be discriminated by 1-WL, they must be discriminated by SaWL. There are also many graphs that can be discriminated by SaWL but not by 1-WL, e.g., graphs $G$ and $H$ in Figure 2; we provide more examples in Appendix A.4. To sum up, the expressive power of SaWL is strictly higher than that of 1-WL. According to recent research on subgraph GNNs [26], SaWL's $k$ -hop subgraph selection and encoding scheme can be implemented by 3-order Invariant Graph Networks (3-IGNs), whose expressive power is bounded by 3-WL [27]. Thus, SaWL's expressive power is also bounded by 3-WL.
+
+§ 3.4 COMPLEXITY
+
+We analyze the computational complexity of fast SaWL and of the corresponding accelerating version. Consider a graph $G$ with $N$ nodes, average node degree $D$ and $M = {ND}$ edges, and assume the average number of nodes per subgraph is $n$ . For fast SaWL, the multiset determination, label compression and relabeling in the WL encoder take a total runtime of $O\left( {ND}\right)$ [18]. In the S operator, computing the feature mapping of one subgraph with $n$ nodes takes $O\left( n\right)$ , and computing those of all $N$ subgraphs takes $O\left( {Nn}\right)$ . In total, the time complexity is $O\left( {ND}\right) + O\left( {Nn}\right)$ . For the accelerating version, the S operator can be executed simultaneously with the multiset determination of the WL encoder: determining the label multisets and identity sets for all nodes takes $O\left( {ND}\right)$ operations, and the identity sets can be maintained efficiently with a hash table. Therefore, the total time complexity of the accelerating version is $O\left( {ND}\right)$ , which equals that of the 1-WL algorithm [18].
+
+§ 4 SUBGRAPH-AWARE GRAPH NEURAL NETWORK
+
+In order to generalize SaWL to scenarios with continuous features, we propose a neural version of SaWL, namely Subgraph-aware GNN (SaGNN). In SaGNN, each component of SaWL is replaced with a neural network.
+
+Model. The neural version SaGNN includes two components: the GNN encoder and the S operator. Any standard neural counterpart of the 1-WL algorithm can be used as the GNN encoder. Given input graphs, the GNN encoder updates each node from its previous state and the representations of its neighbor nodes (Eq. 1). Specifically, we adopt GIN with $\epsilon = 0$ to obtain the node representations in the $k$-th layer, i.e., $\mathbf{h}_v^{(k)} = \operatorname{MLP}^{(k)}\left( \mathbf{h}_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} \mathbf{h}_u^{(k-1)} \right)$, where $\mathcal{N}(v)$ denotes the neighbor nodes of node $v$ and $\mathbf{h}_v^{(k)} \in \mathbb{R}^{D_1}$, with $D_1$ the feature dimension. In each layer, node representations are updated by the GNN encoder applied to the full graph.
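+
+For concreteness, the following PyTorch sketch (ours, using a dense-adjacency simplification rather than the authors' code) implements the GIN update with $\epsilon = 0$ described above.
+
+```python
+# Illustrative PyTorch sketch of the GIN encoder update (eps = 0):
+# h_v^{(k)} = MLP^{(k)}( h_v^{(k-1)} + sum_{u in N(v)} h_u^{(k-1)} ).
+import torch
+import torch.nn as nn
+
+class GINLayer(nn.Module):
+    def __init__(self, dim: int):
+        super().__init__()
+        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
+
+    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
+        # h: [N, D1] node features, adj: [N, N] binary adjacency (no self-loops).
+        return self.mlp(h + adj @ h)  # own state plus summed neighbor states
+
+# Usage on a toy 3-node path graph.
+adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
+h = torch.randn(3, 16)
+print(GINLayer(16)(h, adj).shape)  # torch.Size([3, 16])
+```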
+
+With the updated node representations, the S operator in SaGNN is designed to further encode the $k$-hop subgraph around each node, which provides extra expressive power beyond a plain GNN. An injective function is used to encode the subgraph information by aggregating the nodes within the subgraph (Eq. 3). In this paper, we adopt an MLP over a SUM aggregation as the hash function, which is injective over countable feature spaces [13]. The representation of the subgraph around node $v$ is obtained by ${\mathbf{h}}_{s,v}^{(k)} = \operatorname{MLP}\left( \sum_{q \in \mathcal{V}(G_v^k)} \mathbf{h}_q^{(k)} \right)$.
+
+Then, graph representations in the $k$-th layer are calculated with a readout (pooling) function (Eq. 4). In SaGNN, we adopt sum pooling as the readout function, i.e., ${\mathbf{H}}^{(k)}(G) = \operatorname{SUM}\left( \left\{ \mathbf{h}_{s,v}^{(k)} \mid v \in \mathcal{V}(G) \right\} \right)$. Finally, the representations of graph $G$ in all layers are concatenated as the final graph representation, i.e., $\mathbf{H}(G) = \operatorname{CONCAT}\left( {\mathbf{H}}^{(1)}(G), {\mathbf{H}}^{(2)}(G), \ldots, {\mathbf{H}}^{(k)}(G) \right)$, with $\mathbf{H}(G) \in \mathbb{R}^{D_1 \cdot k}$.
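+
+A hedged sketch of the S operator and readout described in the two preceding paragraphs is given below; the $k$-hop membership masks are assumed to be precomputed, and all names are ours rather than the authors'.
+
+```python
+import torch
+import torch.nn as nn
+
+class SOperatorReadout(nn.Module):
+    """S operator (MLP over summed subgraph nodes) followed by sum-pooling readout."""
+    def __init__(self, dim: int):
+        super().__init__()
+        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
+
+    def forward(self, h: torch.Tensor, subgraph_mask: torch.Tensor) -> torch.Tensor:
+        # h: [N, D1] node representations from the GNN encoder at layer k.
+        # subgraph_mask: [N, N]; row v marks the nodes inside the k-hop subgraph of v.
+        h_sub = self.mlp(subgraph_mask @ h)  # Eq. 3 analogue: MLP(sum over the subgraph)
+        return h_sub.sum(dim=0)              # Eq. 4 analogue: sum pooling -> H^{(k)}(G)
+
+# Toy usage with 1-hop subgraphs on a 3-node path; the final representation H(G)
+# would concatenate such vectors across all k layers, giving dimension D1 * k.
+adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
+mask = torch.eye(3) + adj
+print(SOperatorReadout(16)(torch.randn(3, 16), mask).shape)  # torch.Size([16])
+```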
+
+Discussion. Since SaGNN is the neural version of SaWL, and SaWL has been shown to be more expressive than 1-WL, the expressive power of SaGNN is also higher than that of 1-WL. The computational complexity of SaGNN is the same as that of fast SaWL (Section 3.4), i.e., $O(ND + Nn)$. Moreover, both the proposed SaGNN and the existing methods of the WL-on-subgraph paradigm [15-17] aim to uplift GNNs by encoding subgraphs. However, methods of the WL-on-subgraph paradigm incur a high computational cost by extracting rooted subgraphs and applying multiple GNNs. Instead, SaGNN encodes rooted subgraphs using the node representations updated on the full graph, which keeps the computational cost low. We present a detailed comparison in Appendix A.5.
+
+§ 5 EXPERIMENTS
+
+In this section, we first evaluate the effectiveness of the proposed fast SaWL and SaGNN on graph classification tasks. We then conduct experiments to verify the expressiveness of the methods. Finally, we compare the computation time of our methods with 1-WL and with methods of the WL-on-subgraph paradigm to verify their efficiency.
+
+Table 1: 10-Fold Cross Validation average test accuracy (%) on TU datasets.
+
+| Methods | MUTAG | PTC_MR | Mutagenicity | NCI1 | NCI109 |
+|---|---|---|---|---|---|
+| SP kernel | $87.28 \pm 0.55$ | $58.24 \pm 2.44$ | $71.63 \pm 2.19$ | $73.47 \pm 0.21$ | $73.07 \pm 0.11$ |
+| WL kernel | $82.05 \pm 0.36$ | $57.97 \pm 0.49$ | - | $82.19 \pm 0.18$ | $82.46 \pm 0.24$ |
+| DGK | $87.44 \pm 2.72$ | $60.08 \pm 2.55$ | - | $73.55 \pm 0.51$ | $73.26 \pm 0.26$ |
+| GCN | $78.69 \pm 6.56$ | $66.73 \pm 4.65$ | $80.84 \pm 1.35$ | $78.39 \pm 1.79$ | $77.57 \pm 1.79$ |
+| GIN | $81.51 \pm 8.47$ | $54.09 \pm 6.20$ | $77.70 \pm 2.50$ | $80.0 \pm 1.40$ | $70.20 \pm 3.21$ |
+| Diffpool | $80.00 \pm 6.98$ | $57.14 \pm 7.11$ | $80.55 \pm 1.98$ | $78.88 \pm 3.05$ | $76.76 \pm 2.38$ |
+| SortPool | $85.83 \pm 1.66$ | $58.59 \pm 2.47$ | $80.41 \pm 1.02$ | $74.44 \pm 0.47$ | - |
+| 1-2-3-GNN | $86.10 \pm 0.0$ | $60.9 \pm 0.0$ | - | $76.2 \pm 0.0$ | - |
+| 3-hop GNN | $87.56 \pm 0.72$ | - | - | $80.61 \pm 0.34$ | - |
+| Nested GIN | $87.90 \pm 8.20$ | $54.1 \pm 7.70$ | $82.40 \pm 2.00$ | $78.60 \pm 2.30$ | $77.20 \pm 2.90$ |
+| GraphSNN | $91.57 \pm 2.80$ | $66.70 \pm 3.70$ | - | $81.60 \pm 2.80$ | - |
+| fast SaWL | $90.00 \pm 3.89$ | $70.33 \pm 5.32$ | $84.32 \pm 1.48$ | $84.45 \pm 0.66$ | $85.37 \pm 0.81$ |
+| SaGNN | $88.81 \pm 5.21$ | $71.78 \pm 4.43$ | $84.13 \pm 1.31$ | $83.78 \pm 1.03$ | $83.35 \pm 0.56$ |
+
+§ 5.1 DATASETS
+
+For the graph classification tasks, we evaluate fast SaWL and SaGNN on seven datasets, comprising the TU datasets [28] and an Open Graph Benchmark (OGB) dataset [29]. Graphs in these datasets represent chemical molecules, nodes represent atoms, and edges represent chemical bonds. The TU datasets include MUTAG [30], PTC_MR [31], Mutagenicity [32], NCI1 [33] and NCI109 [33]; the task is binary classification and the metric is classification accuracy. The task on the OGB dataset ogbg-molhiv is molecular property prediction; it is also binary classification, and the metric is ROC-AUC. We provide statistics of the datasets in Appendix A.6. We evaluate the expressiveness of our methods on the EXP dataset [34], a synthetic dataset containing 600 pairs of graphs.
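+
+For reference, the datasets above can be loaded as follows; this is one possible pipeline assuming PyTorch Geometric and the ogb package, not necessarily the authors' setup.
+
+```python
+# One way (not necessarily the authors' pipeline) to load the benchmarks named above,
+# assuming PyTorch Geometric and the ogb package are installed.
+from torch_geometric.datasets import TUDataset
+from ogb.graphproppred import PygGraphPropPredDataset
+
+tu_names = ["MUTAG", "PTC_MR", "Mutagenicity", "NCI1", "NCI109"]
+tu_datasets = {name: TUDataset(root="data/TU", name=name) for name in tu_names}
+ogb_dataset = PygGraphPropPredDataset(name="ogbg-molhiv", root="data/OGB")
+
+for name, ds in tu_datasets.items():
+    print(name, len(ds), "graphs,", ds.num_classes, "classes")
+print("ogbg-molhiv", len(ogb_dataset), "graphs (metric: ROC-AUC)")
+```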
+
+§ 5.2 BASELINES
+
+In the graph classification experiments on the TU datasets, we adopt three graph kernel methods, several GNN methods based on 1-WL, and several methods with higher expressive power than 1-WL as baselines. The graph kernel methods are the shortest path kernel [35], the WL subtree kernel [18] and the deep graph kernel [36]. The GNN methods based on 1-WL are GCN [22], GIN [13], Diffpool [25] and SortPool [37]; for GCN, graph representations are obtained from the learned node representations with sum pooling. The more expressive methods are 1-2-3-GNN [14], 3-hop GNN [17], Nested GNN [15] and GraphSNN [38]. On the OGB dataset, we compare with the traditional message passing GNNs and the more expressive methods Deep LRP-1-3 [39], Nested GNN [15] and GIN-AK$^{+}$ [16]. Results of the baselines are obtained either from the original papers or by running the released source code with the published experimental settings ("-" indicates that a result is not available). For GCN and GIN, we search the number of model layers in $\{2, 3, 4, 5\}$ and the hidden dimension in $\{32, 64, 128\}$. For Nested GNN, we choose the best-performing Nested GIN as the baseline according to the results in the original paper; for the results on Mutagenicity, NCI1 and NCI109, we search the subgraph height in $\{2, 3, 4, 5\}$ with 4 model layers.
+
+§ 5.3 EXPERIMENTAL SETUP
+
+In graph classification tasks, we adopt multilayer perceptrons (MLPs) with softmax as the classifier to predict the class label of a graph. On the TU datasets, we perform 10-fold cross-validation with 9 folds for training and 1 fold for testing; a 10% split of the training set is used for model selection [40]. We report the average and standard deviation (in percentage) of the test accuracy across the 10 folds, and we train the models with batch size 32. On the OGB dataset, the experiments are conducted 10 times, and the average ROC-AUC scores are reported; we train the models with batch size 256. For all datasets, we implement the experiments in PyTorch and employ the Adam optimizer with a learning rate of 0.001. For fast SaWL, we search the number of iterations in $\{2, 3\}$. For SaGNN, we search the number of model layers in $\{2, 3, 4\}$. In the training process, we adopt the early stopping strategy with patience 30 and report the test results at the epoch of best validation.
+
+Table 2: Performance Evaluation on the OGB dataset (ogbg-molhiv, ROC-AUC).
+
+| Methods | Validation | Test |
+|---|---|---|
+| GCN [22] | $82.04 \pm 1.41$ | $76.06 \pm 0.97$ |
+| GIN [13] | $82.32 \pm 0.90$ | $75.58 \pm 1.40$ |
+| Deep LRP-1-3 [39] | $81.31 \pm 0.88$ | $76.87 \pm 1.80$ |
+| Nested GNN [15] | $83.17 \pm 1.99$ | $78.34 \pm 1.86$ |
+| GIN-AK+ [16] | - | $79.61 \pm 1.19$ |
+| fast SaWL | $79.13 \pm 0.69$ | $78.29 \pm 0.48$ |
+| SaGNN | $81.06 \pm 1.14$ | $78.86 \pm 0.73$ |
+
+Table 3: Results on EXP.
+
+| Model | Test Accuracy (%) |
+|---|---|
+| GCN [22] | $50.0 \pm 0.00$ |
+| GIN [13] | $50.0 \pm 0.00$ |
+| GCN-RNI [34] | $98.0 \pm 1.85$ |
+| PPGN [41] | $100.0 \pm 0.00$ |
+| 3-GCN [14] | $99.7 \pm 0.004$ |
+| Nested GNN [15] | $99.9 \pm 0.26$ |
+| GIN-AK+ [16] | $100.0 \pm 0.00$ |
+| fast SaWL | $99.50 \pm 0.70$ |
+| SaGNN | $99.67 \pm 0.70$ |
+
+The experimental setup for the expressive power evaluation on the EXP dataset is kept the same as in [34]. We use Nvidia V100 GPUs to run the experiments.
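+
+The training protocol described above can be sketched as follows; `model`, `loaders`, and `evaluate` are hypothetical placeholders, and this is a minimal illustration rather than the authors' training code.
+
+```python
+# Minimal sketch of the training protocol described above (Adam, lr 1e-3, early
+# stopping with patience 30, test metric taken at the epoch of best validation).
+import torch
+
+def train(model, loaders, evaluate, max_epochs: int = 500, patience: int = 30):
+    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+    loss_fn = torch.nn.CrossEntropyLoss()
+    best_val, best_test, wait = float("-inf"), None, 0
+    for epoch in range(max_epochs):
+        model.train()
+        for batch in loaders["train"]:
+            optimizer.zero_grad()
+            loss = loss_fn(model(batch), batch.y.view(-1))  # PyG-style batch assumed
+            loss.backward()
+            optimizer.step()
+        val, test = evaluate(model, loaders["val"]), evaluate(model, loaders["test"])
+        if val > best_val:
+            best_val, best_test, wait = val, test, 0
+        else:
+            wait += 1
+            if wait >= patience:   # early stopping
+                break
+    return best_test
+```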
+
+§ 5.4 EFFECTIVENESS EVALUATION
+
+Performance on the Graph Classification Task. Results of graph classification on the TU and OGB datasets are shown in Tables 1 and 2. Compared with graph kernel methods and traditional GNNs based on 1-WL, our fast SaWL and SaGNN gain strong improvements. In particular, both proposed methods outperform the WL subtree kernel and GIN, which experimentally confirms their higher discriminating power and verifies that augmenting the subtree pattern obtained by WL/GNN with subgraph information is effective for graph classification. Against the methods with higher expressive power than traditional message passing GNNs, i.e., 1-2-3-GNN, 3-hop GNN, Nested GNN and GraphSNN, our fast SaWL and SaGNN still perform better on most TU datasets; notably, fast SaWL achieves this at a low computational cost. On the larger-scale OGB dataset, the proposed methods achieve results comparable to the other highly expressive methods. Overall, our methods reach higher or comparable performance relative to methods with a high computational cost. We adopt GIN as the GNN encoder in SaGNN, so the improvements over GIN verify the effectiveness of the S operator, which provides additional subgraph information for graph classification. The neural version SaGNN performs slightly worse than fast SaWL on some small-scale datasets, which may be because the neural model cannot be trained sufficiently on the limited training data. On the larger-scale OGB dataset, with sufficient training data, SaGNN achieves better results than fast SaWL. In summary, fast SaWL and SaGNN achieve consistent improvements over competitive baselines on the graph classification task.
+
+Expressive Power Evaluation. We first evaluate the expressiveness on the EXP dataset and then show cases of the graph isomorphism test in Appendix A.7. Results on EXP are shown in Table 3; some baseline results are taken from [34]. Each pair of graphs in EXP is non-isomorphic yet 1-WL indistinguishable, and the results of GCN and GIN confirm this. GCN-RNI [34], PPGN [41], 3-GCN [14], Nested GNN [15] and GIN-AK$^{+}$ [16] are five baselines with higher expressive power, as well as higher computational cost, than 1-WL. Our fast SaWL and SaGNN consistently achieve very high accuracy and can distinguish nearly all graph pairs. The results are comparable with the $k$-GNNs [14, 41] and Nested GNN [15], which are more computationally complex. This verifies the high expressive power of fast SaWL and SaGNN, as established theoretically above.
+
+§ 5.5 EFFICIENCY EVALUATION
+
+We compare the running time of the proposed methods with the baselines to verify their high efficiency in practice. Our fast SaWL has higher discriminating power than 1-WL, while its accelerated version has the same time complexity as 1-WL, as demonstrated in Section 3.4. We record the running time of fast SaWL and 1-WL for obtaining the feature mappings of all graphs in four datasets. The average running time over ten runs is shown in Table 4. The running time of fast SaWL is similar to that of 1-WL: the difference is less than 0.5 seconds on the TU datasets and less than 3 seconds on ogbg-molhiv, which contains 41127 graphs. We further conduct a t-test as a significance test. The p-value is 0.8413 > 0.05, which indicates that there is no significant difference between the running times of fast SaWL and 1-WL for graph feature mapping calculation. For SaGNN, we compare the running time with a representative method of the WL-on-subgraph paradigm, Nested GNN (NGNN) [15], in Figure 4. On the TU datasets, the running time of Nested GNN is more than three times that of SaGNN. On the ogbg-molhiv dataset (abbreviated as ogb in Figure 4), we compare both the per-epoch time and the whole training time: Nested GNN takes more than ten times longer than SaGNN in both cases, e.g., the average training time per epoch is $134.91 \pm 21.30$ seconds for Nested GNN and $9.71 \pm 0.49$ seconds for SaGNN. This comparison demonstrates that SaGNN is significantly more efficient than methods of the WL-on-subgraph paradigm.
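+
+The significance test mentioned above can be reproduced in spirit with SciPy as follows; the per-run timings below are placeholders, not the measured values behind Table 4.
+
+```python
+# Hedged sketch of the significance test: an independent two-sample t-test on the
+# per-run feature-mapping times of 1-WL and fast SaWL.  Timings are placeholders.
+from scipy import stats
+
+runtimes_1wl = [4.9, 4.7, 4.8, 4.7, 4.9]    # placeholder per-run timings (seconds)
+runtimes_sawl = [5.0, 4.8, 4.9, 4.8, 5.0]   # placeholder per-run timings (seconds)
+
+t_stat, p_value = stats.ttest_ind(runtimes_1wl, runtimes_sawl)
+print(f"p-value = {p_value:.4f}")  # p > 0.05 => no significant runtime difference
+```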
+
+Table 4: Runtime Comparison of fast SaWL with 1-WL (second).
+
+| Methods | Mutagenicity | NCI1 | NCI109 | ogbg-molhiv |
+|---|---|---|---|---|
+| 1-WL | $4.90 \pm 0.23$ | $4.69 \pm 0.16$ | $4.73 \pm 0.20$ | $112.25 \pm 0.68$ |
+| fast SaWL | $4.99 \pm 0.22$ | $4.81 \pm 0.20$ | $4.96 \pm 0.20$ | $115.11 \pm 0.71$ |
+
+Figure 4: Training Time Comparison of SaGNN with Method of the WL-on-subgraph paradigm.
+
+§ 6 RELATED WORKS
+
+The expressiveness of graph neural networks is a key research topic in graph machine learning. Many approaches with higher expressive power than 1-WL have been proposed, including high-dimensional WL based [14, 41], feature augmentation based [34, 42], subgraph encoding based [15, 16, 43] and equivariant models [26, 27, 44]. We provide a brief review here. (1) It is natural to build GNNs on high-dimensional WL algorithms to obtain high expressive power, e.g., PPGN [41] based on high-order graph networks and $k$-GNNs [14] based on the set $k$-WL algorithm. However, high-dimensional WL algorithms require enumerating node tuples, which limits their scalability and generalization due to high computational cost. (2) Random feature-based methods augment GNNs by adding random features as additional node features, e.g., GCN-RNI [34] enhances GNNs with random node initialization, and rGIN [42] concatenates random features with the node features and then applies GIN on the combined features. However, random feature augmentation limits the generalization ability of these methods. (3) Many existing subgraph-based methods first extract subgraphs centered on each node of the graph and then apply GNNs on the extracted subgraphs, e.g., Nested GNN [15] runs a base GNN on the extracted subgraphs and obtains whole-graph representations by a global pooling, GNN-AK [16] also extracts subgraphs and applies multiple GNNs, and $k$-hop GNNs [17] aggregate each node with information from its $k$-hop neighborhood rather than only from its direct neighbors, which can identify fundamental graph properties such as connectivity and triangle-freeness. These methods can be summarized as the WL-on-subgraph paradigm (Figure 1 (b)), and their computational cost is much higher than that of 1-WL, which limits their application to large-scale graphs. We provide more related works in Appendix A.8.
+
+§ 7 CONCLUSION
+
+Traditional message passing graph neural networks (GNNs) are at most as powerful as the 1-WL algorithm. Since the representative power of a subgraph is higher than that of a subtree, methods of the WL-on-subgraph paradigm have been proposed to improve GNNs, but they incur an expensive computational cost. In contrast, we propose the subgraph-aware WL (SaWL) paradigm in this paper, which uplifts GNNs while keeping the computational complexity low. Under this paradigm, we first present an algorithm named fast SaWL, where an additional S operator encodes subgraph information on top of WL run on the full graph. We then present the neural version of SaWL, named SaGNN, which replaces each component of SaWL with a neural network. SaWL and SaGNN are proven to be more expressive than 1-WL and achieve significant improvements in the experiments.
\ No newline at end of file