| column | dtype | min length | max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 26 | 192 |
| abstract | string | 172 | 1.92k |
| authors | string | 7 | 591 |
| published_date | string | 20 | 20 |
| link | string | 33 | 33 |
| markdown | string | 269 | 344k |
2304.00050
kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration
In this paper, we present a residual neural network-based method for point set registration that preserves the topological structure of the target point set. Similar to coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a re...
Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky
2023-03-31T18:06:26Z
http://arxiv.org/abs/2304.00050v2
# kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration ###### Abstract In this paper, we present a residual neural network-based method for point set registration that preserves the topological structure of the target point set. Similar to coherent point drift (CPD), the registration...
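The topology preservation named in the title is enforced through a kNN graph built over the target point set. The regularizer itself is not visible in this preview; below is only a minimal sketch of constructing such a graph with scikit-learn, with point count and `n_neighbors` chosen arbitrarily for illustration.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
points = rng.normal(size=(100, 3))  # a toy target point set

# Sparse adjacency matrix of the kNN graph whose coherence the method preserves.
A = kneighbors_graph(points, n_neighbors=5, mode="connectivity")
print(A.shape, A.nnz)
```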
2310.20579
Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks
We analytically investigate how over-parameterization of models in randomized machine learning algorithms impacts the information leakage about their training data. Specifically, we prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets, and explore its dependence on ...
Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, Volkan Cevher
2023-10-31T16:13:22Z
http://arxiv.org/abs/2310.20579v1
# Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks ###### Abstract We analytically investigate how over-parameterization of models in randomized machine learning algorithms impacts the information leakage about their training data. Specifically, we prove a privacy bound for the KL...
2306.17396
Koopman operator learning using invertible neural networks
In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite but linear system using a set of observable functions. However, manually selecting observable functions that span the invariant subspace of the Koopman operator based on prior knowledge is inefficient and challenging, part...
Yuhuang Meng, Jianguo Huang, Yue Qiu
2023-06-30T04:26:46Z
http://arxiv.org/abs/2306.17396v2
# Physics-informed invertible neural network for the Koopman operator learning ###### Abstract In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite but linear system using a set of observable functions. However, manually selecting observable functions that span the invar...
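The linear-system view the abstract describes reduces, for a fixed dictionary of observables, to a least-squares fit (the extended-DMD step). The paper's contribution is learning the observables with an invertible network; the sketch below instead uses a hand-picked dictionary on a toy system, purely to illustrate the Koopman approximation being fit.

```python
import numpy as np

# Toy nonlinear discrete-time system: x_{t+1} = 0.9*x_t + 0.1*x_t**2
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=500)
Y = 0.9 * X + 0.1 * X**2

def observables(x):
    # Hand-picked dictionary; the paper *learns* this map instead.
    return np.stack([x, x**2, x**3], axis=1)

GX, GY = observables(X), observables(Y)
# Least-squares fit of a finite-dimensional Koopman approximation:
# g(x_{t+1}) ≈ g(x_t) @ K  (row convention), so K.T is the usual operator.
K, *_ = np.linalg.lstsq(GX, GY, rcond=None)
print("Koopman matrix approximation:\n", K.T)
```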
2310.04424
Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI
The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enables them to adapt and survive through different environmental conditions. Close observation of the GRN shows that the structure and operational principles resembles an Artificial Neural Network (ANN), which can pave t...
Adrian Ratwatte, Samitha Somathilaka, Sasitharan Balasubramaniam, Assaf A. Gilad
2023-09-14T21:37:38Z
http://arxiv.org/abs/2310.04424v1
# Stability Analysis of Non-Linear Classifiers using Gene Regulatory Neural Network for Biological AI ###### Abstract The Gene Regulatory Network (GRN) of biological cells governs a number of key functionalities that enables them to adapt and survive through different environmental conditions. Close observation of th...
2309.03770
Neural lasso: a unifying approach of lasso and neural networks
In recent years, there has been a growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. In this article, the statistical technique lasso for variable selection is represented through a neural network. It is observed that, althou...
David Delgado, Ernesto Curbelo, Danae Carreras
2023-09-07T15:17:10Z
http://arxiv.org/abs/2309.03770v1
# Neural lasso: a unifying approach of lasso and neural networks ###### Abstract In recent years, there has been a growing interest in combining techniques attributed to the areas of Statistics and Machine Learning in order to obtain the benefits of both approaches. In this article, the statistical technique lasso for vari...
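The abstract's central observation, lasso expressed as a neural network, amounts to a single linear layer trained with squared error plus an L1 penalty on its weights. A minimal PyTorch sketch (data, learning rate, and penalty strength are illustrative choices, not from the paper):

```python
import torch

torch.manual_seed(0)
n, p = 200, 10
X = torch.randn(n, p)
true_w = torch.zeros(p)
true_w[:3] = torch.tensor([2.0, -1.0, 0.5])   # sparse ground truth
y = X @ true_w + 0.1 * torch.randn(n)

# Lasso as a one-layer "network": linear model + L1 penalty on the weights.
model = torch.nn.Linear(p, 1, bias=False)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
lam = 0.1  # regularisation strength (hypothetical)

for _ in range(2000):
    opt.zero_grad()
    mse = ((model(X).squeeze() - y) ** 2).mean()
    l1 = model.weight.abs().sum()
    (mse + lam * l1).backward()
    opt.step()

print(model.weight.detach().numpy().round(2))
```

Note that plain subgradient descent only drives irrelevant weights near zero rather than exactly to zero, which is one reason the standard neural-network optimization of this objective differs from classical lasso solvers.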
2309.04037
SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks
The fast growth of computational power and scales of modern super-computing systems have raised great challenges for the management of exascale scientific data. To maintain the usability of scientific data, error-bound lossy compression is proposed and developed as an essential technique for the size reduction of scien...
Jinyang Liu, Sheng Di, Sian Jin, Kai Zhao, Xin Liang, Zizhong Chen, Franck Cappello
2023-09-07T22:15:32Z
http://arxiv.org/abs/2309.04037v3
SRN-SZ: Deep Learning-Based Scientific Error-bounded Lossy Compression with Super-resolution Neural Networks ###### Abstract The fast growth of computational power and scales of modern super-computing systems have raised great challenges for the management of exascale scientific data. To maintain the usability of scie...
2309.15728
Line Graph Neural Networks for Link Weight Prediction
Link weight prediction is of great practical importance, since real-world networks are often weighted networks. Previous studies have mainly used shallow graph features for link weight prediction, which limits the prediction performance. In this paper, we propose a new link weight prediction algorithm, namely Line Grap...
Jinbi Liang, Cunlai Pu
2023-09-27T15:34:44Z
http://arxiv.org/abs/2309.15728v1
# Line Graph Neural Networks for Link Weight Prediction ###### Abstract. Link weight prediction is of great practical importance, since real-world networks are often weighted networks. Previous studies have mainly used shallow graph features for link weight prediction, which limits the prediction performance. In this...
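The line-graph transformation that gives the method its name turns every edge of the original graph into a node, so edge-weight prediction becomes a node-level task. A minimal illustration with networkx (the toy graph is arbitrary):

```python
import networkx as nx

# Each edge of G becomes a node of L(G); two nodes of L(G) are adjacent
# iff the corresponding edges of G share an endpoint.
G = nx.Graph([(1, 2), (2, 3), (3, 4), (1, 3)])
L = nx.line_graph(G)

print(sorted(L.nodes()))  # edges of G, now nodes
print(sorted(L.edges()))  # adjacency between edges sharing an endpoint
```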
2309.03374
Physics Informed Neural Networks for Modeling of 3D Flow-Thermal Problems with Sparse Domain Data
Successfully training Physics Informed Neural Networks (PINNs) for highly nonlinear PDEs on complex 3D domains remains a challenging task. In this paper, PINNs are employed to solve the 3D incompressible Navier-Stokes (NS) equations at moderate to high Reynolds numbers for complex geometries. The presented method utili...
Saakaar Bhatnagar, Andrew Comerford, Araz Banaeizadeh
2023-09-06T21:52:14Z
http://arxiv.org/abs/2309.03374v3
# Physics Informed Neural Networks for Modeling of 3D Flow-Thermal Problems with Sparse Domain Data ###### Abstract Successfully training Physics Informed Neural Networks (PINNs) for highly nonlinear PDEs on complex 3D domains remains a challenging task. In this paper, PINNs are employed to solve the 3D incompressibl...
2309.16022
GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis
With the ever-growing popularity of Graph Neural Networks (GNNs), efficient GNN inference is gaining tremendous attention. Field-Programmable Gate Arrays (FPGAs) are a promising execution platform due to their fine-grained parallelism, low-power consumption, reconfigurability, and concurrent execution. Even better, High...
Chenfeng Zhao, Zehao Dong, Yixin Chen, Xuan Zhang, Roger D. Chamberlain
2023-09-27T20:58:33Z
http://arxiv.org/abs/2309.16022v1
# GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis ###### Abstract With the ever-growing popularity of Graph Neural Networks (GNNs), efficient GNN inference is gaining tremendous attention. Field-Programmable Gate Arrays (FPGAs) are a promising execution platform due to their fine-grained par...
2309.04426
Advanced Computing and Related Applications Leveraging Brain-inspired Spiking Neural Networks
In the rapid evolution of next-generation brain-inspired artificial intelligence and increasingly sophisticated electromagnetic environment, the most bionic characteristics and anti-interference performance of spiking neural networks show great potential in terms of computational speed, real-time information processing...
Lyuyang Sima, Joseph Bucukovski, Erwan Carlson, Nicole L. Yien
2023-09-08T16:41:08Z
http://arxiv.org/abs/2309.04426v1
# Advanced Computing and Related Applications Leveraging Brain-inspired Spiking Neural Networks ###### Abstract In the rapid evolution of next-generation brain-inspired artificial intelligence and increasingly sophisticated electromagnetic environment, the most bionic characteristics and anti-interference performance of spiking neural networks show great potential i...
2303.18083
Analysis and Comparison of Two-Level KFAC Methods for Training Deep Neural Networks
As a second-order method, the Natural Gradient Descent (NGD) has the ability to accelerate training of neural networks. However, due to the prohibitive computational and memory costs of computing and inverting the Fisher Information Matrix (FIM), efficient approximations are necessary to make NGD scalable to Deep Neura...
Abdoulaye Koroko, Ani Anciaux-Sedrakian, Ibtihel Ben Gharbia, Valérie Garès, Mounir Haddou, Quang Huy Tran
2023-03-31T14:21:53Z
http://arxiv.org/abs/2303.18083v2
# Analysis and Comparison of Two-Level KFAC Methods for Training Deep Neural Networks ###### Abstract As a second-order method, the Natural Gradient Descent (NGD) has the ability to accelerate training of neural networks. However, due to the prohibitive computational and memory costs of computing and inverting the Fi...
2308.16848
Accurate Computation of Quantum Excited States with Neural Networks
We present a variational Monte Carlo algorithm for estimating the lowest excited states of a quantum system which is a natural generalization of the estimation of ground states. The method has no free parameters and requires no explicit orthogonalization of the different states, instead transforming the problem of find...
David Pfau, Simon Axelrod, Halvard Sutterud, Ingrid von Glehn, James S. Spencer
2023-08-31T16:27:08Z
http://arxiv.org/abs/2308.16848v3
# Natural Quantum Monte Carlo Computation of Excited States ###### Abstract We present a variational Monte Carlo algorithm for estimating the lowest excited states of a quantum system which is a natural generalization of the estimation of ground states. The method has no free parameters and requires no explicit ortho...
2310.00496
The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks
We introduce the Sparsity Roofline, a visual performance model for evaluating sparsity in neural networks. The Sparsity Roofline jointly models network accuracy, sparsity, and theoretical inference speedup. Our approach does not require implementing and benchmarking optimized kernels, and the theoretical speedup become...
Cameron Shinn, Collin McCarthy, Saurav Muralidharan, Muhammad Osama, John D. Owens
2023-09-30T21:29:31Z
http://arxiv.org/abs/2310.00496v2
# The Sparsity Roofline: Understanding the Hardware Limits of Sparse Neural Networks ###### Abstract We introduce the Sparsity Roofline, a visual performance model for evaluating sparsity in neural networks. The Sparsity Roofline jointly models network accuracy, sparsity, and theoretical inference speedup. Our approa...
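The "theoretical inference speedup" the abstract models has a simple closed form under the idealized assumption that pruned (zero) weights cost nothing and the kernel is purely compute-bound; real kernels rarely reach this bound, which is exactly the gap the Sparsity Roofline visualizes. A small helper for the bound:

```python
def theoretical_speedup(sparsity: float) -> float:
    """Upper-bound speedup if skipped (zero) weights cost nothing.

    Assumes a purely compute-bound kernel with perfect zero-skipping.
    """
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    return 1.0 / (1.0 - sparsity)

for s in (0.5, 0.9, 0.95):
    print(f"{s:.0%} sparse -> at most {theoretical_speedup(s):.1f}x")
```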
2306.17630
Navigating Noise: A Study of How Noise Influences Generalisation and Calibration of Neural Networks
Enhancing the generalisation abilities of neural networks (NNs) through integrating noise such as MixUp or Dropout during training has emerged as a powerful and adaptable technique. Despite the proven efficacy of noise in NN training, there is no consensus regarding which noise sources, types and placements yield maxim...
Martin Ferianc, Ondrej Bohdal, Timothy Hospedales, Miguel Rodrigues
2023-06-30T13:04:26Z
http://arxiv.org/abs/2306.17630v2
# Impact of Noise on Calibration and Generalisation of Neural Networks ###### Abstract Noise injection and data augmentation strategies have been effective for enhancing the generalisation and robustness of neural networks (NNs). Certain types of noise such as label smoothing and MixUp have also been shown to improve...
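Among the noise sources the study compares, MixUp has a particularly compact definition: each training example is a convex combination of two samples and their (one-hot) labels, with the mixing weight drawn from a Beta distribution. A standard minimal version for reference:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Standard MixUp: convex combination of two samples and their labels."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# One-hot labels so that label interpolation is well defined.
x_a, y_a = np.random.rand(32, 32, 3), np.array([1.0, 0.0])
x_b, y_b = np.random.rand(32, 32, 3), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)
```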
2310.04431
Can neural networks count digit frequency?
In this research, we aim to compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number. It has various applications in machine learning and computer vision, e.g. for obtaining the frequency of a target object in a...
Padmaksh Khandelwal
2023-09-25T03:45:36Z
http://arxiv.org/abs/2310.04431v1
## Can Neural Networks Count Digit Frequency? ### Abstract In this research, we aim to compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number. It has various applications in machine learning and computer vis...
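The task itself is easy to make concrete: the label for a number is its 10-dimensional vector of digit counts. The paper's exact featurization is not shown in this preview; the construction below (zero-padded digit features) is one plausible setup.

```python
import numpy as np

def digit_frequency(s: str) -> np.ndarray:
    """Label vector: count of each digit 0-9 in the decimal string."""
    counts = np.zeros(10)
    for ch in s:
        counts[int(ch)] += 1
    return counts

# Hypothetical dataset construction for the task described in the abstract.
rng = np.random.default_rng(0)
numbers = [str(n).zfill(6) for n in rng.integers(0, 10**6, size=1000)]
X = np.array([[int(c) for c in s] for s in numbers])   # digit features
Y = np.array([digit_frequency(s) for s in numbers])    # frequency targets
```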
2309.05102
Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? An analysis using stochastic processes
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization ...
Sören Christensen, Jan Kallsen
2023-09-10T18:12:52Z
http://arxiv.org/abs/2309.05102v3
# Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? ###### Abstract In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the br...
2309.11188
Rebellions and Impeachments in a Neural Network Society
Based on a study of the modern presidential democracies in South America, we present a statistical mechanics exploration of the collective, coordinated action of political actors in the legislative chamber that may result in the impeachment of the executive. By representing the legislative political actors with neural...
Juan Neirotti, Nestor Caticha
2023-09-20T10:18:17Z
http://arxiv.org/abs/2309.11188v2
# Rebellions and Impeachments in a Neural Network Society ###### Abstract Based on a study of the modern presidential democracies in South America, we present a statistical mechanics exploration of the collective, coordinated action of political actors in the legislative chamber that may result in the impeachment of ...
2301.13817
Patch Gradient Descent: Training Neural Networks on Very Large Images
Traditional CNN models are trained and tested on relatively low resolution images (<300 px), and cannot be directly operated on large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows to train the existing CNN architectures on lar...
Deepak K. Gupta, Gowreesh Mago, Arnav Chavan, Dilip K. Prasad
2023-01-31T18:04:35Z
http://arxiv.org/abs/2301.13817v1
# Patch Gradient Descent: Training Neural Networks on Very Large Images ###### Abstract Traditional CNN models are trained and tested on relatively low resolution images (\(<300\) px), and cannot be directly operated on large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective lea...
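The preview cuts off before the algorithmic details, but the core memory trick of training on patches can be sketched as gradient accumulation: only one crop is ever materialized at a time, and the optimizer steps once per image. This is a rough sketch of that idea only, not the authors' exact algorithm (PatchGD additionally maintains a running encoding of the full image); it assumes the model pools each crop to a fixed-size output.

```python
import torch

def patch_step(model, loss_fn, image, label, opt, patch=256):
    """One optimisation step that materialises only one patch at a time."""
    _, _, H, W = image.shape
    opt.zero_grad()
    n_patches = 0
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            crop = image[:, :, i:i + patch, j:j + patch]
            loss = loss_fn(model(crop), label)
            loss.backward()              # gradients accumulate across patches
            n_patches += 1
    for p in model.parameters():         # average instead of sum
        if p.grad is not None:
            p.grad /= n_patches
    opt.step()
```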
2310.20552
Privacy-preserving design of graph neural networks with applications to vertical federated learning
The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models via combining each other's local feature or label information, has achieved great success in applications to financial risk management (FRM). The surging developments of graph representation learning (GRL...
Ruofan Wu, Mingyang Zhang, Lingjuan Lyu, Xiaolong Xu, Xiuquan Hao, Xinyi Fu, Tengfei Liu, Tianyi Zhang, Weiqiang Wang
2023-10-31T15:34:59Z
http://arxiv.org/abs/2310.20552v1
# Privacy-preserving design of graph neural networks with applications to vertical federated learning ###### Abstract The paradigm of vertical federated learning (VFL), where institutions collaboratively train machine learning models via combining each other's local feature or label information, has achieved great su...
2306.17418
ReLU Neural Networks, Polyhedral Decompositions, and Persistent Homology
A ReLU neural network leads to a finite polyhedral decomposition of input space and a corresponding finite dual graph. We show that while this dual graph is a coarse quantization of input space, it is sufficiently robust that it can be combined with persistent homology to detect homological signals of manifolds in the ...
Yajing Liu, Christina M Cole, Chris Peterson, Michael Kirby
2023-06-30T06:20:21Z
http://arxiv.org/abs/2306.17418v1
# ReLU Neural Networks, Polyhedral Decompositions, and Persistent Homology ###### Abstract A ReLU neural network leads to a finite polyhedral decomposition of input space and a corresponding finite dual graph. We show that while this dual graph is a coarse quantization of input space, it is sufficiently robust that i...
2309.15111
SGD Finds then Tunes Features in Two-Layer Neural Networks with near-Optimal Sample Complexity: A Case Study in the XOR problem
In this work, we consider the optimization process of minibatch stochastic gradient descent (SGD) on a 2-layer neural network with data separated by a quadratic ground truth function. We prove that with data drawn from the $d$-dimensional Boolean hypercube labeled by the quadratic ``XOR'' function $y = -x_ix_j$, it is ...
Margalit Glasgow
2023-09-26T17:57:44Z
http://arxiv.org/abs/2309.15111v2
SGD Finds then Tunes Features in Two-Layer Neural Networks with Near-Optimal Sample Complexity: A Case Study in the XOR problem ###### Abstract In this work, we consider the optimization process of minibatch stochastic gradient descent (SGD) on a 2-layer neural network with data separated by a quadratic ground truth ...
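The abstract gives the setting explicitly: inputs on the d-dimensional Boolean hypercube labeled by the quadratic "XOR" function $y = -x_ix_j$, learned by a 2-layer network with minibatch SGD. A runnable toy version follows; the width, squared loss, learning rate, and the choice i=0, j=1 are illustrative assumptions, not the paper's analyzed configuration.

```python
import torch

torch.manual_seed(0)
d, n, width = 20, 4096, 128
X = torch.randint(0, 2, (n, d)).float() * 2 - 1   # uniform on {-1,+1}^d
y = -X[:, 0] * X[:, 1]                            # quadratic "XOR": y = -x_i x_j

model = torch.nn.Sequential(
    torch.nn.Linear(d, width), torch.nn.ReLU(), torch.nn.Linear(width, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)

for step in range(2000):
    idx = torch.randint(0, n, (64,))              # minibatch SGD
    loss = ((model(X[idx]).squeeze() - y[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

acc = (model(X).squeeze().sign() == y.sign()).float().mean()
print(f"train accuracy: {acc:.2%}")
```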
2309.06019
DSLOT-NN: Digit-Serial Left-to-Right Neural Network Accelerator
We propose a Digit-Serial Left-tO-righT (DSLOT) arithmetic based processing technique called DSLOT-NN with the aim of accelerating inference of the convolution operation in deep neural networks (DNNs). The proposed work has the ability to assess and terminate the ineffective convolutions, which results in massive power an...
Muhammad Sohail Ibrahim, Muhammad Usman, Malik Zohaib Nisar, Jeong-A Lee
2023-09-12T07:36:23Z
http://arxiv.org/abs/2309.06019v2
# DSLOT-NN: Digit-Serial Left-to-Right Neural Network Accelerator ###### Abstract We propose a Digit-Serial Left-to-right (DSLOT) arithmetic based processing technique called _DSLOT-NN_ with the aim of accelerating inference of the convolution operation in deep neural networks (DNNs). The proposed work has the ability ...
2309.16223
GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations
Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions. However, it is not clear yet how to evaluate the correctness of those explanations, whether it is from a human or a model perspectiv...
Kenza Amara, Mennatallah El-Assady, Rex Ying
2023-09-28T07:56:10Z
http://arxiv.org/abs/2309.16223v2
# GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations ###### Abstract Diverse explainability methods of graph neural networks (GNN) have recently been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions. However, it is not clear yet ...
2309.05263
Brain-inspired Evolutionary Architectures for Spiking Neural Networks
The complex and unique neural network topology of the human brain formed through natural evolution enables it to perform multiple cognitive functions simultaneously. Automated evolutionary mechanisms of biological network structure inspire us to explore efficient architectural optimization for Spiking Neural Networks (...
Wenxuan Pan, Feifei Zhao, Zhuoya Zhao, Yi Zeng
2023-09-11T06:39:11Z
http://arxiv.org/abs/2309.05263v1
# Brain-inspired Evolutionary Architectures for Spiking Neural Networks ###### Abstract The complex and unique neural network topology of the human brain formed through natural evolution enables it to perform multiple cognitive functions simultaneously. Automated evolutionary mechanisms of biological network structure inspire us to explore ...
2309.16425
Feed-forward and recurrent inhibition for compressing and classifying high dynamic range biosignals in spiking neural network architectures
Neuromorphic processors that implement Spiking Neural Networks (SNNs) using mixed-signal analog/digital circuits represent a promising technology for closed-loop real-time processing of biosignals. As in biology, to minimize power consumption, the silicon neurons' circuits are configured to fire with a limited dynamic ...
Rachel Sava, Elisa Donati, Giacomo Indiveri
2023-09-28T13:22:51Z
http://arxiv.org/abs/2309.16425v1
Feed-forward and recurrent inhibition for compressing and classifying high dynamic range biosignals in spiking neural network architectures ###### Abstract Neuromorphic processors that implement Spiking Neural Networks (SNNs) using mixed-signal analog/digital circuits represent a promising technology for closed-loop ...
2309.13575
Probabilistic Weight Fixing: Large-scale training of neural network weight uncertainties for quantization
Weight-sharing quantization has emerged as a technique to reduce energy expenditure during inference in large neural networks by constraining their weights to a limited set of values. However, existing methods for weight-sharing quantization often make assumptions about the treatment of weights based on value alone tha...
Christopher Subia-Waud, Srinandan Dasmahapatra
2023-09-24T08:04:28Z
http://arxiv.org/abs/2309.13575v3
Probabilistic Weight Fixing: Large-scale training of neural network weight uncertainties for quantization ###### Abstract Weight-sharing quantization has emerged as a technique to reduce energy expenditure during inference in large neural networks by constraining their weights to a limited set of values. However, exi...
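The baseline the abstract argues against, treating weights "based on value alone", is essentially clustering the weight values into a limited shared set. A minimal k-means version of that baseline is sketched below (the paper's probabilistic method replaces this hard, value-only assignment); sizes and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=4096)            # stand-in for one layer's weights

k = 8                                        # limited set of shared values
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(w.reshape(-1, 1))
w_shared = km.cluster_centers_[km.labels_].ravel()

print("unique values after sharing:", np.unique(w_shared).size)
print("mean abs quantisation error:", np.abs(w - w_shared).mean())
```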
2309.09934
Hierarchical Attention and Graph Neural Networks: Toward Drift-Free Pose Estimation
The most commonly used method for addressing 3D geometric registration is the iterative closest-point algorithm; this approach is incremental and prone to drift over multiple consecutive frames. The common strategy to address the drift is pose graph optimization subsequent to frame-to-frame registration, incorporati...
Kathia Melbouci, Fawzi Nashashibi
2023-09-18T16:51:56Z
http://arxiv.org/abs/2309.09934v1
# Hierarchical Attention and Graph Neural Networks: Toward Drift-Free Pose Estimation ###### Abstract The most commonly used method for addressing 3D geometric registration is the iterative closest-point algorithm; this approach is incremental and prone to drift over multiple consecutive frames. The common strategy to...
2307.16416
MRA-GNN: Minutiae Relation-Aware Model over Graph Neural Network for Fingerprint Embedding
Deep learning has achieved remarkable results in fingerprint embedding, which plays a critical role in modern Automated Fingerprint Identification Systems. However, previous works including CNN-based and Transformer-based approaches fail to exploit the nonstructural data, such as topology and correlation in fingerprint...
Yapeng Su, Tong Zhao, Zicheng Zhang
2023-07-31T05:54:06Z
http://arxiv.org/abs/2307.16416v1
# MRA-GNN: Minutiae Relation-Aware Model over Graph Neural Network for Fingerprint Embedding ###### Abstract Deep learning has achieved remarkable results in fingerprint embedding, which plays a critical role in modern Automated Fingerprint Identification Systems. However, previous works including CNN-based and Trans...
2309.12445
Ensemble Neural Networks for Remaining Useful Life (RUL) Prediction
A core part of maintenance planning is a monitoring system that provides a good prognosis on health and degradation, often expressed as remaining useful life (RUL). Most of the current data-driven approaches for RUL prediction focus on single-point prediction. These point prediction approaches do not include the probab...
Ahbishek Srinivasan, Juan Carlos Andresen, Anders Holst
2023-09-21T19:38:44Z
http://arxiv.org/abs/2309.12445v1
# Ensemble Neural Networks for Remaining Useful Life (RUL) Prediction ###### Abstract A core part of maintenance planning is a monitoring system that provides a good prognosis on health and degradation, often expressed as remaining useful life (RUL). Most of the current data-driven approaches for RUL prediction focus...
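The contrast the abstract draws, point prediction versus probabilistic prediction, is the standard motivation for ensembles: the spread across members is a cheap proxy for predictive uncertainty. A generic aggregation sketch (not the paper's specific architecture):

```python
import numpy as np

def ensemble_predict(models, x):
    """Probabilistic RUL estimate from an ensemble: mean prediction + spread.

    `models` is any iterable of fitted regressors exposing .predict();
    the std across members serves as a simple uncertainty proxy.
    """
    preds = np.stack([m.predict(x) for m in models])   # (n_members, n_samples)
    return preds.mean(axis=0), preds.std(axis=0)
```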
2309.13773
GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust Parameters of Unseen Limited Precision Neural Networks
Graph Hypernetworks (GHN) can predict the parameters of varying unseen CNN architectures with surprisingly good accuracy at a fraction of the cost of iterative optimization. Following these successes, preliminary research has explored the use of GHNs to predict quantization-robust parameters for 8-bit and 4-bit quantiz...
Stone Yun, Alexander Wong
2023-09-24T23:01:00Z
http://arxiv.org/abs/2309.13773v1
GHN-QAT: Training Graph Hypernetworks to Predict Quantization-Robust Parameters of Unseen Limited Precision Neural Networks ###### Abstract Graph Hypernetworks (GHN) can predict the parameters of varying unseen CNN architectures with surprisingly good accuracy at a fraction of the cost of iterative optimization. Foll...
2309.12121
A Multiscale Autoencoder (MSAE) Framework for End-to-End Neural Network Speech Enhancement
Neural network approaches to single-channel speech enhancement have received much recent attention. In particular, mask-based architectures have achieved significant performance improvements over conventional methods. This paper proposes a multiscale autoencoder (MSAE) for mask-based end-to-end neural network speech en...
Bengt J. Borgstrom, Michael S. Brandstein
2023-09-21T14:41:54Z
http://arxiv.org/abs/2309.12121v1
# A Multiscale Autoencoder (MSAE) Framework for End-to-End Neural Network Speech Enhancement ###### Abstract Neural network approaches to single-channel speech enhancement have received much recent attention. In particular, mask-based architectures have achieved significant performance improvements over conventional ...
2309.06626
Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity
The demand for efficient processing of deep neural networks (DNNs) on embedded devices is a significant challenge limiting their deployment. Exploiting sparsity in the network's feature maps is one of the ways to reduce its inference latency. It is known that unstructured sparsity results in lower accuracy degradation ...
Matteo Grimaldi, Darshan C. Ganji, Ivan Lazarevich, Sudhakar Sah
2023-09-12T22:28:53Z
http://arxiv.org/abs/2309.06626v2
# Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity ###### Abstract The demand for efficient processing of deep neural networks (DNNs) on embedded devices is a significant challenge limiting their deployment. Exploiting sparsity in the network's feature maps is one of the ways to reduce its in...
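"Semi-structured" sparsity commonly refers to N:M patterns such as keeping the 2 largest of every 4 values; whether this paper uses exactly 2:4 is not visible in the preview, so the following numpy illustration of a 2:4 activation mask is only an example of the pattern class.

```python
import numpy as np

def two_of_four_mask(acts: np.ndarray) -> np.ndarray:
    """Zero all but the 2 largest-magnitude values in each group of 4."""
    flat = acts.reshape(-1, 4)
    keep = np.argsort(-np.abs(flat), axis=1)[:, :2]
    mask = np.zeros_like(flat, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return (flat * mask).reshape(acts.shape)

x = np.random.randn(2, 8)
print(two_of_four_mask(x))
```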
2303.17823
An interpretable neural network-based non-proportional odds model for ordinal regression
This study proposes an interpretable neural network-based non-proportional odds model (N$^3$POM) for ordinal regression. N$^3$POM is different from conventional approaches to ordinal regression with non-proportional models in several ways: (1) N$^3$POM is defined for both continuous and discrete responses, whereas stan...
Akifumi Okuno, Kazuharu Harada
2023-03-31T06:40:27Z
http://arxiv.org/abs/2303.17823v4
# An interpretable neural network-based non-proportional odds model for ordinal regression ###### Abstract This study proposes an interpretable neural network-based non-proportional odds model (N\({}^{3}\)POM) for ordinal regression. In the model, the response variable can take continuous values, and the regression coefficients vary depending on the predicting ordina...
2309.06975
Predicting Expressibility of Parameterized Quantum Circuits using Graph Neural Network
Parameterized Quantum Circuits (PQCs) are essential to quantum machine learning and optimization algorithms. The expressibility of PQCs, which measures their ability to represent a wide range of quantum states, is a critical factor influencing their efficacy in solving quantum problems. However, the existing technique ...
Shamminuj Aktar, Andreas Bärtschi, Abdel-Hameed A. Badawy, Diane Oyen, Stephan Eidenbenz
2023-09-13T14:08:01Z
http://arxiv.org/abs/2309.06975v1
# Predicting Expressibility of Parameterized Quantum Circuits using Graph Neural Network ###### Abstract Parameterized Quantum Circuits (PQCs) are essential to quantum machine learning and optimization algorithms. The expressibility of PQCs, which measures their ability to represent a wide range of quantum states, is...
2309.08569
Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach
Graph Neural Networks have achieved tremendous success in modeling complex graph data in a variety of applications. However, there are limited studies investigating privacy protection in GNNs. In this work, we propose a learning framework that can provide node privacy at the user level, while incurring low utility loss...
Karuna Bhaila, Wen Huang, Yongkai Wu, Xintao Wu
2023-09-15T17:35:51Z
http://arxiv.org/abs/2309.08569v2
# Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach ###### Abstract Graph Neural Networks have achieved tremendous success in modeling complex graph data in a variety of applications. However, there are limited studies investigating privacy protection in GNNs. In this work, we propose a l...
2309.11763
Bloch Equation Enables Physics-informed Neural Network in Parametric Magnetic Resonance Imaging
Magnetic resonance imaging (MRI) is an important non-invasive imaging method in clinical diagnosis. Beyond the common image structures, parametric imaging can provide the intrinsic tissue property thus could be used in quantitative evaluation. The emerging deep learning approach provides fast and accurate parameter est...
Qingrui Cai, Liuhong Zhu, Jianjun Zhou, Chen Qian, Di Guo, Xiaobo Qu
2023-09-21T03:53:33Z
http://arxiv.org/abs/2309.11763v1
# Bloch Equation Enables Physics-informed Neural Network in Parametric Magnetic Resonance Imaging ###### Abstract Magnetic resonance imaging (MRI) is an important non-invasive imaging method in clinical diagnosis. Beyond the common image structures, parametric imaging can provide the intrinsic tissue property thus co...
2306.17597
Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings
The event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain, while still dense and redundant in the temporal domain. Although spiking neural network (SNN), the event-driven neuromorphic model, has the potential to extract spatio-temporal features from the event streams, ...
Yuan Zhang, Jian Cao, Ling Zhang, Jue Chen, Wenyu Sun, Yuan Wang
2023-06-30T12:17:30Z
http://arxiv.org/abs/2306.17597v1
# Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings ###### Abstract The event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain, while still dense and redundant in the temporal domain. Although spiking neural network (SNN), the event-driven neuromorp...
2309.04434
Physics-Informed Neural Networks for an optimal counterdiabatic quantum computation
We introduce a novel methodology that leverages the strength of Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD) protocol in the optimization of quantum circuits comprised of systems with $N_{Q}$ qubits. The primary objective is to utilize physics-inspired deep learning techniques to accurat...
Antonio Ferrer-Sánchez, Carlos Flores-Garrigos, Carlos Hernani-Morales, José J. Orquín-Marqués, Narendra N. Hegade, Alejandro Gomez Cadavid, Iraitz Montalban, Enrique Solano, Yolanda Vives-Gilabert, José D. Martín-Guerrero
2023-09-08T16:55:39Z
http://arxiv.org/abs/2309.04434v2
# Physics-Informed Neural Networks for an Optimal Counterdiabatic quantum computation ###### Abstract We introduce a novel methodology that leverages the strength of Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD) protocol in the optimization of quantum circuits comprised of systems with ...
2309.03617
NeuroCodeBench: a plain C neural network benchmark for software verification
Safety-critical systems with neural network components require strong guarantees. While existing neural network verification techniques have shown great progress towards this goal, they cannot prove the absence of software faults in the network implementation. This paper presents NeuroCodeBench - a verification benchma...
Edoardo Manino, Rafael Sá Menezes, Fedor Shmarov, Lucas C. Cordeiro
2023-09-07T10:19:33Z
http://arxiv.org/abs/2309.03617v1
# NeuroCodeBench: a plain C neural network benchmark for software verification ###### Abstract Safety-critical systems with neural network components require strong guarantees. While existing neural network verification techniques have shown great progress towards this goal, they cannot prove the absence of software ...
2309.15075
On Excess Risk Convergence Rates of Neural Network Classifiers
The recent success of neural networks in pattern recognition and classification problems suggests that neural networks possess qualities distinct from other more classical classifiers such as SVMs or boosting classifiers. This paper studies the performance of plug-in classifiers based on neural networks in a binary cla...
Hyunouk Ko, Namjoon Suh, Xiaoming Huo
2023-09-26T17:14:10Z
http://arxiv.org/abs/2309.15075v1
# On Excess Risk Convergence Rates of Neural Network Classifiers ###### Abstract The recent success of neural networks in pattern recognition and classification problems suggests that neural networks possess qualities distinct from other more classical classifiers such as SVMs or boosting classifiers. This paper stud...
2310.03760
Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition
The extensive ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up the possibilities for implementing sensor-based activity recognition. As opposed to traditional sensor time-series processing and hand-engineered feature extraction, in light of deep learning's proven effect...
Danial Ahangarani, Mohammad Shirazi, Navid Ashraf
2023-09-26T14:55:32Z
http://arxiv.org/abs/2310.03760v1
Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition ###### Abstract The extensive ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up the possibilities for implementing sensor-based activity recognitio...
2309.13410
Tropical neural networks and its applications to classifying phylogenetic trees
Deep neural networks show great success when input vectors are in a Euclidean space. However, those classical neural networks show poor performance when inputs are phylogenetic trees, which can be written as vectors in the tropical projective torus. Here we propose tropical embedding to transform a vector in the tro...
Ruriko Yoshida, Georgios Aliatimis, Keiji Miura
2023-09-23T15:47:35Z
http://arxiv.org/abs/2309.13410v1
# Tropical neural networks and its applications to classifying phylogenetic trees ###### Abstract Deep neural networks show great success when input vectors are in a Euclidean space. However, those classical neural networks show poor performance when inputs are phylogenetic trees, which can be written as vectors i...
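For readers unfamiliar with the setting: in the tropical (max-plus) semiring, addition is replaced by max and multiplication by +, so a tropical "linear" layer takes the form below. This is a generic illustration of tropical arithmetic, not the paper's specific embedding.

```python
import numpy as np

def tropical_layer(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Max-plus matrix-vector product: out_j = max_i (x_i + W_ij).

    Tropical semiring: (max, +) replaces the usual (+, *) of linear algebra.
    """
    return np.max(x[:, None] + W, axis=0)

x = np.array([0.0, 1.0, -2.0])
W = np.random.randn(3, 4)
print(tropical_layer(x, W))
```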
2303.17995
Neural Network Entropy (NNetEn): Entropy-Based EEG Signal and Chaotic Time Series Classification, Python Package for NNetEn Calculation
Entropy measures are effective features for time series classification problems. Traditional entropy measures, such as Shannon entropy, use probability distribution function. However, for the effective separation of time series, new entropy estimation methods are required to characterize the chaotic dynamic of the syst...
Andrei Velichko, Maksim Belyaev, Yuriy Izotov, Murugappan Murugappan, Hanif Heidari
2023-03-31T12:11:21Z
http://arxiv.org/abs/2303.17995v2
Neural Network Entropy (NNetEn): Entropy-Based EEG Signal and Chaotic Time Series Classification, Python Package for NNetEn Calculation ###### Abstract Entropy measures are effective features for time series classification problems. Traditional entropy measures, such as Shannon entropy, use probability distribution f...
2308.00143
Formally Explaining Neural Networks within Reactive Systems
Deep neural networks (DNNs) are increasingly being used as controllers in reactive systems. However, DNNs are highly opaque, which renders it difficult to explain and justify their actions. To mitigate this issue, there has been a surge of interest in explainable AI (XAI) techniques, capable of pinpointing the input fe...
Shahaf Bassan, Guy Amir, Davide Corsi, Idan Refaeli, Guy Katz
2023-07-31T20:19:50Z
http://arxiv.org/abs/2308.00143v3
# Formally Explaining Neural Networks within Reactive Systems ###### Abstract Deep neural networks (DNNs) are increasingly being used as controllers in reactive systems. However, DNNs are highly opaque, which renders it difficult to explain and justify their actions. To mitigate this issue, there has been a surge of interest in explainable ...
2302.14623
Fast as CHITA: Neural Network Pruning with Combinatorial Optimization
The sheer size of modern neural networks makes model serving a serious computational challenge. A popular class of compression techniques overcomes this challenge by pruning or sparsifying the weights of pretrained networks. While useful, these techniques often face serious tradeoffs between computational requirements ...
Riade Benbaki, Wenyu Chen, Xiang Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder
2023-02-28T15:03:18Z
http://arxiv.org/abs/2302.14623v1
# Fast as CHITA: Neural Network Pruning with Combinatorial Optimization ###### Abstract The sheer size of modern neural networks makes model serving a serious computational challenge. A popular class of compression techniques overcomes this challenge by pruning or sparsifying the weights of pretrained networks. While...
2309.04742
Affine Invariant Ensemble Transform Methods to Improve Predictive Uncertainty in Neural Networks
We consider the problem of performing Bayesian inference for logistic regression using appropriate extensions of the ensemble Kalman filter. Two interacting particle systems are proposed that sample from an approximate posterior and prove quantitative convergence rates of these interacting particle systems to their mea...
Diksha Bhandari, Jakiw Pidstrigach, Sebastian Reich
2023-09-09T10:01:51Z
http://arxiv.org/abs/2309.04742v2
# Affine Invariant Ensemble Transform Methods to Improve Predictive Uncertainty in ReLU Networks ###### Abstract We consider the problem of performing Bayesian inference f...
2309.12849
DeepOPF-U: A Unified Deep Neural Network to Solve AC Optimal Power Flow in Multiple Networks
The traditional machine learning models to solve optimal power flow (OPF) are mostly trained for a given power network and lack generalizability to today's power networks with varying topologies and growing plug-and-play distributed energy resources (DERs). In this paper, we propose DeepOPF-U, which uses one unified de...
Heng Liang, Changhong Zhao
2023-09-22T13:22:15Z
http://arxiv.org/abs/2309.12849v1
# DeepOPF-U: A Unified Deep Neural Network to Solve AC Optimal Power Flow in Multiple Networks ###### Abstract The traditional machine learning models to solve optimal power flow (OPF) are mostly trained for a given power network and lack generalizability to today's power networks with varying topologies and growing ...
2310.00137
On the Disconnect Between Theory and Practice of Neural Networks: Limits of the NTK Perspective
The neural tangent kernel (NTK) has garnered significant attention as a theoretical framework for describing the behavior of large-scale neural networks. Kernel methods are theoretically well-understood and as a result enjoy algorithmic benefits, which can be demonstrated to hold in wide synthetic neural network archit...
Jonathan Wenger, Felix Dangel, Agustinus Kristiadi
2023-09-29T20:51:24Z
http://arxiv.org/abs/2310.00137v2
# On the Disconnect Between Theory and Practice of Overparametrized Neural Networks ###### Abstract The infinite-width limit of neural networks (NNs) has garnered significant attention as a theoretical framework for analyzing the behavior of large-scale, overparametrized networks. By approaching infinite width, NNs e...
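For concreteness, the object under study, the empirical NTK of a scalar-output network, is the Gram matrix of per-example parameter gradients. A small PyTorch sketch of one kernel entry (the tiny network is an arbitrary stand-in):

```python
import torch

def empirical_ntk(model, x1, x2):
    """k(x1, x2) = <grad_theta f(x1), grad_theta f(x2)> for a scalar-output model."""
    def flat_grad(x):
        out = model(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, list(model.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])
    return flat_grad(x1) @ flat_grad(x2)

net = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
x_a, x_b = torch.randn(4), torch.randn(4)
print(empirical_ntk(net, x_a, x_b))
```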
2301.01597
Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood. Some QNNs with specific encoding methods can be efficiently simulated by classical surrogates, while others with quantum memory may perform better than classi...
Yuxuan Du, Yibo Yang, Dacheng Tao, Min-Hsiu Hsieh
2022-12-29T10:46:40Z
http://arxiv.org/abs/2301.01597v3
# Demystify Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification ###### Abstract Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood. Some QNNs with specific encoding methods can be...
2309.08385
A Unified View Between Tensor Hypergraph Neural Networks And Signal Denoising
Hypergraph Neural networks (HyperGNNs) and hypergraph signal denoising (HyperGSD) are two fundamental topics in higher-order network modeling. Understanding the connection between these two domains is particularly useful for designing novel HyperGNNs from a HyperGSD perspective, and vice versa. In particular, the tenso...
Fuli Wang, Karelia Pena-Pena, Wei Qian, Gonzalo R. Arce
2023-09-15T13:19:31Z
http://arxiv.org/abs/2309.08385v1
# A Unified View Between Tensor Hypergraph Neural Networks And Signal Denoising ###### Abstract Hypergraph Neural networks (HyperGNNs) and hypergraph signal denoising (HyperGSD) are two fundamental topics in higher-order network modeling. Understanding the connection between these two domains is particularly useful f...
2307.16373
2D Convolutional Neural Network for Event Reconstruction in IceCube DeepCore
IceCube DeepCore is an extension of the IceCube Neutrino Observatory designed to measure GeV scale atmospheric neutrino interactions for the purpose of neutrino oscillation studies. Distinguishing muon neutrinos from other flavors and reconstructing inelasticity are especially difficult tasks at GeV scale energies in I...
J. H. Peterson, M. Prado Rodriguez, K. Hanson
2023-07-31T02:37:36Z
http://arxiv.org/abs/2307.16373v1
# 2D Convolutional Neural Network for Event Reconstruction in IceCube DeepCore ###### Abstract: IceCube DeepCore is an extension of the IceCube Neutrino Observatory designed to measure GeV scale atmospheric neutrino interactions for the purpose of neutrino oscillation studies. Distinguishing muon neutrinos from other...
2309.09700
Securing Fixed Neural Network Steganography
Image steganography is the art of concealing secret information in images in a way that is imperceptible to unauthorized parties. Recent advances show that it is possible to use a fixed neural network (FNN) for secret embedding and extraction. Such fixed neural network steganography (FNNS) achieves high steganographic per...
Zicong Luo, Sheng Li, Guobiao Li, Zhenxing Qian, Xinpeng Zhang
2023-09-18T12:07:37Z
http://arxiv.org/abs/2309.09700v1
# Securing Fixed Neural Network Steganography ###### Abstract. Image steganography is the art of concealing secret information in images in a way that is imperceptible to unauthorized parties. Recent advances show that it is possible to use a fixed neural network (FNN) for secret embedding and extraction. Such fixed neu...
2301.00106
Physics-informed Neural Networks approach to solve the Blasius function
Deep learning techniques with neural networks have been used effectively in computational fluid dynamics (CFD) to obtain solutions to nonlinear differential equations. This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function. This method eliminates the process of changing the ...
Greeshma Krishna, Malavika S Nair, Pramod P Nair, Anil Lal S
2022-12-31T03:14:42Z
http://arxiv.org/abs/2301.00106v2
# Physics-informed Neural Networks approach to solve the Blasius function ###### Abstract Deep learning techniques with neural networks have been used effectively in computational fluid dynamics (CFD) to obtain solutions to nonlinear differential equations. This paper presents a physics-informed neural network (PINN)...
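The Blasius problem is the classical boundary-layer ODE $f''' + \tfrac{1}{2} f f'' = 0$ with $f(0)=0$, $f'(0)=0$, $f'(\infty)=1$. A compact PINN sketch of the kind the abstract describes follows; the network size, optimizer, and the truncation of the semi-infinite domain at `eta_max` are assumptions, not the paper's configuration.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
eta_max = 8.0  # truncation of the semi-infinite domain (assumption)

def derivatives(eta):
    """f and its first three derivatives via autograd."""
    f = net(eta)
    (f1,) = torch.autograd.grad(f.sum(), eta, create_graph=True)
    (f2,) = torch.autograd.grad(f1.sum(), eta, create_graph=True)
    (f3,) = torch.autograd.grad(f2.sum(), eta, create_graph=True)
    return f, f1, f2, f3

for step in range(5000):
    eta = (torch.rand(128, 1) * eta_max).requires_grad_(True)
    f, f1, f2, f3 = derivatives(eta)
    residual = (f3 + 0.5 * f * f2).pow(2).mean()     # Blasius: f''' + 0.5 f f'' = 0

    e0 = torch.zeros(1, 1, requires_grad=True)
    eN = torch.full((1, 1), eta_max, requires_grad=True)
    f0, f0p, _, _ = derivatives(e0)
    _, fNp, _, _ = derivatives(eN)
    bc = f0.pow(2).mean() + f0p.pow(2).mean() + (fNp - 1).pow(2).mean()

    loss = residual + bc
    opt.zero_grad(); loss.backward(); opt.step()
```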
2301.13146
Enhancing Neural Network Differential Equation Solvers
We motivate the use of neural networks for the construction of numerical solutions to differential equations. We prove that there exists a feed-forward neural network that can arbitrarily minimise an objective function that is zero at the solution of Poisson's equation, allowing us to guarantee that neural network solu...
Matthew J. H. Wright
2022-12-28T17:26:46Z
http://arxiv.org/abs/2301.13146v1
# Enhancing Neural Network Differential Equation Solvers ###### Abstract We motivate the use of neural networks for the construction of numerical solutions to differential equations. We prove that there exists a feed-forward neural network that can arbitrarily minimise an objective function that is zero at the soluti...
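The kind of objective the abstract's theorem refers to can be written explicitly. For Poisson's equation $\Delta u = f$ on $\Omega$ with $u = g$ on $\partial\Omega$, one standard residual functional (a common choice; the paper may use a different formulation) is

$$\mathcal{L}(u_\theta) = \big\| \Delta u_\theta - f \big\|_{L^2(\Omega)}^2 + \lambda \big\| u_\theta - g \big\|_{L^2(\partial\Omega)}^2,$$

which is nonnegative and vanishes exactly when $u_\theta$ solves the boundary-value problem, so driving it to zero with a feed-forward network yields an approximate solution.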
2308.16910
Robust Variational Physics-Informed Neural Networks
We introduce a Robust version of the Variational Physics-Informed Neural Networks method (RVPINNs). As in VPINNs, we define the quadratic loss functional in terms of a Petrov-Galerkin-type variational formulation of the PDE problem: the trial space is a (Deep) Neural Network (DNN) manifold, while the test space is a fi...
Sergio Rojas, Paweł Maczuga, Judit Muñoz-Matute, David Pardo, Maciej Paszynski
2023-08-31T17:59:44Z
http://arxiv.org/abs/2308.16910v3
# Robust Variational Physics-Informed Neural Networks ###### Abstract We introduce a Robust version of the Variational Physics-Informed Neural Networks (RVPINNs) to approximate the Partial Differential Equations (PDEs) solution. We start from a weak Petrov-Galerkin formulation of the problem, select a discrete test s...
2310.00337
Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware
In recent years, hardware-accelerated neural networks have gained significant attention for edge computing applications. Among various hardware options, crossbar arrays offer a promising avenue for efficient storage and manipulation of neural network weights. However, the transition from trained floating-point models ...
Arseni Ivanov
2023-09-30T10:47:25Z
http://arxiv.org/abs/2310.00337v1
Quantization of Deep Neural Networks to facilitate self-correction of weights on Phase Change Memory-based analog hardware ###### Abstract In recent years, hardware-accelerated neural networks have gained significant attention for edge computing applications. Among various hardware options, crossbar arrays offer a p...
2310.10656
VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints
Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement. Ownership testing techniques are designed to identify model fingerprints for verifying plagiarism. However, previous works often rely on overfitting or robustness features as fingerprints, lacking theoretical guar...
Aoting Hu, Zhigang Lu, Renjie Xie, Minhui Xue
2023-09-07T01:58:12Z
http://arxiv.org/abs/2310.10656v1
# VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints ###### Abstract Deploying Machine Learning as a Service gives rise to model plagiarism, leading to copyright infringement. Ownership testing techniques are designed to identify model fingerprints for verifying plagiarism. Howe...
2309.12095
Bayesian sparsification for deep neural networks with Bayesian model reduction
Deep learning's immense capabilities are often constrained by the complexity of its models, leading to an increasing demand for effective sparsification techniques. Bayesian sparsification for deep learning emerges as a crucial approach, facilitating the design of models that are both computationally efficient and comp...
Dimitrije Marković, Karl J. Friston, Stefan J. Kiebel
2023-09-21T14:10:47Z
http://arxiv.org/abs/2309.12095v2
# Bayesian sparsification for deep neural networks with Bayesian model reduction ###### Abstract Deep learning's immense capabilities are often constrained by the complexity of its models, leading to an increasing demand for effective sparsification techniques. Bayesian sparsification for deep learning emerges as a c...
2304.01015
Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks
The architecture design and multi-scale learning principles of the human brain that evolved over hundreds of millions of years are crucial to realizing human-like intelligence. Spiking Neural Network (SNN) based Liquid State Machine (LSM) serves as a suitable architecture to study brain-inspired intelligence because of...
Wenxuan Pan, Feifei Zhao, Yi Zeng, Bing Han
2023-03-31T07:36:39Z
http://arxiv.org/abs/2304.01015v1
Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks ###### Abstract The architecture design and multi-scale learning principles of the human brain that evolved over hundreds of millions of years are crucial to realizing human-like intelligence. Spiking Neu...
2309.03167
Split-Boost Neural Networks
The calibration and training of a neural network is a complex and time-consuming procedure that requires significant computational resources to achieve satisfactory results. Key obstacles are a large number of hyperparameters to select and the onset of overfitting in the face of a small amount of data. In this framewor...
Raffaele Giuseppe Cestari, Gabriele Maroni, Loris Cannelli, Dario Piga, Simone Formentin
2023-09-06T17:08:57Z
http://arxiv.org/abs/2309.03167v1
# Split-Boost Neural Networks ###### Abstract The calibration and training of a neural network is a complex and time-consuming procedure that requires significant computational resources to achieve satisfactory results. Key obstacles are a large number of hyperparameters to select and the onset of overfitting in the ...
2309.06779
ZKROWNN: Zero Knowledge Right of Ownership for Neural Networks
Training contemporary AI models requires investment in procuring learning data and computing resources, making the models intellectual property of the owners. Popular model watermarking solutions rely on key input triggers for detection; the keys have to be kept private to prevent discovery, forging, and removal of the...
Nojan Sheybani, Zahra Ghodsi, Ritvik Kapila, Farinaz Koushanfar
2023-09-13T08:06:13Z
http://arxiv.org/abs/2309.06779v1
# ZKROWNN: Zero Knowledge Right of Ownership for Neural Networks ###### Abstract Training contemporary AI models requires investment in procuring learning data and computing resources, making the models intellectual property of the owners. Popular model watermarking solutions rely on key input triggers for detection; the keys have to be...
2309.14816
A Comparative Study of Population-Graph Construction Methods and Graph Neural Networks for Brain Age Regression
The difference between the chronological and biological brain age of a subject can be an important biomarker for neurodegenerative diseases, thus brain age estimation can be crucial in clinical settings. One way to incorporate multimodal information into this estimation is through population graphs, which combine vario...
Kyriaki-Margarita Bintsi, Tamara T. Mueller, Sophie Starck, Vasileios Baltatzis, Alexander Hammers, Daniel Rueckert
2023-09-26T10:30:45Z
http://arxiv.org/abs/2309.14816v1
A Comparative Study of Population-Graph Construction Methods and Graph Neural Networks for Brain Age Regression ###### Abstract The difference between the chronological and biological brain age of a subject can be an important biomarker for neurodegenerative diseases, thus brain age estimation can be crucial in clini...
2302.14231
CHGNet: Pretrained universal neural network potential for charge-informed atomistic modeling
The simulation of large-scale systems with complex electron interactions remains one of the greatest challenges for the atomistic modeling of materials. Although classical force fields often fail to describe the coupling between electronic states and ionic rearrangements, the more accurate \textit{ab-initio} molecular ...
Bowen Deng, Peichen Zhong, KyuJung Jun, Janosh Riebesell, Kevin Han, Christopher J. Bartel, Gerbrand Ceder
2023-02-28T01:30:06Z
http://arxiv.org/abs/2302.14231v2
# CHGNet: Pretrained universal neural network potential for charge-informed atomistic modeling ###### Abstract The simulation of large-scale systems with complex electron interactions remains one of the greatest challenges for the atomistic modeling of materials. Although classical force-fields often fail to describe...
2309.06221
Use neural networks to recognize students' handwritten letters and incorrect symbols
Correcting students' multiple-choice answers is a repetitive and mechanical task that can be considered an image multi-classification task. Assuming possible options are 'abcd' and the correct option is one of the four, some students may write incorrect symbols or options that do not exist. In this paper, five classifi...
JiaJun Zhu, Zichuan Yang, Binjie Hong, Jiacheng Song, Jiwei Wang, Tianhao Chen, Shuilan Yang, Zixun Lan, Fei Ma
2023-09-12T13:41:59Z
http://arxiv.org/abs/2309.06221v1
# Use neural networks to recognize students' handwritten letters and incorrect symbols ###### Abstract Correcting students' multiple-choice answers is a repetitive and mechanical task that can be considered an image multi-classification task. Assuming possible options are 'abcd' and the correct option is one of the f...
2309.05067
Mutation-based Fault Localization of Deep Neural Networks
Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems. A significant uptick in using DNN, and its applications in wide-ranging areas, including safety-critical systems, warrant extensive research on software engineering tools for improving the reliability of DNN-based systems. O...
Ali Ghanbari, Deepak-George Thomas, Muhammad Arbab Arshad, Hridesh Rajan
2023-09-10T16:18:49Z
http://arxiv.org/abs/2309.05067v1
# Mutation-based Fault Localization of Deep Neural Networks ###### Abstract Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems. A significant uptick in using DNN, and its applications in wide-ranging areas, including safety-critical systems, warrant extensive research on software engineering tools...
2309.17113
Meta-Path Learning for Multi-relational Graph Neural Networks
Existing multi-relational graph neural networks use one of two strategies for identifying informative relations: either they reduce this problem to low-level weight learning, or they rely on handcrafted chains of relational dependencies, called meta-paths. However, the former approach faces challenges in the presence o...
Francesco Ferrini, Antonio Longa, Andrea Passerini, Manfred Jaeger
2023-09-29T10:12:30Z
http://arxiv.org/abs/2309.17113v2
# Meta-Path Learning for Multi-relational Graph Neural Networks ###### Abstract Existing multi-relational graph neural networks use one of two strategies for identifying informative relations: either they reduce this problem to low-level weight learning, or they rely on handcrafted chains of relational dependencies, ...
2309.04755
Towards Real-time Training of Physics-informed Neural Networks: Applications in Ultrafast Ultrasound Blood Flow Imaging
Physics-informed Neural Network (PINN) is one of the most preeminent solvers of Navier-Stokes equations, which are widely used as the governing equation of blood flow. However, current approaches, relying on full Navier-Stokes equations, are impractical for ultrafast Doppler ultrasound, the state-of-the-art technique f...
Haotian Guan, Jinping Dong, Wei-Ning Lee
2023-09-09T11:03:06Z
http://arxiv.org/abs/2309.04755v1
# Towards Real-time Training of Physics-informed Neural Networks: Applications in Ultrafast Ultrasound Blood Flow Imaging ###### Abstract Physics-informed Neural Network (PINN) is one of the most preeminent solvers of Navier-Stokes equations, which are widely used as the governing equation of blood flow. However, curre...
2309.04782
RRCNN$^{+}$: An Enhanced Residual Recursive Convolutional Neural Network for Non-stationary Signal Decomposition
Time-frequency analysis is an important and challenging task in many applications. Fourier and wavelet analysis are two classic methods that have achieved remarkable success in many fields. They also exhibit limitations when applied to nonlinear and non-stationary signals. To address this challenge, a series of nonline...
Feng Zhou, Antonio Cicone, Haomin Zhou
2023-09-09T13:00:30Z
http://arxiv.org/abs/2309.04782v1
# RRCNN\({}^{+}\): An Enhanced Residual Recursive Convolutional Neural Network for Non-stationary Signal Decomposition ###### Abstract Time-frequency analysis is an important and challenging task in many applications. Fourier and wavelet analysis are two classic methods that have achieved remarkable success in many fie...
2309.15328
Exploring Learned Representations of Neural Networks with Principal Component Analysis
Understanding feature representation for deep neural networks (DNNs) remains an open question within the general field of explainable AI. We use principal component analysis (PCA) to study the performance of a k-nearest neighbors classifier (k-NN), nearest class-centers classifier (NCC), and support vector machines on ...
Amit Harlev, Andrew Engel, Panos Stinis, Tony Chiang
2023-09-27T00:18:25Z
http://arxiv.org/abs/2309.15328v1
# Exploring Learned Representations of Neural Networks with Principal Component Analysis ###### Abstract Understanding feature representation for deep neural networks (DNNs) remains an open question within the general field of explainable AI. We use principal component analysis (PCA) to study the performance of a k-n...
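The pipeline the abstract above describes, projecting learned features onto leading principal components and probing them with simple classifiers (k-NN, NCC, SVM), can be sketched with scikit-learn. The feature matrix below is random stand-in data, not actual network activations, and the component count is an arbitrary choice.

```python
# Sketch: probe (stand-in) learned representations with simple classifiers
# after projecting onto the top principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))    # stand-in penultimate-layer features
y = rng.integers(0, 10, size=1000)  # stand-in class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pca = PCA(n_components=32).fit(X_tr)            # fit on training features only
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("NCC", NearestCentroid()),
                  ("SVM", SVC())]:
    print(name, clf.fit(X_tr_p, y_tr).score(X_te_p, y_te))
```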
2301.00675
FlatENN: Train Flat for Enhanced Fault Tolerance of Quantized Deep Neural Networks
Model compression via quantization and sparsity enhancement has gained immense interest as a way to enable the deployment of deep neural networks (DNNs) in resource-constrained edge environments. Although these techniques have shown promising results in reducing the energy, latency and memory requirements of the DNNs, their ...
Akul Malhotra, Sumeet Kumar Gupta
2022-12-29T06:06:14Z
http://arxiv.org/abs/2301.00675v1
# FlatENN: Train Flat for Enhanced Fault Tolerance of Quantized Deep Neural Networks ###### Abstract Model compression via quantization and sparsity enhancement has gained immense interest as a way to enable the deployment of deep neural networks (DNNs) in resource-constrained edge environments. Although these techniques h...
2307.16366
Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans
In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features l...
Yanteng Zhang, Xiaohai He, Yi Hao Chan, Qizhi Teng, Jagath C. Rajapakse
2023-07-31T02:04:05Z
http://arxiv.org/abs/2307.16366v1
# Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans ###### Abstract In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomogra...
2309.05846
Designs and Implementations in Neural Network-based Video Coding
The past decade has witnessed the huge success of deep learning in well-known artificial intelligence applications such as face recognition, autonomous driving, and large language models like ChatGPT. Recently, the application of deep learning has been extended to a much wider range, with neural network-based video codi...
Yue Li, Junru Li, Chaoyi Lin, Kai Zhang, Li Zhang, Franck Galpin, Thierry Dumas, Hongtao Wang, Muhammed Coban, Jacob Ström, Du Liu, Kenneth Andersson
2023-09-11T22:12:41Z
http://arxiv.org/abs/2309.05846v2
# Designs and Implementations in Neural Network-based Video Coding ###### Abstract The past decade has witnessed the huge success of deep learning in well-known artificial intelligence applications such as face recognition, autonomous driving, and large language models like ChatGPT. Recently, the application of deep l...
2309.05809
Divergences in Color Perception between Deep Neural Networks and Humans
Deep neural networks (DNNs) are increasingly proposed as models of human vision, bolstered by their impressive performance on image classification and object recognition tasks. Yet, the extent to which DNNs capture fundamental aspects of human vision such as color perception remains unclear. Here, we develop novel expe...
Ethan O. Nadler, Elise Darragh-Ford, Bhargav Srinivasa Desikan, Christian Conaway, Mark Chu, Tasker Hull, Douglas Guilbeault
2023-09-11T20:26:40Z
http://arxiv.org/abs/2309.05809v1
# Divergences in Color Perception between Deep Neural Networks and Humans ###### Abstract Deep neural networks (DNNs) are increasingly proposed as models of human vision, bolstered by their impressive performance on image classification and object recognition tasks. Yet, the extent to which DNNs capture fundamental a...
2309.11101
A New Interpretable Neural Network-Based Rule Model for Healthcare Decision Making
In healthcare applications, understanding how machine/deep learning models make decisions is crucial. In this study, we introduce a neural network framework, $\textit{Truth Table rules}$ (TT-rules), that combines the global and exact interpretability properties of rule-based models with the high performance of deep neu...
Adrien Benamira, Tristan Guerand, Thomas Peyrin
2023-09-20T07:15:48Z
http://arxiv.org/abs/2309.11101v1
# A New Interpretable Neural Network-Based Rule Model for Healthcare Decision Making ###### Abstract In healthcare applications, understanding how machine/deep learning models make decisions is crucial. In this study, we introduce a neural network framework, _Truth Table rules_ (TT-rules), that combines the global an...
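The core idea named in the title above, turning a small binary-input unit into an exact rule by exhausting its truth table, can be sketched in a few lines. The thresholded neuron and its weights below are illustrative assumptions, not the TT-rules framework itself, which handles full deep networks.

```python
# Sketch: extract an exact DNF rule from a binary-input thresholded neuron
# by enumerating its truth table (feasible only for small fan-in).
from itertools import product

weights = [1.5, -2.0, 1.0]   # illustrative weights for 3 binary inputs
bias = -0.5

def neuron(bits):
    return sum(w * b for w, b in zip(weights, bits)) + bias > 0

# Collect the input patterns that activate the neuron: each is one AND-term.
true_rows = [bits for bits in product([0, 1], repeat=len(weights)) if neuron(bits)]
terms = [" AND ".join(f"x{i}" if b else f"NOT x{i}" for i, b in enumerate(bits))
         for bits in true_rows]
print("fires iff " + " OR ".join(f"({t})" for t in terms))
```

The printed disjunction is globally exact for this unit: it agrees with the neuron on every possible input, which is the interpretability property rule-based models trade on.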
2309.08444
Neural Network Exemplar Parallelization with Go
This paper presents a case for exemplar parallelism of neural networks using Go as the parallelization framework. It is further shown that even limited multi-core hardware systems, such as notebooks and single-board computers, are feasible for these parallelization tasks. The main question was how much speedup can be gene...
Georg Wiesinger, Erich Schikuta
2023-09-15T14:46:43Z
http://arxiv.org/abs/2309.08444v1
# Neural Network Exemplar Parallelization with Go ###### Abstract This paper presents a case for exemplar parallelism of neural networks using Go as the parallelization framework. It is further shown that even limited multi-core hardware systems, such as notebooks and single-board ...
2309.08849
Learning a Stable Dynamic System with a Lyapunov Energy Function for Demonstratives Using Neural Networks
Autonomous Dynamic System (DS)-based algorithms hold a pivotal and foundational role in the field of Learning from Demonstration (LfD). Nevertheless, they confront the formidable challenge of striking a delicate balance between achieving precision in learning and ensuring the overall stability of the system. In respons...
Yu Zhang, Yongxiang Zou, Haoyu Zhang, Xiuze Xia, Long Cheng
2023-09-16T03:03:53Z
http://arxiv.org/abs/2309.08849v6
# Learning a Stable Dynamic System with a Lyapunov Energy Function for Demonstratives Using Neural Networks ###### Abstract Autonomous Dynamic System (DS)-based algorithms hold a pivotal and foundational role in the field of Learning from Demonstration (LfD). Nevertheless, they confront the formidable challenge of stri...
2302.14690
On the existence of minimizers in shallow residual ReLU neural network optimization landscapes
Many mathematical convergence results for gradient descent (GD) based algorithms employ the assumption that the GD process is (almost surely) bounded and, also in concrete numerical simulations, divergence of the GD process may slow down, or even completely rule out, convergence of the error function. In practical rele...
Steffen Dereich, Arnulf Jentzen, Sebastian Kassing
2023-02-28T16:01:38Z
http://arxiv.org/abs/2302.14690v1
# On the existence of minimizers in shallow residual ReLU neural network optimization landscapes ###### Abstract. Many mathematical convergence results for gradient descent (GD) based algorithms employ the assumption that the GD process is (almost surely) _bounded_ and, also in concrete numerical simulations, divergence of the GD process may _slow down_, or ev...
2309.10948
A Novel Deep Neural Network for Trajectory Prediction in Automated Vehicles Using Velocity Vector Field
Anticipating the motion of other road users is crucial for automated driving systems (ADS), as it enables safe and informed downstream decision-making and motion planning. Unfortunately, contemporary learning-based approaches for motion prediction exhibit significant performance degradation as the prediction horizon in...
MReza Alipour Sormoli, Amir Samadi, Sajjad Mozaffari, Konstantinos Koufos, Mehrdad Dianati, Roger Woodman
2023-09-19T22:14:52Z
http://arxiv.org/abs/2309.10948v1
# A Novel Deep Neural Network for Trajectory Prediction in Automated Vehicles Using Velocity Vector Field ###### Abstract Anticipating the motion of other road users is crucial for automated driving systems (ADS), as it enables safe and informed downstream decision-making and motion planning. Unfortunately, contemporar...
2308.00127
DiviML: A Module-based Heuristic for Mapping Neural Networks onto Heterogeneous Platforms
Datacenters are increasingly becoming heterogeneous, and are starting to include specialized hardware for networking, video processing, and especially deep learning. To leverage the heterogeneous compute capability of modern datacenters, we develop an approach for compiler-level partitioning of deep neural networks (DN...
Yassine Ghannane, Mohamed S. Abdelfattah
2023-07-31T19:46:49Z
http://arxiv.org/abs/2308.00127v2
# DiviML: A Module-based Heuristic for Mapping Neural Networks onto Heterogeneous Platforms ###### Abstract Datacenters are increasingly becoming heterogeneous, and are starting to include specialized hardware for networking, video processing, and especially deep learning. To leverage the heterogeneous compute capabi...
2309.08652
Quantifying Credit Portfolio sensitivity to asset correlations with interpretable generative neural networks
In this research, we propose a novel approach for the quantification of credit portfolio Value-at-Risk (VaR) sensitivity to asset correlations with the use of synthetic financial correlation matrices generated with deep learning models. In previous work Generative Adversarial Networks (GANs) were employed to demonstrat...
Sergio Caprioli, Emanuele Cagliero, Riccardo Crupi
2023-09-15T15:21:14Z
http://arxiv.org/abs/2309.08652v2
# Quantifying Credit Portfolio sensitivity to asset correlations with interpretable generative neural networks ###### Abstract In this research, we propose a novel approach for the quantification of credit portfolio Value-at-Risk (VaR) sensitivity to asset correlations with the use of synthetic financial correlation ma...
2308.00053
T-Fusion Net: A Novel Deep Neural Network Augmented with Multiple Localizations based Spatial Attention Mechanisms for Covid-19 Detection
In recent years, deep neural networks are yielding better performance in image classification tasks. However, the increasing complexity of datasets and the demand for improved performance necessitate the exploration of innovative techniques. The present work proposes a new deep neural network (called T-Fusion Net) ...
Susmita Ghosh, Abhiroop Chatterjee
2023-07-31T18:18:01Z
http://arxiv.org/abs/2308.00053v1
# T-Fusion Net: A Novel Deep Neural Network Augmented with Multiple Localizations based Spatial Attention Mechanisms for Covid-19 Detection ###### Abstract In recent years, deep neural networks are yielding better performance in image classification tasks. However, the increasing complexity of datasets and the demand f...
2309.07367
The kernel-balanced equation for deep neural networks
Deep neural networks have shown many fruitful applications in this decade. A network can acquire a generalized function through training on a finite dataset. The degree of generalization is a realization of the proximity scale in the data space. Specifically, the scale is not clear if the dataset is complicated. Here w...
Kenichi Nakazato
2023-09-14T01:00:05Z
http://arxiv.org/abs/2309.07367v1
# The kernel-balanced equation for deep neural networks ###### Abstract Deep neural networks have shown many fruitful applications in this decade. A network can acquire a generalized function through training on a finite dataset. The degree of generalization is a realization of the proximity scale in the data space. ...
2305.19717
Is Rewiring Actually Helpful in Graph Neural Networks?
Graph neural networks compute node representations by performing multiple message-passing steps that consist of local aggregations of node features. Having deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothing and over-squashing. In particular, the latter is a...
Domenico Tortorella, Alessio Micheli
2023-05-31T10:12:23Z
http://arxiv.org/abs/2305.19717v1
# Is Rewiring Actually Helpful in Graph Neural Networks? ###### Abstract Graph neural networks compute node representations by performing multiple message-passing steps that consist of local aggregations of node features. Having deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothin...
2309.06535
Automatic quantification of abdominal subcutaneous and visceral adipose tissue in children, through MRI study, using total intensity maps and Convolutional Neural Networks
Childhood overweight and obesity is one of the main health problems in the world since it is related to the early appearance of different diseases, in addition to being a risk factor for later developing obesity in adulthood with its health and economic consequences. Visceral abdominal tissue (VAT) is strongly related ...
José Gerardo Suárez-García, Po-Wah So, Javier Miguel Hernández-López, Silvia S. Hidalgo-Tobón, Pilar Dies-Suárez, Benito de Celis-Alonso
2023-09-12T19:19:47Z
http://arxiv.org/abs/2309.06535v1
# Automatic quantification of abdominal subcutaneous and visceral adipose tissue in children, through MRI study, using total intensity maps and Convolutional Neural Networks ###### Abstract Childhood overweight and obesity is one of the main health problems in the world since it is related to the early appearance of different diseases, in addition to being a risk factor for later developing obesity in adulthood with its health and economic consequences. Visceral abdominal...
2309.10418
Graph Neural Networks for Dynamic Modeling of Roller Bearing
In the presented work, we propose to apply the framework of graph neural networks (GNNs) to predict the dynamics of a rolling element bearing. This approach offers generalizability and interpretability, having the potential for scalable use in real-time operational digital twin systems for monitoring the health state o...
Vinay Sharma, Jens Ravesloot, Cees Taal, Olga Fink
2023-09-19T08:30:10Z
http://arxiv.org/abs/2309.10418v1
# Graph Neural Networks for Dynamic Modeling of Roller Bearing ###### Abstract In the presented work, we propose to apply the framework of graph neural networks (GNNs) to predict the dynamics of a rolling element bearing. This approach offers generalizability and interpretability, having the potential for scalable us...
2309.17363
Relational Constraints On Neural Networks Reproduce Human Biases towards Abstract Geometric Regularity
Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors. One illustration of this is in the visual perception of geometric forms. Studies have shown a uniquely human bias toward geometric regularity, with...
Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths
2023-09-29T16:12:51Z
http://arxiv.org/abs/2309.17363v1
# Relational Constraints on Neural Networks Reproduce Human Biases Towards Abstract Geometric Regularity ###### Abstract Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors. One illustration of this i...
2305.00535
Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search
Graph neural networks are useful for learning problems, as well as for combinatorial and graph problems such as the Subgraph Isomorphism Problem and the Traveling Salesman Problem. We describe an approach for computing Steiner Trees by combining a graph neural network and Monte Carlo Tree Search. We first train a graph...
Reyan Ahmed, Mithun Ghosh, Kwang-Sung Jun, Stephen Kobourov
2023-04-30T17:15:38Z
http://arxiv.org/abs/2305.00535v1
# Nearly Optimal Steiner Trees using Graph Neural Network Assisted Monte Carlo Tree Search ###### Abstract Graph neural networks are useful for learning problems, as well as for combinatorial and graph problems such as the Subgraph Isomorphism Problem and the Traveling Salesman Problem. We describe an approach for co...
2307.16506
Explainable Equivariant Neural Networks for Particle Physics: PELICAN
PELICAN is a novel permutation equivariant and Lorentz invariant or covariant aggregator network designed to overcome common limitations found in architectures applied to particle physics problems. Compared to many approaches that use non-specialized architectures that neglect underlying physics principles and require ...
Alexander Bogatskiy, Timothy Hoffman, David W. Miller, Jan T. Offermann, Xiaoyang Liu
2023-07-31T09:08:40Z
http://arxiv.org/abs/2307.16506v4
# Explainable Equivariant Neural Networks for Particle Physics: PELICAN ###### Abstract We present a comprehensive study of the PELICAN machine learning algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks, including the difficult task of sp...
2304.00146
On the Relationships between Graph Neural Networks for the Simulation of Physical Systems and Classical Numerical Methods
Recent developments in Machine Learning approaches for modelling physical systems have begun to mirror the past development of numerical methods in the computational sciences. In this survey, we begin by providing an example of this with the parallels between the development trajectories of graph neural network acceler...
Artur P. Toshev, Ludger Paehler, Andrea Panizza, Nikolaus A. Adams
2023-03-31T21:51:00Z
http://arxiv.org/abs/2304.00146v1
# On the Relationships between Graph Neural Networks for the Simulation of Physical Systems and Classical Numerical Methods ###### Abstract Recent developments in Machine Learning approaches for modelling physical systems have begun to mirror the past development of numerical methods in the computational sciences. In t...
2302.14726
Spiking Neural Network Nonlinear Demapping on Neuromorphic Hardware for IM/DD Optical Communication
Neuromorphic computing implementing spiking neural networks (SNN) is a promising technology for reducing the footprint of optical transceivers, as required by the fast-paced growth of data center traffic. In this work, an SNN nonlinear demapper is designed and evaluated on a simulated intensity-modulation direct-detect...
Elias Arnold, Georg Böcherer, Florian Strasser, Eric Müller, Philipp Spilger, Sebastian Billaudelle, Johannes Weis, Johannes Schemmel, Stefano Calabrò, Maxim Kuschnerov
2023-02-28T16:33:39Z
http://arxiv.org/abs/2302.14726v1
# Spiking Neural Network Nonlinear Demapping on Neuromorphic Hardware for IM/DD Optical Communication ###### Abstract Neuromorphic computing implementing spiking neural networks (SNN) is a promising technology for reducing the footprint of optical transceivers, as required by the fast-paced growth of data center traf...
2309.15762
Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback
We propose a method for adapting neural networks to distribution shifts at test-time. In contrast to training-time robustness mechanisms that attempt to anticipate and counter the shift, we create a closed-loop system and make use of a test-time feedback signal to adapt a network on the fly. We show that this loop can ...
Teresa Yeo, Oğuzhan Fatih Kar, Zahra Sodagar, Amir Zamir
2023-09-27T16:20:39Z
http://arxiv.org/abs/2309.15762v1
# Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback ###### Abstract We propose a method for adapting neural networks to distribution shifts at test-time. In contrast to **training-time** robustness mechanisms that attempt to **anticipate** and counter the shift, we create a **closed-loop** system and make use of a **test-time** feedback sig...
2309.04860
Approximation Results for Gradient Descent trained Neural Networks
The paper contains approximation guarantees for neural networks that are trained with gradient flow, with error measured in the continuous $L_2(\mathbb{S}^{d-1})$-norm on the $d$-dimensional unit sphere and targets that are Sobolev smooth. The networks are fully connected of constant depth and increasing width. Althoug...
G. Welper
2023-09-09T18:47:55Z
http://arxiv.org/abs/2309.04860v1
# Approximation Results for Gradient Descent trained Neural Networks ###### Abstract The paper contains approximation guarantees for neural networks that are trained with gradient flow, with error measured in the continuous \(L_{2}(\mathbb{S}^{d-1})\)-norm on the \(d\)-dimensional unit sphere and targets that are Sob...
2309.09203
Using Artificial Neural Networks to Determine Ontologies Most Relevant to Scientific Texts
This paper provides insight into how to find ontologies most relevant to scientific texts using artificial neural networks. The basic idea of the presented approach is to select a representative paragraph from a source text file, embed it into a vector space with a pre-trained, fine-tuned transformer, ...
Lukáš Korel, Alexander S. Behr, Norbert Kockmann, Martin Holeňa
2023-09-17T08:08:50Z
http://arxiv.org/abs/2309.09203v1
# Using Artificial Neural Networks to Determine Ontologies Most Relevant to Scientific Texts ###### Abstract This paper provides insight into how to find ontologies most relevant to scientific texts using artificial neural networks. The basic idea of the presented approach is to select a represe...
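A minimal version of the matching step the abstract above sketches, scoring candidate ontologies against an embedded paragraph, might rank them by cosine similarity. The embedding vectors and ontology names below are placeholders standing in for real transformer outputs and per-ontology representations.

```python
# Sketch: rank ontologies by cosine similarity between a paragraph embedding
# and per-ontology embeddings. All vectors here are random placeholders
# standing in for transformer outputs; names are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
paragraph_vec = rng.normal(size=384)              # stand-in text embedding
ontology_vecs = {name: rng.normal(size=384)       # stand-in ontology embeddings
                 for name in ["OntologyA", "OntologyB", "OntologyC"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = sorted(ontology_vecs,
                 key=lambda n: cosine(paragraph_vec, ontology_vecs[n]),
                 reverse=True)
print("most relevant first:", ranking)
```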
2309.04733
A Spatiotemporal Deep Neural Network for Fine-Grained Multi-Horizon Wind Prediction
The prediction of wind in terms of both wind speed and direction, which has a crucial impact on many real-world applications like aviation and wind power generation, is extremely challenging due to the high stochasticity and complicated correlation in the weather data. Existing methods typically focus on a sub-set of i...
Fanling Huang, Yangdong Deng
2023-09-09T09:36:28Z
http://arxiv.org/abs/2309.04733v1
# A Spatiotemporal Deep Neural Network for Fine-Grained Multi-Horizon Wind Prediction ###### Abstract The prediction of wind in terms of both wind speed and direction, which has a crucial impact on many real-world applications like aviation and wind power generation, is extremely challenging due to the high stochasti...
2302.14685
DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks
Generalization of neural networks is crucial for deploying them safely in the real world. Common training strategies to improve generalization involve the use of data augmentations, ensembling and model averaging. In this work, we first establish a surprisingly simple but strong benchmark for generalization which utili...
Samyak Jain, Sravanti Addepalli, Pawan Sahu, Priyam Dey, R. Venkatesh Babu
2023-02-28T15:54:47Z
http://arxiv.org/abs/2302.14685v2
# DART: Diversify-Aggregate-Repeat Training Improves Generalization of Neural Networks ###### Abstract Generalization of neural networks is crucial for deploying them safely in the real world. Common training strategies to improve generalization involve the use of data augmentations, ensembling and model averaging. In this work, we first establish a surprisin...
2309.10975
SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network Quantization
Quantization is a widely used compression method that effectively reduces redundancies in over-parameterized neural networks. However, existing quantization techniques for deep neural networks often lack a comprehensive error analysis due to the presence of non-convex loss functions and nonlinear activations. In this p...
Jinjie Zhang, Rayan Saab
2023-09-20T00:35:16Z
http://arxiv.org/abs/2309.10975v1
# SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network Quantization ###### Abstract. Quantization is a widely used compression method that effectively reduces redundancies in over-parameterized neural networks. However, existing quantization techniques for deep neural networks often lack a comprehensive error analysis due to the p...
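Stochastic rounding is the basic primitive behind stochastic quantization schemes like the one in the entry above. The sketch below shows generic unbiased rounding of weights to a uniform grid, a property such quantizers build on; it is not the paper's SPFQ algorithm, and the step size is an arbitrary assumption.

```python
# Sketch: unbiased stochastic rounding of weights to a uniform grid.
# E[stochastic_round(w)] == w, the property stochastic quantizers exploit.
import numpy as np

def stochastic_round(w, step=0.05, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    scaled = w / step
    low = np.floor(scaled)
    frac = scaled - low                 # distance to the lower grid point
    up = rng.random(w.shape) < frac     # round up with probability frac
    return (low + up) * step

w = np.random.default_rng(1).normal(size=5)
print(w)                    # original weights
print(stochastic_round(w))  # weights snapped to multiples of `step`
```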
2309.14050
NNgTL: Neural Network Guided Optimal Temporal Logic Task Planning for Mobile Robots
In this work, we investigate task planning for mobile robots under linear temporal logic (LTL) specifications. This problem is particularly challenging when robots navigate in continuous workspaces due to the high computational complexity involved. Sampling-based methods have emerged as a promising avenue for addressin...
Ruijia Liu, Shaoyuan Li, Xiang Yin
2023-09-25T11:24:40Z
http://arxiv.org/abs/2309.14050v2
# NNgTL: Neural Network Guided Optimal Temporal Logic Task Planning for Mobile Robots ###### Abstract In this work, we investigate task planning for mobile robots under linear temporal logic (LTL) specifications. This problem is particularly challenging when robots navigate in continuous workspaces due to the high computational complexity involved....
2303.18157
MAGNNETO: A Graph Neural Network-based Multi-Agent system for Traffic Engineering
Current trends in networking propose the use of Machine Learning (ML) for a wide variety of network optimization tasks. As such, many efforts have been made to produce ML-based solutions for Traffic Engineering (TE), which is a fundamental problem in ISP networks. Nowadays, state-of-the-art TE optimizers rely on tradit...
Guillermo Bernárdez, José Suárez-Varela, Albert López, Xiang Shi, Shihan Xiao, Xiangle Cheng, Pere Barlet-Ros, Albert Cabellos-Aparicio
2023-03-31T15:47:49Z
http://arxiv.org/abs/2303.18157v1
# MAGNNETO: A Graph Neural Network-based Multi-Agent system for Traffic Engineering ###### Abstract Current trends in networking propose the use of Machine Learning (ML) for a wide variety of network optimization tasks. As such, many efforts have been made to produce ML-based solutions for Traffic Engineering (TE), wh...
2309.14722
Physics-informed neural network to augment experimental data: an application to stratified flows
We develop a physics-informed neural network (PINN) to significantly augment state-of-the-art experimental data and apply it to stratified flows. The PINN is a fully-connected deep neural network fed with time-resolved, three-component velocity fields and density fields measured simultaneously in three dimensions at $R...
Lu Zhu, Xianyang Jiang, Adrien Lefauve, Rich R. Kerswell, P. F. Linden
2023-09-26T07:29:42Z
http://arxiv.org/abs/2309.14722v1
# Physics-informed neural network to augment experimental data: an application to stratified flows ###### Abstract We develop a physics-informed neural network (PINN) to significantly augment state-of-the-art experimental data and apply it to stratified flows. The PINN is a fully-connected deep neural network fed wit...
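The entry above fuses measured fields with governing equations through a PINN. The generic mechanism, penalizing a PDE residual computed by automatic differentiation at collocation points, is sketched below for a toy 1-D viscous Burgers' equation as a stand-in; the paper's stratified-flow equations and data-fit terms are considerably more involved.

```python
# Sketch: PDE-residual loss for a PINN via autograd, on the toy 1-D
# viscous Burgers' equation u_t + u*u_x = nu*u_xx (a stand-in, not the
# stratified-flow equations used in the paper).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
nu = 0.01  # assumed viscosity for the toy problem

def pde_residual(xt):
    xt = xt.requires_grad_(True)          # columns are (x, t)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x),
                               create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx

xt = torch.rand(256, 2)                   # random collocation points
loss = pde_residual(xt).pow(2).mean()     # add data-fit terms in practice
loss.backward()
print(float(loss))
```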
2309.05613
Learning the Geodesic Embedding with Graph Neural Networks
We present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedra surfaces with constant time complexity after fast precomputation. Previous relevant methods either focus on computing the geodesic distance between a single source and all destina...
Bo Pang, Zhongtian Zheng, Guoping Wang, Peng-Shuai Wang
2023-09-11T16:54:34Z
http://arxiv.org/abs/2309.05613v2
# Learning the Geodesic Embedding with Graph Neural Networks ###### Abstract. We present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedra surfaces with constant time complexity after fast precomputation. Previous relevant methods either ...
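The constant-time query described in the entry above can be imitated by learning per-vertex embeddings whose pairwise norms regress precomputed geodesic distances, so each query is a single norm evaluation. This toy version uses a free embedding table and Euclidean stand-in distances, not the paper's graph neural network or true surface geodesics.

```python
# Sketch: learn per-vertex embeddings so ||e_i - e_j|| approximates
# precomputed (stand-in) geodesic distances; queries are then O(1) per pair.
# A free embedding table replaces the paper's GNN for simplicity.
import torch

n, dim = 50, 8
coords = torch.rand(n, 2)
target = torch.cdist(coords, coords)      # stand-in "geodesic" distance table

emb = torch.nn.Parameter(0.1 * torch.randn(n, dim))
opt = torch.optim.Adam([emb], lr=0.05)
for _ in range(500):
    i = torch.randint(0, n, (256,))       # random training pairs
    j = torch.randint(0, n, (256,))
    pred = ((emb[i] - emb[j]).pow(2).sum(dim=1) + 1e-9).sqrt()
    loss = (pred - target[i, j]).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

i, j = 3, 17
print(float(torch.norm(emb[i] - emb[j])), float(target[i, j]))  # O(1) query
```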