Dataset schema: seven string columns, repeated in this order for every record below. Ranges give the minimum-maximum value length in characters, as reported by the dataset viewer.

id: string (length 10-10)
title: string (length 26-192)
abstract: string (length 172-1.92k)
authors: string (length 7-591)
published_date: string (length 20-20)
link: string (length 33-33)
markdown: string (length 269-344k)
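The schema above describes seven string columns (id, title, abstract, authors, published_date, link, markdown) that repeat as a flat seven-line cycle for each record. A minimal parsing sketch under that assumption; `PaperRecord` and `parse_records` are illustrative names, not part of any dataset API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PaperRecord:
    id: str
    title: str
    abstract: str
    authors: List[str]
    published_date: str
    link: str
    markdown: str

FIELDS_PER_RECORD = 7  # id, title, abstract, authors, published_date, link, markdown

def parse_records(lines: List[str]) -> List[PaperRecord]:
    """Group the flat line stream into records, seven fields at a time."""
    records = []
    for i in range(0, len(lines) - FIELDS_PER_RECORD + 1, FIELDS_PER_RECORD):
        f = lines[i:i + FIELDS_PER_RECORD]
        records.append(PaperRecord(
            id=f[0],
            title=f[1],
            abstract=f[2],
            authors=[a.strip() for a in f[3].split(",")],  # authors column is comma-separated
            published_date=f[4],
            link=f[5],
            markdown=f[6],
        ))
    return records

# Tiny sample in the dump's layout (values abbreviated).
sample = [
    "2309.07789",
    "SOT-MRAM-Enabled Probabilistic Binary Neural Networks for Noise-Tolerant and Fast Training",
    "We report the use of spin-orbit torque (SOT) ...",
    "Puyang Huang, Yu Gu",
    "2023-09-14T15:25:36Z",
    "http://arxiv.org/abs/2309.07789v2",
    "# SOT-MRAM-Enabled Probabilistic Binary Neural Networks ...",
]
print(parse_records(sample)[0].id)  # → 2309.07789
```

Note this simple comma split would mis-handle author names that themselves contain commas; it is a sketch, not a robust parser.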
2309.07789
SOT-MRAM-Enabled Probabilistic Binary Neural Networks for Noise-Tolerant and Fast Training
We report the use of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) to implement a probabilistic binary neural network (PBNN) for resource-saving applications. The in-plane magnetized SOT (i-SOT) MRAM not only enables field-free magnetization switching with high endurance (> 10^11), but also hosts...
Puyang Huang, Yu Gu, Chenyi Fu, Jiaqi Lu, Yiyao Zhu, Renhe Chen, Yongqi Hu, Yi Ding, Hongchao Zhang, Shiyang Lu, Shouzhong Peng, Weisheng Zhao, Xufeng Kou
2023-09-14T15:25:36Z
http://arxiv.org/abs/2309.07789v2
# SOT-MRAM-Enabled Probabilistic Binary Neural Networks for Noise-Tolerant and Fast Training ###### Abstract We report the use of spin-orbit torque (SOT) magnetoresistive random-access memory (MRAM) to implement a probabilistic binary neural network (PBNN) for resource-saving applications. The in-plane magnetized SOT...
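In the record above, the `link` value pairs the `id` with a version suffix (`2309.07789` plus `v2`). A hedged helper reproducing that observed pattern; the function name `arxiv_abs_url` is mine, not from any arXiv client library:

```python
def arxiv_abs_url(arxiv_id: str, version: int) -> str:
    # The dump's link column follows http://arxiv.org/abs/<id>v<version>.
    return f"http://arxiv.org/abs/{arxiv_id}v{version}"

print(arxiv_abs_url("2309.07789", 2))  # → http://arxiv.org/abs/2309.07789v2
```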
2310.20519
Enhancing Graph Neural Networks with Quantum Computed Encodings
Transformers are increasingly employed for graph data, demonstrating competitive performance in diverse tasks. To incorporate graph information into these models, it is essential to enhance node and edge features with positional encodings. In this work, we propose novel families of positional encodings tailored for gra...
Slimane Thabet, Romain Fouilland, Mehdi Djellabi, Igor Sokolov, Sachin Kasture, Louis-Paul Henry, Loïc Henriet
2023-10-31T14:56:52Z
http://arxiv.org/abs/2310.20519v1
# Enhancing Graph Neural Networks with Quantum Computed Encodings ###### Abstract Transformers are increasingly employed for graph data, demonstrating competitive performance in diverse tasks. To incorporate graph information into these models, it is essential to enhance node and edge features with positional encodings. In this work, we propos...
2310.20294
Robust nonparametric regression based on deep ReLU neural networks
In this paper, we consider robust nonparametric regression using deep neural networks with ReLU activation function. While several existing theoretically justified methods are geared towards robustness against identical heavy-tailed noise distributions, the rise of adversarial attacks has emphasized the importance of s...
Juntong Chen
2023-10-31T09:05:09Z
http://arxiv.org/abs/2310.20294v1
# Robust nonparametric regression based on deep ReLU neural networks ###### Abstract In this paper, we consider robust nonparametric regression using deep neural networks with ReLU activation function. While several existing theoretically justified methods are geared towards robustness against identical heavy-tailed...
2303.17925
Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks
In this study, we explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs), with a particular focus on complex topologies. We propose a novel methodology for constructing complex ANNs based on various topologies, including Barabási-Albert, Erdős-Rényi, Watts-...
Tommaso Boccato, Matteo Ferrante, Andrea Duggento, Nicola Toschi
2023-03-31T09:48:16Z
http://arxiv.org/abs/2303.17925v2
# Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks ###### Abstract In this study, we explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs), with a particular focus on complex topologies. We propose a novel methodology for constr...
2307.16666
Improving the temporal resolution of event-based electron detectors using neural network cluster analysis
Novel event-based electron detector platforms provide an avenue to extend the temporal resolution of electron microscopy into the ultrafast domain. Here, we characterize the timing accuracy of a detector based on a TimePix3 architecture using femtosecond electron pulse trains as a reference. With a large dataset of eve...
Alexander Schröder, Leon van Velzen, Maurits Kelder, Sascha Schäfer
2023-07-31T13:45:57Z
http://arxiv.org/abs/2307.16666v1
# Improving the temporal resolution of event-based electron detectors using neural network cluster analysis ###### Abstract Novel event-based electron detector platforms provide an avenue to extend the temporal resolution of electron microscopy into the ultrafast domain. Here, we characterize the timing accuracy of a d...
2303.00524
Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach
Prediction of taxi service demand and supply is essential for improving customer's experience and provider's profit. Recently, graph neural networks (GNNs) have been shown promising for this application. This approach models city regions as nodes in a transportation graph and their relations as edges. GNNs utilize loca...
Mahmoud Nazzal, Abdallah Khreishah, Joyoung Lee, Shaahin Angizi, Ala Al-Fuqaha, Mohsen Guizani
2023-02-28T00:21:18Z
http://arxiv.org/abs/2303.00524v2
# Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach ###### Abstract Prediction of taxi service demand and supply is essential for improving customer's experience and provider's profit. Recently, graph neural networks (GNNs) have been shown pro...
2309.16335
End-to-end Risk Prediction of Atrial Fibrillation from the 12-Lead ECG by Deep Neural Networks
Background: Atrial fibrillation (AF) is one of the most common cardiac arrhythmias that affects millions of people each year worldwide and it is closely linked to increased risk of cardiovascular diseases such as stroke and heart failure. Machine learning methods have shown promising results in evaluating the risk of d...
Theogene Habineza, Antônio H. Ribeiro, Daniel Gedon, Joachim A. Behar, Antonio Luiz P. Ribeiro, Thomas B. Schön
2023-09-28T10:47:40Z
http://arxiv.org/abs/2309.16335v1
# End-to-end Risk Prediction of Atrial Fibrillation from the 12-Lead ECG by Deep Neural Networks ###### Abstract **Background:** Atrial fibrillation (AF) is one of the most common cardiac arrhythmias that affects millions of people each year worldwide and it is closely linked to increased risk of cardiovascular disea...
2304.00150
E(3) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics
We contribute to the vastly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts. We benchmark two well-studied fluid flow systems, namely the 3D deca...
Artur P. Toshev, Gianluca Galletti, Johannes Brandstetter, Stefan Adami, Nikolaus A. Adams
2023-03-31T21:56:35Z
http://arxiv.org/abs/2304.00150v1
# E(3) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics ###### Abstract We contribute to the vastly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-...
2309.16902
Investigating Shift Equivalence of Convolutional Neural Networks in Industrial Defect Segmentation
In industrial defect segmentation tasks, while pixel accuracy and Intersection over Union (IoU) are commonly employed metrics to assess segmentation performance, the output consistency (also referred to equivalence) of the model is often overlooked. Even a small shift in the input image can yield significant fluctuatio...
Zhen Qu, Xian Tao, Fei Shen, Zhengtao Zhang, Tao Li
2023-09-29T00:04:47Z
http://arxiv.org/abs/2309.16902v1
# Investigating Shift Equivalence of Convolutional Neural Networks in Industrial Defect Segmentation ###### Abstract In industrial defect segmentation tasks, while pixel accuracy and Intersection over Union (IoU) are commonly employed metrics to assess segmentation performance, the output consistency (also referred t...
2308.16424
Solar horizontal flow evaluation using neural network and numerical simulation with snapshot data
We suggest a method that evaluates the horizontal velocity in the solar photosphere with easily observable values using a combination of neural network and radiative magnetohydrodynamics simulations. All three-component velocities of thermal convection on the solar surface have important roles in generating waves in th...
Hiroyuki Masaki, Hideyuki Hotta, Yukio Katsukawa, Ryohtaroh T. Ishikawa
2023-08-31T03:28:03Z
http://arxiv.org/abs/2308.16424v2
# Solar horizontal flow evaluation using neural network and numerical simulation with snapshot data ###### Abstract We suggest a method that evaluates the horizontal velocity in the solar photosphere with easily observable values using a combination of neural network and radiative magnetohydrodynamics simulations. All three-component velocities of thermal convection on the solar surface have important roles in ge...
2309.08533
Automated dermatoscopic pattern discovery by clustering neural network output for human-computer interaction
Background: As available medical image datasets increase in size, it becomes infeasible for clinicians to review content manually for knowledge extraction. The objective of this study was to create an automated clustering resulting in human-interpretable pattern discovery. Methods: Images from the public HAM10000 dat...
Lidia Talavera-Martinez, Philipp Tschandl
2023-09-15T16:50:47Z
http://arxiv.org/abs/2309.08533v1
# Automated dermatoscopic pattern discovery by clustering neural network output for human-computer interaction ###### Abstract Background: As available medical image datasets increase in size, it becomes infeasible for clinicians to review content manually for knowledge extraction. The objective of this study was to cr...
2309.04452
Postprocessing of Ensemble Weather Forecasts Using Permutation-invariant Neural Networks
Statistical postprocessing is used to translate ensembles of raw numerical weather forecasts into reliable probabilistic forecast distributions. In this study, we examine the use of permutation-invariant neural networks for this task. In contrast to previous approaches, which often operate on ensemble summary statistic...
Kevin Höhlein, Benedikt Schulz, Rüdiger Westermann, Sebastian Lerch
2023-09-08T17:20:51Z
http://arxiv.org/abs/2309.04452v2
# Postprocessing of Ensemble Weather Forecasts Using Permutation-invariant Neural Networks ###### Abstract Statistical postprocessing is used to translate ensembles of raw numerical weather forecasts into reliable probabilistic forecast distributions. In this study, we examine the use of permutation-invariant neural networks for this task. In contrast to previ...
2309.09638
Neural Network-Based Rule Models With Truth Tables
Understanding the decision-making process of a machine/deep learning model is crucial, particularly in security-sensitive applications. In this study, we introduce a neural network framework that combines the global and exact interpretability properties of rule-based models with the high performance of deep neural netw...
Adrien Benamira, Tristan Guérand, Thomas Peyrin, Hans Soegeng
2023-09-18T10:13:59Z
http://arxiv.org/abs/2309.09638v1
# Neural Network-Based Rule Models With Truth Tables ###### Abstract Understanding the decision-making process of a machine/deep learning model is crucial, particularly in security-sensitive applications. In this study, we introduce a neural network framework that combines the global and exact interpretability proper...
2308.16425
On the Equivalence between Implicit and Explicit Neural Networks: A High-dimensional Viewpoint
Implicit neural networks have demonstrated remarkable success in various tasks. However, there is a lack of theoretical analysis of the connections and differences between implicit and explicit networks. In this paper, we study high-dimensional implicit neural networks and provide the high dimensional equivalents for t...
Zenan Ling, Zhenyu Liao, Robert C. Qiu
2023-08-31T03:28:43Z
http://arxiv.org/abs/2308.16425v1
# On the Equivalence between Implicit and Explicit Neural Networks: A High-dimensional Viewpoint ###### Abstract Implicit neural networks have demonstrated remarkable success in various tasks. However, there is a lack of theoretical analysis of the connections and differences between implicit and explicit networks. In this paper, we study high-d...
2309.14523
Smooth Exact Gradient Descent Learning in Spiking Neural Networks
Artificial neural networks are highly successfully trained with backpropagation. For spiking neural networks, however, a similar gradient descent scheme seems prohibitive due to the sudden, disruptive (dis-)appearance of spikes. Here, we demonstrate exact gradient descent learning based on spiking dynamics that change ...
Christian Klos, Raoul-Martin Memmesheimer
2023-09-25T20:51:00Z
http://arxiv.org/abs/2309.14523v1
# Smooth Exact Gradient Descent Learning in Spiking Neural Networks ###### Abstract Artificial neural networks are highly successfully trained with backpropagation. For spiking neural networks, however, a similar gradient descent scheme seems prohibitive due to the sudden, disruptive (dis-)appearance of spikes. Here,...
2309.10976
Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks
Safe deployment of graph neural networks (GNNs) under distribution shift requires models to provide accurate confidence indicators (CI). However, while it is well-known in computer vision that CI quality diminishes under distribution shift, this behavior remains understudied for GNNs. Hence, we begin with a case study ...
Puja Trivedi, Mark Heimann, Rushil Anirudh, Danai Koutra, Jayaraman J. Thiagarajan
2023-09-20T00:35:27Z
http://arxiv.org/abs/2309.10976v1
# Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks ###### Abstract Safe deployment of graph neural networks (GNNs) under distribution shift requires models to provide accurate confidence indicators (CI). However, while it is well-known in computer vision that CI quality diminishes u...
2303.00055
Learning time-scales in two-layers neural networks
Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease rate of empirical risk is non-monotone even after averaging over large batches. Long plateaus in which one observes barely any progress alternate with intervals of rapid decrease. These successive ...
Raphaël Berthier, Andrea Montanari, Kangjie Zhou
2023-02-28T19:52:26Z
http://arxiv.org/abs/2303.00055v3
# Learning time-scales in two-layers neural networks ###### Abstract Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease rate of empirical risk is non-monotone even after averaging over large batches. Long plateaus in which one observes barely any...
2306.17485
Detection-segmentation convolutional neural network for autonomous vehicle perception
Object detection and segmentation are two core modules of an autonomous vehicle perception system. They should have high efficiency and low latency while reducing computational complexity. Currently, the most commonly used algorithms are based on deep neural networks, which guarantee high efficiency but require high-pe...
Maciej Baczmanski, Robert Synoczek, Mateusz Wasala, Tomasz Kryjak
2023-06-30T08:54:52Z
http://arxiv.org/abs/2306.17485v1
# Detection-segmentation convolutional neural network for autonomous vehicle perception ###### Abstract Object detection and segmentation are two core modules of an autonomous vehicle perception system. They should have high efficiency and low latency while reducing computational complexity. Currently, the most commo...
2309.11856
Activation Compression of Graph Neural Networks using Block-wise Quantization with Improved Variance Minimization
Efficient training of large-scale graph neural networks (GNNs) has been studied with a specific focus on reducing their memory consumption. Work by Liu et al. (2022) proposed extreme activation compression (EXACT) which demonstrated drastic reduction in memory consumption by performing quantization of the intermediate ...
Sebastian Eliassen, Raghavendra Selvan
2023-09-21T07:59:08Z
http://arxiv.org/abs/2309.11856v2
# Activation Compression of Graph Neural Networks Using Block-Wise Quantization With Improved Variance Minimization ###### Abstract Efficient training of large-scale graph neural networks (GNNs) has been studied with a specific focus on reducing their memory consumption. Work by Liu et al. (2022) proposed extreme activ...
2310.05950
Quantization of Neural Network Equalizers in Optical Fiber Transmission Experiments
The quantization of neural networks for the mitigation of the nonlinear and components' distortions in dual-polarization optical fiber transmission is studied. Two low-complexity neural network equalizers are applied in three 16-QAM 34.4 GBaud transmission experiments with different representative fibers. A number of p...
Jamal Darweesh, Nelson Costa, Antonio Napoli, Bernhard Spinnler, Yves Jaouen, Mansoor Yousefi
2023-09-09T12:24:55Z
http://arxiv.org/abs/2310.05950v1
# Quantization of Neural Network Equalizers in Optical Fiber Transmission Experiments ###### Abstract The quantization of neural networks for the mitigation of the nonlinear and components' distortions in dual-polarization optical fiber transmission is studied. Two low-complexity neural network equalizers are applied...
2305.19659
Improving Expressivity of Graph Neural Networks using Localization
In this paper, we propose localized versions of Weisfeiler-Leman (WL) algorithms in an effort to both increase the expressivity, as well as decrease the computational overhead. We focus on the specific problem of subgraph counting and give localized versions of $k$-WL for any $k$. We analyze the power of Local $k$-WL a...
Anant Kumar, Shrutimoy Das, Shubhajit Roy, Binita Maity, Anirban Dasgupta
2023-05-31T08:46:11Z
http://arxiv.org/abs/2305.19659v3
# Improving Expressivity of Graph Neural Networks using Localization ###### Abstract In this paper, we propose localized versions of Weisfeiler-Leman (WL) algorithms in an effort to both increase the expressivity, as well as decrease the computational overhead. We focus on the specific problem of subgraph counting an...
2306.17442
Designing strong baselines for ternary neural network quantization through support and mass equalization
Deep neural networks (DNNs) offer the highest performance in a wide range of applications in computer vision. These results rely on over-parameterized backbones, which are expensive to run. This computational burden can be dramatically reduced by quantizing (in either data-free (DFQ), post-training (PTQ) or quantizatio...
Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
2023-06-30T07:35:07Z
http://arxiv.org/abs/2306.17442v1
# Designing Strong Baselines for Ternary Neural Network Quantization Through Support and Mass Equalization ###### Abstract Deep neural networks (DNNs) offer the highest performance in a wide range of applications in computer vision. These results rely on over-parameterized backbones, which are expensive to run. This co...
2309.10225
VPRTempo: A Fast Temporally Encoded Spiking Neural Network for Visual Place Recognition
Spiking Neural Networks (SNNs) are at the forefront of neuromorphic computing thanks to their potential energy-efficiency, low latencies, and capacity for continual learning. While these capabilities are well suited for robotics tasks, SNNs have seen limited adaptation in this field thus far. This work introduces a SNN...
Adam D. Hines, Peter G. Stratton, Michael Milford, Tobias Fischer
2023-09-19T00:38:05Z
http://arxiv.org/abs/2309.10225v2
# VPRTempo: A Fast Temporally Encoded Spiking Neural Network for Visual Place Recognition ###### Abstract Spiking Neural Networks (SNNs) are at the forefront of neuromorphic computing thanks to their potential energy-efficiency, low latencies, and capacity for continual learning. While these capabilities are well sui...
2309.06645
Bregman Graph Neural Network
Much recent research on graph neural networks (GNNs) has focused on formulating GNN architectures as an optimization problem with the smoothness assumption. However, in node classification tasks, the smoothing effect induced by GNNs tends to assimilate representations and over-homogenize labels of connected nodes, ...
Jiayu Zhai, Lequan Lin, Dai Shi, Junbin Gao
2023-09-12T23:54:24Z
http://arxiv.org/abs/2309.06645v1
# Bregman Graph Neural Network ###### Abstract Much recent research on graph neural networks (GNNs) has focused on formulating GNN architectures as an optimization problem with the smoothness assumption. However, in node classification tasks, the smoothing effect induced by GNNs tends to assimilate representation...
2309.16318
DeepPCR: Parallelizing Sequential Operations in Neural Networks
Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed in a sequential manner. For instance, the forward and backward passes are executed layer-by-layer, and the output of diffusion models is produced by app...
Federico Danieli, Miguel Sarabia, Xavier Suau, Pau Rodríguez, Luca Zappella
2023-09-28T10:15:30Z
http://arxiv.org/abs/2309.16318v2
# DeepPCR: Parallelizing Sequential Operations in Neural Networks ###### Abstract Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed in a sequential manner. For instance, the forward and backward passe...
2309.03846
Scalable Forward Reachability Analysis of Multi-Agent Systems with Neural Network Controllers
Neural networks (NNs) have been shown to learn complex control laws successfully, often with performance advantages or decreased computational cost compared to alternative methods. Neural network controllers (NNCs) are, however, highly sensitive to disturbances and uncertainty, meaning that it can be challenging to mak...
Oliver Gates, Matthew Newton, Konstantinos Gatsis
2023-09-07T17:02:09Z
http://arxiv.org/abs/2309.03846v1
# Scalable Forward Reachability Analysis of Multi-Agent Systems with Neural Network Controllers ###### Abstract Neural networks (NNs) have been shown to learn complex control laws successfully, often with performance advantages or decreased computational cost compared to alternative methods. Neural network controller...
2303.17883
Single-ended Recovery of Optical fiber Transmission Matrices using Neural Networks
Ultra-thin multimode optical fiber imaging promises next-generation medical endoscopes reaching high image resolution for deep tissues. However, current technology suffers from severe optical distortion, as the fiber's calibration is sensitive to bending and temperature and thus requires in vivo re-measurement with acc...
Yijie Zheng, George S. D. Gordon
2023-03-31T08:35:22Z
http://arxiv.org/abs/2303.17883v2
# Single-ended Recovery of Optical fiber Transmission Matrices using Neural Networks ###### Abstract Ultra-thin multimode optical fiber imaging technology promises next-generation medical endoscopes that provide high image resolution deep in the body (e.g. blood vessels, brain). However, this technology suffers from ...
2301.00012
GANExplainer: GAN-based Graph Neural Networks Explainer
With the rapid deployment of graph neural network (GNN)-based techniques into a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component for predictive and trustworthy decision-making. Thus, it is critical to exp...
Yiqiao Li, Jianlong Zhou, Boyuan Zheng, Fang Chen
2022-12-30T23:11:24Z
http://arxiv.org/abs/2301.00012v1
# GANExplainer: GAN-based Graph Neural Networks Explainer ###### Abstract With the rapid deployment of graph neural network (GNN)-based techniques into a wide range of applications such as link prediction, node classification, and graph classification, the explainability of GNNs has become an indispensable component...
2309.07163
Systematic Review of Experimental Paradigms and Deep Neural Networks for Electroencephalography-Based Cognitive Workload Detection
This article summarizes a systematic review of the electroencephalography (EEG)-based cognitive workload (CWL) estimation. The focus of the article is twofold: identify the disparate experimental paradigms used for reliably eliciting discrete and quantifiable levels of cognitive load and the specific nature and represe...
Vishnu KN, Cota Navin Gupta
2023-09-11T14:27:22Z
http://arxiv.org/abs/2309.07163v1
# Systematic Review of Experimental Paradigms and Deep Neural Networks for Electroencephalography-Based Cognitive Workload Detection ###### Abstract This article summarizes a systematic review of the electroencephalography (EEG)-based cognitive workload (CWL) estimation. The focus of the article is two-fold, identi...
2301.00007
Selected aspects of complex, hypercomplex and fuzzy neural networks
This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests o...
Agnieszka Niemczynowicz, Radosław A. Kycia, Maciej Jaworski, Artur Siemaszko, Jose M. Calabuig, Lluis M. García-Raffi, Baruch Schneider, Diana Berseghyan, Irina Perfiljeva, Vilem Novak, Piotr Artiemjew
2022-12-29T12:26:56Z
http://arxiv.org/abs/2301.00007v2
# Selected aspects of complex, hypercomplex and fuzzy neural networks ###### Abstract This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests o...
2309.13881
Skip-Connected Neural Networks with Layout Graphs for Floor Plan Auto-Generation
With the advent of AI and computer vision techniques, the quest for automated and efficient floor plan designs has gained momentum. This paper presents a novel approach using skip-connected neural networks integrated with layout graphs. The skip-connected layers capture multi-scale floor plan information, and the encod...
Yuntae Jeon, Dai Quoc Tran, Seunghee Park
2023-09-25T05:20:57Z
http://arxiv.org/abs/2309.13881v2
# Skip-Connected Neural Networks with Layout Graphs for Floor Plan Auto-Generation ###### Abstract With the advent of AI and computer vision techniques, the quest for automated and efficient floor plan designs has gained momentum. This paper presents a novel approach using skip-connected neural networks integrated with layout graphs. The skip-...
2309.07412
Advancing Regular Language Reasoning in Linear Recurrent Neural Networks
In recent studies, linear recurrent neural networks (LRNNs) have achieved Transformer-level performance in natural language and long-range modeling, while offering rapid parallel training and constant inference cost. With the resurgence of interest in LRNNs, we study whether they can learn the hidden rules in training ...
Ting-Han Fan, Ta-Chung Chi, Alexander I. Rudnicky
2023-09-14T03:36:01Z
http://arxiv.org/abs/2309.07412v2
# Advancing Regular Language Reasoning in Linear Recurrent Neural Networks ###### Abstract In recent studies, linear recurrent neural networks (LRNNs) have achieved Transformer-level performance in natural language modeling and long-range modeling while offering rapid parallel training and constant inference costs. With the resurgence of interest in LRNN...
2309.03890
XpookyNet: Advancement in Quantum System Analysis through Convolutional Neural Networks for Detection of Entanglement
The application of machine learning models in quantum information theory has surged in recent years, driven by the recognition of entanglement and quantum states, which are the essence of this field. However, most of these studies rely on existing prefabricated models, leading to inadequate accuracy. This work aims to ...
Ali Kookani, Yousef Mafi, Payman Kazemikhah, Hossein Aghababa, Kazim Fouladi, Masoud Barati
2023-09-07T17:52:43Z
http://arxiv.org/abs/2309.03890v4
# XpookyNet: Advancement in Quantum System Analysis through Convolutional Neural Networks for Detection of Entanglement ###### Abstract The application of machine learning models in quantum information theory has surged in recent years, driven by the recognition of entanglement and quantum states, which are the essence...
2309.06081
Information Flow in Graph Neural Networks: A Clinical Triage Use Case
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs. However, efficient training of GNNs remains challenging, with several open research questions. In this paper, we investigate how the flow of embedding information ...
Víctor Valls, Mykhaylo Zayats, Alessandra Pascale
2023-09-12T09:18:12Z
http://arxiv.org/abs/2309.06081v1
# Information Flow in Graph Neural Networks: A Clinical Triage Use Case ###### Abstract Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs. However, efficient training of GNNs remains challenging, with several open research questions. In th...
2309.05826
KD-FixMatch: Knowledge Distillation Siamese Neural Networks
Semi-supervised learning (SSL) has become a crucial approach in deep learning as a way to address the challenge of limited labeled data. The success of deep neural networks heavily relies on the availability of large-scale high-quality labeled data. However, the process of data labeling is time-consuming and unscalable...
Chien-Chih Wang, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Bryan Wang
2023-09-11T21:11:48Z
http://arxiv.org/abs/2309.05826v1
# KD-FixMatch: Knowledge Distillation Siamese Neural Networks ###### Abstract Semi-supervised learning (SSL) has become a crucial approach in deep learning as a way to address the challenge of limited labeled data. The success of deep neural networks heavily relies on the availability of large-scale high-quality labe...
2301.13694
Are Defenses for Graph Neural Networks Robust?
A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw - virtually all of the defenses are evaluated against non-adaptive attacks leading to overly optimistic robustne...
Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski
2023-01-31T15:11:48Z
http://arxiv.org/abs/2301.13694v1
# Are Defenses for Graph Neural Networks Robust? ###### Abstract A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw - virtually all of the defenses are evaluated...
2309.05818
Rice Plant Disease Detection and Diagnosis using Deep Convolutional Neural Networks and Multispectral Imaging
Rice is considered a strategic crop in Egypt as it is regularly consumed in the Egyptian people's diet. Even though Egypt is the highest rice producer in Africa with a share of 6 million tons per year, it still imports rice to satisfy its local needs due to production loss, especially due to rice disease. Rice blast di...
Yara Ali Alnaggar, Ahmad Sebaq, Karim Amer, ElSayed Naeem, Mohamed Elhelw
2023-09-11T20:51:21Z
http://arxiv.org/abs/2309.05818v1
# Rice Plant Disease Detection and Diagnosis using Deep Convolutional Neural Networks and Multispectral Imaging ###### Abstract Rice is considered a strategic crop in Egypt as it is regularly consumed in the Egyptian people's diet. Even though Egypt is the highest rice producer in Africa with a share of 6 million tons ...
2309.04737
Learning Spiking Neural Network from Easy to Hard task
Starting with small and simple concepts, and gradually introducing complex and difficult concepts is the natural process of human learning. Spiking Neural Networks (SNNs) aim to mimic the way humans process information, but current SNNs models treat all samples equally, which does not align with the principles of human...
Lingling Tang, Jiangtao Hu, Hua Yu, Surui Liu, Jielei Chu
2023-09-09T09:46:32Z
http://arxiv.org/abs/2309.04737v3
# Learning Spiking Neural Network from Easy to Hard task ###### Abstract Starting with small and simple concepts, and gradually introducing complex and difficult concepts is the natural process of human learning. Spiking Neural Networks (SNNs) aim to mimic the way humans process information, but current SNNs models t...
2309.13459
A Model-Agnostic Graph Neural Network for Integrating Local and Global Information
Graph Neural Networks (GNNs) have achieved promising performance in a variety of graph-focused tasks. Despite their success, however, existing GNNs suffer from two significant limitations: a lack of interpretability in results due to their black-box nature, and an inability to learn representations of varying orders. T...
Wenzhuo Zhou, Annie Qu, Keiland W. Cooper, Norbert Fortin, Babak Shahbaba
2023-09-23T19:07:03Z
http://arxiv.org/abs/2309.13459v3
# A Model-Agnostic Graph Neural Network for Integrating Local and Global Information ###### Abstract Graph Neural Networks (GNNs) have achieved promising performance in a variety of graph-focused tasks. Despite their success, existing GNNs suffer from two significant limitations: a lack of interpretability in results...
2309.14691
On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks
Artificial neural networks (ANNs) with recurrence and self-attention have been shown to be Turing-complete (TC). However, existing work has shown that these ANNs require multiple turns or unbounded computation time, even with unbounded precision in weights, in order to recognize TC grammars. However, under constraints ...
Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles
2023-09-26T06:06:47Z
http://arxiv.org/abs/2309.14691v1
# On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks ###### Abstract Artificial neural networks (ANNs) with recurrence and self-attention have been shown to be Turing-complete (TC). However, existing work has shown that these ANNs require multiple turns or unbounded computation time, even with unbounded precisio...
2309.09171
On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks
The Riemann hypothesis (RH) is a long-standing open problem in mathematics. It conjectures that non-trivial zeros of the zeta function all have real part equal to 1/2. The extent of the consequences of RH is far-reaching and touches a wide spectrum of topics including the distribution of prime numbers, the growth of ar...
Soufiane Hayou
2023-09-17T05:50:12Z
http://arxiv.org/abs/2309.09171v1
# On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks ###### Abstract The Riemann hypothesis (\(\mathcal{RH}\)) is a long-standing open problem in mathematics. It conjectures that non-trivial zeros of the zeta function all lie on the line \(\text{Re}(z)=1/2\). The extent of the consequences of \(\mathcal{RH}\) is far-reachi...
2305.19921
Deep Neural Network Estimation in Panel Data Models
In this paper we study neural networks and their approximating power in panel data models. We provide asymptotic guarantees on deep feed-forward neural network estimation of the conditional mean, building on the work of Farrell et al. (2021), and explore latent patterns in the cross-section. We use the proposed estimat...
Ilias Chronopoulos, Katerina Chrysikou, George Kapetanios, James Mitchell, Aristeidis Raftapostolos
2023-05-31T14:58:31Z
http://arxiv.org/abs/2305.19921v1
# Deep Neural Network Estimation in Panel Data Models+ ###### Abstract In this paper we study neural networks and their approximating power in panel data models. We provide asymptotic guarantees on deep feed-forward neural network estimation of the conditional mean, building on the work of Farrell et al. (2021), and ...
2308.16422
Dilated convolutional neural network for detecting extreme-mass-ratio inspirals
The detection of Extreme Mass Ratio Inspirals (EMRIs) is intricate due to their complex waveforms, extended duration, and low signal-to-noise ratio (SNR), making them more challenging to be identified compared to compact binary coalescences. While matched filtering-based techniques are known for their computational dem...
Tianyu Zhao, Yue Zhou, Ruijun Shi, Zhoujian Cao, Zhixiang Ren
2023-08-31T03:16:38Z
http://arxiv.org/abs/2308.16422v3
# DECODE: DilatEd COnvolutional neural network for Detecting Extreme-mass-ratio inspirals ###### Abstract The detection of Extreme Mass Ratio Inspirals (EMRIs) is intricate due to their complex waveforms, extended duration, and low signal-to-noise ratio (SNR), making them more challenging to be identified compared to...
2305.19935
Neural Network Approach to the Simulation of Entangled States with One Bit of Communication
Bell's theorem states that Local Hidden Variables (LHVs) cannot fully explain the statistics of measurements on some entangled quantum states. It is natural to ask how much supplementary classical communication would be needed to simulate them. We study two long-standing open questions in this field with neural network...
Peter Sidajaya, Aloysius Dewen Lim, Baichu Yu, Valerio Scarani
2023-05-31T15:19:00Z
http://arxiv.org/abs/2305.19935v5
# Neural Network Approach to the Simulation of Entangled States with One Bit of Communication ###### Abstract Bell's theorem states that Local Hidden Variables (LHVs) cannot fully explain the statistics of measurements on some entangled quantum states. It is natural to ask how much supplementary classical communicati...
2309.00168
Pose-Graph Attentional Graph Neural Network for Lidar Place Recognition
This paper proposes a pose-graph attentional graph neural network, called P-GAT, which compares (key)nodes between sequential and non-sequential sub-graphs for place recognition tasks as opposed to a common frame-to-frame retrieval problem formulation currently implemented in SOTA place recognition methods. P-GAT uses ...
Milad Ramezani, Liang Wang, Joshua Knights, Zhibin Li, Pauline Pounds, Peyman Moghadam
2023-08-31T23:17:44Z
http://arxiv.org/abs/2309.00168v3
# Pose-Graph Attentional Graph Neural Network for Lidar Place Recognition ###### Abstract This paper proposes a pose-graph attentional graph neural network, called P-GAT, which compares (key)nodes between sequential and non-sequential sub-graphs for place recognition tasks as opposed to a common frame-to-frame retrieval problem formulation curr...
2309.11717
A class-weighted supervised contrastive learning long-tailed bearing fault diagnosis approach using quadratic neural network
Deep learning has achieved remarkable success in bearing fault diagnosis. However, its performance oftentimes deteriorates when dealing with highly imbalanced or long-tailed data, while such cases are prevalent in industrial settings because fault is a rare event that occurs with an extremely low probability. Conventio...
Wei-En Yu, Jinwei Sun, Shiping Zhang, Xiaoge Zhang, Jing-Xiao Liao
2023-09-21T01:36:46Z
http://arxiv.org/abs/2309.11717v1
A class-weighted supervised contrastive learning long-tailed bearing fault diagnosis approach using quadratic neural network ###### Abstract Deep learning has achieved remarkable success in bearing fault diagnosis. However, its performance oftentimes deteriorates when dealing with highly imbalanced or long-tailed dat...
2309.15559
Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution
Self-interpreting neural networks have garnered significant interest in research. Existing works in this domain often (1) lack a solid theoretical foundation ensuring genuine interpretability or (2) compromise model expressiveness. In response, we formulate a generic Additive Self-Attribution (ASA) framework. Observing...
Ying Sun, Hengshu Zhu, Hui Xiong
2023-09-27T10:31:48Z
http://arxiv.org/abs/2309.15559v1
# Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution ###### Abstract Self-interpreting neural networks have garnered significant interest in research. Existing works in this domain often (1) lack a solid theoretical foundation ensuring genuine interpretability or (2) compr...
2309.04332
Graph Neural Networks Use Graphs When They Shouldn't
Predictions over graphs play a crucial role in various domains, including social networks and medicine. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Although a graph-structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring i...
Maya Bechler-Speicher, Ido Amos, Ran Gilad-Bachrach, Amir Globerson
2023-09-08T13:59:18Z
http://arxiv.org/abs/2309.04332v2
# Graph Neural Networks Use Graphs When They Shouldn't ###### Abstract Predictions over graphs play a crucial role in various domains, including social networks, molecular biology, medicine, and more. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Instances of graph lab...
2309.13302
Gaining the Sparse Rewards by Exploring Lottery Tickets in Spiking Neural Network
Deploying energy-efficient deep learning algorithms on computational-limited devices, such as robots, is still a pressing issue for real-world applications. Spiking Neural Networks (SNNs), a novel brain-inspired algorithm, offer a promising solution due to their low-latency and low-energy properties over traditional Ar...
Hao Cheng, Jiahang Cao, Erjia Xiao, Mengshu Sun, Renjing Xu
2023-09-23T08:24:36Z
http://arxiv.org/abs/2309.13302v4
# Gaining the Sparse Rewards by Exploring Binary Lottery Tickets in Spiking Neural Networks ###### Abstract Spiking Neural Network (SNN) as a brain-inspired strategy receives lots of attention because of the high-sparsity and low-power properties derived from its inherent spiking information state. To further improve...
2309.11515
Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach
With increasing frequency of high-profile privacy breaches in various online platforms, users are becoming more concerned about their privacy. The recommender system is the core component of online platforms for providing personalized service; consequently, its privacy preservation has attracted great attention. As the...
Wentao Hu, Hui Fang
2023-09-17T03:12:33Z
http://arxiv.org/abs/2309.11515v2
# Towards Differential Privacy in Sequential Recommendation: A Noisy Graph Neural Network Approach ###### Abstract With increasing frequency of high-profile privacy breaches in various online platforms, users are becoming more concerned about their privacy. The recommender system is the core component of online platf...
2310.12157
Desynchronization of large-scale neural networks by stabilizing unknown unstable incoherent equilibrium states
In large-scale neural networks, coherent limit cycle oscillations usually coexist with unstable incoherent equilibrium states, which are not observed experimentally. We implement a first-order dynamic controller to stabilize unknown equilibrium states and suppress coherent oscillations. The stabilization of incoherent ...
Tatjana Pyragiene, Kestutis Pyragas
2023-09-15T12:00:17Z
http://arxiv.org/abs/2310.12157v1
Desynchronization of large-scale neural networks by stabilizing unknown unstable incoherent equilibrium states ###### Abstract In large-scale neural networks, coherent limit cycle oscillations usually coexist with unstable incoherent equilibrium states, which are not observed experimentally. We implement a first-orde...
2305.19868
Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN
Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace weight multiplications in ANNs with additions, which are more energy-efficient and less computationally intensive. ...
Yangfan Hu, Qian Zheng, Xudong Jiang, Gang Pan
2023-05-31T14:04:41Z
http://arxiv.org/abs/2305.19868v1
# Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN ###### Abstract Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency over traditional artificial neural networks (ANNs) thanks to their event-driven representations. SNNs also replace weight multiplications in ANN...
2301.00169
Generative Graph Neural Networks for Link Prediction
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link prediction and have achieved state-of-the-art performance. Nevertheless, ex...
Xingping Xian, Tao Wu, Xiaoke Ma, Shaojie Qiao, Yabin Shao, Chao Wang, Lin Yuan, Yu Wu
2022-12-31T10:07:19Z
http://arxiv.org/abs/2301.00169v1
# Generative Graph Neural Networks for Link Prediction ###### Abstract Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis. With the recent advances in deep learning, graph neural networks have been used for link pr...
2305.20028
A Study of Bayesian Neural Network Surrogates for Bayesian Optimization
Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query. These objectives are typically represented by Gaussian process (GP) surrogate models which are easy to optimize and support exact inference. While standard GP surrogates have been well-established in Bay...
Yucen Lily Li, Tim G. J. Rudner, Andrew Gordon Wilson
2023-05-31T17:00:00Z
http://arxiv.org/abs/2305.20028v2
# A Study of Bayesian Neural Network Surrogates for Bayesian Optimization ###### Abstract Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query. These objectives are typically represented by Gaussian process (GP) surrogate models which are easy to optimize and support exact inferen...
2305.19468
Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
This paper presents an efficient hardware implementation of the recently proposed Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients and has the combined adaptation of weights and thr...
Ali Mehrabi, Yeshwanth Bethi, André van Schaik, Andrew Wabnitz, Saeed Afshar
2023-05-31T00:34:15Z
http://arxiv.org/abs/2305.19468v1
Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA ###### Abstract This paper presents an efficient hardware implementation of the recently proposed Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-e...
2310.20671
Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction
Quantum Recurrent Neural Networks (QRNNs) are robust candidates to model and predict future values in multivariate time series. However, the effective implementation of some QRNN models is limited by the need of mid-circuit measurements. Those increase the requirements for quantum hardware, which in the current NISQ er...
José Daniel Viqueira, Daniel Faílde, Mariamo M. Juane, Andrés Gómez, David Mera
2023-10-31T17:32:11Z
http://arxiv.org/abs/2310.20671v1
Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction ###### Abstract Quantum Recurrent Neural Networks (QRNNs) are robust candidates to model and predict future values in multivariate time series. However, the effective implementation of some QRNN models is limited by ...
2309.15018
Unidirectional brain-computer interface: Artificial neural network encoding natural images to fMRI response in the visual cortex
While significant advancements in artificial intelligence (AI) have catalyzed progress across various domains, its full potential in understanding visual perception remains underexplored. We propose an artificial neural network dubbed VISION, an acronym for "Visual Interface System for Imaging Output of Neural activity...
Ruixing Liang, Xiangyu Zhang, Qiong Li, Lai Wei, Hexin Liu, Avisha Kumar, Kelley M. Kempski Leadingham, Joshua Punnoose, Leibny Paola Garcia, Amir Manbachi
2023-09-26T15:38:26Z
http://arxiv.org/abs/2309.15018v1
Unidirectional Brain-Computer Interface: Artificial Neural Network Encoding Natural Images to fMRI Response in the Visual Cortex ###### Abstract While significant advancements in artificial intelligence (AI) have catalyzed progress across various domains, its full potential in understanding visual perception remains ...
2309.07193
A Robust SINDy Approach by Combining Neural Networks and an Integral Form
The discovery of governing equations from data has been an active field of research for decades. One widely used methodology for this purpose is sparse regression for nonlinear dynamics, known as SINDy. Despite several attempts, noisy and scarce data still pose a severe challenge to the success of the SINDy approach. I...
Ali Forootani, Pawan Goyal, Peter Benner
2023-09-13T10:50:04Z
http://arxiv.org/abs/2309.07193v1
# A Robust SINDy Approach by Combining Neural Networks and an Integral Form ###### Abstract The discovery of governing equations from data has been an active field of research for decades. One widely used methodology for this purpose is sparse regression for nonlinear dynamics, known as SINDy. Despite several attempt...
2309.14845
Graph Neural Network Based Method for Path Planning Problem
Sampling-based path planning is a widely used method in robotics, particularly in high-dimensional state space. Among the whole process of the path planning, collision detection is the most time-consuming operation. In this paper, we propose a learning-based path planning method that aims to reduce the number of collis...
Xingrong Diao, Wenzheng Chi, Jiankun Wang
2023-09-26T11:20:57Z
http://arxiv.org/abs/2309.14845v2
# Graph Neural Network Based Method for Path Planning Problem ###### Abstract Sampling-based path planning is a widely used method in robotics, particularly in high-dimensional state space. Among the whole process of path planning, collision detection is the most time-consuming operation. In this paper, we propose a ...
2308.16406
CktGNN: Circuit Graph Neural Network for Electronic Design Automation
The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications. In the past decades, intensive research efforts have mostly been paid to automate the transistor sizing with a gi...
Zehao Dong, Weidong Cao, Muhan Zhang, Dacheng Tao, Yixin Chen, Xuan Zhang
2023-08-31T02:20:25Z
http://arxiv.org/abs/2308.16406v2
# CktGNN: Circuit Graph Neural Network for Electronic Design Automation ###### Abstract The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications. In the past decades, in...
2309.17357
Module-wise Training of Neural Networks via the Minimizing Movement Scheme
Greedy layer-wise or module-wise training of neural networks is compelling in constrained and on-device settings where memory is limited, as it circumvents a number of problems of end-to-end back-propagation. However, it suffers from a stagnation problem, whereby early layers overfit and deeper layers stop increasing t...
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari
2023-09-29T16:03:25Z
http://arxiv.org/abs/2309.17357v3
# Module-wise Training of Neural Networks via the Minimizing Movement Scheme ###### Abstract Greedy layer-wise or module-wise training of neural networks is compelling in constrained and on-device settings where memory is limited, as it circumvents a number of problems of end-to-end back-propagation. However, it suff...
2309.16049
Neural Network Augmented Kalman Filter for Robust Acoustic Howling Suppression
Acoustic howling suppression (AHS) is a critical challenge in audio communication systems. In this paper, we propose a novel approach that leverages the power of neural networks (NN) to enhance the performance of traditional Kalman filter algorithms for AHS. Specifically, our method involves the integration of NN modul...
Yixuan Zhang, Hao Zhang, Meng Yu, Dong Yu
2023-09-27T22:07:00Z
http://arxiv.org/abs/2309.16049v1
# Neural Network Augmented Kalman Filter for Robust Acoustic Howling Suppression ###### Abstract Acoustic howling suppression (AHS) is a critical challenge in audio communication systems. In this paper, we propose a novel approach that leverages the power of neural networks (NN) to enhance the performance of traditio...
2309.04303
Fast Bayesian gravitational wave parameter estimation using convolutional neural networks
The determination of the physical parameters of gravitational wave events is a fundamental pillar in the analysis of the signals observed by the current ground-based interferometers. Typically, this is done using Bayesian inference approaches which, albeit very accurate, are very computationally expensive. We propose a...
M. Andrés-Carcasona, M. Martinez, Ll. M. Mir
2023-09-08T13:04:34Z
http://arxiv.org/abs/2309.04303v2
# Fast Bayesian gravitational wave parameter estimation using convolutional neural networks ###### Abstract The determination of the physical parameters of gravitational wave events is a fundamental pillar in the analysis of the signals observed by the current ground-based interferometers. Typically, this is done usi...
2309.04317
Actor critic learning algorithms for mean-field control with moment neural networks
We develop a new policy gradient and actor-critic algorithm for solving mean-field control problems within a continuous time reinforcement learning setting. Our approach leverages a gradient-based representation of the value function, employing parametrized randomized policies. The learning for both the actor (policy) ...
Huyên Pham, Xavier Warin
2023-09-08T13:29:57Z
http://arxiv.org/abs/2309.04317v1
# Actor critic learning algorithms for mean-field control with moment neural networks ###### Abstract We develop a new policy gradient and actor-critic algorithm for solving mean-field control problems within a continuous time reinforcement learning setting. Our approach leverages a gradient-based representation of the value function, employing...
2309.12212
SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices
Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic with extremely high energy efficiency. By employing the distinct polarity of current to denote logic `0' and `1', AQFP devices serve as excellent carriers for binary neural network (BNN) computations. Although recent research has made initial strides t...
Zhengang Li, Geng Yuan, Tomoharu Yamauchi, Zabihi Masoud, Yanyue Xie, Peiyan Dong, Xulong Tang, Nobuyuki Yoshikawa, Devesh Tiwari, Yanzhi Wang, Olivia Chen
2023-09-21T16:14:42Z
http://arxiv.org/abs/2309.12212v1
# SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices ###### Abstract Adiabatic Quantum-Flux-Parametron (AQFP) is a superconducting logic with extremely high energy efficiency. By employing the distinct polarity of current to denote logic '0' and '1', AQFP devices serve as exce...
2309.16048
Advancing Acoustic Howling Suppression through Recursive Training of Neural Networks
In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process. This framework integrates a neural network (NN) module into the closed-loop system during training with signals generated recursively on the fly to closel...
Hao Zhang, Yixuan Zhang, Meng Yu, Dong Yu
2023-09-27T22:02:53Z
http://arxiv.org/abs/2309.16048v1
# Advancing Acoustic Howling Suppression Through Recursive Training of Neural Networks ###### Abstract In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process. This framework integrates a neural network (NN) m...
2309.15555
Low Latency of object detection for spiking neural network
Spiking Neural Networks, as a third-generation neural network, are well-suited for edge AI applications due to their binary spike nature. However, when it comes to complex tasks like object detection, SNNs often require a substantial number of time steps to achieve high performance. This limitation significantly hamper...
Nemin Qiu, Chuang Zhu
2023-09-27T10:26:19Z
http://arxiv.org/abs/2309.15555v1
# Low Latency Spiking Neural Network for Object Detection ###### Abstract Spiking Neural Networks (SNNs), as a third-generation neural network, are well-suited for edge AI applications due to their binary spike nature. However, when it comes to complex tasks like object detection, SNNs often require a substantial num...
2308.16375
A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and the improvement in practical applications. However, many of these models prioritize high utility performance, such as accuracy, with a lack of privacy consideration, which is a major concern in mode...
Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr
2023-08-31T00:31:08Z
http://arxiv.org/abs/2308.16375v3
# A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications ###### Abstract Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and the improvement in practical applications. However, many of these models prioritize high uti...
2309.11928
Video Scene Location Recognition with Neural Networks
This paper provides an insight into the possibility of scene recognition from a video sequence with a small set of repeated shooting locations (such as in television series) using artificial neural networks. The basic idea of the presented approach is to select a set of frames from each scene, transform them by a pre-t...
Lukáš Korel, Petr Pulc, Jiří Tumpach, Martin Holeňa
2023-09-21T09:42:39Z
http://arxiv.org/abs/2309.11928v1
# Video Scene Location Recognition with Neural Networks ###### Abstract This paper provides an insight into the possibility of scene recognition from a video sequence with a small set of repeated shooting locations (such as in television series) using artificial neural networks. The basic idea of the presented approa...
2303.00498
Adaptive Hybrid Spatial-Temporal Graph Neural Network for Cellular Traffic Prediction
Cellular traffic prediction is an indispensable part for intelligent telecommunication networks. Nevertheless, due to the frequent user mobility and complex network scheduling mechanisms, cellular traffic often inherits complicated spatial-temporal patterns, making the prediction incredibly challenging. Although recent...
Xing Wang, Kexin Yang, Zhendong Wang, Junlan Feng, Lin Zhu, Juan Zhao, Chao Deng
2023-02-28T06:46:50Z
http://arxiv.org/abs/2303.00498v1
# Adaptive Hybrid Spatial-Temporal Graph Neural Network for Cellular Traffic Prediction ###### Abstract Cellular traffic prediction is an indispensable part for intelligent telecommunication networks. Nevertheless, due to the frequent user mobility and complex network scheduling mechanisms, cellular traffic often inh...
2309.07390
Unleashing the Power of Depth and Pose Estimation Neural Networks by Designing Compatible Endoscopic Images
Deep learning models have witnessed depth and pose estimation framework on unannotated datasets as an effective pathway to succeed in endoscopic navigation. Most current techniques are dedicated to developing more advanced neural networks to improve the accuracy. However, existing methods ignore the special properties o...
Junyang Wu, Yun Gu
2023-09-14T02:19:38Z
http://arxiv.org/abs/2309.07390v1
Unleashing the Power of Depth and Pose Estimation Neural Networks by Designing Compatible Endoscopic Images ###### Abstract Deep learning models have witnessed depth and pose estimation framework on unannotated datasets as an effective pathway to succeed in endoscopic navigation. Most current techniques are dedicated ...
2309.09550
Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks
The human brain can self-organize rich and diverse sparse neural pathways to incrementally master hundreds of cognitive tasks. However, most existing continual learning algorithms for deep artificial and spiking neural networks are unable to adequately auto-regulate the limited resources in the network, which leads to ...
Bing Han, Feifei Zhao, Wenxuan Pan, Zhaoya Zhao, Xianqi Li, Qingqun Kong, Yi Zeng
2023-09-18T07:56:40Z
http://arxiv.org/abs/2309.09550v2
# Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks ###### Abstract The human brain can self-organize rich and diverse sparse neural pathways to incrementally master hundreds of cognitive tasks. However, most existing continual learning algorithms for deep artificial and s...
2309.15179
ParamANN: A Neural Network to Estimate Cosmological Parameters for $\Lambda$CDM Universe Using Hubble Measurements
In this article, we employ a machine learning (ML) approach for the estimations of four fundamental parameters, namely, the Hubble constant ($H_0$), matter ($\Omega_{0m}$), curvature ($\Omega_{0k}$) and vacuum ($\Omega_{0\Lambda}$) densities of non-flat $\Lambda$CDM model. We use $31$ Hubble parameter values measured b...
Srikanta Pal, Rajib Saha
2023-09-26T18:25:57Z
http://arxiv.org/abs/2309.15179v3
ParamANN: A Neural Network to Estimate Cosmological Parameters for \(\Lambda\)CDM Universe using Hubble Measurements ###### Abstract In this article, we employ a machine learning (ML) approach for the estimations of four fundamental parameters, namely, the Hubble constant (\(H_{0}\)), matter (\(\Omega_{0m}\)), curvat...
2310.12985
Enabling Energy-Efficient Object Detection with Surrogate Gradient Descent in Spiking Neural Networks
Spiking Neural Networks (SNNs) are a biologically plausible neural network model with significant advantages in both event-driven processing and spatio-temporal information processing, rendering SNNs an appealing choice for energy-efficient object detection. However, the non-differentiability of the biological neuronal ...
Jilong Luo, Shanlin Xiao, Yinsheng Chen, Zhiyi Yu
2023-09-07T15:48:00Z
http://arxiv.org/abs/2310.12985v1
Enabling Energy-Efficient Object Detection with Surrogate Gradient Descent in Spiking Neural Networks ###### Abstract Spiking Neural Networks (SNNs) are a biologically plausible neural network model with significant advantages in both event-driven processing and spatio-temporal information processing, rendering SNNs ...
2309.13534
Comparison of Random Forest and Neural Network Framework for Prediction of Fatigue Crack Growth Rate in Nickel Superalloys
The rate of fatigue crack growth in Nickel superalloys is a critical factor of safety in the aerospace industry. A machine learning approach is chosen to predict the fatigue crack growth rate as a function of the material composition, material properties and environmental conditions. Random forests and neural network f...
Raghunandan Pratoori
2023-09-24T03:08:52Z
http://arxiv.org/abs/2309.13534v1
Comparison of Random Forest and Neural Network Framework for Prediction of Fatigue Crack Growth Rate in Nickel Superalloys ###### Abstract The rate of fatigue crack growth in Nickel superalloys is a critical factor of safety in the aerospace industry. A machine learning approach is chosen to predict the fatigue crack...
2309.16114
Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration
Robots with increasing autonomy progress our space exploration capabilities, particularly for in-situ exploration and sampling to stand in for human explorers. Currently, humans drive robots to meet scientific objectives, but depending on the robot's location, the exchange of information and driving commands between th...
Sapphira Akins, Frances Zhu
2023-09-28T02:45:14Z
http://arxiv.org/abs/2309.16114v1
Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration ###### Abstract Robots with increasing autonomy progress our space exploration capabilities, particularly for in-situ exploration and sampling to stand in for human explorers. Currentl...
2309.13736
Geometry of Linear Neural Networks: Equivariance and Invariance under Permutation Groups
The set of functions parameterized by a linear fully-connected neural network is a determinantal variety. We investigate the subvariety of functions that are equivariant or invariant under the action of a permutation group. Examples of such group actions are translations or $90^\circ$ rotations on images. We describe s...
Kathlén Kohn, Anna-Laura Sattelberger, Vahid Shahverdi
2023-09-24T19:40:15Z
http://arxiv.org/abs/2309.13736v2
# Geometry of Linear Neural Networks: Equivariance and Invariance under Permutation Groups ###### Abstract The set of functions parameterized by a linear fully-connected neural network is a determinantal variety. We investigate the subvariety of functions that are equivariant or invariant under the action of a permutation group. Examples of such group actions are trans...
2301.00181
Smooth Mathematical Function from Compact Neural Networks
This paper addresses smooth function approximation by neural networks (NNs). Mathematical or physical functions can be replaced by NN models through regression. In this study, we obtain NNs that generate highly accurate and highly smooth functions, comprising only a few weight parameters, by discussing a few t...
I. K. Hong
2022-12-31T11:33:24Z
http://arxiv.org/abs/2301.00181v1
# Smooth Mathematical Function from Compact Neural Networks ###### Abstract This paper addresses smooth function approximation by neural networks (NNs). Mathematical or physical functions can be replaced by NN models through regression. In this study, we obtain NNs that generate highly accurate and highly smooth functio...
2309.13907
HiGNN-TTS: Hierarchical Prosody Modeling with Graph Neural Networks for Expressive Long-form TTS
Recent advances in text-to-speech, particularly those based on Graph Neural Networks (GNNs), have significantly improved the expressiveness of short-form synthetic speech. However, generating human-parity long-form speech with high dynamic prosodic variations is still challenging. To address this problem, we expand the...
Dake Guo, Xinfa Zhu, Liumeng Xue, Tao Li, Yuanjun Lv, Yuepeng Jiang, Lei Xie
2023-09-25T07:07:02Z
http://arxiv.org/abs/2309.13907v2
# HiGNN-TTS: Hierarchical Prosody Modeling with Graph Neural Networks for Expressive Long-form TTS ###### Abstract Recent advances in text-to-speech, particularly those based on Graph Neural Networks (GNNs), have significantly improved the expressiveness of short-form synthetic speech. However, generating human-parit...
2309.11651
Drift Control of High-Dimensional RBM: A Computational Method Based on Neural Networks
Motivated by applications in queueing theory, we consider a stochastic control problem whose state space is the $d$-dimensional positive orthant. The controlled process $Z$ evolves as a reflected Brownian motion whose covariance matrix is exogenously specified, as are its directions of reflection from the orthant's bou...
Baris Ata, J. Michael Harrison, Nian Si
2023-09-20T21:32:58Z
http://arxiv.org/abs/2309.11651v4
# Drift Control of High-Dimensional RBM: A Computational Method Based on Neural Networks ###### Abstract Motivated by applications in queueing theory, we consider a stochastic control problem whose state space is the \(d\)-dimensional positive orthant. The controlled process \(Z\) evolves as a reflected Brownian moti...
2306.17670
Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings
Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer to the time needed for one spike to travel from one neuron to another. These delays matter because they influence...
Ilyass Hammouamri, Ismail Khalfaoui-Hassani, Timothée Masquelier
2023-06-30T14:01:53Z
http://arxiv.org/abs/2306.17670v3
# Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings ###### Abstract Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer...
2309.10759
A Blueprint for Precise and Fault-Tolerant Analog Neural Networks
Analog computing has reemerged as a promising avenue for accelerating deep neural networks (DNNs) due to its potential to overcome the energy efficiency and scalability challenges posed by traditional digital architectures. However, achieving high precision and DNN accuracy using such technologies is challenging, as hi...
Cansu Demirkiran, Lakshmi Nair, Darius Bunandar, Ajay Joshi
2023-09-19T17:00:34Z
http://arxiv.org/abs/2309.10759v1
# A Blueprint for Precise and Fault-Tolerant Analog Neural Networks ###### Abstract Analog computing has reemerged as a promising avenue for accelerating deep neural networks (DNNs) due to its potential to overcome the energy efficiency and scalability challenges posed by traditional digital architectures. However, a...
2309.16314
A Primer on Bayesian Neural Networks: Review and Debates
Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks. To address these challenges, Bayesian neural networks (BN...
Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, Vincent Fortuin
2023-09-28T10:09:15Z
http://arxiv.org/abs/2309.16314v1
# A Primer on Bayesian Neural Networks: Review and Debates ###### Abstract Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to a...
2309.06661
Sound field decomposition based on two-stage neural networks
A method for sound field decomposition based on neural networks is proposed. The method comprises two stages: a sound field separation stage and a single-source localization stage. In the first stage, the sound pressure at microphones synthesized by multiple sources is separated into one excited by each sound source. I...
Ryo Matsuda, Makoto Otani
2023-09-13T01:32:46Z
http://arxiv.org/abs/2309.06661v1
# Sound field decomposition based on two-stage neural networks ###### Abstract A method for sound field decomposition based on neural networks is proposed. The method comprises two stages: a sound field separation stage and a single-source localization stage. In the first stage, the sound pressure at microphones synt...
2307.16727
Multi Agent Navigation in Unconstrained Environments using a Centralized Attention based Graphical Neural Network Controller
In this work, we propose a learning based neural model that provides both the longitudinal and lateral control commands to simultaneously navigate multiple vehicles. The goal is to ensure that each vehicle reaches a desired target state without colliding with any other vehicle or obstacle in an unconstrained environmen...
Yining Ma, Qadeer Khan, Daniel Cremers
2023-07-31T14:48:45Z
http://arxiv.org/abs/2307.16727v2
Multi Agent Navigation in Unconstrained Environments using a Centralized Attention based Graphical Neural Network Controller ###### Abstract In this work, we propose a learning based neural model that provides both the longitudinal and lateral control commands to simultaneously navigate multiple vehicles. The goal is...
2309.05208
Quaternion MLP Neural Networks Based on the Maximum Correntropy Criterion
We propose a gradient ascent algorithm for quaternion multilayer perceptron (MLP) networks based on the cost function of the maximum correntropy criterion (MCC). In the algorithm, we use the split quaternion activation function based on the generalized Hamilton-real quaternion gradient. By introducing a new quaternion ...
Gang Wang, Xinyu Tian, Zuxuan Zhang
2023-09-11T02:56:55Z
http://arxiv.org/abs/2309.05208v2
# Quaternion MLP Neural Networks Based on the Maximum Correntropy Criterion ###### Abstract We propose a gradient ascent algorithm for quaternion multilayer perceptron (MLP) networks based on the cost function of the maximum correntropy criterion (MCC). In the algorithm, we use the split quaternion activation functio...
2309.10605
An Active Noise Control System Based on Soundfield Interpolation Using a Physics-informed Neural Network
Conventional multiple-point active noise control (ANC) systems require placing error microphones within the region of interest (ROI), inconveniencing users. This paper designs a feasible monitoring microphone arrangement placed outside the ROI, providing a user with more freedom of movement. The soundfield within the R...
Yile Angela Zhang, Fei Ma, Thushara Abhayapala, Prasanga Samarasinghe, Amy Bastine
2023-09-19T13:20:47Z
http://arxiv.org/abs/2309.10605v1
An Active Noise Control System Based on Soundfield Interpolation Using a Physics-Informed Neural Network ###### Abstract Conventional multiple-point active noise control (ANC) systems require placing error microphones within the region of interest (ROI), inconveniencing users. This paper designs a feasible monitoring...
2309.09195
SplitEE: Early Exit in Deep Neural Networks with Split Computing
Deep Neural Networks (DNNs) have drawn attention because of their outstanding performance on various tasks. However, deploying full-fledged DNNs in resource-constrained devices (edge, mobile, IoT) is difficult due to their large size. To overcome the issue, various approaches are considered, like offloading part of the...
Divya J. Bajpai, Vivek K. Trivedi, Sohan L. Yadav, Manjesh K. Hanawal
2023-09-17T07:48:22Z
http://arxiv.org/abs/2309.09195v1
# SplitEE: Early Exit in Deep Neural Networks with Split Computing ###### Abstract. Deep Neural Networks (DNNs) have drawn attention because of their outstanding performance on various tasks. However, deploying full-fledged DNNs in resource-constrained devices (edge, mobile, IoT) is difficult due to their large size....
2309.13132
Understanding Calibration of Deep Neural Networks for Medical Image Classification
In the field of medical image analysis, achieving high accuracy is not enough; ensuring well-calibrated predictions is also crucial. Confidence scores of a deep neural network play a pivotal role in explainability by providing insights into the model's certainty, identifying cases that require attention, and establishi...
Abhishek Singh Sambyal, Usma Niyaz, Narayanan C. Krishnan, Deepti R. Bathula
2023-09-22T18:36:07Z
http://arxiv.org/abs/2309.13132v2
# Understanding Calibration of Deep Neural Networks for Medical Image Classification ###### Abstract **Background and Objective -** In the field of medical image analysis, achieving high accuracy is not enough; ensuring well-calibrated predictions is also crucial. Confidence scores of a deep neural network play a piv...
2301.13710
On the Initialisation of Wide Low-Rank Feedforward Neural Networks
The edge-of-chaos dynamics of wide randomly initialized low-rank feedforward networks are analyzed. Formulae for the optimal weight and bias variances are extended from the full-rank to low-rank setting and are shown to follow from multiplicative scaling. The principle second order effect, the variance of the input-out...
Thiziri Nait Saada, Jared Tanner
2023-01-31T15:40:50Z
http://arxiv.org/abs/2301.13710v1
# On the Initialisation of Wide Low-Rank Feedforward Neural Networks ###### Abstract The edge-of-chaos dynamics of wide randomly initialized low-rank feedforward networks are analyzed. Formulae for the optimal weight and bias variances are extended from the full-rank to low-rank setting and are shown to follow from m...
2306.00091
A General Framework for Equivariant Neural Networks on Reductive Lie Groups
Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or the unitary groups, play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging. In this paper, we present a general Equivariant Neu...
Ilyes Batatia, Mario Geiger, Jose Munoz, Tess Smidt, Lior Silberman, Christoph Ortner
2023-05-31T18:09:37Z
http://arxiv.org/abs/2306.00091v1
# A General Framework for Equivariant Neural Networks on Reductive Lie Groups ###### Abstract Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or the unitary groups, play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molec...
2309.12204
PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements
We present a neural network for mitigating biased errors in pseudoranges to improve localization performance with data collected from mobile phones. A satellite-wise Multilayer Perceptron (MLP) is designed to regress the pseudorange bias correction from six satellite, receiver, context-related features derived from And...
Xu Weng, Keck Voon Ling, Haochen Liu
2023-09-16T10:43:59Z
http://arxiv.org/abs/2309.12204v2
PrNet: A Neural Network for Correcting Pseudoranges to Improve Positioning with Android Raw GNSS Measurements ###### Abstract We present a neural network for mitigating pseudorange bias to improve localization performance with data collected from Android smartphones. We represent pseudorange bias using a pragmatic sat...
2309.09142
Performance of Graph Neural Networks for Point Cloud Applications
Graph Neural Networks (GNNs) have gained significant momentum recently due to their capability to learn on unstructured graph data. Dynamic GNNs (DGNNs) are the current state-of-the-art for point cloud applications; such applications (viz. autonomous driving) require real-time processing at the edge with tight latency ...
Dhruv Parikh, Bingyi Zhang, Rajgopal Kannan, Viktor Prasanna, Carl Busart
2023-09-17T03:05:13Z
http://arxiv.org/abs/2309.09142v1
# Performance of Graph Neural Networks for Point Cloud Applications ###### Abstract Graph Neural Networks (GNNs) have gained significant momentum recently due to their capability to learn on unstructured graph data. Dynamic GNNs (DGNNs) are the current state-of-the-art for point cloud applications; such applications ...
2309.12211
Physics-informed State-space Neural Networks for Transport Phenomena
This work introduces Physics-informed State-space neural network Models (PSMs), a novel solution to achieving real-time optimization, flexibility, and fault tolerance in autonomous systems, particularly in transport-dominated systems such as chemical, biomedical, and power plants. Traditional data-driven methods fall s...
Akshay J. Dave, Richard B. Vilim
2023-09-21T16:14:36Z
http://arxiv.org/abs/2309.12211v2
# Physics-informed State-space Neural Networks for Transport Phenomena ###### Abstract This work introduces Physics-informed State-space neural network Models (PSMs), a novel solution to achieving real-time optimization, flexibility, and fault tolerance in autonomous systems, particularly in transport-dominated syste...
2309.08275
User Power Measurement Based IRS Channel Estimation via Single-Layer Neural Network
One main challenge for implementing intelligent reflecting surface (IRS) aided communications lies in the difficulty to obtain the channel knowledge for the base station (BS)-IRS-user cascaded links, which is needed to design high-performance IRS reflection in practice. Traditional methods for estimating IRS cascaded c...
He Sun, Weidong Mei, Lipeng Zhu, Rui Zhang
2023-09-15T09:36:22Z
http://arxiv.org/abs/2309.08275v1
# User Power Measurement Based IRS Channel Estimation via Single-Layer Neural Network ###### Abstract One main challenge for implementing intelligent reflecting surface (IRS) aided communications lies in the difficulty to obtain the channel knowledge for the base station (BS)-IRS-user cascaded links, which is needed ...
2309.13866
On Calibration of Modern Quantized Efficient Neural Networks
We explore calibration properties at various precisions for three architectures: ShuffleNetv2, GhostNet-VGG, and MobileOne; and two datasets: CIFAR-100 and PathMNIST. The quality of calibration is observed to track the quantization quality; it is well-documented that performance worsens with lower precision, and we obs...
Joey Kuang, Alexander Wong
2023-09-25T04:30:18Z
http://arxiv.org/abs/2309.13866v2
# On Calibration of Modern Quantized Efficient Neural Networks ###### Abstract We explore calibration properties at various precisions for three architectures: ShuffleNetv2, GhostNet-VGG, and MobileOne; and two datasets: CIFAR-100 and PathMNIST. The quality of calibration is observed to track the quantization quality...
2303.17939
LyAl-Net: A high-efficiency Lyman-$α$ forest simulation with a neural network
The inference of cosmological quantities requires accurate and large hydrodynamical cosmological simulations. Unfortunately, their computational time can take millions of CPU hours for a modest coverage in cosmological scales ($\approx (100\,h^{-1}\,\text{Mpc})^3$). The possibility to generate large quantities of moc...
Chotipan Boonkongkird, Guilhem Lavaux, Sebastien Peirani, Yohan Dubois, Natalia Porqueres, Eleni Tsaprazi
2023-03-31T10:06:59Z
http://arxiv.org/abs/2303.17939v1
# LyAl-Net: A high-efficiency Lyman-\(\alpha\) forest simulation with a neural network ###### Abstract Context: The inference of cosmological quantities requires accurate and large hydrodynamical cosmological simulations. Unfortunately, their computational time can take millions of CPU hours for a modest coverage in c...
2309.15378
Adversarial Object Rearrangement in Constrained Environments with Heterogeneous Graph Neural Networks
Adversarial object rearrangement in the real world (e.g., previously unseen or oversized items in kitchens and stores) could benefit from understanding task scenes, which inherently entail heterogeneous components such as current objects, goal objects, and environmental constraints. The semantic relationships among the...
Xibai Lou, Houjian Yu, Ross Worobel, Yang Yang, Changhyun Choi
2023-09-27T03:15:45Z
http://arxiv.org/abs/2309.15378v1
Adversarial Object Rearrangement in Constrained Environments with Heterogeneous Graph Neural Networks ###### Abstract Adversarial object rearrangement in the real world (e.g., previously unseen or oversized items in kitchens and stores) could benefit from understanding task scenes, which inherently entail heterogeneo...
2309.12417
Advances in developing deep neural networks for finding primary vertices in proton-proton collisions at the LHC
We are studying the use of deep neural networks (DNNs) to identify and locate primary vertices (PVs) in proton-proton collisions at the LHC. Earlier work focused on finding primary vertices in simulated LHCb data using a hybrid approach that started with kernel density estimators (KDEs) derived heuristically from the e...
Simon Akar, Mohamed Elashri, Rocky Bala Garg, Elliott Kauffman, Michael Peters, Henry Schreiner, Michael Sokoloff, William Tepe, Lauren Tompkins
2023-09-21T18:34:00Z
http://arxiv.org/abs/2309.12417v2
Advances in developing deep neural networks for finding primary vertices in proton-proton collisions at the LHC ###### Abstract We are studying the use of deep neural networks (DNNs) to identify and locate primary vertices (PVs) in proton-proton collisions at the LHC. Earlier work focused on finding primary vertices ...
2301.13659
Spyker: High-performance Library for Spiking Deep Neural Networks
Spiking neural networks (SNNs) have been recently brought to light due to their promising capabilities. SNNs simulate the brain with higher biological plausibility compared to previous generations of neural networks. Learning with fewer samples and consuming less power are among the key features of these networks. Howe...
Shahriar Rezghi Shirsavar, Mohammad-Reza A. Dehaqani
2023-01-31T14:25:03Z
http://arxiv.org/abs/2301.13659v1
# Spyker: High-performance Library for Spiking Deep Neural Networks ###### Abstract Spiking neural networks (SNNs) have been recently brought to light due to their promising capabilities. SNNs simulate the brain with higher biological plausibility compared to previous generations of neural networks. Learning with few...