Dataset schema:
- id: string, length 9–16
- title: string, length 4–278
- abstract: string, length 3–4.08k
- 18 boolean label columns (2 classes each), in this order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64, range 0–541k
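The 18 boolean columns together form a multi-hot category label. As a minimal sketch of how a consumer might decode them, assuming each row is a plain Python dict keyed by the column names above (the `decode_labels` helper and the example row are illustrative, not part of any dataset tooling):

```python
# Column order of the 18 binary label fields in this dataset's schema.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(row: dict) -> list[str]:
    """Return the names of the label columns set to True in a row."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# Example: a row tagged only with cs.CL, like the first record below.
row = {col: False for col in LABEL_COLUMNS}
row.update({"id": "2009.13972", "cs.CL": True})
print(decode_labels(row))  # → ['cs.CL']
```

Because the list comprehension preserves `LABEL_COLUMNS` order, the decoded labels come out in a stable, reproducible order regardless of dict insertion order.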
id: 2009.13972
title: Neural Topic Modeling by Incorporating Document Relationship Graph
Graph Neural Networks (GNNs) that capture the relationships between graph nodes via message passing have been a hot research direction in the natural language processing community. In this paper, we propose Graph Topic Model (GTM), a GNN based neural topic model that represents a corpus as a document relationship graph. Documents and words in the corpus become nodes in the graph and are connected based on document-word co-occurrences. By introducing the graph structure, the relationships between documents are established through their shared words and thus the topical representation of a document is enriched by aggregating information from its neighboring nodes using graph convolution. Extensive experiments on three datasets were conducted and the results demonstrate the effectiveness of the proposed approach.
labels: cs.CL
__index_level_0__: 197,901
id: 2005.09946
title: GM-CTSC at SemEval-2020 Task 1: Gaussian Mixtures Cross Temporal Similarity Clustering
This paper describes the system proposed for SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection. We focused our approach on the detection problem. Given the semantics of words captured by temporal word embeddings in different time periods, we investigate the use of unsupervised methods to detect when the target word has gained or lost senses. To this end, we defined a new algorithm based on Gaussian Mixture Models to cluster the target similarities computed over the two periods. We compared the proposed approach with a number of similarity-based thresholds. We found that, although the performance of the detection methods varies across the word embedding algorithms, the combination of Gaussian Mixture with Temporal Referencing resulted in our best system.
labels: cs.LG, cs.CL
__index_level_0__: 178,043
id: 1911.03540
title: Cross-subject Decoding of Eye Movement Goals from Local Field Potentials
Objective. We consider the cross-subject decoding problem from local field potential (LFP) signals, where training data collected from the prefrontal cortex (PFC) of a source subject is used to decode intended motor actions in a destination subject. Approach. We propose a novel supervised transfer learning technique, referred to as data centering, which is used to adapt the feature space of the source to the feature space of the destination. The key ingredients of data centering are the transfer functions used to model the deterministic component of the relationship between the source and destination feature spaces. We propose an efficient data-driven estimation approach for linear transfer functions that uses the first and second order moments of the class-conditional distributions. Main result. We apply our data centering technique with linear transfer functions for cross-subject decoding of eye movement intentions in an experiment where two macaque monkeys perform memory-guided visual saccades to one of eight target locations. The results show peak cross-subject decoding performance of $80\%$, which marks a substantial improvement over a random choice decoder. In addition to this, data centering also outperforms standard sampling-based methods in setups with imbalanced training data. Significance. The analyses presented herein demonstrate that the proposed data centering is a viable novel technique for reliable LFP-based cross-subject brain-computer interfacing and neural prostheses.
labels: cs.NE
__index_level_0__: 152,672
id: 1802.08077
title: Discriminative Label Consistent Domain Adaptation
Domain adaptation (DA) is a form of transfer learning that aims to learn an effective predictor on target data from source data despite a data distribution mismatch between source and target. We present in this paper a novel unsupervised DA method for cross-domain visual recognition which simultaneously optimizes the three terms of a theoretically established error bound. Specifically, the proposed DA method iteratively searches a latent shared feature subspace where not only the divergence of data distributions between the source domain and the target domain is decreased, as most state-of-the-art DA methods do, but also the inter-class distances are increased to facilitate discriminative learning. Moreover, the proposed DA method sparsely regresses class labels from the features achieved in the shared subspace while minimizing the prediction errors on the source data and ensuring label consistency between source and target. Data outliers are also accounted for to further avoid negative knowledge transfer. Comprehensive experiments and in-depth analysis verify the effectiveness of the proposed DA method, which consistently outperforms the state-of-the-art DA methods on standard DA benchmarks, i.e., 12 cross-domain image classification tasks.
labels: cs.CV
__index_level_0__: 91,029
id: 2310.18964
title: LLMs and Finetuning: Benchmarking cross-domain performance for hate speech detection
In the evolving landscape of online communication, hate speech detection remains a formidable challenge, further compounded by the diversity of digital platforms. This study investigates the effectiveness and adaptability of pre-trained and fine-tuned Large Language Models (LLMs) in identifying hate speech, addressing three central questions: (1) To what extent does model performance depend on the fine-tuning and training parameters? (2) To what extent do models generalize to cross-domain hate speech detection? (3) What are the specific features of the datasets or models that influence the generalization potential? The experiment shows that LLMs offer a huge advantage over the state-of-the-art even without pretraining. Ordinary least squares analyses suggest that the advantage of training with fine-grained hate speech labels is washed away with the increase in dataset size. We conclude with a vision for the future of hate speech detection, emphasizing cross-domain generalizability and appropriate benchmarking practices.
labels: cs.CL
__index_level_0__: 403,791
id: 2404.09222
title: Design and Fabrication of String-driven Origami Robots
Origami designs and structures have been widely used in many fields, such as morphing structures, robotics, and metamaterials. However, the design and fabrication of origami structures rely on human experience and skill, which is both time-consuming and labor-intensive. In this paper, we present a rapid design and fabrication method for string-driven origami structures and robots. We developed origami design software to generate desired crease patterns based on analytical models and Evolution Strategies (ES). Additionally, the software can automatically produce 3D models of origami designs. We then used a dual-material 3D printer to fabricate those wrapping-based origami structures with the required mechanical properties. We utilized Twisted String Actuators (TSAs) to fold the target 3D structures from flat plates. To demonstrate the capability of these techniques, we built and tested an origami crawling robot and an origami robotic arm using 3D-printed origami structures driven by TSAs.
labels: cs.RO
__index_level_0__: 446,584
id: 2306.15905
title: Dimension Independent Mixup for Hard Negative Sample in Collaborative Filtering
Collaborative filtering (CF) is a widely employed technique that predicts user preferences based on past interactions. Negative sampling plays a vital role in training CF-based models with implicit feedback. In this paper, we propose a novel perspective based on the sampling area to revisit existing sampling methods. We point out that current sampling methods mainly focus on Point-wise or Line-wise sampling, lacking flexibility and leaving a significant portion of the hard sampling area unexplored. To address this limitation, we propose Dimension Independent Mixup for Hard Negative Sampling (DINS), which is the first Area-wise sampling method for training CF-based models. DINS comprises three modules: Hard Boundary Definition, Dimension Independent Mixup, and Multi-hop Pooling. Experiments with real-world datasets on both matrix factorization and graph-based models demonstrate that DINS outperforms other negative sampling methods, establishing its effectiveness and superiority. Our work contributes a new perspective, introduces Area-wise sampling, and presents DINS as a novel approach that achieves state-of-the-art performance for negative sampling. Our implementations are available in PyTorch.
labels: cs.IR, cs.LG
__index_level_0__: 376,196
id: 2209.10310
title: Seeking Diverse Reasoning Logic: Controlled Equation Expression Generation for Solving Math Word Problems
To solve Math Word Problems, human students leverage diverse reasoning logic that reaches different possible equation solutions. However, the mainstream sequence-to-sequence approach of automatic solvers aims to decode a fixed solution equation supervised by human annotation. In this paper, we propose a controlled equation generation solver that leverages a set of control codes to guide the model to consider certain reasoning logic and decode the corresponding equation expressions transformed from the human reference. The empirical results suggest that our method universally improves the performance on single-unknown (Math23K) and multiple-unknown (DRAW1K, HMWP) benchmarks, with substantial improvements of up to 13.2% accuracy on the challenging multiple-unknown datasets.
labels: cs.CL
__index_level_0__: 318,819
id: 2206.03583
title: Contributor-Aware Defenses Against Adversarial Backdoor Attacks
Deep neural networks for image classification are well-known to be vulnerable to adversarial attacks. One such attack that has garnered recent attention is the adversarial backdoor attack, which has demonstrated the capability to perform targeted misclassification of specific examples. In particular, backdoor attacks attempt to force a model to learn spurious relations between backdoor trigger patterns and false labels. In response to this threat, numerous defensive measures have been proposed; however, defenses against backdoor attacks focus on backdoor pattern detection, which may be unreliable against novel or unexpected types of backdoor pattern designs. We introduce a novel re-contextualization of the adversarial setting, where the presence of an adversary implicitly admits the existence of multiple database contributors. Then, under the mild assumption of contributor awareness, it becomes possible to exploit this knowledge to defend against backdoor attacks by destroying the false label associations. We propose a contributor-aware universal defensive framework for learning in the presence of multiple, potentially adversarial data sources that utilizes semi-supervised ensembles and learning from crowds to filter the false labels produced by adversarial triggers. Importantly, this defensive strategy is agnostic to backdoor pattern design, as it functions without needing -- or even attempting -- to perform either adversary identification or backdoor pattern detection during either training or inference. Our empirical studies demonstrate the robustness of the proposed framework against adversarial backdoor attacks from multiple simultaneous adversaries.
labels: cs.AI, cs.LG, cs.CV, cs.CR
__index_level_0__: 301,330
id: 2402.05790
title: Underwater MEMS Gyrocompassing: A Virtual Testing Ground
In underwater navigation, accurate heading information is crucial for accurately and continuously tracking trajectories, especially during extended missions beneath the waves. In order to determine the initial heading, a gyrocompassing procedure must be employed. As unmanned underwater vehicles (UUV) are susceptible to ocean currents and other disturbances, the model-based gyrocompassing procedure may experience degraded performance. To cope with such situations, this paper introduces a dedicated learning framework aimed at mitigating environmental effects and offering precise underwater gyrocompassing. Through the analysis of the dynamic UUV signature obtained from inertial measurements, our proposed framework learns to refine disturbed signals, enabling a focused examination of the earth's rotation rate vector. Leveraging recent machine learning advancements, empirical simulations assess the framework's adaptability to challenging underwater conditions. Ultimately, its contribution lies in providing a resilient gyrocompassing solution for UUVs.
labels: cs.SY
__index_level_0__: 428,002
id: 1809.03258
title: Using phase instead of optical flow for action recognition
Currently, the most common motion representation for action recognition is optical flow. Optical flow is based on particle tracking, which adheres to a Lagrangian perspective on dynamics. In contrast to the Lagrangian perspective, the Eulerian model of dynamics does not track, but describes local changes. For video, an Eulerian phase-based motion representation, using complex steerable filters, has recently been successfully employed for motion magnification and video frame interpolation. Inspired by these previous works, here we propose learning Eulerian motion representations in a deep architecture for action recognition. We learn filters in the complex domain in an end-to-end manner. We design these complex filters to resemble complex Gabor filters, typically employed for phase-information extraction. We propose a phase-information extraction module, based on these complex filters, that can be used in any network architecture for extracting Eulerian representations. We experimentally analyze the added value of Eulerian motion representations, as extracted by our proposed phase extraction module, and compare with existing motion representations based on optical flow, on the UCF101 dataset.
labels: cs.CV
__index_level_0__: 107,276
id: 2003.01670
title: EXPLAIN-IT: Towards Explainable AI for Unsupervised Network Traffic Analysis
The application of unsupervised learning approaches, and in particular of clustering techniques, represents a powerful exploration means for the analysis of network measurements. Discovering underlying data characteristics, grouping similar measurements together, and identifying eventual patterns of interest are some of the applications which can be tackled through clustering. Being unsupervised, clustering does not always provide precise and clear insight into the produced output, especially when the input data structure and distribution are complex and difficult to grasp. In this paper we introduce EXPLAIN-IT, a methodology which deals with unlabeled data, creates meaningful clusters, and suggests an explanation of the clustering results to the end-user. EXPLAIN-IT relies on a novel explainable Artificial Intelligence (AI) approach, which makes it possible to understand the reasons leading to a particular decision of a supervised learning-based model, additionally extending its application to the unsupervised learning domain. We apply EXPLAIN-IT to the problem of YouTube video quality classification under encrypted traffic scenarios, showing promising results.
labels: cs.AI, Other
__index_level_0__: 166,722
id: 1505.00529
title: Learning Document Image Binarization from Data
In this paper we present a fully trainable binarization solution for degraded document images. Unlike previous attempts that often used simple features with a series of pre- and post-processing, our solution encodes all heuristics about whether or not a pixel is foreground text into a high-dimensional feature vector and learns a more complicated decision function. In particular, we prepare features of three types: 1) existing features for binarization such as intensity [1], contrast [2], [3], and Laplacian [4], [5]; 2) reformulated features from existing binarization decision functions such as those in [6] and [7]; and 3) our newly developed features, namely the Logarithm Intensity Percentile (LIP) and the Relative Darkness Index (RDI). Our initial experimental results show that using only selected samples (about 1.5% of all available training data), we can achieve a binarization performance comparable to that of fine-tuned (typically by hand), state-of-the-art methods. Additionally, the trained document binarization classifier shows good generalization capabilities on out-of-domain data.
labels: cs.CV
__index_level_0__: 42,742
id: 2201.10756
title: On the Achievability of Interference Channel Coding
This paper investigates the achievability of interference channel coding. It is clarified that the rate-splitting technique is unnecessary to achieve the Han-Kobayashi and Jian-Xin-Garg inner regions. Codes are constructed by using sparse matrices (with logarithmic column degree) and constrained-random-number generators. By extending the problem, we can establish a possible extension of the known inner regions.
labels: cs.IT
__index_level_0__: 277,092
id: 2309.10367
title: Toward efficient resource utilization at edge nodes in federated learning
Federated learning (FL) enables edge nodes to collaboratively contribute to constructing a global model without sharing their data. This is accomplished by devices computing local, private model updates that are then aggregated by a server. However, computational resource constraints and network communication can become a severe bottleneck for the larger model sizes typical of deep learning applications. Edge nodes tend to have limited hardware resources (RAM, CPU), and network bandwidth and reliability at the edge are a concern for scaling federated fleet applications. In this paper, we propose and evaluate an FL strategy inspired by transfer learning in order to reduce resource utilization on devices, as well as the load on the server and network in each global training round. For each local model update, we randomly select layers to train, freezing the remaining part of the model. In doing so, we can reduce both server load and communication costs per round by excluding all untrained layer weights from being transferred to the server. The goal of this study is to empirically explore the potential trade-off between resource utilization on devices and global model convergence under the proposed strategy. We implement the approach using the federated learning framework FEDn. A number of experiments were carried out over different datasets (CIFAR-10, CASA, and IMDB), performing different tasks using different deep-learning model architectures. Our results show that training the model partially can accelerate the training process, efficiently utilize on-device resources, and reduce data transmission by around 75% and 53% when we train 25% and 50% of the model layers, respectively, without harming the resulting global model accuracy.
labels: cs.AI, cs.LG
__index_level_0__: 392,981
id: 2103.17238
title: PySDM v1: particle-based cloud modelling package for warm-rain microphysics and aqueous chemistry
PySDM is an open-source Python package for simulating the dynamics of particles undergoing condensational and collisional growth, interacting with a fluid flow and subject to chemical composition changes. It is intended to serve as a building block for process-level as well as computational-fluid-dynamics simulation systems involving representation of a continuous phase (air) and a dispersed phase (aerosol), with PySDM being responsible for representation of the dispersed phase. The PySDM package core is a Pythonic high-performance implementation of the Super-Droplet Method (SDM) Monte-Carlo algorithm for representing collisional growth, hence the name. PySDM has two alternative parallel number-crunching backends available: multi-threaded CPU backend based on Numba and GPU-resident backend built on top of ThrustRTC. The usage examples are built on top of four simple atmospheric cloud modelling frameworks: box, adiabatic parcel, single-column and 2D prescribed flow kinematic models. In addition, the package ships with tutorial code depicting how PySDM can be used from Julia and Matlab.
labels: cs.CE
__index_level_0__: 227,828
id: 2005.06341
title: Human Mobility in Response to COVID-19 in France, Italy and UK
The policies implemented to hinder the COVID-19 outbreak represent one of the largest critical events in history. The understanding of this process is fundamental for crafting and tailoring post-disaster relief. In this work we perform a massive data analysis, through geolocalized data from 13M Facebook users, on how such a stress affected mobility patterns in France, Italy and UK. We find that the general reduction of the overall efficiency in the network of movements is accompanied by geographical fragmentation with a massive reduction of long-range connections. The impact, however, differs among nations according to their initial mobility structure. Indeed, we find that the mobility network after the lockdown is more concentrated in the case of France and UK and more distributed in Italy. Such a process can be approximated through percolation to quantify the substantial impact of the lockdown.
labels: cs.SI
__index_level_0__: 176,980
id: 2104.01914
title: Novel DNNs for Stiff ODEs with Applications to Chemically Reacting Flows
Chemically reacting flows are common in engineering, such as hypersonic flow, combustion, explosions, manufacturing processes and environmental assessments. For combustion, the number of reactions can be significant (over 100) and due to the very large CPU requirements of chemical reactions (over 99%) a large number of flow and combustion problems are presently beyond the capabilities of even the largest supercomputers. Motivated by this, novel Deep Neural Networks (DNNs) are introduced to approximate stiff ODEs. Two approaches are compared, i.e., either learn the solution or the derivative of the solution to these ODEs. These DNNs are applied to multiple species and reactions common in chemically reacting flows. Experimental results show that it is helpful to account for the physical properties of species while designing DNNs. The proposed approach is shown to generalize well.
labels: cs.LG
__index_level_0__: 228,522
id: 2310.00057
title: A multi-fidelity deep operator network (DeepONet) for fusing simulation and monitoring data: Application to real-time settlement prediction during tunnel construction
Ground settlement prediction during the process of mechanized tunneling is of paramount importance and remains a challenging research topic. Typically, two paradigms exist: a physics-driven approach utilizing process-oriented computational simulation models for the tunnel-soil interaction and the settlement prediction, and a data-driven approach employing machine learning techniques to establish mappings between influencing factors and the ground settlement. To integrate the advantages of both approaches and to assimilate the data from different sources, we propose a multi-fidelity deep operator network (DeepONet) framework, leveraging the recently developed operator learning methods. The presented framework comprises two components: a low-fidelity subnet that captures the fundamental ground settlement patterns obtained from finite element simulations, and a high-fidelity subnet that learns the nonlinear correlation between numerical models and real engineering monitoring data. A pre-processing strategy for causality is adopted to consider the spatio-temporal characteristics of the settlement during tunnel excavation. Transfer learning is utilized to reduce the training cost for the low-fidelity subnet. The results show that the proposed method can effectively capture the physical information provided by the numerical simulations and accurately fit measured data as well. Remarkably, even with very limited noisy monitoring data, the proposed model can achieve rapid, accurate, and robust predictions of the full-field ground settlement in real-time during mechanized tunnel excavation.
labels: cs.CE
__index_level_0__: 395,802
id: 2208.06709
title: Simulating Personal Food Consumption Patterns using a Modified Markov Chain
Food image classification serves as the foundation of image-based dietary assessment to predict food categories. Since there are many different food classes in real life, conventional models cannot achieve sufficiently high accuracy. Personalized classifiers aim to largely improve the accuracy of food image classification for each individual. However, a lack of public personal food consumption data proves to be a challenge for training such models. To address this issue, we propose a novel framework to simulate personal food consumption data patterns, leveraging the use of a modified Markov chain model and self-supervised learning. Our method is capable of creating an accurate future data pattern from a limited amount of initial data, and our simulated data patterns can be closely correlated with the initial data pattern. Furthermore, we use Dynamic Time Warping distance and Kullback-Leibler divergence as metrics to evaluate the effectiveness of our method on the public Food-101 dataset. Our experimental results demonstrate promising performance compared with random simulation and the original Markov chain method.
labels: cs.CV
__index_level_0__: 312,803
id: 2303.14483
title: Spatio-Temporal Graph Neural Networks for Predictive Learning in Urban Computing: A Survey
With recent advances in sensing technologies, a myriad of spatio-temporal data has been generated and recorded in smart cities. Forecasting the evolution patterns of spatio-temporal data is an important yet demanding aspect of urban computing, which can enhance intelligent management decisions in various fields, including transportation, environment, climate, public safety, healthcare, and others. Traditional statistical and deep learning methods struggle to capture complex correlations in urban spatio-temporal data. To this end, Spatio-Temporal Graph Neural Networks (STGNN) have been proposed, showing great promise in recent years. STGNNs enable the extraction of complex spatio-temporal dependencies by integrating graph neural networks (GNNs) and various temporal learning methods. In this manuscript, we provide a comprehensive survey on recent progress in STGNN technologies for predictive learning in urban computing. Firstly, we provide a brief introduction to the construction methods of spatio-temporal graph data and the prevalent deep-learning architectures used in STGNNs. We then sort out the primary application domains and specific predictive learning tasks based on the existing literature. Afterward, we scrutinize the design of STGNNs and their combination with some advanced technologies in recent years. Finally, we conclude the limitations of existing research and suggest potential directions for future work.
labels: cs.LG
__index_level_0__: 354,117
id: 1910.08617
title: A Bid-Validity Mechanism for Sequential Heat and Electricity Market Clearing
Coordinating the operation of units at the interface between heat and electricity systems, such as combined heat and power plants and heat pumps, is essential to reduce inefficiencies in each system and help achieve a cost-effective and efficient operation of the overall energy system. These energy systems are currently operated by sequential markets, which interface the technical and economic aspects of the systems. In that context, this study introduces an electricity-aware heat unit commitment model, which seeks to optimize the operation of the heat system while accounting for the techno-economic interdependencies between heat and electricity markets. These interdependencies are represented by bid-validity constraints, which model the linkage between the heat and electricity outputs and costs of combined heat and power plants and heat pumps. This approach also constitutes a novel market mechanism for the coordination of heat and electricity systems, which defines heat bids conditionally on electricity prices. Additionally, a tractable reformulation of the resulting trilevel optimization problem as a mixed integer linear program is proposed. Finally, it is shown on a case study that the proposed model yields a 23% reduction in total operating cost and a 6% reduction in wind curtailment compared to a traditional decoupled unit commitment model.
labels: cs.SY
__index_level_0__: 149,914
id: 2201.01191
title: Automated 3D reconstruction of LoD2 and LoD1 models for all 10 million buildings of the Netherlands
In this paper we present our workflow to automatically reconstruct 3D building models based on 2D building polygons and a LiDAR point cloud. The workflow generates models at different levels of detail (LoDs) to support data requirements of different applications from one consistent source. Specific attention has been paid to make the workflow robust to quickly run a new iteration in case of improvements in an algorithm or in case new input data become available. The quality of the reconstructed data highly depends on the quality of the input data and is monitored in several steps of the process. A 3D viewer has been developed to view and download the openly available 3D data at different LoDs in different formats. The workflow has been applied to all 10 million buildings of The Netherlands. The 3D service will be updated after new input data becomes available.
labels: cs.CV
__index_level_0__: 274,177
id: 2104.12756
title: InfographicVQA
Infographics are documents designed to effectively communicate information using a combination of textual, graphical and visual elements. In this work, we explore the automatic understanding of infographic images by using Visual Question Answering techniques. To this end, we present InfographicVQA, a new dataset that comprises a diverse collection of infographics along with natural language questions and answer annotations. The collected questions require methods to jointly reason over the document layout, textual content, graphical elements, and data visualizations. We curate the dataset with emphasis on questions that require elementary reasoning and basic arithmetic skills. Finally, we evaluate two strong baselines based on state-of-the-art multi-modal VQA models, and establish baseline performance for the new task. The dataset, code and leaderboard will be made available at http://docvqa.org
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
232,311
1806.00081
Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders
Susceptibility of deep neural networks to adversarial attacks poses a major theoretical and practical challenge. All efforts to harden classifiers against such attacks have seen limited success. Two distinct categories of samples to which deep networks are vulnerable, "adversarial samples" and "fooling samples", have been tackled separately so far due to the difficulty posed when considered together. In this work, we show how one can address them both under one unified framework. We tie a discriminative model with a generative model, rendering the adversarial objective to entail a conflict. Our model has the form of a variational autoencoder, with a Gaussian mixture prior on the latent vector. Each mixture component of the prior distribution corresponds to one of the classes in the data. This enables us to perform selective classification, leading to the rejection of adversarial samples instead of misclassification. Our method inherently provides a way of learning a selective classifier in a semi-supervised scenario as well, which can resist adversarial attacks. We also show how one can reclassify the rejected adversarial samples.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
99,226
2201.04753
Largest Eigenvalues of the Conjugate Kernel of Single-Layered Neural Networks
This paper is concerned with the asymptotic distribution of the largest eigenvalues for some nonlinear random matrix ensemble stemming from the study of neural networks. More precisely we consider $M= \frac{1}{m} YY^\top$ with $Y=f(WX)$ where $W$ and $X$ are random rectangular matrices with i.i.d. centered entries. This models the data covariance matrix or the Conjugate Kernel of a single layered random Feed-Forward Neural Network. The function $f$ is applied entrywise and can be seen as the activation function of the neural network. We show that the largest eigenvalue has the same limit (in probability) as that of some well-known linear random matrix ensembles. In particular, we relate the asymptotic limit of the largest eigenvalue for the nonlinear model to that of an information-plus-noise random matrix, establishing a possible phase transition depending on the function $f$ and the distribution of $W$ and $X$. This may be of interest for applications to machine learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
275,180
1709.01755
Energy-aware Mode Selection for Throughput Maximization in RF-Powered D2D Communications
Doubly-near-far problem in RF-powered networks can be mitigated by choosing appropriate device-to-device (D2D) communication mode and implementing energy-efficient information transfer (IT). In this work, we present a novel RF energy harvesting architecture where each transmitting-receiving user pair is allocated a disjoint channel for its communication which is fully powered by downlink energy transfer (ET) from hybrid access point (HAP). Considering that each user pair can select either D2D or cellular mode of communication, we propose an optimized transmission protocol controlled by the HAP that involves harvested energy-aware jointly optimal mode selection (MS) and time allocation (TA) for ET and IT to maximize the sum-throughput. Jointly global optimal solutions are derived by efficiently resolving the combinatorial issue with the help of optimal MS strategy for a given TA for ET. Closed-form expressions for the optimal TA in D2D and cellular modes are also derived to gain further analytical insights. Numerical results show that the joint optimal MS and TA, which significantly outperforms the benchmark schemes in terms of achievable RF-powered sum-throughput, is closely followed by the optimal TA scheme for D2D users. In fact, about $2/3$ fraction of the total user pairs prefer to follow the D2D mode for efficient RF-powered IT.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
80,145
1711.07325
PRT (Personal Rapid Transit) network simulation
Transportation problems of large urban conurbations inspire the search for new transportation systems that meet high environmental standards, are relatively cheap and user friendly. The latter element also includes the needs of disabled and elderly people. This article concerns a new transportation system, PRT (Personal Rapid Transit). The attention is focused on the analysis of the efficiency of the PRT transport network. A simulator of vehicle movement in a PRT network, as well as algorithms for traffic management and control, will be presented. A proposal for its physical implementation will also be included.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
84,967
2109.15000
A surprisal--duration trade-off across and within the world's languages
While there exist scores of natural languages, each with its unique features and idiosyncrasies, they all share a unifying theme: enabling human communication. We may thus reasonably predict that human cognition shapes how these languages evolve and are used. Assuming that the capacity to process information is roughly constant across human populations, we expect a surprisal--duration trade-off to arise both across and within languages. We analyse this trade-off using a corpus of 600 languages and, after controlling for several potential confounds, we find strong supporting evidence in both settings. Specifically, we find that, on average, phones are produced faster in languages where they are less surprising, and vice versa. Further, we confirm that more surprising phones are longer, on average, in 319 languages out of the 600. We thus conclude that there is strong evidence of a surprisal--duration trade-off in operation, both across and within the world's languages.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
258,137
1902.05826
The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the xAUC Metric
Where machine-learned predictive risk scores inform high-stakes decisions, such as bail and sentencing in criminal justice, fairness has been a serious concern. Recent work has characterized the disparate impact that such risk scores can have when used for a binary classification task. This may not account, however, for the more diverse downstream uses of risk scores and their non-binary nature. To better account for this, in this paper, we investigate the fairness of predictive risk scores from the point of view of a bipartite ranking task, where one seeks to rank positive examples higher than negative ones. We introduce the xAUC disparity as a metric to assess the disparate impact of risk scores and define it as the difference in the probabilities of ranking a random positive example from one protected group above a negative one from another group and vice versa. We provide a decomposition of bipartite ranking loss into components that involve the discrepancy and components that involve pure predictive ability within each group. We use xAUC analysis to audit predictive risk scores for recidivism prediction, income prediction, and cardiac arrest prediction, where it describes disparities that are not evident from simply comparing within-group predictive performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
121,630
0812.3147
Comparison of Binary Classification Based on Signed Distance Functions with Support Vector Machines
We investigate the performance of a simple signed distance function (SDF) based method by direct comparison with standard SVM packages, as well as K-nearest neighbor and RBFN methods. We present experimental results comparing the SDF approach with other classifiers on both synthetic geometric problems and five benchmark clinical microarray data sets. On both geometric problems and microarray data sets, the non-optimized SDF based classifiers perform just as well or slightly better than well-developed, standard SVM methods. These results demonstrate the potential accuracy of SDF-based methods on some types of problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
2,814
1611.05939
SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
With recent advancing of Internet of Things (IoTs), it becomes very attractive to implement the deep convolutional neural networks (DCNNs) onto embedded/portable systems. Presently, executing the software-based DCNNs requires high-performance server clusters in practice, restricting their widespread deployment on the mobile devices. To overcome this issue, considerable research efforts have been conducted in the context of developing highly-parallel and specific DCNN hardware, utilizing GPGPUs, FPGAs, and ASICs. Stochastic Computing (SC), which uses bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has a high potential for implementing DCNNs with high scalability and ultra-low hardware footprint. Since multiplications and additions can be calculated using AND gates and multiplexers in SC, significant reductions in power/energy and hardware footprint can be achieved compared to the conventional binary arithmetic implementations. The tremendous savings in power (energy) and hardware resources bring about immense design space for enhancing scalability and robustness for hardware DCNNs. This paper presents the first comprehensive design and optimization framework of SC-based DCNNs (SC-DCNNs). We first present the optimal designs of function blocks that perform the basic operations, i.e., inner product, pooling, and activation function. Then we propose the optimal design of four types of combinations of basic function blocks, named feature extraction blocks, which are in charge of extracting features from input feature maps. Besides, weight storage methods are investigated to reduce the area and power/energy consumption for storing weights. Finally, the whole SC-DCNN implementation is optimized, with feature extraction blocks carefully selected, to minimize area and power/energy consumption while maintaining a high network accuracy level.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
64,094
2312.03698
Intrinsic Harmonization for Illumination-Aware Compositing
Despite significant advancements in network-based image harmonization techniques, there still exists a domain disparity between typical training pairs and real-world composites encountered during inference. Most existing methods are trained to reverse global edits made on segmented image regions, which fail to accurately capture the lighting inconsistencies between the foreground and background found in composited images. In this work, we introduce a self-supervised illumination harmonization approach formulated in the intrinsic image domain. First, we estimate a simple global lighting model from mid-level vision representations to generate a rough shading for the foreground region. A network then refines this inferred shading to generate a harmonious re-shading that aligns with the background scene. In order to match the color appearance of the foreground and background, we utilize ideas from prior harmonization approaches to perform parameterized image edits in the albedo domain. To validate the effectiveness of our approach, we present results from challenging real-world composites and conduct a user study to objectively measure the enhanced realism achieved compared to state-of-the-art harmonization methods.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
413,358
2307.09861
BSDM: Background Suppression Diffusion Model for Hyperspectral Anomaly Detection
Hyperspectral anomaly detection (HAD) is widely used in Earth observation and deep space exploration. A major challenge for HAD is the complex background of the input hyperspectral images (HSIs), resulting in anomalies confused in the background. On the other hand, the lack of labeled samples for HSIs leads to poor generalization of existing HAD methods. This paper starts the first attempt to study a new and generalizable background learning problem without labeled samples. We present a novel solution BSDM (background suppression diffusion model) for HAD, which can simultaneously learn latent background distributions and generalize to different datasets for suppressing complex background. It is featured in three aspects: (1) For the complex background of HSIs, we design pseudo background noise and learn the potential background distribution in it with a diffusion model (DM). (2) For the generalizability problem, we apply a statistical offset module so that the BSDM adapts to datasets of different domains without labeling samples. (3) For achieving background suppression, we innovatively improve the inference process of DM by feeding the original HSIs into the denoising network, which removes the background as noise. Our work paves a new background suppression way for HAD that can improve HAD performance without the prerequisite of manually labeled data. Assessments and generalization experiments of four HAD methods on several real HSI datasets demonstrate the above three unique properties of the proposed method. The code is available at https://github.com/majitao-xd/BSDM-HAD.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
380,328
2308.06740
Weighted Sparse Partial Least Squares for Joint Sample and Feature Selection
Sparse Partial Least Squares (sPLS) is a common dimensionality reduction technique for data fusion, which projects data samples from two views by seeking linear combinations with a small number of variables with the maximum variance. However, sPLS extracts the combinations between two data sets with all data samples so that it cannot detect latent subsets of samples. To extend the application of sPLS by identifying a specific subset of samples and removing outliers, we propose an $\ell_\infty/\ell_0$-norm constrained weighted sparse PLS ($\ell_\infty/\ell_0$-wsPLS) method for joint sample and feature selection, where the $\ell_\infty/\ell_0$-norm constraints are used to select a subset of samples. We prove that the $\ell_\infty/\ell_0$-norm constraints have the Kurdyka-\L{ojasiewicz} property, so that a globally convergent algorithm is developed to solve it. Moreover, multi-view data with the same set of samples can be available in various real problems. To this end, we extend the $\ell_\infty/\ell_0$-wsPLS model and propose two multi-view wsPLS models for multi-view data fusion. We develop an efficient iterative algorithm for each multi-view wsPLS model and show its convergence property. Numerical and biomedical data experiments demonstrate the efficiency of the proposed methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
385,249
1906.04598
Joint Subspace Recovery and Enhanced Locality Driven Robust Flexible Discriminative Dictionary Learning
We propose a joint subspace recovery and enhanced locality based robust flexible label consistent dictionary learning method called Robust Flexible Discriminative Dictionary Learning (RFDDL). RFDDL mainly improves the data representation and classification abilities by enhancing the robust property to sparse errors and encoding the locality, reconstruction error and label consistency more accurately. First, for the robustness to noise and sparse errors in data and atoms, RFDDL aims at recovering the underlying clean data and clean atom subspaces jointly, and then performs DL and encodes the locality in the recovered subspaces. Second, to enable the data sampled from a nonlinear manifold to be handled potentially and obtain the accurate reconstruction by avoiding the overfitting, RFDDL minimizes the reconstruction error in a flexible manner. Third, to encode the label consistency accurately, RFDDL involves a discriminative flexible sparse code error to encourage the coefficients to be soft. Fourth, to encode the locality well, RFDDL defines the Laplacian matrix over recovered atoms, includes label information of atoms in terms of intra-class compactness and inter-class separation, and associates with group sparse codes and classifier to obtain the accurate discriminative locality-constrained coefficients and classifier. Extensive results on public databases show the effectiveness of our RFDDL.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
134,772
2201.12018
Transfer Learning In Differential Privacy's Hybrid-Model
The hybrid-model (Avent et al 2017) in Differential Privacy is an augmentation of the local-model where, in addition to N local-agents, we are assisted by one special agent who is in fact a curator holding the sensitive details of n additional individuals. Here we study the problem of machine learning in the hybrid-model where the n individuals in the curator's dataset are drawn from a different distribution than the one of the general population (the local-agents). We give a general scheme -- Subsample-Test-Reweigh -- for this transfer learning problem, which reduces any curator-model DP-learner to a hybrid-model learner in this setting using iterative subsampling and reweighing of the n examples held by the curator based on a smooth variation of the Multiplicative-Weights algorithm (introduced by Bun et al, 2020). Our scheme has a sample complexity which relies on the chi-squared divergence between the two distributions. We give worst-case analysis bounds on the sample complexity required for our private reduction. Aiming to reduce said sample complexity, we give two specific instances in which it can be drastically reduced (one instance is analyzed mathematically, the other empirically) and pose several directions for follow-up work.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
277,496
1607.07660
Fundamental Matrices from Moving Objects Using Line Motion Barcodes
Computing the epipolar geometry between cameras with very different viewpoints is often very difficult. The appearance of objects can vary greatly, and it is difficult to find corresponding feature points. Prior methods searched for corresponding epipolar lines using points on the convex hull of the silhouette of a single moving object. These methods fail when the scene includes multiple moving objects. This paper extends previous work to scenes having multiple moving objects by using the "Motion Barcodes", a temporal signature of lines. Corresponding epipolar lines have similar motion barcodes, and candidate pairs of corresponding epipolar lines are found by the similarity of their motion barcodes. As in previous methods we assume that cameras are relatively stationary and that moving objects have already been extracted using background subtraction.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
59,055
2410.19449
Learned Reference-based Diffusion Sampling for multi-modal distributions
Over the past few years, several approaches utilizing score-based diffusion have been proposed to sample from probability distributions, that is without having access to exact samples and relying solely on evaluations of unnormalized densities. The resulting samplers approximate the time-reversal of a noising diffusion process, bridging the target distribution to an easy-to-sample base distribution. In practice, the performance of these methods heavily depends on key hyperparameters that require ground truth samples to be accurately tuned. Our work aims to highlight and address this fundamental issue, focusing in particular on multi-modal distributions, which pose significant challenges for existing sampling methods. Building on existing approaches, we introduce Learned Reference-based Diffusion Sampler (LRDS), a methodology specifically designed to leverage prior knowledge on the location of the target modes in order to bypass the obstacle of hyperparameter tuning. LRDS proceeds in two steps by (i) learning a reference diffusion model on samples located in high-density space regions and tailored for multimodality, and (ii) using this reference model to foster the training of a diffusion-based sampler. We experimentally demonstrate that LRDS best exploits prior knowledge on the target distribution compared to competing algorithms on a variety of challenging distributions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
502,316
1305.4345
Ensembles of Classifiers based on Dimensionality Reduction
We present a novel approach for the construction of ensemble classifiers based on dimensionality reduction. Dimensionality reduction methods represent datasets using a small number of attributes while preserving the information conveyed by the original dataset. The ensemble members are trained based on dimension-reduced versions of the training set. These versions are obtained by applying dimensionality reduction to the original training set using different values of the input parameters. This construction meets both the diversity and accuracy criteria which are required to construct an ensemble classifier where the former criterion is obtained by the various input parameter values and the latter is achieved due to the decorrelation and noise reduction properties of dimensionality reduction. In order to classify a test sample, it is first embedded into the dimension reduced space of each individual classifier by using an out-of-sample extension algorithm. Each classifier is then applied to the embedded sample and the classification is obtained via a voting scheme. We present three variations of the proposed approach based on the Random Projections, the Diffusion Maps and the Random Subspaces dimensionality reduction algorithms. We also present a multi-strategy ensemble which combines AdaBoost and Diffusion Maps. A comparison is made with the Bagging, AdaBoost, Rotation Forest ensemble classifiers and also with the base classifier which does not incorporate dimensionality reduction. Our experiments used seventeen benchmark datasets from the UCI repository. The results obtained by the proposed algorithms were superior in many cases to other algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
24,687
2308.10362
Vehicle Cameras Guide mmWave Beams: Approach and Real-World V2V Demonstration
Accurately aligning millimeter-wave (mmWave) and terahertz (THz) narrow beams is essential to satisfy reliability and high data rates of 5G and beyond wireless communication systems. However, achieving this objective is difficult, especially in vehicle-to-vehicle (V2V) communication scenarios, where both transmitter and receiver are constantly mobile. Recently, additional sensing modalities, such as visual sensors, have attracted significant interest due to their capability to provide accurate information about the wireless environment. To that end, in this paper, we develop a deep learning solution for V2V scenarios to predict future beams using images from a 360 camera attached to the vehicle. The developed solution is evaluated on a real-world multi-modal mmWave V2V communication dataset comprising co-existing 360 camera and mmWave beam training data. The proposed vision-aided solution achieves $\approx 85\%$ top-5 beam prediction accuracy while significantly reducing the beam training overhead. This highlights the potential of utilizing vision for enabling highly-mobile V2V communications.
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
386,691
2008.12378
Measuring the Biases and Effectiveness of Content-Style Disentanglement
A recent spate of state-of-the-art semi- and un-supervised solutions disentangle and encode image "content" into a spatial tensor and image appearance or "style" into a vector, to achieve good performance in spatially equivariant tasks (e.g. image-to-image translation). To achieve this, they employ different model design, learning objective, and data biases. While considerable effort has been made to measure disentanglement in vector representations, and assess its impact on task performance, such analysis for (spatial) content-style disentanglement is lacking. In this paper, we conduct an empirical study to investigate the role of different biases in content-style disentanglement settings and unveil the relationship between the degree of disentanglement and task performance. In particular, we consider the setting where we: (i) identify key design choices and learning constraints for three popular content-style disentanglement models; (ii) relax or remove such constraints in an ablation fashion; and (iii) use two metrics to measure the degree of disentanglement and assess its effect on each task performance. Our experiments reveal that there is a "sweet spot" between disentanglement, task performance and - surprisingly - content interpretability, suggesting that blindly forcing higher disentanglement can hurt model performance and the semantic meaning of content factors. Our findings, as well as the used task-independent metrics, can be used to guide the design and selection of new models for tasks where content-style representations are useful.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
193,561
1704.02801
Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes
Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
71,515
1704.07597
Learning Agents in Black-Scholes Financial Markets: Consensus Dynamics and Volatility Smiles
Black-Scholes (BS) is the standard mathematical model for option pricing in financial markets. Option prices are calculated using an analytical formula whose main inputs are strike (at which price to exercise) and volatility. The BS framework assumes that volatility remains constant across all strikes, however, in practice it varies. How do traders come to learn these parameters? We introduce natural models of learning agents, in which they update their beliefs about the true implied volatility based on the opinions of other traders. We prove convergence of these opinion dynamics using techniques from control theory and leader-follower models, thus providing a resolution between theory and market practices. We allow for two different models, one with feedback and one with an unknown leader.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
72,386
2302.07800
An Efficient B-tree Implementation for Memory-Constrained Embedded Systems
Embedded devices collect and process significant amounts of data in a variety of applications including environmental monitoring, industrial automation and control, and other Internet of Things (IoT) applications. Storing data efficiently is critically important, especially when the device must perform local processing on the data. The most widely used data structure for high performance query and insert is the B-tree. However, existing implementations consume too much memory for small embedded devices and often rely on operating system support. This work presents an extremely memory efficient implementation of B-trees for embedded devices that functions on the smallest devices and does not require an operating system. Experimental results demonstrate that the B-tree implementation can run on devices with as little as 4 KB of RAM while efficiently processing thousands of records.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
345,835
2309.00824
Leveraging Semi-Supervised Graph Learning for Enhanced Diabetic Retinopathy Detection
Diabetic Retinopathy (DR) is a significant cause of blindness globally, highlighting the urgent need for early detection and effective treatment. Recent advancements in Machine Learning (ML) techniques have shown promise in DR detection, but the availability of labeled data often limits their performance. This research proposes a novel Semi-Supervised Graph Learning (SSGL) algorithm tailored for DR detection, which capitalizes on the relationships between labeled and unlabeled data to enhance accuracy. The work begins by investigating data augmentation and preprocessing techniques to address the challenges of image quality and feature variations. Techniques such as image cropping, resizing, contrast adjustment, normalization, and data augmentation are explored to optimize feature extraction and improve the overall quality of retinal images. Moreover, apart from detection and diagnosis, this work delves into applying ML algorithms for predicting the risk of developing DR or the likelihood of disease progression. Personalized risk scores for individual patients are generated using comprehensive patient data encompassing demographic information, medical history, and retinal images. The proposed Semi-Supervised Graph Learning algorithm is rigorously evaluated on two publicly available datasets and is benchmarked against existing methods. Results indicate significant improvements in classification accuracy, specificity, and sensitivity while demonstrating robustness against noise and outliers. Notably, the proposed algorithm addresses the challenge of imbalanced datasets, common in medical image analysis, further enhancing its practical applicability.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
389,433
2110.05712
DecGAN: Decoupling Generative Adversarial Network detecting abnormal neural circuits for Alzheimer's disease
One of the main reasons for Alzheimer's disease (AD) is the disorder of some neural circuits. Existing methods for AD prediction have achieved great success, however, detecting abnormal neural circuits from the perspective of brain networks is still a big challenge. In this work, a novel decoupling generative adversarial network (DecGAN) is proposed to detect abnormal neural circuits for AD. Concretely, a decoupling module is designed to decompose a brain network into two parts: one part is composed of a few sparse graphs which represent the neural circuits largely determining the development of AD; the other part is a supplement graph, whose influence on AD can be ignored. Furthermore, the adversarial strategy is utilized to guide the decoupling module to extract the feature more related to AD. Meanwhile, by encoding the detected neural circuits to hypergraph data, an analytic module associated with the hyperedge neurons algorithm is designed to identify the neural circuits. More importantly, a novel sparse capacity loss based on the spatial-spectral hypergraph similarity is developed to minimize the intrinsic topological distribution of neural circuits, which can significantly improve the accuracy and robustness of the proposed model. Experimental results demonstrate that the proposed model can effectively detect the abnormal neural circuits at different stages of AD, which is helpful for pathological study and early treatment.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
260,368
1411.6233
A Convex Sparse PCA for Feature Analysis
Principal component analysis (PCA) has been widely applied to dimensionality reduction and data pre-processing for different applications in engineering, biology and social science. Classical PCA and its variants seek linear projections of the original variables to obtain a low dimensional feature representation with maximal variance. One limitation is that it is very difficult to interpret the results of PCA. In addition, the classical PCA is vulnerable to certain noisy data. In this paper, we propose a convex sparse principal component analysis (CSPCA) algorithm and apply it to feature analysis. First we show that PCA can be formulated as a low-rank regression optimization problem. Based on the discussion, the $\ell_{2,1}$-norm minimization is incorporated into the objective function to make the regression coefficients sparse, thereby robust to the outliers. In addition, based on the sparse model used in CSPCA, an optimal weight is assigned to each of the original features, which in turn provides the output with good interpretability. With the output of our CSPCA, we can effectively analyze the importance of each feature under the PCA criteria. The objective function is convex, and we propose an iterative algorithm to optimize it. We apply the CSPCA algorithm to feature selection and conduct extensive experiments on six different benchmark datasets. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art unsupervised feature selection algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
37,813
2502.09131
A Stochastic Fundamental Lemma with Reduced Disturbance Data Requirements
Recently, the fundamental lemma by Willems et al. has been extended towards stochastic LTI systems subject to process disturbances. Using this lemma requires previously recorded data of inputs, outputs, and disturbances. In this paper, we exploit causality concepts of stochastic control to propose a variant of the stochastic fundamental lemma that does not require past disturbance data in the Hankel matrices. Our developments rely on polynomial chaos expansions and on the knowledge of the disturbance distribution. Similar to our previous results, the proposed variant of the fundamental lemma allows to predict future input-output trajectories of stochastic LTI systems. We draw upon a numerical example to illustrate the proposed variant in a data-driven control context.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
533,319
1808.06846
Search for Common Minima in Joint Optimization of Multiple Cost Functions
We present a novel optimization method, named the Combined Optimization Method (COM), for the joint optimization of two or more cost functions. Unlike the conventional joint optimization schemes, which try to find minima in a weighted sum of cost functions, the COM explores search space for common minima shared by all the cost functions. Given a set of multiple cost functions that have qualitatively different distributions of local minima with each other, the proposed method finds the common minima with a high success rate without the help of any metaheuristics. As a demonstration, we apply the COM to the crystal structure prediction in materials science. By introducing the concept of data assimilation, i.e., adopting the theoretical potential energy of the crystal and the crystallinity, which characterizes the agreement with the theoretical and experimental X-ray diffraction patterns, as cost functions, we show that the correct crystal structures of Si diamond, low quartz, and low cristobalite can be predicted with significantly higher success rates than the previous methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
105,623
1812.11526
Improving forecasting accuracy of time series data using a new ARIMA-ANN hybrid method and empirical mode decomposition
Many applications in different domains produce large amounts of time series data. Making accurate forecasts is critical for many decision makers. Various time series forecasting methods exist which use linear and nonlinear models separately or a combination of both. Studies show that combining linear and nonlinear models can be effective in improving forecasting performance. However, some assumptions that those existing methods make might restrict their performance in certain situations. We provide a new Autoregressive Integrated Moving Average (ARIMA)-Artificial Neural Network (ANN) hybrid method that works in a more general framework. Experimental results show that strategies for decomposing the original data and for combining linear and nonlinear models throughout the hybridization process are key factors in the forecasting performance of the methods. By using appropriate strategies, our hybrid method can be an effective way to improve forecasting accuracy obtained by traditional hybrid methods and also either of the individual methods used separately.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
117,580
2306.01494
Local Message Passing on Frustrated Systems
Message passing on factor graphs is a powerful framework for probabilistic inference, which finds important applications in various scientific domains. The most wide-spread message passing scheme is the sum-product algorithm (SPA) which gives exact results on trees but often fails on graphs with many small cycles. We search for an alternative message passing algorithm that works particularly well on such cyclic graphs. Therefore, we challenge the extrinsic principle of the SPA, which loses its objective on graphs with cycles. We further replace the local SPA message update rule at the factor nodes of the underlying graph with a generic mapping, which is optimized in a data-driven fashion. These modifications lead to a considerable improvement in performance while preserving the simplicity of the SPA. We evaluate our method for two classes of cyclic graphs: the 2x2 fully connected Ising grid and factor graphs for symbol detection on linear communication channels with inter-symbol interference. To enable the method for large graphs as they occur in practical applications, we develop a novel loss function that is inspired by the Bethe approximation from statistical physics and allows for training in an unsupervised fashion.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
370,482
2410.12441
A Primal-dual algorithm for image reconstruction with ICNNs
We address the optimization problem in a data-driven variational reconstruction framework, where the regularizer is parameterized by an input-convex neural network (ICNN). While gradient-based methods are commonly used to solve such problems, they struggle to effectively handle non-smoothness which often leads to slow convergence. Moreover, the nested structure of the neural network complicates the application of standard non-smooth optimization techniques, such as proximal algorithms. To overcome these challenges, we reformulate the problem and eliminate the network's nested structure. By relating this reformulation to epigraphical projections of the activation functions, we transform the problem into a convex optimization problem that can be efficiently solved using a primal-dual algorithm. We also prove that this reformulation is equivalent to the original variational problem. Through experiments on several imaging tasks, we demonstrate that the proposed approach outperforms subgradient methods in terms of both speed and stability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
499,031
2301.06024
A data science and machine learning approach to continuous analysis of Shakespeare's plays
The availability of quantitative text analysis methods has provided new ways of analyzing literature in a manner that was not available in the pre-information era. Here we apply comprehensive machine learning analysis to the work of William Shakespeare. The analysis shows clear changes in the style of writing over time, with the most significant changes in the sentence length, frequency of adjectives and adverbs, and the sentiments expressed in the text. Applying machine learning to make a stylometric prediction of the year of the play shows a Pearson correlation of 0.71 between the actual and predicted year, indicating that Shakespeare's writing style as reflected by the quantitative measurements changed over time. Additionally, it shows that the stylometrics of some of the plays are more similar to plays written either before or after the year they were written. For instance, Romeo and Juliet is dated 1596, but is more similar in stylometrics to plays written by Shakespeare after 1600. The source code for the analysis is available for free download.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
340,523
2011.08319
Multi-Step Recurrent Q-Learning for Robotic Velcro Peeling
Learning object manipulation is a critical skill for robots to interact with their environment. Even though there has been significant progress in robotic manipulation of rigid objects, interacting with non-rigid objects remains challenging for robots. In this work, we introduce velcro peeling as a representative application for robotic manipulation of non-rigid objects in complex environments. We present a method of learning force-based manipulation from noisy and incomplete sensor inputs in partially observable environments by modeling long term dependencies between measurements with a multi-step deep recurrent network. We present experiments on a real robot to show the necessity of modeling these long term dependencies and validate our approach in simulation and robot experiments. Our results show that using tactile input enables the robot to overcome geometric uncertainties present in the environment with high fidelity in ~90% of all cases, outperforming the baselines by a large margin.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
206,830
2404.09565
Reliability Estimation of News Media Sources: Birds of a Feather Flock Together
Evaluating the reliability of news sources is a routine task for journalists and organizations committed to acquiring and disseminating accurate information. Recent research has shown that predicting sources' reliability represents an important first-prior step in addressing additional challenges such as fake news detection and fact-checking. In this paper, we introduce a novel approach for source reliability estimation that leverages reinforcement learning strategies for estimating the reliability degree of news sources. Contrary to previous research, our proposed approach models the problem as the estimation of a reliability degree, and not a reliability label, based on how all the news media sources interact with each other on the Web. We validated the effectiveness of our method on a news media reliability dataset that is an order of magnitude larger than comparable existing datasets. Results show that the estimated reliability degrees strongly correlate with journalist-provided scores (Spearman=0.80) and can effectively predict reliability labels (macro-avg. F$_1$ score=81.05). We release our implementation and dataset, aiming to provide a valuable resource for the NLP community working on information verification.
false
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
446,737
1512.03419
Major Transitions in Political Order
We present three major transitions that occur on the way to the elaborate and diverse societies of the modern era. Our account links the worlds of social animals such as pigtail macaques and monk parakeets to examples from human history, including 18th Century London and the contemporary online phenomenon of Wikipedia. From the first awareness and use of group-level social facts to the emergence of norms and their self-assembly into normative bundles, each transition represents a new relationship between the individual and the group. At the center of this relationship is the use of coarse-grained information gained via lossy compression. The role of top-down causation in the origin of society parallels that conjectured to occur in the origin and evolution of life itself.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
50,031
2306.14709
Self-supervised novel 2D view synthesis of large-scale scenes with efficient multi-scale voxel carving
The task of generating novel views of real scenes is increasingly important nowadays when AI models become able to create realistic new worlds. In many practical applications, it is important for novel view synthesis methods to stay grounded in the physical world as much as possible, while also being able to imagine it from previously unseen views. While most current methods are developed and tested in virtual environments with small scenes and no errors in pose and depth information, we push the boundaries to the real-world domain of large scales in the new context of UAVs. Our algorithmic contributions are twofold. First, we manage to stay anchored in the real 3D world, by introducing an efficient multi-scale voxel carving method, which is able to accommodate significant noise in pose, depth, and illumination variations, while being able to reconstruct the view of the world from drastically different poses at test time. Second, our final high-resolution output is efficiently self-trained on data automatically generated by the voxel carving module, which gives it the flexibility to adapt efficiently to any scene. We demonstrated the effectiveness of our method on highly complex and large-scale scenes in real environments while outperforming the current state-of-the-art. Our code is publicly available: https://github.com/onorabil/MSVC.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
375,787
2309.16829
An analysis of the derivative-free loss method for solving PDEs
This study analyzes the derivative-free loss method to solve a certain class of elliptic PDEs using neural networks. The derivative-free loss method uses the Feynman-Kac formulation, incorporating stochastic walkers and their corresponding average values. We investigate the effect of the time interval related to the Feynman-Kac formulation and the walker size in the context of computational efficiency, trainability, and sampling errors. Our analysis shows that the training loss bias is proportional to the time interval and the spatial gradient of the neural network while inversely proportional to the walker size. We also show that the time interval must be sufficiently long to train the network. These analytic results tell that we can choose the walker size as small as possible based on the optimal lower bound of the time interval. We also provide numerical tests supporting our analysis.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
395,515
1704.07945
Spatio-temporal Person Retrieval via Natural Language Queries
In this paper, we address the problem of spatio-temporal person retrieval from multiple videos using a natural language query, in which we output a tube (i.e., a sequence of bounding boxes) which encloses the person described by the query. For this problem, we introduce a novel dataset consisting of videos containing people annotated with bounding boxes for each second and with five natural language descriptions. To retrieve the tube of the person described by a given natural language query, we design a model that combines methods for spatio-temporal human detection and multimodal retrieval. We conduct comprehensive experiments to compare a variety of tube and text representations and multimodal retrieval methods, and present a strong baseline in this task as well as demonstrate the efficacy of our tube representation and multimodal feature embedding technique. Finally, we demonstrate the versatility of our model by applying it to two other important tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
72,447
2404.03713
Explaining Explainability: Recommendations for Effective Use of Concept Activation Vectors
Concept-based explanations translate the internal representations of deep learning models into a language that humans are familiar with: concepts. One popular method for finding concepts is Concept Activation Vectors (CAVs), which are learnt using a probe dataset of concept exemplars. In this work, we investigate three properties of CAVs: (1) inconsistency across layers, (2) entanglement with other concepts, and (3) spatial dependency. Each property provides both challenges and opportunities in interpreting models. We introduce tools designed to detect the presence of these properties, provide insight into how each property can lead to misleading explanations, and provide recommendations to mitigate their impact. To demonstrate practical applications, we apply our recommendations to a melanoma classification task, showing how entanglement can lead to uninterpretable results and that the choice of negative probe set can have a substantial impact on the meaning of a CAV. Further, we show that understanding these properties can be used to our advantage. For example, we introduce spatially dependent CAVs to test if a model is translation invariant with respect to a specific concept and class. Our experiments are performed on natural images (ImageNet), skin lesions (ISIC 2019), and a new synthetic dataset, Elements. Elements is designed to capture a known ground truth relationship between concepts and classes. We release this dataset to facilitate further research in understanding and evaluating interpretability methods.
true
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
444,380
2406.00606
LLMs Could Autonomously Learn Without External Supervision
In the quest for super-human performance, Large Language Models (LLMs) have traditionally been tethered to human-annotated datasets and predefined training objectives-a process that is both labor-intensive and inherently limited. This paper presents a transformative approach: Autonomous Learning for LLMs, a self-sufficient learning paradigm that frees models from the constraints of human supervision. This method endows LLMs with the ability to self-educate through direct interaction with text, akin to a human reading and comprehending literature. Our approach eliminates the reliance on annotated data, fostering an Autonomous Learning environment where the model independently identifies and reinforces its knowledge gaps. Empirical results from our comprehensive experiments, which utilized a diverse array of learning materials and were evaluated against standard public quizzes, reveal that Autonomous Learning outstrips the performance of both Pre-training and Supervised Fine-Tuning (SFT), as well as retrieval-augmented methods. These findings underscore the potential of Autonomous Learning to not only enhance the efficiency and effectiveness of LLM training but also to pave the way for the development of more advanced, self-reliant AI systems.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
459,940
2408.04349
Optimal Layout-Aware CNOT Circuit Synthesis with Qubit Permutation
CNOT optimization plays a significant role in noise reduction for Quantum Circuits. Several heuristic and exact approaches exist for CNOT optimization. In this paper, we investigate more complicated variations of optimal synthesis by allowing qubit permutations and handling layout restrictions. We encode such problems into Planning, SAT, and QBF. We provide optimization for both CNOT gate count and circuit depth. For experimental evaluation, we consider standard T-gate optimized benchmarks and optimize CNOT sub-circuits. We show that allowing qubit permutations can further reduce up to 56% in CNOT count and 46% in circuit depth. In the case of optimally mapped circuits under layout restrictions, we observe a reduction up to 17% CNOT count and 19% CNOT depth.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
479,356
2208.00438
Toward Understanding WordArt: Corner-Guided Transformer for Scene Text Recognition
Artistic text recognition is an extremely challenging task with a wide range of applications. However, current scene text recognition methods mainly focus on irregular text but have not explored artistic text specifically. The challenges of artistic text recognition include the varied appearances arising from specially designed fonts and effects, the complex connections and overlaps between characters, and the severe interference from background patterns. To alleviate these problems, we propose to recognize the artistic text at three levels. Firstly, corner points are applied to guide the extraction of local features inside characters, considering the robustness of corner structures to appearance and shape. In this way, the discreteness of the corner points cuts off the connection between characters, and their sparsity improves the robustness to background interference. Secondly, we design a character contrastive loss to model the character-level feature, improving the feature representation for character classification. Thirdly, we utilize Transformer to learn the global feature at the image level and model the global relationship of the corner points, with the assistance of a corner-query cross-attention mechanism. Besides, we provide an artistic text dataset to benchmark the performance. Experimental results verify the significant superiority of our proposed method on artistic text recognition and also achieve state-of-the-art performance on several blurred and perspective datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
310,854
1207.0872
Differential Privacy for Relational Algebra: Improving the Sensitivity Bounds via Constraint Systems
Differential privacy is a modern approach in privacy-preserving data analysis to control the amount of information that can be inferred about an individual by querying a database. The most common techniques are based on the introduction of probabilistic noise, often drawn from a Laplace distribution parameterized by the sensitivity of the query. In order to maximize the utility of the query, it is crucial to estimate the sensitivity as precisely as possible. In this paper we consider relational algebra, the classical language for queries in relational databases, and we propose a method for computing a bound on the sensitivity of queries in an intuitive and compositional way. We use constraint-based techniques to accumulate the information on the possible values for attributes provided by the various components of the query, thus making it possible to compute tight bounds on the sensitivity.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
17,203
2303.09597
Residual Physics Learning and System Identification for Sim-to-real Transfer of Policies on Buoyancy Assisted Legged Robots
The light and soft characteristics of Buoyancy Assisted Lightweight Legged Unit (BALLU) robots have a great potential to provide intrinsically safe interactions in environments involving humans, unlike many heavy and rigid robots. However, their unique and sensitive dynamics impose challenges to obtaining robust control policies in the real world. In this work, we demonstrate robust sim-to-real transfer of control policies on the BALLU robots via system identification and our novel residual physics learning method, Environment Mimic (EnvMimic). First, we model the nonlinear dynamics of the actuators by collecting hardware data and optimizing the simulation parameters. Rather than relying on standard supervised learning formulations, we utilize deep reinforcement learning to train an external force policy to match real-world trajectories, which enables us to model residual physics with greater fidelity. We analyze the improved simulation fidelity by comparing the simulation trajectories against the real-world ones. We finally demonstrate that the improved simulator allows us to learn better walking and turning policies that can be successfully deployed on the hardware of BALLU.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
352,105
2410.05842
Privacy-aware Fully Model-Free Event-triggered Cloud-based HVAC Control
Privacy is a major concern when computing-as-a-service (CaaS) platforms, e.g., cloud-computing platforms, are utilized for building automation, as CaaS platforms can infer sensitive information, such as occupancy, using the sensor measurements of a building. Although the existing encrypted model-based control algorithms can ensure the security and privacy of sensor measurements, they are highly complex to implement and require high computational resources, which result in a high cost of using CaaS platforms. To address these issues, in this paper, we propose an encrypted fully model-free event-triggered cloud-based HVAC control framework that ensures the privacy of occupancy information and minimizes the communication and computation overhead associated with encrypted HVAC control. To this end, we first develop a model-free controller for regulating indoor temperature and CO2 levels. We then design a model-free event-triggering unit which reduces the communication and computation costs of encrypted HVAC control using an optimal triggering policy. Finally, we evaluate the performance of the proposed encrypted fully model-free event-triggered cloud-based HVAC control framework using the TRNSYS simulator, comparing it to an encrypted model-based event-triggered control framework, which uses model predictive control to regulate the indoor climate. Our numerical results demonstrate that, compared to the encrypted model-based method, the proposed fully model-free framework improves the control performance while reducing the communication and computation costs. More specifically, it reduces the communication between the system and the CaaS platform by 64%, and its computation time is 75% less than that of the model-based control.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
495,945
2003.04078
A Survey on The Expressive Power of Graph Neural Networks
Graph neural networks (GNNs) are effective machine learning models for various graph learning problems. Despite their empirical successes, the theoretical limitations of GNNs have been revealed recently. Consequently, many GNN models have been proposed to overcome these limitations. In this survey, we provide a comprehensive overview of the expressive power of GNNs and provably powerful variants of GNNs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
167,447
1205.4390
Reduced-Rank Adaptive Filtering Based on Joint Iterative Optimization of Adaptive Filters
This letter proposes a novel adaptive reduced-rank filtering scheme based on joint iterative optimization of adaptive filters. The novel scheme consists of a joint iterative optimization of a bank of full-rank adaptive filters that forms the projection matrix and an adaptive reduced-rank filter that operates at the output of the bank of filters. We describe minimum mean-squared error (MMSE) expressions for the design of the projection matrix and the reduced-rank filter and low-complexity normalized least-mean squares (NLMS) adaptive algorithms for its efficient implementation. Simulations for an interference suppression application show that the proposed scheme outperforms in convergence and tracking the state-of-the-art reduced-rank schemes at significantly lower complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
16,094
2309.07431
Asynchronous Spatial-Temporal Allocation for Trajectory Planning of Heterogeneous Multi-Agent Systems
To plan the trajectories of a large-scale heterogeneous swarm, sequentially or synchronously distributed methods usually become intractable due to the lack of global clock synchronization. To this end, we provide a novel asynchronous spatial-temporal allocation method. Specifically, between a pair of agents, the allocation is proposed to determine their corresponding derivable time-stamped space and can be updated in an asynchronous way, by inserting a waiting duration between two consecutive replanning steps. Via theoretical analysis, the inter-agent collision is proved to be avoided and the allocation ensures timely updates. Comprehensive simulations and comparisons with five baselines validate the effectiveness of the proposed method and illustrate its improvement in completion time and moving distance. Finally, hardware experiments are carried out, where $8$ heterogeneous unmanned ground vehicles with onboard computation navigate in cluttered scenarios with high agility.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
391,781
2408.12177
Revisiting the Phenomenon of Syntactic Complexity Convergence on German Dialogue Data
We revisit the phenomenon of syntactic complexity convergence in conversational interaction, originally found for English dialogue, which has theoretical implication for dialogical concepts such as mutual understanding. We use a modified metric to quantify syntactic complexity based on dependency parsing. The results show that syntactic complexity convergence can be statistically confirmed in one of three selected German datasets that were analysed. Given that the dataset which shows such convergence is much larger than the other two selected datasets, the empirical results indicate a certain degree of linguistic generality of syntactic complexity convergence in conversational interaction. We also found a different type of syntactic complexity convergence in one of the datasets while further investigation is still necessary.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
482,627
1904.10257
A hybridizable discontinuous Galerkin method for electromagnetics with a view on subsurface applications
Two Hybridizable Discontinuous Galerkin (HDG) schemes for the solution of Maxwell's equations in the time domain are presented. The first method is based on an electromagnetic diffusion equation, while the second is based on Faraday's and Maxwell--Amp\`ere's laws. Both formulations include the diffusive term depending on the conductivity of the medium. The three-dimensional formulation of the electromagnetic diffusion equation in the framework of HDG methods, the introduction of the conduction current term and the choice of the electric field as hybrid variable in a mixed formulation are the key points of the current study. Numerical results are provided for validation purposes and convergence studies of spatial and temporal discretizations are carried out. The test cases include both simulation in dielectric and conductive media.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
128,587
2401.14992
Graph-based Active Learning for Entity Cluster Repair
Cluster repair methods aim to determine errors in clusters and modify them so that each cluster consists of records representing the same entity. Current cluster repair methodologies primarily assume duplicate-free data sources, where each record from one source corresponds to a unique record from another. However, real-world data often deviates from this assumption due to quality issues. Recent approaches apply clustering methods in combination with link categorization methods so they can be applied to data sources with duplicates. Nevertheless, the results do not show a clear picture since the quality highly varies depending on the configuration and dataset. In this study, we introduce a novel approach for cluster repair that utilizes graph metrics derived from the underlying similarity graphs. These metrics are pivotal in constructing a classification model to distinguish between correct and incorrect edges. To address the challenge of limited training data, we integrate an active learning mechanism tailored to cluster-specific attributes. The evaluation shows that the method outperforms existing cluster repair methods without distinguishing between duplicate-free or dirty data sources. Notably, our modified active learning strategy exhibits enhanced performance when dealing with datasets containing duplicates, showcasing its effectiveness in such scenarios.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
424,285
2306.10982
Differentially Private Over-the-Air Federated Learning Over MIMO Fading Channels
Federated learning (FL) enables edge devices to collaboratively train machine learning models, with model communication replacing direct data uploading. While over-the-air model aggregation improves communication efficiency, uploading models to an edge server over wireless networks can pose privacy risks. Differential privacy (DP) is a widely used quantitative technique to measure statistical data privacy in FL. Previous research has focused on over-the-air FL with a single-antenna server, leveraging communication noise to enhance user-level DP. This approach achieves the so-called "free DP" by controlling transmit power rather than introducing additional DP-preserving mechanisms at devices, such as adding artificial noise. In this paper, we study differentially private over-the-air FL over a multiple-input multiple-output (MIMO) fading channel. We show that FL model communication with a multiple-antenna server amplifies privacy leakage as the multiple-antenna server employs separate receive combining for model aggregation and information inference. Consequently, relying solely on communication noise, as done in the multiple-input single-output system, cannot meet high privacy requirements, and a device-side privacy-preserving mechanism is necessary for optimal DP design. We analyze the learning convergence and privacy loss of the studied FL system and propose a transceiver design algorithm based on alternating optimization. Numerical results demonstrate that the proposed method achieves a better privacy-learning trade-off compared to prior work.
false
false
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
374,432
2208.09915
MockingBERT: A Method for Retroactively Adding Resilience to NLP Models
Protecting NLP models against misspellings, whether accidental or adversarial, has been the object of research interest for the past few years. Existing remediations have typically either compromised accuracy or required full model re-training with each new class of attacks. We propose a novel method of retroactively adding resilience to misspellings to transformer-based NLP models. This robustness can be achieved without the need for re-training of the original NLP model and with only a minimal loss of language understanding performance on inputs without misspellings. Additionally, we propose a new efficient approximate method of generating adversarial misspellings, which significantly reduces the cost needed to evaluate a model's resilience to adversarial attacks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
313,885
1006.3128
The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing
Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,804
2209.12778
Developing A Visual-Interactive Interface for Electronic Health Record Labeling: An Explainable Machine Learning Approach
Labeling a large number of electronic health records is expensive and time consuming, and having a labeling assistant tool can significantly reduce medical experts' workload. Nevertheless, to gain the experts' trust, the tool must be able to explain the reasons behind its outputs. Motivated by this, we introduce Explainable Labeling Assistant (XLabel), a new visual-interactive tool for data labeling. At a high level, XLabel uses Explainable Boosting Machine (EBM) to classify the labels of each data point and visualizes heatmaps of EBM's explanations. As a case study, we use XLabel to help medical experts label electronic health records with four common non-communicable diseases (NCDs). Our experiments show that 1) XLabel helps reduce the number of labeling actions, 2) EBM as an explainable classifier is as accurate as other well-known machine learning models and outperforms a rule-based model used by NCD experts, and 3) even when more than 40% of the records were intentionally mislabeled, EBM could recall the correct labels of more than 90% of these records.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
319,654
2406.00868
Dual Policy Reinforcement Learning for Real-time Rebalancing in Bike-sharing Systems
Bike-sharing systems play a crucial role in easing traffic congestion and promoting healthier lifestyles. However, ensuring their reliability and user acceptance requires effective strategies for rebalancing bikes. This study introduces a novel approach to address the real-time rebalancing problem with a fleet of vehicles. It employs a dual policy reinforcement learning algorithm that decouples inventory and routing decisions, enhancing realism and efficiency compared to previous methods where both decisions were made simultaneously. We first formulate the inventory and routing subproblems as a multi-agent Markov Decision Process within a continuous time framework. Subsequently, we propose a DQN-based dual policy framework to jointly estimate the value functions, minimizing the lost demand. To facilitate learning, a comprehensive simulator is applied to operate under a first-arrive-first-serve rule, which enables the computation of immediate rewards across diverse demand scenarios. We conduct extensive experiments on various datasets generated from historical real-world data, affected by both temporal and weather factors. Our proposed algorithm demonstrates significant performance improvements over previous baseline methods. It offers valuable practical insights for operators and further explores the incorporation of reinforcement learning into real-world dynamic programming problems, paving the way for more intelligent and robust urban mobility solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
460,061
2102.11887
Quantum Cross Entropy and Maximum Likelihood Principle
Quantum machine learning is an emerging field at the intersection of machine learning and quantum computing. Classical cross entropy plays a central role in machine learning. We define its quantum generalization, the quantum cross entropy, prove its lower bounds, and investigate its relation to quantum fidelity. In the classical case, minimizing cross entropy is equivalent to maximizing likelihood, since classical cross entropy is equal to the negative log-likelihood. In the quantum case, when the quantum cross entropy is constructed from quantum data undisturbed by quantum measurements, this relation holds. When we obtain the quantum cross entropy through an empirical density matrix based on measurement outcomes, the quantum cross entropy is lower-bounded by the negative log-likelihood. These two different scenarios illustrate the information loss when making quantum measurements. We conclude that to achieve the goal of full quantum machine learning, it is crucial to utilize the deferred measurement principle.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
221,547
1808.09653
Neural Metaphor Detection in Context
We present end-to-end neural models for detecting metaphorical word use in context. We show that relatively standard BiLSTM models which operate on complete sentences work well in this setting, in comparison to previous work that used more restricted forms of linguistic context. These models establish a new state-of-the-art on existing verb metaphor detection benchmarks, and show strong performance on jointly predicting the metaphoricity of all words in a running text.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
106,241
2306.03831
GEO-Bench: Toward Foundation Models for Earth Monitoring
Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks. Such models, recently coined foundation models, have been transformational to the field of natural language processing. Variants have also been proposed for image data, but their applicability to remote sensing tasks is limited. To stimulate the development of foundation models for Earth monitoring, we propose a benchmark comprised of six classification and six segmentation tasks, which were carefully curated and adapted to be both relevant to the field and well-suited for model evaluation. We accompany this benchmark with a robust methodology for evaluating models and reporting aggregated results to enable a reliable assessment of progress. Finally, we report results for 20 baselines to gain information about the performance of existing models. We believe that this benchmark will be a driver of progress across a variety of Earth monitoring tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
371,494
1612.06821
User Bias Removal in Review Score Prediction
Review score prediction of text reviews has recently gained a lot of attention in recommendation systems. A major problem in models for review score prediction is the presence of noise due to user-bias in review scores. We propose two simple statistical methods to remove such noise and improve review score prediction. Compared to other methods that use multiple classifiers, one for each user, our model uses a single global classifier to predict review scores. We empirically evaluate our methods on two major categories (\textit{Electronics} and \textit{Movies and TV}) of the SNAP published Amazon e-Commerce Reviews data-set and Amazon \textit{Fine Food} reviews data-set. We obtain improved review score prediction for three commonly used text feature representations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
65,864
2402.00017
Deploying ADVISER: Impact and Lessons from Using Artificial Intelligence for Child Vaccination Uptake in Nigeria
More than 5 million children under five years die from largely preventable or treatable medical conditions every year, with an overwhelmingly large proportion of deaths occurring in underdeveloped countries with low vaccination uptake. One of the United Nations' sustainable development goals (SDG 3) aims to end preventable deaths of newborns and children under five years of age. We focus on Nigeria, where the rate of infant mortality is appalling. In particular, low vaccination uptake in Nigeria is a major driver of more than 2,000 daily deaths of children under the age of five years. In this paper, we describe our collaboration with government partners in Nigeria to deploy ADVISER: AI-Driven Vaccination Intervention Optimiser. The framework, based on an integer linear program that seeks to maximize the cumulative probability of successful vaccination, is the first successful deployment of an AI-enabled toolchain for optimizing the allocation of health interventions in Nigeria. In this paper, we provide a background of the ADVISER framework and present results, lessons, and success stories of deploying ADVISER to more than 13,000 families in the state of Oyo, Nigeria.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
425,432
1204.3663
Thermodynamic Principles in Social Collaborations
A thermodynamic framework is presented to characterize the evolution of efficiency, order, and quality in social content production systems, and this framework is applied to the analysis of Wikipedia. Contributing editors are characterized by their (creative) energy levels in terms of number of edits. We develop a definition of entropy that can be used to analyze the efficiency of the system as a whole, and relate it to the evolution of power-law distributions and a metric of quality. The concept is applied to the analysis of eight years of Wikipedia editing data and results show that (1) Wikipedia has become more efficient during its evolution and (2) the entropy-based efficiency metric has high correlation with observed readership of Wikipedia pages.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
15,519
2003.03793
Bayesian Particles on Cyclic Graphs
We consider the problem of designing synthetic cells to achieve a complex goal (e.g., mimicking the immune system by seeking invaders) in a complex environment (e.g., the circulatory system), where they might have to change their control policy, communicate with each other, and deal with stochasticity including false positives and negatives---all with minimal capabilities and only a few bits of memory. We simulate the immune response using cyclic, maze-like environments and use targets at unknown locations to represent invading cells. Using only a few bits of memory, the synthetic cells are programmed to perform a reinforcement learning-type algorithm with which they update their control policy based on randomized encounters with other cells. As the synthetic cells work together to find the target, their interactions as an ensemble function as a physical implementation of a Bayesian update. That is, the particles act as a particle filter. This result provides formal properties about the behavior of the synthetic cell ensemble that can be used to ensure robustness and safety. This method of simplified reinforcement learning is evaluated in simulations, and applied to an actual model of the human circulatory system.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
167,368
2401.02386
Direction of Arrival Estimation Using Microphone Array Processing for Moving Humanoid Robots
The auditory system of humanoid robots has gained increased attention in recent years. This system typically acquires the surrounding sound field by means of a microphone array. Signals acquired by the array are then processed using various methods. One of the widely applied methods is direction of arrival estimation. The conventional direction of arrival estimation methods assume that the array is fixed at a given position during the estimation. However, this is not necessarily true for an array installed on a moving humanoid robot. The array motion, if not accounted for appropriately, can introduce a significant error in the estimated direction of arrival. The current paper presents a signal model that takes the motion into account. Based on this model, two processing methods are proposed. The first one compensates for the motion of the robot. The second method is applicable to periodic signals and utilizes the motion in order to enhance the performance to a level beyond that of a stationary array. Numerical simulations and an experimental study are provided, demonstrating that the motion compensation method almost eliminates the motion-related error. It is also demonstrated that by using the motion-based enhancement method it is possible to improve the direction of arrival estimation performance, as compared to that obtained when using a stationary array.
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
419,696
2103.12953
Supporting Clustering with Contrastive Learning
Unsupervised clustering aims at discovering the semantic categories of data according to some distance measured in the representation space. However, different categories often overlap with each other in the representation space at the beginning of the learning process, which poses a significant challenge for distance-based clustering in achieving good separation between different categories. To this end, we propose Supporting Clustering with Contrastive Learning (SCCL) -- a novel framework to leverage contrastive learning to promote better separation. We assess the performance of SCCL on short text clustering and show that SCCL significantly advances the state-of-the-art results on most benchmark datasets with 3%-11% improvement on Accuracy and 4%-15% improvement on Normalized Mutual Information. Furthermore, our quantitative analysis demonstrates the effectiveness of SCCL in leveraging the strengths of both bottom-up instance discrimination and top-down clustering to achieve better intra-cluster and inter-cluster distances when evaluated with the ground truth cluster labels.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
226,333
1809.00969
A Deeper Insight into the UnDEMoN: Unsupervised Deep Network for Depth and Ego-Motion Estimation
This paper presents an unsupervised deep learning framework called UnDEMoN for estimating dense depth map and 6-DoF camera pose information directly from monocular images. The proposed network is trained using unlabeled monocular stereo image pairs and is shown to provide superior performance in depth and ego-motion estimation compared to the existing state-of-the-art. These improvements are achieved by introducing a new objective function that aims to minimize spatial as well as temporal reconstruction losses simultaneously. These losses are defined using bi-linear sampling kernel and penalized using the Charbonnier penalty function. The objective function, thus created, provides robustness to image gradient noises thereby improving the overall estimation accuracy without resorting to any coarse to fine strategies which are currently prevalent in the literature. Another novelty lies in the fact that we combine a disparity-based depth estimation network with a pose estimation network to obtain absolute scale-aware 6 DOF Camera pose and superior depth map. The effectiveness of the proposed approach is demonstrated through performance comparison with the existing supervised and unsupervised methods on the KITTI driving dataset.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
106,704
2403.08058
CHAI: Clustered Head Attention for Efficient LLM Inference
Large Language Models (LLMs) with hundreds of billions of parameters have transformed the field of machine learning. However, serving these models at inference time is both compute and memory intensive, where a single request can require multiple GPUs and tens of Gigabytes of memory. Multi-Head Attention is one of the key components of LLMs, which can account for over 50% of LLMs memory and compute requirement. We observe that there is a high amount of redundancy across heads on which tokens they pay attention to. Based on this insight, we propose Clustered Head Attention (CHAI). CHAI combines heads with a high amount of correlation for self-attention at runtime, thus reducing both memory and compute. In our experiments, we show that CHAI is able to reduce the memory requirements for storing K,V cache by up to 21.4% and inference time latency by up to 1.73x without any fine-tuning required. CHAI achieves this with a maximum 3.2% deviation in accuracy across 3 different models (i.e. OPT-66B, LLAMA-7B, LLAMA-33B) and 5 different evaluation datasets.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
437,158
2309.11081
Dense 2D-3D Indoor Prediction with Sound via Aligned Cross-Modal Distillation
Sound can convey significant information for spatial reasoning in our daily lives. To endow deep networks with such ability, we address the challenge of dense indoor prediction with sound in both 2D and 3D via cross-modal knowledge distillation. In this work, we propose a Spatial Alignment via Matching (SAM) distillation framework that elicits local correspondence between the two modalities in vision-to-audio knowledge transfer. SAM integrates audio features with visually coherent learnable spatial embeddings to resolve inconsistencies in multiple layers of a student model. Our approach does not rely on a specific input representation, allowing for flexibility in the input shapes or dimensions without performance degradation. With a newly curated benchmark named Dense Auditory Prediction of Surroundings (DAPS), we are the first to tackle dense indoor prediction of omnidirectional surroundings in both 2D and 3D with audio observations. Specifically, for audio-based depth estimation, semantic segmentation, and challenging 3D scene reconstruction, the proposed distillation framework consistently achieves state-of-the-art performance across various metrics and backbone architectures.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
393,270
2104.05442
Out-of-distribution detection in satellite image classification
In satellite image analysis, distributional mismatch between the training and test data may arise due to several reasons, including unseen classes in the test data and differences in the geographic area. Deep learning based models may behave in unexpected manner when subjected to test data that has such distributional shifts from the training data, also called out-of-distribution (OOD) examples. Predictive uncertainty analysis is an emerging research topic which has not been explored much in context of satellite image analysis. Towards this, we adopt a Dirichlet Prior Network based model to quantify distributional uncertainty of deep learning models for remote sensing. The approach seeks to maximize the representation gap between the in-domain and OOD examples for a better identification of unknown examples at test time. Experimental results on three exemplary test scenarios show the efficacy of the model in satellite image analysis.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
229,729
2212.14225
Symplectic self-orthogonal quasi-cyclic codes
In this paper, we establish the necessary and sufficient conditions for quasi-cyclic (QC) codes with index even to be symplectic self-orthogonal. Subsequently, we present the lower and upper bounds on the minimum symplectic distances of a class of $1$-generator QC codes and their symplectic dual codes by decomposing code spaces. As an application, we construct numerous new binary symplectic self-orthogonal QC codes with excellent parameters, leading to $117$ record-breaking quantum error-correction codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
338,546
2103.16670
Contrastive Learning of Single-Cell Phenotypic Representations for Treatment Classification
Learning robust representations to discriminate cell phenotypes based on microscopy images is important for drug discovery. Drug development efforts typically analyse thousands of cell images to screen for potential treatments. Early works focus on creating hand-engineered features from these images or learn such features with deep neural networks in a fully or weakly-supervised framework. Both require prior knowledge or labelled datasets. Therefore, subsequent works propose unsupervised approaches based on generative models to learn these representations. Recently, representations learned with self-supervised contrastive loss-based methods have yielded state-of-the-art results on various imaging tasks compared to earlier unsupervised approaches. In this work, we leverage a contrastive learning framework to learn appropriate representations from single-cell fluorescent microscopy images for the task of Mechanism-of-Action classification. The proposed work is evaluated on the annotated BBBC021 dataset, and we obtain state-of-the-art results in NSC, NCSB and drop metrics for an unsupervised approach. We observe an improvement of 10% in NCSB accuracy and 11% in NSC-NSCB drop over the previously best unsupervised method. Moreover, the performance of our unsupervised approach ties with the best supervised approach. Additionally, we observe that our framework performs well even without post-processing, unlike earlier methods. With this, we conclude that one can learn robust cell representations with contrastive learning.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
227,658
2207.11237
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders
While sequential recommender systems achieve significant improvements on capturing user dynamics, we argue that sequential recommenders are vulnerable against substitution-based profile pollution attacks. To demonstrate our hypothesis, we propose a substitution-based adversarial attack algorithm, which modifies the input sequence by selecting certain vulnerable elements and substituting them with adversarial items. In both untargeted and targeted attack scenarios, we observe significant performance deterioration using the proposed profile pollution algorithm. Motivated by such observations, we design an efficient adversarial defense method called Dirichlet neighborhood sampling. Specifically, we sample item embeddings from a convex hull constructed by multi-hop neighbors to replace the original items in input sequences. During sampling, a Dirichlet distribution is used to approximate the probability distribution in the neighborhood such that the recommender learns to combat local perturbations. Additionally, we design an adversarial training method tailored for sequential recommender systems. In particular, we represent selected items with one-hot encodings and perform gradient ascent on the encodings to search for the worst case linear combination of item embeddings in training. As such, the embedding function learns robust item representations and the trained recommender is resistant to test-time adversarial examples. Extensive experiments show the effectiveness of both our attack and defense methods, which consistently outperform baselines by a significant margin across model architectures and datasets.
false
false
false
false
true
true
true
false
false
false
false
false
true
false
false
false
false
false
309,564
2202.02352
Learning Interpretable, High-Performing Policies for Autonomous Driving
Gradient-based approaches in reinforcement learning (RL) have achieved tremendous success in learning policies for autonomous vehicles. While the performance of these approaches warrants real-world adoption, these policies lack interpretability, limiting deployability in the safety-critical and legally-regulated domain of autonomous driving (AD). AD requires interpretable and verifiable control policies that maintain high performance. We propose Interpretable Continuous Control Trees (ICCTs), a tree-based model that can be optimized via modern, gradient-based, RL approaches to produce high-performing, interpretable policies. The key to our approach is a procedure for allowing direct optimization in a sparse decision-tree-like representation. We validate ICCTs against baselines across six domains, showing that ICCTs are capable of learning interpretable policy representations that match or outperform baselines by up to 33% in AD scenarios while achieving a 300x-600x reduction in the number of policy parameters against deep learning baselines. Furthermore, we demonstrate the interpretability and utility of our ICCTs through a 14-car physical robot demonstration.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
278,769
2411.13785
Throughput Maximization for Movable Antenna Systems with Movement Delay Consideration
In this paper, we model the minimum achievable throughput within a transmission block of restricted duration and aim to maximize it in movable antenna (MA)-enabled multiuser downlink communications. Particularly, we account for the antenna moving delay caused by mechanical movement, which has not been fully considered in previous studies, and reveal the trade-off between the delay and signal-to-interference-plus-noise ratio at users. To this end, we first consider a single-user setup to analyze the necessity of antenna movement. By quantizing the virtual angles of arrival, we derive the requisite region size for antenna moving, design the initial MA position, and elucidate the relationship between quantization resolution and moving region size. Furthermore, an efficient algorithm is developed to optimize MA position via successive convex approximation, which is subsequently extended to the general multiuser setup. Numerical results demonstrate that the proposed algorithms outperform fixed-position antenna schemes and existing ones without consideration of movement delay. Additionally, our algorithms exhibit excellent adaptability and stability across various transmission block durations and moving region sizes, and are robust to different antenna moving speeds. This allows the hardware cost of MA-aided systems to be reduced by employing low rotational speed motors.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
509,918
1608.04738
An Efficient Character-Level Neural Machine Translation
Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems on the task of English-to-French translation. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose an efficient architecture to train a deep character-level neural machine translation by introducing a decimator and an interpolator. The decimator is used to sample the source sequence before encoding while the interpolator is used to resample after decoding. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is much faster and more memory-efficient in training than conventional character-based models. More interestingly, our model is able to translate the misspelled word like human beings.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
59,879
2209.06481
Targeting interventions for displacement minimization in opinion dynamics
Social influence is largely recognized as a key factor in opinion formation processes. Recently, the role of external forces in inducing opinion displacement and polarization in social networks has attracted significant attention. This is in particular motivated by the necessity to understand, and possibly prevent, interference phenomena during political campaigns and elections. In this paper, we formulate and solve a targeted intervention problem for opinion displacement minimization on a social network. Specifically, we consider a min-max problem whereby a social planner (the defender) aims at selecting the optimal network intervention within her given budget constraint in order to minimize the opinion displacement in the system, which an adversary (the attacker) is instead trying to maximize. Our results show that the optimal intervention of the defender has two regimes. For large enough budget, the optimal intervention of the social planner acts on all nodes proportionally to a new notion of network centrality. For lower budget values, such optimal intervention has a more delicate structure and is rather concentrated on a few target individuals.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
317,421
2107.12248
Are Bayesian neural networks intrinsically good at out-of-distribution detection?
The need to avoid confident predictions on unfamiliar data has sparked interest in out-of-distribution (OOD) detection. It is widely assumed that Bayesian neural networks (BNNs) are well suited for this task, as the endowed epistemic uncertainty should lead to disagreement in predictions on outliers. In this paper, we question this assumption and provide empirical evidence that proper Bayesian inference with common neural network architectures does not necessarily lead to good OOD detection. To circumvent the use of approximate inference, we start by studying the infinite-width case, where Bayesian inference can be exact via the corresponding Gaussian process. Strikingly, the kernels induced under common architectural choices lead to uncertainties that do not reflect the underlying data-generating process and are therefore unsuited for OOD detection. Finally, we study finite-width networks using Hamiltonian Monte Carlo (HMC), and observe OOD behavior that is consistent with the infinite-width case. Overall, our study discloses fundamental problems with naively using BNNs for OOD detection and opens interesting avenues for future research.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,846
1905.13300
Generative Imaging and Image Processing via Generative Encoder
This paper introduces a novel generative encoder (GE) model for generative imaging and image processing with applications in compressed sensing and imaging, image compression, denoising, inpainting, deblurring, and super-resolution. The GE model consists of a pre-training phase and a solving phase. In the pre-training phase, we separately train two deep neural networks: a generative adversarial network (GAN) with a generator $\mathcal{G}$ that captures the data distribution of a given image set, and an auto-encoder (AE) network with an encoder $\mathcal{E}$ that compresses images following the distribution estimated by the GAN. In the solving phase, given a noisy image $x=\mathcal{P}(x^*)$, where $x^*$ is the unknown target image and $\mathcal{P}$ is an operator applying additive, multiplicative, or convolutional noise, or equivalently given such an image $x$ in the compressed domain, i.e., given $m=\mathcal{E}(x)$, we solve the optimization problem \[ z^*=\underset{z}{\mathrm{argmin}}\, \|\mathcal{E}(\mathcal{G}(z))-m\|_2^2+\lambda\|z\|_2^2 \] to recover the image $x^*$ in a generative way via $\hat{x}:=\mathcal{G}(z^*)\approx x^*$, where $\lambda>0$ is a hyperparameter. The GE model unifies the generative capacity of GANs and the stability of AEs in the optimization framework above, instead of stacking GANs and AEs into a single network or combining their loss functions into one as in the existing literature. Numerical experiments show that the proposed model outperforms several state-of-the-art algorithms.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
133,064
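The GE abstract above (record 1905.13300) centers on a ridge-regularized latent-space optimization, z* = argmin_z ||E(G(z)) - m||^2 + lambda ||z||^2. As a hedged illustration only, not part of the dataset: the sketch below solves that objective by plain gradient descent, with random linear maps standing in for the paper's pretrained generator G and encoder E, and with illustrative dimensions, lambda, learning rate, and iteration count.

```python
import numpy as np

# Minimal sketch of the GE "solving phase": recover
#   z* = argmin_z ||E(G(z)) - m||^2 + lam * ||z||^2
# by gradient descent. G and E are random linear stand-ins here;
# in the paper both are pretrained deep networks.
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 16)) / 4.0   # "generator": latent (16) -> image (64)
E = rng.standard_normal((8, 64)) / 8.0    # "encoder": image (64) -> code (8)
lam = 0.1                                 # ridge weight lambda > 0

z_true = rng.standard_normal(16)
m = E @ (G @ z_true)                      # observed compressed measurement E(x)

def loss_and_grad(z):
    r = E @ (G @ z) - m                   # residual E(G(z)) - m
    loss = r @ r + lam * (z @ z)
    grad = 2.0 * (G.T @ (E.T @ r)) + 2.0 * lam * z
    return loss, grad

z = np.zeros(16)
lr = 0.05
for _ in range(2000):
    _, g = loss_and_grad(z)
    z -= lr * g                           # gradient step toward z*

x_hat = G @ z                             # generative reconstruction G(z*)
```

Because the objective is strongly convex in z (thanks to the lambda term), gradient descent converges to the unique minimizer; with deep nonlinear G and E the landscape is non-convex, which is where the GAN/AE pre-training matters.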