Dataset schema:

  column              dtype    range / classes
  id                  string   (lengths 9–16)
  title               string   (lengths 4–278)
  abstract            string   (lengths 3–4.08k)
  cs.HC               bool     (2 classes)
  cs.CE               bool     (2 classes)
  cs.SD               bool     (2 classes)
  cs.SI               bool     (2 classes)
  cs.AI               bool     (2 classes)
  cs.IR               bool     (2 classes)
  cs.LG               bool     (2 classes)
  cs.RO               bool     (2 classes)
  cs.CL               bool     (2 classes)
  cs.IT               bool     (2 classes)
  cs.SY               bool     (2 classes)
  cs.CV               bool     (2 classes)
  cs.CR               bool     (2 classes)
  cs.CY               bool     (2 classes)
  cs.MA               bool     (2 classes)
  cs.NE               bool     (2 classes)
  cs.DB               bool     (2 classes)
  Other               bool     (2 classes)
  __index_level_0__   int64    (0–541k)

The boolean columns are binary arXiv subject-category indicators (multi-label: a record may have several set to true).
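Assuming each record is loaded as a dict keyed by the column names above (the dict layout is illustrative, not part of the dump), the boolean category columns can be collapsed into a readable label list like this:

```python
# Column order of the boolean label fields, as listed in the schema above.
LABEL_COLS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
              "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
              "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other"]

def decode_labels(row: dict) -> list[str]:
    """Return the names of the label columns that are True for this row,
    in schema column order."""
    return [c for c in LABEL_COLS if row.get(c)]

# Toy row mimicking the first record of the dump.
row = {c: False for c in LABEL_COLS}
row.update({"id": "1803.07438", "cs.AI": True, "cs.CR": True})
print(decode_labels(row))  # ['cs.AI', 'cs.CR']
```

The same helper works row by row over the whole dump, since every record carries the 18 booleans in the same column order.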
1803.07438
Ontology-Based Reasoning about the Trustworthiness of Cyber-Physical Systems
It has been challenging for the technical and regulatory communities to formulate requirements for trustworthiness of the cyber-physical systems (CPS) due to the complexity of the issues associated with their design, deployment, and operations. The US National Institute of Standards and Technology (NIST), through a public working group, has released a CPS Framework that adopts a broad and integrated view of CPS and positions trustworthiness among other aspects of CPS. This paper takes the model created by the CPS Framework and its further developments one step further, by applying ontological approaches and reasoning techniques in order to achieve greater understanding of CPS. The example analyzed in the paper demonstrates the enrichment of the original CPS model obtained through ontology and reasoning and its ability to deliver additional insights to the developers and operators of CPS.
labels: cs.AI, cs.CR
__index_level_0__: 93,053
1809.02750
Ultimate Boundedness for Switched Systems with Multiple Equilibria Under Disturbances
In this paper, we investigate the robustness to external disturbances of switched discrete and continuous systems with multiple equilibria. It is shown that if each subsystem of the switched system is Input-to-State Stable (ISS), then under switching signals that satisfy an average dwell-time bound, the solutions are ultimately bounded within a compact set. Furthermore, the size of this set varies monotonically with the supremum norm of the disturbance signal. It is observed that when the subsystems share a common equilibrium, ISS is recovered for solutions of the corresponding switched system; hence, the results in this paper are a natural generalization of classical results in switched systems that exhibit a common equilibrium. Additionally, we provide a method to analytically compute the average dwell time if each subsystem possesses a quadratic ISS-Lyapunov function. Our motivation for studying this class of switched systems arises from certain motion planning problems in robotics, where primitive motions, each corresponding to an equilibrium point of a dynamical system, must be composed to realize a task. However, the results are relevant to a much broader class of applications, in which composition of different modes of behavior is required.
labels: cs.SY
__index_level_0__: 107,122
2301.02820
Observations on the Lov\'asz $\theta$-Function, Graph Capacity, Eigenvalues, and Strong Products
This paper provides new observations on the Lov\'{a}sz $\theta$-function of graphs. These include a simple closed-form expression of that function for all strongly regular graphs, together with upper and lower bounds on that function for all regular graphs. These bounds are expressed in terms of the second-largest and smallest eigenvalues of the adjacency matrix of the regular graph, together with sufficient conditions for equalities (the upper bound is due to Lov\'{a}sz, followed by a new sufficient condition for its tightness). These results are shown to be useful in many ways, leading to the determination of the exact value of the Shannon capacity of various graphs, eigenvalue inequalities, and bounds on the clique and chromatic numbers of graphs. Since the Lov\'{a}sz $\theta$-function factorizes for the strong product of graphs, the results are also particularly useful for parameters of strong products or strong powers of graphs. Bounds on the smallest and second-largest eigenvalues of strong products of regular graphs are consequently derived, expressed as functions of the Lov\'{a}sz $\theta$-function (or the smallest eigenvalue) of each factor. The resulting lower bound on the second-largest eigenvalue of a $k$-fold strong power of a regular graph is compared to the Alon--Boppana bound; under a certain condition, the new bound is superior in its exponential growth rate (in $k$). Lower bounds on the chromatic number of strong products of graphs are expressed in terms of the order and the Lov\'{a}sz $\theta$-function of each factor. The utility of these bounds is exemplified, leading in some cases to an exact determination of the chromatic numbers of strong products or strong powers of graphs. The present research paper is aimed to have tutorial value as well.
labels: cs.IT
__index_level_0__: 339,608
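For reference, two classical facts the abstract above builds on can be stated explicitly (these are the standard results, not the paper's new bounds): for a $d$-regular graph $G$ on $n$ vertices with smallest adjacency eigenvalue $\lambda_n$, Lovász's eigenvalue upper bound and the factorization over strong products read

```latex
% Lovász's upper bound for a d-regular graph G on n vertices,
% with smallest adjacency eigenvalue \lambda_n:
\[
  \vartheta(G) \;\le\; \frac{-n\,\lambda_n}{d - \lambda_n},
\]
% and the theta-function factorizes over the strong product:
\[
  \vartheta(G \boxtimes H) \;=\; \vartheta(G)\,\vartheta(H).
\]
```

The factorization is what makes $\vartheta$ useful for strong powers: bounds on $\vartheta$ of each factor immediately give bounds for the $k$-fold strong power.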
2401.11174
Pixel-Wise Recognition for Holistic Surgical Scene Understanding
This paper presents the Holistic and Multi-Granular Surgical Scene Understanding of Prostatectomies (GraSP) dataset, a curated benchmark that models surgical scene understanding as a hierarchy of complementary tasks with varying levels of granularity. Our approach encompasses long-term tasks, such as surgical phase and step recognition, and short-term tasks, including surgical instrument segmentation and atomic visual actions detection. To exploit our proposed benchmark, we introduce the Transformers for Actions, Phases, Steps, and Instrument Segmentation (TAPIS) model, a general architecture that combines a global video feature extractor with localized region proposals from an instrument segmentation model to tackle the multi-granularity of our benchmark. Through extensive experimentation in ours and alternative benchmarks, we demonstrate TAPIS's versatility and state-of-the-art performance across different tasks. This work represents a foundational step forward in Endoscopic Vision, offering a novel framework for future research towards holistic surgical scene understanding.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 422,899
2312.13035
Evolutionary Optimization of 1D-CNN for Non-contact Respiration Pattern Classification
In this study, we present a deep learning-based approach for time-series respiration data classification. The dataset contains regular breathing patterns as well as various forms of abnormal breathing, obtained through non-contact incoherent light-wave sensing (LWS) technology. Given the one-dimensional (1D) nature of the data, we employed a 1D convolutional neural network (1D-CNN) for classification purposes. A genetic algorithm was employed to optimize the 1D-CNN architecture to maximize classification accuracy. To address the computational complexity associated with training the 1D-CNN across multiple generations, we implemented transfer learning from a pre-trained model. This approach significantly reduced the computational time required for training, thereby enhancing the efficiency of the optimization process. This study contributes valuable insights into the potential applications of deep learning methodologies for enhancing respiratory anomaly detection through precise and efficient respiration classification.
labels: cs.LG
__index_level_0__: 417,190
2008.10467
On-line Capacity Estimation for Lithium-ion Battery Cells via an Electrochemical Model-based Adaptive Interconnected Observer
Battery aging is a natural process that contributes to capacity and power fade, resulting in a gradual performance degradation over time and usage. State of Charge (SOC) and State of Health (SOH) monitoring of an aging battery poses a challenging task to the Battery Management System (BMS) due to the lack of direct measurements. Estimation algorithms based on an electrochemical model that take into account the impact of aging on physical battery parameters can provide accurate information on lithium concentration and cell capacity over a battery's usable lifespan. A temperature-dependent electrochemical model, the Enhanced Single Particle Model (ESPM), forms the basis for the synthesis of an adaptive interconnected observer that exploits the relationship between capacity and power fade, due to the growth of the Solid Electrolyte Interphase (SEI) layer, to enable combined estimation of states (lithium concentration in both electrodes and cell capacity) and aging-sensitive transport parameters (anode diffusion coefficient and SEI layer ionic conductivity). The practical stability conditions for the adaptive observer are derived using Lyapunov's theory. Validation results against experimental data show a bounded capacity estimation error within 2% of its true value. Further, the effectiveness of capacity estimation is tested for two cells at different stages of aging. The robustness of capacity estimates under measurement noise and sensor bias is studied.
labels: cs.SY
__index_level_0__: 193,011
2402.08952
A two-stage solution to quantum process tomography: error analysis and optimal design
Quantum process tomography is a critical task for characterizing the dynamics of quantum systems and achieving precise quantum control. In this paper, we propose a two-stage solution for both trace-preserving and non-trace-preserving quantum process tomography. Utilizing a tensor structure, our algorithm exhibits a computational complexity of $O(MLd^2)$ where $d$ is the dimension of the quantum system and $ M $, $ L $ represent the numbers of different input states and measurement operators, respectively. We establish an analytical error upper bound and then design the optimal input states and the optimal measurement operators, which are both based on minimizing the error upper bound and maximizing the robustness characterized by the condition number. Numerical examples and testing on IBM quantum devices are presented to demonstrate the performance and efficiency of our algorithm.
labels: cs.SY
__index_level_0__: 429,303
1401.5221
Optimal Intelligent Control for Wind Turbulence Rejection in WECS Using ANNs and Genetic Fuzzy Approach
One of the disadvantages of connecting wind energy conversion systems (WECSs) to transmission networks is the plentiful turbulence of wind speed, so the effects of this problem must be controlled. Nowadays, pitch-controlled WECSs are increasingly used for variable-speed, variable-pitch wind turbines. Megawatt-class wind turbines in wind farms generally turn at variable speed, so turbine operation must be controlled in order to maximize the conversion efficiency below rated power and reduce loading on the drive-train. Given the random and non-linear nature of wind turbulence and the ability of Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) Artificial Neural Networks (ANNs) to model and control this turbulence, in this study widespread changes of wind have been studied using MLP and RBF ANNs. In addition, a new genetic fuzzy system has been successfully applied to identify disturbance wind at the turbine input. The output power has thus been regulated within the optimal and nominal range by pitch-angle regulation. Consequently, our proposed approaches regulate the output aerodynamic power and torque within the nominal range.
labels: cs.SY, cs.NE
__index_level_0__: 30,175
2408.03706
Local Topology Measures of Contextual Language Model Latent Spaces With Applications to Dialogue Term Extraction
A common approach for sequence tagging tasks based on contextual word representations is to train a machine learning classifier directly on these embedding vectors. This approach has two shortcomings. First, such methods consider single input sequences in isolation and are unable to put an individual embedding vector in relation to vectors outside the current local context of use. Second, the high performance of these models relies on fine-tuning the embedding model in conjunction with the classifier, which may not always be feasible due to the size or inaccessibility of the underlying feature-generation model. It is thus desirable, given a collection of embedding vectors of a corpus, i.e., a datastore, to find features of each vector that describe its relation to other, similar vectors in the datastore. With this in mind, we introduce complexity measures of the local topology of the latent space of a contextual language model with respect to a given datastore. The effectiveness of our features is demonstrated through their application to dialogue term extraction. Our work continues a line of research that explores the manifold hypothesis for word embeddings, demonstrating that local structure in the space carved out by word embeddings can be exploited to infer semantic properties.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 479,125
2502.13442
TreeCut: A Synthetic Unanswerable Math Word Problem Dataset for LLM Hallucination Evaluation
Large language models (LLMs) now achieve near-human performance on standard math word problem benchmarks (e.g., GSM8K), yet their true reasoning ability remains disputed. A key concern is that models often produce confident, yet unfounded, answers to unanswerable problems. We introduce TreeCut, a synthetic dataset that systematically generates infinite unanswerable math word problems and their answerable counterparts, by representing each question as a tree and removing chosen necessary conditions. Experiments show that TreeCut effectively induces hallucinations in large language models, including GPT-4o and o3-mini, with rates of 61% and 42% in their respective worst-case scenarios. Further analysis highlights that deeper or more complex trees, composite item names, and removing a necessary condition near the middle of a path all increase the likelihood of hallucinations, underscoring the persistent challenges LLMs face in identifying unanswerable math problems.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 535,365
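The cutting idea in the TreeCut abstract above — model a question as a dependency tree and remove a necessary condition to make it unanswerable — can be illustrated with a minimal sketch (the tree layout and names are hypothetical, not the dataset's actual generator):

```python
# A question is modeled as a tree: each derived quantity is computable from its
# children, and leaf quantities must be given in the problem text. Removing a
# quantity that some ancestor needs ("cutting" the tree) makes the root
# quantity underdetermined, i.e. the question becomes unanswerable.

def answerable(deps: dict, given: set, root: str) -> bool:
    """A quantity is answerable if it is given directly, or if it has
    dependencies and all of them are answerable."""
    if root in given:
        return True
    children = deps.get(root, [])
    return bool(children) and all(answerable(deps, given, c) for c in children)

deps = {"total": ["apples", "oranges"], "apples": ["bags", "per_bag"]}
given = {"bags", "per_bag", "oranges"}
print(answerable(deps, given, "total"))                 # True
print(answerable(deps, given - {"per_bag"}, "total"))   # False: a necessary condition was cut
```

Dropping a condition deep inside the tree (here `per_bag`) leaves the surface text plausible while silently removing information the answer needs, which is exactly the failure mode the benchmark probes.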
1904.01855
A Stochastic Interpretation of Stochastic Mirror Descent: Risk-Sensitive Optimality
Stochastic mirror descent (SMD) is a fairly new family of algorithms that has recently found a wide range of applications in optimization, machine learning, and control. It can be considered a generalization of the classical stochastic gradient algorithm (SGD), where instead of updating the weight vector along the negative direction of the stochastic gradient, the update is performed in a "mirror domain" defined by the gradient of a (strictly convex) potential function. This potential function, and the mirror domain it yields, provides considerable flexibility in the algorithm compared to SGD. While many properties of SMD have already been obtained in the literature, in this paper we exhibit a new interpretation of SMD, namely that it is a risk-sensitive optimal estimator when the unknown weight vector and additive noise are non-Gaussian and belong to the exponential family of distributions. The analysis also suggests a modified version of SMD, which we refer to as symmetric SMD (SSMD). The proofs rely on some simple properties of Bregman divergence, which allow us to extend results from quadratics and Gaussians to certain convex functions and exponential families in a rather seamless way.
labels: cs.LG, cs.SY
__index_level_0__: 126,268
2003.13910
Attention-based Multi-modal Fusion Network for Semantic Scene Completion
This paper presents an end-to-end 3D convolutional network named attention-based multi-modal fusion network (AMFNet) for the semantic scene completion (SSC) task of inferring the occupancy and semantic labels of a volumetric 3D scene from single-view RGB-D images. Compared with previous methods which use only the semantic features extracted from RGB-D images, the proposed AMFNet learns to perform effective 3D scene completion and semantic segmentation simultaneously by leveraging the experience of inferring 2D semantic segmentation from RGB-D images as well as the reliable depth cues in the spatial dimension. It is achieved by employing a multi-modal fusion architecture boosted from 2D semantic segmentation and a 3D semantic completion network empowered by residual attention blocks. We validate our method on both the synthetic SUNCG-RGBD dataset and the real NYUv2 dataset, and the results show that our method achieves gains of 2.5% and 2.6%, respectively, against the state-of-the-art method.
labels: cs.CV
__index_level_0__: 170,353
2204.09088
Exploration of Machine Learning Classification Models Used for Behavioral Biometrics Authentication
Mobile devices have been manufactured and enhanced at growing rates in the past decades. While this growth has significantly evolved the capability of these devices, their security has been falling behind. This contrast in development between capability and security of mobile devices is a significant problem with the sensitive information of the public at risk. Continuing the previous work in this field, this study identifies key Machine Learning algorithms currently being used for behavioral biometric mobile authentication schemes and aims to provide a comprehensive review of these algorithms when used with touch dynamics and phone movement. Throughout this paper the benefits, limitations, and recommendations for future work will be discussed.
labels: cs.AI, cs.CR
__index_level_0__: 292,310
2103.04329
Pose Discrepancy Spatial Transformer Based Feature Disentangling for Partial Aspect Angles SAR Target Recognition
This letter presents a novel framework termed DistSTN for the task of synthetic aperture radar (SAR) automatic target recognition (ATR). In contrast to the conventional SAR ATR algorithms, DistSTN considers a more challenging practical scenario for non-cooperative targets whose aspect angles for training are incomplete and limited in a partial range while those of testing samples are unlimited. To address this issue, instead of learning the pose invariant features, DistSTN newly involves an elaborated feature disentangling model to separate the learned pose factors of a SAR target from the identity ones so that they can independently control the representation process of the target image. To disentangle the explainable pose factors, we develop a pose discrepancy spatial transformer module in DistSTN to characterize the intrinsic transformation between the factors of two different targets with an explicit geometric model. Furthermore, DistSTN develops an amortized inference scheme that enables efficient feature extraction and recognition using an encoder-decoder mechanism. Experimental results with the moving and stationary target acquisition and recognition (MSTAR) benchmark demonstrate the effectiveness of our proposed approach. Compared with the other ATR algorithms, DistSTN can achieve higher recognition accuracy.
labels: cs.CV
__index_level_0__: 223,602
2308.09892
Utilizing Semantic Textual Similarity for Clinical Survey Data Feature Selection
Survey data can contain a high number of features while having a comparatively low quantity of examples. Machine learning models that attempt to predict outcomes from survey data under these conditions can overfit and result in poor generalizability. One remedy to this issue is feature selection, which attempts to select an optimal subset of features to learn upon. A relatively unexplored source of information in the feature selection process is the usage of textual names of features, which may be semantically indicative of which features are relevant to a target outcome. The relationships between feature names and target names can be evaluated using language models (LMs) to produce semantic textual similarity (STS) scores, which can then be used to select features. We examine the performance using STS to select features directly and in the minimal-redundancy-maximal-relevance (mRMR) algorithm. The performance of STS as a feature selection metric is evaluated against preliminary survey data collected as a part of a clinical study on persistent post-surgical pain (PPSP). The results suggest that features selected with STS can result in higher performance models compared to traditional feature selection algorithms.
labels: cs.LG, cs.CL
__index_level_0__: 386,467
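The STS-based selection described in the abstract above can be sketched roughly as follows. Note the embedding here is a toy bag-of-words stand-in chosen so the example is self-contained; the paper uses language-model embeddings, and the feature/target names below are invented for illustration:

```python
import math
from collections import Counter

def toy_embed(text: str) -> Counter:
    """Stand-in embedding: bag-of-words counts. A real STS setup would use a
    sentence-embedding language model here; this is purely illustrative."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_by_sts(feature_names, target_name, k):
    """Rank features by similarity of their names to the target name; keep top-k."""
    scored = sorted(feature_names,
                    key=lambda f: cosine(toy_embed(f), toy_embed(target_name)),
                    reverse=True)
    return scored[:k]

features = ["pain intensity score", "surgery duration", "post surgical pain level"]
print(select_by_sts(features, "persistent post surgical pain", k=2))
# ['post surgical pain level', 'pain intensity score']
```

The same similarity scores can also replace the relevance term inside mRMR, which is the second configuration the abstract evaluates.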
1902.09030
Rapidly Adapting Moment Estimation
Adaptive gradient methods such as Adam have been shown to be very effective for training deep neural networks (DNNs) by tracking the second moment of gradients to compute the individual learning rates. Differently from existing methods, we make use of the most recent first moment of gradients to compute the individual learning rates per iteration. The motivation behind it is that the dynamic variation of the first moment of gradients may provide useful information to obtain the learning rates. We refer to the new method as the rapidly adapting moment estimation (RAME). The theoretical convergence of deterministic RAME is studied by using an analysis similar to the one used in [1] for Adam. Experimental results for training a number of DNNs show promising performance of RAME w.r.t. the convergence speed and generalization performance compared to the stochastic heavy-ball (SHB) method, Adam, and RMSprop.
labels: cs.LG
__index_level_0__: 122,324
1801.05931
Faster Learning by Reduction of Data Access Time
Nowadays, the major challenge in machine learning is the Big Data challenge. In big data problems, due to a large number of data points, a large number of features in each data point, or both, the training of models has become very slow. The training time has two major components: the time to access the data and the time to process (learn from) the data. So far, research has focused only on the second part, i.e., learning from the data. In this paper, we propose one possible solution to handle big data problems in machine learning: reducing the training time by reducing the data access time, using systematic sampling and cyclic/sequential sampling to select mini-batches from the dataset. To prove the effectiveness of the proposed sampling techniques, we use Empirical Risk Minimization, a commonly used machine learning problem, in the strongly convex and smooth case. The problem is solved using SAG, SAGA, SVRG, SAAG-II and MBSGD (mini-batched SGD), each with two step-size determination techniques, namely, constant step size and the backtracking line search method. Theoretical results prove the same convergence, in expectation, for systematic sampling, cyclic sampling and the widely used random sampling technique. Experimental results with benchmark datasets prove the efficacy of the proposed sampling techniques and show up to six times faster training.
labels: cs.AI, cs.LG
__index_level_0__: 88,538
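The two mini-batch selection schemes named in the abstract above can be sketched generically (a rough illustration of systematic and cyclic sampling over index sets, not the authors' implementation):

```python
import random

def cyclic_batches(n: int, batch_size: int):
    """Cyclic/sequential sampling: partition indices 0..n-1 into consecutive
    mini-batches, so each batch is a contiguous slice of the dataset."""
    for start in range(0, n, batch_size):
        yield list(range(start, min(start + batch_size, n)))

def systematic_batches(n: int, batch_size: int):
    """Systematic sampling: choose a random offset, then take every k-th index
    (k = n // batch_size), so each batch spreads across the whole dataset."""
    k = max(n // batch_size, 1)
    offsets = list(range(k))
    random.shuffle(offsets)  # only the order of offsets is random
    for off in offsets:
        yield list(range(off, n, k))[:batch_size]

print(list(cyclic_batches(10, 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Both schemes touch contiguous or strided memory rather than scattered random locations, which is the data-access-time saving the paper targets; randomness survives only in the batch order, not in which indices form a batch.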
1911.10317
PlantDoc: A Dataset for Visual Plant Disease Detection
India loses 35% of the annual crop yield due to plant diseases. Early detection of plant diseases remains difficult due to the lack of lab infrastructure and expertise. In this paper, we explore the possibility of computer vision approaches for scalable and early plant disease detection. The lack of availability of a sufficiently large-scale non-lab dataset remains a major challenge for enabling vision-based plant disease detection. Against this background, we present PlantDoc: a dataset for visual plant disease detection. Our dataset contains 2,598 data points in total across 13 plant species and up to 17 classes of diseases, involving approximately 300 human hours of effort in annotating internet scraped images. To show the efficacy of our dataset, we learn 3 models for the task of plant disease classification. Our results show that modelling using our dataset can increase the classification accuracy by up to 31%. We believe that our dataset can help reduce the entry barrier of computer vision techniques in plant disease detection.
labels: cs.CV
__index_level_0__: 154,788
2212.05716
On Generalization and Regularization via Wasserstein Distributionally Robust Optimization
Wasserstein distributionally robust optimization (DRO) has gained prominence in operations research and machine learning as a powerful method for achieving solutions with favorable out-of-sample performance. Two compelling explanations for its success are the generalization bounds derived from Wasserstein DRO and its equivalence to regularization schemes commonly used in machine learning. However, existing results on generalization bounds and regularization equivalence are largely limited to settings where the Wasserstein ball is of a specific type, and the decision criterion takes certain forms of expected functions. In this paper, we show that generalization bounds and regularization equivalence can be obtained in a significantly broader setting, where the Wasserstein ball is of a general type and the decision criterion accommodates any form, including general risk measures. This not only addresses important machine learning and operations management applications but also expands to general decision-theoretical frameworks previously unaddressed by Wasserstein DRO. Our results are strong in that the generalization bounds do not suffer from the curse of dimensionality and the equivalency to regularization is exact. As a by-product, we show that Wasserstein DRO coincides with the recent max-sliced Wasserstein DRO for {\it any} decision criterion under affine decision rules -- resulting in both being efficiently solvable as convex programs via our general regularization results. These general assurances provide a strong foundation for expanding the application of Wasserstein DRO across diverse domains of data-driven decision problems.
labels: cs.LG
__index_level_0__: 335,868
1603.09558
Sub-pixel accuracy edge fitting by means of B-spline
Local perturbations around contours strongly disturb the final result of computer vision tasks. It is common to introduce a priori information in the estimation process. Improvement can be achieved via a deformable model such as the snake model. In recent works, the deformable contour is modeled by means of B-spline snakes which allows local control, concise representation, and the use of fewer parameters. The estimation of the sub-pixel edges using a global B-spline model relies on the contour global determination according to a maximum likelihood framework and using the observed data likelihood. This procedure guarantees that the noisiest data will be filtered out. The data likelihood is computed as a consequence of the observation model which includes both orientation and position information. Comparative experiments of this algorithm and the classical spline interpolation have shown that the proposed algorithm outperforms the classical approach for Gaussian and Salt & Pepper noise.
labels: cs.CV
__index_level_0__: 53,939
2306.09426
Deep learning techniques for blind image super-resolution: A high-scale multi-domain perspective evaluation
Although several solutions and experiments addressing image super-resolution (SR), boosted by deep learning (DL) techniques, have been conducted recently, they do not usually design evaluations with high scaling factors, capping them at 2x or 4x. Moreover, the datasets are generally benchmarks which do not truly encompass significant diversity of domains to properly evaluate the techniques. It is also worth remarking that blind SR is attractive for real-world scenarios, since it is based on the idea that the degradation process is unknown, and hence techniques in this context rely basically on low-resolution (LR) images. In this article, we present a high-scale (8x) controlled experiment which evaluates five recent DL techniques tailored for blind image SR: Adaptive Pseudo Augmentation (APA), Blind Image SR with Spatially Variant Degradations (BlindSR), Deep Alternating Network (DAN), FastGAN, and Mixture of Experts Super-Resolution (MoESR). We consider 14 small datasets from five different broader domains: aerial, fauna, flora, medical, and satellite. Another distinctive characteristic of our evaluation is that some of the DL approaches were designed for single-image SR while others were not. Two no-reference metrics were selected to assess the techniques, the classical natural image quality evaluator (NIQE) and the recent transformer-based multi-dimension attention network for no-reference image quality assessment (MANIQA) score. Overall, MoESR can be regarded as the best solution, although the perceptual quality of the created HR images of all the techniques still needs to improve. Supporting code: https://github.com/vsantjr/DL_BlindSR. Datasets: https://www.kaggle.com/datasets/valdivinosantiago/dl-blindsr-datasets.
labels: cs.LG, cs.CV
__index_level_0__: 373,826
1307.3780
On the Convergence Speed of Spatially Coupled LDPC Ensembles
Spatially coupled low-density parity-check codes show an outstanding performance under the low-complexity belief propagation (BP) decoding algorithm. They exhibit a peculiar convergence phenomenon above the BP threshold of the underlying non-coupled ensemble, with a wave-like convergence propagating through the spatial dimension of the graph, allowing the MAP threshold to be approached. We focus on this particularly interesting regime in between the BP and MAP thresholds. On the binary erasure channel, it has been proved that the information propagates with a constant speed toward the successful decoding solution. We derive an upper bound on the propagation speed, depending only on the basic parameters of the spatially coupled code ensemble, such as the degree distribution and the coupling factor $w$. We illustrate the convergence speed of different code ensembles by simulation results, and show how optimizing degree profiles helps to speed up the convergence.
labels: cs.IT
__index_level_0__: 25,832
1911.11236
RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds
We study the problem of efficient semantic segmentation for large-scale 3D point clouds. By relying on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches are only able to be trained and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds. The key to our approach is to use random point sampling instead of more complex point selection approaches. Although remarkably computation- and memory-efficient, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module to progressively increase the receptive field for each 3D point, thereby effectively preserving geometric details. Extensive experiments show that our RandLA-Net can process 1 million points in a single pass, up to 200X faster than existing approaches. Moreover, our RandLA-Net clearly surpasses state-of-the-art approaches for semantic segmentation on two large-scale benchmarks, Semantic3D and SemanticKITTI.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
155,054
1308.5964
Automated, Credible Autocoding of An Unmanned Aggressive Maneuvering Car Controller
This article describes the application of a credible autocoding framework for control systems towards a nonlinear car controller example. The framework generates code, along with guarantees of high-level functional properties about the code that can be independently verified. These high-level functional properties not only serve as a certificate of good system behavior but also can be used to guarantee the absence of runtime errors. In one of our previous works, we have constructed a prototype autocoder with proofs that demonstrates this framework in a fully automatic fashion for linear and quasi-nonlinear controllers. With the nonlinear car example, we propose to further extend the prototype's dataflow annotation language environment with several new annotation symbols to enable the expression of general predicates and dynamical systems. We demonstrate manually how the new extensions to the prototype autocoder work on the car controller using the output language Matlab. Finally, we discuss the requirements and scalability issues of the automatic analysis and verification of the documented output code.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
26,676
2110.00925
Matching Models for Graph Retrieval
Graph Retrieval has witnessed continued interest and progress in the past few years. In this report, we focus on neural network based approaches for graph matching and retrieving similar graphs from a corpus of graphs. We explore methods which can soft predict the similarity between two graphs. Later, we gauge the power of a particular baseline (Shortest Path Kernel) and try to model it in our product graph random walks setting while making it more generalised.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
258,587
2305.02909
Aligning Bird-Eye View Representation of Point Cloud Sequences using Scene Flow
Low-resolution point clouds are challenging for object detection methods due to their sparsity. Densifying the present point cloud by concatenating it with its predecessors is a popular solution to this challenge. Such concatenation is possible thanks to the removal of ego vehicle motion using its odometry. This method is called Ego Motion Compensation (EMC). Thanks to the added points, EMC significantly improves the performance of single-frame detectors. However, it suffers from the shadow effect that manifests in dynamic objects' points scattering along their trajectories. This effect results in a misalignment between feature maps and objects' locations, thus limiting performance improvement to stationary and slow-moving objects only. Scene flow allows aligning point clouds in 3D space, thus naturally resolving the misalignment in feature spaces. By observing that scene flow computation shares several components with 3D object detection pipelines, we develop a plug-in module that enables single-frame detectors to compute scene flow to rectify their Bird-Eye View representation. Experiments on the NuScenes dataset show that our module leads to a significant increase (up to 16%) in the Average Precision of large vehicles, which interestingly demonstrates the most severe shadow effect. The code is published at https://github.com/quan-dao/pc-corrector.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
362,210
2103.08640
UPANets: Learning from the Universal Pixel Attention Networks
In image classification, skip- and densely-connection-based networks have dominated most leaderboards. Recently, following the successful development of multi-head attention in natural language processing, it is clear that the field now favors either Transformer-like models or hybrid CNNs with attention. However, the former require tremendous resources to train, while the latter strike a better balance in this direction. In this work, to make CNNs handle global and local information, we proposed UPANets, which equips channel-wise attention with a hybrid skip-densely-connection structure. Also, the extreme-connection structure makes UPANets robust with a smoother loss landscape. In experiments, UPANets surpassed most well-known and widely-used SOTAs with an accuracy of 96.47% in Cifar-10, 80.29% in Cifar-100, and 67.67% in Tiny Imagenet. Most importantly, these performances have high parameter efficiency and were obtained by training on a single consumer-grade GPU. We share the implementation code of UPANets at https://github.com/hanktseng131415go/UPANets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
224,949
1810.02411
Prefix-Free Code Distribution Matching for Probabilistic Constellation Shaping
In this work, we construct energy-efficient variable-to-fixed length (V2F), fixed-to-variable length (F2V), and variable-to-variable length (V2V) prefix-free codes, which are optimal (or near-optimal) in the sense that no (or few) other codes with the size can achieve a smaller energy per code letter for the same entropy rate. Under stringent constraints of 4096 entries or below per codebook, the constructed codes yield an energy per code letter within a few tenths of a dB of the unconstrained theoretic lower bound, across a wide range of entropy rates with a very fine granularity. We also propose a framing method that allows variable-length codes to be transmitted using a fixed-length frame. The penalty caused by framing is studied using simulations and analysis, showing that the energy per code letter is kept within 0.2 dB of the unconstrained theoretic limit for some tested codes with a large frame length. When framed prefix-free codes are used to implement probabilistic constellation shaping (PCS) for communications in the additive white Gaussian noise channel, simulations show that 1.1 dB and 0.65 dB of shaping gains are achieved relative to uniform 8- and 16-quadrature amplitude modulation (QAM), respectively.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
109,583
2006.15121
Learning Optimal Distributionally Robust Individualized Treatment Rules
Recent development in the data-driven decision science has seen great advances in individualized decision making. Given data with individual covariates, treatment assignments and outcomes, policy makers seek the best individualized treatment rule (ITR) that maximizes the expected outcome, known as the value function. Many existing methods assume that the training and testing distributions are the same. However, the estimated optimal ITR may have poor generalizability when the training and testing distributions are not identical. In this paper, we consider the problem of finding an optimal ITR from a restricted ITR class where there is some unknown covariate changes between the training and testing distributions. We propose a novel distributionally robust ITR (DR-ITR) framework that maximizes the worst-case value function across the values under a set of underlying distributions that are "close" to the training distribution. The resulting DR-ITR can guarantee the performance among all such distributions reasonably well. We further propose a calibrating procedure that tunes the DR-ITR adaptively to a small amount of calibration data from a target population. In this way, the calibrated DR-ITR can be shown to enjoy better generalizability than the standard ITR based on our numerical studies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
184,421
1102.1745
Restructuring in Combinatorial Optimization
The paper addresses a new class of combinatorial problems which consist in restructuring of solutions (as structures) in combinatorial optimization. Two main features of the restructuring process are examined: (i) a cost of the restructuring, (ii) a closeness to a goal solution. This problem corresponds to redesign (improvement, upgrade) of modular systems or solutions. The restructuring approach is described and illustrated for the following combinatorial optimization problems: knapsack problem, multiple choice problem, assignment problem, spanning tree problems. Examples illustrate the restructuring processes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
9,084
1610.02431
ResearchDoom and CocoDoom: Learning Computer Vision with Games
In this short note we introduce ResearchDoom, an implementation of the Doom first-person shooter that can extract detailed metadata from the game. We also introduce the CocoDoom dataset, a collection of pre-recorded data extracted from Doom gaming sessions along with annotations in the MS Coco format. ResearchDoom and CocoDoom can be used to train and evaluate a variety of computer vision methods such as object recognition, detection and segmentation at the level of instances and categories, tracking, ego-motion estimation, monocular depth estimation and scene segmentation. The code and data are available at http://www.robots.ox.ac.uk/~vgg/research/researchdoom.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
62,095
2210.16984
Synthesizer Preset Interpolation using Transformer Auto-Encoders
Sound synthesizers are widespread in modern music production but they increasingly require expert skills to be mastered. This work focuses on interpolation between presets, i.e., sets of values of all sound synthesis parameters, to enable the intuitive creation of new sounds from existing ones. We introduce a bimodal auto-encoder neural network, which simultaneously processes presets using multi-head attention blocks, and audio using convolutions. This model has been tested on a popular frequency modulation synthesizer with more than one hundred parameters. Experiments have compared the model to related architectures and methods, and have demonstrated that it performs smoother interpolations. After training, the proposed model can be integrated into commercial synthesizers for live interpolation or sound design tasks.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
327,525
2501.16142
Towards General-Purpose Model-Free Reinforcement Learning
Reinforcement learning (RL) promises a framework for near-universal problem-solving. In practice however, RL algorithms are often tailored to specific benchmarks, relying on carefully tuned hyperparameters and algorithmic choices. Recently, powerful model-based RL methods have shown impressive general results across benchmarks but come at the cost of increased complexity and slow run times, limiting their broader applicability. In this paper, we attempt to find a unifying model-free deep RL algorithm that can address a diverse class of domains and problem settings. To achieve this, we leverage model-based representations that approximately linearize the value function, taking advantage of the denser task objectives used by model-based RL while avoiding the costs associated with planning or simulated trajectories. We evaluate our algorithm, MR.Q, on a variety of common RL benchmarks with a single set of hyperparameters and show a competitive performance against domain-specific and general baselines, providing a concrete step towards building general-purpose model-free deep RL algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
527,832
2212.00281
Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models
Despite the impressive advancements achieved through vision-and-language pretraining, it remains unclear whether this joint learning paradigm can help understand each individual modality. In this work, we conduct a comparative analysis of the visual representations in existing vision-and-language models and vision-only models by probing a broad range of tasks, aiming to assess the quality of the learned representations in a nuanced manner. Interestingly, our empirical observations suggest that vision-and-language models are better at label prediction tasks like object and attribute prediction, while vision-only models are stronger at dense prediction tasks that require more localized information. We hope our study sheds light on the role of language in visual learning, and serves as an empirical guide for various pretrained models. Code will be released at https://github.com/Lizw14/visual_probing
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
334,003
2501.02906
Domain-Agnostic Co-Evolution of Generalizable Parallel Algorithm Portfolios
Generalization is the core objective when training optimizers from data. However, limited training instances often constrain the generalization capability of the trained optimizers. Co-evolutionary approaches address this challenge by simultaneously evolving a parallel algorithm portfolio (PAP) and an instance population to eventually obtain PAPs with good generalization. Yet, when applied to a specific problem class, these approaches have a major limitation. They require practitioners to provide instance generators specially tailored to the problem class, which is often non-trivial to design. This work proposes a general-purpose, off-the-shelf PAP construction approach, named domain-agnostic co-evolution of parameterized search (DACE), for binary optimization problems where decision variables take values of 0 or 1. The key innovation of DACE lies in its neural network-based domain-agnostic instance representation and generation mechanism that eliminates the need for domain-specific instance generators. The strong generality of DACE is validated across three real-world binary optimization problems: the complementary influence maximization problem (CIMP), the compiler arguments optimization problem (CAOP), and the contamination control problem (CCP). Given only a small set of training instances from these classes, DACE, without requiring any domain knowledge, constructs PAPs with better generalization performance than existing approaches on all three classes, despite their use of domain-specific instance generators.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
522,687
2110.07040
Data Incubation -- Synthesizing Missing Data for Handwriting Recognition
In this paper, we demonstrate how a generative model can be used to build a better recognizer through the control of content and style. We are building an online handwriting recognizer from a modest amount of training samples. By training our controllable handwriting synthesizer on the same data, we can synthesize handwriting with previously underrepresented content (e.g., URLs and email addresses) and style (e.g., cursive and slanted). Moreover, we propose a framework to analyze a recognizer that is trained with a mixture of real and synthetic training data. We use the framework to optimize data synthesis and demonstrate significant improvement on handwriting recognition over a model trained on real data only. Overall, we achieve a 66% reduction in Character Error Rate.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
260,838
2406.00751
Evaluating Distributed Representations for Multi-Level Lexical Semantics: A Research Proposal
Modern neural networks (NNs), trained on extensive raw sentence data, construct distributed representations by compressing individual words into dense, continuous, high-dimensional vectors. These representations are expected to capture multi-level lexical meaning. In this thesis, our objective is to examine the efficacy of distributed representations from NNs in encoding lexical meaning. Initially, we identify and formalize three levels of lexical semantics: \textit{local}, \textit{global}, and \textit{mixed} levels. Then, for each level, we evaluate language models by collecting or constructing multilingual datasets, leveraging various language models, and employing linguistic analysis theories. This thesis builds a bridge between computational models and lexical semantics, aiming to complement each other.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
460,008
2501.13908
Graph Neural Controlled Differential Equations For Collaborative Filtering
Graph Convolution Networks (GCNs) are widely considered state-of-the-art for recommendation systems. Several studies in the field of recommendation systems have attempted to apply collaborative filtering (CF) into the Neural ODE framework. These studies follow the same idea as LightGCN, which removes the weight matrix or replaces it with a discrete weight matrix. However, we argue that weight control is critical for neural ODE-based methods. The importance of weight in creating tailored graph convolution for each node is crucial, and employing a fixed/discrete weight means it cannot adjust over time within the ODE function. This rigidity in the graph convolution reduces its adaptability, consequently hindering the performance of recommendations. In this study, to create an optimal control for Neural ODE-based recommendation, we introduce a new method called Graph Neural Controlled Differential Equations for Collaborative Filtering (CDE-CF). Our method improves the performance of the Graph ODE-based method by incorporating weight control in a continuous manner. To evaluate our approach, we conducted experiments on various datasets. The results show that our method surpasses competing baselines, including GCNs-based models and state-of-the-art Graph ODE-based methods.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
526,871
2101.07969
Robust W-GAN-Based Estimation Under Wasserstein Contamination
Robust estimation is an important problem in statistics which aims at providing a reasonable estimator when the data-generating distribution lies within an appropriately defined ball around an uncontaminated distribution. Although minimax rates of estimation have been established in recent years, many existing robust estimators with provably optimal convergence rates are also computationally intractable. In this paper, we study several estimation problems under a Wasserstein contamination model and present computationally tractable estimators motivated by generative adversarial networks (GANs). Specifically, we analyze properties of Wasserstein GAN-based estimators for location estimation, covariance matrix estimation, and linear regression and show that our proposed estimators are minimax optimal in many scenarios. Finally, we present numerical results which demonstrate the effectiveness of our estimators.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
216,189
2310.16209
ELM Ridge Regression Boosting
We discuss a boosting approach for the Ridge Regression (RR) method, with applications to the Extreme Learning Machine (ELM), and we show that the proposed method significantly improves the classification performance and robustness of ELMs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
402,621
2104.07396
Node Co-occurrence based Graph Neural Networks for Knowledge Graph Link Prediction
We introduce a novel embedding model, named NoGE, which aims to integrate co-occurrence among entities and relations into graph neural networks to improve knowledge graph completion (i.e., link prediction). Given a knowledge graph, NoGE constructs a single graph considering entities and relations as individual nodes. NoGE then computes weights for edges among nodes based on the co-occurrence of entities and relations. Next, NoGE proposes Dual Quaternion Graph Neural Networks (DualQGNN) and utilizes DualQGNN to update vector representations for entity and relation nodes. NoGE then adopts a score function to produce the triple scores. Comprehensive experimental results show that NoGE obtains state-of-the-art results on three new and difficult benchmark datasets CoDEx for knowledge graph completion.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
230,402
1812.00493
Towards a More Practice-Aware Runtime Analysis of Evolutionary Algorithms
Theory of evolutionary computation (EC) aims at providing mathematically founded statements about the performance of evolutionary algorithms (EAs). The predominant topic in this research domain is runtime analysis, which studies the time it takes a given EA to solve a given optimization problem. Runtime analysis has witnessed significant advances in the last couple of years, allowing us to compute precise runtime estimates for several EAs and several problems. Runtime analysis is, however (and unfortunately!), often judged by practitioners to be of little relevance for real applications of EAs. Several reasons for this claim exist. We address two of them in this present work: (1) EA implementations often differ from their vanilla pseudocode description, which, in turn, typically form the basis for runtime analysis. To close the resulting gap between empirically observed and theoretically derived performance estimates, we therefore suggest to take this discrepancy into account in the mathematical analysis and to adjust, for example, the cost assigned to the evaluation of search points that equal one of their direct parents (provided that this is easy to verify as is the case in almost all standard EAs). (2) Most runtime analysis results make statements about the expected time to reach an optimal solution (and possibly the distribution of this optimization time) only, thus explicitly or implicitly neglecting the importance of understanding how the function values evolve over time. We suggest to extend runtime statements to runtime profiles, covering the expected time needed to reach points of intermediate fitness values. As a direct consequence, we obtain a result showing that the greedy (2+1) GA of Sudholt [GECCO 2012] outperforms any unary unbiased black-box algorithm on OneMax.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
115,276
2303.07338
Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need
Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting old ones. Traditional CIL models are trained from scratch to continually acquire knowledge as data evolves. Recently, pre-training has achieved substantial progress, making vast pre-trained models (PTMs) accessible for CIL. Contrary to traditional methods, PTMs possess generalizable embeddings, which can be easily transferred for CIL. In this work, we revisit CIL with PTMs and argue that the core factors in CIL are adaptivity for model updating and generalizability for knowledge transferring. 1) We first reveal that frozen PTM can already provide generalizable embeddings for CIL. Surprisingly, a simple baseline (SimpleCIL) which continually sets the classifiers of PTM to prototype features can beat state-of-the-art even without training on the downstream task. 2) Due to the distribution gap between pre-trained and downstream datasets, PTM can be further cultivated with adaptivity via model adaptation. We propose AdaPt and mERge (APER), which aggregates the embeddings of PTM and adapted models for classifier construction. APER is a general framework that can be orthogonally combined with any parameter-efficient tuning method, which holds the advantages of PTM's generalizability and adapted model's adaptivity. 3) Additionally, considering previous ImageNet-based benchmarks are unsuitable in the era of PTM due to data overlapping, we propose four new benchmarks for assessment, namely ImageNet-A, ObjectNet, OmniBenchmark, and VTAB. Extensive experiments validate the effectiveness of APER with a unified and concise framework. Code is available at https://github.com/zhoudw-zdw/RevisitingCIL
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
351,223
2310.10046
TRANSOM: An Efficient Fault-Tolerant System for Training LLMs
Large language models (LLMs) with hundreds of billions or trillions of parameters, represented by ChatGPT, have achieved profound impact on various fields. However, training LLMs with super-large-scale parameters requires large high-performance GPU clusters and long training periods lasting for months. Due to the inevitable hardware and software failures in large-scale clusters, maintaining uninterrupted and long-duration training is extremely challenging. As a result, a substantial amount of training time is devoted to task checkpoint saving and loading, task rescheduling and restart, and task manual anomaly checks, which greatly harms the overall training efficiency. To address these issues, we propose TRANSOM, a novel fault-tolerant LLM training system. In this work, we design three key subsystems: the training pipeline automatic fault tolerance and recovery mechanism named Transom Operator and Launcher (TOL), the training task multi-dimensional metric automatic anomaly detection system named Transom Eagle Eye (TEE), and the training checkpoint asynchronous access automatic fault tolerance and recovery technology named Transom Checkpoint Engine (TCE). Here, TOL manages the lifecycle of training tasks, while TEE is responsible for task monitoring and anomaly reporting. TEE detects training anomalies and reports them to TOL, which automatically enters the fault tolerance strategy to eliminate abnormal nodes and restart the training task. The asynchronous checkpoint saving and loading functionality provided by TCE greatly shortens the fault tolerance overhead. The experimental results indicate that TRANSOM significantly enhances the efficiency of large-scale LLM training on clusters. Specifically, the pre-training time for GPT3-175B has been reduced by 28%, while checkpoint saving and loading performance have improved by a factor of 20.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
400,060
2007.12988
Storywrangler: A massive exploratorium for sociolinguistic, cultural, socioeconomic, and political timelines using Twitter
In real-time, social media data strongly imprints world events, popular culture, and day-to-day conversations by millions of ordinary people at a scale that is scarcely conventionalized and recorded. Vitally, and absent from many standard corpora such as books and news archives, sharing and commenting mechanisms are native to social media platforms, enabling us to quantify social amplification (i.e., popularity) of trending storylines and contemporary cultural phenomena. Here, we describe Storywrangler, a natural language processing instrument designed to carry out an ongoing, day-scale curation of over 100 billion tweets containing roughly 1 trillion 1-grams from 2008 to 2021. For each day, we break tweets into unigrams, bigrams, and trigrams spanning over 100 languages. We track n-gram usage frequencies, and generate Zipf distributions, for words, hashtags, handles, numerals, symbols, and emojis. We make the data set available through an interactive time series viewer, and as downloadable time series and daily distributions. Although Storywrangler leverages Twitter data, our method of extracting and tracking dynamic changes of n-grams can be extended to any similar social media platform. We showcase a few examples of the many possible avenues of study we aim to enable including how social amplification can be visualized through 'contagiograms'. We also present some example case studies that bridge n-gram time series with disparate data sources to explore sociotechnical dynamics of famous individuals, box office success, and social unrest.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
188,985
1410.6532
A Novel Visual Word Co-occurrence Model for Person Re-identification
Person re-identification aims to maintain the identity of an individual in diverse locations through different non-overlapping camera views. The problem is fundamentally challenging due to appearance variations resulting from differing poses, illumination and configurations of camera views. To deal with these difficulties, we propose a novel visual word co-occurrence model. We first map each pixel of an image to a visual word using a codebook, which is learned in an unsupervised manner. The appearance transformation between camera views is encoded by a co-occurrence matrix of visual word joint distributions in probe and gallery images. Our appearance model naturally accounts for spatial similarities and variations caused by pose, illumination & configuration change across camera views. Linear SVMs are then trained as classifiers using these co-occurrence descriptors. On the VIPeR and CUHK Campus benchmark datasets, our method achieves 83.86% and 85.49% at rank-15 on the Cumulative Match Characteristic (CMC) curves, and beats the state-of-the-art results by 10.44% and 22.27%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
36,991
2103.10972
Learning Task Decomposition with Ordered Memory Policy Network
Many complex real-world tasks are composed of several levels of sub-tasks. Humans leverage these hierarchical structures to accelerate the learning process and achieve better generalization. In this work, we study the inductive bias and propose Ordered Memory Policy Network (OMPN) to discover subtask hierarchy by learning from demonstration. The discovered subtask hierarchy could be used to perform task decomposition, recovering the subtask boundaries in an unstructured demonstration. Experiments on Craft and Dial demonstrate that our model can achieve higher task decomposition performance under both unsupervised and weakly supervised settings, compared with strong baselines. OMPN can also be directly applied to partially observable environments and still achieve higher task decomposition performance. Our visualization further confirms that the subtask hierarchy can emerge in our model.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
225,618
2210.16592
Cram\'er-Rao Bound Minimization for IRS-Enabled Multiuser Integrated Sensing and Communication with Extended Target
This paper investigates an intelligent reflecting surface (IRS) enabled multiuser integrated sensing and communication (ISAC) system, which consists of one multi-antenna base station (BS), one IRS, multiple single-antenna communication users (CUs), and one extended target at the non-line-of-sight (NLoS) region of the BS. The IRS is deployed to not only assist the communication from the BS to the CUs, but also enable the BS's NLoS target sensing based on the echo signals from the BS-IRS-target-IRS-BS link. To provide full degrees of freedom for sensing, we suppose that the BS sends additional dedicated sensing signals combined with the information signals. Accordingly, we consider two types of CU receivers, namely Type-I and Type-II receivers, which do not have and have the capability of cancelling the interference from the sensing signals, respectively. Under this setup, we jointly optimize the transmit beamforming at the BS and the reflective beamforming at the IRS to minimize the Cram\'er-Rao bound (CRB) for estimating the target response matrix with respect to the IRS, subject to the minimum signal-to-interference-plus-noise ratio (SINR) constraints at the CUs and the maximum transmit power constraint at the BS. We present efficient algorithms to solve the highly non-convex SINR-constrained CRB minimization problems, by using the techniques of alternating optimization and semi-definite relaxation. Numerical results show that the proposed design achieves lower estimation CRB than other benchmark schemes, and the sensing signal interference pre-cancellation is beneficial when the number of CUs is greater than one.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
327,394
2411.19385
Zero-Forget Preservation of Semantic Communication Alignment in Distributed AI Networks
Future communication networks are expected to connect massive distributed artificial intelligence (AI). Exploiting aligned priori knowledge of AI pairs, it is promising to convert high-dimensional data transmission into highly-compressed semantic communications (SC). However, to accommodate the local data distribution and user preferences, AIs generally adapt to different domains, which fundamentally distorts the SC alignment. In this paper, we propose a zero-forget domain adaptation (ZFDA) framework to preserve SC alignment. To prevent the DA from changing substantial neural parameters of AI, we design sparse additive modifications (SAM) to the parameters, which can be efficiently stored and switched-off to restore the SC alignment. To optimize the SAM, we decouple it into tractable continuous variables and a binary mask, and then handle the binary mask by a score-based optimization. Experimental evaluations on a SC system for image transmissions validate that the proposed framework perfectly preserves the SC alignment with almost no loss of DA performance, even improved in some cases, at a cost of less than 1% of additional memory.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
512,222
2011.08952
Argumentative Topology: Finding Loop(holes) in Logic
Advances in natural language processing have resulted in increased capabilities with respect to multiple tasks. One of the possible causes of the observed performance gains is the introduction of increasingly sophisticated text representations. While many of the new word embedding techniques can be shown to capture particular notions of sentiment or associative structures, we explore the ability of two different word embeddings to uncover or capture the notion of logical shape in text. To this end we present a novel framework that we call Topological Word Embeddings which leverages mathematical techniques in dynamical system analysis and data driven shape extraction (i.e. topological data analysis). In this preliminary work we show that using a topological delay embedding we are able to capture and extract a different, shape-based notion of logic aimed at answering the question "Can we find a circle in a circular argument?"
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
207,036
2410.15326
A Survey of Uncertainty Estimation in LLMs: Theory Meets Practice
As large language models (LLMs) continue to evolve, understanding and quantifying the uncertainty in their predictions is critical for enhancing application credibility. However, the existing literature relevant to LLM uncertainty estimation often relies on heuristic approaches, lacking systematic classification of the methods. In this survey, we clarify the definitions of uncertainty and confidence, highlighting their distinctions and implications for model predictions. On this basis, we integrate theoretical perspectives, including Bayesian inference, information theory, and ensemble strategies, to categorize various classes of uncertainty estimation methods derived from heuristic approaches. Additionally, we address challenges that arise when applying these methods to LLMs. We also explore techniques for incorporating uncertainty into diverse applications, including out-of-distribution detection, data annotation, and question clarification. Our review provides insights into uncertainty estimation from both definitional and theoretical angles, contributing to a comprehensive understanding of this critical aspect in LLMs. We aim to inspire the development of more reliable and effective uncertainty estimation approaches for LLMs in real-world scenarios.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
500,485
2408.16032
An Extremely Data-efficient and Generative LLM-based Reinforcement Learning Agent for Recommenders
Recent advancements in large language models (LLMs) have enabled understanding webpage contexts, product details, and human instructions. Utilizing LLMs as the foundational architecture for either reward models or policies in reinforcement learning has gained popularity -- a notable achievement is the success of InstructGPT. RL algorithms have been instrumental in maximizing long-term customer satisfaction and avoiding short-term, myopic goals in industrial recommender systems, which often rely on deep learning models to predict immediate clicks or purchases. In this project, several RL methods are implemented and evaluated using the WebShop benchmark environment, data, simulator, and pre-trained model checkpoints. The goal is to train an RL agent to maximize the purchase reward given a detailed human instruction describing a desired product. The RL agents are developed by fine-tuning a pre-trained BERT model with various objectives, learning from preferences without a reward model, and employing contemporary training techniques such as Proximal Policy Optimization (PPO) as used in InstructGPT, and Direct Preference Optimization (DPO). This report also evaluates the RL agents trained using generative trajectories. Evaluations were conducted using Thompson sampling in the WebShop simulator environment. The simulated online experiments demonstrate that agents trained on generated trajectories exhibited comparable task performance to those trained using human trajectories. This has demonstrated an example of an extremely low-cost data-efficient way of training reinforcement learning agents. Also, with limited training time (<2hours), without utilizing any images, a DPO agent achieved a 19% success rate after approximately 3000 steps or 30 minutes of training on T4 GPUs, compared to a PPO agent, which reached a 15% success rate.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
484,173
2206.04250
Massive MIMO Hybrid Precoding for LEO Satellite Communications With Twin-Resolution Phase Shifters and Nonlinear Power Amplifiers
The massive multiple-input multiple-output (MIMO) transmission technology has recently attracted much attention in the non-geostationary, e.g., low earth orbit (LEO) satellite communication (SATCOM) systems since it can significantly improve the energy efficiency (EE) and spectral efficiency. In this work, we develop a hybrid analog/digital precoding technique in the massive MIMO LEO SATCOM downlink, which reduces the onboard hardware complexity and power consumption. In the proposed scheme, the analog precoder is implemented via a more practical twin-resolution phase shifting (TRPS) network to make a meticulous tradeoff between the power consumption and array gain. In addition, we consider and study the impact of the distortion effect of the nonlinear power amplifiers (NPAs) in the system design. By jointly considering all the above factors, we propose an efficient algorithmic approach for the TRPS-based hybrid precoding problem with NPAs. Numerical results show the EE gains considering the nonlinear distortion and the performance superiority of the proposed TRPS-based hybrid precoding scheme over the baselines.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
301,552
1912.11675
Learning Controllable Disentangled Representations with Decorrelation Regularization
A crucial problem in learning disentangled image representations is controlling the degree of disentanglement during image editing, while preserving the identity of objects. In this work, we propose a simple yet effective model with the encoder-decoder architecture to address this challenge. To encourage disentanglement, we devise a distance covariance based decorrelation regularization. Further, for the reconstruction step, our model leverages a soft target representation combined with the latent image code. By exploiting the real-valued space of the soft target representations, we are able to synthesize novel images with the designated properties. We also design a classification based protocol to quantitatively evaluate the disentanglement strength of our model. Experimental results show that the proposed model competently disentangles factors of variation, and is able to manipulate face images to synthesize the desired attributes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
158,631
2411.03752
Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
Recent studies have shown that deep learning models are very vulnerable to poisoning attacks. Many defense methods have been proposed to address this issue. However, traditional poisoning attacks are not as threatening as commonly believed. This is because they often cause differences in how the model performs on the training set compared to the validation set. Such inconsistency can alert defenders that their data has been poisoned, allowing them to take the necessary defensive actions. In this paper, we introduce a more threatening type of poisoning attack called the Deferred Poisoning Attack. This new attack allows the model to function normally during the training and validation phases but makes it very sensitive to evasion attacks or even natural noise. We achieve this by ensuring the poisoned model's loss function has a similar value as a normally trained model at each input sample but with a large local curvature. A similar model loss ensures that there is no obvious inconsistency between the training and validation accuracy, demonstrating high stealthiness. On the other hand, the large curvature implies that a small perturbation may cause a significant increase in model loss, leading to substantial performance degradation, which reflects a worse robustness. We fulfill this purpose by making the model have singular Hessian information at the optimal point via our proposed Singularization Regularization term. We have conducted both theoretical and empirical analyses of the proposed method and validated its effectiveness through experiments on image classification tasks. Furthermore, we have confirmed the hazards of this form of poisoning attack under more general scenarios using natural noise, offering a new perspective for research in the field of security.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
506,026
1912.07477
A Machine-learning based Probabilistic Perspective on Dynamic Security Assessment
Probabilistic security assessment and real-time dynamic security assessments (DSA) are promising to better handle the risks of system operations. The current methodologies of security assessments may require many time-domain simulations for some stability phenomena that are impractical in real-time. Supervised machine learning is promising to predict DSA as their predictions are immediately available. Classifiers are offline trained on operating conditions and then used in real-time to identify operating conditions that are insecure. However, the predictions of classifiers can be sometimes wrong and hazardous if an alarm is missed for instance. A probabilistic output of the classifier is explored in more detail and proposed for probabilistic security assessment. An ensemble classifier is trained and calibrated offline by using Platt scaling to provide accurate probability estimates of the output. Imbalances in the training database and a cost-skewness addressing strategy are proposed for considering that missed alarms are significantly worse than false alarms. Subsequently, risk-minimised predictions can be made in real-time operation by applying cost-sensitive learning. Through case studies on a real data-set of the French transmission grid and on the IEEE 6 bus system using static security metrics, it is showcased how the proposed approach reduces inaccurate predictions and risks. The sensitivity on the likelihood of contingency is studied as well as on expected outage costs. Finally, the scalability to several contingencies and operating conditions are showcased.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
157,617
1407.0051
Hands-on experiments on intelligent behavior for mobile robots
In recent years, Artificial Intelligence techniques have emerged as useful tools for solving various engineering problems that were not possible or convenient to handle by traditional methods. AI has directly influenced many areas of computer science and becomes an important part of the engineering curriculum. However, determining the important topics for a single semester AI course is a nontrivial task, given the lack of a general methodology. AI concepts commonly overlap with many other disciplines involving a wide range of subjects, including applied approaches to more formal mathematical issues. This paper presents the use of a simple robotic platform to assist the learning of basic AI concepts. The study is guided through some simple experiments using autonomous mobile robots. The central algorithm is the Learning Automata. Using LA, each robot action is applied to an environment to be evaluated by means of a fitness value. The response of the environment is used by the automata to select its next action. This procedure holds until the goal task is reached. The proposal addresses the AI study by offering in LA a unifying context to draw together several of the topics of AI and motivating the students to learn by building some hands on laboratory exercises. The presented material has been successfully tested as AI teaching aide in the University of Guadalajara robotics group as it motivates students and increases enrolment and retention while educating better computer engineers.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
34,292
2111.06401
Stacked U-Nets with Self-Assisted Priors Towards Robust Correction of Rigid Motion Artifact in Brain MRI
In this paper, we develop an efficient retrospective deep learning method called stacked U-Nets with self-assisted priors to address the problem of rigid motion artifacts in MRI. The proposed work exploits the usage of additional knowledge priors from the corrupted images themselves without the need for additional contrast data. The proposed network learns missed structural details through sharing auxiliary information from the contiguous slices of the same distorted subject. We further design a refinement stacked U-Nets that facilitates preserving of the image spatial details and hence improves the pixel-to-pixel dependency. To perform network training, simulation of MRI motion artifacts is inevitable. We present an intensive analysis using various types of image priors: the proposed self-assisted priors and priors from other image contrast of the same subject. The experimental analysis proves the effectiveness and feasibility of our self-assisted priors since it does not require any further data scans.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
266,068
2107.08585
Non-binary deep transfer learning for image classification
The current standard for a variety of computer vision tasks using smaller numbers of labelled training examples is to fine-tune from weights pre-trained on a large image classification dataset such as ImageNet. The application of transfer learning and transfer learning methods tends to be rigidly binary. A model is either pre-trained or not pre-trained. Pre-training a model either increases performance or decreases it, the latter being defined as negative transfer. Application of L2-SP regularisation that decays the weights towards their pre-trained values is either applied or all weights are decayed towards 0. This paper re-examines these assumptions. Our recommendations are based on extensive empirical evaluation that demonstrate the application of a non-binary approach to achieve optimal results. (1) Achieving best performance on each individual dataset requires careful adjustment of various transfer learning hyperparameters not usually considered, including number of layers to transfer, different learning rates for different layers and different combinations of L2SP and L2 regularization. (2) Best practice can be achieved using a number of measures of how well the pre-trained weights fit the target dataset to guide optimal hyperparameters. We present methods for non-binary transfer learning including combining L2SP and L2 regularization and performing non-traditional fine-tuning hyperparameter searches. Finally we suggest heuristics for determining the optimal transfer learning hyperparameters. The benefits of using a non-binary approach are supported by final results that come close to or exceed state of the art performance on a variety of tasks that have traditionally been more difficult for transfer learning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
246,773
2104.04064
Many-Joint Robot Arm Control with Recurrent Spiking Neural Networks
In the paper, we show how scalable, low-cost trunk-like robotic arms can be constructed using only basic 3D-printing equipment and simple electronics. The design is based on uniform, stackable joint modules with three degrees of freedom each. Moreover, we present an approach for controlling these robots with recurrent spiking neural networks. At first, a spiking forward model learns motor-pose correlations from movement observations. After training, intentions can be projected back through unrolled spike trains of the forward model essentially routing the intention-driven motor gradients towards the respective joints, which unfolds goal-direction navigation. We demonstrate that spiking neural networks can thus effectively control trunk-like robotic arms with up to 75 articulated degrees of freedom with near millimeter accuracy.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
229,270
1701.01625
Opportunistic Downlink Interference Alignment for Multi-Cell MIMO Networks
In this paper, we propose an opportunistic downlink interference alignment (ODIA) for interference-limited cellular downlink, which intelligently combines user scheduling and downlink IA techniques. The proposed ODIA not only efficiently reduces the effect of inter-cell interference from other-cell base stations (BSs) but also eliminates intra-cell interference among spatial streams in the same cell. We show that the minimum number of users required to achieve a target degrees-of-freedom (DoF) can be fundamentally reduced, i.e., the fundamental user scaling law can be improved by using the ODIA, compared with the existing downlink IA schemes. In addition, we adopt a limited feedback strategy in the ODIA framework, and then analyze the number of feedback bits required for the system with limited feedback to achieve the same user scaling law of the ODIA as the system with perfect CSI. We also modify the original ODIA in order to further improve sum-rate, which achieves the optimal multiuser diversity gain, i.e., $\log\log N$, per spatial stream even in the presence of downlink inter-cell interference, where $N$ denotes the number of users in a cell. Simulation results show that the ODIA significantly outperforms existing interference management techniques in terms of sum-rate in realistic cellular environments. Note that the ODIA operates in a non-collaborative and decoupled manner, i.e., it requires no information exchange among BSs and no iterative beamformer optimization between BSs and users, thus leading to an easier implementation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
66,430
2205.09511
Minority Stress Experienced by LGBTQ Online Communities during the COVID-19 Pandemic
The COVID-19 pandemic has disproportionately impacted the lives of minorities, such as members of the LGBTQ community (lesbian, gay, bisexual, transgender, and queer) due to pre-existing social disadvantages and health disparities. Although extensive research has been carried out on the impact of the COVID-19 pandemic on different aspects of the general population's lives, few studies are focused on the LGBTQ population. In this paper, we develop and evaluate two sets of machine learning classifiers using a pre-pandemic and a during-pandemic dataset to identify Twitter posts exhibiting minority stress, which is a unique pressure faced by the members of the LGBTQ population due to their sexual and gender identities. We demonstrate that our best pre- and during-pandemic models show strong and stable performance for detecting posts that contain minority stress. We investigate the linguistic differences in minority stress posts across pre- and during-pandemic periods. We find that anger words are strongly associated with minority stress during the COVID-19 pandemic. We explore the impact of the pandemic on the emotional states of the LGBTQ population by adopting propensity score-based matching to perform a causal analysis. The results show that the LGBTQ population have a greater increase in the usage of cognitive words and worsened observable attribute in the usage of positive emotion words than the group of the general population with similar pre-pandemic behavioral attributes. Our findings have implications for the public health domain and policy-makers to provide adequate support, especially with respect to mental health, to the LGBTQ population during future crises.
true
false
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
297,295
2411.05232
Abstract2Appendix: Academic Reviews Enhance LLM Long-Context Capabilities
Large language models (LLMs) have shown remarkable performance across various tasks, yet their ability to handle long-context reading remains challenging. This study explores the effectiveness of leveraging high-quality academic peer review data for fine-tuning LLMs to enhance their long-context capabilities. We compare the Direct Preference Optimization (DPO) method with the Supervised Fine-Tuning (SFT) method, demonstrating DPO's superiority and data efficiency. Our experiments show that the fine-tuned model achieves a 4.04-point improvement over phi-3 and a 2.6\% increase on the Qasper benchmark using only 2000 samples. Despite facing limitations in data scale and processing costs, this study underscores the potential of DPO and high-quality data in advancing LLM performance. Additionally, the zero-shot benchmark results indicate that aggregated high-quality human reviews are overwhelmingly preferred over LLM-generated responses, even for the most capable models like GPT-4o. This suggests that high-quality human reviews are extremely rich in information, reasoning, and long-context retrieval, capabilities that even the most advanced models have not fully captured. These findings highlight the high utility of leveraging human reviews to further advance the field.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
506,588
2105.05782
How to Design Robust Algorithms using Noisy Comparison Oracle
Metric based comparison operations such as finding maximum, nearest and farthest neighbor are fundamental to studying various clustering techniques such as $k$-center clustering and agglomerative hierarchical clustering. These techniques crucially rely on accurate estimation of pairwise distance between records. However, computing exact features of the records, and their pairwise distances is often challenging, and sometimes not possible. We circumvent this challenge by leveraging weak supervision in the form of a comparison oracle that compares the relative distance between the queried points such as `Is point u closer to v or w closer to x?'. However, it is possible that some queries are easier to answer than others using a comparison oracle. We capture this by introducing two different noise models called adversarial and probabilistic noise. In this paper, we study various problems that include finding maximum, nearest/farthest neighbor search under these noise models. Building upon the techniques we develop for these comparison operations, we give robust algorithms for k-center clustering and agglomerative hierarchical clustering. We prove that our algorithms achieve good approximation guarantees with a high probability and analyze their query complexity. We evaluate the effectiveness and efficiency of our techniques empirically on various real-world datasets.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
234,918
2409.20020
Optimal Infinite-Horizon Mixed $\mathit{H}_2/\mathit{H}_\infty$ Control
We study the problem of mixed $\mathit{H}_2/\mathit{H}_\infty$ control in the infinite-horizon setting. We identify the optimal causal controller that minimizes the $\mathit{H}_2$ cost of the closed-loop system subject to an $\mathit{H}_\infty$ constraint. Megretski proved that the optimal mixed $\mathit{H}_2/\mathit{H}_\infty$ controller is non-rational whenever the constraint is active without giving an explicit construction of the controller. In this work, we provide the first exact closed-form solution to the infinite-horizon mixed $\mathit{H}_2/\mathit{H}_\infty$ control in the frequency domain. While the optimal controller is non-rational, our formulation provides a finite-dimensional parameterization of the optimal controller. Leveraging this fact, we introduce an efficient iterative algorithm that finds the optimal causal controller in the frequency domain. We show that this algorithm is convergent when the system is scalar and present numerical evidence for exponential convergence of the proposed algorithm. Finally, we show how to find the best (in $\mathit{H}_\infty$ norm) fixed-order rational approximations of the optimal mixed $\mathit{H}_2/\mathit{H}_\infty$ controller and study its performance.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
492,966
1205.0917
VIQI: A New Approach for Visual Interpretation of Deep Web Query Interfaces
Deep Web databases contain more than 90% of pertinent information of the Web. Despite their importance, users don't profit of this treasury. Many deep web services are offering competitive services in term of prices, quality of service, and facilities. As the number of services is growing rapidly, users have difficulty to ask many web services in the same time. In this paper, we imagine a system where users have the possibility to formulate one query using one query interface and then the system translates query to the rest of query interfaces. However, interfaces are created by designers in order to be interpreted visually by users, machines can not interpret query from a given interface. We propose a new approach which emulates capacity of interpretation of users and extracts query from deep web query interfaces. Our approach has proved good performances on two standard datasets.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
15,795
2412.20088
An archaeological Catalog Collection Method Based on Large Vision-Language Models
Archaeological catalogs, containing key elements such as artifact images, morphological descriptions, and excavation information, are essential for studying artifact evolution and cultural inheritance. These data are widely scattered across publications, requiring automated collection methods. However, existing Large Vision-Language Models (VLMs) and their derivative data collection methods face challenges in accurate image detection and modal matching when processing archaeological catalogs, making automated collection difficult. To address these issues, we propose a novel archaeological catalog collection method based on Large Vision-Language Models that follows an approach comprising three modules: document localization, block comprehension and block matching. Through practical data collection from the Dabagou and Miaozigou pottery catalogs and comparison experiments, we demonstrate the effectiveness of our approach, providing a reliable solution for automated collection of archaeological catalogs.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
521,096
2311.03574
Fuzzy Relational Databases via Associative Arrays
The increasing rise in artificial intelligence has made the use of imprecise language in computer programs like ChatGPT more prominent. Fuzzy logic addresses this form of imprecise language by introducing the concept of fuzzy sets, where elements belong to the set with a certain membership value (called the fuzzy value). This paper combines fuzzy data with relational algebra to provide the mathematical foundation for a fuzzy database querying language, describing various useful operations in the language of linear algebra and multiset operations, in addition to rigorously proving key identities.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
405,902
2402.05050
Federated Learning Can Find Friends That Are Advantageous
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges. While collaboration among clients can significantly enhance the learning process, not all collaborations are beneficial; some may even be detrimental. In this study, we introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective. We demonstrate that our aggregation method converges no worse than the method that aggregates only the updates received from clients with the same data distribution. Furthermore, empirical evaluations consistently reveal that collaborations guided by our algorithm outperform traditional FL approaches. This underscores the critical role of judicious client selection and lays the foundation for more streamlined and effective FL implementations in the coming years.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
427,703
2111.00812
Topology identification of autonomous quantum dynamical networks
Topology identification comprises reconstructing the interaction Hamiltonian of a quantum network by properly processing measurements of its density operator within a fixed time interval. It finds application in several quantum technology contexts, ranging from quantum communication to quantum computing or sensing. In this paper, we provide analytical conditions for the solvability of the topology identification problem for autonomous quantum dynamical networks. The solvability condition is then converted in an algorithm for quantum network reconstruction that is easily implementable on standard computer facilities. The obtained algorithm is tested for Hamiltonian reconstruction on numerical examples based on the quantum walks formalism.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
264,350
2303.03677
Training Machine Learning Models to Characterize Temporal Evolution of Disadvantaged Communities
Disadvantaged communities (DAC), as defined by the Justice40 initiative of the Department of Energy (DOE), USA, identifies census tracts across the USA to determine where benefits of climate and energy investments are or are not currently accruing. The DAC status not only helps in determining the eligibility for future Justice40-related investments but is also critical for exploring ways to achieve equitable distribution of resources. However, designing inclusive and equitable strategies not just requires a good understanding of current demographics, but also a deeper analysis of the transformations that happened in those demographics over the years. In this paper, machine learning (ML) models are trained on publicly available census data from recent years to classify the DAC status at the census tracts level and then the trained model is used to classify DAC status for historical years. A detailed analysis of the feature and model selection along with the evolution of disadvantaged communities between 2013 and 2018 is presented in this study.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
349,812
2410.00020
Loneliness Forecasting Using Multi-modal Wearable and Mobile Sensing in Everyday Settings
The adverse effects of loneliness on both physical and mental well-being are profound. Although previous research has utilized mobile sensing techniques to detect mental health issues, few studies have utilized state-of-the-art wearable devices to forecast loneliness and estimate the physiological manifestations of loneliness and its predictive nature. The primary objective of this study is to examine the feasibility of forecasting loneliness by employing wearable devices, such as smart rings and watches, to monitor early physiological indicators of loneliness. Furthermore, smartphones are employed to capture initial behavioral signs of loneliness. To accomplish this, we employed personalized machine learning techniques, leveraging a comprehensive dataset comprising physiological and behavioral information obtained during our study involving the monitoring of college students. Through the development of personalized models, we achieved a notable accuracy of 0.82 and an F-1 score of 0.82 in forecasting loneliness levels seven days in advance. Additionally, the application of Shapley values facilitated model explainability. The wealth of data provided by this study, coupled with the forecasting methodology employed, possesses the potential to augment interventions and facilitate the early identification of loneliness within populations at risk.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
493,190
2408.10932
The Evolution of Reinforcement Learning in Quantitative Finance: A Survey
Reinforcement Learning (RL) has experienced significant advancement over the past decade, prompting a growing interest in applications within finance. This survey critically evaluates 167 publications, exploring diverse RL applications and frameworks in finance. Financial markets, marked by their complexity, multi-agent nature, information asymmetry, and inherent randomness, serve as an intriguing test-bed for RL. Traditional finance offers certain solutions, and RL advances these with a more dynamic approach, incorporating machine learning methods, including transfer learning, meta-learning, and multi-agent solutions. This survey dissects key RL components through the lens of Quantitative Finance. We uncover emerging themes, propose areas for future research, and critique the strengths and weaknesses of existing methods.
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
482,085
2101.10869
A Raspberry Pi-based Traumatic Brain Injury Detection System for Single-Channel Electroencephalogram
Traumatic Brain Injury (TBI) is a common cause of death and disability. However, existing tools for TBI diagnosis are either subjective or require extensive clinical setup and expertise. The increasing affordability and reduction in size of relatively high-performance computing systems combined with promising results from TBI related machine learning research make it possible to create compact and portable systems for early detection of TBI. This work describes a Raspberry Pi based portable, real-time data acquisition, and automated processing system that uses machine learning to efficiently identify TBI and automatically score sleep stages from a single-channel Electroencephalogram (EEG) signal. We discuss the design, implementation, and verification of the system that can digitize EEG signal using an Analog to Digital Converter (ADC) and perform real-time signal classification to detect the presence of mild TBI (mTBI). We utilize Convolutional Neural Networks (CNN) and XGBoost based predictive models to evaluate the performance and demonstrate the versatility of the system to operate with multiple types of predictive models. We achieve a peak classification accuracy of more than 90% with a classification time of less than 1 s across 16 s - 64 s epochs for TBI vs. control conditions. This work can enable development of systems suitable for field use without requiring specialized medical equipment for early TBI detection applications and TBI research. Further, this work opens avenues to implement connected, real-time TBI related health and wellness monitoring systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
217,083
1810.08799
Proportionality Degree of Multiwinner Rules
We study multiwinner elections with approval-based preferences. An instance of a multiwinner election consists of a set of alternatives, a population of voters---each voter approves a subset of alternatives, and the desired committee size $k$; the goal is to select a committee (a subset) of $k$ alternatives according to the preferences of the voters. We investigate a number of election rules and ask whether the committees that they return represent the voters proportionally. In contrast to the classic literature, we employ quantitative techniques that allow us to measure the extent to which the considered rules are proportional. This allows us to arrange the rules in a clear hierarchy. For example, we find that Proportional Approval Voting (PAV) has better proportionality guarantees than its sequential counterpart, and that Phragm\'{e}n's Sequential Rule is worse than Sequential PAV. Yet, the loss of proportionality for the two sequential rules is moderate and in some contexts can be outweighed by their other appealing properties. Finally, we measure the tradeoff between proportionality and utilitarian efficiency for a broad subclass of committee election rules.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
110,917
2207.06423
Wakeword Detection under Distribution Shifts
We propose a novel approach for semi-supervised learning (SSL) designed to overcome distribution shifts between training and real-world data arising in the keyword spotting (KWS) task. Shifts from training data distribution are a key challenge for real-world KWS tasks: when a new model is deployed on device, the gating of the accepted data undergoes a shift in distribution, making the problem of timely updates via subsequent deployments hard. Despite the shift, we assume that the marginal distributions on labels do not change. We utilize a modified teacher/student training framework, where labeled training data is augmented with unlabeled data. Note that the teacher does not have access to the new distribution either. To train effectively with a mix of human and teacher labeled data, we develop a teacher labeling strategy based on confidence heuristics to reduce entropy on the label distribution from the teacher model; the data is then sampled to match the marginal distribution on the labels. Large scale experimental results show that a convolutional neural network (CNN) trained on far-field audio, and evaluated on far-field audio drawn from a different distribution, obtains a 14.3% relative improvement in false discovery rate (FDR) at equal false reject rate (FRR), while yielding a 5% improvement in FDR under no distribution shift. Under a more severe distribution shift from far-field to near-field audio with a smaller fully connected network (FCN) our approach achieves a 52% relative improvement in FDR at equal FRR, while yielding a 20% relative improvement in FDR on the original distribution.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
307,883
1406.6218
Dynamic Model of a Pumping Kite Power System
Converting the traction power of kites into electricity can be a low cost solution for wind energy. Reliable control of both trajectory and tether reeling is crucial. The present study proposes a modelling framework describing the dynamic behaviour of the interconnected system components, suitable for design and optimization of the control systems. The wing, bridle, airborne control unit and tether are represented as a particle system using spring-damper elements to describe their mechanical properties. Two kite models are proposed: a point mass model and a four point model. Reeling of the tether is modelled by varying the lengths of constituent tether elements. Dynamic behaviour of the ground station is included. The framework is validated by combining it with the automatic control system used for the operation of a kite power system demonstrator. The simulation results show that the point mass model can be adjusted to match the measured behaviour during a pumping cycle. The four point model can better predict the influence of gravity and inertia on the steering response and remains stable also at low tether forces. Compared to simple one point models, the proposed framework is more accurate and robust while allowing real-time simulations of the complete system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
34,101
2409.12949
A Learning-based Quadcopter Controller with Extreme Adaptation
This paper introduces a learning-based low-level controller for quadcopters, which adaptively controls quadcopters with significant variations in mass, size, and actuator capabilities. Our approach leverages a combination of imitation learning and reinforcement learning, creating a fast-adapting and general control framework for quadcopters that eliminates the need for precise model estimation or manual tuning. The controller estimates a latent representation of the vehicle's system parameters from sensor-action history, enabling it to adapt swiftly to diverse dynamics. Extensive evaluations in simulation demonstrate the controller's ability to generalize to unseen quadcopter parameters, with an adaptation range up to 16 times broader than the training set. In real-world tests, the controller is successfully deployed on quadcopters with mass differences of 3.7 times and propeller constants varying by more than 100 times, while also showing rapid adaptation to disturbances such as off-center payloads and motor failures. These results highlight the potential of our controller in extreme adaptation to simplify the design process and enhance the reliability of autonomous drone operations in unpredictable environments. The video and code are at: https://github.com/muellerlab/xadapt_ctrl
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
489,780
2101.12270
An Experimental Analysis of Attack Classification Using Machine Learning in IoT Networks
In recent years, there has been a massive increase in the amount of Internet of Things (IoT) devices as well as the data generated by such devices. The participating devices in IoT networks can be problematic due to their resource-constrained nature, and integrating security on these devices is often overlooked. This has resulted in attackers having an increased incentive to target IoT devices. As the number of attacks possible on a network increases, it becomes more difficult for traditional intrusion detection systems (IDS) to cope with these attacks efficiently. In this paper, we highlight several machine learning (ML) methods such as k-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), naive Bayes (NB), random forest (RF), artificial neural network (ANN), and logistic regression (LR) that can be used in IDS. In this work, ML algorithms are compared for both binary and multi-class classification on the Bot-IoT dataset. Based on several parameters such as accuracy, precision, recall, F1 score, and log loss, we experimentally compared the aforementioned ML algorithms. In the case of HTTP distributed denial-of-service (DDoS) attack, the accuracy of RF is 99%. Furthermore, other simulation results based on the precision, recall, F1 score, and log loss metrics reveal that RF outperforms on all types of attacks in binary classification. However, in multi-class classification, KNN outperforms other ML algorithms with an accuracy of 99%, which is 4% higher than RF.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
217,525
2209.12758
Overcoming Referential Ambiguity in Language-Guided Goal-Conditioned Reinforcement Learning
Teaching an agent to perform new tasks using natural language can easily be hindered by ambiguities in interpretation. When a teacher provides an instruction to a learner about an object by referring to its features, the learner can misunderstand the teacher's intentions, for instance if the instruction ambiguously refers to features of the object, a phenomenon called referential ambiguity. We study how two concepts derived from cognitive sciences can help resolve those referential ambiguities: pedagogy (selecting the right instructions) and pragmatism (learning the preferences of the other agents using inductive reasoning). We apply those ideas to a teacher/learner setup with two artificial agents on a simulated robotic task (block-stacking). We show that these concepts improve sample efficiency for training the learner.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
319,644
2004.03056
Truly Intelligent Reflecting Surface-Aided Secure Communication Using Deep Learning
This paper considers machine learning for physical layer security design for communication in a challenging wireless environment. The radio environment is assumed to be programmable with the aid of a metamaterial-based intelligent reflecting surface (IRS) allowing customisable path loss, multi-path fading and interference effects. In particular, the fine-grained reflections from the IRS elements are exploited to create channel advantage for maximizing the secrecy rate at a legitimate receiver. A deep learning (DL) technique has been developed to tune the reflections of the IRS elements in real-time. Simulation results demonstrate that the DL approach yields comparable performance to the conventional approaches while significantly reducing the computational complexity.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
171,429
2502.10019
A Differential Equation Approach to the Most-Informative Boolean Function Conjecture
We study the most-informative Boolean function conjecture using a differential equation approach. This leads to a formulation of a functional inequality on finite-dimensional random variables. We also develop a similar inequality in the case of the Hellinger conjecture. Finally, we conjecture a specific finite-dimensional inequality that, if proved, will lead to a proof of the Boolean function conjecture in the balanced case. We further show that the above inequality holds modulo four explicit inequalities (all of which seem to hold via numerical simulation), with the first three containing just two variables and the final one involving four variables.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
533,703
2411.19655
Truth or Mirage? Towards End-to-End Factuality Evaluation with LLM-Oasis
After the introduction of Large Language Models (LLMs), there have been substantial improvements in the performance of Natural Language Generation (NLG) tasks, including Text Summarization and Machine Translation. However, LLMs still produce outputs containing hallucinations, that is, content not grounded in factual information. Therefore, developing methods to assess the factuality of LLMs has become urgent. Indeed, resources for factuality evaluation have recently emerged. Although valuable, these resources face one or more of the following limitations: (i) they are tailored to a specific task or domain; (ii) they are limited in size, thereby preventing the training of new factuality evaluators; (iii) they are designed for simpler verification tasks, such as claim verification. To address these issues, we introduce LLM-Oasis, to the best of our knowledge the largest resource for training end-to-end factuality evaluators. LLM-Oasis is constructed by extracting claims from Wikipedia, falsifying a subset of these claims, and generating pairs of factual and unfactual texts. We then rely on human annotators to both validate the quality of our dataset and to create a gold standard test set for benchmarking factuality evaluation systems. Our experiments demonstrate that LLM-Oasis presents a significant challenge for state-of-the-art LLMs, with GPT-4o achieving up to 60% accuracy in our proposed end-to-end factuality evaluation task, highlighting its potential to drive future research in the field.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
512,340
2210.07290
Joint control variate for faster black-box variational inference
Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance. This variance comes from two sources of randomness: Data subsampling and Monte Carlo sampling. While existing control variates only address Monte Carlo noise, and incremental gradient methods typically only address data subsampling, we propose a new "joint" control variate that jointly reduces variance from both sources of noise. This significantly reduces gradient variance, leading to faster optimization in several applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
323,641
1611.08796
Deep Deformable Registration: Enhancing Accuracy by Fully Convolutional Neural Net
Deformable registration is ubiquitous in medical image analysis. Many deformable registration methods minimize sum of squared difference (SSD) as the registration cost with respect to deformable model parameters. In this work, we construct a tight upper bound of the SSD registration cost by using a fully convolutional neural network (FCNN) in the registration pipeline. The upper bound SSD (UB-SSD) enhances the original deformable model parameter space by adding a heatmap output from FCNN. Next, we minimize this UB-SSD by adjusting both the parameters of the FCNN and the parameters of the deformable model in coordinate descent. Our coordinate descent framework is end-to-end and can work with any deformable registration method that uses SSD. We demonstrate experimentally that our method enhances the accuracy of deformable registration algorithms significantly on two publicly available 3D brain MRI data sets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
64,562
1504.07586
Dual Rate Control for Security in Cyber-physical Systems
We consider malicious attacks on actuators and sensors of a feedback system which can be modeled as additive, possibly unbounded, disturbances at the digital (cyber) part of the feedback loop. We precisely characterize the role of the unstable poles and zeros of the system in the ability to detect stealthy attacks in the context of the sampled data implementation of the controller in feedback with the continuous (physical) plant. We show that, if there is a single sensor that is guaranteed to be secure and the plant is observable from that sensor, then there exists a class of multirate sampled data controllers that ensure that all attacks remain detectable. These dual rate controllers are sampling the output faster than the zero order hold rate that operates on the control input and as such, they can even provide better nominal performance than single rate, at the price of higher sampling of the continuous output.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
42,557
2401.01454
A Survey on Autonomous Driving Datasets: Statistics, Annotation Quality, and a Future Outlook
Autonomous driving has rapidly developed and shown promising performance due to recent advances in hardware and deep learning techniques. High-quality datasets are fundamental for developing reliable autonomous driving algorithms. Previous dataset surveys either focused on a limited number of datasets or lacked detailed investigation of dataset characteristics. To this end, we present an exhaustive study of 265 autonomous driving datasets from multiple perspectives, including sensor modalities, data size, tasks, and contextual conditions. We introduce a novel metric to evaluate the impact of datasets, which can also be a guide for creating new datasets. Besides, we analyze the annotation processes, existing labeling tools, and the annotation quality of datasets, showing the importance of establishing a standard annotation pipeline. On the other hand, we thoroughly analyze the impact of geographical and adversarial environmental conditions on the performance of autonomous driving systems. Moreover, we exhibit the data distribution of several vital datasets and discuss their pros and cons accordingly. Finally, we discuss the current challenges and the development trend of the future autonomous driving datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
419,357
2403.05019
ERASOR++: Height Coding Plus Egocentric Ratio Based Dynamic Object Removal for Static Point Cloud Mapping
Mapping plays a crucial role in location and navigation within automatic systems. However, the presence of dynamic objects in 3D point cloud maps generated from scan sensors can introduce map distortion and long traces, thereby posing challenges for accurate mapping and navigation. To address this issue, we propose ERASOR++, an enhanced approach based on the Egocentric Ratio of Pseudo Occupancy for effective dynamic object removal. To begin, we introduce the Height Coding Descriptor, which combines height difference and height layer information to encode the point cloud. Subsequently, we propose the Height Stack Test, Ground Layer Test, and Surrounding Point Test methods to precisely and efficiently identify the dynamic bins within point cloud bins, thus overcoming the limitations of prior approaches. Through extensive evaluation on open-source datasets, our approach demonstrates superior performance in terms of precision and efficiency compared to existing methods. Furthermore, the techniques described in our work hold promise for addressing various challenging tasks or aspects through subsequent migration.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
435,828
2311.06604
Hub-Based Platoon Formation: Optimal Release Policies and Approximate Solutions
This paper studies the optimal hub-based platoon formation at hubs along a highway under decentralized, distributed, and centralized policies. Hubs are locations along highways where trucks can wait for other trucks to form platoons. A coordinator at each hub decides the departure time of trucks, and the released trucks from the hub will form platoons. The problem is cast as an optimization problem where the objective is to maximize the platooning reward. We first show that the optimal release policy in the decentralized case, where the hubs do not exchange information, is to release all trucks at the hub when the number of trucks exceeds a threshold computed by dynamic programming. We develop efficient approximate release policies for the dependent arrival case using this result. To study the value of information exchange among hubs on platoon formation, we next study the distributed and centralized platoon formation policies which require information exchange among hubs. To this end, we develop receding horizon solutions for the distributed and centralized platoon formation at hubs using the dynamic programming technique. Finally, we perform a simulation study over three hubs in northern Sweden. The profits of the decentralized policies are shown to be approximately 3.5% lower than the distributed policy and 8% lower than the centralized release policy. This observation suggests that decentralized policies are prominent solutions for hub-based platooning as they do not require information exchange among hubs and can achieve a similar performance compared with distributed and centralized policies.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
407,003
1704.01344
Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via Deep Layer Cascade
We propose a novel deep layer cascade (LC) method to improve the accuracy and speed of semantic segmentation. Unlike the conventional model cascade (MC) that is composed of multiple independent models, LC treats a single deep model as a cascade of several sub-models. Earlier sub-models are trained to handle easy and confident regions, and they progressively feed-forward harder regions to the next sub-model for processing. Convolutions are only calculated on these regions to reduce computations. The proposed method possesses several advantages. First, LC classifies most of the easy regions in the shallow stage and makes deeper stages focus on a few hard regions. Such an adaptive and 'difficulty-aware' learning improves segmentation performance. Second, LC accelerates both training and testing of the deep network thanks to early decisions in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable framework, allowing joint learning of all sub-models. We evaluate our method on PASCAL VOC and Cityscapes datasets, achieving state-of-the-art performance and fast speed.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
71,244
2204.04518
Attention U-Net as a surrogate model for groundwater prediction
Numerical simulations of groundwater flow are used to analyze and predict the response of an aquifer system to its change in state by approximating the solution of the fundamental groundwater physical equations. The most commonly used classical methodologies, such as Finite Difference (FD) and Finite Element (FE) Methods, use iterative solvers which are associated with high computational cost. This study proposes a physics-based convolutional encoder-decoder neural network as a surrogate model to quickly calculate the response of the groundwater system. Holding strong promise in cross-domain mappings, encoder-decoder networks are applicable for learning complex input-output mappings of physical systems. This manuscript presents an Attention U-Net model that attempts to capture the fundamental input-output relations of the groundwater system and generates solutions of hydraulic head in the whole domain given a set of physical parameters and boundary conditions. The model accurately predicts the steady state response of a highly heterogeneous groundwater system given the locations and piezometric head of up to 3 wells as input. The network learns to pay attention only in the relevant parts of the domain and the generated hydraulic head field corresponds to the target samples in great detail. Even relative to coarse finite difference approximations the proposed model is shown to be significantly faster than a comparative state-of-the-art numerical solver, thus providing a base for further development of the presented networks as surrogate models for groundwater prediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
290,687
2306.16098
Chan-Vese Attention U-Net: An attention mechanism for robust segmentation
When studying the results of a segmentation algorithm using convolutional neural networks, one wonders about the reliability and consistency of the results. This leads to questioning the possibility of using such an algorithm in applications where there is little room for doubt. We propose in this paper a new attention gate based on the use of Chan-Vese energy minimization to control more precisely the segmentation masks given by a standard CNN architecture such as the U-Net model. This mechanism allows us to obtain a constraint on the segmentation based on the resolution of a PDE. The study of the results allows us to observe the spatial information retained by the neural network on the region of interest and to obtain competitive results on binary segmentation. We illustrate the efficiency of this approach for medical image segmentation on a database of MRI brain images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
376,278
1505.06443
Detecting bird sound in unknown acoustic background using crowdsourced training data
Biodiversity monitoring using audio recordings is achievable at a truly global scale via large-scale deployment of inexpensive, unattended recording stations or by large-scale crowdsourcing using recording and species recognition on mobile devices. The ability, however, to reliably identify vocalising animal species is limited by the fact that acoustic signatures of interest in such recordings are typically embedded in a diverse and complex acoustic background. To avoid the problems associated with modelling such backgrounds, we build generative models of bird sounds and use the concept of novelty detection to screen recordings to detect sections of data which are likely bird vocalisations. We present detection results against various acoustic environments and different signal-to-noise ratios. We discuss the issues related to selecting the cost function and setting detection thresholds in such algorithms. Our methods are designed to be scalable and automatically applicable to arbitrary selections of species depending on the specific geographic region and time period of deployment.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
43,425
1103.0538
Stochastic Perron's method and verification without smoothness using viscosity comparison: the linear case
We introduce a probabilistic version of the classical Perron's method to construct viscosity solutions to linear parabolic equations associated to stochastic differential equations. Using this method, we construct easily two viscosity (sub and super) solutions that squeeze in between the expected payoff. If a comparison result holds true, then there exists a unique viscosity solution which is a martingale along the solutions of the stochastic differential equation. The unique viscosity solution is actually equal to the expected payoff. This amounts to a verification result (Ito's Lemma) for non-smooth viscosity solutions of the linear parabolic equation. This is the first step in a larger program to prove verification for viscosity solutions and the Dynamic Programming Principle for stochastic control problems and games.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
9,454
1805.10694
Exponential convergence rates for Batch Normalization: The power of length-direction decoupling in non-convex optimization
Normalization techniques such as Batch Normalization have been applied successfully for training deep neural networks. Yet, despite its apparent empirical benefits, the reasons behind the success of Batch Normalization are mostly hypothetical. We here aim to provide a more thorough theoretical understanding from a classical optimization perspective. Our main contribution towards this goal is the identification of various problem instances in the realm of machine learning where, under certain assumptions, Batch Normalization can provably accelerate optimization. We argue that this acceleration is due to the fact that Batch Normalization splits the optimization task into optimizing length and direction of the parameters separately. This allows gradient-based methods to leverage a favourable global structure in the loss landscape that we prove to exist in Learning Halfspace problems and neural network training with Gaussian inputs. We thereby turn Batch Normalization from an effective practical heuristic into a provably converging algorithm for these settings. Furthermore, we substantiate our analysis with empirical evidence that suggests the validity of our theoretical results in a broader context.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
98,744
2408.10669
Tensor tree learns hidden relational structures in data to construct generative models
Based on the tensor tree network with the Born machine framework, we propose a general method for constructing a generative model by expressing the target distribution function as the quantum wave function amplitude represented by a tensor tree. The key idea is dynamically optimizing the tree structure that minimizes the bond mutual information. The proposed method offers enhanced performance and uncovers hidden relational structures in the target data. We illustrate potential practical applications with four examples: (i) random patterns, (ii) QMNIST hand-written digits, (iii) Bayesian networks, and (iv) the stock price fluctuation pattern in S&P500. In (i) and (ii), strongly correlated variables were concentrated near the center of the network; in (iii), the causality pattern was identified; and, in (iv), a structure corresponding to the eleven sectors emerged.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
481,970
2305.01936
Illicit item detection in X-ray images for security applications
Automated detection of contraband items in X-ray images can significantly increase public safety, by enhancing the productivity and alleviating the mental load of security officers in airports, subways, customs/post offices, etc. The large volume and high throughput of passengers, mailed parcels, etc., during rush hours make it a Big Data analysis task. Modern computer vision algorithms relying on Deep Neural Networks (DNNs) have proven capable of undertaking this task even under resource-constrained and embedded execution scenarios, e.g., as is the case with fast, single-stage, anchor-based object detectors. This paper proposes a two-fold improvement of such algorithms for the X-ray analysis domain, introducing two complementary novelties. Firstly, more efficient anchors are obtained by hierarchically clustering the sizes of the ground-truth training set bounding boxes; thus, the resulting anchors follow a natural hierarchy aligned with the semantic structure of the data. Secondly, the default Non-Maximum Suppression (NMS) algorithm at the end of the object detection pipeline is modified to better handle occluded object detection and to reduce the number of false predictions, by inserting the Efficient Intersection over Union (E-IoU) metric into the Weighted Cluster NMS method. E-IoU provides more discriminative geometrical correlations between the candidate bounding boxes/Regions-of-Interest (RoIs). The proposed method is implemented on a common single-stage object detector (YOLOv5) and its experimental evaluation on a relevant public dataset indicates significant accuracy gains over both the baseline and competing approaches. This highlights the potential of Big Data analysis in enhancing public safety.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
361,854
2402.06102
Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning
Recent advances in real-world applications of reinforcement learning (RL) have relied on the ability to accurately simulate systems at scale. However, domains such as fluid dynamical systems exhibit complex dynamic phenomena that are hard to simulate at high integration rates, limiting the direct application of modern deep RL algorithms to often expensive or safety critical hardware. In this work, we introduce "Box o Flows", a novel benchtop experimental control system for systematically evaluating RL algorithms in dynamic real-world scenarios. We describe the key components of the Box o Flows, and through a series of experiments demonstrate how state-of-the-art model-free RL algorithms can synthesize a variety of complex behaviors via simple reward specifications. Furthermore, we explore the role of offline RL in data-efficient hypothesis testing by reusing past experiences. We believe that the insights gained from this preliminary study and the availability of systems like the Box o Flows pave the way for developing systematic RL algorithms that can be generally applied to complex, dynamical systems. Supplementary material and videos of experiments are available at https://sites.google.com/view/box-o-flows/home.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
428,161
2408.06961
ASPEN: ASP-Based System for Collective Entity Resolution
In this paper, we present ASPEN, an answer set programming (ASP) implementation of a recently proposed declarative framework for collective entity resolution (ER). While an ASP encoding had been previously suggested, several practical issues had been neglected, most notably, the question of how to efficiently compute the (externally defined) similarity facts that are used in rule bodies. This leads us to propose new variants of the encodings (including Datalog approximations) and show how to employ different functionalities of ASP solvers to compute (maximal) solutions, and (approximations of) the sets of possible and certain merges. A comprehensive experimental evaluation of ASPEN on real-world datasets shows that the approach is promising, achieving high accuracy in real-life ER scenarios. Our experiments also yield useful insights into the relative merits of different types of (approximate) ER solutions, the impact of recursion, and factors influencing performance.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
480,406
2309.15132
Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits. When applied to high-dimensional medical imaging data, a key step is to extract lower-dimensional, yet informative representations of the data as traits. Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS in comparison to typical visual representation learning. In this study, we tackle this problem from the mutual information (MI) perspective by identifying key limitations of existing methods. We introduce a trans-modal learning framework Genetic InfoMax (GIM), including a regularized MI estimator and a novel genetics-informed transformer to address the specific challenges of GWAS. We evaluate GIM on human brain 3D MRI data and establish standardized evaluation protocols to compare it to existing approaches. Our results demonstrate the effectiveness of GIM and a significantly improved performance on GWAS.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
394,862