id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1102.2615 | Guaranteeing Convergence of Iterative Skewed Voting Algorithms for Image Segmentation | In this paper we provide rigorous proof for the convergence of an iterative voting-based image segmentation algorithm called Active Masks. Active Masks (AM) was proposed to solve the challenging task of delineating punctate patterns of cells from fluorescence microscope images. Each iteration of AM consists of a linear convolution composed with a nonlinear thresholding; what makes this process special in our case is the presence of additive terms whose role is to "skew" the voting when prior information is available. In real-world implementation, the AM algorithm always converges to a fixed point. We study the behavior of AM rigorously and present a proof of this convergence. The key idea is to formulate AM as a generalized (parallel) majority cellular automaton, adapting proof techniques from discrete dynamical systems. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 9,145 |
2411.13886 | CLFace: A Scalable and Resource-Efficient Continual Learning Framework for Lifelong Face Recognition | An important aspect of deploying face recognition (FR) algorithms in real-world applications is their ability to learn new face identities from a continuous data stream. However, the online training of existing deep neural network-based FR algorithms, which are pre-trained offline on large-scale stationary datasets, encounters two major challenges: (I) catastrophic forgetting of previously learned identities, and (II) the need to store past data for complete retraining from scratch, leading to significant storage constraints and privacy concerns. In this paper, we introduce CLFace, a continual learning framework designed to preserve and incrementally extend the learned knowledge. CLFace eliminates the classification layer, resulting in a resource-efficient FR model that remains fixed throughout lifelong learning and provides label-free supervision to a student model, making it suitable for open-set face recognition during incremental steps. We introduce an objective function that employs feature-level distillation to reduce drift between feature maps of the student and teacher models across multiple stages. Additionally, it incorporates a geometry-preserving distillation scheme to maintain the orientation of the teacher model's feature embedding. Furthermore, a contrastive knowledge distillation is incorporated to continually enhance the discriminative power of the feature representation by matching similarities between new identities. Experiments on several benchmark FR datasets demonstrate that CLFace outperforms baseline approaches and state-of-the-art methods on unseen identities using both in-domain and out-of-domain datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 509,958 |
1510.02533 | New Optimisation Methods for Machine Learning | A thesis submitted for the degree of Doctor of Philosophy of The Australian National University. In this work we introduce several new optimisation methods for problems in machine learning. Our algorithms broadly fall into two categories: optimisation of finite sums and of graph structured objectives. The finite sum problem is simply the minimisation of objective functions that are naturally expressed as a summation over a large number of terms, where each term has a similar or identical weight. Such objectives most often appear in machine learning in the empirical risk minimisation framework in the non-online learning setting. The second category, that of graph structured objectives, consists of objectives that result from applying maximum likelihood to Markov random field models. Unlike the finite sum case, all the non-linearity is contained within a partition function term, which does not readily decompose into a summation. For the finite sum problem, we introduce the Finito and SAGA algorithms, as well as variants of each. For graph-structured problems, we take three complementary approaches. We look at learning the parameters for a fixed structure, learning the structure independently, and learning both simultaneously. Specifically, for the combined approach, we introduce a new method for encouraging graph structures with the "scale-free" property. For the structure learning problem, we establish SHORTCUT, an O(n^{2.5}) expected time approximate structure learning method for Gaussian graphical models. For problems where the structure is known but the parameters unknown, we introduce an approximate maximum likelihood learning algorithm that is capable of learning a useful subclass of Gaussian graphical models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 47,728 |
1712.07199 | Cognitive Database: A Step towards Endowing Relational Databases with Artificial Intelligence Capabilities | We propose Cognitive Databases, an approach for transparently enabling Artificial Intelligence (AI) capabilities in relational databases. A novel aspect of our design is to first view the structured data source as meaningful unstructured text, and then use the text to build an unsupervised neural network model using a Natural Language Processing (NLP) technique called word embedding. This model captures the hidden inter-/intra-column relationships between database tokens of different types. For each database token, the model includes a vector that encodes contextual semantic relationships. We seamlessly integrate the word embedding model into existing SQL query infrastructure and use it to enable a new class of SQL-based analytics queries called cognitive intelligence (CI) queries. CI queries use the model vectors to enable complex queries such as semantic matching, inductive reasoning queries such as analogies, predictive queries using entities not present in a database, and, more generally, using knowledge from external sources. We demonstrate unique capabilities of Cognitive Databases using an Apache Spark based prototype to execute inductive reasoning CI queries over a multi-modal database containing text and images. We believe our first-of-a-kind system exemplifies using AI functionality to endow relational databases with capabilities that were previously very hard to realize in practice. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | true | false | 87,007 |
2004.09745 | Automatically Identifying Political Ads on Facebook: Towards Understanding of Manipulation via User Targeting | The reports of Russian interference in the 2016 United States elections brought into the center of public attention concerns related to the ability of foreign actors to increase social discord and take advantage of personal user data for political purposes. It has raised questions regarding the ways and the extent to which data can be used to create psychographical profiles to determine what kind of advertisement would be most effective to persuade a particular person in a particular location for some political event. In this work, we study the political ads dataset collected by ProPublica, an American nonprofit newsroom, using a network of volunteers in the period before the 2018 US midterm elections. We first describe the main characteristics of the data and explore the user attributes including age, region, activity, and more, with a series of interactive illustrations. Furthermore, an important first step towards understanding of political manipulation via user targeting is to identify politically related ads, yet manually checking ads is not feasible due to the scale of social media advertising. Consequently, we address the challenge of automatically classifying between political and non-political ads, demonstrating a significant improvement compared to the current text-based classifier used by ProPublica, and study whether the user targeting attributes are beneficial for this task. Our evaluation sheds light on questions, such as how user attributes are being used for political ads targeting and which users are more prone to be targeted with political ads. Overall, our contribution of data exploration, political ad classification and initial analysis of the targeting attributes, is designed to support future work with the ProPublica dataset, and specifically with regard to the understanding of political manipulation via user targeting. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 173,441 |
1905.00586 | Estimating Kullback-Leibler Divergence Using Kernel Machines | Recently, a method called the Mutual Information Neural Estimator (MINE) that uses neural networks has been proposed to estimate mutual information and more generally the Kullback-Leibler (KL) divergence between two distributions. The method uses the Donsker-Varadhan representation to arrive at the estimate of the KL divergence and is better than the existing estimators in terms of scalability and flexibility. The output of the MINE algorithm is not guaranteed to be a consistent estimator. We propose a new estimator that, instead of searching among functions characterized by neural networks, searches the functions in a Reproducing Kernel Hilbert Space. We prove that the proposed estimator is consistent. We carry out simulations and show that when the datasets are small the proposed estimator is more reliable than the MINE estimator, and when the datasets are large the performances of the two methods are close. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 129,516 |
2211.03004 | Bringing Online Egocentric Action Recognition into the wild | To enable a safe and effective human-robot cooperation, it is crucial to develop models for the identification of human activities. Egocentric vision seems to be a viable solution to solve this problem, and therefore many works provide deep learning solutions to infer human actions from first person videos. However, although very promising, most of these do not consider the major challenges that come with a realistic deployment, such as the portability of the model, the need for real-time inference, and the robustness with respect to novel domains (i.e., new spaces, users, tasks). With this paper, we set the boundaries that egocentric vision models should consider for realistic applications, defining a novel setting of egocentric action recognition in the wild, which encourages researchers to develop novel, application-aware solutions. We also present a new model-agnostic technique that enables the rapid repurposing of existing architectures in this new context, demonstrating the feasibility to deploy a model on a tiny device (Jetson Nano) and to perform the task directly on the edge with very low energy consumption (2.4W on average at 50 fps). The code is publicly available at: https://github.com/EgocentricVision/EgoWild. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 328,784 |
2312.06282 | Rank-Metric Codes and Their Parameters | We present the theory of linear rank-metric codes from the point of view of their fundamental parameters. These are: the minimum rank distance, the rank distribution, the maximum rank, the covering radius, and the field size. The focus of this chapter is on the interplay among these parameters and on their significance for the code's (combinatorial) structure. The results covered in this chapter span from the theory of optimal codes and anticodes to very recent developments on the asymptotic density of MRD codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 414,452 |
2303.13917 | Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams | We investigate the use of Convolutional Neural Networks (including the modern ConvNeXt network family) to classify transient noise signals (i.e.~glitches) and gravitational waves in data from the Advanced LIGO detectors. First, we use models with a supervised learning approach, both trained from scratch using the Gravity Spy dataset and employing transfer learning by fine-tuning pre-trained models in this dataset. Second, we also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels. Our findings are very close to existing results for the same dataset, reaching values for the F1 score of 97.18% (94.15%) for the best supervised (self-supervised) model. We further test the models using actual gravitational-wave signals from LIGO-Virgo's O3 run. Although trained using data from previous runs (O1 and O2), the models show good performance, in particular when using transfer learning. We find that transfer learning improves the scores without the need for any training on real signals apart from the less than 50 chirp examples from hardware injections present in the Gravity Spy dataset. This motivates the use of transfer learning not only for glitch classification but also for signal classification. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 353,884 |
2306.16993 | Weight Compander: A Simple Weight Reparameterization for Regularization | Regularization is a set of techniques that are used to improve the generalization ability of deep neural networks. In this paper, we introduce weight compander (WC), a novel effective method to improve generalization by reparameterizing each weight in deep neural networks using a nonlinear function. It is a general, intuitive, cheap and easy to implement method, which can be combined with various other regularization techniques. Large weights in deep neural networks are a sign of a more complex network that is overfitted to the training data. Moreover, regularized networks tend to have a greater range of weights around zero with fewer weights centered at zero. We introduce a weight reparameterization function which is applied to each weight and implicitly reduces overfitting by restricting the magnitude of the weights while forcing them away from zero at the same time. This leads to a more democratic decision-making in the network. Firstly, individual weights cannot have too much influence in the prediction process due to the restriction of their magnitude. Secondly, more weights are used in the prediction process, since they are forced away from zero during the training. This promotes the extraction of more features from the input data and increases the level of weight redundancy, which makes the network less sensitive to statistical differences between training and test data. We extend our method to learn the hyperparameters of the introduced weight reparameterization function. This avoids hyperparameter search and gives the network the opportunity to align the weight reparameterization with the training progress. We show experimentally that using weight compander in addition to standard regularization methods improves the performance of neural networks. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 376,556 |
1802.08417 | Geometric Lower Bounds for Distributed Parameter Estimation under Communication Constraints | We consider parameter estimation in distributed networks, where each sensor in the network observes an independent sample from an underlying distribution and has $k$ bits to communicate its sample to a centralized processor which computes an estimate of a desired parameter. We develop lower bounds for the minimax risk of estimating the underlying parameter for a large class of losses and distributions. Our results show that under mild regularity conditions, the communication constraint reduces the effective sample size by a factor of $d$ when $k$ is small, where $d$ is the dimension of the estimated parameter. Furthermore, this penalty reduces at most exponentially with increasing $k$, which is the case for some models, e.g., estimating high-dimensional distributions. For other models however, we show that the sample size reduction is re-mediated only linearly with increasing $k$, e.g. when some sub-Gaussian structure is available. We apply our results to the distributed setting with product Bernoulli model, multinomial model, Gaussian location models, and logistic regression which recover or strengthen existing results. Our approach significantly deviates from existing approaches for developing information-theoretic lower bounds for communication-efficient estimation. We circumvent the need for strong data processing inequalities used in prior work and develop a geometric approach which builds on a new representation of the communication constraint. This approach allows us to strengthen and generalize existing results with simpler and more transparent proofs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 91,104 |
2109.07717 | R-PCC: A Baseline for Range Image-based Point Cloud Compression | In autonomous vehicles or robots, point clouds from LiDAR can provide accurate depth information of objects compared with 2D images, but they also suffer a large volume of data, which is inconvenient for data storage or transmission. In this paper, we propose a Range image-based Point Cloud Compression method, R-PCC, which can reconstruct the point cloud with uniform or non-uniform accuracy loss. We segment the original large-scale point cloud into small and compact regions for spatial redundancy and salient region classification. Compared with other voxel-based or image-based compression methods, our method can keep and align all points from the original point cloud in the reconstructed point cloud. It can also control the maximum reconstruction error for each point through a quantization module. In the experiments, we prove that our easier FPS-based segmentation method can achieve better performance than instance-based segmentation methods such as DBSCAN. To verify the advantages of our proposed method, we evaluate the reconstruction quality and fidelity for 3D object detection and SLAM, as the downstream tasks. The experimental results show that our elegant framework can achieve 30$\times$ compression ratio without affecting downstream tasks, and our non-uniform compression framework shows a great improvement on the downstream tasks compared with the state-of-the-art large-scale point cloud compression methods. Our real-time method is efficient and effective enough to act as a baseline for range image-based point cloud compression. The code is available on https://github.com/StevenWang30/R-PCC.git. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 255,623 |
2201.12648 | Private Boosted Decision Trees via Smooth Re-Weighting | Protecting the privacy of people whose data is used by machine learning algorithms is important. Differential Privacy is the appropriate mathematical framework for formal guarantees of privacy, and boosted decision trees are a popular machine learning technique. So we propose and test a practical algorithm for boosting decision trees that guarantees differential privacy. Privacy is enforced because our booster never puts too much weight on any one example; this ensures that each individual's data never influences a single tree "too much." Experiments show that this boosting algorithm can produce better model sparsity and accuracy than other differentially private ensemble classifiers. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 277,721 |
1503.07795 | Multi-Labeled Classification of Demographic Attributes of Patients: a case study of diabetics patients | Automated learning of patients' demographics can be seen as a multi-label problem where a patient model is based on different race and gender groups. The resulting model can be further integrated into Privacy-Preserving Data Mining, where it can be used to assess the risk of identification of different patient groups. Our project considers relations between diabetes and demographics of patients as a multi-labeled problem. Most research in this area has been done as binary classification, where the target class is finding if a person has diabetes or not. But very little, if any, work has been done in multi-labeled analysis of the demographics of patients who are likely to be diagnosed with diabetes. To identify such groups, we applied ensembles of several multi-label learning algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 41,516 |
2407.19468 | MVPbev: Multi-view Perspective Image Generation from BEV with Test-time Controllability and Generalizability | This work aims to address the multi-view perspective RGB generation from text prompts given Bird-Eye-View(BEV) semantics. Unlike prior methods that neglect layout consistency, lack the ability to handle detailed text prompts, or are incapable of generalizing to unseen view points, MVPbev simultaneously generates cross-view consistent images of different perspective views with a two-stage design, allowing object-level control and novel view generation at test-time. Specifically, MVPbev firstly projects given BEV semantics to perspective view with camera parameters, empowering the model to generalize to unseen view points. Then we introduce a multi-view attention module where special initialization and de-noising processes are introduced to explicitly enforce local consistency among overlapping views w.r.t. cross-view homography. Last but not least, MVPbev further allows test-time instance-level controllability by refining a pre-trained text-to-image diffusion model. Our extensive experiments on NuScenes demonstrate that our method is capable of generating high-resolution photorealistic images from text descriptions with thousands of training samples, surpassing the state-of-the-art methods under various evaluation metrics. We further demonstrate the advances of our method in terms of generalizability and controllability with the help of novel evaluation metrics and comprehensive human analysis. Our code, data, and model can be found in \url{https://github.com/kkaiwwana/MVPbev}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 476,799 |
2002.08500 | Processing topical queries on images of historical newspaper pages | Historical newspapers are a source of research for the human and social sciences. However, these image collections are difficult for machines to read due to the low print quality, the lack of standardization of the pages, and the low-quality photographs of some files. This paper presents the processing model of a topic navigation system for historical newspaper page images. The general procedure consists of four modules, which are: segmentation of text sub-images and text extraction, preprocessing and representation, induced topic extraction and representation, and document viewing and retrieval interface. The algorithmic and technological approaches of each module are described, and initial test results on a collection covering a range of 28 years are presented. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 164,772 |
2011.08837 | Dominant Z-Eigenpairs of Tensor Kronecker Products are Decoupled and Applications to Higher-Order Graph Matching | Tensor Kronecker products, the natural generalization of the matrix Kronecker product, are independently emerging in multiple research communities. Like their matrix counterpart, the tensor generalization gives structure for implicit multiplication and factorization theorems. We present a theorem that decouples the dominant eigenvectors of tensor Kronecker products, which is a rare generalization from matrix theory to tensor eigenvectors. This theorem implies low rank structure ought to be present in the iterates of tensor power methods on Kronecker products. We investigate low rank structure in the network alignment algorithm TAME, a power method heuristic. Using the low rank structure directly or via a new heuristic embedding approach, we produce new algorithms which are faster while improving or maintaining accuracy, and scale to problems that cannot be realistically handled with existing techniques. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 207,012 |
1801.09005 | A Two-point Method for PTZ Camera Calibration in Sports | Calibrating narrow field of view soccer cameras is challenging because there are very few field markings in the image. Unlike previous solutions, we propose a two-point method, which requires only two point correspondences given the prior knowledge of base location and orientation of a pan-tilt-zoom (PTZ) camera. We deploy this new calibration method to annotate pan-tilt-zoom data from soccer videos. The collected data are used as references for new images. We also propose a fast random forest method to predict pan-tilt angles without image-to-image feature matching, leading to an efficient calibration method for new images. We demonstrate our system on synthetic data and two real soccer datasets. Our two-point approach achieves superior performance over the state-of-the-art method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 89,023 |
2401.13511 | Tissue Cross-Section and Pen Marking Segmentation in Whole Slide Images | Tissue segmentation is a routine preprocessing step to reduce the computational cost of whole slide image (WSI) analysis by excluding background regions. Traditional image processing techniques are commonly used for tissue segmentation, but often require manual adjustments to parameter values for atypical cases, fail to exclude all slide and scanning artifacts from the background, and are unable to segment adipose tissue. Pen marking artifacts in particular can be a potential source of bias for subsequent analyses if not removed. In addition, several applications require the separation of individual cross-sections, which can be challenging due to tissue fragmentation and adjacent positioning. To address these problems, we develop a convolutional neural network for tissue and pen marking segmentation using a dataset of 200 H&E stained WSIs. For separating tissue cross-sections, we propose a novel post-processing method based on clustering predicted centroid locations of the cross-sections in a 2D histogram. On an independent test set, the model achieved a mean Dice score of 0.981$\pm$0.033 for tissue segmentation and a mean Dice score of 0.912$\pm$0.090 for pen marking segmentation. The mean absolute difference between the number of annotated and separated cross-sections was 0.075$\pm$0.350. Our results demonstrate that the proposed model can accurately segment H&E stained tissue cross-sections and pen markings in WSIs while being robust to many common slide and scanning artifacts. The model with trained model parameters and post-processing method are made publicly available as a Python package called SlideSegmenter. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 423,751 |
2204.09410 | Generating 3D Molecules for Target Protein Binding | A fundamental problem in drug discovery is to design molecules that bind to specific proteins. To tackle this problem using machine learning methods, here we propose a novel and effective framework, known as GraphBP, to generate 3D molecules that bind to given proteins by placing atoms of specific types and locations to the given binding site one by one. In particular, at each step, we first employ a 3D graph neural network to obtain geometry-aware and chemically informative representations from the intermediate contextual information. Such context includes the given binding site and atoms placed in the previous steps. Second, to preserve the desirable equivariance property, we select a local reference atom according to the designed auxiliary classifiers and then construct a local spherical coordinate system. Finally, to place a new atom, we generate its atom type and relative location w.r.t. the constructed local coordinate system via a flow model. We also consider generating the variables of interest sequentially to capture the underlying dependencies among them. Experiments demonstrate that our GraphBP is effective to generate 3D molecules with binding ability to target protein binding sites. Our implementation is available at https://github.com/divelab/GraphBP. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 292,428 |
2406.03680 | Meta-learning for Positive-unlabeled Classification | We propose a meta-learning method for positive and unlabeled (PU) classification, which improves the performance of binary classifiers obtained from only PU data in unseen target tasks. PU learning is an important problem since PU data naturally arise in real-world applications such as outlier detection and information retrieval. Existing PU learning methods require many PU data, but sufficient data are often unavailable in practice. The proposed method minimizes the test classification risk after the model is adapted to PU data by using related tasks that consist of positive, negative, and unlabeled data. We formulate the adaptation as an estimation problem of the Bayes optimal classifier, which is an optimal classifier to minimize the classification risk. The proposed method embeds each instance into a task-specific space using neural networks. With the embedded PU data, the Bayes optimal classifier is estimated through density-ratio estimation of PU densities, whose solution is obtained as a closed-form solution. The closed-form solution enables us to efficiently and effectively minimize the test classification risk. We empirically show that the proposed method outperforms existing methods with one synthetic and three real-world datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 461,333 |
2312.11057 | DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via
Diffusion Models | Dataset sanitization is a widely adopted proactive defense against poisoning-based backdoor attacks, aimed at filtering out and removing poisoned samples from training datasets. However, existing methods have shown limited efficacy in countering the ever-evolving trigger functions, and often lead to considerable degradation of benign accuracy. In this paper, we propose DataElixir, a novel sanitization approach tailored to purify poisoned datasets. We leverage diffusion models to eliminate trigger features and restore benign features, thereby turning the poisoned samples into benign ones. Specifically, with multiple iterations of the forward and reverse process, we extract intermediary images and their predicted labels for each sample in the original dataset. Then, we identify anomalous samples in terms of the presence of label transition of the intermediary images, detect the target label by quantifying distribution discrepancy, select their purified images considering pixel and feature distance, and determine their ground-truth labels by training a benign model. Experiments conducted on 9 popular attacks demonstrate that DataElixir effectively mitigates various complex attacks while exerting minimal impact on benign accuracy, surpassing the performance of baseline defense methods. | false | false | false | false | true | false | false | false | false | false | false | true | true | false | false | false | false | false | 416,432
1502.04381 | Random Walks, Markov Processes and the Multiscale Modular Organization
of Complex Networks | Most methods proposed to uncover communities in complex networks rely on combinatorial graph properties. Usually an edge-counting quality function, such as modularity, is optimized over all partitions of the graph compared against a null random graph model. Here we introduce a systematic dynamical framework to design and analyze a wide variety of quality functions for community detection. The quality of a partition is measured by its Markov Stability, a time-parametrized function defined in terms of the statistical properties of a Markov process taking place on the graph. The Markov process provides a dynamical sweeping across all scales in the graph, and the time scale is an intrinsic parameter that uncovers communities at different resolutions. This dynamic-based community detection leads to a compound optimization, which favours communities of comparable centrality (as defined by the stationary distribution), and provides a unifying framework for spectral algorithms, as well as different heuristics for community detection, including versions of modularity and Potts model. Our dynamic framework creates a systematic link between different stochastic dynamics and their corresponding notions of optimal communities under distinct (node and edge) centralities. We show that the Markov Stability can be computed efficiently to find multi-scale community structure in large networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 40,262 |
2405.10516 | Language Models can Evaluate Themselves via Probability Discrepancy | In this paper, we initiate our discussion by demonstrating how Large Language Models (LLMs), when tasked with responding to queries, display a more even probability distribution in their answers if they are more adept, as opposed to their less skilled counterparts. Expanding on this foundational insight, we propose a new self-evaluation method ProbDiff for assessing the efficacy of various LLMs. This approach obviates the necessity for an additional evaluation model or the dependence on external, proprietary models like GPT-4 for judgment. It uniquely utilizes the LLMs being tested to compute the probability discrepancy between the initial response and its revised versions. A higher discrepancy for a given query between two LLMs indicates a relatively weaker capability. Our findings reveal that ProbDiff achieves results on par with those obtained from evaluations based on GPT-4, spanning a range of scenarios that include natural language generation (NLG) tasks such as translation, summarization, and our proposed Xiaohongshu blog writing task, and benchmarks for LLM evaluation like AlignBench, MT-Bench, and AlpacaEval, across LLMs of varying magnitudes. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 454,788 |
1411.3935 | Overlapping Community Discovery Methods: A Survey | The detection of overlapping communities is a challenging problem which is gaining increasing interest in recent years because of the natural attitude of individuals, observed in real-world networks, to participate in multiple groups at the same time. This review gives a description of the main proposals in the field. Besides the methods designed for static networks, some new approaches that deal with the detection of overlapping communities in networks that change over time, are described. Methods are classified with respect to the underlying principles guiding them to obtain a network division in groups sharing part of their nodes. For each of them we also report, when available, computational complexity and web site address from which it is possible to download the software implementing the method. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 37,555 |
1904.10079 | The MineRL 2019 Competition on Sample Efficient Reinforcement Learning
using Human Priors | Though deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples. As state-of-the-art reinforcement learning (RL) systems require an exponentially increasing number of samples, their development is restricted to a continually shrinking segment of the AI community. Likewise, many of these systems cannot be applied to real-world problems, where environment samples are expensive. Resolution of these limitations requires new, sample-efficient methods. To facilitate research in this direction, we introduce the MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors. The primary goal of the competition is to foster the development of algorithms which can efficiently leverage human demonstrations to drastically reduce the number of samples needed to solve complex, hierarchical, and sparse environments. To that end, we introduce: (1) the Minecraft ObtainDiamond task, a sequential decision making environment requiring long-term planning, hierarchical control, and efficient exploration methods; and (2) the MineRL-v0 dataset, a large-scale collection of over 60 million state-action pairs of human demonstrations that can be resimulated into embodied trajectories with arbitrary modifications to game state and visuals. Participants will compete to develop systems which solve the ObtainDiamond task with a limited number of samples from the environment simulator, Malmo. The competition is structured into two rounds in which competitors are provided several paired versions of the dataset and environment with different game textures. At the end of each round, competitors will submit containerized versions of their learning algorithms and they will then be trained/evaluated from scratch on a hold-out dataset-environment pair for a total of 4-days on a prespecified hardware platform. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 128,544
1506.01367 | A Nearly Optimal and Agnostic Algorithm for Properly Learning a Mixture
of k Gaussians, for any Constant k | Learning a Gaussian mixture model (GMM) is a fundamental problem in machine learning, learning theory, and statistics. One notion of learning a GMM is proper learning: here, the goal is to find a mixture of $k$ Gaussians $\mathcal{M}$ that is close to the density $f$ of the unknown distribution from which we draw samples. The distance between $\mathcal{M}$ and $f$ is typically measured in the total variation or $L_1$-norm. We give an algorithm for learning a mixture of $k$ univariate Gaussians that is nearly optimal for any fixed $k$. The sample complexity of our algorithm is $\tilde{O}(\frac{k}{\epsilon^2})$ and the running time is $(k \cdot \log\frac{1}{\epsilon})^{O(k^4)} + \tilde{O}(\frac{k}{\epsilon^2})$. It is well-known that this sample complexity is optimal (up to logarithmic factors), and it was already achieved by prior work. However, the best known time complexity for proper learning a $k$-GMM was $\tilde{O}(\frac{1}{\epsilon^{3k-1}})$. In particular, the dependence between $\frac{1}{\epsilon}$ and $k$ was exponential. We significantly improve this dependence by replacing the $\frac{1}{\epsilon}$ term with a $\log \frac{1}{\epsilon}$ while only increasing the exponent moderately. Hence, for any fixed $k$, the $\tilde{O} (\frac{k}{\epsilon^2})$ term dominates our running time, and thus our algorithm runs in time which is nearly-linear in the number of samples drawn. Achieving a running time of $\textrm{poly}(k, \frac{1}{\epsilon})$ for proper learning of $k$-GMMs has recently been stated as an open problem by multiple researchers, and we make progress on this question. Moreover, our approach offers an agnostic learning guarantee: our algorithm returns a good GMM even if the distribution we are sampling from is not a mixture of Gaussians. To the best of our knowledge, our algorithm is the first agnostic proper learning algorithm for GMMs. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 43,794
2401.00110 | Diffusion Model with Perceptual Loss | Diffusion models without guidance tend to generate unrealistic samples, yet the cause of this problem is not fully studied. Our analysis suggests that the loss objective plays an important role in shaping the learned distribution and the common mean squared error loss is not optimal. We hypothesize that a better loss objective can be designed with inductive biases and propose a novel self-perceptual loss that utilizes the diffusion model itself as the perceptual loss. Our work demonstrates that perceptual loss can be used in diffusion training to improve sample quality effectively. Models trained using our objective can generate realistic samples without guidance. We hope our work paves the way for more future explorations of the diffusion loss objective. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 418,882 |
cs/0306066 | The COMPASS Event Store in 2002 | COMPASS, the fixed-target experiment at CERN studying the structure of the nucleon and spectroscopy, collected over 260 TB during summer 2002 run. All these data, together with reconstructed events information, were put from the beginning in a database infrastructure based on Objectivity/DB and on the hierarchical storage manager CASTOR. The experience in the usage of the database is reviewed and the evolution of the system outlined. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 537,880 |
2306.16009 | Accelerating Transducers through Adjacent Token Merging | Recent end-to-end automatic speech recognition (ASR) systems often utilize a Transformer-based acoustic encoder that generates embedding at a high frame rate. However, this design is inefficient, particularly for long speech signals due to the quadratic computation of self-attention. To address this, we propose a new method, Adjacent Token Merging (A-ToMe), which gradually combines adjacent tokens with high similarity scores between their key values. In this way, the total time step could be reduced, and the inference of both the encoder and joint network is accelerated. Experiments on LibriSpeech show that our method can reduce 57% of tokens and improve the inference speed on GPU by 70% without any notable loss of accuracy. Additionally, we demonstrate that A-ToMe is also an effective solution to reduce tokens in long-form ASR, where the input speech consists of multiple utterances. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 376,244 |
2308.09812 | Reliability and Delay Analysis of 3-Dimensional Networks with
Multi-Connectivity: Satellite, HAPs, and Cellular Communications | Aerial vehicles (AVs) such as electric vertical take-off and landing (eVTOL) aircraft make aerial passenger transportation a reality in urban environments. However, their communication connectivity is still under research to realize their safe and full-scale operation. This paper envisages a multi-connectivity (MC) enabled aerial network to provide ubiquitous and reliable service to AVs. Vertical heterogeneous networks with direct air-to-ground (DA2G) and air-to-air (A2A) communication, high altitude platforms (HAPs), and low Earth orbit (LEO) satellites are considered. We evaluate the end-to-end (E2E) multi-hop reliability and network availability of the downlink of AVs for remote piloting scenarios, and control/telemetry traffic. Command and control (C2) connectivity service requires ultra-reliable and low-latency communication (URLLC), therefore we analyse E2E reliability and latency under the finite blocklength (FBL) regime. We explore how different MC options satisfy the demanding E2E connectivity requirements taking into account antenna radiation patterns and unreliable backhaul links. Since providing seamless connectivity to AVs is very challenging due to the line-of-sight (LoS) interference and reduced gains of downtilt ground base station (BS) antennas, we use coordinated multi-point (CoMP) among ground BSs to alleviate the inter-cell interference. Furthermore, we solve an optimization problem to select the best MC path under the quality of service (QoS) constraints. We maximize spectral efficiency (SE) to specify the optimum MC path with the minimum number of required links. Based on the simulation results, we find out that even with very efficient interference mitigation, MC is the key enabler for safe remote piloting operations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 386,435 |
2203.01844 | Stochastic Model Predictive Control using Initial State Optimization | We propose a stochastic MPC scheme using an optimization over the initial state for the predicted trajectory. Considering linear discrete-time systems under unbounded additive stochastic disturbances subject to chance constraints, we use constraint tightening based on probabilistic reachable sets to design the MPC. The scheme avoids the infeasibility issues arising from unbounded disturbances by including the initial state as a decision variable. We show that the stabilizing control scheme can guarantee constraint satisfaction in closed loop, assuming unimodal disturbances. In addition to illustrating these guarantees, the numerical example indicates further advantages of optimizing over the initial state for the transient behavior. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 283,527 |
1705.01359 | FOIL it! Find One mismatch between Image and Language caption | In this paper, we aim to understand whether current language and vision (LaVi) models truly grasp the interaction between the two modalities. To this end, we propose an extension of the MSCOCO dataset, FOIL-COCO, which associates images with both correct and "foil" captions, that is, descriptions of the image that are highly similar to the original ones, but contain one single mistake ("foil word"). We show that current LaVi models fall into the traps of this data and perform badly on three tasks: a) caption classification (correct vs. foil); b) foil word detection; c) foil word correction. Humans, in contrast, have near-perfect performance on those tasks. We demonstrate that merely utilising language cues is not enough to model FOIL-COCO and that it challenges the state-of-the-art by requiring a fine-grained understanding of the relation between text and image. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | true | 72,832 |
2110.04622 | Does Preprocessing Help Training Over-parameterized Neural Networks? | Deep neural networks have achieved impressive performance in many areas. Designing a fast and provable method for training neural networks is a fundamental question in machine learning. The classical training method requires paying $\Omega(mnd)$ cost for both forward computation and backward computation, where $m$ is the width of the neural network, and we are given $n$ training points in $d$-dimensional space. In this paper, we propose two novel preprocessing ideas to bypass this $\Omega(mnd)$ barrier: $\bullet$ First, by preprocessing the initial weights of the neural networks, we can train the neural network in $\widetilde{O}(m^{1-\Theta(1/d)} n d)$ cost per iteration. $\bullet$ Second, by preprocessing the input data points, we can train the neural network in $\widetilde{O} (m^{4/5} nd )$ cost per iteration. From the technical perspective, our result is a sophisticated combination of tools in different fields, greedy-type convergence analysis in optimization, sparsity observation in practical work, high-dimensional geometric search in data structure, concentration and anti-concentration in probability. Our results also provide theoretical insights for a large number of previously established fast training methods. In addition, our classical algorithm can be generalized to the Quantum computation model. Interestingly, we can get a similar sublinear cost per iteration but avoid preprocessing initial weights or input data points. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 259,966 |
2407.13307 | Conformal Performance Range Prediction for Segmentation Output Quality
Control | Recent works have introduced methods to estimate segmentation performance without ground truth, relying solely on neural network softmax outputs. These techniques hold potential for intuitive output quality control. However, such performance estimates rely on calibrated softmax outputs, which is often not the case in modern neural networks. Moreover, the estimates do not take into account inherent uncertainty in segmentation tasks. These limitations may render precise performance predictions unattainable, restricting the practical applicability of performance estimation methods. To address these challenges, we develop a novel approach for predicting performance ranges with statistical guarantees of containing the ground truth with a user specified probability. Our method leverages sampling-based segmentation uncertainty estimation to derive heuristic performance ranges, and applies split conformal prediction to transform these estimates into rigorous prediction ranges that meet the desired guarantees. We demonstrate our approach on the FIVES retinal vessel segmentation dataset and compare five commonly used sampling-based uncertainty estimation techniques. Our results show that it is possible to achieve the desired coverage with small prediction ranges, highlighting the potential of performance range prediction as a valuable tool for output quality control. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 474,329 |
2501.18793 | OT-Transformer: A Continuous-time Transformer Architecture with Optimal
Transport Regularization | Transformers have achieved state-of-the-art performance in numerous tasks. In this paper, we propose a continuous-time formulation of transformers. Specifically, we consider a dynamical system whose governing equation is parametrized by transformer blocks. We leverage optimal transport theory to regularize the training problem, which enhances stability in training and improves generalization of the resulting model. Moreover, we demonstrate in theory that this regularization is necessary as it promotes uniqueness and regularity of solutions. Our model is flexible in that almost any existing transformer architectures can be adopted to construct the dynamical system with only slight modifications to the existing code. We perform extensive numerical experiments on tasks motivated by natural language processing, image classification, and point cloud classification. Our experimental results show that the proposed method improves the performance of its discrete counterpart and outperforms relevant comparing models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 528,857 |
2205.03719 | Odor Descriptor Understanding through Prompting | Embeddings from contemporary natural language processing (NLP) models are commonly used as numerical representations for words or sentences. However, odor descriptor words, like "leather" or "fruity", vary significantly between their commonplace usage and their olfactory usage, as a result traditional methods for generating these embeddings do not suffice. In this paper, we present two methods to generate embeddings for odor words that are more closely aligned with their olfactory meanings when compared to off-the-shelf embeddings. These generated embeddings outperform the previous state-of-the-art and contemporary fine-tuning/prompting methods on a pre-existing zero-shot odor-specific NLP benchmark. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 295,385 |
2410.19580 | Hybrid Memetic Search for Electric Vehicle Routing with Time Windows,
Simultaneous Pickup-Delivery, and Partial Recharges | With growing environmental concerns, electric vehicles for logistics have gained significant attention within the computational intelligence community in recent years. This work addresses an emerging and significant extension of the electric vehicle routing problem (EVRP), namely EVRP with time windows, simultaneous pickup-delivery, and partial recharges (EVRP-TW-SPD), which has wide real-world applications. We propose a hybrid memetic algorithm (HMA) for solving EVRP-TW-SPD. HMA incorporates two novel components: a parallel-sequential station insertion procedure for handling partial recharges that can better avoid local optima compared to purely sequential insertion, and a cross-domain neighborhood search that explores solution spaces of both electric and non-electric problem domains simultaneously. These components can also be easily applied to various EVRP variants. To bridge the gap between existing benchmarks and real-world scenarios, we introduce a new, large-scale EVRP-TW-SPD benchmark set derived from real-world applications, containing instances with many more customers and charging stations than existing benchmark instances. Extensive experiments demonstrate the significant performance advantages of HMA over existing algorithms across a wide range of problem instances. Both the benchmark set and HMA will be open-sourced to facilitate further research in this area. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 502,376 |
1511.01440 | A low-complexity 2D signal space diversity solution for future
broadcasting systems | DVB-T2 was the first industrial standard deploying rotated and cyclic Q delayed (RCQD) modulation to improve performance over fading channels. This enables important gains compared to conventional quadrature amplitude modulations (QAM) under severe channel conditions. However, the corresponding demodulation complexity still prevents its use for wider applications. This paper proposes several rotation angles for different QAM constellations and a corresponding low-complexity detection method. Results show that the proposed solution simplifies both the transmitter and the receiver with often better performance than the proposed angles in DVB-T2. Compared with the lowest complexity demappers currently used in DVB-T2, the proposed solution achieves an additional reduction by more than 60%. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 48,503
1805.07502 | On Deep Ensemble Learning from a Function Approximation Perspective | In this paper, we propose to provide a general ensemble learning framework based on deep learning models. Given a group of unit models, the proposed deep ensemble learning framework will effectively combine their learning results via a multilayered ensemble model. In the case when the unit model mathematical mappings are bounded, sigmoidal and discriminatory, we demonstrate that the deep ensemble learning framework can achieve a universal approximation of any functions from the input space to the output space. Meanwhile, to achieve such a performance, the deep ensemble learning framework also imposes a strict constraint on the number of involved unit models. According to the theoretical proof provided in this paper, given the input feature space of dimension d, the required unit model number will be 2d, if the ensemble model involves one single layer. Furthermore, as the ensemble component goes deeper, the number of required unit models is proven to be lowered down exponentially. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 97,847
2110.10522 | CIM-PPO: Proximal Policy Optimization with Liu-Correntropy Induced Metric | As a popular Deep Reinforcement Learning (DRL) algorithm, Proximal Policy Optimization (PPO) has demonstrated remarkable efficacy in numerous complex tasks. According to the penalty mechanism in a surrogate, PPO can be classified into PPO with KL divergence (PPO-KL) and PPO with Clip (PPO-Clip). In this paper, we analyze the impact of asymmetry in KL divergence on PPO-KL and highlight that when this asymmetry is pronounced, it will misguide the improvement of the surrogate. To address this issue, we represent the PPO-KL in inner product form and demonstrate that the KL divergence is a Correntropy Induced Metric (CIM) in Euclidean space. Subsequently, we extend the PPO-KL to the Reproducing Kernel Hilbert Space (RKHS), redefine the inner products with RKHS, and propose the PPO-CIM algorithm. Moreover, this paper states that the PPO-CIM algorithm has a lower computation cost in policy gradient and proves that PPO-CIM can guarantee the new policy is within the trust region while the kernel satisfies some conditions. Finally, we design experiments based on six Mujoco continuous-action tasks to validate the proposed algorithm. The experimental results validate that the asymmetry of KL divergence can affect the policy improvement of PPO-KL and show that the PPO-CIM can perform better than both PPO-KL and PPO-Clip in most tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 262,181
2401.12351 | Performance Analysis of 6G Multiuser Massive MIMO-OFDM THz Wireless
Systems with Hybrid Beamforming under Intercarrier Interference | 6G networks are expected to provide more diverse capabilities than their predecessors and are likely to support applications beyond current mobile applications, such as virtual and augmented reality (VR/AR), AI, and the Internet of Things (IoT). In contrast to typical multiple-input multiple-output (MIMO) systems, THz MIMO precoding cannot be conducted totally at baseband using digital precoders due to the restricted number of signal mixers and analog-to-digital converters that can be supported due to their cost and power consumption. In this thesis, we analyzed the performance of multiuser massive MIMO-OFDM THz wireless systems with hybrid beamforming. Carrier frequency offset (CFO) is one of the most well-known disturbances for OFDM. For practicality, we accounted for CFO, which results in Intercarrier Interference. Incorporating the combined impact of molecular absorption, high sparsity, and multi-path fading, we analyzed a three-dimensional wideband THz channel and the carrier frequency offset in multi-carrier systems. With this model, we first presented a two-stage wideband hybrid beamforming technique comprising Riemannian manifolds optimization for analog beamforming and then a zero-forcing (ZF) approach for digital beamforming. We adjusted the objective function to reduce complexity, and instead of maximizing the bit rate, we determined parameters by minimizing interference. Numerical results demonstrate the significance of considering ICI for practical implementation for the THz system. We demonstrated how our change in problem formulation minimizes latency without compromising results. We also evaluated spectral efficiency by varying the number of RF chains and antennas. The spectral efficiency grows as the number of RF chains and antennas increases, but the spectral efficiency of antennas declines when the number of users increases. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 423,345
cs/0701062 | Network Coding over a Noisy Relay : a Belief Propagation Approach | In recent years, network coding has been investigated as a method to obtain improvements in wireless networks. A typical assumption of previous work is that relay nodes performing network coding can decode the messages from sources perfectly. On a simple relay network, we design a scheme to obtain network coding gain even when the relay node cannot perfectly decode its received messages. In our scheme, the operation at the relay node resembles message passing in belief propagation, sending the logarithm likelihood ratio (LLR) of the network coded message to the destination. Simulation results demonstrate the gain obtained over different channel conditions. The goal of this paper is not to give a theoretical result, but to point to possible interaction of network coding with user cooperation in noisy scenario. The extrinsic information transfer (EXIT) chart is shown to be a useful engineering tool to analyze the performance of joint channel coding and network coding in the network. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 540,040 |
2203.02867 | Diffusion Maps: Using the Semigroup Property for Parameter Tuning | Diffusion maps (DM) constitute a classic dimension reduction technique, for data lying on or close to a (relatively) low-dimensional manifold embedded in a much larger dimensional space. The DM procedure consists in constructing a spectral parametrization for the manifold from simulated random walks or diffusion paths on the data set. However, DM is hard to tune in practice. In particular, the task to set a diffusion time t when constructing the diffusion kernel matrix is critical. We address this problem by using the semigroup property of the diffusion operator. We propose a semigroup criterion for picking t. Experiments show that this principled approach is effective and robust. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 283,891
2003.13135 | Learning Latent Causal Structures with a Redundant Input Neural Network | Most causal discovery algorithms find causal structure among a set of observed variables. Learning the causal structure among latent variables remains an important open problem, particularly when using high-dimensional data. In this paper, we address a problem for which it is known that inputs cause outputs, and these causal relationships are encoded by a causal network among a set of an unknown number of latent variables. We developed a deep learning model, which we call a redundant input neural network (RINN), with a modified architecture and a regularized objective function to find causal relationships between input, hidden, and output variables. More specifically, our model allows input variables to directly interact with all latent variables in a neural network to influence what information the latent variables should encode in order to generate the output variables accurately. In this setting, the direct connections between input and latent variables make the latent variables partially interpretable; furthermore, the connectivity among the latent variables in the neural network serves to model their potential causal relationships to each other and to the output variables. A series of simulation experiments provide support that the RINN method can successfully recover latent causal structure between input and output variables. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 170,116
1612.02485 | Comparative Evaluation of Big-Data Systems on Scientific Image Analytics Workloads | Scientific discoveries are increasingly driven by analyzing large volumes of image data. Many new libraries and specialized database management systems (DBMSs) have emerged to support such tasks. It is unclear, however, how well these systems support real-world image analysis use cases, and how performant the image analytics tasks implemented on top of such systems are. In this paper, we present the first comprehensive evaluation of large-scale image analysis systems using two real-world scientific image data processing use cases. We evaluate five representative systems (SciDB, Myria, Spark, Dask, and TensorFlow) and find that each of them has shortcomings that complicate implementation or hurt performance. Such shortcomings lead to new research opportunities in making large-scale image analysis both efficient and easy to use. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 65,231
1706.04082 | Distributed Submodular Maximization with Limited Information | We consider a class of distributed submodular maximization problems in which each agent must choose a single strategy from its strategy set. The global objective is to maximize a submodular function of the strategies chosen by each agent. When choosing a strategy, each agent has access to only a limited number of other agents' choices. For each of its strategies, an agent can evaluate its marginal contribution to the global objective given its information. The main objective is to investigate how this limitation of information about the strategies chosen by other agents affects the performance when agents make choices according to a local greedy algorithm. In particular, we provide lower bounds on the performance of greedy algorithms for submodular maximization, which depend on the clique number of a graph that captures the information structure. We also characterize graph-theoretic upper bounds in terms of the chromatic number of the graph. Finally, we demonstrate how certain graph properties limit the performance of the greedy algorithm. Simulations on several common models for random networks demonstrate our results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 75,276 |
2208.02025 | OLLIE: Derivation-based Tensor Program Optimizer | Boosting the runtime performance of deep neural networks (DNNs) is critical due to their wide adoption in real-world tasks. Existing approaches to optimizing the tensor algebra expression of a DNN only consider expressions representable by a fixed set of predefined operators, missing possible optimization opportunities between general expressions. We propose OLLIE, the first derivation-based tensor program optimizer. OLLIE optimizes tensor programs by leveraging transformations between general tensor algebra expressions, enabling a significantly larger expression search space that includes those supported by prior work as special cases. OLLIE uses a hybrid derivation-based optimizer that effectively combines explorative and guided derivations to quickly discover highly optimized expressions. Evaluation on seven DNNs shows that OLLIE can outperform existing optimizers by up to 2.73$\times$ (1.46$\times$ on average) on an A100 GPU and up to 2.68$\times$ (1.51$\times$) on a V100 GPU. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 311,356
2410.11275 | Shallow diffusion networks provably learn hidden low-dimensional structure | Diffusion-based generative models provide a powerful framework for learning to sample from a complex target distribution. The remarkable empirical success of these models applied to high-dimensional signals, including images and video, stands in stark contrast to classical results highlighting the curse of dimensionality for distribution recovery. In this work, we take a step towards understanding this gap through a careful analysis of learning diffusion models over the Barron space of single layer neural networks. In particular, we show that these shallow models provably adapt to simple forms of low dimensional structure, thereby avoiding the curse of dimensionality. We combine our results with recent analyses of sampling with diffusion models to provide an end-to-end sample complexity bound for learning to sample from structured distributions. Importantly, our results do not require specialized architectures tailored to particular latent structures, and instead rely on the low-index structure of the Barron space to adapt to the underlying distribution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 498,477
2411.08861 | Interaction Testing in Variation Analysis | Relationships of cause and effect are of prime importance for explaining scientific phenomena. Often, rather than just understanding the effects of causes, researchers also wish to understand how a cause $X$ affects an outcome $Y$ mechanistically -- i.e., what are the causal pathways that are activated between $X$ and $Y$. For analyzing such questions, a range of methods has been developed over decades under the rubric of causal mediation analysis. Traditional mediation analysis focuses on decomposing the average treatment effect (ATE) into direct and indirect effects, and therefore focuses on the ATE as the central quantity. This corresponds to providing explanations for associations in the interventional regime, such as when the treatment $X$ is randomized. Commonly, however, it is of interest to explain associations in the observational regime, and not just in the interventional regime. In this paper, we introduce \textit{variation analysis}, an extension of mediation analysis that focuses on the total variation (TV) measure between $X$ and $Y$, written as $\mathrm{E}[Y \mid X=x_1] - \mathrm{E}[Y \mid X=x_0]$. The TV measure encompasses both causal and confounded effects, as opposed to the ATE which only encompasses causal (direct and mediated) variations. In this way, the TV measure is suitable for providing explanations in the natural regime and answering questions such as ``why is $X$ associated with $Y$?''. Our focus is on decomposing the TV measure, in a way that explicitly includes direct, indirect, and confounded variations. Furthermore, we also decompose the TV measure to include interaction terms between these different pathways. Subsequently, interaction testing is introduced, involving hypothesis tests to determine if interaction terms are significantly different from zero. If interactions are not significant, more parsimonious decompositions of the TV measure can be used. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 508,034
1411.4695 | Feedback Solution to Optimal Switching Problems with Switching Cost | The problem of optimal switching between nonlinear autonomous subsystems is investigated in this study where the objective is not only bringing the states close to the desired point, but also adjusting the switching pattern, in the sense of penalizing switching occurrences and assigning different preferences to utilization of different modes. The mode sequence is unspecified and a switching cost term is used in the cost function for penalizing each switching. It is shown that once a switching cost is incorporated, the optimal cost-to-go function depends on the already active subsystem, i.e., the subsystem which was engaged in the previous time step. Afterwards, an approximate dynamic programming based method is developed which provides an approximation of the optimal solution to the problem in a feedback form and for different initial conditions. Finally, the performance of the method is analyzed through numerical examples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 37,664
2407.15312 | FMDNN: A Fuzzy-guided Multi-granular Deep Neural Network for Histopathological Image Classification | Histopathological image classification constitutes a pivotal task in computer-aided diagnostics. The precise identification and categorization of histopathological images are of paramount significance for early disease detection and treatment. In the diagnostic process of pathologists, a multi-tiered approach is typically employed to assess abnormalities in cell regions at different magnifications. However, feature extraction is often performed at a single granularity, overlooking the multi-granular characteristics of cells. To address this issue, we propose the Fuzzy-guided Multi-granularity Deep Neural Network (FMDNN). Inspired by the multi-granular diagnostic approach of pathologists, we perform feature extraction on cell structures at coarse, medium, and fine granularity, enabling the model to fully harness the information in histopathological images. We incorporate the theory of fuzzy logic to address the challenge of redundant key information arising during multi-granular feature extraction. Cell features are described from different perspectives using multiple fuzzy membership functions, which are fused to create universal fuzzy features. A fuzzy-guided cross-attention module guides universal fuzzy features toward multi-granular features. We propagate these features through an encoder to all patch tokens, aiming to achieve enhanced classification accuracy and robustness. In experiments on multiple public datasets, our model exhibits a significant improvement in accuracy over commonly used classification methods for histopathological image classification and shows commendable interpretability. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 475,125
2310.10667 | Enhancing Network Resilience through Machine Learning-powered Graph Combinatorial Optimization: Applications in Cyber Defense and Information Diffusion | With the burgeoning advancements of computing and network communication technologies, network infrastructures and their application environments have become increasingly complex. Due to the increased complexity, networks are more prone to hardware faults and highly susceptible to cyber-attacks. Therefore, for rapidly growing network-centric applications, network resilience is essential to minimize the impact of attacks and to ensure that the network provides an acceptable level of services during attacks, faults or disruptions. In this regard, this thesis focuses on developing effective approaches for enhancing network resilience. Existing approaches for enhancing network resilience emphasize determining bottleneck nodes and edges in the network and designing proactive responses to safeguard the network against attacks. However, existing solutions generally consider broader application domains and possess limited applicability when applied to specific application areas such as cyber defense and information diffusion, which are highly popular application domains among cyber attackers. This thesis aims to design effective, efficient and scalable techniques for discovering bottleneck nodes and edges in the network to enhance network resilience in cyber defense and information diffusion application domains. We first investigate a cyber defense graph optimization problem, i.e., hardening active directory systems by discovering bottleneck edges in the network. We then study the problem of identifying bottleneck structural hole spanner nodes, which are crucial for information diffusion in the network. We transform both problems into graph-combinatorial optimization problems and design machine learning based approaches for discovering bottleneck points vital for enhancing network resilience. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 400,337
1510.05963 | Semantic, Cognitive, and Perceptual Computing: Advances toward Computing for Human Experience | The World Wide Web continues to evolve and serve as the infrastructure for carrying massive amounts of multimodal and multisensory observations. These observations capture various situations pertinent to people's needs and interests along with all their idiosyncrasies. To support human-centered computing that empowers people in making better and timely decisions, we look towards computation that is inspired by human perception and cognition. Toward this goal, we discuss computing paradigms of semantic computing, cognitive computing, and an emerging aspect of computing, which we call perceptual computing. In our view, these offer a continuum to make the most out of vast, growing, and diverse data pertinent to human needs and interests. We propose details of perceptual computing characterized by interpretation and exploration operations comparable to the interleaving of bottom and top brain processing. This article consists of two parts. First we describe semantic computing, cognitive computing, and perceptual computing to lay out distinctions while acknowledging their complementary capabilities. We then provide a conceptual overview of the newest of these three paradigms--perceptual computing. For further insights, we focus on an application scenario of asthma management converting massive, heterogeneous and multimodal (big) data into actionable information or smart data. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 48,070
2104.03693 | Learning specialized activation functions with the Piecewise Linear Unit | The choice of activation functions is crucial for modern deep neural networks. Popular hand-designed activation functions like the Rectified Linear Unit (ReLU) and its variants show promising performance in various tasks and models. Swish, the automatically discovered activation function, has been proposed and outperforms ReLU on many challenging datasets. However, it has two main drawbacks. First, the tree-based search space is highly discrete and restricted, which is difficult for searching. Second, the sample-based searching method is inefficient, making it infeasible to find specialized activation functions for each dataset or neural architecture. To tackle these drawbacks, we propose a new activation function called the Piecewise Linear Unit (PWLU), which incorporates a carefully designed formulation and learning method. It can learn specialized activation functions and achieves SOTA performance on large-scale datasets like ImageNet and COCO. For example, on ImageNet classification dataset, PWLU improves 0.9%/0.53%/1.0%/1.7%/1.0% top-1 accuracy over Swish for ResNet-18/ResNet-50/MobileNet-V2/MobileNet-V3/EfficientNet-B0. PWLU is also easy to implement and efficient at inference, which can be widely applied in real-world applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 229,147
1905.06514 | ReshapeGAN: Object Reshaping by Providing A Single Reference Image | The aim of this work is learning to reshape the object in an input image to an arbitrary new shape, by just simply providing a single reference image with an object instance in the desired shape. We propose a new Generative Adversarial Network (GAN) architecture for such an object reshaping problem, named ReshapeGAN. The network can be tailored for handling all kinds of problem settings, including both within-domain (or single-dataset) reshaping and cross-domain (typically across multiple datasets) reshaping, with paired or unpaired training data. The appearance of the input object is preserved in all cases, and thus it is still identifiable after reshaping, which has never been achieved as far as we are aware. We present the tailored models of the proposed ReshapeGAN for all the problem settings, and have them tested on 8 kinds of reshaping tasks with 13 different datasets, demonstrating the ability of ReshapeGAN on generating convincing and superior results for object reshaping. To the best of our knowledge, we are the first to be able to make one GAN framework work on all such object reshaping tasks, especially the cross-domain tasks on handling multiple diverse datasets. We present here both ablation studies on our proposed ReshapeGAN models and comparisons with the state-of-the-art models when they are made comparable, using all kinds of applicable metrics that we are aware of. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 131,016
1912.02651 | Effect of Imbalanced Datasets on Security of Industrial IoT Using Machine Learning | Machine learning algorithms have been shown to be suitable for securing platforms for IT systems. However, due to the fundamental differences between the industrial internet of things (IIoT) and regular IT networks, a special performance review needs to be considered. The vulnerabilities and security requirements of IIoT systems demand different considerations. In this paper, we study the reasons why machine learning must be integrated into the security mechanisms of the IIoT, and where it currently falls short of satisfactory performance. The challenges and real-world considerations associated with this matter are studied in our experimental design. We use an IIoT testbed resembling a real industrial plant to show our proof of concept. | false | false | false | false | false | false | true | false | false | false | true | false | true | false | false | false | true | false | 156,405
2103.00191 | FisheyeSuperPoint: Keypoint Detection and Description Network for Fisheye Images | Keypoint detection and description is a commonly used building block in computer vision systems particularly for robotics and autonomous driving. However, the majority of techniques to date have focused on standard cameras with little consideration given to fisheye cameras which are commonly used in urban driving and automated parking. In this paper, we propose a novel training and evaluation pipeline for fisheye images. We make use of SuperPoint as our baseline which is a self-supervised keypoint detector and descriptor that has achieved state-of-the-art results on homography estimation. We introduce a fisheye adaptation pipeline to enable training on undistorted fisheye images. We evaluate the performance on the HPatches benchmark, and, by introducing a fisheye based evaluation method for detection repeatability and descriptor matching correctness, on the Oxford RobotCar dataset. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 222,182
2307.05126 | Enhancing Continuous Time Series Modelling with a Latent ODE-LSTM Approach | Due to their dynamic properties such as irregular sampling rate and high-frequency sampling, Continuous Time Series (CTS) are found in many applications. Since CTS with irregular sampling rate are difficult to model with standard Recurrent Neural Networks (RNNs), RNNs have been generalised to have continuous-time hidden dynamics defined by a Neural Ordinary Differential Equation (Neural ODE), leading to the ODE-RNN model. Another approach that provides a better modelling is that of the Latent ODE model, which constructs a continuous-time model where a latent state is defined at all times. The Latent ODE model uses a standard RNN as the encoder and a Neural ODE as the decoder. However, since the RNN encoder leads to difficulties with missing data and ill-defined latent variables, a Latent ODE-RNN model has recently been proposed that uses an ODE-RNN model as the encoder instead. Both the Latent ODE and Latent ODE-RNN models are difficult to train due to the vanishing and exploding gradients problem. To overcome this problem, the main contribution of this paper is to propose and illustrate a new model based on a new Latent ODE using an ODE-LSTM (Long Short-Term Memory) network as an encoder -- the Latent ODE-LSTM model. To limit the growth of the gradients the Norm Gradient Clipping strategy was embedded on the Latent ODE-LSTM model. The performance evaluation of the new Latent ODE-LSTM (with and without Norm Gradient Clipping) for modelling CTS with regular and irregular sampling rates is then demonstrated. Numerical experiments show that the new Latent ODE-LSTM performs better than Latent ODE-RNNs and can avoid the vanishing and exploding gradients during training. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 378,630
2405.08973 | An adaptive approach to Bayesian Optimization with switching costs | We investigate modifications to Bayesian Optimization for a resource-constrained setting of sequential experimental design where changes to certain design variables of the search space incur a switching cost. This models the scenario where there is a trade-off between evaluating more while maintaining the same setup, or switching and restricting the number of possible evaluations due to the incurred cost. We adapt two process-constrained batch algorithms to this sequential problem formulation, and propose two new methods: one cost-aware and one cost-ignorant. We validate and compare the algorithms using a set of 7 scalable test functions in different dimensionalities and switching-cost settings for 30 total configurations. Our proposed cost-aware hyperparameter-free algorithm yields comparable results to tuned process-constrained algorithms in all settings we considered, suggesting some degree of robustness to varying landscape features and cost trade-offs. This method starts to outperform the other algorithms with increasing switching-cost. Our work broadens out from other recent Bayesian Optimization studies in resource-constrained settings that consider a batch setting only. While the contributions of this work are relevant to the general class of resource-constrained problems, they are particularly relevant to problems where adaptability to varying resource availability is of high importance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 454,256
1802.06502 | EA-CG: An Approximate Second-Order Method for Training Fully-Connected Neural Networks | For training fully-connected neural networks (FCNNs), we propose a practical approximate second-order method including: 1) an approximation of the Hessian matrix and 2) a conjugate gradient (CG) based method. Our proposed approximate Hessian matrix is memory-efficient and can be applied to any FCNNs where the activation and criterion functions are twice differentiable. We devise a CG-based method incorporating one-rank approximation to derive Newton directions for training FCNNs, which significantly reduces both space and time complexity. This CG-based method can be employed to solve any linear equation where the coefficient matrix is Kronecker-factored, symmetric and positive definite. Empirical studies show the efficacy and efficiency of our proposed method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,693
2502.00675 | ReFoRCE: A Text-to-SQL Agent with Self-Refinement, Format Restriction, and Column Exploration | Text-to-SQL systems have unlocked easier access to critical data insights by enabling natural language queries over structured databases. However, deploying such systems in enterprise environments remains challenging due to factors such as large, complex schemas (> 3000 columns), diverse SQL dialects (e.g., BigQuery, Snowflake) and sophisticated query requirements (e.g., transformation, analytics). Current state-of-the-art performance on the Spider 2.0 dataset -- a benchmark built to mimic such complex environments -- remains limited at 20%. Key limitations include inadequate instruction-following, poor long-context comprehension, weak self-refinement, and insufficient dialect-specific knowledge. To address these gaps, we propose ReFoRCE (Self-Refinement Agent with Format Restriction and Column Exploration) which introduces (1) table compression to mitigate long-context limitations, (2) format restriction to ensure accurate answer format, and (3) iterative column exploration for enhanced schema understanding. Additionally, it employs a self-refinement pipeline consisting of (1) parallelized workflows with voting mechanisms and (2) a Common Table Expression (CTE) based refinement approach to handle unresolved cases. ReFoRCE achieves state-of-the-art results, scoring 31.26 on the Spider 2.0-Snow task and 30.35 on the Spider 2.0-Lite task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 529,491
1706.08162 | Automated text summarisation and evidence-based medicine: A survey of two domains | The practice of evidence-based medicine (EBM) urges medical practitioners to utilise the latest research evidence when making clinical decisions. Because of the massive and growing volume of published research on various medical topics, practitioners often find themselves overloaded with information. As such, natural language processing research has recently commenced exploring medical domain-specific automated text summarisation (ATS) techniques, targeted towards the task of condensing large medical texts. However, the development of effective summarisation techniques for this task requires cross-domain knowledge. We present a survey of EBM, the domain-specific needs for EBM, automated summarisation techniques, and how they have been applied hitherto. We envision that this survey will serve as a first resource for the development of future operational text summarisation techniques for EBM. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 75,952
1405.3396 | Reducing Dueling Bandits to Cardinal Bandits | We present algorithms for reducing the Dueling Bandits problem to the conventional (stochastic) Multi-Armed Bandits problem. The Dueling Bandits problem is an online model of learning with ordinal feedback of the form "A is preferred to B" (as opposed to cardinal feedback like "A has value 2.5"), giving it wide applicability in learning from implicit user feedback and revealed and stated preferences. In contrast to existing algorithms for the Dueling Bandits problem, our reductions -- named Doubler, MultiSBM and DoubleSBM -- provide a generic schema for translating the extensive body of known results about conventional Multi-Armed Bandit algorithms to the Dueling Bandits setting. For Doubler and MultiSBM we prove regret upper bounds in both finite and infinite settings, and conjecture about the performance of DoubleSBM which empirically outperforms the other two as well as previous algorithms in our experiments. In addition, we provide the first almost optimal regret bound in terms of second order terms, such as the differences between the values of the arms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 33,087
2109.14922 | Decomposition and analysis of EMG tracings to aid in the diagnosis of neuromuscular diseases | Needle electromyography (EMG) represents one of the steps of the electroneuromyogram (ENMG), an examination commonly performed in neurology. By inserting a needle into a muscle and studying the contraction during effort, the EMG provides extremely useful information on the functioning of the neuromuscular system of an individual, but it is an examination that remains complex to interpret. The objective of this work is to participate in the design and evaluation of a software tool allowing an automated analysis of EMG tracings of patients suspected of neuromuscular diseases, orienting the diagnosis towards either a neuropathic or myopathic process from recorded tracings. The software uses a method of signal decomposition according to a Markovian model, based on the analysis of motor unit potentials obtained by EMG, then a classification of the tracings. The tracings of 9 patients were thus analyzed and classified on the basis of the clinical interpretation of the neurologist, making it possible to initiate a "machine learning" process. The software will then be submitted to new tracings in order to test it against a practitioner experienced in EMG analysis. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 258,116
1906.09972 | Classical Music Prediction and Composition by means of Variational Autoencoders | This paper proposes a new model for music prediction based on Variational Autoencoders (VAEs). In this work, VAEs are used in a novel way in order to address two different problems: music representation into the latent space, and using this representation to make predictions of the future values of the musical piece. This approach was trained with different songs of a classical composer. As a result, the system can represent the music in the latent space, and make accurate predictions. Therefore, the system can be used to compose new music either from an existing piece or from a random starting point. An additional feature of this system is that a small dataset was used for training. However, results show that the system is able to return accurate representations and predictions in unseen data. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 136,322
2402.14034 | AgentScope: A Flexible yet Robust Multi-Agent Platform | With the rapid advancement of Large Language Models (LLMs), significant progress has been made in multi-agent applications. However, the complexities in coordinating agents' cooperation and LLMs' erratic performance pose notable challenges in developing robust and efficient multi-agent applications. To tackle these challenges, we propose AgentScope, a developer-centric multi-agent platform with message exchange as its core communication mechanism. The abundant syntactic tools, built-in agents and service functions, user-friendly interfaces for application demonstration and utility monitor, zero-code programming workstation, and automatic prompt tuning mechanism significantly lower the barriers to both development and deployment. Towards robust and flexible multi-agent application, AgentScope provides both built-in and customizable fault tolerance mechanisms. At the same time, it is also armed with system-level support for managing and utilizing multi-modal data, tools, and external knowledge. Additionally, we design an actor-based distribution framework, enabling easy conversion between local and distributed deployments and automatic parallel optimization without extra effort. With these features, AgentScope empowers developers to build applications that fully realize the potential of intelligent agents. We have released AgentScope at https://github.com/modelscope/agentscope, and hope AgentScope invites wider participation and innovation in this fast-moving field. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 431,508 |
1710.11190 | Manipulation Planning under Changing External Forces | In this work, we present a manipulation planning algorithm for a robot to keep an object stable under changing external forces. We particularly focus on the case where a human may be applying forceful operations, e.g. cutting or drilling, on an object that the robot is holding. The planner produces an efficient plan by intelligently deciding when the robot should change its grasp on the object as the human applies the forces. The planner also tries to choose subsequent grasps such that they will minimize the number of regrasps that will be required in the long-term. Furthermore, as it switches from one grasp to the other, the planner solves the problem of bimanual regrasp planning, where the object is not placed on a support surface, but instead it is held by a single gripper until the second gripper moves to a new position on the object. This requires the planner to also reason about the stability of the object under gravity. We provide an implementation on a bimanual robot and present experiments to show the performance of our planner. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 83,537
1909.12289 | Attention Forcing for Sequence-to-sequence Model Training | Auto-regressive sequence-to-sequence models with attention mechanism have achieved state-of-the-art performance in many tasks such as machine translation and speech synthesis. These models can be difficult to train. The standard approach, teacher forcing, guides a model with reference output history during training. The problem is that the model is unlikely to recover from its mistakes during inference, where the reference output is replaced by generated output. Several approaches deal with this problem, largely by guiding the model with generated output history. To make training stable, these approaches often require a heuristic schedule or an auxiliary classifier. This paper introduces attention forcing, which guides the model with generated output history and reference attention. This approach can train the model to recover from its mistakes, in a stable fashion, without the need for a schedule or a classifier. In addition, it allows the model to generate output sequences aligned with the references, which can be important for cascaded systems like many speech synthesis systems. Experiments on speech synthesis show that attention forcing yields significant performance gain. Experiments on machine translation show that for tasks where various re-orderings of the output are valid, guiding the model with generated output history is challenging, while guiding the model with reference attention is beneficial. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 147,083 |
2012.07828 | Robustness Threats of Differential Privacy | Differential privacy (DP) is a gold-standard concept of measuring and guaranteeing privacy in data analysis. It is well-known that the cost of adding DP to a deep learning model is its accuracy. However, it remains unclear how it affects the robustness of the model. Standard neural networks are not robust to different input perturbations: either adversarial attacks or common corruptions. In this paper, we empirically observe an interesting trade-off between privacy and robustness of neural networks. We experimentally demonstrate that networks, trained with DP, in some settings might be even more vulnerable in comparison to non-private versions. To explore this, we extensively study different robustness measurements, including FGSM and PGD adversaries, distance to linear decision boundaries, curvature profile, and performance on a corrupted dataset. Finally, we study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect (decrease and increase) the robustness of the model. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 211,588
2103.02913 | Quantifying identifiability to choose and audit $\epsilon$ in
differentially private deep learning | Differential privacy allows bounding the influence that training data records have on a machine learning model. To use differential privacy in machine learning, data scientists must choose privacy parameters $(\epsilon,\delta)$. Choosing meaningful privacy parameters is key, since models trained with weak privacy parameters might result in excessive privacy leakage, while strong privacy parameters might overly degrade model utility. However, privacy parameter values are difficult to choose for two main reasons. First, the theoretical upper bound on privacy loss $(\epsilon,\delta)$ might be loose, depending on the chosen sensitivity and data distribution of practical datasets. Second, legal requirements and societal norms for anonymization often refer to individual identifiability, to which $(\epsilon,\delta)$ are only indirectly related. We transform $(\epsilon,\delta)$ to a bound on the Bayesian posterior belief of the adversary assumed by differential privacy concerning the presence of any record in the training dataset. The bound holds for multidimensional queries under composition, and we show that it can be tight in practice. Furthermore, we derive an identifiability bound, which relates the adversary assumed in differential privacy to previous work on membership inference adversaries. We formulate an implementation of this differential privacy adversary that allows data scientists to audit model training and compute empirical identifiability scores and empirical $(\epsilon,\delta)$. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 223,112 |
1904.12012 | RevealNet: Seeing Behind Objects in RGB-D Scans | During 3D reconstruction, it is often the case that people cannot scan each individual object from all views, resulting in missing geometry in the captured scan. This missing geometry can be fundamentally limiting for many applications, e.g., a robot needs to know the unseen geometry to perform a precise grasp on an object. Thus, we introduce the task of semantic instance completion: from an incomplete RGB-D scan of a scene, we aim to detect the individual object instances and infer their complete object geometry. This will open up new possibilities for interactions with objects in a scene, for instance for virtual or robotic agents. We tackle this problem by introducing RevealNet, a new data-driven approach that jointly detects object instances and predicts their complete geometry. This enables a semantically meaningful decomposition of a scanned scene into individual, complete 3D objects, including hidden and unobserved object parts. RevealNet is an end-to-end 3D neural network architecture that leverages joint color and geometry feature learning. The fully-convolutional nature of our 3D network enables efficient inference of semantic instance completion for 3D scans at scale of large indoor environments in a single forward pass. We show that predicting complete object geometry improves both 3D detection and instance segmentation performance. We evaluate on both real and synthetic scan benchmark data for the new task, where we outperform state-of-the-art approaches by over 15 in mAP@0.5 on ScanNet, and over 18 in mAP@0.5 on SUNCG. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,993 |
2009.05689 | Feedback Control Methods for a Single Machine Infinite Bus System | In this manuscript, we present a high-fidelity physics-based truth model of a Single Machine Infinite Bus (SMIB) system. We also present reduced-order control-oriented nonlinear and linear models of a synchronous generator-turbine system connected to a power grid. The reduced-order control-oriented models are next used to design various control strategies such as: proportional-integral derivative (PID), linear-quadratic regulator (LQR), pole placement-based state feedback, observer-based output feedback, loop transfer recovery (LTR)-based linear-quadratic-Gaussian (LQG), and nonlinear feedback-linearizing control for the SMIB system. The controllers developed are then validated on the high-fidelity physics-based truth model of the SMIB system. Finally, a comparison is made of the performance of the controllers at different operating points of the SMIB system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 195,392 |
2308.01525 | VisAlign: Dataset for Measuring the Degree of Alignment between AI and
Humans in Visual Perception | AI alignment refers to models acting towards human-intended goals, preferences, or ethical principles. Given that most large-scale deep learning models act as black boxes and cannot be manually controlled, analyzing the similarity between models and humans can be a proxy measure for ensuring AI safety. In this paper, we focus on the models' visual perception alignment with humans, further referred to as AI-human visual alignment. Specifically, we propose a new dataset for measuring AI-human visual alignment in terms of image classification, a fundamental task in machine perception. In order to evaluate AI-human visual alignment, a dataset should encompass samples with various scenarios that may arise in the real world and have gold human perception labels. Our dataset consists of three groups of samples, namely Must-Act (i.e., Must-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity of visual information in an image and further divided into eight categories. All samples have a gold human perception label; even Uncertain (severely blurry) sample labels were obtained via crowd-sourcing. The validity of our dataset is verified by sampling theory, statistical theories related to survey design, and experts in the related fields. Using our dataset, we analyze the visual alignment and reliability of five popular visual perception models and seven abstention methods. Our code and data is available at https://github.com/jiyounglee-0523/VisAlign. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 383,276 |
2110.10064 | Idiomatic Expression Identification using Semantic Compatibility | Idiomatic expressions are an integral part of natural language and constantly being added to a language. Owing to their non-compositionality and their ability to take on a figurative or literal meaning depending on the sentential context, they have been a classical challenge for NLP systems. To address this challenge, we study the task of detecting whether a sentence has an idiomatic expression and localizing it. Prior art for this task had studied specific classes of idiomatic expressions offering limited views of their generalizability to new idioms. We propose a multi-stage neural architecture with the attention flow mechanism for identifying these expressions. The network effectively fuses contextual and lexical information at different levels using word and sub-word representations. Empirical evaluations on three of the largest benchmark datasets with idiomatic expressions of varied syntactic patterns and degrees of non-compositionality show that our proposed model achieves new state-of-the-art results. A salient feature of the model is its ability to identify idioms unseen during training with gains from 1.4% to 30.8% over competitive baselines on the largest dataset. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 262,025 |
1808.01454 | T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth
Estimation Tasks | Current methods for single-image depth estimation use training datasets with real image-depth pairs or stereo pairs, which are not easy to acquire. We propose a framework, trained on synthetic image-depth pairs and unpaired real images, that comprises an image translation network for enhancing realism of input images, followed by a depth prediction network. A key idea is having the first network act as a wide-spectrum input translator, taking in either synthetic or real images, and ideally producing minimally modified realistic images. This is done via a reconstruction loss when the training input is real, and GAN loss when synthetic, removing the need for heuristic self-regularization. The second network is trained on a task loss for synthetic image-depth pairs, with extra GAN loss to unify real and synthetic feature distributions. Importantly, the framework can be trained end-to-end, leading to good results, even surpassing early deep-learning methods that use real paired data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,570 |
1701.02357 | Son of Zorn's Lemma: Targeted Style Transfer Using Instance-aware
Semantic Segmentation | Style transfer is an important task in which the style of a source image is mapped onto that of a target image. The method is useful for synthesizing derivative works of a particular artist or specific painting. This work considers targeted style transfer, in which the style of a template image is used to alter only part of a target image. For example, an artist may wish to alter the style of only one particular object in a target image without altering the object's general morphology or surroundings. This is useful, for example, in augmented reality applications (such as the recently released Pokemon GO), where one wants to alter the appearance of a single real-world object in an image frame to make it appear as a cartoon. Most notably, the rendering of real-world objects into cartoon characters has been used in a number of films and television shows, such as the upcoming series Son of Zorn. We present a method for targeted style transfer that simultaneously segments and stylizes single objects selected by the user. The method uses a Markov random field model to smooth and anti-alias outlier pixels near object boundaries, so that stylized objects naturally blend into their surroundings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 66,538
2012.03023 | Volumetric Occupancy Mapping With Probabilistic Depth Completion for
Robotic Navigation | In robotic applications, a key requirement for safe and efficient motion planning is the ability to map obstacle-free space in unknown, cluttered 3D environments. However, commodity-grade RGB-D cameras commonly used for sensing fail to register valid depth values on shiny, glossy, bright, or distant surfaces, leading to missing data in the map. To address this issue, we propose a framework leveraging probabilistic depth completion as an additional input for spatial mapping. We introduce a deep learning architecture providing uncertainty estimates for the depth completion of RGB-D images. Our pipeline exploits the inferred missing depth values and depth uncertainty to complement raw depth images and improve the speed and quality of free space mapping. Evaluations on synthetic data show that our approach maps significantly more correct free space with relatively low error when compared against using raw data alone in different indoor environments; thereby producing more complete maps that can be directly used for robotic navigation tasks. The performance of our framework is validated using real-world data. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 209,957 |
1912.06269 | Learning and Optimization with Bayesian Hybrid Models | Bayesian hybrid models fuse physics-based insights with machine learning constructs to correct for systematic bias. In this paper, we compare Bayesian hybrid models against physics-based glass-box and Gaussian process black-box surrogate models. We consider ballistic firing as an illustrative case study for a Bayesian decision-making workflow. First, Bayesian calibration is performed to estimate model parameters. We then use the posterior distribution from Bayesian analysis to compute optimal firing conditions to hit a target via a single-stage stochastic program. The case study demonstrates the ability of Bayesian hybrid models to overcome systematic bias from missing physics with less data than the pure machine learning approach. Ultimately, we argue Bayesian hybrid models are an emerging paradigm for data-informed decision-making under parametric and epistemic uncertainty. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 157,306 |
2307.03565 | MALIBO: Meta-learning for Likelihood-free Bayesian Optimization | Bayesian optimization (BO) is a popular method to optimize costly black-box functions. While traditional BO optimizes each new target task from scratch, meta-learning has emerged as a way to leverage knowledge from related tasks to optimize new tasks faster. However, existing meta-learning BO methods rely on surrogate models that suffer from scalability issues and are sensitive to observations with different scales and noise types across tasks. Moreover, they often overlook the uncertainty associated with task similarity. This leads to unreliable task adaptation when only limited observations are obtained or when the new tasks differ significantly from the related tasks. To address these limitations, we propose a novel meta-learning BO approach that bypasses the surrogate model and directly learns the utility of queries across tasks. Our method explicitly models task uncertainty and includes an auxiliary model to enable robust adaptation to new tasks. Extensive experiments show that our method demonstrates strong anytime performance and outperforms state-of-the-art meta-learning BO methods in various benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 378,080 |
1903.10040 | Safe Learning-Based Control of Stochastic Jump Linear Systems: a
Distributionally Robust Approach | We consider the problem of designing control laws for stochastic jump linear systems where the disturbances are drawn randomly from a finite sample space according to an unknown distribution, which is estimated from a finite sample of i.i.d. observations. We adopt a distributionally robust approach to compute a mean-square stabilizing feedback gain with a given probability. The larger the sample size, the less conservative the controller, yet our methodology gives stability guarantees with high probability, for any number of samples. Using tools from statistical learning theory, we estimate confidence regions for the unknown probability distributions (ambiguity sets) which have the shape of total variation balls centered around the empirical distribution. We use these confidence regions in the design of appropriate distributionally robust controllers and show that the associated stability conditions can be cast as a tractable linear matrix inequality (LMI) by using conjugate duality. The resulting design procedure scales gracefully with the size of the probability space and the system dimensions. Through a numerical example, we illustrate the superior sample complexity of the proposed methodology over the stochastic approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 125,191 |
1401.5934 | Accelerated Assistant to SubOptimum Receiver for Multi Carrier Code
Division Multiple Access System | Multiple Input Multiple Output (MIMO) systems are considered the strongest candidate for maximum utilization of the available bandwidth. In this paper, a MIMO system combining Multi-Carrier Code Division Multiple Access and Space Time Coding using Alamouti's scheme is considered. A Genetic Algorithm based receiver with an exceptional relationship between filter weights while detecting symbols is proposed. This scheme has a better Convergence Rate and Bit Error Rate than the Fast-LMS Adaptive receiver. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 30,273
1206.6262 | Scaling Life-long Off-policy Learning | We pursue a life-long learning approach to artificial intelligence that makes extensive use of reinforcement learning algorithms. We build on our prior work with general value functions (GVFs) and the Horde architecture. GVFs have been shown able to represent a wide variety of facts about the world's dynamics that may be useful to a long-lived agent (Sutton et al. 2011). We have also previously shown scaling - that thousands of on-policy GVFs can be learned accurately in real-time on a mobile robot (Modayil, White & Sutton 2011). That work was limited in that it learned about only one policy at a time, whereas the greatest potential benefits of life-long learning come from learning about many policies in parallel, as we explore in this paper. Many new challenges arise in this off-policy learning setting. To deal with convergence and efficiency challenges, we utilize the recently introduced GTD({\lambda}) algorithm. We show that GTD({\lambda}) with tile coding can simultaneously learn hundreds of predictions for five simple target policies while following a single random behavior policy, assessing accuracy with interspersed on-policy tests. To escape the need for the tests, which preclude further scaling, we introduce and empirically validate two online estimators of the off-policy objective (MSPBE). Finally, we use the more efficient of the two estimators to demonstrate off-policy learning at scale - the learning of value functions for one thousand policies in real time on a physical robot. This ability constitutes a significant step towards scaling life-long off-policy learning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 16,905
2206.04456 | Choosing Answers in $\varepsilon$-Best-Answer Identification for Linear
Bandits | In pure-exploration problems, information is gathered sequentially to answer a question on the stochastic environment. While best-arm identification for linear bandits has been extensively studied in recent years, few works have been dedicated to identifying one arm that is $\varepsilon$-close to the best one (and not exactly the best one). In this problem with several correct answers, an identification algorithm should focus on one candidate among those answers and verify that it is correct. We demonstrate that picking the answer with highest mean does not allow an algorithm to reach asymptotic optimality in terms of expected sample complexity. Instead, a \textit{furthest answer} should be identified. Using that insight to choose the candidate answer carefully, we develop a simple procedure to adapt best-arm identification algorithms to tackle $\varepsilon$-best-answer identification in transductive linear stochastic bandits. Finally, we propose an asymptotically optimal algorithm for this setting, which is shown to achieve competitive empirical performance against existing modified best-arm identification algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,635 |
2103.12926 | Beyond Visual Attractiveness: Physically Plausible Single Image HDR
Reconstruction for Spherical Panoramas | HDR reconstruction is an important task in computer vision with many industrial needs. The traditional approaches merge multiple exposure shots to generate HDRs that correspond to the physical quantity of illuminance of the scene. However, the tedious capturing process makes such multi-shot approaches inconvenient in practice. In contrast, recent single-shot methods predict a visually appealing HDR from a single LDR image through deep learning. But it is not clear whether the previously mentioned physical properties would still hold, without training the network to explicitly model them. In this paper, we introduce the physical illuminance constraints to our single-shot HDR reconstruction framework, with a focus on spherical panoramas. By the proposed physical regularization, our method can generate HDRs which are not only visually appealing but also physically plausible. For evaluation, we collect a large dataset of LDR and HDR images with ground truth illuminance measures. Extensive experiments show that our HDR images not only maintain high visual quality but also top all baseline methods in illuminance prediction accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 226,325 |
2407.00518 | When Robots Get Chatty: Grounding Multimodal Human-Robot Conversation
and Collaboration | We investigate the use of Large Language Models (LLMs) to equip neural robotic agents with human-like social and cognitive competencies, for the purpose of open-ended human-robot conversation and collaboration. We introduce a modular and extensible methodology for grounding an LLM with the sensory perceptions and capabilities of a physical robot, and integrate multiple deep learning models throughout the architecture in a form of system integration. The integrated models encompass various functions such as speech recognition, speech generation, open-vocabulary object detection, human pose estimation, and gesture detection, with the LLM serving as the central text-based coordinating unit. The qualitative and quantitative results demonstrate the huge potential of LLMs in providing emergent cognition and interactive language-oriented control of robots in a natural and social manner. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 468,897 |
2410.08847 | Unintentional Unalignment: Likelihood Displacement in Direct Preference
Optimization | Direct Preference Optimization (DPO) and its variants are increasingly used for aligning language models with human preferences. Although these methods are designed to teach a model to generate preferred responses more frequently relative to dispreferred responses, prior work has observed that the likelihood of preferred responses often decreases during training. The current work sheds light on the causes and implications of this counter-intuitive phenomenon, which we term likelihood displacement. We demonstrate that likelihood displacement can be catastrophic, shifting probability mass from preferred responses to responses with an opposite meaning. As a simple example, training a model to prefer $\texttt{No}$ over $\texttt{Never}$ can sharply increase the probability of $\texttt{Yes}$. Moreover, when aligning the model to refuse unsafe prompts, we show that such displacement can unintentionally lead to unalignment, by shifting probability mass from preferred refusal responses to harmful responses (e.g., reducing the refusal rate of Llama-3-8B-Instruct from 74.4% to 33.4%). We theoretically characterize that likelihood displacement is driven by preferences that induce similar embeddings, as measured by a centered hidden embedding similarity (CHES) score. Empirically, the CHES score enables identifying which training samples contribute most to likelihood displacement in a given dataset. Filtering out these samples effectively mitigated unintentional unalignment in our experiments. More broadly, our results highlight the importance of curating data with sufficiently distinct preferences, for which we believe the CHES score may prove valuable. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 497,303 |
2401.01822 | HawkRover: An Autonomous mmWave Vehicular Communication Testbed with
Multi-sensor Fusion and Deep Learning | Connected and automated vehicles (CAVs) have become a transformative technology that can change our daily life. Currently, millimeter-wave (mmWave) bands are identified as the promising CAV connectivity solution. While they can provide high data rates, their realization faces many challenges such as high attenuation during mmWave signal propagation and mobility management. Existing solutions have to initiate a pilot signal to measure channel information, then apply signal processing to calculate the best narrow beam towards the receiver end to guarantee sufficient signal power. This process takes significant overhead and time, hence is not suitable for vehicles. In this study, we propose an autonomous and low-cost testbed to collect extensive co-located mmWave signal and other sensor data such as LiDAR (Light Detection and Ranging), cameras, ultrasonic, etc., traditionally used for the ``automated'' side, to facilitate mmWave vehicular communications. Intuitively, these sensors can build a 3D map around the vehicle and the signal propagation path can be estimated, eliminating the iterative process via pilot signals. This multimodal data fusion, together with AI, is expected to bring significant advances in ``connected'' research. | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | 419,508
2303.07606 | PSNet: a deep learning model based digital phase shifting algorithm from
a single fringe image | As the gold standard for phase retrieval, phase-shifting algorithm (PS) has been widely used in optical interferometry, fringe projection profilometry, etc. However, capturing multiple fringe patterns in PS limits the algorithm to only a narrow range of application. To this end, a deep learning (DL) model based digital PS algorithm from only a single fringe image is proposed. By training on a simulated dataset of PS fringe patterns, the learnt model, denoted PSNet, can predict fringe patterns with other PS steps when given a pattern with the first PS step. Simulation and experiment results demonstrate the PSNet's promising performance on accurate prediction of digital PS patterns, and robustness to complex scenarios such as surfaces with varying curvature and reflectance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,314 |
2110.07716 | Adversarial Scene Reconstruction and Object Detection System for Assisting Autonomous Vehicle | In the current computer vision era, classifying scenes through video surveillance systems is a crucial task. AI video surveillance technologies have advanced remarkably as artificial intelligence and deep learning have been incorporated into such systems. Adopting state-of-the-art deep learning visual classification methods has achieved high accuracy in classifying visual scenes. However, visual classifiers face difficulties examining scenes in poorly lit areas, especially during the nighttime, and in identifying the contexts of the scenes. This paper proposes a deep learning model that reconstructs dark visual scenes into clear, daylight-like scenes, and the method recognizes visual actions for the autonomous vehicle. The proposed model achieved 87.3 percent accuracy for scene reconstruction and 89.2 percent for scene understanding and detection tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 261,092
2205.15704 | Mitigating Dataset Bias by Using Per-sample Gradient | The performance of deep neural networks is strongly influenced by the training dataset setup. In particular, when attributes having a strong correlation with the target attribute are present, the trained model can provide unintended prejudgments and show significant inference errors (i.e., the dataset bias problem). Various methods have been proposed to mitigate dataset bias, and their emphasis is on weakly correlated samples, called bias-conflicting samples. These methods are based on explicit bias labels involving human annotation or empirical correlation metrics (e.g., training loss). However, such metrics require human costs or have insufficient theoretical explanation. In this study, we propose a debiasing algorithm, called PGD (Per-sample Gradient-based Debiasing), that comprises three steps: (1) training a model on uniform batch sampling, (2) setting the importance of each sample in proportion to the norm of the sample gradient, and (3) training the model using importance-batch sampling, whose probability is obtained in step (2). Compared with existing baselines for various synthetic and real-world datasets, the proposed method showed state-of-the-art accuracy for the classification task. Furthermore, we provide a theoretical understanding of how PGD can mitigate dataset bias. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 299,837
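The three-step PGD recipe in the abstract above can be sketched with a toy logistic model; the data, model, and batch size here are made-up illustrations of steps (2) and (3), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: binary labels, 3-D features, logistic model with weights w.
X = rng.normal(size=(8, 3))
y = rng.integers(0, 2, size=8)
w = np.zeros(3)  # step (1) would train this on uniform batches

def per_sample_grad(w, X, y):
    """Per-sample gradient of the logistic loss, one row per sample."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X  # shape (n_samples, n_features)

# Step (2): importance of each sample = norm of its gradient.
g = per_sample_grad(w, X, y)
norms = np.linalg.norm(g, axis=1)
probs = norms / norms.sum()

# Step (3): draw an importance batch with those probabilities.
batch_idx = rng.choice(len(X), size=4, p=probs)
```

Samples whose gradients are large (typically the bias-conflicting ones) are drawn more often in step (3).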
2011.08655 | Lung Segmentation in Chest X-rays with Res-CR-Net | Deep Neural Networks (DNN) are widely used to carry out segmentation tasks in biomedical images. Most DNNs developed for this purpose are based on some variation of the encoder-decoder U-Net architecture. Here we show that Res-CR-Net, a new type of fully convolutional neural network, which was originally developed for the semantic segmentation of microscopy images, and which does not adopt a U-Net architecture, is very effective at segmenting the lung fields in chest X-rays from either healthy patients or patients with a variety of lung pathologies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 206,946 |
2112.13588 | Survey on Stabilization of Nonlinear Systems via state/output feedback control | This survey paper deals with the stabilization of nonlinear systems by analyzing the controlling method in terms of state feedback and output feedback. A brief overview is given of some literature on how the feedback controllers of dynamic systems in real applications, such as robots, can be applied. The aim is to give direction on methods for solving the state and output feedback stabilization problems for nonlinear systems based on feedback design methods and results such as the Lyapunov stability theorem. The feedback controllers can be constructed to determine whether the origin of the nonlinear system is asymptotically stable or not. The paper is purposely focused on the theoretical results regarding the existence of such a feedback controller for stabilization. Keywords: Nonlinear systems, stability, Feedback stabilization, State Feedback, Output Feedback | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 273,298
2411.05661 | Multi-armed Bandits with Missing Outcome | While significant progress has been made in designing algorithms that minimize regret in online decision-making, real-world scenarios often introduce additional complexities, perhaps the most challenging of which is missing outcomes. Overlooking this aspect or simply assuming random missingness invariably leads to biased estimates of the rewards and may result in linear regret. Despite the practical relevance of this challenge, no rigorous methodology currently exists for systematically handling missingness, especially when the missingness mechanism is not random. In this paper, we address this gap in the context of multi-armed bandits (MAB) with missing outcomes by analyzing the impact of different missingness mechanisms on achievable regret bounds. We introduce algorithms that account for missingness under both missing at random (MAR) and missing not at random (MNAR) models. Through both analytical and simulation studies, we demonstrate the drastic improvements in decision-making by accounting for missingness in these settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 506,741 |
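A minimal illustration of the MAR setting described above: if the probability that a reward is observed depends only on the arm (a hypothetical `p_obs` table here), inverse-propensity weighting gives an unbiased mean estimate, which is the kind of correction a missingness-aware bandit algorithm can build on. This is a sketch of the general idea, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: observation probability and true mean per arm.
p_obs = {0: 0.9, 1: 0.4}
true_mean = {0: 0.5, 1: 0.7}

def ipw_estimate(arm, n=20000):
    """Unbiased mean-reward estimate under MAR: each observed reward is
    up-weighted by 1 / P(observed); missing rewards contribute zero."""
    r = rng.binomial(1, true_mean[arm], size=n).astype(float)
    observed = rng.binomial(1, p_obs[arm], size=n).astype(bool)
    # Divide by the full n, not the number observed: naive averaging over
    # observed samples only is what introduces bias under non-random missingness.
    return (r * observed / p_obs[arm]).mean()

est = ipw_estimate(1)  # concentrates around true_mean[1] = 0.7
```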
0901.1936 | A Lower Bound on the Capacity of Wireless Erasure Networks with Random Node Locations | In this paper, a lower bound on the capacity of wireless ad hoc erasure networks is derived in closed form in the canonical case where $n$ nodes are uniformly and independently distributed in the unit area square. The bound holds almost surely and is asymptotically tight. We assume all nodes have fixed transmit power and hence two nodes should be within a specified distance $r_n$ of each other to overcome noise. In this context, interference determines outages, so we model each transmitter-receiver pair as an erasure channel with a broadcast constraint, i.e. each node can transmit only one signal across all its outgoing links. A lower bound of $\Theta(n r_n)$ for the capacity of this class of networks is derived. If the broadcast constraint is relaxed and each node can send distinct signals on distinct outgoing links, we show that the gain is a function of $r_n$ and the link erasure probabilities, and is at most a constant if the link erasure probabilities grow sufficiently large with $n$. Finally, the case where the erasure probabilities are themselves random variables, for example due to randomness in geometry or channels, is analyzed. We prove somewhat surprisingly that in this setting, variability in erasure probabilities increases network capacity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 2,953
2110.05572 | EchoVPR: Echo State Networks for Visual Place Recognition | Recognising previously visited locations is an important, but unsolved, task in autonomous navigation. Current visual place recognition (VPR) benchmarks typically challenge models to recover the position of a query image (or images) from sequential datasets that include both spatial and temporal components. Recently, Echo State Network (ESN) varieties have proven particularly powerful at solving machine learning tasks that require spatio-temporal modelling. These networks are simple, yet powerful neural architectures that--exhibiting memory over multiple time-scales and non-linear high-dimensional representations--can discover temporal relations in the data while still maintaining linearity in the learning time. In this paper, we present a series of ESNs and analyse their applicability to the VPR problem. We report that the addition of ESNs to pre-processed convolutional neural networks led to a dramatic boost in performance in comparison to non-recurrent networks in five out of six standard benchmarks (GardensPoint, SPEDTest, ESSEX3IN1, Oxford RobotCar, and Nordland), demonstrating that ESNs are able to capture the temporal structure inherent in VPR problems. Moreover, we show that models that include ESNs can outperform class-leading VPR models which also exploit the sequential dynamics of the data. Finally, our results demonstrate that ESNs improve generalisation abilities, robustness, and accuracy further supporting their suitability to VPR applications. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 260,314 |
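A leaky-integrator ESN reservoir of the kind the EchoVPR abstract describes, sketched under assumed sizes and hyperparameters (50 units, leak rate 0.3, spectral radius 0.9); in the VPR pipeline the inputs would be pre-processed CNN features per frame, and only a linear readout is trained on the collected states, which is what keeps learning linear in time:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 4, 50  # illustrative sizes

# Random input and recurrent weights; the recurrent matrix is rescaled
# to spectral radius 0.9 (< 1) for the echo state property.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs, leak=0.3):
    """Leaky-integrator state update x <- (1-a)x + a*tanh(W_in u + W x)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

seq = rng.normal(size=(20, n_in))  # stand-in for per-frame image features
S = run_reservoir(seq)             # (20, 50): a linear readout is fit on S
```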
0804.3259 | On Multiuser Power Region of Fading Multiple-Access Channel with Multiple Antennas | This paper is concerned with the fading MIMO-MAC with multiple receive antennas at the base station (BS) and multiple transmit antennas at each mobile terminal (MT). Two multiple-access techniques are considered for scheduling transmissions from each MT to the BS at the same frequency, which are space-division multiple-access (SDMA) and time-division multiple-access (TDMA). For SDMA, all MTs transmit simultaneously to the BS and their individual signals are resolved at the BS via multiple receive antennas while for TDMA, each MT transmits independently to the BS during mutually orthogonal time slots. It is assumed that the channel-state information (CSI) of the fading channel from each MT to the BS is unknown at each MT transmitter, but is perfectly known at the BS receiver. Thereby, the BS can acquire the long-term channel-distribution information (CDI) for each MT. This paper extends the well-known transmit-covariance feedback scheme for the point-to-point fading MIMO channel to the fading MIMO-MAC, whereby the BS jointly optimizes the transmit signal covariance matrices for all MTs based on their CDI, and then sends each transmit covariance matrix back to the corresponding MT via a feedback channel. The main goal of this paper is to characterize the so-called multiuser power region under the multiuser transmit-covariance feedback scheme for both SDMA and TDMA. The power region is defined as the constitution of all user transmit power-tuples that can achieve reliable transmissions for a given set of user target rates. Simulation results show that SDMA can achieve substantial power savings over TDMA for the fading MIMO-MAC. Moreover, this paper demonstrates the usefulness of the multiuser power region for maintaining proportionally-fair power consumption among the MTs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,607
1201.5154 | Finding short vectors in a lattice of Voronoi's first kind | We show that for lattices of Voronoi's first kind, a vector of shortest nonzero Euclidean length can be computed in polynomial time by computing a minimum cut in a graph. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 13,944
2210.07487 | A Scalable Finite Difference Method for Deep Reinforcement Learning | Several low-bandwidth distributable black-box optimization algorithms in the family of finite differences such as Evolution Strategies have recently been shown to perform nearly as well as tailored Reinforcement Learning methods in some Reinforcement Learning domains. One shortcoming of these black-box methods is that they must collect information about the structure of the return function at every update, and can often employ only information drawn from a distribution centered around the current parameters. As a result, when these algorithms are distributed across many machines, a significant portion of total runtime may be spent with many machines idle, waiting for a final return and then for an update to be calculated. In this work we introduce a novel method to use older data in finite difference algorithms, which produces a scalable algorithm that avoids significant idle time or wasted computation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 323,733 |
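The finite-difference family this abstract refers to can be illustrated with an antithetic evolution-strategies gradient estimator on a toy objective; all names and settings here are illustrative of the baseline estimator, not the paper's method for reusing older data:

```python
import numpy as np

rng = np.random.default_rng(0)

def es_gradient(f, theta, sigma=0.1, n=100):
    """Antithetic ES estimate of the gradient of f at theta:
    g ~ (1 / (2 n sigma)) * sum_i [f(theta + sigma e_i) - f(theta - sigma e_i)] e_i,
    where each e_i is a standard Gaussian perturbation. In distributed RL,
    each of the n perturbation pairs would be evaluated on a worker machine."""
    g = np.zeros_like(theta)
    for _ in range(n):
        e = rng.normal(size=theta.size)
        g += (f(theta + sigma * e) - f(theta - sigma * e)) * e
    return g / (2 * n * sigma)

f = lambda th: -np.sum(th ** 2)   # toy "return" function, maximized at 0
theta = np.array([1.0, -2.0])
g = es_gradient(f, theta)         # approximates the true gradient (-2, 4)
```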
2202.01290 | Cyclical Pruning for Sparse Neural Networks | Current methods for pruning neural network weights iteratively apply magnitude-based pruning on the model weights and re-train the resulting model to recover lost accuracy. In this work, we show that such strategies do not allow for the recovery of erroneously pruned weights. To enable weight recovery, we propose a simple strategy called \textit{cyclical pruning} which requires the pruning schedule to be periodic and allows for weights pruned erroneously in one cycle to recover in subsequent ones. Experimental results on both linear models and large-scale deep neural networks show that cyclical pruning outperforms existing pruning algorithms, especially at high sparsity ratios. Our approach is easy to tune and can be readily incorporated into existing pruning pipelines to boost performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 278,429 |
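A toy sketch of the periodic schedule described above: the magnitude mask is recomputed at every cycle and all weights keep receiving updates between cycles, so a weight zeroed in one cycle can regrow and survive the next. The "re-training" step here is stand-in random noise, an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)

def magnitude_mask(w, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(sparsity * w.size)
    thresh = np.sort(np.abs(w))[k]
    return (np.abs(w) >= thresh).astype(float)

for cycle in range(3):
    mask = magnitude_mask(w, sparsity=0.5)  # prune at the end of each cycle
    w = w * mask
    # Between cycles, ALL weights (including pruned ones) receive updates,
    # so a weight pruned erroneously in this cycle can recover before the
    # next mask is computed -- the key difference from one-shot pruning.
    w = w + 0.1 * rng.normal(size=w.size)
```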