id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1302.1007 | Image Denoising Using Interquartile Range Filter with Local Averaging | Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise in an image is presented, applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. Noisy pixels were then estimated by local averaging. The essential advantage of the IQR filter is that it better preserves the edge sharpness of the original image. A variety of test images were used to evaluate the proposed filter, and the PSNR was calculated and compared with that of the median filter. The experimental results on standard test images demonstrate that this filter is simpler than and performs better than the median filter. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 21,772 |
1506.02792 | Capacity of the AWGN Channel with Random Battery Recharges | We consider communication over the AWGN channel with a transmitter whose battery is recharged with RF energy transfer at random times known to the receiver. We assume that the recharging process is i.i.d. Bernoulli. We characterize the capacity of this channel as the limit of an $n$-letter maximum mutual information rate under both causal and noncausal transmitter knowledge of the battery recharges. With noncausal knowledge, it is possible to explicitly identify the maximizing input distribution, which we use to demonstrate that the capacity with noncausal knowledge of the battery recharges is strictly larger than that with causal knowledge. We then proceed to derive explicit upper and lower bounds on the capacity, which are within 1.05 bits/s/Hz of each other for all parameter values. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 43,976 |
2402.08250 | A survey of recent methods for addressing AI fairness and bias in biomedicine | Artificial intelligence (AI) systems have the potential to revolutionize clinical practices, including improving diagnostic accuracy and surgical decision-making, while also reducing costs and manpower. However, it is important to recognize that these systems may perpetuate social inequities or demonstrate biases, such as those based on race or gender. Such biases can occur before, during, or after the development of AI models, making it critical to understand and address potential biases to enable the accurate and reliable application of AI models in clinical settings. To mitigate bias concerns during model development, we surveyed recent publications on different debiasing methods in the fields of biomedical natural language processing (NLP) or computer vision (CV). Then we discussed the methods that have been applied in the biomedical domain to address bias. We performed our literature search on PubMed, ACM digital library, and IEEE Xplore for relevant articles published between January 2018 and December 2023 using multiple combinations of keywords. We then filtered the result of 10,041 articles automatically with loose constraints, and manually inspected the abstracts of the remaining 890 articles to identify the 55 articles included in this review. Additional articles from the references are also included in this review. We discuss each method and compare its strengths and weaknesses. Finally, we review other potential methods from the general domain that could be applied to biomedicine to address bias and improve fairness. The bias of AI in biomedicine can originate from multiple sources. Existing debiasing methods that focus on algorithms can be categorized as distributional or algorithmic. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 429,031 |
2203.10961 | Bike Sharing Demand Prediction based on Knowledge Sharing across Modes: A Graph-based Deep Learning Approach | Bike sharing is an increasingly popular part of urban transportation systems. Accurate demand prediction is the key to support timely re-balancing and ensure service efficiency. Most existing models of bike-sharing demand prediction are solely based on its own historical demand variation, essentially regarding bike sharing as a closed system and neglecting the interaction between different transport modes. This is particularly important because bike sharing is often used to complement travel through other modes (e.g., public transit). Despite some recent efforts, there is no existing method capable of leveraging spatiotemporal information from multiple modes with heterogeneous spatial units. To address this research gap, this study proposes a graph-based deep learning approach for bike sharing demand prediction (B-MRGNN) with multimodal historical data as input. The spatial dependencies across modes are encoded with multiple intra- and inter-modal graphs. A multi-relational graph neural network (MRGNN) is introduced to capture correlations between spatial units across modes, such as bike sharing stations, subway stations, or ride-hailing zones. Extensive experiments are conducted using real-world bike sharing, subway and ride-hailing data from New York City, and the results demonstrate the superior performance of our proposed approach compared to existing methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 286,748 |
1907.05286 | Voxel-FPN: multi-scale voxel feature aggregation in 3D object detection from point clouds | Object detection in point cloud data is one of the key components in computer vision systems, especially for autonomous driving applications. In this work, we present Voxel-FPN, a novel one-stage 3D object detector that utilizes raw data from LIDAR sensors only. The core framework consists of an encoder network and a corresponding decoder followed by a region proposal network. The encoder extracts multi-scale voxel information in a bottom-up manner, while the decoder fuses multiple feature maps from various scales in a top-down way. Extensive experiments show that the proposed method has better performance on extracting features from point data and demonstrates its superiority over some baselines on the challenging KITTI-3D benchmark, obtaining good performance on both speed and accuracy in real-world scenarios. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 138,325 |
2303.15564 | Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder | Deep neural networks are vulnerable to backdoor attacks, where an adversary maliciously manipulates the model behavior through overlaying images with special triggers. Existing backdoor defense methods often require access to a few validation data and model parameters, which is impractical in many real-world applications, e.g., when the model is provided as a cloud service. In this paper, we address the practical task of blind backdoor defense at test time, in particular for black-box models. The true label of every test image needs to be recovered on the fly from a suspicious model regardless of image benignity. We focus on test-time image purification methods that incapacitate possible triggers while keeping semantic contents intact. Due to diverse trigger patterns and sizes, heuristic trigger search in image space can be unscalable. We circumvent such a barrier by leveraging the strong reconstruction power of generative models, and propose a framework of Blind Defense with Masked AutoEncoder (BDMAE). It detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations. The detection results are then refined by considering trigger topology. Finally, we fuse MAE restorations adaptively into a purified image for making predictions. Our approach is blind to the model architectures, trigger patterns and image benignity. Extensive experiments under different backdoor settings validate its effectiveness and generalizability. Code is available at https://github.com/tsun/BDMAE. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 354,535 |
1905.11882 | Statistical bounds for entropic optimal transport: sample complexity and the central limit theorem | We prove several fundamental statistical bounds for entropic OT with the squared Euclidean cost between subgaussian probability measures in arbitrary dimension. First, through a new sample complexity result we establish the rate of convergence of entropic OT for empirical measures. Our analysis improves exponentially on the bound of Genevay et al. (2019) and extends their work to unbounded measures. Second, we establish a central limit theorem for entropic OT, based on techniques developed by Del Barrio and Loubes (2019). Previously, such a result was only known for finite metric spaces. As an application of our results, we develop and analyze a new technique for estimating the entropy of a random variable corrupted by Gaussian noise. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,578 |
2310.00012 | Operator-free Equilibrium on the Sphere | We propose a generalized minimum discrepancy, which derives from Legendre's ODE and spherical harmonic theory, to provide a new criterion for equidistributed point sets on the sphere. A continuous, differentiable kernel in terms of elementary functions is established to simplify the computation of the generalized minimum discrepancy. We consider deterministic points generated from Pycke's statistics to integrate a Franke function on the sphere, and investigate the discrepancies of point systems embedded with different kernels. Quantitative experiments are conducted and the results are analyzed. Our deduced model can explore latent point systems that have the minimum discrepancy, by the use of derivatives and without the involvement of pseudodifferential operators or Beltrami operators. Compared to random points generated by the Monte Carlo method, only a few points generated by our method are required to approximate the target in arbitrary dimensions. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 395,785 |
1007.0690 | A unified view of Automata-based algorithms for Frequent Episode Discovery | The Frequent Episode Discovery framework is a popular framework in Temporal Data Mining with many applications. Over the years, many different notions of episode frequency have been proposed, along with different algorithms for episode discovery. In this paper, we present a unified view of all such frequency counting algorithms: a generic algorithm of which all current algorithms are special cases. This unified view allows one to gain insights into the different frequencies, and we present quantitative relationships among them. The unified view also helps in obtaining correctness proofs for the various algorithms, as we show here. We also point out how this unified view helps us to generalize the algorithms so that they can discover episodes with general partial orders. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 6,992 |
2402.07043 | A Tale of Tails: Model Collapse as a Change of Scaling Laws | As AI model size grows, neural scaling laws have become a crucial tool to predict the improvements of large models when increasing capacity and the size of original (human or natural) training data. Yet, the widespread use of popular models means that the ecosystem of online data and text will co-evolve to progressively contain increased amounts of synthesized data. In this paper we ask: How will the scaling laws change in the inevitable regime where synthetic data makes its way into the training corpus? Will future models still improve, or will they be doomed to degenerate, up to total (model) collapse? We develop a theoretical framework of model collapse through the lens of scaling laws. We discover a wide range of decay phenomena, analyzing loss of scaling, shifted scaling with the number of generations, the "un-learning" of skills, and grokking when mixing human and synthesized data. Our theory is validated by large-scale experiments with a transformer on an arithmetic task and text generation using the large language model Llama2. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 428,547 |
2405.05506 | Cross-Care: Assessing the Healthcare Implications of Pre-training Data on Language Model Bias | Large language models (LLMs) are increasingly essential in processing natural languages, yet their application is frequently compromised by biases and inaccuracies originating in their training data. In this study, we introduce Cross-Care, the first benchmark framework dedicated to assessing biases and real world knowledge in LLMs, specifically focusing on the representation of disease prevalence across diverse demographic groups. We systematically evaluate how demographic biases embedded in pre-training corpora like $ThePile$ influence the outputs of LLMs. We expose and quantify discrepancies by juxtaposing these biases against actual disease prevalences in various U.S. demographic groups. Our results highlight substantial misalignment between LLM representation of disease prevalence and real disease prevalence rates across demographic subgroups, indicating a pronounced risk of bias propagation and a lack of real-world grounding for medical applications of LLMs. Furthermore, we observe that various alignment methods minimally resolve inconsistencies in the models' representation of disease prevalence across different languages. For further exploration and analysis, we make all data and a data visualization tool available at: www.crosscare.net. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 452,943 |
2010.02503 | Categorizing Online Shopping Behavior from Cosmetics to Electronics: An Analytical Framework | A success factor for modern companies in the age of digital marketing is to understand how customers think and behave based on their online shopping patterns. While the conventional method of gathering consumer insights through questionnaires and surveys still forms the basis of descriptive analytics for market intelligence units, we propose a machine learning framework to automate this process. In this paper, we present a modular consumer data analysis platform that processes session-level interaction records between users and products to predict session-level, user-journey-level and customer-behavior-specific patterns leading towards purchase events. We explore the computational framework and provide test results on two big data sets, cosmetics and consumer electronics, of size 2 GB and 15 GB, respectively. The proposed system achieves 97-99% classification accuracy and recall for user-journey-level purchase predictions and categorizes buying behavior into 5 clusters with increasing purchase ratios for both data sets. Thus, the proposed framework is extendable to other large e-commerce data sets to obtain automated purchase predictions and descriptive consumer insights. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 199,055 |
1608.00180 | Local Testing for Membership in Lattices | Motivated by the structural analogies between point lattices and linear error-correcting codes, and by the mature theory on locally testable codes, we initiate a systematic study of local testing for membership in lattices. Testing membership in lattices is also motivated in practice, by applications to integer programming, error detection in lattice-based communication, and cryptography. Apart from establishing the conceptual foundations of lattice testing, our results include the following: 1. We demonstrate upper and lower bounds on the query complexity of local testing for the well-known family of code formula lattices. Furthermore, we instantiate our results with code formula lattices constructed from Reed-Muller codes, and obtain nearly-tight bounds. 2. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing 35(1), pp. 1-21). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 59,237 |
2409.00494 | GenAI-powered Multi-Agent Paradigm for Smart Urban Mobility: Opportunities and Challenges for Integrating Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) with Intelligent Transportation Systems | Leveraging recent advances in generative AI, multi-agent systems are increasingly being developed to enhance the functionality and efficiency of smart city applications. This paper explores the transformative potential of large language models (LLMs) and emerging Retrieval-Augmented Generation (RAG) technologies in Intelligent Transportation Systems (ITS), paving the way for innovative solutions to address critical challenges in urban mobility. We begin by providing a comprehensive overview of the current state-of-the-art in mobility data, ITS, and Connected Vehicles (CV) applications. Building on this review, we discuss the rationale behind RAG and examine the opportunities for integrating these Generative AI (GenAI) technologies into the smart mobility sector. We propose a conceptual framework aimed at developing multi-agent systems capable of intelligently and conversationally delivering smart mobility services to urban commuters, transportation operators, and decision-makers. Our approach seeks to foster an autonomous and intelligent approach that (a) promotes science-based advisory to reduce traffic congestion, accidents, and carbon emissions at multiple scales, (b) facilitates public education and engagement in participatory mobility management, and (c) automates specialized transportation management tasks and the development of critical ITS platforms, such as data analytics and interpretation, knowledge representation, and traffic simulations. By integrating LLM and RAG, our approach seeks to overcome the limitations of traditional rule-based multi-agent systems, which rely on fixed knowledge bases and limited reasoning capabilities. This integration paves the way for a more scalable, intuitive, and automated multi-agent paradigm, driving advancements in ITS and urban mobility. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 484,946 |
1811.10553 | A deep neural network to enhance prediction of 1-year mortality using echocardiographic videos of the heart | Predicting future clinical events helps physicians guide appropriate intervention. Machine learning has tremendous promise to assist physicians with predictions based on the discovery of complex patterns from historical data, such as large, longitudinal electronic health records (EHR). This study is a first attempt to demonstrate such capabilities using raw echocardiographic videos of the heart. We show that a large dataset of 723,754 clinically-acquired echocardiographic videos (~45 million images) linked to longitudinal follow-up data in 27,028 patients can be used to train a deep neural network to predict 1-year mortality with good accuracy (area under the curve (AUC) in an independent test set = 0.839). Prediction accuracy was further improved by adding EHR data (AUC = 0.858). Finally, we demonstrate that the trained neural network was more accurate in mortality prediction than two expert cardiologists. These results highlight the potential of neural networks to add new power to clinical predictions. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 114,520 |
1611.03068 | Incremental Sequence Learning | Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning. We introduce Incremental Sequence Learning, a simple incremental approach to sequence learning. Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data. Each time a performance criterion has been reached, the length of the parts of the sequences used for training is increased. We introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences. We find that Incremental Sequence Learning greatly speeds up sequence learning and reaches the best test performance level of regular sequence learning 20 times faster, reduces the test error by 74%, and in general performs more robustly; it displays lower variance and achieves sustained progress after all three comparison methods have stopped improving. The other instantiations of curriculum learning do not result in any noticeable improvement. A trained sequence prediction model is also used in transfer learning to the task of sequence classification, where it is found that transfer learning realizes improved classification performance compared to methods that learn to classify from scratch. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 63,648 |
2209.01421 | Deep Live Video Ad Placement on the 5G Edge | The video broadcasting industry has been growing significantly in recent years, especially in delivering personalized content to end users. While video broadcasting has continued to grow beyond TV, video advertising has become a key marketing tool to deliver targeted messages directly to the audience. However, a key problem for broadcast TV is that TV commercials target a broad audience, and therefore lack user-specific and personalized ad content. In this paper, we propose a deep edge-cloud ad-placement system, and briefly describe our methodologies and the architecture of our ad placement system for delivering both Video on Demand (VoD) and live broadcast TV content over the MMT streaming protocol. The aim of our paper is to showcase how to enable targeted, personalized, and user-specific advertising services deployed on future 5G MEC platforms, which in turn have high potential to increase ad revenues for the mobile operator industry. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 315,886 |
2411.15703 | Analysis of Hierarchical AoII over unreliable channel: A Stochastic Hybrid System Approach | In this work, we generalize the Stochastic Hybrid Systems (SHSs) analysis of traditional AoI to the AoII metric. Hierarchical ageing processes are adopted using the continuous AoII for the first time, where two different hierarchy schemes, i.e., a hybrid of linear ageing processes with different slopes and a hybrid of linear and quadratic ageing processes, are considered. We first modify the main result in \cite[Theorem 1]{yates_age_2020b} to provide a systematic way to analyze the continuous hierarchical AoII over unslotted real-time systems. The closed-form expressions of average hierarchical AoII are obtained based on our Theorem \ref{theorem1} in two typical scenarios with different channel conditions, i.e., an M/M/1/1 queue over a noisy channel and two M/M/1/1 queues over a collision channel. Moreover, we analyze the stability conditions for the two scenarios, given that the quadratic ageing process may lead to the absence of stationary solutions. Finally, we compare the average age performance between the classic AoI results and our AoII results in the M/M/1/1 queue, and the effects of different channel parameters on AoII are also evaluated. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 510,729 |
1702.00584 | Ultra Reliable Short Message Relaying with Wireless Power Transfer | We consider a dual-hop wireless network where an energy constrained relay node first harvests energy through the received radio-frequency signal from the source, and then uses the harvested energy to forward the source's information to the destination node. The throughput and delay metrics are investigated for a decode-and-forward relaying mechanism in the finite blocklength regime and delay-limited transmission mode. We consider ultra-reliable communication scenarios under discussion for the fifth generation of wireless systems, with error and latency constraints. The impact of the blocklength, information bits, and relay position on these metrics is investigated. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 67,678 |
1903.06740 | A Data Mining Approach to Flight Arrival Delay Prediction for American Airlines | In the present scenario of domestic flights in the USA, there have been numerous instances of flight delays and cancellations. In the United States, American Airlines, Inc. has been one of the most trusted airlines and the world's largest in terms of the number of destinations served. But when it comes to domestic flights, AA has not lived up to expectations in terms of punctuality or on-time performance. Flight delays also cause airline companies operating commercial flights to incur huge losses, so they are trying their best to prevent or avoid delays and cancellations by taking certain measures. This study aims at analyzing flight information for US domestic flights operated by American Airlines, covering the top 5 busiest airports of the US, and predicting possible arrival delay of a flight using data mining and machine learning approaches. A Gradient Boosting Classifier model is deployed by training and hyper-parameter tuning it, achieving a maximum accuracy of 85.73%. Such an intelligent system is very essential in foretelling flights' on-time performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,450 |
1811.04608 | Matrix Product Operator Restricted Boltzmann Machines | A restricted Boltzmann machine (RBM) learns a probability distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-way) input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by model construction, which leads to a weak model expression power. This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power. A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed. Numerical experiments compare the MPORBM with the traditional RBM and MvRBM for data classification and image completion and denoising tasks. The expressive power of the MPORBM as a function of the MPO-rank is also investigated. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 113,131 |
2407.14653 | OASIS: Conditional Distribution Shaping for Offline Safe Reinforcement Learning | Offline safe reinforcement learning (RL) aims to train a policy that satisfies constraints using a pre-collected dataset. Most current methods struggle with the mismatch between imperfect demonstrations and the desired safe and rewarding performance. In this paper, we introduce OASIS (cOnditionAl diStributIon Shaping), a new paradigm in offline safe RL designed to overcome these critical limitations. OASIS utilizes a conditional diffusion model to synthesize offline datasets, thus shaping the data distribution toward a beneficial target domain. Our approach ensures compliance with safety constraints through effective data utilization and regularization techniques, to the benefit of offline safe RL training. Comprehensive evaluations on public benchmarks and varying datasets showcase OASIS's superiority in enabling offline safe RL agents to achieve high-reward behavior while satisfying the safety constraints, outperforming established baselines. Furthermore, OASIS exhibits high data efficiency and robustness, making it suitable for real-world applications, particularly in tasks where safety is imperative and high-quality demonstrations are scarce. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 474,856 |
2212.02184 | 3D-LatentMapper: View Agnostic Single-View Reconstruction of 3D Shapes | Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to represent and generate 3D shapes, as well as a vast number of use cases. However, single-view reconstruction remains a challenging topic that can unlock various interesting use cases such as interactive design. In this work, we propose a novel framework that leverages the intermediate latent spaces of Vision Transformer (ViT) and a joint image-text representational model, CLIP, for fast and efficient Single View Reconstruction (SVR). More specifically, we propose a novel mapping network architecture that learns a mapping between deep features extracted from ViT and CLIP, and the latent space of a base 3D generative model. Unlike previous work, our method enables view-agnostic reconstruction of 3D shapes, even in the presence of large occlusions. We use the ShapeNetV2 dataset and perform extensive experiments with comparisons to SOTA methods to demonstrate our method's effectiveness. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 334,712 |
2003.12972 | On the Precise Error Analysis of Support Vector Machines | This paper investigates the asymptotic behavior of the soft-margin and hard-margin support vector machine (SVM) classifiers for simultaneously high-dimensional and numerous data (large $n$ and large $p$ with $n/p\to\delta$) drawn from a Gaussian mixture distribution. Sharp predictions of the classification error rate of the hard-margin and soft-margin SVM are provided, as well as asymptotic limits of such important parameters as the margin and the bias. As a further outcome, the analysis allows for the identification of the maximum number of training samples that the hard-margin SVM is able to separate. The precise nature of our results allows for an accurate performance comparison of the hard-margin and soft-margin SVM as well as a better understanding of the effect of the involved parameters (such as the number of measurements and the margin parameter) on the classification performance. Our analysis, confirmed by a set of numerical experiments, builds upon the convex Gaussian min-max Theorem, and extends its scope to new problems never studied before by this framework. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 170,062 |
1708.02283 | Real-Time Visual Localisation in a Tagged Environment | In a robotised warehouse a major issue is the safety of human operators in case of intervention in the work area of the robots. The current solution is to shut down every robot but it causes a loss of productivity, especially for large robotised warehouses. In order to avoid this loss we need to ensure the operator's security during his/her intervention in the warehouse without powering off the robots. The human operator needs to be localised in the warehouse and the trajectories of the robots have to be modified so that they do not interfere with the human. The purpose of this paper is to demonstrate a visual localisation method with visual elements that are already available in the current warehouse setup. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 78,558 |
2102.01737 | Canonical Form of Lyapunov Second Method in Mathematical Modelling and
Control Design | The objective of the paper is to put canonical Lyapunov function (CLF), canonizing diffeomorphism (CD) and canonical form of dynamical systems (CFDS), which have led to the generalization of the Lyapunov second method, in perspective of their high efficiency for Mathematical Modelling and Control Design. We show how the symbiosis of the ideas of Henri Poincare and Nikolay Chetaev leads us to CD, CFDS and CLF. Our approach successfully translates into mathematical modelling and control design for special two-angles synchronized longitudinal maneuvering of a thrust-vectored aircraft. The essentially nonlinear five-dimensional mathematical model of the longitudinal flight dynamics of a thrust-vectored aircraft in a wing-body coordinate system with two controls, namely the angular deflections of a movable horizontal stabilizer and a turbojet engine nozzle, is investigated. The wide-sense robust and stable in the large tracking control law is designed. Its core is the hierarchical cascade of two controlling attractor-mediators and two controlling terminal attractors embedded in the extended phase space of the mathematical model of the aircraft longitudinal motion. The detailed demonstration of the elaborated technique of designing wide-sense robust tracking control for the nonlinear multidimensional mathematical model constitutes the quintessence of the paper. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 218,203 |
2208.11313 | RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided
Self-Exemplars | Recent methods for single image super-resolution (SISR) have demonstrated outstanding performance in generating high-resolution (HR) images from low-resolution (LR) images. However, most of these methods show their superiority using synthetically generated LR images, and their generalizability to real-world images is often not satisfactory. In this paper, we pay attention to two well-known strategies developed for robust super-resolution (SR), i.e., reference-based SR (RefSR) and zero-shot SR (ZSSR), and propose an integrated solution, called reference-based zero-shot SR (RZSR). Following the principle of ZSSR, we train an image-specific SR network at test time using training samples extracted only from the input image itself. To advance ZSSR, we obtain reference image patches with rich textures and high-frequency details which are also extracted only from the input image using cross-scale matching. To this end, we construct an internal reference dataset and retrieve reference image patches from the dataset using depth information. Using LR patches and their corresponding HR reference patches, we train a RefSR network that is embodied with a non-local attention module. Experimental results demonstrate the superiority of the proposed RZSR compared to the previous ZSSR methods and robustness to unseen images compared to other fully supervised SISR methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 314,381 |
2210.06170 | Contrastive Neural Ratio Estimation for Simulation-based Inference | Likelihood-to-evidence ratio estimation is usually cast as either a binary (NRE-A) or a multiclass (NRE-B) classification task. In contrast to the binary classification framework, the current formulation of the multiclass version has an intrinsic and unknown bias term, making otherwise informative diagnostics unreliable. We propose a multiclass framework free from the bias inherent to NRE-B at optimum, leaving us in the position to run diagnostics that practitioners depend on. It also recovers NRE-A in one corner case and NRE-B in the limiting case. For fair comparison, we benchmark the behavior of all algorithms in both familiar and novel training regimes: when jointly drawn data is unlimited, when data is fixed but prior draws are unlimited, and in the commonplace fixed data and parameters setting. Our investigations reveal that the highest performing models are distant from the competitors (NRE-A, NRE-B) in hyperparameter space. We make a recommendation for hyperparameters distinct from the previous models. We suggest two bounds on the mutual information as performance metrics for simulation-based inference methods, without the need for posterior samples, and provide experimental results. This version corrects a minor implementation error in $\gamma$, improving results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 323,163 |
1612.06516 | Random linear under-determined systems with block-sparse solutions --
asymptotics, large deviations, and finite dimensions | In this paper we consider random linear under-determined systems with block-sparse solutions. A standard subvariant of such systems, namely, precisely the same type of systems without additional block structuring requirement, gained a lot of popularity over the last decade. This is of course in the first place due to the success in mathematical characterization of an $\ell_1$ optimization technique typically used for solving such systems, initially achieved in \cite{CRT,DOnoho06CS} and later on perfected in \cite{DonohoPol,DonohoUnsigned,StojnicCSetam09,StojnicUpper10}. The success that we achieved in \cite{StojnicCSetam09,StojnicUpper10} characterizing the standard sparse solutions systems, we were then able to replicate in a sequence of papers \cite{StojnicCSetamBlock09,StojnicUpperBlock10,StojnicICASSP09block,StojnicJSTSP09} where instead of the standard $\ell_1$ optimization we utilized its $\ell_2/\ell_1$ variant as a better fit for systems with block-sparse solutions. All of these results finally settled the so-called threshold/phase transitions phenomena (which naturally assume the asymptotic/large dimensional scenario). Here, in addition to a few novel asymptotic considerations, we also try to raise the level a bit, step a bit away from the asymptotics, and consider the finite dimensions scenarios as well. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 65,826 |
2402.05408 | MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis | We present a Multi-Instance Generation (MIG) task, simultaneously generating multiple instances with diverse controls in one image. Given a set of predefined coordinates and their corresponding descriptions, the task is to ensure that generated instances are accurately at the designated locations and that all instances' attributes adhere to their corresponding description. This broadens the scope of current research on Single-instance generation, elevating it to a more versatile and practical dimension. Inspired by the idea of divide and conquer, we introduce an innovative approach named Multi-Instance Generation Controller (MIGC) to address the challenges of the MIG task. Initially, we break down the MIG task into several subtasks, each involving the shading of a single instance. To ensure precise shading for each instance, we introduce an instance enhancement attention mechanism. Lastly, we aggregate all the shaded instances to provide the necessary information for accurately generating multiple instances in stable diffusion (SD). To evaluate how well generation models perform on the MIG task, we provide a COCO-MIG benchmark along with an evaluation pipeline. Extensive experiments were conducted on the proposed COCO-MIG benchmark, as well as on various commonly used benchmarks. The evaluation results illustrate the exceptional control capabilities of our model in terms of quantity, position, attribute, and interaction. Code and demos will be released at https://migcproject.github.io/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 427,852 |
1812.08999 | Feature-Wise Bias Amplification | We study the phenomenon of bias amplification in classifiers, wherein a machine learning model learns to predict classes with a greater disparity than the underlying ground truth. We demonstrate that bias amplification can arise via an inductive bias in gradient descent methods that results in the overestimation of the importance of moderately-predictive "weak" features if insufficient training data is available. This overestimation gives rise to feature-wise bias amplification -- a previously unreported form of bias that can be traced back to the features of a trained model. Through analysis and experiments, we show that while some bias cannot be mitigated without sacrificing accuracy, feature-wise bias amplification can be mitigated through targeted feature selection. We present two new feature selection algorithms for mitigating bias amplification in linear models, and show how they can be adapted to convolutional neural networks efficiently. Our experiments on synthetic and real data demonstrate that these algorithms consistently lead to reduced bias without harming accuracy, in some cases eliminating predictive bias altogether while providing modest gains in accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 117,085 |
2112.10668 | Few-shot Learning with Multilingual Language Models | Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to be able to jointly represent many different languages, their training data is dominated by English, potentially limiting their cross-lingual generalization. In this work, we train multilingual generative language models on a corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings) and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 directions with 32 training examples, while surpassing the official supervised baseline in 45 directions. We conduct an in-depth analysis of different multilingual prompting approaches, showing in particular that strong few-shot learning performance across languages can be achieved via cross-lingual transfer through both templates and demonstration examples. Finally, we evaluate our models in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 272,500 |
2110.06195 | Planning Sensing Sequences for Subsurface 3D Tumor Mapping | Surgical automation has the potential to enable increased precision and reduce the per-patient workload of overburdened human surgeons. An effective automation system must be able to sense and map subsurface anatomy, such as tumors, efficiently and accurately. In this work, we present a method that plans a sequence of sensing actions to map the 3D geometry of subsurface tumors. We leverage a sequential Bayesian Hilbert map to create a 3D probabilistic occupancy model that represents the likelihood that any given point in the anatomy is occupied by a tumor, conditioned on sensor readings. We iteratively update the map, utilizing Bayesian optimization to determine sensing poses that explore unsensed regions of anatomy and exploit the knowledge gained by previous sensing actions. We demonstrate our method's efficiency and accuracy in three anatomical scenarios including a liver tumor scenario generated from a real patient's CT scan. The results show that our proposed method significantly outperforms comparison methods in terms of efficiency while detecting subsurface tumors with high accuracy. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 260,530 |
2412.00343 | Nonlinearity and Uncertainty Informed Moment-Matching Gaussian Mixture
Splitting | Many problems in navigation and tracking require increasingly accurate characterizations of the evolution of uncertainty in nonlinear systems. Nonlinear uncertainty propagation approaches based on Gaussian mixture density approximations offer distinct advantages over sampling based methods in their computational cost and continuous representation. State-of-the-art Gaussian mixture approaches are adaptive in that individual Gaussian mixands are selectively split into mixtures to yield better approximations of the true propagated distribution. Despite the importance of the splitting process to accuracy and computational efficiency, relatively little work has been devoted to mixand selection and splitting direction optimization. The first part of this work presents splitting methods that preserve the mean and covariance of the original distribution. Then, we present and compare a number of novel heuristics for selecting the splitting direction. The choice of splitting direction is informed by the initial uncertainty distribution, properties of the nonlinear function through which the original distribution is propagated, and a whitening based natural scaling method to avoid dependence of the splitting direction on the scaling of coordinates. We compare these novel heuristics to existing techniques in three distinct examples involving Cartesian to polar coordinate transformation, Keplerian orbital element propagation, and uncertainty propagation in the circular restricted three-body problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 512,608 |
2111.11322 | Contour-guided Image Completion with Perceptual Grouping | Humans are excellent at perceiving illusory outlines. We are readily able to complete contours, shapes, scenes, and even unseen objects when provided with images that contain broken fragments of a connected appearance. In vision science, this ability is largely explained by perceptual grouping: a foundational set of processes in human vision that describes how separated elements can be grouped. In this paper, we revisit an algorithm called Stochastic Completion Fields (SCFs) that mechanizes a set of such processes -- good continuity, closure, and proximity -- through contour completion. This paper implements a modernized model of the SCF algorithm, and uses it in an image editing framework where we propose novel methods to complete fragmented contours. We show how the SCF algorithm plausibly mimics results in human perception. We use the SCF completed contours as guides for inpainting, and show that our guides improve the performance of state-of-the-art models. Additionally, we show that the SCF aids in finding edges in high-noise environments. Overall, our described algorithms resemble an important mechanism in the human visual system, and offer a novel framework that modern computer vision models can benefit from. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 267,628 |
2402.04829 | NeRF as a Non-Distant Environment Emitter in Physics-based Inverse
Rendering | Physics-based inverse rendering enables joint optimization of shape, material, and lighting based on captured 2D images. To ensure accurate reconstruction, using a light model that closely resembles the captured environment is essential. Although the widely adopted distant environmental lighting model is adequate in many cases, we demonstrate that its inability to capture spatially varying illumination can lead to inaccurate reconstructions in many real-world inverse rendering scenarios. To address this limitation, we incorporate NeRF as a non-distant environment emitter into the inverse rendering pipeline. Additionally, we introduce an emitter importance sampling technique for NeRF to reduce the rendering variance. Through comparisons on both real and synthetic datasets, our results demonstrate that our NeRF-based emitter offers a more precise representation of scene lighting, thereby improving the accuracy of inverse rendering. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 427,603 |
2002.05658 | Ten Research Challenge Areas in Data Science | Although data science builds on knowledge from computer science, mathematics, statistics, and other disciplines, data science is a unique field with many mysteries to unlock: challenging scientific questions and pressing questions of societal importance. This article starts with meta-questions about data science as a discipline and then elaborates on ten ideas for the basis of a research agenda for data science. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 163,967 |
2209.09060 | Deep Metric Learning with Chance Constraints | Deep metric learning (DML) aims to minimize empirical expected loss of the pairwise intra-/inter- class proximity violations in the embedding space. We relate DML to the feasibility problem of finite chance constraints. We show that the minimizer of proxy-based DML satisfies certain chance constraints, and that the worst case generalization performance of the proxy-based methods can be characterized by the radius of the smallest ball around a class proxy to cover the entire domain of the corresponding class samples, suggesting that multiple proxies per class help performance. To provide a scalable algorithm as well as exploiting more proxies, we consider the chance constraints implied by the minimizers of proxy-based DML instances and reformulate DML as finding a feasible point in the intersection of such constraints, resulting in a problem to be approximately solved by iterative projections. Simply put, we repeatedly train a regularized proxy-based loss and re-initialize the proxies with the embeddings of the deliberately selected new samples. We applied our method with 4 well-accepted DML losses and show the effectiveness with extensive evaluations on 4 popular DML benchmarks. Code is available at: https://github.com/yetigurbuz/ccp-dml | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 318,366 |
2010.10294 | Adaptive Webpage Fingerprinting from TLS Traces | In webpage fingerprinting, an on-path adversary infers the specific webpage loaded by a victim user by analysing the patterns in the encrypted TLS traffic exchanged between the user's browser and the website's servers. This work studies modern webpage fingerprinting adversaries against the TLS protocol; aiming to shed light on their capabilities and inform potential defences. Despite the importance of this research area (the majority of global Internet users rely on standard web browsing with TLS) and the potential real-life impact, most past works have focused on attacks specific to anonymity networks (e.g., Tor). We introduce a TLS-specific model that: 1) scales to an unprecedented number of target webpages, 2) can accurately classify thousands of classes it never encountered during training, and 3) has low operational costs even in scenarios of frequent page updates. Based on these findings, we then discuss TLS-specific countermeasures and evaluate the effectiveness of the existing padding capabilities provided by TLS 1.3. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 201,846 |
1909.04101 | Neural Naturalist: Generating Fine-Grained Image Comparisons | We introduce the new Birds-to-Words dataset of 41k sentences describing fine-grained differences between photographs of birds. The language collected is highly detailed, while remaining understandable to the everyday observer (e.g., "heart-shaped face," "squat body"). Paragraph-length descriptions naturally adapt to varying levels of taxonomic and visual distance---drawn from a novel stratified sampling approach---with the appropriate level of detail. We propose a new model called Neural Naturalist that uses a joint image encoding and comparative module to generate comparative language, and evaluate the results with humans who must use the descriptions to distinguish real images. Our results indicate promising potential for neural models to explain differences in visual embedding space using natural language, as well as a concrete path for machine learning to aid citizen scientists in their effort to preserve biodiversity. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 144,690 |
2502.11569 | Towards Reasoning Ability of Small Language Models | Reasoning has long been viewed as an emergent property of large language models (LLMs), appearing at or above a certain scale ($\sim$100B parameters). However, recent studies challenge this assumption, showing that small language models (SLMs) can also achieve competitive reasoning performance. SLMs are increasingly favored for their efficiency and deployability. However, there is a lack of systematic study on the reasoning abilities of diverse SLMs, including those trained from scratch or derived from LLMs through quantization, pruning, and distillation. This raises a critical question: Can SLMs achieve reasoning abilities comparable to LLMs? In this work, we systematically survey, benchmark, and analyze 72 SLMs from six model families across 14 reasoning benchmarks. For reliable evaluation, we examine four evaluation methods and compare four LLM judges against human evaluations on 800 data points. We repeat all experiments three times to ensure a robust performance assessment. Additionally, we analyze the impact of different prompting strategies in small models. Beyond accuracy, we also evaluate model robustness under adversarial conditions and intermediate reasoning steps. Our findings challenge the assumption that scaling is the only way to achieve strong reasoning. Instead, we foresee a future where SLMs with strong reasoning capabilities can be developed through structured training or post-training compression. They can serve as efficient alternatives to LLMs for reasoning-intensive tasks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 534,445 |
1801.08577 | Effective Building Block Design for Deep Convolutional Neural Networks
using Search | Deep learning has shown promising results on many machine learning tasks but DL models are often complex networks with a large number of neurons and layers, and recently, complex layer structures known as building blocks. Finding the best deep model requires a combination of finding both the right architecture and the correct set of parameters appropriate for that architecture. In addition, this complexity (in terms of layer types, number of neurons, and number of layers) also presents problems with generalization since larger networks are easier to overfit to the data. In this paper, we propose a search framework for finding effective architectural building blocks for convolutional neural networks (CNN). Our approach is much faster at finding models that are close to state-of-the-art in performance. In addition, the models discovered by our approach are also smaller than models discovered by similar techniques. We achieve these twin advantages by designing our search space in such a way that it searches over a reduced set of state-of-the-art building blocks for CNNs including residual block, inception block, inception-residual block, ResNeXt block and many others. We apply this technique to generate models for multiple image datasets and show that these models achieve performance comparable to state-of-the-art (and even surpassing the state-of-the-art in one case). We also show that learned models are transferable between datasets. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 88,964 |
2303.11676 | Deep Learning Pipeline for Preprocessing and Segmenting Cardiac Magnetic
Resonance of Single Ventricle Patients from an Image Registry | Purpose: To develop and evaluate an end-to-end deep learning pipeline for segmentation and analysis of cardiac magnetic resonance images to provide core-lab processing for a multi-centre registry of Fontan patients. Materials and Methods: This retrospective study used training (n = 175), validation (n = 25) and testing (n = 50) cardiac magnetic resonance image exams collected from 13 institutions in the UK, US and Canada. The data was used to train and evaluate a pipeline containing three deep-learning models. The pipeline's performance was assessed on the Dice and IoU score between the automated and reference standard manual segmentation. Cardiac function values were calculated from both the automated and manual segmentation and evaluated using Bland-Altman analysis and paired t-tests. The overall pipeline was further evaluated qualitatively on 475 unseen patient exams. Results: For the 50-exam testing dataset, the pipeline achieved a median Dice score of 0.91 (0.89-0.94) for end-diastolic volume, 0.86 (0.82-0.89) for end-systolic volume, and 0.74 (0.70-0.77) for myocardial mass. The deep learning-derived end-diastolic volume, end-systolic volume, myocardial mass, stroke volume and ejection fraction had no statistical difference compared to the same values derived from manual segmentation with p values all greater than 0.05. For the 475 unseen patient exams, the pipeline achieved 68% adequate segmentation in both systole and diastole, 26% needed minor adjustments in either systole or diastole, 5% needed major adjustments, and the cropping model only failed in 0.4%. Conclusion: A deep learning pipeline can provide standardised 'core-lab' segmentation for Fontan patients. This pipeline can now be applied to the >4500 cardiac magnetic resonance exams currently in the FORCE registry as well as any new patients that are recruited. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 352,956 |
2205.08256 | Letters From the Past: Modeling Historical Sound Change Through
Diachronic Character Embeddings | While a great deal of work has been done on NLP approaches to lexical semantic change detection, other aspects of language change have received less attention from the NLP community. In this paper, we address the detection of sound change through historical spelling. We propose that a sound change can be captured by comparing the relative distance through time between their distributions using PPMI character embeddings. We verify this hypothesis in synthetic data and then test the method's ability to trace the well-known historical change of lenition of plosives in Danish historical sources. We show that the models are able to identify several of the changes under consideration and to uncover meaningful contexts in which they appeared. The methodology has the potential to contribute to the study of open questions such as the relative chronology of sound shifts and their geographical distribution. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 296,869 |
1610.07724 | Matroidal Structure of Skew Polynomial Rings with Application to Network
Coding | Over a finite field $\mathbb{F}_{q^m}$, the evaluation of skew polynomials is intimately related to the evaluation of linearized polynomials. This connection allows one to relate the concept of polynomial independence defined for skew polynomials to the familiar concept of linear independence for vector spaces. This relation allows for the definition of a representable matroid called the $\mathbb{F}_{q^m}[x;\sigma]$-matroid, with rank function that makes it a metric space. Specific submatroids of this matroid are individually bijectively isometric to the projective geometry of $\mathbb{F}_{q^m}$ equipped with the subspace metric. This isometry allows one to use the $\mathbb{F}_{q^m}[x;\sigma]$-matroid in a matroidal network coding application. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 62,837 |
2307.04525 | Cluster-Induced Mask Transformers for Effective Opportunistic Gastric
Cancer Screening on Non-contrast CT Scans | Gastric cancer is the third leading cause of cancer-related mortality worldwide, but no guideline-recommended screening test exists. Existing methods can be invasive, expensive, and lack sensitivity to identify early-stage gastric cancer. In this study, we explore the feasibility of using a deep learning approach on non-contrast CT scans for gastric cancer detection. We propose a novel cluster-induced Mask Transformer that jointly segments the tumor and classifies abnormality in a multi-task manner. Our model incorporates learnable clusters that encode the texture and shape prototypes of gastric cancer, utilizing self- and cross-attention to interact with convolutional features. In our experiments, the proposed method achieves a sensitivity of 85.0% and specificity of 92.6% for detecting gastric tumors on a hold-out test set consisting of 100 patients with cancer and 148 normal. In comparison, two radiologists have an average sensitivity of 73.5% and specificity of 84.3%. We also obtain a specificity of 97.7% on an external test set with 903 normal cases. Our approach performs comparably to established state-of-the-art gastric cancer screening tools like blood testing and endoscopy, while also being more sensitive in detecting early-stage cancer. This demonstrates the potential of our approach as a novel, non-invasive, low-cost, and accurate method for opportunistic gastric cancer screening. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 378,435 |
1912.00879 | Improving Question Generation with Sentence-level Semantic Matching and Answer Position Inferring | Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation. However, we observe that these approaches often generate wrong question words or keywords and copy answer-irrelevant words from the input. We believe that the lack of global question semantics and insufficient exploitation of answer position-awareness are the key root causes. In this paper, we propose a neural question generation model with two concrete modules: sentence-level semantic matching and answer position inferring. Further, we enhance the initial state of the decoder by leveraging the answer-aware gated fusion mechanism. Experimental results demonstrate that our model outperforms the state-of-the-art (SOTA) models on the SQuAD and MARCO datasets. Owing to its generality, our work also improves the existing models significantly. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 155,923
1703.01670 | Control Interpretations for First-Order Optimization Methods | First-order iterative optimization methods play a fundamental role in large scale optimization and machine learning. This paper presents control interpretations for such optimization methods. First, we give loop-shaping interpretations for several existing optimization methods and show that they are composed of basic control elements such as PID and lag compensators. Next, we apply the small gain theorem to draw a connection between the convergence rate analysis of optimization methods and the input-output gain computations of certain complementary sensitivity functions. These connections suggest that standard classical control synthesis tools may be brought to bear on the design of optimization algorithms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 69,410 |
2104.00556 | Deep Two-View Structure-from-Motion Revisited | Two-view structure-from-motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM. Existing deep learning-based approaches formulate the problem by either recovering absolute pose scales from two consecutive frames or predicting a depth map from a single image, both of which are ill-posed problems. In contrast, we propose to revisit the problem of deep two-view SfM by leveraging the well-posedness of the classic pipeline. Our method consists of 1) an optical flow estimation network that predicts dense correspondences between two frames; 2) a normalized pose estimation module that computes relative camera poses from the 2D optical flow correspondences, and 3) a scale-invariant depth estimation network that leverages epipolar geometry to reduce the search space, refine the dense correspondences, and estimate relative depth maps. Extensive experiments show that our method outperforms all state-of-the-art two-view SfM methods by a clear margin on KITTI depth, KITTI VO, MVS, Scenes11, and SUN3D datasets in both relative pose and depth estimation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 228,040 |
2207.04546 | FairDistillation: Mitigating Stereotyping in Language Models | Large pre-trained language models are successfully being used in a variety of tasks, across many languages. With this ever-increasing usage, the risk of harmful side effects also rises, for example by reproducing and reinforcing stereotypes. However, detecting and mitigating these harms is difficult to do in general and becomes computationally expensive when tackling multiple languages or when considering different biases. To address this, we present FairDistillation: a cross-lingual method based on knowledge distillation to construct smaller language models while controlling for specific biases. We found that our distillation method does not negatively affect the downstream performance on most tasks and successfully mitigates stereotyping and representational harms. We demonstrate that FairDistillation can create fairer language models at a considerably lower cost than alternative approaches. | false | false | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | 307,233 |
2201.09120 | Investigating the Potential of Auxiliary-Classifier GANs for Image Classification in Low Data Regimes | Generative Adversarial Networks (GANs) have shown promise in augmenting datasets and boosting convolutional neural networks' (CNN) performance on image classification tasks. However, they introduce more hyperparameters to tune, as well as the need for additional time and computational power to train alongside the CNN. In this work, we examine the potential for Auxiliary-Classifier GANs (AC-GANs) as a 'one-stop-shop' architecture for image classification, particularly in low data regimes. Additionally, we explore modifications to the typical AC-GAN framework, changing the generator's latent space sampling scheme and employing a Wasserstein loss with gradient penalty to stabilize the simultaneous training of image synthesis and classification. Through experiments on images of varying resolutions and complexity, we demonstrate that AC-GANs show promise in image classification, achieving competitive performance with standard CNNs. These methods can be employed as an 'all-in-one' framework with particular utility in the absence of large amounts of training data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 276,562
2407.18232 | LION: Linear Group RNN for 3D Object Detection in Point Clouds | The benefit of transformers in large-scale 3D point cloud perception tasks, such as 3D object detection, is limited by their quadratic computation cost when modeling long-range relationships. In contrast, linear RNNs have low computational complexity and are suitable for long-range modeling. Toward this goal, we propose a simple and effective window-based framework built on LInear grOup RNN (i.e., perform linear RNN for grouped features) for accurate 3D object detection, called LION. The key property is to allow sufficient feature interaction in a much larger group than transformer-based methods. However, effectively applying linear group RNN to 3D object detection in highly sparse point clouds is not trivial due to its limitation in handling spatial modeling. To tackle this problem, we simply introduce a 3D spatial feature descriptor and integrate it into the linear group RNN operators to enhance their spatial features rather than blindly increasing the number of scanning orders for voxel features. To further address the challenge in highly sparse point clouds, we propose a 3D voxel generation strategy to densify foreground features thanks to linear group RNN as a natural property of auto-regressive models. Extensive experiments verify the effectiveness of the proposed components and the generalization of our LION on different linear group RNN operators including Mamba, RWKV, and RetNet. Furthermore, it is worth mentioning that our LION-Mamba achieves state-of-the-art performance on the Waymo, nuScenes, Argoverse V2, and ONCE datasets. Last but not least, our method supports various kinds of advanced linear RNN operators (e.g., RetNet, RWKV, Mamba, xLSTM and TTT) on the small but popular KITTI dataset for a quick experience with our linear RNN-based framework. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 476,284
2111.11600 | Throughput Maximization for Active Intelligent Reflecting Surface Aided Wireless Powered Communications | This paper considers an active intelligent reflecting surface (IRS)-aided wireless powered communication network (WPCN), where devices first harvest energy and then transmit information to a hybrid access point (HAP). Different from the existing works on passive IRS-aided WPCNs, this is the first work that introduces the active IRS in WPCNs. To guarantee fairness, the problem is formulated as an amplifying power-limited weighted sum throughput (WST) maximization problem, which is solved by successive convex approximation technique and fractional programming alternatively. To balance the performance and complexity tradeoff, three beamforming setups are considered at the active IRS, namely user-adaptive IRS beamforming, uplink-adaptive IRS beamforming, and static IRS beamforming. Numerical results demonstrate the significant superiority of employing active IRS in WPCNs and the benefits of dynamic IRS beamforming. Specifically, it is found that compared to the passive IRS, the active IRS not only improves the WST greatly, but also is more energy-efficient and can significantly extend the transmission coverage. Moreover, different from the symmetric deployment strategy of passive IRS, it is more preferable to deploy the active IRS near the devices. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 267,710
1911.03558 | Joint Demosaicing and Super-Resolution (JDSR): Network Design and Perceptual Optimization | Image demosaicing and super-resolution are two important tasks in the color imaging pipeline. So far they have been mostly independently studied in the open literature of deep learning; little is known about the potential benefit of formulating a joint demosaicing and super-resolution (JDSR) problem. In this paper, we propose an end-to-end optimization solution to the JDSR problem and demonstrate its practical significance in computational imaging. Our technical contributions are mainly two-fold. On network design, we have developed a Residual-Dense Squeeze-and-Excitation Networks (RDSEN) supported by a pre-demosaicing network (PDNet) as the pre-processing step. We address the issue of spatio-spectral attention for color-filter-array (CFA) data and discuss how to achieve better information flow by concatenating Residue-Dense Squeeze-and-Excitation Blocks (RDSEBs) for JDSR. Experimental results have shown that significant PSNR/SSIM gain can be achieved by RDSEN over previous network architectures including state-of-the-art RCAN. On perceptual optimization, we propose to leverage the latest ideas including relativistic discriminator and pre-excitation perceptual loss function to further improve the visual quality of textured regions in reconstructed images. Our extensive experiment results have shown that Texture-enhanced Relativistic average Generative Adversarial Network (TRaGAN) can produce both subjectively more pleasant images and objectively lower perceptual distortion scores than standard GAN for JDSR. Finally, we have verified the benefit of JDSR to high-quality image reconstruction from real-world Bayer pattern data collected by NASA Mars Curiosity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 152,675
2403.16993 | Comp4D: LLM-Guided Compositional 4D Scene Generation | Recent advancements in diffusion models for 2D and 3D content creation have sparked a surge of interest in generating 4D content. However, the scarcity of 3D scene datasets constrains current methodologies to primarily object-centric generation. To overcome this limitation, we present Comp4D, a novel framework for Compositional 4D Generation. Unlike conventional methods that generate a singular 4D representation of the entire scene, Comp4D innovatively constructs each 4D object within the scene separately. Utilizing Large Language Models (LLMs), the framework begins by decomposing an input text prompt into distinct entities and maps out their trajectories. It then constructs the compositional 4D scene by accurately positioning these objects along their designated paths. To refine the scene, our method employs a compositional score distillation technique guided by the pre-defined trajectories, utilizing pre-trained diffusion models across text-to-image, text-to-video, and text-to-3D domains. Extensive experiments demonstrate our outstanding 4D content creation capability compared to prior arts, showcasing superior visual quality, motion fidelity, and enhanced object interactions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 441,273 |
2311.09989 | Xputer: Bridging Data Gaps with NMF, XGBoost, and a Streamlined GUI Experience | The rapid proliferation of data across diverse fields has accentuated the importance of accurate imputation for missing values. This task is crucial for ensuring data integrity and deriving meaningful insights. In response to this challenge, we present Xputer, a novel imputation tool that adeptly integrates Non-negative Matrix Factorization (NMF) with the predictive strengths of XGBoost. One of Xputer's standout features is its versatility: it supports zero imputation, enables hyperparameter optimization through Optuna, and allows users to define the number of iterations. For enhanced user experience and accessibility, we have equipped Xputer with an intuitive Graphical User Interface (GUI) ensuring ease of handling, even for those less familiar with computational tools. In performance benchmarks, Xputer not only rivals the computational speed of established tools such as IterativeImputer but also often outperforms them in terms of imputation accuracy. Furthermore, Xputer autonomously handles a diverse spectrum of data types, including categorical, continuous, and Boolean, eliminating the need for prior preprocessing. Given its blend of performance, flexibility, and user-friendly design, Xputer emerges as a state-of-the-art solution in the realm of data imputation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 408,377
2310.04590 | Deep Model Predictive Optimization | A major challenge in robotics is to design robust policies which enable complex and agile behaviors in the real world. On one end of the spectrum, we have model-free reinforcement learning (MFRL), which is incredibly flexible and general but often results in brittle policies. In contrast, model predictive control (MPC) continually re-plans at each time step to remain robust to perturbations and model inaccuracies. However, despite its real-world successes, MPC often under-performs the optimal strategy. This is due to model quality, myopic behavior from short planning horizons, and approximations due to computational constraints. And even with a perfect model and enough compute, MPC can get stuck in bad local optima, depending heavily on the quality of the optimization algorithm. To this end, we propose Deep Model Predictive Optimization (DMPO), which learns the inner-loop of an MPC optimization algorithm directly via experience, specifically tailored to the needs of the control problem. We evaluate DMPO on a real quadrotor agile trajectory tracking task, on which it improves performance over a baseline MPC algorithm for a given computational budget. It can outperform the best MPC algorithm by up to 27% with fewer samples and an end-to-end policy trained with MFRL by 19%. Moreover, because DMPO requires fewer samples, it can also achieve these benefits with 4.3X less memory. When we subject the quadrotor to turbulent wind fields with an attached drag plate, DMPO can adapt zero-shot while still outperforming all baselines. Additional results can be found at https://tinyurl.com/mr2ywmnw. | false | false | false | false | true | false | true | true | false | false | true | false | false | false | false | false | false | false | 397,724 |
1712.04762 | Social Media Writing Style Fingerprint | We present our approach for computer-aided social media text authorship attribution based on recent advances in short text authorship verification. We use various natural language techniques to create word-level and character-level models that act as hidden layers to simulate a simple neural network. The choice of word-level and character-level models in each layer was informed through validation performance. The output layer of our system uses an unweighted majority vote vector to arrive at a conclusion. We also considered writing bias in social media posts while collecting our training dataset to increase system robustness. Our system achieved a precision, recall, and F-measure of 0.82, 0.926 and 0.869 respectively. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 86,654 |
1208.4475 | Information-Theoretic Measures of Influence Based on Content Dynamics | The fundamental building block of social influence is for one person to elicit a response in another. Researchers measuring a "response" in social media typically depend either on detailed models of human behavior or on platform-specific cues such as re-tweets, hash tags, URLs, or mentions. Most content on social networks is difficult to model because the modes and motivation of human expression are diverse and incompletely understood. We introduce content transfer, an information-theoretic measure with a predictive interpretation that directly quantifies the strength of the effect of one user's content on another's in a model-free way. Estimating this measure is made possible by combining recent advances in non-parametric entropy estimation with increasingly sophisticated tools for content representation. We demonstrate on Twitter data collected for thousands of users that content transfer is able to capture non-trivial, predictive relationships even for pairs of users not linked in the follower or mention graph. We suggest that this measure makes large quantities of previously under-utilized social media content accessible to rigorous statistical causal analysis. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 18,215 |
1908.08813 | Efficient Capon-Based Approach Exploiting Temporal Windowing For Electric Network Frequency Estimation | Electric Network Frequency (ENF) fluctuations constitute a powerful tool in multimedia forensics. An efficient approach for ENF estimation is introduced with temporal windowing based on the filter-bank Capon spectral estimator. A type of Gohberg-Semencul factorization of the model covariance matrix is used due to the Toeplitz structure of the covariance matrix. Moreover, this approach uses, for the first time in the field of ENF, a temporal window, not necessarily the rectangular one, at the stage preceding spectral estimation. Krylov matrices are employed for fast implementation of matrix inversions. The proposed approach outperforms the state-of-the-art methods in ENF estimation, when a short time window of $1$ second is employed in power recordings. In speech recordings, the proposed approach yields highly accurate results with respect to both time complexity and accuracy. Moreover, the impact of different temporal windows is studied. The results show that even the most trivial methods for ENF estimation, such as the Short-Time Fourier Transform, can provide better results than the most recent state-of-the-art methods, when a temporal window is employed. The correlation coefficient is used to measure the ENF estimation accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 142,657
1901.01028 | Iris Recognition with Image Segmentation Employing Retrained Off-the-Shelf Deep Neural Networks | This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground-truth. Interestingly, the Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman's based segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 117,906
1805.08946 | Building Extraction at Scale using Convolutional Neural Network: Mapping of the United States | Establishing up-to-date large scale building maps is essential to understand urban dynamics, such as estimating population, urban planning and many other applications. Although many computer vision tasks have been successfully carried out with deep convolutional neural networks, there is a growing need to understand their large scale impact on building mapping with remote sensing imagery. Taking advantage of the scalability of CNNs and using only a few areas with the abundance of building footprints, for the first time we conduct a comparative analysis of four state-of-the-art CNNs for extracting building footprints across the entire continental United States. The four CNN architectures namely: branch-out CNN, fully convolutional neural network (FCN), conditional random field as recurrent neural network (CRFasRNN), and SegNet, support semantic pixel-wise labeling and focus on capturing textural information at multi-scale. We use 1-meter resolution aerial images from National Agriculture Imagery Program (NAIP) as the test-bed, and compare the extraction results across the four methods. In addition, we propose to combine signed-distance labels with SegNet, the preferred CNN architecture identified by our extensive evaluations, to advance building extraction results to instance level. We further demonstrate the usefulness of fusing additional near IR information into the building extraction framework. Large scale experimental evaluations are conducted and reported using metrics that include: precision, recall rate, intersection over union, and the number of buildings extracted. With the improved CNN model and no requirement of further post-processing, we have generated building maps for the United States. The quality of extracted buildings and processing time demonstrated that the proposed CNN-based framework fits the need of building extraction at scale. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 98,295
2110.10745 | Iterated Block Particle Filter for High-dimensional Parameter Learning: Beating the Curse of Dimensionality | Parameter learning for high-dimensional, partially observed, and nonlinear stochastic processes is a methodological challenge. Spatiotemporal disease transmission systems provide examples of such processes giving rise to open inference problems. We propose the iterated block particle filter (IBPF) algorithm for learning high-dimensional parameters over graphical state space models with general state spaces, measures, transition densities and graph structure. Theoretical performance guarantees are obtained on beating the curse of dimensionality (COD), algorithm convergence, and likelihood maximization. Experiments on a highly nonlinear and non-Gaussian spatiotemporal model for measles transmission reveal that the iterated ensemble Kalman filter algorithm (Li et al. (2020)) is ineffective and the iterated filtering algorithm (Ionides et al. (2015)) suffers from the COD, while our IBPF algorithm beats COD consistently across various experiments with different metrics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 262,242
1801.07884 | Joint Pilot and Payload Power Control for Uplink MIMO-NOMA with MRC-SIC Receivers | This letter proposes a joint pilot and payload power allocation (JPA) scheme to mitigate the error propagation problem for uplink multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA) systems. A base station equipped with a maximum ratio combining and successive interference cancellation (MRC-SIC) receiver is adopted for multiuser detection. The average signal-to-interference-plus-noise ratio (ASINR) of each user during the MRC-SIC decoding is analyzed by taking into account the error propagation due to the channel estimation error. Furthermore, the JPA design is formulated as a nonconvex optimization problem to maximize the minimum weighted ASINR and is solved optimally with geometric programming. Simulation results confirm the developed performance analysis and show that our proposed scheme can effectively alleviate the error propagation of MRC-SIC and enhance the detection performance, especially for users with moderate energy budgets. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 88,871
2212.13674 | Regular complete permutation polynomials over quadratic extension fields | Let $r\geq 3$ be any positive integer which is relatively prime to $p$ and $q^2\equiv 1 \pmod r$. Let $\tau_1, \tau_2$ be any permutation polynomials over $\mathbb{F}_{q^2},$ $\sigma_M$ is an invertible linear map over $\mathbb{F}_{q^2}$ and $\sigma=\tau_1\circ\sigma_M\circ\tau_2$. In this paper, we prove that, for suitable $\tau_1, \tau_2$ and $\sigma_M$, the map $\sigma$ could be $r$-regular complete permutation polynomials over quadratic extension fields. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 338,368 |
2312.09730 | Overcome the Fear Of Missing Out: Active Sensing UAV Scanning for Precision Agriculture | This paper deals with the problem of informative path planning for a UAV deployed for precision agriculture applications. First, we observe that the ``fear of missing out'' on data leads to uniform, conservative scanning policies over the whole agricultural field. Consequently, employing a non-uniform scanning approach can mitigate the expenditure of time in areas with minimal or negligible real value, while ensuring heightened precision in information-dense regions. Turning to the available informative path planning methodologies, we discern that certain methods entail intensive computational requirements, while others necessitate training on an ideal world simulator. To address the aforementioned issues, we propose an active sensing coverage path planning approach, named OverFOMO, that regulates the speed of the UAV in accordance with both the relative quantity of the identified classes, i.e. crops and weeds, and the confidence level of such detections. To identify these instances, a robust Deep Learning segmentation model is deployed. The computational needs of the proposed algorithm are independent of the size of the agricultural field, rendering its applicability on modern UAVs quite straightforward. The proposed algorithm was evaluated with a simu-realistic pipeline, combining data from real UAV missions and the high-fidelity dynamics of AirSim simulator, showcasing its performance improvements over the established state of affairs for this type of missions. An open-source implementation of the algorithm and the evaluation pipeline is also available: \url{https://github.com/emmarapt/OverFOMO}. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 415,857
1610.03809 | A Continuous Model of Cortical Connectivity | We present a continuous model for structural brain connectivity based on the Poisson point process. The model treats each streamline curve in a tractography as an observed event in connectome space, here a product space of cortical white matter boundaries. We approximate the model parameter via kernel density estimation. To deal with the heavy computational burden, we develop a fast parameter estimation method by pre-computing associated Legendre products of the data, leveraging properties of the spherical heat kernel. We show how our approach can be used to assess the quality of cortical parcellations with respect to connectivity. We further present empirical results that suggest the discrete connectomes derived from our model have substantially higher test-retest reliability compared to standard methods. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 62,304
1909.06983 | A Self-Attentional Neural Architecture for Code Completion with Multi-Task Learning | Code completion, one of the most useful features in the Integrated Development Environments (IDEs), can accelerate software development by suggesting the libraries, APIs, and method names in real-time. Recent studies have shown that statistical language models can improve the performance of code completion tools through learning from large-scale software repositories. However, these models suffer from three major drawbacks: a) The hierarchical structural information of the programs is not fully utilized in the program's representation; b) In programs, the semantic relationships can be very long. Existing recurrent neural networks based language models are not sufficient to model the long-term dependency. c) Existing approaches perform a specific task in one model, which leads to the underuse of the information from related tasks. To address these challenges, in this paper, we propose a self-attentional neural architecture for code completion with multi-task learning. To utilize the hierarchical structural information of the programs, we present a novel method that considers the path from the predicting node to the root node. To capture the long-term dependency in the input programs, we adopt a self-attentional architecture based network as the base language model. To enable the knowledge sharing between related tasks, we creatively propose a Multi-Task Learning (MTL) framework to learn two related tasks in code completion jointly. Experiments on three real-world datasets demonstrate the effectiveness of our model when compared with state-of-the-art methods. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 145,556
2310.02201 | Learnable Data Augmentation for One-Shot Unsupervised Domain Adaptation | This paper presents a classification framework based on learnable data augmentation to tackle the One-Shot Unsupervised Domain Adaptation (OS-UDA) problem. OS-UDA is the most challenging setting in Domain Adaptation, as only one single unlabeled target sample is assumed to be available for model adaptation. Driven by such single sample, our method LearnAug-UDA learns how to augment source data, making it perceptually similar to the target. As a result, a classifier trained on such augmented data will generalize well for the target domain. To achieve this, we designed an encoder-decoder architecture that exploits a perceptual loss and style transfer strategies to augment the source data. Our method achieves state-of-the-art performance on two well-known Domain Adaptation benchmarks, DomainNet and VisDA. The project code is available at https://github.com/IIT-PAVIS/LearnAug-UDA | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 396,740 |
2310.17153 | Hierarchical Semi-Implicit Variational Inference with Application to
Diffusion Model Acceleration | Semi-implicit variational inference (SIVI) has been introduced to expand the analytical variational families by defining expressive semi-implicit distributions in a hierarchical manner. However, the single-layer architecture commonly used in current SIVI methods can be insufficient when the target posterior has complicated structures. In this paper, we propose hierarchical semi-implicit variational inference, called HSIVI, which generalizes SIVI to allow more expressive multi-layer construction of semi-implicit distributions. By introducing auxiliary distributions that interpolate between a simple base distribution and the target distribution, the conditional layers can be trained by progressively matching these auxiliary distributions one layer after another. Moreover, given pre-trained score networks, HSIVI can be used to accelerate the sampling process of diffusion models with the score matching objective. We show that HSIVI significantly enhances the expressiveness of SIVI on several Bayesian inference problems with complicated target distributions. When used for diffusion model acceleration, we show that HSIVI can produce high quality samples comparable to or better than the existing fast diffusion model based samplers with a small number of function evaluations on various datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 403,014 |
2112.02922 | Anomaly Detection in IR Images of PV Modules using Supervised
Contrastive Learning | Increasing deployment of photovoltaic (PV) plants requires methods for automatic detection of faulty PV modules in modalities, such as infrared (IR) images. Recently, deep learning has become popular for this. However, related works typically sample train and test data from the same distribution, ignoring the presence of domain shift between data of different PV plants. Instead, we frame fault detection as a more realistic unsupervised domain adaptation problem where we train on labelled data of one source PV plant and make predictions on another target plant. We train a ResNet-34 convolutional neural network with a supervised contrastive loss, on top of which we employ a k-nearest neighbor classifier to detect anomalies. Our method achieves a satisfactory area under the receiver operating characteristic (AUROC) of 73.3 % to 96.6 % on nine combinations of four source and target datasets with 2.92 million IR images of which 8.5 % are anomalous. It even outperforms a binary cross-entropy classifier in some cases. With a fixed decision threshold this results in 79.4 % and 77.1 % correctly classified normal and anomalous images, respectively. Most misclassified anomalies are of low severity, such as hot diodes and small hot spots. Our method is insensitive to hyperparameter settings, converges quickly and reliably detects unknown types of anomalies, making it well suited for practice. Possible uses are in automatic PV plant inspection systems or to streamline manual labelling of IR datasets by filtering out normal images. Furthermore, our work serves the community with a more realistic view on PV module fault detection, using unsupervised domain adaptation to develop more performant methods with favorable generalization capabilities. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 270,017 |
2311.18791 | Minimizing Age of Information with Generate at Will Status Updates and
Age-Agnostic Cyclic Scheduling | We study the scheduling problem for a multi-source single-server generate-at-will (GAW) status update system with sources having heterogeneous service times and weights, with the goal of minimizing the weighted sum age of information (AoI). In particular, we study \emph{age-agnostic} schedulers which rely only on the first two moments of the source service times and are relatively easier to implement than their age-aware counterparts, which make use of the actual realizations of the service times. Specifically, we focus on age-agnostic cyclic schedulers with $O(1)$ runtime complexity where status updates from multiple sources are scheduled according to a fixed finite transmission pattern. We first develop an analytical method to obtain the exact average AoI of each source when a transmission pattern is given. Then, we derive the optimum transmission pattern in closed form for the specific case of two sources. For a general number of sources, we propose a novel algorithm, called IS (Insertion Search), for constructing transmission patterns, and we show that IS is capable of producing the optimum pattern for two-source systems, and it outperforms other existing age-agnostic schemes for the case of more than two sources. Numerical examples are presented to showcase the effectiveness of the proposed approach. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 411,818 |
1904.01500 | Neural Vector Conceptualization for Word Vector Space Interpretation | Distributed word vector spaces are considered hard to interpret which hinders the understanding of natural language processing (NLP) models. In this work, we introduce a new method to interpret arbitrary samples from a word vector space. To this end, we train a neural model to conceptualize word vectors, which means that it activates higher order concepts it recognizes in a given vector. Contrary to prior approaches, our model operates in the original vector space and is capable of learning non-linear relations between word vectors and concepts. Furthermore, we show that it produces considerably less entropic concept activation profiles than the popular cosine similarity. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 126,154 |
2209.14965 | DirectTracker: 3D Multi-Object Tracking Using Direct Image Alignment and
Photometric Bundle Adjustment | Direct methods have shown excellent performance in the applications of visual odometry and SLAM. In this work we propose to leverage their effectiveness for the task of 3D multi-object tracking. To this end, we propose DirectTracker, a framework that effectively combines direct image alignment for the short-term tracking and sliding-window photometric bundle adjustment for 3D object detection. Object proposals are estimated based on the sparse sliding-window pointcloud and further refined using an optimization-based cost function that carefully combines 3D and 2D cues to ensure consistency in image and world space. We propose to evaluate 3D tracking using the recently introduced higher-order tracking accuracy (HOTA) metric and the generalized intersection over union similarity measure to mitigate the limitations of the conventional use of intersection over union for the evaluation of vision-based trackers. We perform evaluation on the KITTI Tracking benchmark for the Car class and show competitive performance in tracking objects both in 2D and 3D. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 320,411 |
2209.12491 | Information-Theoretic Hashing for Zero-Shot Cross-Modal Retrieval | Zero-shot cross-modal retrieval (ZS-CMR) deals with the retrieval problem among heterogeneous data from unseen classes. Typically, to guarantee generalization, the pre-defined class embeddings from natural language processing (NLP) models are used to build a common space. In this paper, instead of using an extra NLP model to define a common space beforehand, we consider a totally different way to construct (or learn) a common Hamming space from an information-theoretic perspective. We term our model the Information-Theoretic Hashing (ITH), which is composed of two cascading modules: an Adaptive Information Aggregation (AIA) module; and a Semantic Preserving Encoding (SPE) module. Specifically, our AIA module takes inspiration from the Principle of Relevant Information (PRI) to construct a common space that adaptively aggregates the intrinsic semantics of different modalities of data and filters out redundant or irrelevant information. On the other hand, our SPE module further generates the hashing codes of different modalities by preserving the similarity of intrinsic semantics with the element-wise Kullback-Leibler (KL) divergence. A total correlation regularization term is also imposed to reduce the redundancy amongst different dimensions of hash codes. Sufficient experiments on three benchmark datasets demonstrate the superiority of the proposed ITH in ZS-CMR. Source code is available in the supplementary material. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 319,557 |
1408.0521 | A Hands-on Education Program on Cyber Physical Systems for High School
Students | Cyber Physical Systems (CPS) are the conjoining of an entity's physical and computational elements. The development of a typical CPS system follows a sequence from conceptual modeling, testing in simulated (virtual) worlds, testing in controlled (possibly laboratory) environments, and finally deployment. Throughout each (repeatable) stage, the behavior of the physical entities, the sensing and situation assessment, and the computation and control options have to be understood and carefully represented through abstraction. The CPS Group at the Ohio State University, as part of an NSF-funded CPS project on "Autonomous Driving in Mixed Environments", has been developing CPS-related educational activities at the K-12, undergraduate and graduate levels. The aim of these educational activities is to train students in the principles and design issues in CPS and to broaden the participation in science and engineering. The project team has a strong commitment to impact STEM education across the entire K-20 community. In this paper, we focus on the K-12 community and present a two-week Summer Program for high school juniors and seniors that introduces them to the principles of CPS design and walks them through several of the design steps. We also provide an online repository that aids CPS researchers in providing a similar educational experience. | false | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | 35,090 |
2404.04317 | DeepLINK-T: deep learning inference for time series data using knockoffs
and LSTM | High-dimensional longitudinal time series data is prevalent across various real-world applications. Many such applications can be modeled as regression problems with high-dimensional time series covariates. Deep learning has been a popular and powerful tool for fitting these regression models. Yet, the development of interpretable and reproducible deep-learning models is challenging and remains underexplored. This study introduces a novel method, Deep Learning Inference using Knockoffs for Time series data (DeepLINK-T), focusing on the selection of significant time series variables in regression while controlling the false discovery rate (FDR) at a predetermined level. DeepLINK-T combines deep learning with knockoff inference to control FDR in feature selection for time series models, accommodating a wide variety of feature distributions. It addresses dependencies across time and features by leveraging a time-varying latent factor structure in time series covariates. Three key ingredients for DeepLINK-T are 1) a Long Short-Term Memory (LSTM) autoencoder for generating time series knockoff variables, 2) an LSTM prediction network using both original and knockoff variables, and 3) the application of the knockoffs framework for variable selection with FDR control. Extensive simulation studies have been conducted to evaluate DeepLINK-T's performance, showing its capability to control FDR effectively while demonstrating superior feature selection power for high-dimensional longitudinal time series data compared to its non-time series counterpart. DeepLINK-T is further applied to three metagenomic data sets, validating its practical utility and effectiveness, and underscoring its potential in real-world applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 444,603 |
2006.03677 | Visual Transformers: Token-based Image Representation and Processing for
Computer Vision | Computer vision has achieved remarkable success by (a) representing images as uniformly-arranged pixel arrays and (b) convolving highly-localized features. However, convolutions treat all image pixels equally regardless of importance; explicitly model all concepts across all images, regardless of content; and struggle to relate spatially-distant concepts. In this work, we challenge this paradigm by (a) representing images as semantic visual tokens and (b) running transformers to densely model token relationships. Critically, our Visual Transformer operates in a semantic token space, judiciously attending to different image parts based on context. This is in sharp contrast to pixel-space transformers that require orders-of-magnitude more compute. Using an advanced training recipe, our VTs significantly outperform their convolutional counterparts, raising ResNet accuracy on ImageNet top-1 by 4.6 to 7 points while using fewer FLOPs and parameters. For semantic segmentation on LIP and COCO-stuff, VT-based feature pyramid networks (FPN) achieve 0.35 points higher mIoU while reducing the FPN module's FLOPs by 6.5x. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 180,392 |
2012.03540 | Efficient and Scalable Structure Learning for Bayesian Networks:
Algorithms and Applications | Structure learning for Bayesian networks (BN) is an important problem with extensive research. It plays central roles in a wide variety of applications in Alibaba Group. However, existing structure learning algorithms suffer from considerable limitations in real-world applications due to their low efficiency and poor scalability. To resolve this, we propose a new structure learning algorithm LEAST, which comprehensively fulfills our business requirements as it attains high accuracy, efficiency and scalability at the same time. The core idea of LEAST is to formulate the structure learning into a continuous constrained optimization problem, with a novel differentiable constraint function measuring the acyclicity of the resulting graph. Unlike existing work, our constraint function is built on the spectral radius of the graph and can be evaluated in near linear time w.r.t. the graph node size. Based on it, LEAST can be efficiently implemented with low storage overhead. According to our benchmark evaluation, LEAST runs 1 to 2 orders of magnitude faster than state-of-the-art methods with comparable accuracy, and it is able to scale to BNs with up to hundreds of thousands of variables. In our production environment, LEAST is deployed and serves more than 20 applications with thousands of executions per day. We describe a concrete scenario in a ticket booking service in Alibaba, where LEAST is applied to build a near real-time automatic anomaly detection and root error cause analysis system. We also show that LEAST unlocks the possibility of applying BN structure learning in new areas, such as large-scale gene expression data analysis and explainable recommendation systems. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 210,157 |
1711.07684 | A two-dimensional decomposition approach for matrix completion through
gossip | Factoring a matrix into two low-rank matrices is at the heart of many problems. The problem of matrix completion especially uses it to decompose a sparse matrix into two non-sparse, low-rank matrices, which can then be used to predict unknown entries of the original matrix. We present a scalable and decentralized approach in which, instead of learning two factors for the original input matrix, we decompose the original matrix into a grid of blocks, each of whose factors can be individually learned just by communicating (gossiping) with neighboring blocks. This eliminates any need for a central server. We show that our algorithm performs well on both synthetic and real datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 85,052 |
2412.15618 | 3D Shape Tokenization | We introduce Shape Tokens, a 3D representation that is continuous, compact, and easy to incorporate into machine learning models. Shape Tokens act as conditioning vectors that represent shape information in a 3D flow-matching model. The flow-matching model is trained to approximate probability density functions corresponding to delta functions concentrated on the surfaces of shapes in 3D. By attaching Shape Tokens to various machine learning models, we can generate new shapes, convert images to 3D, align 3D shapes with text and images, and render shapes directly at variable, user-specified, resolution. Moreover, Shape Tokens enable a systematic analysis of geometric properties such as normal, density, and deformation field. Across all tasks and experiments, utilizing Shape Tokens demonstrates strong performance compared to existing baselines. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 519,208 |
2210.07128 | Language Models of Code are Few-Shot Commonsense Learners | We address the general task of structured commonsense reasoning: given a natural language input, the goal is to generate a graph such as an event -- or a reasoning-graph. To employ large language models (LMs) for this task, existing approaches ``serialize'' the output graph as a flat list of nodes and edges. Although feasible, these serialized graphs strongly deviate from the natural language corpora that LMs were pre-trained on, hindering LMs from generating them correctly. In this paper, we show that when we instead frame structured commonsense reasoning tasks as code generation tasks, pre-trained LMs of code are better structured commonsense reasoners than LMs of natural language, even when the downstream task does not involve source code at all. We demonstrate our approach across three diverse structured commonsense reasoning tasks. In all these natural language tasks, we show that using our approach, a code generation LM (CODEX) outperforms natural-LMs that are fine-tuned on the target task (e.g., T5) and other strong LMs such as GPT-3 in the few-shot setting. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 323,584 |
2401.00691 | Stochastic Gradient Descent for Nonparametric Regression | This paper introduces an iterative algorithm for training nonparametric additive models that enjoys favorable memory storage and computational requirements. The algorithm can be viewed as the functional counterpart of stochastic gradient descent, applied to the coefficients of a truncated basis expansion of the component functions. We show that the resulting estimator satisfies an oracle inequality that allows for model mis-specification. In the well-specified setting, by choosing the learning rate carefully across three distinct stages of training, we demonstrate that its risk is minimax optimal in terms of the dependence on the dimensionality of the data and the size of the training sample. We also provide polynomial convergence rates even when the covariates do not have full support on their domain. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 419,075 |
2406.17338 | Robustly Optimized Deep Feature Decoupling Network for Fatty Liver
Diseases Detection | Current medical image classification efforts mainly aim for higher average performance, often neglecting the balance between different classes. This can lead to significant differences in recognition accuracy between classes and obvious recognition weaknesses. Without the support of massive data, deep learning faces challenges in the fine-grained classification of fatty liver. In this paper, we propose an innovative deep learning framework that combines feature decoupling and adaptive adversarial training. Firstly, we employ two iteratively compressed decouplers to decouple, in a supervised manner, common features and specific features related to fatty liver in abdominal ultrasound images. Subsequently, the decoupled features are concatenated with the original image after transforming the color space and are fed into the classifier. During adversarial training, we adaptively adjust the perturbation and balance the adversarial strength by the accuracy of each class. The model will eliminate recognition weaknesses by correctly classifying adversarial samples, thus improving recognition robustness. Finally, the accuracy of our method improved by 4.16%, achieving 82.95%. As demonstrated by extensive experiments, our method is a generalized learning framework that can be directly used to eliminate the recognition weaknesses of any classifier while improving its average performance. Code is available at https://github.com/HP-ML/MICCAI2024. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 467,533 |
cs/0510043 | On Minimal Pseudo-Codewords of Tanner Graphs from Projective Planes | We would like to better understand the fundamental cone of Tanner graphs derived from finite projective planes. Towards this goal, we discuss bounds on the AWGNC and BSC pseudo-weight of minimal pseudo-codewords of such Tanner graphs, on one hand, and study the structure of minimal pseudo-codewords, on the other. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 539,018 |
1503.02108 | Maximum a Posteriori Adaptation of Network Parameters in Deep Models | We present a Bayesian approach to adapting parameters of a well-trained context-dependent, deep-neural-network, hidden Markov model (CD-DNN-HMM) to improve automatic speech recognition performance. Given an abundance of DNN parameters but with only a limited amount of data, the effectiveness of the adapted DNN model can often be compromised. We formulate maximum a posteriori (MAP) adaptation of parameters of a specially designed CD-DNN-HMM with augmented linear hidden networks connected to the output tied states, or senones, and compare it to the previously proposed feature-space MAP linear regression. Experimental evidence on the 20,000-word open-vocabulary Wall Street Journal task demonstrates the feasibility of the proposed framework. In supervised adaptation, the proposed MAP adaptation approach provides more than 10% relative error reduction and consistently outperforms the conventional transformation-based methods. Furthermore, we present an initial attempt to generate hierarchical priors to improve adaptation efficiency and effectiveness with limited adaptation data by exploiting similarities among senones. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | 40,895 |
1906.04367 | Evaluation of Seed Set Selection Approaches and Active Learning
Strategies in Predictive Coding | Active learning is a popular methodology in text classification - known in the legal domain as "predictive coding" or "Technology Assisted Review" or "TAR" - due to its potential to minimize the required review effort to build effective classifiers. In this study, we use extensive experimentation to examine the impact of popular seed set selection strategies in active learning, within a predictive coding exercise, and evaluate different active learning strategies against well-researched continuous active learning strategies for the purpose of determining efficient training methods for classifying large populations quickly and precisely. We study how random sampling, keyword models and clustering-based seed set selection strategies combined together with top-ranked, uncertain, random, recall-inspired, and hybrid active learning document selection strategies affect the performance of active learning for predictive coding. We use the percentage of documents requiring review to reach 75% recall as the "benchmark" metric to evaluate and compare our approaches. In most cases we find that seed set selection methods have a minor impact, though they do show significant impact in lower richness data sets or when choosing a top-ranked active learning selection strategy. Our results also show that active learning selection strategies implementing uncertainty, random, or 75% recall selection strategies have the potential to reach the optimum active learning round much earlier than the popular continuous active learning approach (top-ranked selection). The results of our research shed light on the impact of active learning seed set selection strategies and also the effectiveness of the selection strategies for the following learning rounds. Legal practitioners can use the results of this study to enhance the efficiency, precision, and simplicity of their predictive coding process. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 134,693 |
2306.03570 | Personalization Disentanglement for Federated Learning: An explainable
perspective | Personalized federated learning (PFL) jointly trains a variety of local models through balancing between knowledge sharing across clients and model personalization per client. This paper addresses PFL by explicitly disentangling latent representations into two parts to capture the shared knowledge and client-specific personalization, which leads to more reliable and effective PFL. The disentanglement is achieved by a novel Federated Dual Variational Autoencoder (FedDVA), which employs two encoders to infer the two types of representations. FedDVA can produce a better understanding of the trade-off between global knowledge sharing and local personalization in PFL. Moreover, it can be integrated with existing FL methods and turn them into personalized models for heterogeneous downstream tasks. Extensive experiments validate the advantages brought by disentanglement and show that models trained with disentangled representations substantially outperform vanilla methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,393 |
1906.11927 | Homography from two orientation- and scale-covariant features | This paper proposes a geometric interpretation of the angles and scales which orientation- and scale-covariant feature detectors, e.g. SIFT, provide. Two new general constraints are derived on the scales and rotations which can be used in any geometric model estimation task. Using these formulas, two new constraints on homography estimation are introduced. Exploiting the derived equations, a solver for estimating the homography from the minimal number of two correspondences is proposed. Also, it is shown how the normalization of the point correspondences affects the rotation and scale parameters, thus achieving numerically stable results. Due to requiring merely two feature pairs, robust estimators, e.g. RANSAC, require significantly fewer iterations than with the four-point algorithm. When using covariant features, e.g. SIFT, the information about the scale and orientation is given at no cost. The proposed homography estimation method is tested in a synthetic environment and on publicly available real-world datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 136,799 |
2205.12445 | Over-the-Air Design of GAN Training for mmWave MIMO Channel Estimation | Future wireless systems are trending towards higher carrier frequencies that offer larger communication bandwidth but necessitate the use of large antenna arrays. Existing signal processing techniques for channel estimation do not scale well to this "high-dimensional" regime in terms of performance and pilot overhead. Meanwhile, training deep learning based approaches for channel estimation requires large labeled datasets mapping pilot measurements to clean channel realizations, which can only be generated offline using simulated channels. In this paper, we develop a novel unsupervised over-the-air (OTA) algorithm that utilizes noisy received pilot measurements to train a deep generative model to output beamspace MIMO channel realizations. Our approach leverages Generative Adversarial Networks (GAN), while using a conditional input to distinguish between Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) channel realizations. We also present a federated implementation of the OTA algorithm that distributes the GAN training over multiple users and greatly reduces the user side computation. We then formulate channel estimation from a limited number of pilot measurements as an inverse problem and reconstruct the channel by optimizing the input vector of the trained generative model. Our proposed approach significantly outperforms Orthogonal Matching Pursuit on both LOS and NLOS channel models, and EM-GM-AMP -- an Approximate Message Passing algorithm -- on LOS channel models, while achieving comparable performance on NLOS channel models in terms of the normalized channel reconstruction error. More importantly, our proposed framework has the potential to be trained online using real noisy pilot measurements, is not restricted to a specific channel model and can even be utilized for a federated OTA design of a dataset generator from noisy data. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 298,537 |
2104.00205 | Fusing RGBD Tracking and Segmentation Tree Sampling for Multi-Hypothesis
Volumetric Segmentation | Despite rapid progress in scene segmentation in recent years, 3D segmentation methods are still limited when there is severe occlusion. The key challenge is estimating the segment boundaries of (partially) occluded objects, which are inherently ambiguous when considering only a single frame. In this work, we propose Multihypothesis Segmentation Tracking (MST), a novel method for volumetric segmentation in changing scenes, which allows scene ambiguity to be tracked and our estimates to be adjusted over time as we interact with the scene. Two main innovations allow us to tackle this difficult problem: 1) A novel way to sample possible segmentations from a segmentation tree; and 2) A novel approach to fusing tracking results with multiple segmentation estimates. These methods allow MST to track the segmentation state over time and incorporate new information, such as new objects being revealed. We evaluate our method on several cluttered tabletop environments in simulation and reality. Our results show that MST outperforms baselines in all tested scenes. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 227,912 |
2408.11809 | Informed, Constrained, Aligned: A Field Analysis on Degeneracy-aware Point Cloud Registration in the Wild | The ICP registration algorithm has been a preferred method for LiDAR-based robot localization for nearly a decade. However, even in modern SLAM solutions, ICP can degrade and become unreliable in geometrically ill-conditioned environments. Current solutions primarily focus on utilizing additional sources of information, such as external odometry, to either replace the degenerate directions of the optimization solution or add additional constraints in a sensor-fusion setup afterward. In response, this work investigates and compares new and existing degeneracy mitigation methods for robust LiDAR-based localization and analyzes the efficacy of these approaches in degenerate environments for the first time in the literature at this scale. Specifically, this work investigates i) the effect of using active or passive degeneracy mitigation methods for the problem of ill-conditioned ICP in LiDAR-degenerate environments, and ii) the first evaluation of TSVD, inequality constraints, and linear/non-linear Tikhonov regularization for degenerate point cloud registration. Furthermore, a sensitivity analysis of the least-squares minimization step of the ICP problem is carried out to better understand how each method affects the optimization and what to expect from each method. The results of the analysis are validated through multiple real-world robotic field and simulated experiments. The analysis demonstrates that active optimization degeneracy mitigation is necessary and advantageous in the absence of reliable external estimate assistance for LiDAR-SLAM, and soft-constrained methods can provide better results in complex ill-conditioned scenarios with heuristically fine-tuned parameters. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 482,440 |
2112.03215 | Multi-scale Feature Learning Dynamics: Insights for Double Descent | A key challenge in building theoretical foundations for deep learning is the complex optimization dynamics of neural networks, resulting from the high-dimensional interactions between the large number of network parameters. Such non-trivial dynamics lead to intriguing behaviors such as the phenomenon of "double descent" of the generalization error. The more commonly studied aspect of this phenomenon corresponds to model-wise double descent where the test error exhibits a second descent with increasing model complexity, beyond the classical U-shaped error curve. In this work, we investigate the origins of the less studied epoch-wise double descent in which the test error undergoes two non-monotonous transitions, or descents as the training time increases. By leveraging tools from statistical physics, we study a linear teacher-student setup exhibiting epoch-wise double descent similar to that in deep neural networks. In this setting, we derive closed-form analytical expressions for the evolution of generalization error over training. We find that double descent can be attributed to distinct features being learned at different scales: as fast-learning features overfit, slower-learning features start to fit, resulting in a second descent in test error. We validate our findings through numerical experiments where our theory accurately predicts empirical findings and remains consistent with observations in deep neural networks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 270,121 |
2003.00732 | Fusing Physics-based and Deep Learning Models for Prognostics | Physics-based and data-driven models for remaining useful lifetime (RUL) prediction typically suffer from two major challenges that limit their applicability to complex real-world domains: (1) incompleteness of physics-based models and (2) limited representativeness of the training dataset for data-driven models. Combining the advantages of these two directions while overcoming some of their limitations, we propose a novel hybrid framework for fusing the information from physics-based performance models with deep learning algorithms for prognostics of complex safety-critical systems under real-world scenarios. In the proposed framework, we use physics-based performance models to infer unobservable model parameters related to a system's components health solving a calibration problem. These parameters are subsequently combined with sensor readings and used as input to a deep neural network to generate a data-driven prognostics model with physics-augmented features. The performance of the hybrid framework is evaluated on an extensive case study comprising run-to-failure degradation trajectories from a fleet of nine turbofan engines under real flight conditions. The experimental results show that the hybrid framework outperforms purely data-driven approaches by extending the prediction horizon by nearly 127%. Furthermore, it requires less training data and is less sensitive to the limited representativeness of the dataset compared to purely data-driven approaches. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 166,395 |
2306.04476 | Energy-based Assessment and Driving Behavior of ACC Systems and Humans Inside Platoons | Evidence in the literature shows that automated and human driving modes demonstrate different driving characteristics, i.e., headway policy, spacing policy, reaction time, comfortable acceleration, and others. These differences alter observed traffic dynamics and have an impact on energy consumption. This paper assesses the energy footprint of commercially implemented adaptive cruise control (ACC) systems and human drivers in car-following formation via different models using empirical observations on very similar driving cycles and/or routes. Most importantly, it initiates a critical discussion of the findings under the behavioral properties of each mode. Findings show that: ACC systems propagate an increasing energy consumption upstream, while human drivers do not; they succeed in maintaining a constant time-headway policy, operating very reliably; they develop a strong bond with their leader compared to their human counterparts; the two modes (humans and ACCs) are operating in different phase-space areas with room for improvement. Overall, findings show that ACC systems must be optimized to achieve a trade-off between functional requirements and eco-driving instructions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 371,757 |
2411.05894 | SSSD: Simply-Scalable Speculative Decoding | Over the past year, Speculative Decoding has gained popularity as a technique for accelerating Large Language Model inference. While several methods have been introduced, most struggle to deliver satisfactory performance at batch sizes typical for data centers ($\geq 8$) and often involve significant deployment complexities. In this work, we offer a theoretical explanation of how Speculative Decoding can be effectively utilized with larger batch sizes. We also introduce a method that integrates seamlessly into existing systems without additional training or the complexity of deploying a small LLM. In a continuous batching setting, we achieve a 4x increase in throughput without any latency impact for short context generation, and a 1.7-2x improvement in both latency and throughput for longer contexts. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 506,879 |
2008.01674 | A Machine Learning Approach for Modelling Parking Duration in Urban Land-use | Parking is an inevitable issue in fast-growing developing countries. An increasing number of vehicles requires more and more urban land to be allocated for parking. However, little attention has been paid to parking issues in developing countries like India. This study proposes a model for analysing the influence of car users' socioeconomic and travel characteristics on parking duration. Specifically, artificial neural networks (ANNs) are deployed to capture the interrelationship between driver characteristics and parking duration. ANNs are highly efficient in learning and recognizing connections between parameters for the best prediction of an outcome. Since the utility of ANNs has been critically limited by their black-box nature, the study employs the Garson algorithm and Local Interpretable Model-agnostic Explanations (LIME) for model interpretation. LIME explains any individual prediction by approximating it locally with an interpretable model. This study is based on microdata collected on-site through interview surveys considering two land-uses: office-business and market/shopping. Results revealed a higher prediction accuracy through LIME, and the methodology can therefore be adopted widely. Further, policy implications are discussed based on the results for both land-uses. This study could lead to enhanced parking policy and management to achieve sustainability goals. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 190,406 |
2309.08172 | LASER: LLM Agent with State-Space Exploration for Web Navigation | Large language models (LLMs) have been successfully adapted for interactive decision-making tasks like web navigation. While achieving decent performance, previous methods implicitly assume a forward-only execution mode for the model, where they only provide oracle trajectories as in-context examples to guide the model on how to reason in the environment. Consequently, the model could not handle more challenging scenarios not covered in the in-context examples, e.g., mistakes, leading to sub-optimal performance. To address this issue, we propose to model the interactive task as state space exploration, where the LLM agent transitions among a pre-defined set of states by performing actions to complete the task. This formulation enables flexible backtracking, allowing the model to recover from errors easily. We evaluate our proposed LLM Agent with State-Space ExploRation (LASER) on both the WebShop task and amazon.com. Experimental results show that LASER significantly outperforms previous methods and closes the gap with human performance on the web navigation task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 392,064 |
1310.8390 | Harnack's inequality and Green functions on locally finite graphs | In this paper we study gradient estimates for positive solutions of Schrödinger equations on locally finite graphs. We then derive Harnack's inequality for positive solutions of these Schrödinger equations. We also establish some results about Green functions of the Laplace equation on locally finite graphs. Interesting properties of the Schrödinger equation are derived. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 28,101 |
2404.07121 | Digital Over-the-Air Computation: Achieving High Reliability via Bit-Slicing | 6G mobile networks aim to realize ubiquitous intelligence at the network edge via distributed learning, sensing, and data analytics. Their common operation is to aggregate high-dimensional data, which causes a communication bottleneck that cannot be resolved using traditional orthogonal multi-access schemes. A promising solution, called over-the-air computation (AirComp), exploits channels' waveform superposition property to enable simultaneous access, thereby overcoming the bottleneck. Nevertheless, its reliance on uncoded linear analog modulation exposes data to perturbation by noise and interference. Hence, the traditional analog AirComp falls short of meeting the high-reliability requirement for 6G. Overcoming the limitation of analog AirComp motivates this work, which focuses on developing a framework for digital AirComp. The proposed framework features digital modulation of each data value, integrated with the bit-slicing technique to allocate its bits to multiple symbols, thereby increasing the AirComp reliability. To optimally detect the aggregated digital symbols, we derive the optimal maximum a posteriori detector that is shown to outperform the traditional maximum likelihood detector. Furthermore, a comparative performance analysis of digital AirComp with respect to its analog counterpart with repetition coding is conducted to quantify the practical signal-to-noise ratio (SNR) regime favoring the proposed scheme. On the other hand, digital AirComp is enhanced by further development to feature awareness of heterogeneous bit importance levels and its exploitation in channel adaptation. Lastly, simulation results demonstrate the achievability of substantial reliability improvement of digital AirComp over its analog counterpart given the same channel uses. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 445,721 |
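The rows above follow the column schema in the header: an arXiv id, a title, an abstract, eighteen boolean category labels (cs.HC through Other), and an index. As a minimal sketch of how such multi-label rows can be queried, assuming the table is loaded into pandas (the three-row extract below is hypothetical, copied from the ids and labels shown above):

```python
import pandas as pd

# Hypothetical mini-extract of the table above: arXiv id plus a few of
# the per-category boolean label columns (multi-label classification).
rows = [
    {"id": "2205.12445", "cs.LG": True,  "cs.IT": True,  "cs.RO": False},
    {"id": "2104.00205", "cs.LG": False, "cs.IT": False, "cs.RO": True},
    {"id": "1506.02792", "cs.LG": False, "cs.IT": True,  "cs.RO": False},
]
df = pd.DataFrame(rows)

# Boolean columns can be used directly as a row mask: select all
# papers labeled cs.IT.
it_papers = df[df["cs.IT"]]["id"].tolist()
print(it_papers)  # → ['2205.12445', '1506.02792']
```

In the full dataset the same mask works over any of the eighteen label columns, and masks can be combined (e.g. `df[df["cs.LG"] & df["cs.IT"]]`) to find papers carrying several labels at once.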