| id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2401.13414 | GTAutoAct: An Automatic Datasets Generation Framework Based on Game Engine Redevelopment for Action Recognition | Current datasets for action recognition tasks face limitations stemming from traditional collection and generation methods, including the constrained range of action classes, absence of multi-viewpoint recordings, limited diversity, poor video quality, and labor-intensive manual collection. To address these challenges, we introduce GTAutoAct, an innovative dataset generation framework leveraging game engine technology to facilitate advancements in action recognition. GTAutoAct excels in automatically creating large-scale, well-annotated datasets with extensive action classes and superior video quality. Our framework's distinctive contributions encompass: (1) it innovatively transforms readily available coordinate-based 3D human motion into a rotation-orientated representation with enhanced suitability in multiple viewpoints; (2) it employs dynamic segmentation and interpolation of rotation sequences to create smooth and realistic animations of actions; (3) it offers extensively customizable animation scenes; (4) it implements an autonomous video capture and processing pipeline, featuring a randomly navigating camera, with auto-trimming and labeling functionalities. Experimental results underscore the framework's robustness and highlight its potential to significantly improve action recognition model training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 423,720 |
2107.02821 | New Methods and Datasets for Group Anomaly Detection From Fundamental Physics | The identification of anomalous overdensities in data - group or collective anomaly detection - is a rich problem with a large number of real world applications. However, it has received relatively little attention in the broader ML community, as compared to point anomalies or other types of single instance outliers. One reason for this is the lack of powerful benchmark datasets. In this paper, we first explain how, after the Nobel-prize winning discovery of the Higgs boson, unsupervised group anomaly detection has become a new frontier of fundamental physics (where the motivation is to find new particles and forces). Then we propose a realistic synthetic benchmark dataset (LHCO2020) for the development of group anomaly detection algorithms. Finally, we compare several existing statistically-sound techniques for unsupervised group anomaly detection, and demonstrate their performance on the LHCO2020 dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,948 |
2109.02398 | Click-Through Rate Prediction with Multi-Modal Hypergraphs | Advertising is critical to many online e-commerce platforms such as eBay and Amazon. One of the important signals that these platforms rely upon is the click-through rate (CTR) prediction. The recent popularity of multi-modal sharing platforms such as TikTok has led to an increased interest in online micro-videos. It is, therefore, useful to consider micro-videos to help a merchant target micro-video advertising better and find users' favourites to enhance user experience. Existing works on CTR prediction largely exploit unimodal content to learn item representations. A relatively minimal effort has been made to leverage multi-modal information exchange among users and items. We propose a model to exploit the temporal user-item interactions to guide the representation learning with multi-modal features, and further predict the user click rate of the micro-video item. We design a Hypergraph Click-Through Rate prediction framework (HyperCTR) built upon the hyperedge notion of hypergraph neural networks, which can yield modal-specific representations of users and micro-videos to better capture user preferences. We construct a time-aware user-item bipartite network with multi-modal information and enrich the representation of each user and item with the generated interests-based user hypergraph and item hypergraph. Through extensive experiments on three public datasets, we demonstrate that our proposed model significantly outperforms various state-of-the-art methods. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 253,735 |
1908.04903 | Control of Mobile Robots Using Barrier Functions Under Temporal Logic Specifications | In this paper, we propose a framework for the control of mobile robots subject to temporal logic specifications using barrier functions. Complex task specifications can be conveniently encoded using linear temporal logic. In particular, we consider a fragment of linear temporal logic which encompasses a large class of motion planning specifications for a robotic system. Control barrier functions have recently emerged as a convenient tool to guarantee reachability and safety for a system. In addition, they can be encoded as affine constraints in a quadratic program. In this paper, a fully automatic framework which translates a user defined specification in temporal logic to a sequence of barrier function based quadratic programs is presented. In addition, with the aim of alleviating infeasibility scenarios, we propose methods for composition of barrier functions as well as a prioritization based control method to guarantee feasibility of the controller. We prove that the resulting system trajectory synthesized by the proposed controller satisfies the given specification. Robotic simulation and experimental results are provided in addition to the theoretical framework. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 141,593 |
2204.06858 | Flexible LED Index Modulation for MIMO Optical Wireless Communications | The limited bandwidth of optical wireless communication (OWC) front-end devices motivates the use of multiple-input-multiple-output (MIMO) techniques to enhance data rates. It is known that very high multiplexing gains could be achieved by spatial multiplexing (SMX) in exchange for exhaustive detection complexity. Alternatively, in spatial modulation (SM), a single light emitting diode (LED) is activated per time instance where information is carried by both the signal and the LED index. Since only an LED is active, both transmitter (TX) and receiver (RX) complexity reduces significantly while retaining the information transmission in the spatial domain. However, significant spectral efficiency losses occur in SM compared to SMX. In this paper, we propose a technique which adopts the advantages of both systems. Accordingly, the proposed flexible LED index modulation (FLIM) technique harnesses the inactive state of the LEDs as a transmit symbol. Therefore, the number of active LEDs changes in each transmission, unlike conventional techniques. Moreover, the system complexity is reduced by employing a linear minimum mean squared error (MMSE) equalizer and an angle perturbed receiver at the RX. Numerical results show that FLIM outperforms the reference systems by at least 6 dB in the low and medium/high spectral efficiency regions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 291,469 |
1401.2651 | An Overview of Schema Theory | The purpose of this paper is to give an introduction to the field of Schema Theory written by a mathematician and for mathematicians. In particular, we endeavor to highlight areas of the field which might be of interest to a mathematician, to point out some related open problems, and to suggest some large-scale projects. Schema theory seeks to give a theoretical justification for the efficacy of the field of genetic algorithms, so readers who have studied genetic algorithms stand to gain the most from this paper. However, nothing beyond basic probability theory is assumed of the reader, and for this reason we write in a fairly informal style. Because the mathematics behind the theorems in schema theory is relatively elementary, we focus more on the motivation and philosophy. Many of these results have been proven elsewhere, so this paper is designed to serve a primarily expository role. We attempt to cast known results in a new light, which makes the suggested future directions natural. This involves devoting a substantial amount of time to the history of the field. We hope that this exposition will entice some mathematicians to do research in this area, that it will serve as a road map for researchers new to the field, and that it will help explain how schema theory developed. Furthermore, we hope that the results collected in this document will serve as a useful reference. Finally, as far as the author knows, the questions raised in the final section are new. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 29,768 |
2409.07109 | Advancing On-Device Neural Network Training with TinyPropv2: Dynamic, Sparse, and Efficient Backpropagation | This study introduces TinyPropv2, an innovative algorithm optimized for on-device learning in deep neural networks, specifically designed for low-power microcontroller units. TinyPropv2 refines sparse backpropagation by dynamically adjusting the level of sparsity, including the ability to selectively skip training steps. This feature significantly lowers computational effort without substantially compromising accuracy. Our comprehensive evaluation across diverse datasets (CIFAR 10, CIFAR100, Flower, Food, Speech Command, MNIST, HAR, and DCASE2020) reveals that TinyPropv2 achieves near-parity with full training methods, with an average accuracy drop of only around 1 percent in most cases. For instance, against full training, TinyPropv2's accuracy drop is minimal, for example, only 0.82 percent on CIFAR 10 and 1.07 percent on CIFAR100. In terms of computational effort, TinyPropv2 shows a marked reduction, requiring as little as 10 percent of the computational effort needed for full training in some scenarios, and consistently outperforms other sparse training methodologies. These findings underscore TinyPropv2's capacity to efficiently manage computational resources while maintaining high accuracy, positioning it as an advantageous solution for advanced embedded device applications in the IoT ecosystem. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 487,388 |
1407.6090 | Social and Business Intelligence Analysis Using PSO | The goal of this paper is to elaborate on swarm intelligence for business intelligence decision making and the improvement of business rules management. Swarm optimization, which is highly influenced by the behavior of creatures that perform in groups, is the basis of this approach. Spatial data is defined as data that is represented by 2D or 3D images; SQL Server currently supports only 2D images. Location is an essential part of organizational as well as business data: enterprises maintain customer address lists, own property, ship goods from and to warehouses, manage transport flows among their workforce, and perform many other activities. That is to say, a great deal of spatial data is used and processed by enterprises, organizations and other bodies in order to make things more visible and self-descriptive. From the experiments, we found that PSO can facilitate intelligence in social and business behavior. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 34,837 |
1104.5183 | Direct search methods for an open problem of optimization in systems and control | The motivation of this work is to illustrate the efficiency of some often overlooked alternatives to deal with optimization problems in systems and control. In particular, we will consider a problem for which an iterative linear matrix inequality algorithm (ILMI) has been proposed recently. As it often happens, this algorithm does not have guaranteed global convergence and therefore many methods may perform better. We will put forward how some general purpose optimization solvers are more suited than the ILMI. This is illustrated with the considered problem and example, but the general observations remain valid for many similar situations in the literature. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 10,139 |
1902.00866 | Semi-Supervised Learning Detector for MU-MIMO Systems with One-bit ADCs | We study an uplink multiuser multiple-input multiple-output (MU-MIMO) system with one-bit analog-to-digital converters (ADCs). For such a system, a supervised-learning (SL) detector has been recently proposed by modeling a non-linear end-to-end system function into a parameterized Bernoulli-like model. Despite its attractive performance, the SL detector requires a large amount of labeled data (i.e., pilot signals) to estimate the parameters of the underlying model accurately. This is because the number of parameters grows exponentially with the number of users. To overcome this drawback, we propose a semi-supervised learning (SSL) detector where both pilot signals (i.e., labeled data) and some part of data signals (i.e., unlabeled data) are used to estimate the parameters via the expectation-maximization (EM) algorithm. Via simulation results, we demonstrate that the proposed SSL detector can achieve the performance of the existing SL detector with significantly lower pilot-overhead. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 120,532 |
2106.02193 | Cross-Trajectory Representation Learning for Zero-Shot Generalization in RL | A highly desirable property of a reinforcement learning (RL) agent -- and a major difficulty for deep RL approaches -- is the ability to generalize policies learned on a few tasks over a high-dimensional observation space to similar tasks not seen during training. Many promising approaches to this challenge consider RL as a process of training two functions simultaneously: a complex nonlinear encoder that maps high-dimensional observations to a latent representation space, and a simple linear policy over this space. We posit that a superior encoder for zero-shot generalization in RL can be trained by using solely an auxiliary SSL objective if the training process encourages the encoder to map behaviorally similar observations to similar representations, as reward-based signal can cause overfitting in the encoder (Raileanu et al., 2021). We propose Cross-Trajectory Representation Learning (CTRL), a method that runs within an RL agent and conditions its encoder to recognize behavioral similarity in observations by applying a novel SSL objective to pairs of trajectories from the agent's policies. CTRL can be viewed as having the same effect as inducing a pseudo-bisimulation metric but, crucially, avoids the use of rewards and associated overfitting risks. Our experiments ablate various components of CTRL and demonstrate that in combination with PPO it achieves better generalization performance on the challenging Procgen benchmark suite (Cobbe et al., 2020). | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 238,756 |
2501.06741 | Hierarchical Divide-and-Conquer for Fine-Grained Alignment in LLM-Based Medical Evaluation | In the rapidly evolving landscape of large language models (LLMs) for medical applications, ensuring the reliability and accuracy of these models in clinical settings is paramount. Existing benchmarks often focus on fixed-format tasks like multiple-choice QA, which fail to capture the complexity of real-world clinical diagnostics. Moreover, traditional evaluation metrics and LLM-based evaluators struggle with misalignment, often providing oversimplified assessments that do not adequately reflect human judgment. To address these challenges, we introduce HDCEval, a Hierarchical Divide-and-Conquer Evaluation framework tailored for fine-grained alignment in medical evaluation. HDCEval is built on a set of fine-grained medical evaluation guidelines developed in collaboration with professional doctors, encompassing Patient Question Relevance, Medical Knowledge Correctness, and Expression. The framework decomposes complex evaluation tasks into specialized subtasks, each evaluated by expert models trained through Attribute-Driven Token Optimization (ADTO) on a meticulously curated preference dataset. This hierarchical approach ensures that each aspect of the evaluation is handled with expert precision, leading to a significant improvement in alignment with human evaluators. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 524,108 |
2209.01206 | Transformers in Remote Sensing: A Survey | Deep learning-based algorithms have seen massive popularity in different areas of remote sensing image analysis over the past decade. Recently, transformer-based architectures, originally introduced in natural language processing, have pervaded the computer vision field, where the self-attention mechanism has been utilized as a replacement to the popular convolution operator for capturing long-range dependencies. Inspired by recent advances in computer vision, the remote sensing community has also witnessed an increased exploration of vision transformers for a diverse set of tasks. Although a number of surveys have focused on transformers in computer vision in general, to the best of our knowledge we are the first to present a systematic review of recent advances based on transformers in remote sensing. Our survey covers more than 60 recent transformer-based methods for different remote sensing problems in sub-areas of remote sensing: very high-resolution (VHR), hyperspectral (HSI) and synthetic aperture radar (SAR) imagery. We conclude the survey by discussing different challenges and open issues of transformers in remote sensing. Additionally, we intend to frequently update and maintain the latest transformers in remote sensing papers with their respective code at: https://github.com/VIROBO-15/Transformer-in-Remote-Sensing | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 315,806 |
2211.06714 | Bayesian Learning of Coupled Biogeochemical-Physical Models | Predictive dynamical models for marine ecosystems are used for a variety of needs. Due to sparse measurements and limited understanding of the myriad of ocean processes, there is however significant uncertainty. There is model uncertainty in the parameter values, functional forms with diverse parameterizations, level of complexity needed, and thus in the state fields. We develop a Bayesian model learning methodology that allows interpolation in the space of candidate models and discovery of new models from noisy, sparse, and indirect observations, all while estimating state fields and parameter values, as well as the joint PDFs of all learned quantities. We address the challenges of high-dimensional and multidisciplinary dynamics governed by PDEs by using state augmentation and the computationally efficient GMM-DO filter. Our innovations include stochastic formulation and complexity parameters to unify candidate models into a single general model as well as stochastic expansion parameters within piecewise function approximations to generate dense candidate model spaces. These innovations allow handling many compatible and embedded candidate models, possibly none of which are accurate, and learning elusive unknown functional forms. Our new methodology is generalizable, interpretable, and extrapolates out of the space of models to discover new ones. We perform a series of twin experiments based on flows past a ridge coupled with three-to-five component ecosystem models, including flows with chaotic advection. The probabilities of known, uncertain, and unknown model formulations, and of state fields and parameters, are updated jointly using Bayes' law. Non-Gaussian statistics, ambiguity, and biases are captured. The parameter values and model formulations that best explain the data are identified. When observations are sufficiently informative, model complexity and functions are discovered. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 329,993 |
2405.21016 | MpoxSLDNet: A Novel CNN Model for Detecting Monkeypox Lesions and Performance Comparison with Pre-trained Models | Monkeypox virus (MPXV) is a zoonotic virus that poses a significant threat to public health, particularly in remote parts of Central and West Africa. Early detection of monkeypox lesions is crucial for effective treatment. However, due to its similarity with other skin diseases, monkeypox lesion detection is a challenging task. To detect monkeypox, many researchers have used various deep-learning models such as MobileNetV2, VGG16, ResNet50, InceptionV3, DenseNet121, EfficientNetB3, and Xception. However, these models often require high storage space due to their large size. This study aims to address these challenges by introducing a CNN model named MpoxSLDNet (Monkeypox Skin Lesion Detector Network) to facilitate early detection and categorization of Monkeypox lesions and Non-Monkeypox lesions in digital images. Our model represents a significant advancement in the field of monkeypox lesion detection by offering superior performance metrics, including precision, recall, F1-score, accuracy, and AUC, compared to traditional pre-trained models such as VGG16, ResNet50, and DenseNet121. The key novelty of our approach lies in MpoxSLDNet's ability to achieve high detection accuracy while requiring significantly less storage space than existing models. By addressing the challenge of high storage requirements, MpoxSLDNet presents a practical solution for early detection and categorization of monkeypox lesions in resource-constrained healthcare settings. In this study, we have used the "Monkeypox Skin Lesion Dataset" comprising 1428 skin images of monkeypox lesions and 1764 skin images of Non-Monkeypox lesions. The dataset's limitations could potentially impact the model's ability to generalize to unseen cases. However, the MpoxSLDNet model achieved a validation accuracy of 94.56%, compared to 86.25%, 84.38%, and 67.19% for VGG16, DenseNet121, and ResNet50, respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 459,611 |
2410.05363 | Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation | Text-to-video (T2V) models like Sora have made significant strides in visualizing complex prompts, which is increasingly viewed as a promising path towards constructing the universal world simulator. Cognitive psychologists believe that the foundation for achieving this goal is the ability to understand intuitive physics. However, the capacity of these models to accurately represent intuitive physics remains largely unexplored. To bridge this gap, we introduce PhyGenBench, a comprehensive \textbf{Phy}sics \textbf{Gen}eration \textbf{Ben}chmark designed to evaluate physical commonsense correctness in T2V generation. PhyGenBench comprises 160 carefully crafted prompts across 27 distinct physical laws, spanning four fundamental domains, which could comprehensively assess models' understanding of physical commonsense. Alongside PhyGenBench, we propose a novel evaluation framework called PhyGenEval. This framework employs a hierarchical evaluation structure utilizing appropriate advanced vision-language models and large language models to assess physical commonsense. Through PhyGenBench and PhyGenEval, we can conduct large-scale automated assessments of T2V models' understanding of physical commonsense, which align closely with human feedback. Our evaluation results and in-depth analysis demonstrate that current models struggle to generate videos that comply with physical commonsense. Moreover, simply scaling up models or employing prompt engineering techniques is insufficient to fully address the challenges presented by PhyGenBench (e.g., dynamic scenarios). We hope this study will inspire the community to prioritize the learning of physical commonsense in these models beyond entertainment applications. We will release the data and codes at https://github.com/OpenGVLab/PhyGenBench | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 495,708 |
2401.02552 | Long-term Fairness For Real-time Decision Making: A Constrained Online Optimization Approach | Machine learning (ML) has demonstrated remarkable capabilities across many real-world systems, from predictive modeling to intelligent automation. However, the widespread integration of machine learning also makes it necessary to ensure machine learning-driven decision-making systems do not violate ethical principles and values of the society in which they operate. As ML-driven decisions proliferate, particularly in cases involving sensitive attributes such as gender, race, and age, to name a few, the need for equity and impartiality has emerged as a fundamental concern. In situations demanding real-time decision-making, fairness objectives become more nuanced and complex: instantaneous fairness to ensure equity in every time slot, and long-term fairness to ensure fairness over a period of time. There is a growing awareness of real-world systems that operate over long periods and require fairness over different timelines. However, existing approaches mainly address dynamic costs with time-invariant fairness constraints, often disregarding the challenges posed by time-varying fairness constraints. To bridge this gap, this work introduces a framework for ensuring long-term fairness within dynamic decision-making systems characterized by time-varying fairness constraints. We formulate the decision problem with fairness constraints over a period as a constrained online optimization problem. A novel online algorithm, named LoTFair, is presented that solves the problem 'on the fly'. We prove that LoTFair can make overall fairness violations negligible while maintaining the performance over the long run. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 419,756 |
2010.07470 | Masked Contrastive Representation Learning for Reinforcement Learning | Improving sample efficiency is a key research problem in reinforcement learning (RL), and CURL, which uses contrastive learning to extract high-level features from raw pixels of individual video frames, is an efficient algorithm~\citep{srinivas2020curl}. We observe that consecutive video frames in a game are highly correlated but CURL deals with them independently. To further improve data efficiency, we propose a new algorithm, masked contrastive representation learning for RL, that takes the correlation among consecutive inputs into consideration. In addition to the CNN encoder and the policy network in CURL, our method introduces an auxiliary Transformer module to leverage the correlations among video frames. During training, we randomly mask the features of several frames, and use the CNN encoder and Transformer to reconstruct them based on the context frames. The CNN encoder and Transformer are jointly trained via contrastive learning where the reconstructed features should be similar to the ground-truth ones while dissimilar to others. During inference, the CNN encoder and the policy network are used to take actions, and the Transformer module is discarded. Our method achieves consistent improvements over CURL on $14$ out of $16$ environments from DMControl suite and $21$ out of $26$ environments from Atari 2600 Games. The code is available at https://github.com/teslacool/m-curl. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 200,828 |
1707.06468 | Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization | Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods to solve a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such as the Lasso, group Lasso or empirical risk minimization with convex constraints. In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse method inspired by SAGA, a variance reduced incremental gradient algorithm. The proposed method is easy to implement and significantly outperforms the state of the art on several nonsmooth, large-scale problems. We prove that our method achieves a theoretical linear speedup with respect to the sequential version under assumptions on the sparsity of gradients and block-separability of the proximal term. Empirical benchmarks on a multi-core architecture illustrate practical speedups of up to 12x on a 20-core machine. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 77,429 |
1102.3080 | Covering Point Patterns | An encoder observes a point pattern---a finite number of points in the interval $[0,T]$---which is to be described to a reconstructor using bits. Based on these bits, the reconstructor wishes to select a subset of $[0,T]$ that contains all the points in the pattern. It is shown that, if the point pattern is produced by a homogeneous Poisson process of intensity $\lambda$, and if the reconstructor is restricted to select a subset of average Lebesgue measure not exceeding $DT$, then, as $T$ tends to infinity, the minimum number of bits per second needed by the encoder is $-\lambda\log D$. It is also shown that, as $T$ tends to infinity, any point pattern on $[0,T]$ containing no more than $\lambda T$ points can be successfully described using $-\lambda \log D$ bits per second in this sense. Finally, a Wyner-Ziv version of this problem is considered where some of the points in the pattern are known to the reconstructor. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 9,206 |
2304.11488 | Physics-guided generative adversarial network to learn physical models | This short note describes the concept of guided training of deep neural networks (DNNs) to learn physically reasonable solutions. DNNs are being widely used to predict phenomena in physics and mechanics. One of the issues of DNNs is that their output does not always satisfy physical equations. One approach to consider physical equations is adding a residual of equations into the loss function; this is called physics-informed neural network (PINN). One feature of PINNs is that the physical equations and corresponding residual must be implemented as part of a neural network model. In addition, the residual does not always converge to a small value. The proposed model is a physics-guided generative adversarial network (PG-GAN) that uses a GAN architecture in which physical equations are used to judge whether the neural network's output is consistent with physics. The proposed method was applied to a simple problem to assess its potential usability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 359,837 |
1508.06395 | On the Role of Shared Randomness in Simultaneous Communication | Two parties wish to carry out certain distributed computational tasks, and they are given access to a source of correlated random bits. It allows the parties to act in a correlated manner, which can be quite useful. But what happens if the shared randomness is not perfect? In this work, we initiate the study of the power of different sources of shared randomness in communication complexity. This is done in the setting of simultaneous message passing (SMP) model of communication complexity, which is one of the most suitable models for studying the resource of shared randomness. Toward characterising the power of various sources of shared randomness, we introduce a measure for the quality of a source - we call it collision complexity. Our results show that the collision complexity tightly characterises the power of a (shared) randomness resource in the SMP model. Of independent interest is our demonstration that even the weakest sources of shared randomness can in some cases increase the power of SMP substantially: the equality function can be solved very efficiently with virtually any nontrivial shared randomness. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 46,323 |
2403.12981 | Beyond Inference: Performance Analysis of DNN Server Overheads for Computer Vision | Deep neural network (DNN) inference has become an important part of many data-center workloads. This has prompted focused efforts to design ever-faster deep learning accelerators such as GPUs and TPUs. However, an end-to-end DNN-based vision application contains more than just DNN inference, including input decompression, resizing, sampling, normalization, and data transfer. In this paper, we perform a thorough evaluation of computer vision inference requests performed on a throughput-optimized serving system. We quantify the performance impact of server overheads such as data movement, preprocessing, and message brokers between two DNNs producing outputs at different rates. Our empirical analysis encompasses many computer vision tasks including image classification, segmentation, detection, depth-estimation, and more complex processing pipelines with multiple DNNs. Our results consistently demonstrate that end-to-end application performance can easily be dominated by data processing and data movement functions (up to 56% of end-to-end latency in a medium-sized image, and $\sim$ 80% impact on system throughput in a large image), even though these functions have been conventionally overlooked in deep learning system design. Our work identifies important performance bottlenecks in different application scenarios, achieves 2.25$\times$ better throughput compared to prior work, and paves the way for more holistic deep learning system design. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 439,420 |
2306.03377 | TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision | End-to-end text spotting is a vital computer vision task that aims to integrate scene text detection and recognition into a unified framework. Typical methods heavily rely on Region-of-Interest (RoI) operations to extract local features and complex post-processing steps to produce final predictions. To address these limitations, we propose TextFormer, a query-based end-to-end text spotter with Transformer architecture. Specifically, using query embedding per text instance, TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling. It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing without sacrificing flexibility or simplicity. Additionally, we design an Adaptive Global aGgregation (AGG) module to transfer global features into sequential features for reading arbitrarily-shaped texts, which overcomes the sub-optimization problem of RoI operations. Furthermore, potential corpus information is utilized from weak annotations to full labels through mixed supervision, further improving text detection and end-to-end text spotting results. Extensive experiments on various bilingual (i.e., English and Chinese) benchmarks demonstrate the superiority of our method. Especially on TDA-ReCTS dataset, TextFormer surpasses the state-of-the-art method in terms of 1-NED by 13.2%. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 371,296 |
2202.12979 | Generalised Gaussian Process Latent Variable Models (GPLVM) with Stochastic Variational Inference | Gaussian process latent variable models (GPLVM) are a flexible and non-linear approach to dimensionality reduction, extending classical Gaussian processes to an unsupervised learning context. The Bayesian incarnation of the GPLVM [Titsias and Lawrence, 2010] uses a variational framework, where the posterior over latent variables is approximated by a well-behaved variational family, a factorized Gaussian yielding a tractable lower bound. However, the non-factorisability of the lower bound prevents truly scalable inference. In this work, we study the doubly stochastic formulation of the Bayesian GPLVM model amenable to minibatch training. We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models. Further, we demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions. We demonstrate the model's performance by benchmarking against the canonical sparse GPLVM for high-dimensional data examples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 282,430 |
1910.03033 | Synthesizing Credit Card Transactions | Two elements have been essential to AI's recent boom: (1) deep neural nets and the theory and practice behind them; and (2) cloud computing with its abundant labeled data and large computing resources. Abundant labeled data is available for key domains such as images, speech, natural language processing, and recommendation engines. However, there are many other domains where such data is not available, or access to it is highly restricted for privacy reasons, as with health and financial data. Even when abundant data is available, it is often not labeled. Doing such labeling is labor-intensive and non-scalable. As a result, to the best of our knowledge, key domains still lack labeled data or have at most toy data; or the synthetic data must have access to real data from which it can mimic new data. This paper outlines work to generate realistic synthetic data for an important domain: credit card transactions. Some challenges: there are many patterns and correlations in real purchases. There are millions of merchants and innumerable locations. Those merchants offer a wide variety of goods. Who shops where and when? How much do people pay? What is a realistic fraudulent transaction? We use a mixture of technical approaches and domain knowledge including mechanics of credit card processing, a broad set of consumer domains: electronics, clothing, hair styling, etc. Connecting everything is a virtual world. This paper outlines some of our key techniques and provides evidence that the data generated is indeed realistic. Beyond the scope of this paper: (1) use of our data to develop and train models to predict fraud; (2) coupling models and the synthetic dataset to assess performance in designing accelerators such as GPUs and TPUs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | 148,392 |
2008.08748 | DPMC: Weighted Model Counting by Dynamic Programming on Project-Join Trees | We propose a unifying dynamic-programming framework to compute exact literal-weighted model counts of formulas in conjunctive normal form. At the center of our framework are project-join trees, which specify efficient project-join orders to apply additive projections (variable eliminations) and joins (clause multiplications). In this framework, model counting is performed in two phases. First, the planning phase constructs a project-join tree from a formula. Second, the execution phase computes the model count of the formula, employing dynamic programming as guided by the project-join tree. We empirically evaluate various methods for the planning phase and compare constraint-satisfaction heuristics with tree-decomposition tools. We also investigate the performance of different data structures for the execution phase and compare algebraic decision diagrams with tensors. We show that our dynamic-programming model-counting framework DPMC is competitive with the state-of-the-art exact weighted model counters cachet, c2d, d4, and miniC2D. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 192,494 |
2011.03043 | Identifying and interpreting tuning dimensions in deep networks | In neuroscience, a tuning dimension is a stimulus attribute that accounts for much of the activation variance of a group of neurons. These are commonly used to decipher the responses of such groups. While researchers have attempted to manually identify an analogue to these tuning dimensions in deep neural networks, we are unaware of an automatic way to discover them. This work contributes an unsupervised framework for identifying and interpreting "tuning dimensions" in deep networks. Our method correctly identifies the tuning dimensions of a synthetic Gabor filter bank and tuning dimensions of the first two layers of InceptionV1 trained on ImageNet. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 205,115 |
2102.04656 | Large-Scale Visual Search with Binary Distributed Graph at Alibaba | Graph-based approximate nearest neighbor search has attracted more and more attention due to its online search advantages. A number of methods studying the enhancement of speed and recall have been put forward. However, few of them focus on the efficiency and scale of offline graph-construction. For a deployed visual search system with several billions of online images in total, building a billion-scale offline graph in hours is essential, which is almost unachievable by most existing methods. In this paper, we propose a novel algorithm called Binary Distributed Graph to solve this problem. Specifically, we combine binary codes with graph structure to speedup online and offline procedures, and achieve comparable performance with the ones in real-value based scenarios by recalling more binary candidates. Furthermore, the graph-construction is optimized to a completely distributed implementation, which significantly accelerates the offline process and gets rid of the limitation of memory and disk within a single machine. Experimental comparisons on Alibaba Commodity Data Set (more than three billion images) show that the proposed method outperforms the state-of-the-art with respect to the online/offline trade-off. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | 219,181 |
2311.08024 | MD-IQA: Learning Multi-scale Distributed Image Quality Assessment with Semi Supervised Learning for Low Dose CT | Image quality assessment (IQA) plays a critical role in optimizing radiation dose and developing novel medical imaging techniques in computed tomography (CT). Traditional IQA methods relying on hand-crafted features have limitations in summarizing the subjective perceptual experience of image quality. Recent deep learning-based approaches have demonstrated strong modeling capabilities and potential for medical IQA, but challenges remain regarding model generalization and perceptual accuracy. In this work, we propose a multi-scale distributions regression approach to predict quality scores by constraining the output distribution, thereby improving model generalization. Furthermore, we design a dual-branch alignment network to enhance feature extraction capabilities. Additionally, semi-supervised learning is introduced by utilizing pseudo-labels for unlabeled data to guide model training. Extensive qualitative experiments demonstrate the effectiveness of our proposed method for advancing the state-of-the-art in deep learning-based medical IQA. Code is available at: https://github.com/zunzhumu/MD-IQA. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 407,566 |
1710.09953 | The Error Probability of Random Fourier Features is Dimensionality Independent | We show that the error probability of reconstructing kernel matrices from Random Fourier Features for the Gaussian kernel function is at most $\mathcal{O}(R^{2/3} \exp(-D))$, where $D$ is the number of random features and $R$ is the diameter of the data domain. We also provide an information-theoretic method-independent lower bound of $\Omega((1-\exp(-R^2)) \exp(-D))$. Compared to prior work, we are the first to show that the error probability for random Fourier features is independent of the dimensionality of data points. As applications of our theory, we obtain dimension-independent bounds for kernel ridge regression and support vector machines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 83,287 |
1611.07767 | Multiframe Motion Coupling for Video Super Resolution | The idea of video super resolution is to use different viewpoints of a single scene to enhance the overall resolution and quality. Classical energy minimization approaches first establish a correspondence of the current frame to all its neighbors in some radius and then use this temporal information for enhancement. In this paper, we propose the first variational super resolution approach that computes several super resolved frames in one batch optimization procedure by incorporating motion information between the high-resolution image frames themselves. As a consequence, the number of motion estimation problems grows linearly in the number of frames, as opposed to the quadratic growth of classical methods, and temporal consistency is enforced naturally. We use infimal convolution regularization as well as an automatic parameter balancing scheme to automatically determine the reliability of the motion information and reweight the regularization locally. We demonstrate that our approach yields state-of-the-art results and even is competitive with machine learning approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 64,404 |
2502.08821 | DejAIvu: Identifying and Explaining AI Art on the Web in Real-Time with Saliency Maps | The recent surge in advanced generative models, such as diffusion models and generative adversarial networks (GANs), has led to an alarming rise in AI-generated images across various domains on the web. While such technologies offer benefits such as democratizing artistic creation, they also pose challenges in misinformation, digital forgery, and authenticity verification. Additionally, the uncredited use of AI-generated images in media and marketing has sparked significant backlash from online communities. In response to this, we introduce DejAIvu, a Chrome Web extension that combines real-time AI-generated image detection with saliency-based explainability while users browse the web. Using an ONNX-optimized deep learning model, DejAIvu automatically analyzes images on websites such as Google Images, identifies AI-generated content using model inference, and overlays a saliency heatmap to highlight AI-related artifacts. Our approach integrates efficient in-browser inference, gradient-based saliency analysis, and a seamless user experience, ensuring that AI detection is both transparent and interpretable. We also evaluate DejAIvu across multiple pretrained architectures and benchmark datasets, demonstrating high accuracy and low latency, making it a practical and deployable tool for enhancing AI image accountability. The code for this system can be found at https://github.com/Noodulz/dejAIvu. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 533,179 |
1303.3632 | Statistical Regression to Predict Total Cumulative CPU Usage of MapReduce Jobs | Recently, businesses have started using MapReduce as a popular computation framework for processing large amounts of data, such as spam detection, and different data mining tasks, in both public and private clouds. Two of the challenging questions in such environments are (1) choosing suitable values for MapReduce configuration parameters e.g., number of mappers, number of reducers, and DFS block size, and (2) predicting the amount of resources that a user should lease from the service provider. Currently, the tasks of both choosing configuration parameters and estimating required resources are solely the user's responsibilities. In this paper, we present an approach to provision the total CPU usage in clock cycles of jobs in a MapReduce environment. For a MapReduce job, a profile of total CPU usage in clock cycles is built from the job's past executions with different values of two configuration parameters e.g., number of mappers, and number of reducers. Then, a polynomial regression is used to model the relation between these configuration parameters and total CPU usage in clock cycles of the job. We also briefly study the influence of input data scaling on measured total CPU usage in clock cycles. This derived model along with the scaling result can then be used to provision the total CPU usage in clock cycles of the same jobs with different input data size. We validate the accuracy of our models using three realistic applications (WordCount, Exim MainLog parsing, and TeraSort). Results show that the predicted total CPU usage in clock cycles of generated resource provisioning options is within 8% of the measured total CPU usage in clock cycles in our 20-node virtual Hadoop cluster. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 22,935 |
2410.11414 | ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability | Retrieval-Augmented Generation (RAG) models are designed to incorporate external knowledge, reducing hallucinations caused by insufficient parametric (internal) knowledge. However, even with accurate and relevant retrieved content, RAG models can still produce hallucinations by generating outputs that conflict with the retrieved information. Detecting such hallucinations requires disentangling how Large Language Models (LLMs) utilize external and parametric knowledge. Current detection methods often focus on only one of these mechanisms, or fail to decouple their intertwined effects, making accurate detection difficult. In this paper, we investigate the internal mechanisms behind hallucinations in RAG scenarios. We discover that hallucinations occur when the Knowledge FFNs in LLMs overemphasize parametric knowledge in the residual stream, while Copying Heads fail to effectively retain or integrate external knowledge from retrieved content. Based on these findings, we propose ReDeEP, a novel method that detects hallucinations by decoupling LLM's utilization of external context and parametric knowledge. Our experiments show that ReDeEP significantly improves RAG hallucination detection accuracy. Additionally, we introduce AARF, which mitigates hallucinations by modulating the contributions of Knowledge FFNs and Copying Heads. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 498,551 |
1804.03273 | On the Supermodularity of Active Graph-based Semi-supervised Learning with Stieltjes Matrix Regularization | Active graph-based semi-supervised learning (AG-SSL) aims to select a small set of labeled examples and utilize their graph-based relation to other unlabeled examples to aid in machine learning tasks. It is also closely related to the sampling theory in graph signal processing. In this paper, we revisit the original formulation of graph-based SSL and prove the supermodularity of an AG-SSL objective function under a broad class of regularization functions parameterized by Stieltjes matrices. Under this setting, supermodularity yields a novel greedy label sampling algorithm with guaranteed performance relative to the optimal sampling set. Compared to three state-of-the-art graph signal sampling and recovery methods on two real-life community detection datasets, the proposed AG-SSL method attains superior classification accuracy given limited sample budgets. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 94,586 |
2010.00403 | Mediating Artificial Intelligence Developments through Negative and Positive Incentives | The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business and also policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, a belief in this narrative may be detrimental as some stake-holders will feel obliged to cut corners on safety precautions, or ignore societal consequences just to "win". Starting from a baseline model that describes a broad class of technology races where winners draw a significant benefit compared to others (such as AI advances, patent race, pharmaceutical technologies), we investigate here how positive (rewards) and negative (punishments) incentives may beneficially influence the outcomes. We uncover conditions in which punishment is either capable of reducing the development speed of unsafe participants or has the capacity to reduce innovation through over-regulation. Alternatively, we show that, in several scenarios, rewarding those that follow safety measures may increase the development speed while ensuring safe choices. Moreover, in the latter regimes, rewards do not suffer from the issue of over-regulation as is the case for punishment. Overall, our findings provide valuable insights into the nature and kinds of regulatory actions most suitable to improve safety compliance in the contexts of both smooth and sudden technological shifts. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 198,283 |
2312.16566 | Inverse Reinforcement Learning with Unknown Reward Model based on Structural Risk Minimization | Inverse reinforcement learning (IRL) usually assumes the model of the reward function is pre-specified and estimates the parameter only. However, how to determine a proper reward model is nontrivial. A simplistic model is less likely to contain the real reward function, while a model with high complexity leads to substantial computation cost and risks overfitting. This paper addresses this trade-off in IRL model selection by introducing the structural risk minimization (SRM) method from statistical learning. SRM selects an optimal reward function class from a hypothesis set minimizing both estimation error and model complexity. To formulate an SRM scheme for IRL, we estimate policy gradient by demonstration serving as empirical risk and establish the upper bound of Rademacher complexity of hypothesis classes as model penalty. The learning guarantee is further presented. In particular, we provide explicit SRM for the common linear weighted sum setting in IRL. Simulations demonstrate the performance and efficiency of our scheme. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 418,433 |
2101.09505 | Safe Learning and Optimization Techniques: Towards a Survey of the State
of the Art | Safe learning and optimization deals with learning and optimization problems that avoid, as much as possible, the evaluation of non-safe input points, which are solutions, policies, or strategies that cause an irrecoverable loss (e.g., breakage of a machine or equipment, or life threat). Although a comprehensive survey of safe reinforcement learning algorithms was published in 2015, a number of new algorithms have been proposed thereafter, and related works in active learning and in optimization were not considered. This paper reviews those algorithms from a number of domains including reinforcement learning, Gaussian process regression and classification, evolutionary algorithms, and active learning. We provide the fundamental concepts on which the reviewed algorithms are based and a characterization of the individual algorithms. We conclude by explaining how the algorithms are connected and suggestions for future research. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 216,620 |
2403.11090 | Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via
NN-Driven Traffic Analysis at Line-Speed | The emerging programmable networks sparked significant research on Intelligent Network Data Plane (INDP), which achieves learning-based traffic analysis at line-speed. Prior art in INDP focuses on deploying tree/forest models on the data plane. We observe a fundamental limitation in tree-based INDP approaches: although it is possible to represent even larger tree/forest tables on the data plane, the flow features that are computable on the data plane are fundamentally limited by hardware constraints. In this paper, we present BoS to push the boundaries of INDP by enabling Neural Network (NN) driven traffic analysis at line-speed. Many types of NNs (such as Recurrent Neural Networks (RNNs) and transformers) that are designed to work with sequential data have advantages over tree-based models, because they can take raw network data as input without complex feature computations on the fly. However, the challenge is significant: the recurrent computation scheme used in RNN inference is fundamentally different from the match-action paradigm used on the network data plane. BoS addresses this challenge by (i) designing a novel data plane friendly RNN architecture that can execute unlimited RNN time steps with limited data plane stages, effectively achieving line-speed RNN inference; and (ii) complementing the on-switch RNN model with an off-switch transformer-based traffic analysis module to further boost the overall performance. We implement a prototype of BoS using a P4 programmable switch as our data plane, and extensively evaluate it over multiple traffic analysis tasks. The results show that BoS outperforms the state of the art in both analysis accuracy and scalability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 438,525
2407.08196 | SoupLM: Model Integration in Large Language and Multi-Modal Models | Training large language models (LLMs) and multimodal LLMs necessitates significant computing resources, and existing publicly available LLMs are typically pre-trained on diverse, privately curated datasets spanning various tasks. For instance, LLaMA, Vicuna, and LLaVA are three LLM variants trained with LLaMA base models using very different training recipes, tasks, and data modalities. The training cost and complexity for such LLM variants grow rapidly. In this study, we propose to use a soup strategy to assemble these LLM variants into a single well-generalized multimodal LLM (SoupLM) in a cost-efficient manner. Assembling these LLM variants efficiently brings knowledge and specialities trained from different domains and data modalities into an integrated one (e.g., chatbot speciality from user-shared conversations for Vicuna, and visual capacity from vision-language data for LLaVA), thereby avoiding the computing costs of repetitive training on several different domains. We propose a series of soup strategies to systematically benchmark performance gains across various configurations, and probe the soup behavior across base models in the interpolation space. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 472,059
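A minimal "soup" sketch: interpolating the weights of several checkpoints that share one architecture. A toy linear layer stands in for an LLM, and the interpolation coefficients are illustrative assumptions, not the paper's actual recipes or base models.

```python
# Weight-space interpolation ("souping") of checkpoints with identical architectures.
import torch
import torch.nn as nn

def soup_state_dicts(state_dicts, coeffs):
    assert abs(sum(coeffs) - 1.0) < 1e-6, "interpolation weights should sum to 1"
    souped = {}
    for key in state_dicts[0]:
        souped[key] = sum(c * sd[key].float() for c, sd in zip(coeffs, state_dicts))
    return souped

if __name__ == "__main__":
    torch.manual_seed(0)
    variants = [nn.Linear(8, 4) for _ in range(3)]   # stand-ins for LLM variants
    coeffs = [0.5, 0.3, 0.2]                          # one point in the interpolation space
    merged = nn.Linear(8, 4)
    merged.load_state_dict(soup_state_dicts([m.state_dict() for m in variants], coeffs))
    x = torch.randn(2, 8)
    print(merged(x).shape)
```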
2307.13779 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective. | true | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | 381,699 |
1710.05768 | Adaptive Full Duplex Communications in Cognitive Radio Networks | In this paper we propose a novel adaptive scheme for full duplex communication of secondary users (SUs) in a cognitive radio network. The secondary network operates in three modes: Cooperative Sensing (CS), Full Duplex Transmit and Sensing (FDTS), and Full Duplex Transmit and Receive (FDTR). In the CS mode, the secondary nodes detect the activity of primary users (PUs) through a novel cooperative MAC protocol and will decide the system's mode of operation in the subsequent spectrum hole. In the FDTS mode one of the SUs senses the PUs' activity continuously whilst transmitting to another node. In the FDTR mode, the SUs would communicate bidirectionally in an asynchronous full duplex (FD) manner, with decreased maximum and average collision durations. Analytical closed forms for probability of collision, average collision duration and cumulative collision duration, as well as throughput of the SU network are derived, and performance of the proposed protocol in terms of the above-mentioned metrics, its effectiveness, and advantages over conventional methods of sensing and transmission are verified via simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 82,685
2408.02087 | Constructing Mechanical Design Agent Based on Large Language Models | Since ancient times, mechanical design aids have been developed to assist human users, aimed at improving the efficiency and effectiveness of design. However, even with the widespread use of contemporary Computer-Aided Design (CAD) systems, there are still high learning costs, repetitive work, and other challenges. In recent years, the rise of Large Language Models (LLMs) has introduced new productivity opportunities to the field of mechanical design. Yet, it remains unrealistic to rely on LLMs alone to complete mechanical design tasks directly. Through a series of explorations, we propose a method for constructing a comprehensive Mechanical Design Agent (MDA) by guiding LLM learning. To verify the validity of our proposed method, we conducted a series of experiments and presented relevant cases. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 478,497 |
2010.15426 | Physics-informed deep learning for flow and deformation in poroelastic
media | A physics-informed neural network is presented for poroelastic problems with coupled flow and deformation processes. The governing equilibrium and mass balance equations are discussed and specific derivations for two-dimensional cases are presented. A fully-connected deep neural network is used for training. Barry and Mercer's source problem with time-dependent fluid injection/extraction in an idealized poroelastic medium, which has an exact analytical solution, is used as a numerical example. A random sample from the analytical solution is used as training data and the performance of the model is tested by predicting the solution on the entire domain after training. The deep learning model predicts the horizontal and vertical deformations well, while the error in the pore pressure predictions is slightly higher because of the sparsity of the pore pressure values. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 203,768
2209.03777 | Joint Optimization of STAR-RIS Assisted UAV Communication Systems | In this letter, we study the simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) assisted unmanned aerial vehicle (UAV) communications. Our goal is to maximize the sum rate of all users by jointly optimizing the STAR-RIS's beamforming vectors, the UAV's trajectory and power allocation. We decompose the formulated non-convex problem into three subproblems and solve them alternately to obtain the solution. Simulations show that: 1) the STAR-RIS achieves a higher sum rate than traditional RIS; 2) to exploit the benefits of STAR-RIS, the UAV's trajectory is closer to STAR-RIS than that of RIS; 3) the energy splitting for reflection and transmission highly depends on the real-time trajectory of UAV. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 316,597 |
2105.09275 | DumbleDR: Predicting User Preferences of Dimensionality Reduction
Projection Quality | A plethora of dimensionality reduction techniques have emerged over the past decades, leaving researchers and analysts with a wide variety of choices for reducing their data, all the more so given some techniques come with additional parametrization (e.g. t-SNE, UMAP, etc.). Recent studies are showing that people often use dimensionality reduction as a black-box regardless of the specific properties the method itself preserves. Hence, evaluating and comparing 2D projections is usually qualitatively decided, by setting projections side-by-side and letting human judgment decide which projection is the best. In this work, we propose a quantitative way of evaluating projections that nonetheless places human perception at the center. We run a comparative study, where we ask people to select 'good' and 'misleading' views between scatterplots of low-level projections of image datasets, simulating the way people usually select projections. We use the study data as labels for a set of quality metrics whose purpose is to discover and quantify what exactly people are looking for when deciding between projections. We then use this proxy for human judgments to rank projections on new datasets, explain why they are relevant, and quantify the degree of subjectivity in the projections selected. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 236,018
1802.06318 | Large Neighborhood-Based Metaheuristic and Branch-and-Price for the
Pickup and Delivery Problem with Split Loads | We consider the multi-vehicle one-to-one pickup and delivery problem with split loads, an NP-hard problem linked with a variety of applications for bulk product transportation, bike-sharing systems and inventory re-balancing. This problem is notoriously difficult due to the interaction of two challenging vehicle routing attributes, "pickups and deliveries" and "split deliveries". This possibly leads to optimal solutions of a size that grows exponentially with the instance size, containing multiple visits per customer pair, even in the same route. To solve this problem, we propose an iterated local search metaheuristic as well as a branch-and-price algorithm. The core of the metaheuristic consists of a new large neighborhood search, which reduces the problem of finding the best insertion combination of a pickup and delivery pair into a route (with possible splits) to a resource-constrained shortest path and knapsack problem. Similarly, the branch-and-price algorithm uses sophisticated labeling techniques, route relaxations, pre-processing and branching rules for an efficient resolution. Our computational experiments on classical single-vehicle instances demonstrate the excellent performance of the metaheuristic, which produces new best known solutions for 92 out of 93 test instances, and outperforms all previous algorithms. Experimental results on new multi-vehicle instances with distance constraints are also reported. The branch-and-price algorithm produces optimal solutions for instances with up to 20 pickup-and-delivery pairs, and very accurate solutions are found by the metaheuristic. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 90,644
2410.15980 | Learning from Neighbors: Category Extrapolation for Long-Tail Learning | Balancing training on long-tail data distributions remains a long-standing challenge in deep learning. While methods such as re-weighting and re-sampling help alleviate the imbalance issue, limited sample diversity continues to hinder models from learning robust and generalizable feature representations, particularly for tail classes. In contrast to existing methods, we offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance. In this paper, we investigate this phenomenon through both quantitative and qualitative studies, showing that increased granularity enhances the generalization of learned features in tail categories. Motivated by these findings, we propose a method to increase dataset granularity through category extrapolation. Specifically, we introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes. This forms the core contribution and insight of our approach. To automate the curation of auxiliary data, we leverage large language models (LLMs) as knowledge bases to search for auxiliary categories and retrieve relevant images through web crawling. To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss that encourages the model to focus on class discrimination within the target dataset. During inference, the classifier weights for auxiliary categories are masked out, leaving only the target class weights for use. Extensive experiments and ablation studies on three standard long-tail benchmarks demonstrate the effectiveness of our approach, notably outperforming strong baseline methods that use the same amount of data. The code will be made publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 500,823 |
2410.20890 | Generative Example-Based Explanations: Bridging the Gap between
Generative Modeling and Explainability | Recently, several methods have leveraged deep generative modeling to produce example-based explanations of decision algorithms for high-dimensional input data. Despite promising results, a disconnect exists between these methods and the classical explainability literature, which focuses on lower-dimensional data with semantically meaningful features. This conceptual and communication gap leads to misunderstandings and misalignments in goals and expectations. In this paper, we bridge this gap by proposing a novel probabilistic framework for local example-based explanations. Our framework integrates the critical characteristics of classical local explanation desiderata while being amenable to high-dimensional data and their modeling through deep generative models. Our aim is to facilitate communication, foster rigor and transparency, and improve the quality of peer discussion and research progress. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 503,006 |
1803.04638 | A New Simulation Algorithm for Absorbing Receiver in Molecular
Communication | The simulation of diffusion-based molecular communication systems with absorbing receivers often requires a high computational complexity to produce accurate results. In this work, a new a priori Monte Carlo (APMC) algorithm is proposed to precisely simulate the molecules absorbed at a spherical receiver when the simulation time step length is relatively large. This algorithm addresses the limitations of the current refined Monte Carlo (RMC) algorithm, since the RMC algorithm provides accurate simulation only for a relatively small time step length. The APMC algorithm is demonstrated to achieve a higher simulation efficiency than the existing algorithms by finding that the APMC algorithm, for a relatively large time step length, absorbs the fraction of molecules expected by analysis, while other algorithms do not. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 92,490 |
1703.10664 | Tube Convolutional Neural Network (T-CNN) for Action Detection in Videos | Deep learning has been demonstrated to achieve excellent results for image classification and object detection. However, the impact of deep learning on video analysis (e.g. action detection and recognition) has been limited due to the complexity of video data and the lack of annotations. Previous convolutional neural network (CNN) based video action detection approaches usually consist of two major steps: frame-level action proposal detection and association of proposals across frames. Also, these methods employ a two-stream CNN framework to handle spatial and temporal features separately. In this paper, we propose an end-to-end deep network called Tube Convolutional Neural Network (T-CNN) for action detection in videos. The proposed architecture is a unified network that is able to recognize and localize action based on 3D convolution features. A video is first divided into equal length clips and for each clip a set of tube proposals are generated next based on 3D Convolutional Network (ConvNet) features. Finally, the tube proposals of different clips are linked together employing network flow and spatio-temporal action detection is performed using these linked video proposals. Extensive experiments on several video datasets demonstrate the superior performance of T-CNN for classifying and localizing actions in both trimmed and untrimmed videos compared to state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 70,951
2212.09180 | Don't Forget Your ABC's: Evaluating the State-of-the-Art in
Chat-Oriented Dialogue Systems | Despite tremendous advancements in dialogue systems, stable evaluation still requires human judgments producing notoriously high-variance metrics due to their inherent subjectivity. Moreover, methods and labels in dialogue evaluation are not fully standardized, especially for open-domain chats, with a lack of work to compare and assess the validity of those approaches. The use of inconsistent evaluation can misinform the performance of a dialogue system, which becomes a major hurdle to enhance it. Thus, a dimensional evaluation of chat-oriented open-domain dialogue systems that reliably measures several aspects of dialogue capabilities is desired. This paper presents a novel human evaluation method to estimate the rates of many dialogue system behaviors. Our method is used to evaluate four state-of-the-art open-domain dialogue systems and compared with existing approaches. The analysis demonstrates that our behavior method is more suitable than alternative Likert-style or comparative approaches for dimensional evaluation of these systems. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 337,026 |
2104.01073 | Enhancing Underwater Image via Adaptive Color and Contrast Enhancement,
and Denoising | Images captured underwater are often characterized by low contrast, color distortion, and noise. To address these visual degradations, we propose a novel scheme by constructing an adaptive color and contrast enhancement, and denoising (ACCE-D) framework for underwater image enhancement. In the proposed framework, a Difference of Gaussian (DoG) filter and a bilateral filter are employed to extract the high-frequency and low-frequency components, respectively. Benefiting from this separation, we apply a soft-thresholding operation to suppress the noise in the high-frequency component. Specifically, the low-frequency component is enhanced by using an adaptive color and contrast enhancement (ACCE) strategy. The proposed ACCE is an adaptive variational framework implemented in the HSI color space, which integrates a data term and a regularization term, and introduces a Gaussian weight and a Heaviside function to avoid over-enhancement and oversaturation. Moreover, we derive a numerical solution for ACCE, and adopt a pyramid-based strategy to accelerate the solving procedure. Experimental results demonstrate that our strategy is effective in color correction, visibility improvement, and detail revealing. Comparisons with state-of-the-art techniques also validate the superiority of the proposed method. Furthermore, we have verified the utility of our proposed ACCE-D for enhancing other types of degraded scenes, including foggy, sandstorm, and low-light scenes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 228,222
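A rough sketch of the decomposition-plus-denoising step described above, assuming OpenCV: a bilateral filter provides the low-frequency layer, the residual serves as the high-frequency layer (a DoG filter could be used instead), and soft thresholding suppresses noise in the residual. The full ACCE variational enhancement of the low-frequency layer is omitted, and the kernel sizes and threshold are illustrative, not the paper's settings.

```python
# Decompose an image into low/high-frequency layers and denoise the high band.
import cv2
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def decompose_and_denoise(img_bgr: np.ndarray, thresh: float = 0.02) -> np.ndarray:
    img = img_bgr.astype(np.float32) / 255.0
    low = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=7)
    high = img - low                      # high-frequency (detail + noise) layer
    high = soft_threshold(high, thresh)   # suppress noise, keep strong structure
    out = np.clip(low + high, 0.0, 1.0)   # recombine (ACCE step on `low` omitted)
    return (out * 255).astype(np.uint8)

if __name__ == "__main__":
    demo = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
    print(decompose_and_denoise(demo).shape)
```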
2005.12844 | Approximation Schemes for ReLU Regression | We consider the fundamental problem of ReLU regression, where the goal is to output the best fitting ReLU with respect to square loss given access to draws from some unknown distribution. We give the first efficient, constant-factor approximation algorithm for this problem assuming the underlying distribution satisfies some weak concentration and anti-concentration conditions (and includes, for example, all log-concave distributions). This solves the main open problem of Goel et al., who proved hardness results for any exact algorithm for ReLU regression (up to an additive $\epsilon$). Using more sophisticated techniques, we can improve our results and obtain a polynomial-time approximation scheme for any subgaussian distribution. Given the aforementioned hardness results, these guarantees can not be substantially improved. Our main insight is a new characterization of surrogate losses for nonconvex activations. While prior work had established the existence of convex surrogates for monotone activations, we show that properties of the underlying distribution actually induce strong convexity for the loss, allowing us to relate the global minimum to the activation's Chow parameters. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 178,839 |
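To make the problem setup concrete, here is a plain gradient-descent baseline for ReLU regression on the square loss with Gaussian (log-concave) inputs. This is only an illustration of the learning problem; it is not the paper's constant-factor approximation algorithm or its surrogate-loss construction.

```python
# Naive square-loss gradient descent for ReLU regression (illustrative baseline).
import numpy as np

def fit_relu(X, y, lr=0.05, epochs=500):
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        z = X @ w
        pred = np.maximum(z, 0.0)                       # ReLU(w . x)
        grad = X.T @ ((pred - y) * (z > 0)) / len(y)    # gradient of the mean square loss
        w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 5))                      # Gaussian marginals (log-concave)
    w_true = rng.normal(size=5)
    y = np.maximum(X @ w_true, 0.0) + 0.01 * rng.normal(size=2000)
    w_hat = fit_relu(X, y)
    print(np.round(w_hat - w_true, 2))
```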
1911.08065 | Adaptive Activation Network and Functional Regularization for Efficient
and Flexible Deep Multi-Task Learning | Multi-task learning (MTL) is a common paradigm that seeks to improve the generalization performance of task learning by training related tasks simultaneously. However, it remains challenging to search for a flexible and accurate architecture that can be shared among multiple tasks. In this paper, we propose a novel deep learning model called Task Adaptive Activation Network (TAAN) that can automatically learn the optimal network architecture for MTL. The main principle of TAAN is to derive flexible activation functions for different tasks from the data with other parameters of the network fully shared. We further propose two functional regularization methods that improve the MTL performance of TAAN. The improved performance of both TAAN and the regularization methods is demonstrated by comprehensive experiments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 154,077
2103.03054 | An Open-Source Low-Cost Mobile Robot System with an RGB-D Camera and
Efficient Real-Time Navigation Algorithm | Currently, mobile robots are developing rapidly and are finding numerous applications in the industry. However, several problems remain related to their practical use, such as the need for expensive hardware and high power consumption levels. In this study, we build a low-cost indoor mobile robot platform that does not include a LiDAR or a GPU. Then, we design an autonomous navigation architecture that guarantees real-time performance on our platform with an RGB-D camera and a low-end off-the-shelf single board computer. The overall system includes SLAM, global path planning, ground segmentation, and motion planning. The proposed ground segmentation approach extracts a traversability map from raw depth images for the safe driving of low-body mobile robots. We apply both rule-based and learning-based navigation policies using the traversability map. Running sensor data processing and other autonomous driving components simultaneously, our navigation policies perform rapidly at a refresh rate of 18 Hz for control command, whereas other systems have slower refresh rates. Our methods show better performances than current state-of-the-art navigation approaches within limited computation resources as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in an indoor environment. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 223,152 |
2301.13261 | Emergence of Maps in the Memories of Blind Navigation Agents | Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment. We ask if machines -- specifically, artificial intelligence (AI) navigation agents -- also build implicit (or 'mental') maps. A positive answer to this question would (a) explain the surprising phenomenon in recent literature of ostensibly map-free neural-networks achieving strong performance, and (b) strengthen the evidence of mapping as a fundamental mechanism for navigation by intelligent embodied agents, whether they be biological or artificial. Unlike animal navigation, we can judiciously design the agent's perceptual system and control the learning paradigm to nullify alternative navigation mechanisms. Specifically, we train 'blind' agents -- with sensing limited to only egomotion and no other sensing of any kind -- to perform PointGoal navigation ('go to $\Delta$ x, $\Delta$ y') via reinforcement learning. Our agents are composed of navigation-agnostic components (fully-connected and recurrent neural networks), and our experimental setup provides no inductive bias towards mapping. Despite these harsh conditions, we find that blind agents are (1) surprisingly effective navigators in new environments (~95% success); (2) they utilize memory over long horizons (remembering ~1,000 steps of past experience in an episode); (3) this memory enables them to exhibit intelligent behavior (following walls, detecting collisions, taking shortcuts); (4) there is emergence of maps and collision detection neurons in the representations of the environment built by a blind agent as it navigates; and (5) the emergent maps are selective and task dependent (e.g. the agent 'forgets' exploratory detours). Overall, this paper presents no new techniques for the AI audience, but a surprising finding, an insight, and an explanation. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 342,827 |
2403.09603 | Optimistic Verifiable Training by Controlling Hardware Nondeterminism | The increasing compute demands of AI systems have led to the emergence of services that train models on behalf of clients lacking necessary resources. However, ensuring correctness of training and guarding against potential training-time attacks, such as data poisoning and backdoors, poses challenges. Existing works on verifiable training largely fall into two classes: proof-based systems, which are difficult to scale, and ``optimistic'' methods that consider a third-party auditor who can replicate the training process and contest the trainer. A key challenge with the latter is that nondeterminism between GPU types during training prevents exact replication of the training process, resulting in schemes that are non-robust. We propose a method that combines training in a higher precision than the target, rounding after intermediate computations, and sharing rounding decisions based on an adaptive thresholding procedure, to successfully control for nondeterminism. Across three different NVIDIA GPUs (A40, Titan XP, RTX 2080 Ti), we achieve exact training replication at FP32 precision for both full-training and fine-tuning of ResNet-50 (23M) and GPT-2 (117M) models. Our verifiable training scheme significantly decreases the storage and time costs compared to proof-based systems, and is publicly released at https://github.com/meghabyte/verifiable-training. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 437,832 |
2308.00364 | Fountain -- an intelligent contextual assistant combining knowledge
representation and language models for manufacturing risk identification | Deviations from the approved design or processes during mass production can lead to unforeseen risks. However, these changes are sometimes necessary due to changes in the product design characteristics or an adaptation in the manufacturing process. A major challenge is to identify these risks early in the workflow so that failures leading to warranty claims can be avoided. We developed Fountain as a contextual assistant integrated in the deviation management workflow that helps in identifying the risks based on the description of the existing design and process criteria and the proposed deviation. In the manufacturing context, it is important that the assistant provides recommendations that are explainable and consistent. We achieve this through a combination of the following two components 1) language models finetuned for domain specific semantic similarity and, 2) knowledge representation in the form of a property graph derived from the bill of materials, Failure Modes and Effect Analysis (FMEA) and prior failures reported by customers. Here, we present the nuances of selecting and adapting pretrained language models for an engineering domain, continuous model updates based on user interaction with the contextual assistant and creating the causal chain for explainable recommendations based on the knowledge representation. Additionally, we demonstrate that the model adaptation is feasible using moderate computational infrastructure already available to most engineering teams in manufacturing organizations and inference can be performed on standard CPU only instances for integration with existing applications making these methods easily deployable. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 382,916 |
1606.01081 | Implementing graph grammars for intelligence analysis in OCaml | We report on implementing graph grammars for intelligence analysis in OCaml. Graph grammars are represented as elements of an algebraic data type in OCaml. In addition to algebraic data types, we use other concepts from functional programming languages to implement features of graph grammars. We use type checking to perform graph pattern matching. Graph transformations are defined as implicit coercions derived from structural subtyping proofs, subset types, lambda abstractions, and analytics. An analytic is a general-purpose OCaml function whose output is required to match a graph pattern described by an element of an algebraic data type. By using a strongly-typed language for representing graphs, we can ensure graphs produced from a graph transformation will match a specific schema. This is a high priority requirement for intelligence analysis. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 56,748 |
1711.05795 | Finer Grained Entity Typing with TypeNet | We consider the challenging problem of entity typing over an extremely fine grained set of types, wherein a single mention or entity can have many simultaneous and often hierarchically-structured types. Despite the importance of the problem, there is a relative lack of resources in the form of fine-grained, deep type hierarchies aligned to existing knowledge bases. In response, we introduce TypeNet, a dataset of entity types consisting of over 1941 types organized in a hierarchy, obtained by manually annotating a mapping from 1081 Freebase types to WordNet. We also experiment with several models comparable to state-of-the-art systems and explore techniques to incorporate a structure loss on the hierarchy with the standard mention typing loss, as a first step towards future research on this dataset. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | 84,638 |
2110.13409 | Task-Aware Meta Learning-based Siamese Neural Network for Classifying
Obfuscated Malware | Malware authors apply different techniques of control flow obfuscation, in order to create new malware variants to avoid detection. Existing Siamese neural network (SNN)-based malware detection methods fail to correctly classify different malware families when such obfuscated malware samples are present in the training dataset, resulting in high false-positive rates. To address this issue, we propose a novel task-aware few-shot-learning-based Siamese Neural Network that is resilient against the presence of malware variants affected by such control flow obfuscation techniques. Using the average entropy features of each malware family as inputs, in addition to the image features, our model generates the parameters for the feature layers, to more accurately adjust the feature embedding for different malware families, each of which has obfuscated malware variants. In addition, our proposed method can classify malware classes, even if only one or a few training samples are available. Our model utilizes few-shot learning with the extracted features of a pre-trained network (e.g., VGG-16), to avoid the bias typically associated with a model trained with a limited number of training samples. Our proposed approach is highly effective in recognizing unique malware signatures, thus correctly classifying malware samples that belong to the same malware family, even in the presence of obfuscated malware variants. Our experimental results, validated by N-way, N-shot learning, show that our model is highly effective in classification accuracy, exceeding a rate of 91%, compared to other similar methods. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 263,167
2205.00484 | Dynamic Programming in Rank Space: Scaling Structured Inference with
Low-Rank HMMs and PCFGs | Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) are widely used structured models, both of which can be represented as factor graph grammars (FGGs), a powerful formalism capable of describing a wide range of models. Recent research found it beneficial to use large state spaces for HMMs and PCFGs. However, inference with large state spaces is computationally demanding, especially for PCFGs. To tackle this challenge, we leverage tensor rank decomposition (a.k.a. CPD) to decrease inference computational complexities for a subset of FGGs subsuming HMMs and PCFGs. We apply CPD on the factors of an FGG and then construct a new FGG defined in the rank space. Inference with the new FGG produces the same result but has a lower time complexity when the rank size is smaller than the state size. We conduct experiments on HMM language modeling and unsupervised PCFG parsing, showing better performance than previous work. Our code is publicly available at https://github.com/VPeterV/RankSpace-Models. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 294,276
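A toy numpy sketch of the core idea for the HMM case: if the transition matrix factorizes as T = U V^T with rank r much smaller than the number of states n, each forward-algorithm update costs O(nr) instead of O(n^2). This only illustrates the rank-space trick; the paper's factor-graph-grammar formulation and the PCFG case are not shown, and all sizes are made up.

```python
# Forward algorithm with a low-rank (CPD-style) transition matrix.
import numpy as np

rng = np.random.default_rng(0)
n_states, rank, n_obs, seq_len = 64, 4, 8, 20

# Random low-rank transition matrix: trans = U_hat @ V.T is row-stochastic.
U = rng.random((n_states, rank))
V = rng.random((n_states, rank))
row_sums = (U @ V.T).sum(axis=1, keepdims=True)
U_hat = U / row_sums

emit = rng.random((n_states, n_obs))
emit /= emit.sum(axis=1, keepdims=True)
start = np.full(n_states, 1.0 / n_states)
obs = rng.integers(n_obs, size=seq_len)

# Forward recursion "in the rank space": two (n x r) products per step.
alpha = start * emit[:, obs[0]]
for t in range(1, seq_len):
    alpha = ((alpha @ U_hat) @ V.T) * emit[:, obs[t]]   # O(n*r) instead of O(n^2)
print("log-likelihood:", np.log(alpha.sum()))
```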
1302.3583 | Flexible Policy Construction by Information Refinement | We report on work towards flexible algorithms for solving decision problems represented as influence diagrams. An algorithm is given to construct a tree structure for each decision node in an influence diagram. Each tree represents a decision function and is constructed incrementally. The improvements to the tree converge to the optimal decision function (neglecting computational costs) and the asymptotic behaviour is only a constant factor worse than dynamic programming techniques, counting the number of Bayesian network queries. Empirical results show how expected utility increases with the size of the tree and the number of Bayesian net calculations. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 22,049 |
2404.06025 | Greedy-DiM: Greedy Algorithms for Unreasonably Effective Face Morphs | Morphing attacks, which aim to create a single image that contains the biometric information of multiple identities, are an emerging threat to state-of-the-art Face Recognition (FR) systems. Diffusion Morphs (DiM) are a recently proposed morphing attack that has achieved state-of-the-art performance for representation-based morphing attacks. However, none of the existing research on DiMs has leveraged their iterative nature, leaving the DiM model as a black box and treating it no differently than one would a Generative Adversarial Network (GAN) or Variational Autoencoder (VAE). We propose a greedy strategy on the iterative sampling process of DiM models which searches for an optimal step guided by an identity-based heuristic function. We compare our proposed algorithm against ten other state-of-the-art morphing algorithms using the open-source SYN-MAD 2022 competition dataset. We find that our proposed algorithm is unreasonably effective, fooling all of the tested FR systems with an MMPMR of 100%, outperforming all other morphing algorithms compared. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 445,304
1911.09826 | Factorized Multimodal Transformer for Multimodal Sequential Learning | The complex world around us is inherently multimodal and sequential (continuous). Information is scattered across different modalities and requires multiple continuous sensors to be captured. As machine learning leaps towards better generalization to real world, multimodal sequential learning becomes a fundamental research area. Arguably, modeling arbitrarily distributed spatio-temporal dynamics within and across modalities is the biggest challenge in this research area. In this paper, we present a new transformer model, called the Factorized Multimodal Transformer (FMT) for multimodal sequential learning. FMT inherently models the intramodal and intermodal (involving two or more modalities) dynamics within its multimodal input in a factorized manner. The proposed factorization allows for increasing the number of self-attentions to better model the multimodal phenomena at hand; without encountering difficulties during training (e.g. overfitting) even on relatively low-resource setups. All the attention mechanisms within FMT have a full time-domain receptive field which allows them to asynchronously capture long-range multimodal dynamics. In our experiments we focus on datasets that contain the three commonly studied modalities of language, vision and acoustic. We perform a wide range of experiments, spanning across 3 well-studied datasets and 21 distinct labels. FMT shows superior performance over previously proposed models, setting new state of the art in the studied datasets. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 154,631 |
2205.07883 | Learning Car Speed Using Inertial Sensors for Dead Reckoning Navigation | A deep neural network (DNN) is trained to estimate the speed of a car driving in an urban area using as input a stream of measurements from a low-cost six-axis inertial measurement unit (IMU). Three hours of data was collected by driving through the city of Ashdod, Israel in a car equipped with a global navigation satellite system (GNSS) real time kinematic (RTK) positioning device and a synchronized IMU. Ground truth labels for the car speed were calculated using the position measurements obtained at the high rate of 50 Hz. A DNN architecture with long short-term memory layers is proposed to enable high-frequency speed estimation that accounts for previous inputs history and the nonlinear relation between speed, acceleration and angular velocity. A simplified aided dead reckoning localization scheme is formulated to assess the trained model which provides the speed pseudo-measurement. The trained model is shown to substantially improve the position accuracy during a 4 minutes drive without the use of GNSS position updates. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,757 |
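A minimal PyTorch sketch of the kind of model described above: an LSTM that maps a window of 6-axis IMU readings (three accelerometer and three gyroscope channels) to a speed estimate. The layer sizes, window length, and synthetic batch are illustrative assumptions, not the paper's architecture or data.

```python
# LSTM speed regressor over IMU windows (illustrative architecture).
import torch
import torch.nn as nn

class SpeedRegressor(nn.Module):
    def __init__(self, imu_channels: int = 6, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(imu_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        # imu_window: (batch, time, 6) -> one speed estimate per window: (batch,)
        out, _ = self.lstm(imu_window)
        return self.head(out[:, -1, :]).squeeze(-1)

if __name__ == "__main__":
    model = SpeedRegressor()
    batch = torch.randn(4, 100, 6)        # 4 windows of 100 IMU samples each
    target = torch.rand(4) * 20.0         # synthetic speeds in m/s
    loss = nn.functional.mse_loss(model(batch), target)
    loss.backward()
    print(float(loss))
```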
1505.03917 | General Riemannian SOM | Kohonen's Self-Organizing Maps (SOMs) have proven to be a successful data-reduction method to identify the intrinsic lower-dimensional sub-manifold of a data set that is scattered in the higher-dimensional feature space. Motivated by the possibly non-Euclidean nature of the feature space and of the intrinsic geometry of the data set, we extend the definition of classic SOMs to obtain the General Riemannian SOM (GRiSOM). We additionally provide an implementation as a proof-of-concept for geometries with constant curvature. We furthermore perform the analytic and numerical analysis of the stability limits of certain (GRi)SOM configurations covering the different possible regular tessellations of the map space in each geometry. A deviation between the numerical and analytic stability limits is observed for the square and hexagonal Euclidean maps with very small neighbourhoods in the map space, whereas the two agree for longer-ranged relations between the map nodes. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 43,119
2211.00713 | MAgNET: A Graph U-Net Architecture for Mesh-Based Simulations | In many cutting-edge applications, high-fidelity computational models prove to be too slow for practical use and are therefore replaced by much faster surrogate models. Recently, deep learning techniques have increasingly been utilized to accelerate such predictions. To enable learning on large-dimensional and complex data, specific neural network architectures have been developed, including convolutional and graph neural networks. In this work, we present a novel encoder-decoder geometric deep learning framework called MAgNET, which extends the well-known convolutional neural networks to accommodate arbitrary graph-structured data. MAgNET consists of innovative Multichannel Aggregation (MAg) layers and graph pooling/unpooling layers, forming a graph U-Net architecture that is analogous to convolutional U-Nets. We demonstrate the predictive capabilities of MAgNET in surrogate modeling for non-linear finite element simulations in the mechanics of solids. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 327,963 |
2502.01784 | VILP: Imitation Learning with Latent Video Planning | In the era of generative AI, integrating video generation models into robotics opens new possibilities for the general-purpose robot agent. This paper introduces imitation learning with latent video planning (VILP). We propose a latent video diffusion model to generate predictive robot videos that adhere to temporal consistency to a good degree. Our method is able to generate highly time-aligned videos from multiple views, which is crucial for robot policy learning. Our video generation model is highly time-efficient. For example, it can generate videos from two distinct perspectives, each consisting of six frames with a resolution of 96x160 pixels, at a rate of 5 Hz. In the experiments, we demonstrate that VILP outperforms the existing video generation robot policy across several metrics: training costs, inference speed, temporal consistency of generated videos, and the performance of the policy. We also compared our method with other imitation learning methods. Our findings indicate that VILP can rely less on extensive high-quality task-specific robot action data while still maintaining robust performance. In addition, VILP possesses robust capabilities in representing multi-modal action distributions. Our paper provides a practical example of how to effectively integrate video generation models into robot policies, potentially offering insights for related fields and directions. For more details, please refer to our open-source repository https://github.com/ZhengtongXu/VILP. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 530,030 |
1706.07457 | Learning Spatial-Aware Regressions for Visual Tracking | In this paper, we analyze the spatial information of deep features, and propose two complementary regressions for robust visual tracking. First, we propose a kernelized ridge regression model wherein the kernel value is defined as the weighted sum of similarity scores of all pairs of patches between two samples. We show that this model can be formulated as a neural network and thus can be efficiently solved. Second, we propose a fully convolutional neural network with spatially regularized kernels, through which the filter kernel corresponding to each output channel is forced to focus on a specific region of the target. Distance transform pooling is further exploited to determine the effectiveness of each output channel of the convolution layer. The outputs from the kernelized ridge regression model and the fully convolutional neural network are combined to obtain the ultimate response. Experimental results on two benchmark datasets validate the effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 75,843 |
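A minimal kernel ridge regression sketch with a standard RBF kernel, included only to make the first component above concrete (the closed-form solution alpha = (K + lambda*I)^{-1} y). The paper's weighted patch-similarity kernel and its neural-network formulation are not reproduced here; the feature vectors and target values below are synthetic stand-ins.

```python
# Closed-form kernel ridge regression with an RBF kernel (generic sketch).
import numpy as np

def rbf_kernel(A: np.ndarray, B: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X: np.ndarray, y: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)   # alpha = (K + lam I)^{-1} y

def predict_krr(alpha: np.ndarray, X_train: np.ndarray, X_test: np.ndarray) -> np.ndarray:
    return rbf_kernel(X_test, X_train) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))          # stand-in for per-sample patch features
    y = np.sin(X[:, 0])                   # stand-in for target response values
    alpha = fit_krr(X, y)
    print(predict_krr(alpha, X, X[:5]))
```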
2309.04038 | S-Adapter: Generalizing Vision Transformer for Face Anti-Spoofing with
Statistical Tokens | Face Anti-Spoofing (FAS) aims to detect malicious attempts to invade a face recognition system by presenting spoofed faces. State-of-the-art FAS techniques predominantly rely on deep learning models but their cross-domain generalization capabilities are often hindered by the domain shift problem, which arises due to different distributions between training and testing data. In this study, we develop a generalized FAS method under the Efficient Parameter Transfer Learning (EPTL) paradigm, where we adapt the pre-trained Vision Transformer models for the FAS task. During training, the adapter modules are inserted into the pre-trained ViT model, and the adapters are updated while other pre-trained parameters remain fixed. We find the limitations of previous vanilla adapters in that they are based on linear layers, which lack a spoofing-aware inductive bias and thus restrict the cross-domain generalization. To address this limitation and achieve cross-domain generalized FAS, we propose a novel Statistical Adapter (S-Adapter) that gathers local discriminative and statistical information from localized token histograms. To further improve the generalization of the statistical tokens, we propose a novel Token Style Regularization (TSR), which aims to reduce domain style variance by regularizing Gram matrices extracted from tokens across different domains. Our experimental results demonstrate that our proposed S-Adapter and TSR provide significant benefits in both zero-shot and few-shot cross-domain testing, outperforming state-of-the-art methods on several benchmark tests. We will release the source code upon acceptance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 390,592 |
2407.15752 | Broad and Spectral-Efficient Beamforming for the Uni-polarized
Reconfigurable Intelligent Surfaces | A reconfigurable intelligent surface (RIS) is composed of low-cost elements that manipulate the propagation environment from a transmitter by intelligently applying phase shifts to incoming signals before they are reflected. This paper explores a uni-polarized RIS with linear shape aimed at transmitting a common signal to multiple user equipments (UEs) spread across a wide angular region. To achieve uniform coverage, the uni-polarized RIS is designed to emit a broad and spectral-efficient beam featuring a spatially flat-like array factor, diverging from the conventional narrow beam approach. To achieve this objective, we start by deriving probabilistic lower and upper bounds for the average spectral efficiency (SE) delivered to the UEs. Leveraging the insights from the lower bound, we focus on optimizing the minimum value of the power domain array factor (PDAF) across a range of azimuth angles from \(-\frac{\pi}{2}\) to \(\frac{\pi}{2}\). We employ the continuous genetic algorithm (CGA) for this optimization task, aiming to improve the SE delivered to the UEs while also creating a wide beam. Extensive simulation experiments are carried out to assess the performance of the proposed code, focusing on key metrics such as the minimum and average values of the PDAF and the SE delivered to the UEs. Our findings demonstrate that the proposed code enhances the minimum SE delivered to the UEs while maintaining the desired attribute of a broad beam. This performance is notably superior to that of established codes, including the Barker, Frank, and Chu codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 475,316 |
2002.00822 | The Heuristic Dynamic Programming Approach in Boost Converters | In this study, a heuristic dynamic programming (HDP) controller is proposed to control a boost converter. Conventional controllers such as proportional-integral-derivative (PID) or proportional-integral (PI) controllers are designed based on the linearized small-signal model near the operating point. Therefore, the performance of the controller during start-up, load changes, or input voltage variations is not optimal, since the system model changes as the operating point varies. The heuristic dynamic programming controller optimally controls the boost converter by following the approximate dynamic programming approach. The advantage of the HDP is that the neural-network-based characteristic of the proposed controller enables boost converters to easily cope with large disturbances. An HDP controller with well-trained critic and action networks can perform as an optimal controller for the boost converter. Simulation results are provided to compare the effectiveness of the traditional PI-based controller and the HDP controller for the boost converter. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 162,481
2110.14597 | Evaluating Deep Learning Models and Adversarial Attacks on
Accelerometer-Based Gesture Authentication | Gesture-based authentication has emerged as a non-intrusive, effective means of authenticating users on mobile devices. Typically, such authentication techniques have relied on classical machine learning techniques, but recently, deep learning techniques have been applied to this problem. Although prior research has shown that deep learning models are vulnerable to adversarial attacks, relatively little research has been done in the adversarial domain for behavioral biometrics. In this research, we collect tri-axial accelerometer gesture data (TAGD) from 46 users and perform classification experiments with both classical machine learning and deep learning models. Specifically, we train and test support vector machines (SVM) and convolutional neural networks (CNN). We then consider a realistic adversarial attack, where we assume the attacker has access to real users' TAGD data, but not the authentication model. We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples, and we show that our deep learning model is surprisingly robust to such an attack scenario. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 263,582
2305.00217 | Still no evidence for an effect of the proportion of non-native speakers
on language complexity -- A response to Kauhanen, Einhaus & Walkden (2023) | In a recent paper published in the Journal of Language Evolution, Kauhanen, Einhaus & Walkden (https://doi.org/10.1093/jole/lzad005, KEW) challenge the results presented in one of my papers (Koplenig, Royal Society Open Science, 6, 181274 (2019), https://doi.org/10.1098/rsos.181274), in which I tried to show through a series of statistical analyses that large numbers of L2 (second language) speakers do not seem to affect the (grammatical or statistical) complexity of a language. To this end, I focus on the way in which the Ethnologue assesses language status: a language is characterised as vehicular if, in addition to being used by L1 (first language) speakers, it should also have a significant number of L2 users. KEW criticise both the use of vehicularity as a (binary) indicator of whether a language has a significant number of L2 users and the idea of imputing a zero proportion of L2 speakers to non-vehicular languages whenever a direct estimate of that proportion is unavailable. While I recognise the importance of post-publication commentary on published research, I show in this rejoinder that both points of criticism are explicitly mentioned and analysed in my paper. In addition, I also comment on other points raised by KEW and demonstrate that both alternative analyses offered by KEW do not stand up to closer scrutiny. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 361,256 |
2008.07870 | Multi-Modal Trajectory Prediction of NBA Players | National Basketball Association (NBA) players are highly motivated and skilled experts that solve complex decision making problems at every time point during a game. As a step towards understanding how players make their decisions, we focus on their movement trajectories during games. We propose a method that captures the multi-modal behavior of players, where they might consider multiple trajectories and select the most advantageous one. The method is built on an LSTM-based architecture predicting multiple trajectories and their probabilities, trained by a multi-modal loss function that updates the best trajectories. Experiments on large, fine-grained NBA tracking data show that the proposed method outperforms the state-of-the-art. In addition, the results indicate that the approach generates more realistic trajectories and that it can learn individual playing styles of specific players. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 192,244 |
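A minimal sketch of a "best-of-K" (winner-takes-all) multi-modal trajectory loss: only the closest of the K predicted trajectories receives the regression gradient, and a classification term pushes probability toward that mode. This is a common formulation for models that predict multiple trajectories with probabilities; the paper's exact loss may differ, and all tensor shapes below are illustrative.

```python
# Winner-takes-all multi-modal trajectory loss with a mode-classification term.
import torch
import torch.nn.functional as F

def multimodal_loss(pred_trajs, mode_logits, gt_traj):
    # pred_trajs: (B, K, T, 2), mode_logits: (B, K), gt_traj: (B, T, 2)
    errors = ((pred_trajs - gt_traj.unsqueeze(1)) ** 2).mean(dim=(2, 3))  # (B, K)
    best_err, best_idx = errors.min(dim=1)                                # winner takes all
    cls = F.cross_entropy(mode_logits, best_idx)                          # favor the best mode
    return best_err.mean() + cls

if __name__ == "__main__":
    torch.manual_seed(0)
    B, K, T = 8, 4, 25
    pred = torch.randn(B, K, T, 2, requires_grad=True)
    logits = torch.randn(B, K, requires_grad=True)
    gt = torch.randn(B, T, 2)
    loss = multimodal_loss(pred, logits, gt)
    loss.backward()
    print(float(loss))
```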
2204.12726 | PRE-NAS: Predictor-assisted Evolutionary Neural Architecture Search | Neural architecture search (NAS) aims to automate architecture engineering in neural networks. This often requires a high computational overhead to evaluate a number of candidate networks from the set of all possible networks in the search space during the search. Prediction of the networks' performance can alleviate this high computational overhead by mitigating the need for evaluating every candidate network. Developing such a predictor typically requires a large number of evaluated architectures, which may be difficult to obtain. We address this challenge by proposing a novel evolutionary-based NAS strategy, Predictor-assisted E-NAS (PRE-NAS), which can perform well even with an extremely small number of evaluated architectures. PRE-NAS leverages new evolutionary search strategies and integrates high-fidelity weight inheritance over generations. Unlike one-shot strategies, which may suffer from bias in the evaluation due to weight sharing, offspring candidates in PRE-NAS are topologically homogeneous, which circumvents bias and leads to more accurate predictions. Extensive experiments on NAS-Bench-201 and DARTS search spaces show that PRE-NAS can outperform state-of-the-art NAS methods. With only a single GPU searching for 0.6 days, PRE-NAS can find competitive architectures that achieve 2.40% and 24% test error rates on CIFAR-10 and ImageNet respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 293,583
2006.10102 | Capturing Label Characteristics in VAEs | We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs: capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop the CCVAE, a novel VAE model and concomitant variational objective which captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of mappings between such characteristic latents and labels, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 182,760
2408.01224 | Multi-head Spatial-Spectral Mamba for Hyperspectral Image Classification | Spatial-Spectral Mamba (SSM) improves computational efficiency and captures long-range dependencies, addressing Transformer limitations. However, traditional Mamba models overlook rich spectral information in HSIs and struggle with high dimensionality and sequential data. To address these issues, we propose the SSM with multi-head self-attention and token enhancement (MHSSMamba). This model integrates spectral and spatial information by enhancing spectral tokens and using multi-head attention to capture complex relationships between spectral bands and spatial locations. It also manages long-range dependencies and the sequential nature of HSI data, preserving contextual information across spectral bands. MHSSMamba achieved remarkable classification accuracies of 97.62% on Pavia University, 96.92% on the University of Houston, 96.85% on Salinas, and 99.49% on Wuhan-longKou datasets. The source code is available at https://github.com/MHassaanButt/MHA_SS_Mamba. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 478,147
2302.07084 | Towards Lightweight and Automated Representation Learning System for
Networks | We propose LIGHTNE 2.0, a cost-effective, scalable, automated, and high-quality network embedding system that scales to graphs with hundreds of billions of edges on a single machine. In contrast to the mainstream belief that distributed architecture and GPUs are needed for large-scale network embedding with good quality, we prove that we can achieve higher quality, better scalability, lower cost, and faster runtime with a shared-memory, CPU-only architecture. LIGHTNE 2.0 combines two theoretically grounded embedding methods, NetSMF and ProNE. We introduce the following techniques to network embedding for the first time: (1) a newly proposed downsampling method to reduce the sample complexity of NetSMF while preserving its theoretical advantages; (2) a high-performance parallel graph processing stack GBBS to achieve high memory efficiency and scalability; (3) a sparse parallel hash table to aggregate and maintain the matrix sparsifier in memory; (4) a fast randomized singular value decomposition (SVD) enhanced by power iteration and fast orthonormalization to improve vanilla randomized SVD in terms of both efficiency and effectiveness; (5) Intel MKL for proposed fast randomized SVD and spectral propagation; and (6) a fast and lightweight AutoML library FLAML for automated hyperparameter tuning. Experimental results show that LIGHTNE 2.0 can be up to 84X faster than GraphVite, 30X faster than PBG and 9X faster than NetSMF while delivering better performance. LIGHTNE 2.0 can embed a very large graph with 1.7 billion nodes and 124 billion edges in half an hour on a CPU server, while other baselines cannot handle very large graphs of this scale. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 345,617
2103.05896 | Streaming Linear System Identification with Reverse Experience Replay | We consider the problem of estimating a linear time-invariant (LTI) dynamical system from a single trajectory via streaming algorithms, which is encountered in several applications including reinforcement learning (RL) and time-series analysis. While the LTI system estimation problem is well-studied in the {\em offline} setting, the practically important streaming/online setting has received little attention. Standard streaming methods like stochastic gradient descent (SGD) are unlikely to work since streaming points can be highly correlated. In this work, we propose a novel streaming algorithm, SGD with Reverse Experience Replay ($\mathsf{SGD}-\mathsf{RER}$), that is inspired by the experience replay (ER) technique popular in the RL literature. $\mathsf{SGD}-\mathsf{RER}$ divides data into small buffers and runs SGD backwards on the data stored in the individual buffers. We show that this algorithm exactly deconstructs the dependency structure and obtains information theoretically optimal guarantees for both parameter error and prediction error. Thus, we provide the first -- to the best of our knowledge -- optimal SGD-style algorithm for the classical problem of linear system identification with a first order oracle. Furthermore, $\mathsf{SGD}-\mathsf{RER}$ can be applied to more general settings like sparse LTI identification with known sparsity pattern, and non-linear dynamical systems. Our work demonstrates that the knowledge of data dependency structure can aid us in designing statistically and computationally efficient algorithms which can "decorrelate" streaming samples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 224,117 |
2201.03538 | Assisting Unknown Teammates in Unknown Tasks: Ad Hoc Teamwork under
Partial Observability | In this paper, we present a novel Bayesian online prediction algorithm for the problem setting of ad hoc teamwork under partial observability (ATPO), which enables on-the-fly collaboration with unknown teammates performing an unknown task without needing a pre-coordination protocol. Unlike previous works that assume a fully observable state of the environment, ATPO accommodates partial observability, using the agent's observations to identify which task is being performed by the teammates. Our approach assumes neither that the teammate's actions are visible nor an environment reward signal. We evaluate ATPO in three domains -- two modified versions of the Pursuit domain with partial observability and the overcooked domain. Our results show that ATPO is effective and robust in identifying the teammate's task from a large library of possible tasks, efficient at solving it in near-optimal time, and scalable in adapting to increasingly larger problem sizes. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 274,875 |
2410.13472 | Day-Night Adaptation: An Innovative Source-free Adaptation Framework for
Medical Image Segmentation | Distribution shifts widely exist in medical images acquired from different medical centres, hindering the deployment of semantic segmentation models trained on one centre (source domain) to another (target domain). While unsupervised domain adaptation has shown significant promise in mitigating these shifts, it poses privacy risks due to sharing data between centres. To facilitate adaptation while preserving data privacy, source-free domain adaptation (SFDA) and test-time adaptation (TTA) have emerged as effective paradigms, relying solely on target domain data. However, SFDA requires a pre-collected target domain dataset before deployment. TTA insufficiently exploits the potential value of test data, as it processes the test data only once. Considering that, in clinical practice, most medical centres operate during the day and remain inactive at night, we propose a novel adaptation framework called Day-Night Adaptation (DyNA) that performs adaptation through day-night cycles without requiring access to source data. During the day, a low-frequency prompt is trained to adapt the frozen model to each test sample. We construct a memory bank for prompt initialization and develop a warm-up mechanism to enhance prompt training. During the night, we reuse test data collected from the day and introduce a global student model to bridge the knowledge between teacher and student models, facilitating model fine-tuning while ensuring training stability. Extensive experiments demonstrate that our DyNA outperforms existing TTA and SFDA methods on two benchmark medical image segmentation tasks. Code will be available after the paper is published. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 499,551
2407.11699 | Relation DETR: Exploring Explicit Position Relation Prior for Object
Detection | This paper presents a general scheme for enhancing the convergence and performance of DETR (DEtection TRansformer). We investigate the slow convergence problem in transformers from a new perspective, suggesting that it arises from the self-attention that introduces no structural bias over inputs. To address this issue, we explore incorporating position relation prior as attention bias to augment object detection, following the verification of its statistical significance using a proposed quantitative macroscopic correlation (MC) metric. Our approach, termed Relation-DETR, introduces an encoder to construct position relation embeddings for progressive attention refinement, which further extends the traditional streaming pipeline of DETR into a contrastive relation pipeline to address the conflicts between non-duplicate predictions and positive supervision. Extensive experiments on both generic and task-specific datasets demonstrate the effectiveness of our approach. Under the same configurations, Relation-DETR achieves a significant improvement (+2.0% AP compared to DINO), state-of-the-art performance (51.7% AP for 1x and 52.1% AP for 2x settings), and a remarkably faster convergence speed (over 40% AP with only 2 training epochs) than existing DETR detectors on COCO val2017. Moreover, the proposed relation encoder serves as a universal plug-in-and-play component, bringing clear improvements for theoretically any DETR-like methods. Furthermore, we introduce a class-agnostic detection dataset, SA-Det-100k. The experimental results on the dataset illustrate that the proposed explicit position relation achieves a clear improvement of 1.3% AP, highlighting its potential towards universal object detection. The code and dataset are available at https://github.com/xiuqhou/Relation-DETR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 473,581 |
2401.00284 | Evaluation is all you need. Prompting Generative Large Language Models
for Annotation Tasks in the Social Sciences. A Primer using Open Models | This paper explores the use of open generative Large Language Models (LLMs) for annotation tasks in the social sciences. The study highlights the challenges associated with proprietary models, such as limited reproducibility and privacy concerns, and advocates for the adoption of open (source) models that can be operated on independent devices. Two examples of annotation tasks are provided: sentiment analysis in tweets and identification of leisure activities in childhood aspirational essays. The study evaluates the performance of different prompting strategies and models (neural-chat-7b-v3-2, Starling-LM-7B-alpha, openchat_3.5, zephyr-7b-alpha and zephyr-7b-beta). The results indicate the need for careful validation and tailored prompt engineering. The study also highlights the advantages of open models for data privacy and reproducibility. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 418,942
2104.11917 | KDF: Kinodynamic Motion Planning via Geometric Sampling-based Algorithms
and Funnel Control | We integrate sampling-based planning techniques with funnel-based feedback control to develop KDF, a new framework for solving the kinodynamic motion-planning problem via funnel control. The considered systems evolve subject to complex, nonlinear, and uncertain dynamics (aka differential constraints). Firstly, we use a geometric planner to obtain a high-level safe path in a user-defined extended free space. Secondly, we develop a low-level funnel control algorithm that guarantees safe tracking of the path by the system. Neither the planner nor the control algorithm use information on the underlying dynamics of the system, which makes the proposed scheme easily distributable to a large variety of different systems and scenarios. Intuitively, the funnel control module is able to implicitly accommodate the dynamics of the system, allowing hence the deployment of purely geometrical motion planners. Extensive computer simulations and experimental results with a 6-DOF robotic arm validate the proposed approach. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 232,060 |
1506.08909 | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured
Multi-Turn Dialogue Systems | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | true | false | false | 44,668 |
2305.09689 | Learning Switching Port-Hamiltonian Systems with Uncertainty
Quantification | Switching physical systems are ubiquitous in modern control applications, for instance, locomotion behavior of robots and animals, power converters with switches and diodes. The dynamics and switching conditions are often hard to obtain or even inaccessible in the case of a priori unknown environments and nonlinear components. Black-box neural networks can learn to approximately represent switching dynamics, but typically require a large amount of data, neglect the underlying axioms of physics, and lack uncertainty quantification. We propose a Gaussian process-based learning approach enhanced by switching Port-Hamiltonian systems (GP-SPHS) to learn physically plausible system dynamics and identify the switching condition. The Bayesian nature of Gaussian processes uses collected data to form a distribution over all possible switching policies and dynamics that allows for uncertainty quantification. Furthermore, the proposed approach preserves the compositional nature of Port-Hamiltonian systems. A simulation with a hopping robot validates the effectiveness of the proposed approach. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 364,739
1811.04604 | Learning Personalized End-to-End Goal-Oriented Dialog | Most existing works on dialog systems only consider conversation content while neglecting the personality of the user the bot is interacting with, which begets several unsolved issues. In this paper, we present a personalized end-to-end model in an attempt to leverage personalization in goal-oriented dialogs. We first introduce a Profile Model which encodes user profiles into distributed embeddings and refers to conversation history from other similar users. Then a Preference Model captures user preferences over knowledge base entities to handle the ambiguity in user requests. The two models are combined into the Personalized MemN2N. Experiments show that the proposed model achieves qualitative performance improvements over state-of-the-art methods. As for human evaluation, it also outperforms other approaches in terms of task completion rate and user satisfaction. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 113,130 |
2408.00688 | Kernel-based multi-step predictors for data-driven analysis and control
of nonlinear systems through the velocity form | We propose kernel-based approaches for the construction of a single-step and multi-step predictor of the velocity form of nonlinear (NL) systems, which describes the time-difference dynamics of the corresponding NL system and admits a highly structured representation. The predictors in turn allow us to formulate completely data-driven representations of the velocity form. The kernel-based formulation that we derive inherently respects the structured quasi-linear and specific time-dependent relationship of the velocity form. This results in an efficient multi-step predictor for the velocity form and hence for nonlinear systems. Moreover, by using the velocity form, our methods open the door for data-driven behavioral analysis and control of nonlinear systems with global stability and performance guarantees. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 477,934
2403.11875 | Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic
Vision Sensors | Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications. However, uncontrolled access to restricted areas threatens privacy and security. Thus, prevention and detection of UAVs are pivotal to guarantee confidentiality and safety. Although active scanning, mainly based on radars, is one of the most accurate technologies, it can be expensive and less versatile than passive inspections, e.g., object recognition. Dynamic vision sensors (DVS) are bio-inspired event-based vision models that leverage timestamped pixel-level brightness changes in fast-moving scenes that adapt well to low-latency object detection. This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection. In particular, we propose a setup to exploit DVS as an alternative to RGB cameras in a real-time and low-power configuration. Our approach leverages the high-dynamic range (HDR) and background suppression of DVS and, when trained with various fast-moving drones, outperforms RGB input in suboptimal ambient conditions such as low illumination and fast-moving scenes. Our results show that F-UAV-D can (i) detect drones using less than 15 W on average and (ii) perform real-time inference (i.e., <50 ms) by leveraging the CPU and GPU nodes of our edge computer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 438,906
2206.06801 | Peripheral Vision Transformer | Human vision possesses a special type of visual processing system called peripheral vision. Partitioning the entire visual field into multiple contour regions based on the distance to the center of our gaze, peripheral vision provides us with the ability to perceive various visual features in different regions. In this work, we take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition. We propose to incorporate peripheral position encoding into the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data. We evaluate the proposed network, dubbed PerViT, on ImageNet-1K and systematically investigate the inner workings of the model for machine perception, showing that the network learns to perceive visual data similarly to the way that human vision does. The performance improvements in image classification over the baselines across different model sizes demonstrate the efficacy of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 302,505
1803.04715 | Hierarchical Learning of Cross-Language Mappings through Distributed
Vector Representations for Code | Translating a program written in one programming language to another can be useful for software development tasks that need functionality implementations in different languages. Although past studies have considered this problem, they may be either specific to the language grammars, or specific to certain kinds of code elements (e.g., tokens, phrases, API uses). This paper proposes a new approach to automatically learn cross-language representations for various kinds of structural code elements that may be used for program translation. Our key idea is twofold: First, we normalize and enrich code token streams with additional structural and semantic information, and train cross-language vector representations for the tokens (a.k.a. shared embeddings) based on word2vec, a neural-network-based technique for producing word embeddings; Second, hierarchically from bottom up, we construct shared embeddings for code elements of higher levels of granularity (e.g., expressions, statements, methods) from the embeddings for their constituents, and then build mappings among code elements across languages based on similarities among embeddings. Our preliminary evaluations on about 40,000 Java and C# source files from 9 software projects show that our approach can automatically learn shared embeddings for various code elements in different languages and identify their cross-language mappings with reasonable Mean Average Precision scores. When compared with an existing tool for mapping library API methods, our approach identifies many more mappings accurately. The mapping results and code can be accessed at https://github.com/bdqnghi/hierarchical-programming-language-mapping. We believe that our idea for learning cross-language vector representations with code structural information can be a useful step towards automated program translation. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | 92,503
2311.04789 | Determination of toxic comments and unintended model bias minimization
using Deep learning approach | Online conversations can be toxic and subjected to threats, abuse, or harassment. To identify toxic text comments, several deep learning and machine learning models have been proposed throughout the years. However, recent studies demonstrate that because of the imbalances in the training data, some models are more likely to show unintended biases including gender bias and identity bias. In this research, our aim is to detect toxic comments and reduce the unintended bias concerning identity features such as race, gender, sex, and religion by fine-tuning an attention-based model called BERT (Bidirectional Encoder Representations from Transformers). We apply weighted loss to address the issue of unbalanced data and compare the performance of a fine-tuned BERT model with a traditional Logistic Regression model in terms of classification and bias minimization. The Logistic Regression model with the TFIDF vectorizer achieves 57.1% accuracy, and the fine-tuned BERT model's accuracy is 89%. Code is available at https://github.com/zim10/Determine_Toxic_comment_and_identity_bias.git | false | false | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | 406,345
2210.02419 | Boundary-Aware Uncertainty for Feature Attribution Explainers | Post-hoc explanation methods have become a critical tool for understanding black-box classifiers in high-stakes applications. However, high-performing classifiers are often highly nonlinear and can exhibit complex behavior around the decision boundary, leading to brittle or misleading local explanations. There is therefore a pressing need to quantify the uncertainty of such explanation methods in order to understand when explanations are trustworthy. In this work we propose the Gaussian Process Explanation UnCertainty (GPEC) framework, which generates a unified uncertainty estimate combining decision boundary-aware uncertainty with explanation function approximation uncertainty. We introduce a novel geodesic-based kernel, which captures the complexity of the target black-box decision boundary. We show theoretically that the proposed kernel similarity increases with decision boundary complexity. The proposed framework is highly flexible; it can be used with any black-box classifier and feature attribution method. Empirical results on multiple tabular and image datasets show that the GPEC uncertainty estimate improves understanding of explanations as compared to existing methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 321,645
2306.08354 | Complete Visibility Algorithm for Autonomous Mobile Luminous Robots
under an Asynchronous Scheduler on Grid Plane | An autonomous mobile robot system is a distributed system consisting of mobile computational entities (called robots) that autonomously and repeatedly perform three operations: Look, Compute, and Move. Various problems related to autonomous mobile robots, such as gathering, pattern formation, or flocking, have been extensively studied to understand the relationship between each robot's capabilities and the solvability of these problems. In this study, we focus on the complete visibility problem, which involves relocating all the robots on an infinite grid plane such that each robot is visible to every other robot. We assume that each robot is a luminous robot (i.e., has a light with a constant number of colors) and opaque (not transparent). In this paper, we propose an algorithm to achieve complete visibility when a set of robots is given. Using only two colors, the algorithm ensures that complete visibility is achieved even when robots operate asynchronously and have no knowledge of the total number of robots on the grid plane. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 373,386
1711.09767 | GazeGAN - Unpaired Adversarial Image Generation for Gaze Estimation | Recent research has demonstrated the ability to estimate gaze on mobile devices by performing inference on the image from the phone's front-facing camera, and without requiring specialized hardware. While this offers wide potential applications such as in human-computer interaction, medical diagnosis and accessibility (e.g., hands free gaze as input for patients with motor disorders), current methods are limited as they rely on collecting data from real users, which is a tedious and expensive process that is hard to scale across devices. There have been some attempts to synthesize eye region data using 3D models that can simulate various head poses and camera settings, however these lack in realism. In this paper, we improve upon a recently suggested method, and propose a generative adversarial framework to generate a large dataset of high resolution colorful images with high diversity (e.g., in subjects, head pose, camera settings) and realism, while simultaneously preserving the accuracy of gaze labels. The proposed approach operates on extended regions of the eye, and even completes missing parts of the image. Using this rich synthesized dataset, and without using any additional training data from real users, we demonstrate improvements over state-of-the-art for estimating 2D gaze position on mobile devices. We further demonstrate cross-device generalization of model performance, as well as improved robustness to diverse head pose, blur and distance. | true | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,468 |
2212.04580 | Effective Dynamics of Generative Adversarial Networks | Generative adversarial networks (GANs) are a class of machine-learning models that use adversarial training to generate new samples with the same (potentially very complex) statistics as the training samples. One major form of training failure, known as mode collapse, involves the generator failing to reproduce the full diversity of modes in the target probability distribution. Here, we present an effective model of GAN training, which captures the learning dynamics by replacing the generator neural network with a collection of particles in the output space; particles are coupled by a universal kernel valid for certain wide neural networks and high-dimensional inputs. The generality of our simplified model allows us to study the conditions under which mode collapse occurs. Indeed, experiments which vary the effective kernel of the generator reveal a mode collapse transition, the shape of which can be related to the type of discriminator through the frequency principle. Further, we find that gradient regularizers of intermediate strengths can optimally yield convergence through critical damping of the generator dynamics. Our effective GAN model thus provides an interpretable physical framework for understanding and improving adversarial training. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 335,490 |