Dataset schema (columns in order):

  id                   string, 9 to 16 characters
  title                string, 4 to 278 characters
  abstract             string, 3 to 4.08k characters
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT,
  cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                       bool, 2 classes each (category flags)
  __index_level_0__    int64, values from 0 to 541k

Each record below lists the id, title and abstract on separate lines, followed by a "Labels:" line naming only the category columns whose flag is true (all other flags are false) and the record's __index_level_0__ value.
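The following is a minimal, hypothetical sketch of how a record in this schema could be assembled and how the 18 boolean category columns collapse into a label list. The use of pandas, the CATEGORY_COLUMNS name, and the inline example record (copied from the first entry below, abstract truncated) are illustrative assumptions, not part of the dataset release.

```python
# Hypothetical sketch: build one record following the schema above and
# recover its multi-label category list from the boolean columns.
import pandas as pd

# Column order as given in the schema (assumption: this naming is stable).
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

# Example record copied from the first entry in the preview (abstract truncated).
record = {
    "id": "2403.17881",
    "title": "Deepfake Generation and Detection: A Benchmark and Survey",
    "abstract": "Deepfake is a technology dedicated to creating highly realistic facial images ...",
    "__index_level_0__": 441672,
}
record.update({col: False for col in CATEGORY_COLUMNS})  # all flags default to False
record["cs.CV"] = True  # the only category set for this record

df = pd.DataFrame([record])

# Collapse the 18 boolean columns into a list of active labels per row.
df["labels"] = df[CATEGORY_COLUMNS].apply(
    lambda row: [name for name, flag in row.items() if flag], axis=1
)
print(df[["id", "title", "labels"]].to_string(index=False))
```

In practice the same label-collapsing step would be applied row-wise to a full DataFrame loaded from whatever serialization the dataset ships in (e.g., Parquet or JSON Lines); that loading path is assumed here, not documented.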
2403.17881
Deepfake Generation and Detection: A Benchmark and Survey
Deepfake is a technology dedicated to creating highly realistic facial images and videos under specific conditions, which has significant application potential in fields such as entertainment, movie production, digital human creation, to name a few. With the advancements in deep learning, techniques primarily represented by Variational Autoencoders and Generative Adversarial Networks have achieved impressive generation results. More recently, the emergence of diffusion models with powerful generation capabilities has sparked a renewed wave of research. In addition to deepfake generation, corresponding detection technologies continuously evolve to regulate the potential misuse of deepfakes, such as for privacy invasion and phishing attacks. This survey comprehensively reviews the latest developments in deepfake generation and detection, summarizing and analyzing current state-of-the-arts in this rapidly evolving field. We first unify task definitions, comprehensively introduce datasets and metrics, and discuss developing technologies. Then, we discuss the development of several related sub-fields and focus on researching four representative deepfake fields: face swapping, face reenactment, talking face generation, and facial attribute editing, as well as forgery detection. Subsequently, we comprehensively benchmark representative methods on popular datasets for each field, fully evaluating the latest and influential published works. Finally, we analyze challenges and future research directions of the discussed fields.
Labels: cs.CV
__index_level_0__: 441,672
1008.4370
Fourier Domain Decoding Algorithm of Non-Binary LDPC codes for Parallel Implementation
For decoding non-binary low-density parity-check (LDPC) codes, logarithm-domain sum-product (Log-SP) algorithms were proposed to reduce the quantization effects of the SP algorithm in conjunction with the FFT. Since the FFT is not applicable in the logarithm domain, the computations required at check nodes in the Log-SP algorithms are computationally intensive. What is worse, check nodes usually have higher degree than variable nodes. As a result, most of the decoding time is spent on check node computations, which leads to a bottleneck effect. In this paper, we propose a Log-SP algorithm in the Fourier domain. With this algorithm, the roles of variable nodes and check nodes are switched: the intensive computations are spread over the lower-degree variable nodes, which can be efficiently calculated in parallel. Furthermore, we develop a fast calculation method for the estimated bits and syndromes in the Fourier domain.
Labels: cs.IT, Other
__index_level_0__: 7,376
1806.04497
A Virtual Environment with Multi-Robot Navigation, Analytics, and Decision Support for Critical Incident Investigation
Accidents and attacks that involve chemical, biological, radiological/nuclear or explosive (CBRNE) substances are rare, but can be of high consequence. Since the investigation of such events is not anybody's routine work, a range of AI techniques can reduce investigators' cognitive load and support decision-making, including: planning the assessment of the scene; ongoing evaluation and updating of risks; control of autonomous vehicles for collecting images and sensor data; reviewing images/videos for items of interest; identification of anomalies; and retrieval of relevant documentation. Because of the rare and high-risk nature of these events, realistic simulations can support the development and evaluation of AI-based tools. We have developed realistic models of CBRNE scenarios and implemented an initial set of tools.
Labels: cs.AI, cs.CY
__index_level_0__: 100,246
2407.12013
Dating ancient manuscripts using radiocarbon and AI-based writing style analysis
Determining the chronology of ancient handwritten manuscripts is essential for reconstructing the evolution of ideas. For the Dead Sea Scrolls, this is particularly important. However, there is an almost complete lack of date-bearing manuscripts evenly distributed across the timeline and written in similar scripts available for palaeographic comparison. Here, we present Enoch, a state-of-the-art AI-based date-prediction model, trained on the basis of new radiocarbon-dated samples of the scrolls. Enoch uses established handwriting-style descriptors and applies Bayesian ridge regression. The challenge of this study is that the number of radiocarbon-dated manuscripts is small, while current machine learning requires an abundance of training data. We show that by using combined angular and allographic writing style feature vectors and applying Bayesian ridge regression, Enoch could predict the radiocarbon-based dates from style, supported by leave-one-out validation, with varied MAEs of 27.9 to 30.7 years relative to the radiocarbon dating. Enoch was then used to estimate the dates of 135 unseen manuscripts, revealing that 79 per cent of the samples were considered 'realistic' upon palaeographic post-hoc evaluation. We present a new chronology of the scrolls. The radiocarbon ranges and Enoch's style-based predictions are often older than the traditionally assumed palaeographic estimates. In the range of 300-50 BCE, Enoch's date prediction provides an improved granularity. The study is in line with current developments in multimodal machine-learning techniques, and the methods can be used for date prediction in other partially-dated manuscript collections. This research shows how Enoch's quantitative, probability-based approach can be a tool for palaeographers and historians, re-dating ancient Jewish key texts and contributing to current debates on Jewish and Christian origins.
Labels: cs.AI, cs.LG, cs.CL, cs.CV, Other
__index_level_0__: 473,721
2207.12884
CFLIT: Coexisting Federated Learning and Information Transfer
Future wireless networks are expected to support diverse mobile services, including artificial intelligence (AI) services and ubiquitous data transmissions. Federated learning (FL), as a revolutionary learning approach, enables collaborative AI model training across distributed mobile edge devices. By exploiting the superposition property of multiple-access channels, over-the-air computation allows concurrent model uploading from massive devices over the same radio resources, and thus significantly reduces the communication cost of FL. In this paper, we study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network. We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system. Under this framework, we aim to maximize the IT data rate and guarantee a given FL convergence performance by optimizing the long-term radio resource allocation. A key challenge that limits the spectrum efficiency of the coexisting system lies in the large overhead incurred by frequent communication between the server and edge devices for FL model aggregation. To address the challenge, we rigorously analyze the impact of the computation-to-communication ratio on the convergence of over-the-air FL in wireless fading channels. The analysis reveals the existence of an optimal computation-to-communication ratio that minimizes the amount of radio resources needed for over-the-air FL to converge to a given error tolerance. Based on the analysis, we propose a low-complexity online algorithm to jointly optimize the radio resource allocation for both the FL devices and IT devices. Extensive numerical simulations verify the superior performance of the proposed design for the coexistence of FL and IT devices in wireless cellular systems.
Labels: cs.LG, cs.IT, Other
__index_level_0__: 310,147
2312.11299
Uncertainty-based Fairness Measures
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings. Tackling this arduous challenge first necessitates defining what it means for an ML model to be fair. This has been addressed by the ML community with various measures of fairness that depend on the prediction outcomes of the ML models, either at the group level or the individual level. These fairness measures are limited in that they utilize point predictions, neglecting their variances, or uncertainties, making them susceptible to noise, missingness and shifts in data. In this paper, we first show that an ML model may appear to be fair with existing point-based fairness measures but biased against a demographic group in terms of prediction uncertainties. Then, we introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty. We demonstrate on many datasets that (i) our uncertainty-based measures are complementary to existing measures of fairness, and (ii) they provide more insights about the underlying issues leading to bias.
Labels: cs.LG, cs.CY
__index_level_0__: 416,507
2202.00791
Mars Terrain Segmentation with Less Labels
Planetary rover systems need to perform terrain segmentation to identify drivable areas as well as identify specific types of soil for sample collection. The latest Martian terrain segmentation methods rely on supervised learning which is very data hungry and difficult to train where only a small number of labeled samples are available. Moreover, the semantic classes are defined differently for different applications (e.g., rover traversal vs. geological) and as a result the network has to be trained from scratch each time, which is an inefficient use of resources. This research proposes a semi-supervised learning framework for Mars terrain segmentation where a deep segmentation network trained in an unsupervised manner on unlabeled images is transferred to the task of terrain segmentation trained on few labeled images. The network incorporates a backbone module which is trained using a contrastive loss function and an output atrous convolution module which is trained using a pixel-wise cross-entropy loss function. Evaluation results using the metric of segmentation accuracy show that the proposed method with contrastive pretraining outperforms plain supervised learning by 2%-10%. Moreover, the proposed model is able to achieve a segmentation accuracy of 91.1% using only 161 training images (1% of the original dataset) compared to 81.9% with plain supervised learning.
Labels: cs.LG, cs.RO, cs.CV
__index_level_0__: 278,258
2211.13032
Monte Carlo Tree Search Algorithms for Risk-Aware and Multi-Objective Reinforcement Learning
In many risk-aware and multi-objective reinforcement learning settings, the utility of the user is derived from a single execution of a policy. In these settings, making decisions based on the average future returns is not suitable. For example, in a medical setting a patient may only have one opportunity to treat their illness. Making decisions using just the expected future returns -- known in reinforcement learning as the value -- cannot account for the potential range of adverse or positive outcomes a decision may have. Therefore, we should use the distribution over expected future returns differently to represent the critical information that the agent requires at decision time by taking both the future and accrued returns into consideration. In this paper, we propose two novel Monte Carlo tree search algorithms. Firstly, we present a Monte Carlo tree search algorithm that can compute policies for nonlinear utility functions (NLU-MCTS) by optimising the utility of the different possible returns attainable from individual policy executions, resulting in good policies for both risk-aware and multi-objective settings. Secondly, we propose a distributional Monte Carlo tree search algorithm (DMCTS) which extends NLU-MCTS. DMCTS computes an approximate posterior distribution over the utility of the returns, and utilises Thompson sampling during planning to compute policies in risk-aware and multi-objective settings. Both algorithms outperform the state-of-the-art in multi-objective reinforcement learning for the expected utility of the returns.
Labels: cs.AI, cs.LG
__index_level_0__: 332,329
1004.2003
The Socceral Force
We have an audacious dream, we would like to develop a simulation and virtual reality system to support the decision making in European football (soccer). In this review, we summarize the efforts that we have made to fulfil this dream until recently. In addition, an introductory version of FerSML (Footballer and Football Simulation Markup Language) is presented in this paper.
Labels: cs.AI, Other
__index_level_0__: 6,148
2406.18359
Optimizing Extension Techniques for Discovering Non-Algebraic Matroids
In this work, we revisit some combinatorial and information-theoretic extension techniques for detecting non-algebraic matroids. These are the Dress-Lov\'asz and Ahlswede-K\"orner extension properties. We provide optimizations of these techniques to reduce their computational complexity, finding new non-algebraic matroids on 9 and 10 points. In addition, we use the Ahlswede-K\"orner extension property to find better lower bounds on the information ratio of secret sharing schemes for ports of non-algebraic matroids.
Labels: cs.IT
__index_level_0__: 467,980
1802.01527
Supporting UAV Cellular Communications through Massive MIMO
In this article, we provide a much-needed study of UAV cellular communications, focusing on the rates achievable for the UAV downlink command and control (C&C) channel. For this key performance indicator, we perform a realistic comparison between existing deployments operating in single-user mode and next-generation multi-user massive MIMO systems. We find that in single-user deployments under heavy data traffic, UAVs flying at 50 m, 150 m, and 300 m achieve the C&C target rate of 100 kbps -- as set by the 3GPP -- in a mere 35%, 2%, and 1% of the cases, respectively. Owing to mitigated interference, a stronger carrier signal, and a spatial multiplexing gain, massive MIMO time division duplex systems can dramatically increase such probability. Indeed, we show that for UAV heights up to 300 m the target rate is met with massive MIMO in 74% and 96% of the cases with and without uplink pilot reuse for channel state information (CSI) acquisition, respectively. On the other hand, the presence of UAVs can significantly degrade the performance of ground users, whose pilot signals are vulnerable to UAV-generated contamination and require protection through uplink power control.
Labels: cs.IT
__index_level_0__: 89,625
1605.00529
Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning
Faced with massive data, is it possible to trade off (statistical) risk, and (computational) space and time? This challenge lies at the heart of large-scale machine learning. Using k-means clustering as a prototypical unsupervised learning problem, we show how we can strategically summarize the data (control space) in order to trade off risk and time when data is generated by a probabilistic model. Our summarization is based on coreset constructions from computational geometry. We also develop an algorithm, TRAM, to navigate the space/time/data/risk tradeoff in practice. In particular, we show that for a fixed risk (or data size), as the data size increases (resp. risk increases) the running time of TRAM decreases. Our extensive experiments on real data sets demonstrate the existence and practical utility of such tradeoffs, not only for k-means but also for Gaussian Mixture Models.
Labels: cs.LG
__index_level_0__: 55,358
1510.07163
Evolutionary Landscape and Management of Population Diversity
The search ability of an Evolutionary Algorithm (EA) depends on the variation among the individuals in the population [3, 4, 8]. Maintaining an optimal level of diversity in the EA population is imperative to ensure that progress of the EA search is unhindered by premature convergence to suboptimal solutions. Clearer understanding of the concept of population diversity, in the context of evolutionary search and premature convergence in particular, is the key to designing efficient EAs. To this end, this paper first presents a brief analysis of the EA population diversity issues. Next we present an investigation on a counter-niching EA technique [4] that introduces and maintains constructive diversity in the population. The proposed approach uses informed genetic operations to reach promising, but unexplored or under-explored areas of the search space, while discouraging premature local convergence. Simulation runs on a suite of standard benchmark test functions with Genetic Algorithm (GA) implementation shows promising results.
Labels: cs.NE
__index_level_0__: 48,173
2402.09486
UMOEA/D: A Multiobjective Evolutionary Algorithm for Uniform Pareto Objectives based on Decomposition
Multiobjective optimization (MOO) is prevalent in numerous applications, in which a Pareto front (PF) is constructed to display optima under various preferences. Previous methods commonly utilize the set of Pareto objectives (particles on the PF) to represent the entire PF. However, the empirical distribution of the Pareto objectives on the PF is rarely studied, which implicitly impedes the generation of diverse and representative Pareto objectives in previous methods. To bridge the gap, we suggest in this paper constructing \emph{uniformly distributed} Pareto objectives on the PF, so as to alleviate the limited diversity found in previous MOO approaches. We are the first to formally define the concept of ``uniformity" for an MOO problem. We optimize the maximal minimal distances on the Pareto front using a neural network, resulting in both asymptotically and non-asymptotically uniform Pareto objectives. Our proposed method is validated through experiments on real-world and synthetic problems, which demonstrates the efficacy in generating high-quality uniform Pareto objectives and the encouraging performance exceeding existing state-of-the-art methods. The detailed model implementation and the code are scheduled to be open-sourced upon publication.
Labels: cs.LG
__index_level_0__: 429,548
2401.08668
Thermodynamic Perspectives on Computational Complexity: Exploring the P vs. NP Problem
The resolution of the P vs. NP problem, a cornerstone in computational theory, remains elusive despite extensive exploration through mathematical logic and algorithmic theory. This paper takes a novel approach by integrating information theory, thermodynamics, and computational complexity, offering a comprehensive landscape of interdisciplinary study. We focus on entropy, a concept traditionally linked with uncertainty and disorder, and reinterpret it to assess the complexity of computational problems. Our research presents a structured framework for establishing entropy profiles within computational tasks, enabling a clear distinction between P and NP-classified problems. This framework quantifies the 'information cost' associated with these problem categories, highlighting their intrinsic computational complexity. We introduce Entropy-Driven Annealing (EDA) as a new method to decipher the energy landscapes of computational problems, focusing on the unique characteristics of NP problems. This method proposes a differential thermodynamic profile for NP problems in contrast to P problems and explores potential thermodynamic routes for finding polynomial-time solutions to NP challenges. Our introduction of EDA and its application to complex computational problems like the Boolean satisfiability problem (SAT) and protein-DNA complexes suggests a potential pathway toward unraveling the intricacies of the P vs. NP problem.
Labels: cs.IT
__index_level_0__: 421,982
2211.00151
A Close Look into the Calibration of Pre-trained Language Models
Pre-trained language models (PLMs) may fail in giving reliable estimates of their predictive uncertainty. We take a close look into this problem, aiming to answer two questions: (1) Do PLMs learn to become calibrated in the training process? (2) How effective are existing calibration methods? For the first question, we conduct fine-grained control experiments to study the dynamic change in PLMs' calibration performance in training. We consider six factors as control variables, including dataset difficulty, available training samples, training steps, the number of tunable parameters, model scale, and pretraining. We observe a consistent change in calibration performance across six factors. We find that PLMs don't learn to become calibrated in training, evidenced by the continual increase in confidence, no matter whether the predictions are correct or not. We highlight that our finding somewhat contradicts two established conclusions: (a) Larger PLMs are more calibrated; (b) Pretraining improves model calibration. Next, we study the effectiveness of existing calibration methods in mitigating the overconfidence issue. Besides unlearnable calibration methods (e.g., label smoothing), we adapt and extend two recently proposed learnable methods that directly collect data to train models to have reasonable confidence estimations. Experimental results show that learnable methods significantly reduce PLMs' confidence in wrong predictions. The code is available at \url{https://github.com/lifan-yuan/PLMCalibration}.
Labels: cs.LG, cs.CL
__index_level_0__: 327,769
2411.02738
Novelty-focused R&D landscaping using transformer and local outlier factor
While numerous studies have explored the field of research and development (R&D) landscaping, the preponderance of these investigations has emphasized predictive analysis based on R&D outcomes, specifically patents, and academic literature. However, the value of research proposals and novelty analysis has seldom been addressed. This study proposes a systematic approach to constructing and navigating the R&D landscape that can be utilized to guide organizations to respond in a reproducible and timely manner to the challenges presented by increasing number of research proposals. At the heart of the proposed approach is the composite use of the transformer-based language model and the local outlier factor (LOF). The semantic meaning of the research proposals is captured with our further-trained transformers, thereby constructing a comprehensive R&D landscape. Subsequently, the novelty of the newly selected research proposals within the annual landscape is quantified on a numerical scale utilizing the LOF by assessing the dissimilarity of each proposal to others preceding and within the same year. A case study examining research proposals in the energy and resource sector in South Korea is presented. The systematic process and quantitative outcomes are expected to be useful decision-support tools, providing future insights regarding R&D planning and roadmapping.
Labels: cs.CL
__index_level_0__: 505,637
2501.02362
Easing Optimization Paths: a Circuit Perspective
Gradient descent is the method of choice for training large artificial intelligence systems. As these systems become larger, a better understanding of the mechanisms behind gradient training would allow us to alleviate compute costs and help steer these systems away from harmful behaviors. To that end, we suggest utilizing the circuit perspective brought forward by mechanistic interpretability. After laying out our intuition, we illustrate how it enables us to design a curriculum for efficient learning in a controlled setting. The code is available at \url{https://github.com/facebookresearch/pal}.
Labels: cs.LG
__index_level_0__: 522,453
2108.04476
SP-GAN: Sphere-Guided 3D Shape Generation and Manipulation
We present SP-GAN, a new unsupervised sphere-guided generative model for direct synthesis of 3D shapes in the form of point clouds. Compared with existing models, SP-GAN is able to synthesize diverse and high-quality shapes with fine details and promote controllability for part-aware shape generation and manipulation, yet trainable without any parts annotations. In SP-GAN, we incorporate a global prior (uniform points on a sphere) to spatially guide the generative process and attach a local prior (a random latent code) to each sphere point to provide local details. The key insight in our design is to disentangle the complex 3D shape generation task into a global shape modeling and a local structure adjustment, to ease the learning process and enhance the shape generation quality. Also, our model forms an implicit dense correspondence between the sphere points and points in every generated shape, enabling various forms of structure-aware shape manipulations such as part editing, part-wise shape interpolation, and multi-shape part composition, etc., beyond the existing generative models. Experimental results, which include both visual and quantitative evaluations, demonstrate that our model is able to synthesize diverse point clouds with fine details and less noise, as compared with the state-of-the-art models.
Labels: cs.CV
__index_level_0__: 250,016
2008.03201
Convolutional neural network based deep-learning architecture for intraprostatic tumour contouring on PSMA PET images in patients with primary prostate cancer
Accurate delineation of the intraprostatic gross tumour volume (GTV) is a prerequisite for treatment approaches in patients with primary prostate cancer (PCa). Prostate-specific membrane antigen positron emission tomography (PSMA-PET) may outperform MRI in GTV detection. However, visual GTV delineation underlies interobserver heterogeneity and is time consuming. The aim of this study was to develop a convolutional neural network (CNN) for automated segmentation of intraprostatic tumour (GTV-CNN) in PSMA-PET. Methods: The CNN (3D U-Net) was trained on [68Ga]PSMA-PET images of 152 patients from two different institutions and the training labels were generated manually using a validated technique. The CNN was tested on two independent internal (cohort 1: [68Ga]PSMA-PET, n=18 and cohort 2: [18F]PSMA-PET, n=19) and one external (cohort 3: [68Ga]PSMA-PET, n=20) test-datasets. Accordance between manual contours and GTV-CNN was assessed with Dice-S{\o}rensen coefficient (DSC). Sensitivity and specificity were calculated for the two internal test-datasets by using whole-mount histology. Results: Median DSCs for cohorts 1-3 were 0.84 (range: 0.32-0.95), 0.81 (range: 0.28-0.93) and 0.83 (range: 0.32-0.93), respectively. Sensitivities and specificities for GTV-CNN were comparable with manual expert contours: 0.98 and 0.76 (cohort 1) and 1 and 0.57 (cohort 2), respectively. Computation time was around 6 seconds for a standard dataset. Conclusion: The application of a CNN for automated contouring of intraprostatic GTV in [68Ga]PSMA- and [18F]PSMA-PET images resulted in a high concordance with expert contours and in high sensitivities and specificities in comparison with histology reference. This robust, accurate and fast technique may be implemented for treatment concepts in primary PCa. The trained model and the study's source code are available in an open source repository.
Labels: cs.CV
__index_level_0__: 190,832
2410.13964
Enhancing Generalization in Sparse Mixture of Experts Models: The Case for Increased Expert Activation in Compositional Tasks
As Transformer models grow in complexity, their ability to generalize to novel, compositional tasks becomes crucial. This study challenges conventional wisdom about sparse activation in Sparse Mixture of Experts (SMoE) models when faced with increasingly complex compositional tasks. Through experiments on the SRAVEN symbolic reasoning task and SKILL-MIX benchmark, we demonstrate that activating more experts improves performance on difficult tasks, with the optimal number of activated experts scaling with task complexity. Our findings reveal that pretrained SMoE-based Large Language Models achieve better results by increasing experts-per-token on challenging compositional tasks.
Labels: cs.LG
__index_level_0__: 499,796
2006.05238
An Efficient Accelerator Design Methodology for Deformable Convolutional Networks
Deformable convolutional networks have demonstrated outstanding performance in object recognition tasks with an effective feature extraction. Unlike standard convolution, the deformable convolution decides the receptive field size using dynamically generated offsets, which leads to an irregular memory access. Especially, the memory access pattern varies both spatially and temporally, making static optimization ineffective. Thus, a naive implementation would lead to an excessive memory footprint. In this paper, we present a novel approach to accelerate deformable convolution on FPGA. First, we propose a novel training method to reduce the size of the receptive field in the deformable convolutional layer without compromising accuracy. By optimizing the receptive field, we can compress the maximum size of the receptive field by 12.6 times. Second, we propose an efficient systolic architecture to maximize its efficiency. We then implement our design on FPGA to support the optimized dataflow. Experimental results show that our accelerator achieves up to 17.25 times speedup over the state-of-the-art accelerator.
Labels: cs.NE, Other
__index_level_0__: 180,991
2305.10427
Accelerating Transformer Inference for Translation via Parallel Decoding
Autoregressive decoding limits the efficiency of transformers for Machine Translation (MT). The community has proposed specific network architectures and learning-based methods to solve this issue, which are expensive and require changes to the MT model, gaining inference speed at the cost of translation quality. In this paper, we propose to address the problem from the point of view of decoding algorithms, a less explored but rather compelling direction. We propose to reframe the standard greedy autoregressive decoding of MT with a parallel formulation leveraging Jacobi and Gauss-Seidel fixed-point iteration methods for fast inference. This formulation makes it possible to speed up existing models without training or modifications while retaining translation quality. We present three parallel decoding algorithms and test them on different languages and models, showing how the parallelization introduces a speedup of up to 38% w.r.t. standard autoregressive decoding and nearly 2x when scaling the method on parallel resources. Finally, we introduce a decoding dependency graph visualizer (DDGviz) that lets us see how the model has learned the conditional dependence between tokens and inspect the decoding procedure.
Labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 365,054
1610.07328
SSH (Sketch, Shingle, & Hash) for Indexing Massive-Scale Time Series
Similarity search on time series is a frequent operation in large-scale data-driven applications. Sophisticated similarity measures are standard for time series matching, as the series are usually misaligned. Dynamic Time Warping (DTW) is the most widely used similarity measure for time series because it combines alignment and matching at the same time. However, the alignment makes DTW slow. To speed up the expensive similarity search with DTW, branch-and-bound pruning strategies are adopted. However, branch-and-bound pruning is only useful for very short queries (low-dimensional time series), and the bounds are quite weak for longer queries. Due to the loose bounds, the branch-and-bound pruning strategy boils down to a brute-force search. To circumvent this issue, we design SSH (Sketch, Shingle, & Hashing), an efficient and approximate hashing scheme which is much faster than the state-of-the-art branch-and-bound searching technique: the UCR suite. SSH uses a novel combination of sketching, shingling and hashing techniques to produce (probabilistic) indexes which align (near perfectly) with the DTW similarity measure. The generated indexes are then used to create hash buckets for sub-linear search. Our results show that SSH is very effective for longer time sequences and prunes around 95% of candidates, leading to a massive speedup in search with DTW. Empirical results on two large-scale benchmark time series datasets show that our proposed method can be around 20 times faster than the state-of-the-art package (UCR suite) without any significant loss in accuracy.
Labels: cs.IR
__index_level_0__: 62,772
2302.12517
Knock Out 2PC with Practicality Intact: a High-performance and General Distributed Transaction Protocol (Technical Report)
Two-phase-commit (2PC) has been widely adopted for distributed transaction processing, but it also jeopardizes throughput by introducing two rounds of network communications and two durable log writes to a transaction's critical path. Despite the various proposals that eliminate 2PC such as deterministic database and access localization, 2PC remains the de facto standard since the alternatives often lack generality (e.g., requiring workloads without branches based on query results). In this paper, we present Primo, a distributed transaction protocol that supports a more general set of workloads without 2PC. Primo features write-conflict-free concurrency control that guarantees once a transaction enters the commit phase, no concurrency conflict (e.g., deadlock) would occur when installing the write-set -- hence the prepare phase is no longer needed to account for any potential conflict from any partition. In addition, Primo further optimizes the transaction path using asynchronous group commit. With that, the durability delay is also taken off the transaction's critical path. Empirical results on Primo are encouraging -- in YCSB and TPC-C, Primo attains 1.42x to 8.25x higher throughput than state-of-the-art general protocols including Sundial and COCO, while having similar latency as COCO which also employs group commit.
Labels: cs.DB
__index_level_0__: 347,600
2004.12765
ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for Computational Humor
Automation of humor detection and rating has interesting use cases in modern technologies, such as humanoid robots, chatbots, and virtual assistants. In this paper, we propose a novel approach for detecting and rating humor in short texts based on a popular linguistic theory of humor. The proposed method begins by separating the sentences of the given text and utilizing the BERT model to generate embeddings for each one. The embeddings are fed to separate lines of hidden layers in a neural network (one line for each sentence) to extract latent features. Finally, the parallel lines are concatenated to determine the congruity and other relationships between the sentences and predict the target value. We accompany the paper with a novel dataset for humor detection consisting of 200,000 formal short texts. In addition to evaluating our work on the novel dataset, we participated in a live machine learning competition focused on rating humor in Spanish tweets. The proposed model obtained F1 scores of 0.982 and 0.869 in the humor detection experiments, which outperform general and state-of-the-art models. The evaluation performed on two contrasting settings confirms the strength and robustness of the model and suggests two important factors in achieving high accuracy in the current task: 1) usage of sentence embeddings and 2) utilizing the linguistic structure of humor in designing the proposed model.
Labels: cs.LG, cs.CL
__index_level_0__: 174,345
2203.06210
Leveraging universality of jet taggers through transfer learning
A significant challenge in the tagging of boosted objects via machine-learning technology is the prohibitive computational cost associated with training sophisticated models. Nevertheless, the universality of QCD suggests that a large amount of the information learnt in the training is common to different physical signals and experimental setups. In this article, we explore the use of transfer learning techniques to develop fast and data-efficient jet taggers that leverage such universality. We consider the graph neural networks LundNet and ParticleNet, and introduce two prescriptions to transfer an existing tagger into a new signal based either on fine-tuning all the weights of a model or alternatively on freezing a fraction of them. In the case of $W$-boson and top-quark tagging, we find that one can obtain reliable taggers using an order of magnitude less data with a corresponding speed-up of the training process. Moreover, while keeping the size of the training data set fixed, we observe a speed-up of the training by up to a factor of three. This offers a promising avenue to facilitate the use of such tools in collider physics experiments.
Labels: cs.LG, cs.CV
__index_level_0__: 285,033
2404.08469
Supervisory Control Theory with Event Forcing
In the Ramadge-Wonham supervisory control theory the only interaction mechanism between supervisor and plant is that the supervisor may enable/disable events from the plant and the plant makes a final decision about which of the enabled events is actually taking place. In this paper, the interaction between supervisor and plant is enriched by allowing the supervisor to force specific events (called forcible events) that are allowed to preempt uncontrollable events. A notion of forcible-controllability is defined that captures the interplay between controllability of a supervisor w.r.t. the uncontrollable events provided by a plant in the setting with event forcing. Existence of a maximally permissive, forcibly-controllable, nonblocking supervisor is shown and an algorithm is provided that computes such a supervisor. The approach is illustrated by two small case studies.
Labels: cs.SY, Other
__index_level_0__: 446,253
0807.3094
Energy-Efficient Resource Allocation in Multiuser MIMO Systems: A Game-Theoretic Framework
This paper focuses on the cross-layer issue of resource allocation for energy efficiency in the uplink of a multiuser MIMO wireless communication system. Assuming that all of the transmitters and the uplink receiver are equipped with multiple antennas, the situation considered is that in which each terminal is allowed to vary its transmit power, beamforming vector, and uplink receiver in order to maximize its own utility, which is defined as the ratio of data throughput to transmit power; the case in which non-linear interference cancellation is used at the receiver is also investigated. Applying a game-theoretic formulation, several non-cooperative games for utility maximization are thus formulated, and their performance is compared in terms of achieved average utility, achieved average SINR and average transmit power at the Nash equilibrium. Numerical results show that the use of the proposed cross-layer resource allocation policies brings remarkable advantages to the network performance.
Labels: cs.IT, Other
__index_level_0__: 2,088
2204.12634
Discrete-Time Adaptive Control of a Class of Nonlinear Systems Using High-Order Tuners
This paper concerns the adaptive control of a class of discrete-time nonlinear systems with all states accessible. Recently, a high-order tuner algorithm was developed for the minimization of convex loss functions with time-varying regressors in the context of an identification problem. Based on Nesterov's algorithm, the high-order tuner was shown to guarantee bounded parameter estimation when regressors vary with time, and to lead to accelerated convergence of the tracking error when regressors are constant. In this paper, we apply the high-order tuner to the adaptive control of a particular class of discrete-time nonlinear dynamical systems. First, we show that for plants of this class, the underlying dynamical error model can be causally converted to an algebraic error model. Second, we show that using this algebraic error model, the high-order tuner can be applied to provably stabilize the class of dynamical systems around a reference trajectory.
Labels: cs.SY
__index_level_0__: 293,541
2011.11985
Adam$^+$: A Stochastic Method with Adaptive Variance Reduction
Adam is a widely used stochastic optimization method for deep learning applications. While practitioners prefer Adam because it requires less parameter tuning, its use is problematic from a theoretical point of view since it may not converge. Variants of Adam have been proposed with provable convergence guarantees, but they tend not to be competitive with Adam in practical performance. In this paper, we propose a new method named Adam$^+$ (pronounced as Adam-plus). Adam$^+$ retains some of the key components of Adam but it also has several noticeable differences: (i) it does not maintain the moving average of the second moment estimate but instead computes the moving average of the first moment estimate at extrapolated points; (ii) its adaptive step size is formed not by dividing by the square root of the second moment estimate but instead by dividing by the root of the norm of the first moment estimate. As a result, Adam$^+$ requires little parameter tuning, like Adam, but it enjoys a provable convergence guarantee. Our analysis further shows that Adam$^+$ enjoys adaptive variance reduction, i.e., the variance of the stochastic gradient estimator reduces as the algorithm converges, hence enjoying adaptive convergence. We also propose a more general variant of Adam$^+$ with different adaptive step sizes and establish their fast convergence rate. Our empirical studies on various deep learning tasks, including image classification, language modeling, and automatic speech recognition, demonstrate that Adam$^+$ significantly outperforms Adam and achieves comparable performance with best-tuned SGD and momentum SGD.
Labels: cs.LG
__index_level_0__: 208,020
2005.08424
Single-sample writers -- "Document Filter" and their impacts on writer identification
Writing can be used as an important biometric modality that allows an individual to be unequivocally identified. This is because the writing of two different persons presents differences that can be explored both in terms of graphometric properties or by addressing the manuscript as a digital image, taking into account the use of image processing techniques that can properly capture different visual attributes of the image (e.g. texture). In this work, we perform a detailed study in which we dissect whether or not the use of a database with only a single sample taken from some writers may skew the results obtained in the experimental protocol. In this sense, we propose here what we call the "document filter". The "document filter" protocol is supposed to be used as a preprocessing technique, in such a way that all the data taken from fragments of the same document must be placed either into the training or into the test set. The rationale behind it is that the classifier must capture the features of the writer itself, and not features regarding other particularities which could affect the writing in a specific document (e.g. the emotional state of the writer, the pen used, the paper type, etc.). By analyzing the literature, one can find several works dealing with the writer identification problem. However, the performance of writer identification systems must also be evaluated taking into account the occurrence of volunteer writers who contributed only a single sample during the creation of the manuscript databases. To address the open issue investigated here, a comprehensive set of experiments was performed on the IAM, BFL and CVL databases. They have shown that, in the most extreme case, the recognition rate obtained using the "document filter" protocol drops from 81.80% to 50.37%.
Labels: cs.CV
__index_level_0__: 177,613
2010.00796
JAKET: Joint Pre-training of Knowledge Graph and Language Understanding
Knowledge graphs (KGs) contain rich information about world knowledge, entities and relations. Thus, they can be great supplements to existing pre-trained language models. However, it remains a challenge to efficiently integrate information from KG into language modeling. And the understanding of a knowledge graph requires related context. We propose a novel joint pre-training framework, JAKET, to model both the knowledge graph and language. The knowledge module and language module provide essential information to mutually assist each other: the knowledge module produces embeddings for entities in text while the language module generates context-aware initial embeddings for entities and relations in the graph. Our design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains. Experimental results on several knowledge-aware NLP tasks show that our proposed framework achieves superior performance by effectively leveraging knowledge in language understanding.
Labels: cs.CL
__index_level_0__: 198,396
2401.14285
POUR-Net: A Population-Prior-Aided Over-Under-Representation Network for Low-Count PET Attenuation Map Generation
Low-dose PET offers a valuable means of minimizing radiation exposure in PET imaging. However, the prevalent practice of employing additional CT scans for generating attenuation maps (u-map) for PET attenuation correction significantly elevates radiation doses. To address this concern and further mitigate radiation exposure in low-dose PET exams, we propose POUR-Net - an innovative population-prior-aided over-under-representation network that aims for high-quality attenuation map generation from low-dose PET. First, POUR-Net incorporates an over-under-representation network (OUR-Net) to facilitate efficient feature extraction, encompassing both low-resolution abstracted and fine-detail features, for assisting deep generation on the full-resolution level. Second, complementing OUR-Net, a population prior generation machine (PPGM) utilizing a comprehensive CT-derived u-map dataset, provides additional prior information to aid OUR-Net generation. The integration of OUR-Net and PPGM within a cascade framework enables iterative refinement of $\mu$-map generation, resulting in the production of high-quality $\mu$-maps. Experimental results underscore the effectiveness of POUR-Net, showing it as a promising solution for accurate CT-free low-count PET attenuation correction, which also surpasses the performance of previous baseline methods.
Labels: cs.AI, cs.CV
__index_level_0__: 424,036
1904.12533
A Complete Classification of the Complexity and Rewritability of Ontology-Mediated Queries based on the Description Logic EL
We provide an ultimately fine-grained analysis of the data complexity and rewritability of ontology-mediated queries (OMQs) based on an EL ontology and a conjunctive query (CQ). Our main results are that every such OMQ is in AC0, NL-complete, or PTime-complete, and that containment in NL coincides with rewritability into linear Datalog (whereas containment in AC0 coincides with rewritability into first-order logic). We establish natural characterizations of the three cases in terms of bounded depth and (un)bounded pathwidth, and show that each of the associated meta problems, such as deciding whether a given OMQ is rewritable into linear Datalog, is ExpTime-complete. We also give a way to construct linear Datalog rewritings when they exist and prove that there are no constant Datalog rewritings.
Labels: cs.AI, Other
__index_level_0__: 129,125
2405.14347
Doubly-Dynamic ISAC Precoding for Vehicular Networks: A Constrained Deep Reinforcement Learning (CDRL) Approach
Integrated sensing and communication (ISAC) technology is essential for supporting vehicular networks. However, the communication channel in this scenario exhibits time variations, and the potential targets may move rapidly, resulting in double dynamics. This nature poses a challenge for real-time precoder design. While optimization-based solutions are widely researched, they are complex and heavily rely on perfect channel-related information, which is impractical under double dynamics. To address this challenge, we propose using constrained deep reinforcement learning to facilitate dynamic updates to the ISAC precoder. Additionally, the primal-dual deep deterministic policy gradient and the Wolpertinger architecture are tailored to efficiently train the algorithm under complex constraints and varying numbers of users. The proposed scheme not only adapts to the dynamics based on observations but also leverages environmental information to enhance performance and reduce complexity. Its superiority over existing candidates has been validated through experiments.
Labels: cs.AI
__index_level_0__: 456,380
1611.05995
Wireless-powered relaying with finite block-length codes
This paper studies the outage probability and the throughput of amplify-and-forward relay networks with wireless information and energy transfer. We use some recent results on finite block-length codes to analyze the system performance in the cases with short codewords. Specifically, the time switching relaying and the power splitting relaying protocols are considered for energy and information transfer. We derive tight approximations for the outage probability/throughput. Then, we analyze the outage probability in asymptotically high signal-to-noise ratios. Finally, we use numerical results to confirm the accuracy of our analysis and to evaluate the system performance in different scenarios. Our results indicate that, in delay-constrained scenarios, the codeword length affects the outage probability/throughput of the joint energy and information transfer systems considerably.
Labels: cs.IT
__index_level_0__: 64,109
2305.13040
SpokenWOZ: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented Dialogue Agents
Task-oriented dialogue (TOD) models have made significant progress in recent years. However, previous studies primarily focus on datasets written by annotators, which has resulted in a gap between academic research and real-world spoken conversation scenarios. While several small-scale spoken TOD datasets are proposed to address robustness issues such as ASR errors, they ignore the unique challenges in spoken conversation. To tackle the limitations, we introduce SpokenWOZ, a large-scale speech-text dataset for spoken TOD, containing 8 domains, 203k turns, 5.7k dialogues and 249 hours of audios from human-to-human spoken conversations. SpokenWOZ further incorporates common spoken characteristics such as word-by-word processing and reasoning in spoken language. Based on these characteristics, we present cross-turn slot and reasoning slot detection as new challenges. We conduct experiments on various baselines, including text-modal models, newly proposed dual-modal models, and LLMs, e.g., ChatGPT. The results show that the current models still have substantial room for improvement in spoken conversation, where the most advanced dialogue state tracker only achieves 25.65% in joint goal accuracy and the SOTA end-to-end model only correctly completes the user request in 52.1% of dialogues. The dataset, code, and leaderboard are available: https://spokenwoz.github.io/.
Labels: cs.AI, cs.CL
__index_level_0__: 366,310
1505.07427
PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization
We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at http://mi.eng.cam.ac.uk/projects/relocalisation/
Labels: cs.RO, cs.CV, cs.NE
__index_level_0__: 43,541
2303.07810
Disentangled Graph Social Recommendation
Social recommender systems have drawn a lot of attention in many online web services, because of the incorporation of social information between users in improving recommendation results. Despite the significant progress made by existing solutions, we argue that current methods fall short in two limitations: (1) Existing social-aware recommendation models only consider collaborative similarity between items, how to incorporate item-wise semantic relatedness is less explored in current recommendation paradigms. (2) Current social recommender systems neglect the entanglement of the latent factors over heterogeneous relations (e.g., social connections, user-item interactions). Learning the disentangled representations with relation heterogeneity poses great challenge for social recommendation. In this work, we design a Disentangled Graph Neural Network (DGNN) with the integration of latent memory units, which empowers DGNN to maintain factorized representations for heterogeneous types of user and item connections. Additionally, we devise new memory-augmented message propagation and aggregation schemes under the graph neural architecture, allowing us to recursively distill semantic relatedness into the representations of users and items in a fully automatic manner. Extensive experiments on three benchmark datasets verify the effectiveness of our model by achieving great improvement over state-of-the-art recommendation techniques. The source code is publicly available at: https://github.com/HKUDS/DGNN.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
351,389
2312.04511
An LLM Compiler for Parallel Function Calling
The reasoning capabilities of the recent LLMs enable them to execute external function calls to overcome their inherent limitations, such as knowledge cutoffs, poor arithmetic skills, or lack of access to private data. This development has allowed LLMs to select and coordinate multiple functions based on the context to tackle more complex problems. However, current methods for function calling often require sequential reasoning and acting for each function which can result in high latency, cost, and sometimes inaccurate behavior. To address this, we introduce LLMCompiler, which executes functions in parallel to efficiently orchestrate multiple function calls. Drawing inspiration from the principles of classical compilers, LLMCompiler enables parallel function calling with three components: (i) a Function Calling Planner, formulating execution plans for function calling; (ii) a Task Fetching Unit, dispatching function calling tasks; and (iii) an Executor, executing these tasks in parallel. LLMCompiler automatically generates an optimized orchestration for the function calls and can be used with both open-source and closed-source models. We have benchmarked LLMCompiler on a range of tasks with different patterns of function calling. We observe consistent latency speedup of up to 3.7x, cost savings of up to 6.7x, and accuracy improvement of up to ~9% compared to ReAct. Our code is available at https://github.com/SqueezeAILab/LLMCompiler.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
413,698
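An illustrative sketch, not the paper's implementation, of the parallel function-calling idea from the LLMCompiler abstract above (2312.04511): tool calls form a small dependency graph, and calls whose dependencies are satisfied run concurrently. The tool names and plan format below are hypothetical.

# Parallel function calling over a toy dependency graph (illustrative sketch).
import asyncio

async def search_web(query: str) -> str:        # hypothetical tool
    await asyncio.sleep(0.1)
    return f"results for {query}"

async def calculator(expr: str) -> str:         # hypothetical tool
    await asyncio.sleep(0.1)
    return str(eval(expr))                      # toy example only

async def run_plan(plan):
    # plan: list of (task_id, coroutine_fn, args, dependency_ids)
    results = {}
    pending = list(plan)
    while pending:
        ready = [t for t in pending if all(d in results for d in t[3])]
        pending = [t for t in pending if t not in ready]
        outs = await asyncio.gather(*(fn(*args) for _, fn, args, _ in ready))
        for (tid, _, _, _), out in zip(ready, outs):
            results[tid] = out
    return results

plan = [
    (1, search_web, ("population of France",), []),
    (2, search_web, ("population of Germany",), []),  # runs in parallel with task 1
    (3, calculator, ("67 + 83",), [1, 2]),            # waits for both searches
]
print(asyncio.run(run_plan(plan)))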
1807.04801
Practical Obstacles to Deploying Active Learning
Active learning (AL) is a widely-used training strategy for maximizing predictive performance subject to a fixed annotation budget. In AL one iteratively selects training examples for annotation, often those for which the current model is most uncertain (by some measure). The hope is that active sampling leads to better performance than would be achieved under independent and identically distributed (i.i.d.) random samples. While AL has shown promise in retrospective evaluations, these studies often ignore practical obstacles to its use. In this paper we show that while AL may provide benefits when used with specific models and for particular domains, the benefits of current approaches do not generalize reliably across models and tasks. This is problematic because in practice one does not have the opportunity to explore and compare alternative AL strategies. Moreover, AL couples the training dataset with the model used to guide its acquisition. We find that subsequently training a successor model with an actively-acquired dataset does not consistently outperform training on i.i.d. sampled data. Our findings raise the question of whether the downsides inherent to AL are worth the modest and inconsistent performance gains it tends to afford.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
102,792
1305.2713
Early Detection of Alzheimer's - A Crucial Requirement
Alzheimer's, an old-age disease affecting people over 65 years, causes problems with memory, thinking and behavior. The disease progresses very slowly and its identification in early stages is very difficult. The symptoms of Alzheimer's appear slowly and gradually worsen. In its early stages, not only the patients themselves but also their loved ones are generally unable to accept that the patient is suffering from the disease. In this paper, we propose a new algorithm to detect Alzheimer's patients at early stages by comparing the Magnetic Resonance Images (MRI) of the patients with those of normal persons of their age. The progress of the disease can also be monitored by periodic comparison of previous and current MRIs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
24,540
2110.13681
Malicious Mode Attack on EV Coordinated Charging Load and MIADRC Defense Strategy
The Internet of Things (IoT) provides a salient communication environment to facilitate the coordinated charging of electric vehicle (EV) load. However, as the IoT is connected to the public network, the coordinated charging system has a low level of cyber security and is greatly vulnerable to malicious attacks. This paper investigates the malicious mode attack (MMA), which is a new cyberattack pattern that simultaneously attacks massive EV charging piles to generate continuous sinusoidal power disturbance with the same frequency as the poorly-damped wide-area electromechanical mode. Thereby, high amplitude forced oscillations could be stimulated by MMA, which seriously threatens power system stability. First, the potential threat of MMA is clarified by investigating the vulnerability of the IoT-based coordinated charging load control system, and a Mirai-like MMA process is pointed out as an example. Then, based on the attack process, an MMA model is established for impact analysis, where expressions of the mean and stochastic responses of the MMA forced oscillation are derived to discover the main impact factors. Further, to mitigate the impact of MMA, a defense strategy based on multi-index information active disturbance rejection control is proposed to improve the stability and anti-disturbance ability of the power system, which considers the impact factors of both mode damping and disturbance compensation. Simulations are conducted to verify the existence and characteristics of MMA threats, and the efficiency of the proposed defense strategy is also validated.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
263,280
1804.02032
A Multi-Layer Approach to Superpixel-based Higher-order Conditional Random Field for Semantic Image Segmentation
Superpixel-based higher-order conditional random fields (SP-HO-CRFs) are known for their effectiveness in enforcing both short and long spatial contiguity for pixelwise labelling in computer vision. However, their higher-order potentials are usually too complex to learn and often incur a high computational cost in performing inference. We propose a new approximation approach to SP-HO-CRFs that resolves these problems. Our approach is a multi-layer CRF framework that inherits the simplicity of pairwise CRFs by formulating both the higher-order and pairwise cues into the same pairwise potentials in the first layer. Essentially, this approach provides accuracy enhancement on the basis of pairwise CRFs without training by reusing their pre-trained parameters and/or weights. The proposed multi-layer approach performs especially well in delineating the boundary details (borders) of object categories such as "trees" and "bushes". Multiple sets of experiments conducted on the MSRC-21 and PASCAL VOC 2012 datasets validate the effectiveness and efficiency of the proposed methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,325
2008.10323
Experimental Verification of Stability Theory for a Planar Rigid Body with Two Unilateral Frictional Contacts
Stability of equilibrium states in mechanical systems with multiple unilateral frictional contacts is an important practical requirement, with high relevance for robotic applications. In our previous work, we theoretically analyzed finite-time Lyapunov stability for a minimal model of planar rigid body with two frictional point contacts. Assuming inelastic impacts and Coulomb friction, conditions for stability and instability of an equilibrium configuration have been derived. In this work, we present for the first time an experimental demonstration of this stability theory, using a variable-structure rigid ''biped'' with frictional footpads on an inclined plane. By changing the biped's center-of-mass location, we attain different equilibrium states, which respond to small perturbations by divergence or convergence, showing remarkable agreement with the predictions of the stability theory. Using high-speed recording of video movies, good quantitative agreement between experiments and numerical simulations is obtained, and limitations of the rigid-body model and inelastic impact assumptions are also studied. The results prove the utility and practical value of our stability theory.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
192,965
2401.03183
Exploring Defeasibility in Causal Reasoning
Defeasibility in causal reasoning implies that the causal relationship between cause and effect can be strengthened or weakened. Namely, the causal strength between cause and effect should increase or decrease with the incorporation of strengthening arguments (supporters) or weakening arguments (defeaters), respectively. However, existing works ignore defeasibility in causal reasoning and fail to evaluate existing causal strength metrics in defeasible settings. In this work, we present $\delta$-CAUSAL, the first benchmark dataset for studying defeasibility in causal reasoning. $\delta$-CAUSAL includes around 11K events spanning ten domains, featuring defeasible causality pairs, i.e., cause-effect pairs accompanied by supporters and defeaters. We further show current causal strength metrics fail to reflect the change of causal strength with the incorporation of supporters or defeaters in $\delta$-CAUSAL. To this end, we propose CESAR (Causal Embedding aSsociation with Attention Rating), a metric that measures causal strength based on token-level causal relationships. CESAR achieves a significant 69.7% relative improvement over existing metrics, increasing from 47.2% to 80.1% in capturing the causal strength change brought by supporters and defeaters. We further demonstrate even Large Language Models (LLMs) like GPT-3.5 still lag 4.5 and 10.7 points behind humans in generating supporters and defeaters, emphasizing the challenge posed by $\delta$-CAUSAL.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
420,006
2112.13320
Budget Sensitive Reannotation of Noisy Relation Classification Data Using Label Hierarchy
Large crowd-sourced datasets are often noisy, and relation classification (RC) datasets are no exception. Reannotating the entire dataset is one possible solution; however, it is not always viable due to time and budget constraints. This paper addresses the problem of efficient reannotation of a large noisy dataset for RC. Our goal is to catch more annotation errors in the dataset while reannotating fewer instances. Existing work on RC dataset reannotation lacks flexibility in how much data to reannotate. We introduce the concept of a reannotation budget to overcome this limitation. The immediate follow-up problem is: given a specific reannotation budget, which subset of the data should we reannotate? To address this problem, we present two strategies to selectively reannotate RC datasets. Our strategies utilize the taxonomic hierarchy of relation labels. The intuition of our work is to rely on the graph distance between actual and predicted relation labels in the label hierarchy graph. We evaluate our reannotation strategies on the well-known TACRED dataset. We design our experiments to answer three specific research questions. First, does our strategy select novel candidates for reannotation? Second, for a given reannotation budget, is our reannotation strategy more efficient at catching annotation errors? Third, what is the impact of data reannotation on RC model performance measurement? Experimental results show that both of our reannotation strategies are novel and efficient. Our analysis indicates that the current reported performance of RC models on noisy TACRED data is inflated.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
273,208
1809.04720
Sim-to-Real Transfer Learning using Robustified Controllers in Robotic Tasks involving Complex Dynamics
Learning robot tasks or controllers using deep reinforcement learning has been proven effective in simulations. Learning in simulation has several advantages. For example, one can fully control the simulated environment, including halting motions while performing computations. Another advantage when robots are involved, is that the amount of time a robot is occupied learning a task---rather than being productive---can be reduced by transferring the learned task to the real robot. Transfer learning requires some amount of fine-tuning on the real robot. For tasks which involve complex (non-linear) dynamics, the fine-tuning itself may take a substantial amount of time. In order to reduce the amount of fine-tuning we propose to learn robustified controllers in simulation. Robustified controllers are learned by exploiting the ability to change simulation parameters (both appearance and dynamics) for successive training episodes. An additional benefit for this approach is that it alleviates the precise determination of physics parameters for the simulator, which is a non-trivial task. We demonstrate our proposed approach on a real setup in which a robot aims to solve a maze game, which involves complex dynamics due to static friction and potentially large accelerations. We show that the amount of fine-tuning in transfer learning for a robustified controller is substantially reduced compared to a non-robustified controller.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
107,638
2304.02767
MethaneMapper: Spectral Absorption aware Hyperspectral Transformer for Methane Detection
Methane (CH$_4$) is the chief contributor to global climate change. Recent Airborne Visible-Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) has been very useful in quantitative mapping of methane emissions. Existing methods for analyzing this data are sensitive to local terrain conditions, often require manual inspection from domain experts, prone to significant error and hence are not scalable. To address these challenges, we propose a novel end-to-end spectral absorption wavelength aware transformer network, MethaneMapper, to detect and quantify the emissions. MethaneMapper introduces two novel modules that help to locate the most relevant methane plume regions in the spectral domain and uses them to localize these accurately. Thorough evaluation shows that MethaneMapper achieves 0.63 mAP in detection and reduces the model size (by 5x) compared to the current state of the art. In addition, we also introduce a large-scale dataset of methane plume segmentation mask for over 1200 AVIRIS-NG flight lines from 2015-2022. It contains over 4000 methane plume sites. Our dataset will provide researchers the opportunity to develop and advance new methods for tackling this challenging green-house gas detection problem with significant broader social impact. Dataset and source code are public
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
356,537
2210.10365
A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells
Collaborative robotic industrial cells are workspaces where robots collaborate with human operators. In this context, safety is paramount, and for that a complete perception of the space where the collaborative robot is inserted is necessary. To ensure this, collaborative cells are equipped with a large set of sensors of multiple modalities, covering the entire work volume. However, the fusion of information from all these sensors requires an accurate extrinsic calibration. The calibration of such complex systems is challenging, due to the number of sensors and modalities, and also due to the small overlapping fields of view between the sensors, which are positioned to capture different viewpoints of the cell. This paper proposes a sensor to pattern methodology that can calibrate a complex system such as a collaborative cell in a single optimization procedure. Our methodology can tackle RGB and Depth cameras, as well as LiDARs. Results show that our methodology is able to accurately calibrate a collaborative cell containing three RGB cameras, a depth camera and three 3D LiDARs.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
324,893
1612.07771
Highway and Residual Networks learn Unrolled Iterative Estimation
The past year saw the introduction of new architectures such as Highway networks and Residual networks which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation -- a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of Highway and Residual networks. Finally we provide preliminary experiments to discuss the similarities and differences between the two architectures.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
65,981
2306.06184
A Unified Model and Dimension for Interactive Estimation
We study an abstract framework for interactive learning called interactive estimation in which the goal is to estimate a target from its "similarity" to points queried by the learner. We introduce a combinatorial measure called dissimilarity dimension which largely captures learnability in our model. We present a simple, general, and broadly-applicable algorithm, for which we obtain both regret and PAC generalization bounds that are polynomial in the new dimension. We show that our framework subsumes and thereby unifies two classic learning models: statistical-query learning and structured bandits. We also delineate how the dissimilarity dimension is related to well-known parameters for both frameworks, in some cases yielding significantly improved analyses.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
372,495
2310.11564
Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging
While Reinforcement Learning from Human Feedback (RLHF) aligns Large Language Models (LLMs) with general, aggregate human preferences, it is suboptimal for learning diverse, individual perspectives. In this work, we study the Reinforcement Learning from Personalized Human Feedback (RLPHF) problem, wherein LLMs are aligned to multiple (sometimes conflicting) preferences by modeling alignment as a Multi-Objective Reinforcement Learning (MORL) problem. Compared to strong single-objective baselines, we show that we can achieve personalized alignment by decomposing preferences into multiple dimensions. These dimensions are defined based on personalizations that are declared as desirable by the user. We further show that they can be efficiently trained independently in a distributed manner and combined effectively post-hoc through parameter merging. The code is available at https://github.com/joeljang/RLPHF.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
400,684
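A minimal sketch of post-hoc parameter merging in the spirit of the abstract above (2310.11564), assuming PyTorch; the merge shown is a simple weighted average of per-preference model weights, one common way such merging is done, not necessarily the paper's exact procedure.

# Weighted averaging of state dicts from models fine-tuned for different preferences.
import torch

def merge_state_dicts(state_dicts, weights):
    # weights are user-chosen importance scores for each preference dimension
    weights = [w / sum(weights) for w in weights]
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Toy example with two tiny "models" sharing the same architecture.
m1 = torch.nn.Linear(4, 2)
m2 = torch.nn.Linear(4, 2)
merged = merge_state_dicts([m1.state_dict(), m2.state_dict()], weights=[0.7, 0.3])
m3 = torch.nn.Linear(4, 2)
m3.load_state_dict(merged)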
1704.01926
Semantically-Guided Video Object Segmentation
This paper tackles the problem of semi-supervised video object segmentation, that is, segmenting an object in a sequence given its mask in the first frame. One of the main challenges in this scenario is the change of appearance of the objects of interest. Their semantics, on the other hand, do not vary. This paper investigates how to take advantage of such invariance via the introduction of a semantic prior that guides the appearance model. Specifically, given the segmentation mask of the first frame of a sequence, we estimate the semantics of the object of interest, and propagate that knowledge throughout the sequence to improve the results based on an appearance model. We present Semantically-Guided Video Object Segmentation (SGV), which improves results over previous state of the art on two different datasets using a variety of evaluation metrics, while running in half a second per frame.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
71,351
1801.05388
UAV Offloading: Spectrum Trading Contract Design for UAV Assisted 5G Networks
Unmanned Aerial Vehicle (UAV) has been recognized as a promising way to assist future wireless communications due to its high flexibility of deployment and scheduling. In this paper, we focus on temporarily deployed UAVs that provide downlink data offloading in some regions under a macro base station (MBS). Since the manager of the MBS and the operators of the UAVs could be of different interest groups, we formulate the corresponding spectrum trading problem by means of contract theory, where the manager of the MBS has to design an optimal contract to maximize its own revenue. Such contract comprises a set of bandwidth options and corresponding prices, and each UAV operator only chooses the most profitable one from all the options in the whole contract. We analytically derive the optimal pricing strategy based on fixed bandwidth assignment, and then propose a dynamic programming algorithm to calculate the optimal bandwidth assignment in polynomial time. By simulations, we compare the outcome of the MBS optimal contract with that of a social optimal one, and find that a selfish MBS manager sells less bandwidth to the UAV operators.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
88,446
1205.5141
There is no [21, 5, 14] code over F5
In this note, we demonstrate that there is no [21, 5, 14] code over F5.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
16,149
2107.14613
Incorporation of Deep Neural Network & Reinforcement Learning with Domain Knowledge
We present a study of the ways in which domain information has been incorporated when building models with neural networks. Integrating such domain data is uniquely important to the development of knowledge-understanding models, as well as to other fields that aid in understanding information by utilizing the human-machine interface and reinforcement learning. On many such occasions, machine-based model development may benefit substantially from human knowledge of the world encoded in a sufficiently precise form. This paper examines broad ways to encode such information as logical and mathematical constraints, and describes methods and results that fall into several subcategories under each of those approaches.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
248,526
2209.02655
Concentration of polynomial random matrices via Efron-Stein inequalities
Analyzing concentration of large random matrices is a common task in a wide variety of fields. Given independent random variables, many tools are available to analyze random matrices whose entries are linear in the variables, e.g. the matrix-Bernstein inequality. However, in many applications, we need to analyze random matrices whose entries are polynomials in the variables. These arise naturally in the analysis of spectral algorithms, e.g., Hopkins et al. [STOC 2016], Moitra-Wein [STOC 2019]; and in lower bounds for semidefinite programs based on the Sum of Squares hierarchy, e.g. Barak et al. [FOCS 2016], Jones et al. [FOCS 2021]. In this work, we present a general framework to obtain such bounds, based on the matrix Efron-Stein inequalities developed by Paulin-Mackey-Tropp [Annals of Probability 2016]. The Efron-Stein inequality bounds the norm of a random matrix by the norm of another simpler (but still random) matrix, which we view as arising by "differentiating" the starting matrix. By recursively differentiating, our framework reduces the main task to analyzing far simpler matrices. For Rademacher variables, these simpler matrices are in fact deterministic and hence, analyzing them is far easier. For general non-Rademacher variables, the task reduces to scalar concentration, which is much easier. Moreover, in the setting of polynomial matrices, our results generalize the work of Paulin-Mackey-Tropp. Using our basic framework, we recover known bounds in the literature for simple "tensor networks" and "dense graph matrices". Using our general framework, we derive bounds for "sparse graph matrices", which were obtained only recently by Jones et al. [FOCS 2021] using a nontrivial application of the trace power method, and was a core component in their work. We expect our framework to be helpful for other applications involving concentration phenomena for nonlinear random matrices.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
316,270
2304.11249
eWaSR -- an embedded-compute-ready maritime obstacle detection network
Maritime obstacle detection is critical for safe navigation of autonomous surface vehicles (ASVs). While the accuracy of image-based detection methods has advanced substantially, their computational and memory requirements prohibit deployment on embedded devices. In this paper we analyze the currently best-performing maritime obstacle detection network WaSR. Based on the analysis we then propose replacements for the most computationally intensive stages and propose its embedded-compute-ready variant eWaSR. In particular, the new design follows the most recent advancements of transformer-based lightweight networks. eWaSR achieves comparable detection results to state-of-the-art WaSR with only 0.52% F1 score performance drop and outperforms other state-of-the-art embedded-ready architectures by over 9.74% in F1 score. On a standard GPU, eWaSR runs 10x faster than the original WaSR (115 FPS vs 11 FPS). Tests on a real embedded device OAK-D show that, while WaSR cannot run due to memory restrictions, eWaSR runs comfortably at 5.5 FPS. This makes eWaSR the first practical embedded-compute-ready maritime obstacle detection network. The source code and trained eWaSR models are publicly available here: https://github.com/tersekmatija/eWaSR.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
359,735
2403.09577
The NeRFect Match: Exploring NeRF Features for Visual Localization
In this work, we propose the use of Neural Radiance Fields (NeRF) as a scene representation for visual localization. Recently, NeRF has been employed to enhance pose regression and scene coordinate regression models by augmenting the training database, providing auxiliary supervision through rendered images, or serving as an iterative refinement module. We extend its recognized advantages -- its ability to provide a compact scene representation with realistic appearances and accurate geometry -- by exploring the potential of NeRF's internal features in establishing precise 2D-3D matches for localization. To this end, we conduct a comprehensive examination of NeRF's implicit knowledge, acquired through view synthesis, for matching under various conditions. This includes exploring different matching network architectures, extracting encoder features at multiple layers, and varying training configurations. Significantly, we introduce NeRFMatch, an advanced 2D-3D matching function that capitalizes on the internal knowledge of NeRF learned via view synthesis. Our evaluation of NeRFMatch on standard localization benchmarks, within a structure-based pipeline, sets a new state-of-the-art for localization performance on Cambridge Landmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
437,822
2110.01532
Differentiable Spline Approximations
The paradigm of differentiable programming has significantly enhanced the scope of machine learning via the judicious use of gradient-based optimization. However, standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable, limiting their applicability. Our goal in this paper is to use a new, principled approach to extend gradient-based optimization to functions well modeled by splines, which encompass a large family of piecewise polynomial models. We derive the form of the (weak) Jacobian of such functions and show that it exhibits a block-sparse structure that can be computed implicitly and efficiently. Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
258,803
2103.06257
Maximum Entropy RL (Provably) Solves Some Robust RL Problems
Many potential applications of reinforcement learning (RL) require guarantees that the agent will perform well in the face of disturbances to the dynamics or reward function. In this paper, we prove theoretically that maximum entropy (MaxEnt) RL maximizes a lower bound on a robust RL objective, and thus can be used to learn policies that are robust to some disturbances in the dynamics and the reward function. While this capability of MaxEnt RL has been observed empirically in prior work, to the best of our knowledge our work provides the first rigorous proof and theoretical characterization of the MaxEnt RL robust set. While a number of prior robust RL algorithms have been designed to handle similar disturbances to the reward function or dynamics, these methods typically require additional moving parts and hyperparameters on top of a base RL algorithm. In contrast, our results suggest that MaxEnt RL by itself is robust to certain disturbances, without requiring any additional modifications. While this does not imply that MaxEnt RL is the best available robust RL method, MaxEnt RL is a simple robust RL method with appealing formal guarantees.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
224,245
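For reference, the maximum-entropy RL objective analyzed in the abstract above (2103.06257) can be written in its standard textbook form; this formulation is background knowledge rather than a quotation from the paper, and the abstract's claim is that maximizing it maximizes a lower bound on a robust RL objective.

% Standard MaxEnt RL objective: expected return plus an entropy bonus with temperature alpha.
J_{\mathrm{MaxEnt}}(\pi) \;=\; \mathbb{E}_{\tau \sim \pi}\!\Big[\sum_{t} r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\Big],
\qquad
\mathcal{H}\big(\pi(\cdot \mid s)\big) \;=\; -\sum_{a} \pi(a \mid s)\,\log \pi(a \mid s), \qquad \alpha > 0.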
2106.00107
3D map creation using crowdsourced GNSS data
3D maps are increasingly useful for many applications such as drone navigation, emergency services, and urban planning. However, creating 3D maps and keeping them up-to-date using existing technologies, such as laser scanners, is expensive. This paper proposes and implements a novel approach to generate 2.5D (otherwise known as 3D level-of-detail (LOD) 1) maps for free using Global Navigation Satellite Systems (GNSS) signals, which are globally available and are blocked only by obstacles between the satellites and the receivers. This enables us to find the patterns of GNSS signal availability and create 3D maps. The paper applies algorithms to GNSS signal strength patterns based on a boot-strapped technique that iteratively trains the signal classifiers while generating the map. Results of the proposed technique demonstrate the ability to create 3D maps using automatically processed GNSS data. The results show that the third dimension, i.e. height of the buildings, can be estimated with below 5 metre accuracy, which is the benchmark recommended by the CityGML standard.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
237,984
2202.01723
Systems Biology: Identifiability analysis and parameter identification via systems-biology informed neural networks
The dynamics of systems biological processes are usually modeled by a system of ordinary differential equations (ODEs) with many unknown parameters that need to be inferred from noisy and sparse measurements. Here, we introduce systems-biology informed neural networks for parameter estimation by incorporating the system of ODEs into the neural networks. To complete the workflow of system identification, we also describe structural and practical identifiability analysis to analyze the identifiability of parameters. We use the ultridian endocrine model for glucose-insulin interaction as the example to demonstrate all these methods and their implementation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
278,564
1510.05043
A cost function for similarity-based hierarchical clustering
The development of algorithms for hierarchical clustering has been hampered by a shortage of precise objective functions. To help address this situation, we introduce a simple cost function on hierarchies over a set of points, given pairwise similarities between those points. We show that this criterion behaves sensibly in canonical instances and that it admits a top-down construction procedure with a provably good approximation ratio.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
47,977
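A small illustrative computation of a Dasgupta-style hierarchical clustering cost for a candidate tree, assuming the commonly cited form of the objective (each pair of points is charged its similarity times the number of leaves under the pair's lowest common ancestor); the tree encoding and example weights below are ad hoc, not taken from the paper.

# Cost of a binary hierarchy over points, given pairwise similarities (illustrative sketch).
def leaves(tree):
    # A tree is either a leaf (an int) or a tuple (left_subtree, right_subtree).
    if isinstance(tree, int):
        return {tree}
    left, right = tree
    return leaves(left) | leaves(right)

def cost(tree, sim):
    # sim: dict mapping frozenset({i, j}) -> similarity weight
    if isinstance(tree, int):
        return 0.0
    left, right = tree
    ll, rl = leaves(left), leaves(right)
    size = len(ll) + len(rl)  # leaves under this node = LCA size for pairs split here
    split = sum(sim.get(frozenset({i, j}), 0.0) for i in ll for j in rl)
    return size * split + cost(left, sim) + cost(right, sim)

# Tiny example: points 0..3 arranged as the tree ((0, 1), (2, 3)).
sim = {frozenset({0, 1}): 1.0, frozenset({2, 3}): 1.0, frozenset({0, 2}): 0.1}
print(cost(((0, 1), (2, 3)), sim))  # 2*1.0 + 2*1.0 + 4*0.1 = 4.4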
2308.12675
A Study of Age and Sex Bias in Multiple Instance Learning based Classification of Acute Myeloid Leukemia Subtypes
Accurate classification of Acute Myeloid Leukemia (AML) subtypes is crucial for clinical decision-making and patient care. In this study, we investigate the potential presence of age and sex bias in AML subtype classification using Multiple Instance Learning (MIL) architectures. To that end, we train multiple MIL models using different levels of sex imbalance in the training set and excluding certain age groups. To assess the sex bias, we evaluate the performance of the models on male and female test sets. For age bias, models are tested against underrepresented age groups in the training data. We find a significant effect of sex and age bias on the performance of the model for AML subtype classification. Specifically, we observe that females are more likely to be affected by a sex-imbalanced dataset, and certain age groups, such as patients 72 to 86 years of age with the RUNX1::RUNX1T1 genetic subtype, are significantly affected by an age bias present in the training data. Ensuring inclusivity in the training data is thus essential for generating reliable and equitable outcomes in AML genetic subtype classification, ultimately benefiting diverse patient populations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
387,635
2302.02785
An intelligent tutor for planning in large partially observable environments
AI can not only outperform people in many planning tasks, but it can also teach them how to plan better. A recent and promising approach to improving human decision-making is to create intelligent tutors that utilize AI to discover and teach optimal planning strategies automatically. Prior work has shown that this approach can improve planning in artificial, fully observable planning tasks. Unlike these artificial tasks, the real world is only partially observable. To bridge this gap, we developed and evaluated the first intelligent tutor for planning in partially observable environments. Compared to previous intelligent tutors for teaching planning strategies, this novel intelligent tutor combines two innovations: 1) a new metareasoning algorithm for discovering optimal planning strategies for large, partially observable environments, and 2) scaffolding the learning process by having the learner choose from an increasingly larger set of planning operations in increasingly larger planning problems. We found that our new strategy discovery algorithm is superior to the state-of-the-art. A preregistered experiment with 330 participants demonstrated that the new intelligent tutor is highly effective at improving people's ability to make good decisions in partially observable environments. This suggests our human-centered tutoring approach can successfully boost human planning in complex, partially observable sequential decision problems, a promising step towards using AI-powered intelligent tutors to improve human planning in the real world.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
344,111
2411.13770
A Novel Passive Occupational Shoulder Exoskeleton With Adjustable Peak Assistive Torque Angle For Overhead Tasks
Objective: Overhead tasks are a primary inducement to work-related musculoskeletal disorders. Aiming to reduce shoulder physical loads, passive shoulder exoskeletons are increasingly prevalent in the industry due to their light weight, affordability, and effectiveness. However, they can only accommodate a specific task and cannot effectively balance between compactness and a sufficient range of motion. Method: We propose a novel passive occupational shoulder exoskeleton to handle various overhead tasks with different arm elevation angles, ensuring a sufficient range of motion (ROM) while maintaining compactness. By formulating kinematic models and simulations, an ergonomic shoulder structure was developed. Then, we present a torque generator equipped with an adjustable peak assistive torque angle to switch between low and high assistance phases through a passive clutch mechanism. Ten healthy participants were recruited to validate its functionality by performing the screwing task. Results: Measured range of motion results demonstrated that the exoskeleton can ensure a sufficient ROM in both sagittal (164°) and horizontal (158°) flexion/extension movements. The experimental results of the screwing task showed that the exoskeleton could reduce muscle activation (by up to 49.6%), perceived effort and frustration, and provide an improved user experience (scored 79.7 out of 100). Conclusion: These results indicate that the proposed exoskeleton can guarantee natural movements and provide efficient assistance during overhead work, and thus has the potential to reduce the risk of musculoskeletal disorders. Significance: The proposed exoskeleton provides insights into multi-task adaptability and efficient assistance, highlighting the potential for expanding the application of exoskeletons.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
509,911
2401.05355
Developing a Resource-Constraint EdgeAI model for Surface Defect Detection
Resource constraints have restricted several EdgeAI applications to machine learning inference approaches, where models are trained on the cloud and deployed to the edge device. This poses challenges such as bandwidth, latency, and privacy associated with storing data off-site for model building. Training on the edge device can overcome these challenges by eliminating the need to transfer data to another device for storage and model development. On-device training also provides robustness to data variations as models can be retrained on newly acquired data to improve performance. We, therefore, propose a lightweight EdgeAI architecture modified from Xception, for on-device training in a resource-constraint edge environment. We evaluate our model on a PCB defect detection task and compare its performance against existing lightweight models - MobileNetV2, EfficientNetV2B0, and MobileViT-XXS. The results of our experiment show that our model has a remarkable performance with a test accuracy of 73.45% without pre-training. This is comparable to the test accuracy of non-pre-trained MobileViT-XXS (75.40%) and much better than other non-pre-trained models (MobileNetV2 - 50.05%, EfficientNetV2B0 - 54.30%). The test accuracy of our model without pre-training is comparable to pre-trained MobileNetV2 model - 75.45% and better than pre-trained EfficientNetV2B0 model - 58.10%. In terms of memory efficiency, our model performs better than EfficientNetV2B0 and MobileViT-XXS. We find that the resource efficiency of machine learning models does not solely depend on the number of parameters but also depends on architectural considerations. Our method can be applied to other resource-constraint applications while maintaining significant performance.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
420,732
2403.13765
Towards Principled Representation Learning from Videos for Reinforcement Learning
We study pre-training representations for decision-making using video data, which is abundantly available for tasks such as game agents and software testing. Even though significant empirical advances have been made on this problem, a theoretical understanding remains absent. We initiate the theoretical investigation into principled approaches for representation learning and focus on learning the latent state representations of the underlying MDP using video data. We study two types of settings: one where there is iid noise in the observation, and a more challenging setting where there is also the presence of exogenous noise, which is non-iid noise that is temporally correlated, such as the motion of people or cars in the background. We study three commonly used approaches: autoencoding, temporal contrastive learning, and forward modeling. We prove upper bounds for temporal contrastive learning and forward modeling in the presence of only iid noise. We show that these approaches can learn the latent state and use it to do efficient downstream RL with polynomial sample complexity. When exogenous noise is also present, we establish a lower bound result showing that the sample complexity of learning from video data can be exponentially worse than learning from action-labeled trajectory data. This partially explains why reinforcement learning with video pre-training is hard. We evaluate these representational learning methods in two visual domains, yielding results that are consistent with our theoretical findings.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
439,766
2103.09043
Inclined Quadrotor Landing using Deep Reinforcement Learning
Landing a quadrotor on an inclined surface is a challenging maneuver. The final state of any inclined landing trajectory is not an equilibrium, which precludes the use of most conventional control methods. We propose a deep reinforcement learning approach to design an autonomous landing controller for inclined surfaces. Using the proximal policy optimization (PPO) algorithm with sparse rewards and a tailored curriculum learning approach, an inclined landing policy can be trained in simulation in less than 90 minutes on a standard laptop. The policy then directly runs on a real Crazyflie 2.1 quadrotor and successfully performs real inclined landings in a flying arena. A single policy evaluation takes approximately 2.5\,ms, which makes it suitable for a future embedded implementation on the quadrotor.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
225,064
2008.06843
Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision
Despite recent advances in deep learning-based face frontalization methods, photo-realistic and illumination preserving frontal face synthesis is still challenging due to large pose and illumination discrepancy during training. We propose a novel Flow-based Feature Warping Model (FFWM) which can learn to synthesize photo-realistic and illumination preserving frontal images with illumination inconsistent supervision. Specifically, an Illumination Preserving Module (IPM) is proposed to learn illumination preserving image synthesis from illumination inconsistent image pairs. IPM includes two pathways which collaborate to ensure the synthesized frontal images are illumination preserving and with fine details. Moreover, a Warp Attention Module (WAM) is introduced to reduce the pose discrepancy in the feature level, and hence to synthesize frontal images more effectively and preserve more details of profile images. The attention mechanism in WAM helps reduce the artifacts caused by the displacements between the profile and the frontal images. Quantitative and qualitative experimental results show that our FFWM can synthesize photo-realistic and illumination preserving frontal images and performs favorably against the state-of-the-art results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
191,909
1904.04396
Hypersparse Neural Network Analysis of Large-Scale Internet Traffic
The Internet is transforming our society, necessitating a quantitative understanding of Internet traffic. Our team collects and curates the largest publicly available Internet traffic data containing 50 billion packets. Utilizing a novel hypersparse neural network analysis of "video" streams of this traffic using 10,000 processors in the MIT SuperCloud reveals a new phenomena: the importance of otherwise unseen leaf nodes and isolated links in Internet traffic. Our neural network approach further shows that a two-parameter modified Zipf-Mandelbrot distribution accurately describes a wide variety of source/destination statistics on moving sample windows ranging from 100,000 to 100,000,000 packets over collections that span years and continents. The inferred model parameters distinguish different network streams and the model leaf parameter strongly correlates with the fraction of the traffic in different underlying network topologies. The hypersparse neural network pipeline is highly adaptable and different network statistics and training models can be incorporated with simple changes to the image filter functions.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
true
127,021
1907.12393
To regulate or not: a social dynamics analysis of the race for AI supremacy
Rapid technological advancements in AI as well as the growing deployment of intelligent technologies in new application domains are currently driving the competition between businesses, nations and regions. This race for technological supremacy creates a complex ecology of choices that may lead to negative consequences, in particular when ethical and safety procedures are underestimated or even ignored. As a consequence, different actors are urging to consider both the normative and social impact of these technological advancements. As there is no easy access to data describing this AI race, theoretical models are necessary to understand its dynamics, allowing for the identification of when, how and which procedures need to be put in place to favour outcomes beneficial for all. We show that, next to the risks of setbacks and being reprimanded for unsafe behaviour, the time-scale in which AI supremacy can be achieved plays a crucial role. When this supremacy can be achieved in the short term, those who completely ignore the safety precautions are bound to win the race but at a cost to society, apparently requiring regulatory actions. Our analysis reveals that blindly imposing regulations may not have the anticipated effect, as a dilemma between what is individually preferred and what is globally beneficial arises only under specific conditions. Similar observations can be made for the long-term development case. Yet, unlike the short-term situation, certain conditions require the promotion of risk-taking as opposed to compliance with safety regulations in order to improve social welfare. These results remain robust when two or several actors are involved in the race and when collective rather than individual setbacks are produced by risk-taking behaviour. When defining codes of conduct and regulatory policies for AI, a clear understanding of the time-scale of the race is required.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
140,113
2007.03681
Fast Bayesian Estimation of Spatial Count Data Models
Spatial count data models are used to explain and predict the frequency of phenomena such as traffic accidents in geographically distinct entities such as census tracts or road segments. These models are typically estimated using Bayesian Markov chain Monte Carlo (MCMC) simulation methods, which, however, are computationally expensive and do not scale well to large datasets. Variational Bayes (VB), a method from machine learning, addresses the shortcomings of MCMC by casting Bayesian estimation as an optimisation problem instead of a simulation problem. Considering all these advantages of VB, a VB method is derived for posterior inference in negative binomial models with unobserved parameter heterogeneity and spatial dependence. P\'olya-Gamma augmentation is used to deal with the non-conjugacy of the negative binomial likelihood and an integrated non-factorised specification of the variational distribution is adopted to capture posterior dependencies. The benefits of the proposed approach are demonstrated in a Monte Carlo study and an empirical application on estimating youth pedestrian injury counts in census tracts of New York City. The VB approach is around 45 to 50 times faster than MCMC on a regular eight-core processor in a simulation and an empirical study, while offering similar estimation and predictive accuracy. Conditional on the availability of computational resources, the embarrassingly parallel architecture of the proposed VB method can be exploited to further accelerate its estimation by up to 20 times.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
186,134
2407.12356
LTSim: Layout Transportation-based Similarity Measure for Evaluating Layout Generation
We introduce a layout similarity measure designed to evaluate the results of layout generation. While several similarity measures have been proposed in prior research, there has been a lack of comprehensive discussion about their behaviors. Our research uncovers that the majority of these measures are unable to handle various layout differences, primarily due to their dependencies on strict element matching, that is one-by-one matching of elements within the same category. To overcome this limitation, we propose a new similarity measure based on optimal transport, which facilitates a more flexible matching of elements. This approach allows us to quantify the similarity between any two layouts even those sharing no element categories, making our measure highly applicable to a wide range of layout generation tasks. For tasks such as unconditional layout generation, where FID is commonly used, we also extend our measure to deal with collection-level similarities between groups of layouts. The empirical result suggests that our collection-level measure offers more reliable comparisons than existing ones like FID and Max.IoU.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,887
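An illustrative optimal-transport sketch in the spirit of the LTSim abstract above (2407.12356): layout elements carry uniform mass, and a few Sinkhorn iterations give a soft matching cost between two layouts with different element counts. The cost function, features, and hyperparameters here are placeholders, not the paper's exact measure.

# Soft OT cost between two layouts represented by element centers (illustrative sketch).
import numpy as np

def sinkhorn_cost(xs, ys, eps=0.05, iters=200):
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    a = np.full(len(xs), 1.0 / len(xs))       # uniform mass on layout A elements
    b = np.full(len(ys), 1.0 / len(ys))       # uniform mass on layout B elements
    C = np.linalg.norm(xs[:, None, :] - ys[None, :, :], axis=-1)  # pairwise cost matrix
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):                    # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # transport plan
    return float((P * C).sum())

layout_a = [(0.1, 0.1), (0.8, 0.2), (0.5, 0.9)]   # element centers of layout A
layout_b = [(0.15, 0.12), (0.75, 0.25)]           # layout B; different element count is fine
print(sinkhorn_cost(layout_a, layout_b))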
2501.12149
On the practical applicability of modern DFT functionals for chemical computations. Case study of DM21 applicability for geometry optimization
Density functional theory (DFT) is probably the most promising approach for quantum chemistry calculations, considering its good balance between calculation precision and speed. In recent years, several neural network-based functionals have been developed for exchange-correlation energy approximation in DFT, with DM21, developed by Google DeepMind, being the most notable among them. This study focuses on evaluating the efficiency of the DM21 functional in predicting molecular geometries, with a focus on the influence of oscillatory behavior in neural network exchange-correlation functionals. We implemented geometry optimization with the DM21 functional in PySCF, compared its performance with traditional functionals, and tested it on various benchmarks. Our findings reveal both the potential and the current challenges of using neural network functionals for geometry optimization in DFT. We propose a solution that extends the practical applicability of such functionals and allows new substances to be modeled with their help.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
526,182
2306.07187
Video-to-Music Recommendation using Temporal Alignment of Segments
We study cross-modal recommendation of music tracks to be used as soundtracks for videos. This problem is known as the music supervision task. We build on a self-supervised system that learns a content association between music and video. In addition to the adequacy of content, adequacy of structure is crucial in music supervision to obtain relevant recommendations. We propose a novel approach to significantly improve the system's performance using structure-aware recommendation. The core idea is to consider not only the full audio-video clips, but rather shorter segments for training and inference. We find that using semantic segments and ranking the tracks according to sequence alignment costs significantly improves the results. We investigate the impact of different ranking metrics and segmentation methods.
false
false
true
false
false
true
true
false
false
false
false
false
false
false
false
false
false
true
372,916
2405.03003
Parameter-Efficient Fine-Tuning with Discrete Fourier Transform
Low-rank adaptation~(LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices $A$ and $B$ to represent the weight change, i.e., $\Delta W=BA$. Despite LoRA's progress, it faces storage challenges when handling extensive customization adaptations or larger base models. In this work, we aim to further compress trainable parameters by enjoying the powerful expressiveness of the Fourier transform. Specifically, we introduce FourierFT, which treats $\Delta W$ as a matrix in the spatial domain and learns only a small fraction of its spectral coefficients. With the trained spectral coefficients, we implement the inverse discrete Fourier transform to recover $\Delta W$. Empirically, our FourierFT method shows comparable or better performance with fewer parameters than LoRA on various tasks, including natural language understanding, natural language generation, instruction tuning, and image classification. For example, when performing instruction tuning on the LLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable parameters, compared to LoRA's 33.5M. Our code is released at \url{https://github.com/Chaos96/fourierft}.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
452,013
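A minimal sketch of the FourierFT idea summarized in the 2405.03003 abstract above, assuming PyTorch: instead of a low-rank update Delta W = B A as in LoRA, a small set of trainable spectral coefficients at fixed random frequency locations is learned, and Delta W is recovered with an inverse 2D FFT. Shapes, initialization, and scaling are illustrative rather than the paper's exact recipe.

# Fine-tuning a frozen linear layer via sparse spectral coefficients (illustrative sketch).
import torch
import torch.nn as nn

class FourierFTLinear(nn.Module):
    def __init__(self, base: nn.Linear, n_freq: int = 200, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # frozen pretrained weight
        out_f, in_f = base.weight.shape
        idx = torch.randperm(out_f * in_f)[:n_freq]
        self.register_buffer("rows", idx // in_f)   # fixed random frequency locations
        self.register_buffer("cols", idx % in_f)
        self.coeffs = nn.Parameter(torch.zeros(n_freq))  # trainable spectral entries
        self.scale = scale

    def delta_w(self):
        out_f, in_f = self.base.weight.shape
        spectrum = torch.zeros(out_f, in_f, dtype=torch.cfloat, device=self.coeffs.device)
        spectrum[self.rows, self.cols] = self.coeffs.to(torch.cfloat)
        return torch.fft.ifft2(spectrum).real * self.scale  # recover Delta W via inverse FFT

    def forward(self, x):
        return self.base(x) + x @ self.delta_w().t()

layer = FourierFTLinear(nn.Linear(64, 64), n_freq=100)
print(layer(torch.randn(2, 64)).shape)   # torch.Size([2, 64])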
2207.09455
To update or not to update? Neurons at equilibrium in deep models
Recent advances in deep learning optimization showed that, with some a-posteriori information on fully-trained models, it is possible to match the same performance by simply training a subset of their parameters. Such a discovery has a broad impact from theory to applications, driving the research towards methods to identify the minimum subset of parameters to train without look-ahead information exploitation. However, the methods proposed do not match the state-of-the-art performance, and rely on unstructured sparsely connected models. In this work we shift our focus from the single parameters to the behavior of the whole neuron, exploiting the concept of neuronal equilibrium (NEq). When a neuron is in a configuration at equilibrium (meaning that it has learned a specific input-output relationship), we can halt its update; on the contrary, when a neuron is at non-equilibrium, we let its state evolve towards an equilibrium state, updating its parameters. The proposed approach has been tested on different state-of-the-art learning strategies and tasks, validating NEq and observing that the neuronal equilibrium depends on the specific learning setup.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
308,905
1510.07703
Full Duplex Assisted Inter-cell Interference Cancellation in Heterogeneous Networks
The paper studies the suppression of cross-tier inter-cell interference (ICI) generated by a macro base station (MBS) to pico user equipments (PUEs) in heterogeneous networks (HetNets). Different from existing ICI avoidance schemes such as enhanced ICI cancellation (eICIC) and coordinated beamforming, which generally operate at the MBS, we propose a full duplex (FD) assisted ICI cancellation (fICIC) scheme, which can operate at each pico BS (PBS) individually and is transparent to the MBS. The basic idea of the fICIC is to apply FD technique at the PBS such that the PBS can send the desired signals and forward the listened cross-tier ICI simultaneously to PUEs. We first consider the narrowband single-user case, where the MBS serves a single macro UE and each PBS serves a single PUE. We obtain the closed-form solution of the optimal fICIC scheme, and analyze its asymptotical performance in ICI-dominated scenario. We then investigate the general narrowband multi-user case, where both MBS and PBSs serve multiple UEs. We devise a low-complexity algorithm to optimize the fICIC aimed at maximizing the downlink sum rate of the PUEs subject to user fairness constraint. Finally, the generalization of the fICIC to wideband systems is investigated. Simulations validate the analytical results and demonstrate the advantages of the fICIC on mitigating cross-tier ICI.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
48,223
1706.06274
Learning Graphical Models Using Multiplicative Weights
We give a simple, multiplicative-weight update algorithm for learning undirected graphical models or Markov random fields (MRFs). The approach is new, and for the well-studied case of Ising models or Boltzmann machines, we obtain an algorithm that uses a nearly optimal number of samples and has quadratic running time (up to logarithmic factors), subsuming and improving on all prior work. Additionally, we give the first efficient algorithm for learning Ising models over general alphabets. Our main application is an algorithm for learning the structure of t-wise MRFs with nearly-optimal sample complexity (up to polynomial losses in necessary terms that depend on the weights) and running time that is $n^{O(t)}$. In addition, given $n^{O(t)}$ samples, we can also learn the parameters of the model and generate a hypothesis that is close in statistical distance to the true MRF. All prior work runs in time $n^{\Omega(d)}$ for graphs of bounded degree d and does not generate a hypothesis close in statistical distance even for t=3. We observe that our runtime has the correct dependence on n and t assuming the hardness of learning sparse parities with noise. Our algorithm--the Sparsitron-- is easy to implement (has only one parameter) and holds in the on-line setting. Its analysis applies a regret bound from Freund and Schapire's classic Hedge algorithm. It also gives the first solution to the problem of learning sparse Generalized Linear Models (GLMs).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
75,653
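The 1706.06274 abstract above builds on a Hedge-style multiplicative-weights update; the sketch below shows a simplified learner for a sparse sigmoid GLM in that spirit. The per-expert loss, the l1 budget `lam`, the learning rate `beta`, and the synthetic data are illustrative assumptions, not the paper's exact Sparsitron.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hedge_sparse_glm(X, y, lam=2.0, beta=0.9):
    """Simplified multiplicative-weights learner for a sparse sigmoid GLM:
    one weight per signed feature, multiplicative updates driven by
    per-expert losses in [0, 1]. X: (T, n) in [-1, 1]; y: (T,) in {0, 1}."""
    T, n = X.shape
    Xs = np.hstack([X, -X])                       # signed copies keep weights non-negative
    w = np.ones(2 * n)                            # one 'expert' per signed feature
    for t in range(T):
        p = sigmoid(lam * Xs[t] @ (w / w.sum()))  # current prediction
        loss = (1.0 + Xs[t] * (p - y[t])) / 2.0   # per-expert loss in [0, 1]
        w *= beta ** loss                         # Hedge update
    v = w / w.sum()
    return lam * (v[:n] - v[n:])                  # fold signed weights back into one vector

# Tiny usage example on synthetic data (hypothetical):
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 20))
true_w = np.zeros(20); true_w[:3] = [1.5, -1.0, 0.8]
y = (rng.uniform(size=2000) < sigmoid(X @ true_w)).astype(float)
print(hedge_sparse_glm(X, y)[:5])
```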
2407.05480
Biomedical Nested NER with Large Language Model and UMLS Heuristics
In this paper, we present our system for the BioNNE English track, which aims to extract 8 types of biomedical nested named entities from biomedical text. We use a large language model (Mixtral 8x7B instruct) and ScispaCy NER model to identify entities in an article and build custom heuristics based on unified medical language system (UMLS) semantic types to categorize the entities. We discuss the results and limitations of our system and propose future improvements. Our system achieved an F1 score of 0.39 on the BioNNE validation set and 0.348 on the test set.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
470,996
2408.12870
Can AI Assistance Aid in the Grading of Handwritten Answer Sheets?
With recent advancements in artificial intelligence (AI), there has been growing interest in using state of the art (SOTA) AI solutions to provide assistance in grading handwritten answer sheets. While a few commercial products exist, the question of whether AI-assistance can actually reduce grading effort and time has not yet been carefully considered in published literature. This work introduces an AI-assisted grading pipeline. The pipeline first uses text detection to automatically detect question regions present in a question paper PDF. Next, it uses SOTA text detection methods to highlight important keywords present in the handwritten answer regions of scanned answer sheets to assist in the grading process. We then evaluate a prototype implementation of the AI-assisted grading pipeline deployed on an existing e-learning management platform. The evaluation involves a total of 5 different real-life examinations across 4 different courses at a reputed institute; it consists of a total of 42 questions, 17 graders, and 468 submissions. We log and analyze the grading time for each handwritten answer while using AI assistance and without it. Our evaluations have shown that, on average, the graders take 31% less time while grading a single response and 33% less grading time while grading a single answer sheet using AI assistance.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
482,923
2102.06192
Adversarial Segmentation Loss for Sketch Colorization
We introduce a new method for generating color images from sketches or edge maps. Current methods either require some form of additional user-guidance or are limited to the "paired" translation approach. We argue that segmentation information could provide valuable guidance for sketch colorization. To this end, we propose to leverage semantic image segmentation, as provided by a general purpose panoptic segmentation network, to create an additional adversarial loss function. Our loss function can be integrated to any baseline GAN model. Our method is not limited to datasets that contain segmentation labels, and it can be trained for "unpaired" translation tasks. We show the effectiveness of our method on four different datasets spanning scene level indoor, outdoor, and children book illustration images using qualitative, quantitative and user study analysis. Our model improves its baseline up to 35 points on the FID metric. Our code and pretrained models can be found at https://github.com/giddyyupp/AdvSegLoss.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
219,667
2011.06428
Discriminative, Generative and Self-Supervised Approaches for Target-Agnostic Learning
Supervised learning, characterized by both discriminative and generative learning, seeks to predict the values of single (or sometimes multiple) predefined target attributes based on a predefined set of predictor attributes. For applications where the information available and the predictions to be made may vary from instance to instance, we propose the task of target-agnostic learning, where arbitrary disjoint sets of attributes can serve as predictors and targets for each to-be-predicted instance. For this task, we survey a wide range of techniques for handling missing values, self-supervised training and pseudo-likelihood training, and adapt them into a suite of algorithms suitable for the task. We conduct extensive experiments with this suite of algorithms on a large collection of categorical, continuous and discretized datasets, and report their performance in terms of both classification and regression errors. We also report the training and prediction time of these algorithms when handling large-scale datasets. Both generative and self-supervised learning models are shown to perform well at the task, although they behave quite differently on the different types of data. Nevertheless, the theorem we derive from pseudo-likelihood theory also shows that the two are related when inferring a joint distribution model through pseudo-likelihood training.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
206,245
2001.02334
$\mu$VulDeePecker: A Deep Learning-Based System for Multiclass Vulnerability Detection
Fine-grained software vulnerability detection is an important and challenging problem. Ideally, a detection system (or detector) not only should be able to detect whether or not a program contains vulnerabilities, but also should be able to pinpoint the type of a vulnerability in question. Existing vulnerability detection methods based on deep learning can detect the presence of vulnerabilities (i.e., addressing the binary classification or detection problem), but cannot pinpoint types of vulnerabilities (i.e., incapable of addressing multiclass classification). In this paper, we propose the first deep learning-based system for multiclass vulnerability detection, dubbed $\mu$VulDeePecker. The key insight underlying $\mu$VulDeePecker is the concept of code attention, which can capture information that can help pinpoint types of vulnerabilities, even when the samples are small. For this purpose, we create a dataset from scratch and use it to evaluate the effectiveness of $\mu$VulDeePecker. Experimental results show that $\mu$VulDeePecker is effective for multiclass vulnerability detection and that accommodating control-dependence (other than data-dependence) can lead to higher detection capabilities.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
159,709
1901.08810
Unsupervised speech representation learning using WaveNet autoencoders
We consider the task of unsupervised extraction of meaningful latent representations of speech by applying autoencoding neural networks to speech waveforms. The goal is to learn a representation able to capture high level semantic content from the signal, e.g.\ phoneme identities, while being invariant to confounding low level details in the signal such as the underlying pitch contour or background noise. Since the learned representation is tuned to contain only phonetic content, we resort to using a high capacity WaveNet decoder to infer information discarded by the encoder from previous samples. Moreover, the behavior of autoencoder models depends on the kind of constraint that is applied to the latent representation. We compare three variants: a simple dimensionality reduction bottleneck, a Gaussian Variational Autoencoder (VAE), and a discrete Vector Quantized VAE (VQ-VAE). We analyze the quality of learned representations in terms of speaker independence, the ability to predict phonetic content, and the ability to accurately reconstruct individual spectrogram frames. Moreover, for discrete encodings extracted using the VQ-VAE, we measure the ease of mapping them to phonemes. We introduce a regularization scheme that forces the representations to focus on the phonetic content of the utterance and report performance comparable with the top entries in the ZeroSpeech 2017 unsupervised acoustic unit discovery task.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
119,583
1410.3248
One-shot Marton inner bound for classical-quantum broadcast channel
We consider the problem of communication over a classical-quantum broadcast channel with one sender and two receivers. Generalizing the classical inner bounds shown by Marton and the recent quantum asymptotic version shown by Savov and Wilde, we obtain one-shot inner bounds in the quantum setting. Our bounds are stated in terms of smooth min and max Renyi divergences. We obtain these results using a different analysis of the random codebook argument and employ a new one-shot classical mutual covering argument based on rejection sampling. These results give a full justification of the claims of Savov and Wilde in the classical-quantum asymptotic iid setting; the techniques also yield similar bounds in the information spectrum setting.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
36,691
1211.5494
Optimal design of PID controllers using the QFT method
An optimisation algorithm is proposed for designing PID controllers, which minimises the asymptotic open-loop gain of a system, subject to appropriate robust-stability and performance QFT constraints. The algorithm is simple and can be used to automate the loop-shaping step of the QFT design procedure. The effectiveness of the method is illustrated with an example.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
19,893
2302.00463
Uncertain Quality-Diversity: Evaluation methodology and new methods for Quality-Diversity in Uncertain Domains
Quality-Diversity optimisation (QD) has proven to yield promising results across a broad set of applications. However, QD approaches struggle in the presence of uncertainty in the environment, as it impacts their ability to quantify the true performance and novelty of solutions. This problem has been highlighted multiple times independently in previous literature. In this work, we propose to uniformise the view on this problem through four main contributions. First, we formalise a common framework for uncertain domains: the Uncertain QD setting, a special case of QD in which fitness and descriptors for each solution are no longer fixed values but distributions over possible values. Second, we propose a new methodology to evaluate Uncertain QD approaches, relying on a new per-generation sampling budget and a set of existing and new metrics specifically designed for Uncertain QD. Third, we propose three new Uncertain QD algorithms: Archive-sampling, Parallel-Adaptive-sampling and Deep-Grid-sampling. We propose these approaches taking into account recent advances in the QD community toward the use of hardware acceleration, which enables large numbers of parallel evaluations and makes sampling an affordable approach to uncertainty. Our fourth and final contribution is to use this new framework and the associated comparison methods to benchmark existing and novel approaches. We demonstrate once again the limitation of MAP-Elites in uncertain domains and highlight the performance of the existing Deep-Grid approach, and of our new algorithms. The goal of this framework and methods is to become an instrumental benchmark for future work considering Uncertain QD.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
343,225
2006.07900
ResOT: Resource-Efficient Oblique Trees for Neural Signal Classification
Classifiers that can be implemented on chip with minimal computational and memory resources are essential for edge computing in emerging applications such as medical and IoT devices. This paper introduces a machine learning model based on oblique decision trees to enable resource-efficient classification on a neural implant. By integrating model compression with probabilistic routing and implementing cost-aware learning, our proposed model could significantly reduce the memory and hardware cost compared to state-of-the-art models, while maintaining the classification accuracy. We trained the resource-efficient oblique tree with power-efficient regularization (ResOT-PE) on three neural classification tasks to evaluate the performance, memory, and hardware requirements. On seizure detection task, we were able to reduce the model size by 3.4X and the feature extraction cost by 14.6X compared to the ensemble of boosted trees, using the intracranial EEG from 10 epilepsy patients. In a second experiment, we tested the ResOT-PE model on tremor detection for Parkinson's disease, using the local field potentials from 12 patients implanted with a deep-brain stimulation (DBS) device. We achieved a comparable classification performance as the state-of-the-art boosted tree ensemble, while reducing the model size and feature extraction cost by 10.6X and 6.8X, respectively. We also tested on a 6-class finger movement detection task using ECoG recordings from 9 subjects, reducing the model size by 17.6X and feature computation cost by 5.1X. The proposed model can enable a low-power and memory-efficient implementation of classifiers for real-time neurological disease detection and motor decoding.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
181,997
2009.00739
Non-asymptotic Identification of Linear Dynamical Systems Using Multiple Trajectories
This paper considers the problem of linear time-invariant (LTI) system identification using input/output data. Recent work has provided non-asymptotic results on partially observed LTI system identification using a single trajectory but is only suitable for stable systems. We provide finite-time analysis for learning Markov parameters based on the ordinary least-squares (OLS) estimator using multiple trajectories, which covers both stable and unstable systems. For unstable systems, our results suggest that the Markov parameters are harder to estimate in the presence of process noise. Without process noise, our upper bound on the estimation error is independent of the spectral radius of system dynamics with high probability. These two features are different from fully observed LTI systems for which recent work has shown that unstable systems with a bigger spectral radius are easier to estimate. Extensive numerical experiments demonstrate the performance of our OLS estimator.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
194,122
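To make the OLS setup in 2009.00739 above concrete, the numpy sketch below estimates the first Markov parameters [D, CB, CAB, ...] from multiple independent trajectories by regressing the output at a fixed time on the stacked past inputs. The toy system, the zero initial state, and the noiseless simulation are illustrative assumptions.

```python
import numpy as np

def estimate_markov_parameters(U, Y, t):
    """OLS estimate of the first (t+1) Markov parameters from N trajectories,
    using only the output at time t and assuming zero initial state.
    U: (N, T, m) inputs, Y: (N, T, p) outputs. Returns shape (t+1, p, m)."""
    N, _, m = U.shape
    p = Y.shape[2]
    # Regressor per trajectory: [u_t, u_{t-1}, ..., u_0] stacked into one row.
    Phi = np.stack([U[i, t::-1, :].reshape(-1) for i in range(N)])   # (N, (t+1)*m)
    G, *_ = np.linalg.lstsq(Phi, Y[:, t, :], rcond=None)             # ((t+1)*m, p)
    return G.reshape(t + 1, m, p).transpose(0, 2, 1)                 # (t+1, p, m)

# Hypothetical usage: simulate a small (unstable) system and recover its Markov parameters.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 1.1]])   # spectral radius > 1, no process noise
B = np.array([[1.0], [0.5]]); C = np.array([[1.0, 0.0]]); D = np.zeros((1, 1))
N, T = 200, 6
U = rng.standard_normal((N, T, 1))
Y = np.zeros((N, T, 1))
for i in range(N):
    x = np.zeros(2)
    for k in range(T):
        Y[i, k] = C @ x + D @ U[i, k]
        x = A @ x + B @ U[i, k]
G_hat = estimate_markov_parameters(U, Y, t=T - 1)
print(np.round(G_hat[:3, 0, 0], 3))      # approximately [D, CB, CAB]
```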
2205.10055
A Case of Exponential Convergence Rates for SVM
Classification is often the first problem described in introductory machine learning classes. Generalization guarantees of classification have historically been offered by Vapnik-Chervonenkis theory. Yet those guarantees are based on intractable algorithms, which has led to the theory of surrogate methods in classification. Guarantees offered by surrogate methods are based on calibration inequalities, which have been shown to be highly sub-optimal under some margin conditions, falling short of capturing exponential convergence phenomena. Those "super" fast rates are becoming well understood for smooth surrogates, but the picture remains blurry for non-smooth losses such as the hinge loss, associated with the renowned support vector machines. In this paper, we present a simple mechanism to obtain fast convergence rates and we investigate its usage for SVM. In particular, we show that SVM can exhibit exponential convergence rates even without assuming the hard Tsybakov margin condition.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
297,537
0809.1963
Materialized View Selection by Query Clustering in XML Data Warehouses
XML data warehouses form an interesting basis for decision-support applications that exploit complex data. However, native XML database management systems currently offer limited performance, and it is necessary to design strategies to optimize them. In this paper, we propose an automatic strategy for the selection of XML materialized views that exploits a data mining technique, more precisely the clustering of the query workload. To validate our strategy, we implemented an XML warehouse modeled along the XCube specifications. We executed a workload of XQuery decision-support queries on this warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when queries are complex.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
2,329
2006.09904
Learning Colour Representations of Search Queries
Image search engines rely on appropriately designed ranking features that capture various aspects of the content semantics as well as the historic popularity. In this work, we consider the role of colour in this relevance matching process. Our work is motivated by the observation that a significant fraction of user queries have an inherent colour associated with them. While some queries contain explicit colour mentions (such as 'black car' and 'yellow daisies'), other queries have implicit notions of colour (such as 'sky' and 'grass'). Furthermore, grounding queries in colour is not a mapping to a single colour, but a distribution in colour space. For instance, a search for 'trees' tends to have a bimodal distribution around the colours green and brown. We leverage historical clickthrough data to produce a colour representation for search queries and propose a recurrent neural network architecture to encode unseen queries into colour space. We also show how this embedding can be learnt alongside a cross-modal relevance ranker from impression logs where a subset of the result images were clicked. We demonstrate that the use of a query-image colour distance feature leads to an improvement in the ranker performance as measured by users' preferences of clicked versus skipped images.
false
false
false
false
false
true
true
false
false
false
false
true
false
false
false
false
false
false
182,695
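A minimal sketch, in the spirit of 2006.09904 above, of a recurrent encoder that maps a tokenised query to a distribution over quantised colour bins and is trained against click-derived colour histograms with a KL objective. The vocabulary size, palette size, and training target used here are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class QueryColourEncoder(nn.Module):
    """Embed query tokens, run a GRU, and predict a log-distribution over colour bins."""

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256, n_colour_bins=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_colour_bins)

    def forward(self, token_ids):                  # (batch, seq_len) int64 token ids
        _, h = self.rnn(self.embed(token_ids))     # h: (1, batch, hidden_dim)
        return torch.log_softmax(self.head(h[-1]), dim=-1)   # log-probs over colour bins

# Hypothetical training step against click-derived colour histograms:
model = QueryColourEncoder()
tokens = torch.randint(0, 30000, (4, 6))           # 4 queries, 6 tokens each
target_hist = torch.softmax(torch.randn(4, 512), dim=-1)
loss = nn.KLDivLoss(reduction="batchmean")(model(tokens), target_hist)
loss.backward()
```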
1905.09152
Multi-View Large-Scale Bundle Adjustment Method for High-Resolution Satellite Images
Given enough multi-view image corresponding points (also called tie points) and ground control points (GCP), bundle adjustment for high-resolution satellite images is used to refine the orientations or, most often, the geometric parameters known as Rational Polynomial Coefficients (RPC) of each satellite image in a unified geodetic framework, which is critical in many photogrammetry and computer vision applications. However, the growing number of high-resolution spaceborne optical sensors has brought two challenges to bundle adjustment: 1) images coming from different satellite cameras may have different imaging dates, viewing angles, resolutions, etc., thus resulting in geometric and radiometric distortions in the bundle adjustment; 2) a large-scale mapping area always corresponds to a vast number of bundle adjustment corrections (including RPC bias and object space point coordinates). Due to the limitation of computer memory, it is hard to refine all corrections at the same time; hence, performing the bundle adjustment efficiently over large-scale regions is very important. This paper addresses the multi-view large-scale bundle adjustment problem in two steps: 1) to get robust tie points among different satellite images, we design a multi-view, multi-source tie point matching algorithm based on plane rectification and epipolar constraints, which is able to compensate for geometric and local nonlinear radiometric distortions among satellite datasets, and 2) to solve for the tens of thousands or even millions of bundle adjustment corrections that arise in large-scale bundle adjustment, we use an efficient solution that requires only a small amount of computer memory. Experiments on in-track and off-track satellite datasets show that the proposed method is capable of computing sub-pixel-accuracy bundle adjustment results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
131,659
2203.02502
No More Than 6ft Apart: Robust K-Means via Radius Upper Bounds
Centroid based clustering methods such as k-means, k-medoids and k-centers are heavily applied as a go-to tool in exploratory data analysis. In many cases, those methods are used to obtain representative centroids of the data manifold for visualization or summarization of a dataset. Real world datasets often contain inherent abnormalities, e.g., repeated samples and sampling bias, that manifest imbalanced clustering. We propose to remedy such a scenario by introducing a maximal radius constraint $r$ on the clusters formed by the centroids, i.e., samples from the same cluster should not be more than $2r$ apart in terms of $\ell_2$ distance. We achieve this constraint by solving a semi-definite program, followed by a linear assignment problem with quadratic constraints. Through qualitative results, we show that our proposed method is robust towards dataset imbalances and sampling artifacts. To the best of our knowledge, ours is the first constrained k-means clustering method with hard radius constraints. Codes at https://bit.ly/kmeans-constrained
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
283,767
2404.15564
Guided AbsoluteGrad: Magnitude of Gradients Matters to Explanation's Localization and Saliency
This paper proposes a new gradient-based XAI method called Guided AbsoluteGrad for saliency map explanations. We utilize both positive and negative gradient magnitudes and employ gradient variance to distinguish the important areas for noise reduction. We also introduce a novel evaluation metric named ReCover And Predict (RCAP), which considers the Localization and Visual Noise Level objectives of the explanations. We propose two propositions for these two objectives and prove the necessity of evaluating them. We evaluate Guided AbsoluteGrad with seven gradient-based XAI methods using the RCAP metric and other SOTA metrics in three case studies: (1) ImageNet dataset with ResNet50 model; (2) International Skin Imaging Collaboration (ISIC) dataset with EfficientNet model; (3) the Places365 dataset with DenseNet161 model. Our method surpasses other gradient-based approaches, showcasing the quality of enhanced saliency map explanations through gradient magnitude.
true
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
449,138
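One plausible reading of the 2404.15564 abstract above, sketched below: aggregate absolute input gradients over several noisy copies of the input and gate the map with per-pixel gradient variance. The noise-sampling scheme, the variance threshold, and the `keep_high_variance` switch are assumptions made for illustration; the paper's exact use of the variance may differ.

```python
import torch

def absolute_grad_saliency(model, x, target, n_samples=16, sigma=0.1, keep_high_variance=True):
    """Illustrative saliency sketch (not the authors' code).
    x: (1, C, H, W) input; target: class index of the logit to explain."""
    grads = []
    for _ in range(n_samples):
        noisy = (x.detach() + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target]
        g, = torch.autograd.grad(score, noisy)
        grads.append(g)
    grads = torch.stack(grads)                    # (n_samples, 1, C, H, W)
    magnitude = grads.abs().mean(dim=0)           # mean |gradient|, sign ignored
    variance = grads.var(dim=0)
    mask = (variance >= variance.mean()).float()  # crude variance-based gate
    if not keep_high_variance:
        mask = 1.0 - mask
    saliency = (magnitude * mask).sum(dim=1)      # collapse channels -> (1, H, W)
    return saliency / (saliency.max() + 1e-12)

# Hypothetical usage: saliency = absolute_grad_saliency(resnet50, image, target=243)
```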