Schema:
- id: string, length 9-16
- title: string, length 4-278
- abstract: string, length 3-4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, 0-541k
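The schema above describes a multi-label arXiv classification dataset: three string fields, one boolean per category column, and a row index. As a minimal sketch, assuming each row arrives as a flat list of values in the column order above (the sample values are taken from the first record in this dump, with the title and abstract shortened), a structured record could be assembled like this:

```python
# Column order follows the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def parse_row(values):
    """Turn a flat row (id, title, abstract, 18 bools, index) into a dict."""
    return {
        "id": values[0],
        "title": values[1],
        "abstract": values[2],
        # Keep only the category names whose boolean flag is set.
        "labels": [c for c, flag in zip(LABEL_COLUMNS, values[3:21]) if flag],
        "__index_level_0__": values[21],
    }

# First record of this dump (title/abstract shortened for the example).
row = ["2409.03170", "Solving Stochastic Orienteering Problems", "abstract text",
       False, False, False, False, False, False, False, True,   # cs.RO is set
       False, False, False, False, False, False, False, False, False, False,
       485944]
print(parse_row(row)["labels"])  # → ['cs.RO']
```

`parse_row` is a hypothetical helper, not part of the dataset's tooling; in practice a library loader would expose these fields as named columns directly.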
2409.03170
Solving Stochastic Orienteering Problems with Chance Constraints Using Monte Carlo Tree Search
We present a new Monte Carlo Tree Search (MCTS) algorithm to solve the stochastic orienteering problem with chance constraints, i.e., a version of the problem where travel costs are random, and one is assigned a bound on the tolerable probability of exceeding the budget. The algorithm we present is online and anytime, i.e., it alternates planning and execution, and the quality of the solution it produces increases as the allowed computational time increases. Differently from most former MCTS algorithms, for each action available in a state the algorithm maintains estimates of both its value and the probability that its execution will eventually result in a violation of the chance constraint. Then, at action selection time, our proposed solution prunes away trajectories that are estimated to violate the failure probability. Extensive simulation results show that this approach can quickly produce high-quality solutions and is competitive with the optimal but time-consuming solution.
Labels: cs.RO; __index_level_0__: 485,944
1806.11276
Generating Connected Random Graphs
Sampling random graphs is essential in many applications, and algorithms often use Markov chain Monte Carlo methods to sample uniformly from the space of graphs. However, there is often a need to sample graphs with some property that standard approaches cannot sample, or can sample only inefficiently. In this paper, we are interested in sampling graphs from a conditional ensemble of the underlying graph model. We present an algorithm to generate samples from an ensemble of connected random graphs using a Metropolis-Hastings framework. The algorithm extends to a general framework for sampling from a known distribution of graphs, conditioned on a desired property. We demonstrate the method by generating connected spatially embedded random graphs, specifically the well-known Waxman network, and illustrate the convergence and practicalities of the algorithm.
Labels: cs.SI, Other; __index_level_0__: 101,682
2106.13228
HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common approach to reconstruct such non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space. However, these deformation-based approaches struggle to model changes in topology, as topological changes require a discontinuity in the deformation field, but these deformation fields are necessarily continuous. We address this limitation by lifting NeRFs into a higher dimensional space, and by representing the 5D radiance field corresponding to each individual input image as a slice through this "hyper-space". Our method is inspired by level set methods, which model the evolution of surfaces as slices through a higher dimensional surface. We evaluate our method on two tasks: (i) interpolating smoothly between "moments", i.e., configurations of the scene, seen in the input images while maintaining visual plausibility, and (ii) novel-view synthesis at fixed moments. We show that our method, which we dub HyperNeRF, outperforms existing methods on both tasks. Compared to Nerfies, HyperNeRF reduces average error rates by 4.1% for interpolation and 8.6% for novel-view synthesis, as measured by LPIPS. Additional videos, results, and visualizations are available at https://hypernerf.github.io.
Labels: cs.CV, Other; __index_level_0__: 243,001
1611.03225
Sharper Bounds for Regularized Data Fitting
We study matrix sketching methods for regularized variants of linear regression, low rank approximation, and canonical correlation analysis. Our main focus is on sketching techniques which preserve the objective function value for regularized problems, which is an area that has remained largely unexplored. We study regularization both in a fairly broad setting, and in the specific context of the popular and widely used technique of ridge regularization; for the latter, as applied to each of these problems, we show algorithmic resource bounds in which the {\em statistical dimension} appears in places where in previous bounds the rank would appear. The statistical dimension is always smaller than the rank, and decreases as the amount of regularization increases. In particular, for the ridge low-rank approximation problem $\min_{Y,X} \lVert YX - A \rVert_F^2 + \lambda \lVert Y\rVert_F^2 + \lambda\lVert X \rVert_F^2$, where $Y\in\mathbb{R}^{n\times k}$ and $X\in\mathbb{R}^{k\times d}$, we give an approximation algorithm needing \[ O(\mathtt{nnz}(A)) + \tilde{O}((n+d)\varepsilon^{-1}k \min\{k, \varepsilon^{-1}\mathtt{sd}_\lambda(Y^*)\})+ \mathtt{poly}(\mathtt{sd}_\lambda(Y^*) \varepsilon^{-1}) \] time, where $\mathtt{sd}_{\lambda}(Y^*)\le k$ is the statistical dimension of $Y^*$, $Y^*$ is an optimal $Y$, $\varepsilon$ is an error parameter, and $\mathtt{nnz}(A)$ is the number of nonzero entries of $A$. This is faster than prior work, even when $\lambda=0$. We also study regularization in a much more general setting. For example, we obtain sketching-based algorithms for the low-rank approximation problem $\min_{X,Y} \lVert YX - A \rVert_F^2 + f(Y,X)$ where $f(\cdot,\cdot)$ is a regularizing function satisfying some very general conditions (chiefly, invariance under orthogonal transformations).
Labels: cs.LG, Other; __index_level_0__: 63,672
2310.08745
AcTExplore: Active Tactile Exploration of Unknown Objects
Tactile exploration plays a crucial role in understanding object structures for fundamental robotics tasks such as grasping and manipulation. However, efficiently exploring such objects using tactile sensors is challenging, primarily due to the large-scale unknown environments and limited sensing coverage of these sensors. To this end, we present AcTExplore, an active tactile exploration method driven by reinforcement learning for object reconstruction at scale, which automatically explores object surfaces in a limited number of steps. Through sufficient exploration, our algorithm incrementally collects tactile data and reconstructs 3D shapes of the objects, which can serve as a representation for higher-level downstream tasks. Our method achieves an average of 95.97% IoU coverage on unseen YCB objects while being trained only on primitive shapes. Project Webpage: https://prg.cs.umd.edu/AcTExplore
Labels: cs.RO, cs.CV; __index_level_0__: 399,508
2209.01383
Training Strategies for Improved Lip-reading
Several training strategies and temporal models have been recently proposed for isolated word lip-reading in a series of independent works. However, the potential of combining the best strategies and investigating the impact of each of them has not been explored. In this paper, we systematically investigate the performance of state-of-the-art data augmentation approaches, temporal models and other training strategies, like self-distillation and using word boundary indicators. Our results show that Time Masking (TM) is the most important augmentation, followed by mixup, and that Densely-Connected Temporal Convolutional Networks (DC-TCN) are the best temporal model for lip-reading of isolated words. Using self-distillation and word boundary indicators is also beneficial, but to a lesser extent. A combination of all the above methods results in a classification accuracy of 93.4%, which is an absolute improvement of 4.6% over the current state-of-the-art performance on the LRW dataset. The performance can be further improved to 94.1% by pre-training on additional datasets. An error analysis of the various training strategies reveals that the performance improves by increasing the classification accuracy of hard-to-recognise words.
Labels: cs.LG, cs.CV; __index_level_0__: 315,870
2105.09459
Enhancing safety in water transport system based on Internet of Things for developing countries
Accidents in inland waterways in developing countries are a regular phenomenon throughout the year, causing deaths, injuries, monetary loss, and a significant number of missing people. As a consequence, many families lose their dear ones, leading to much misery. This context demands an intelligent, safe, and reliable water transport system for developing countries. The concept of an Intelligent Transport System (ITS) can be applied to develop such a system; however, there are issues with ITS, and the Internet of Things (IoT) unlocks a new way of developing it. This paper proposes a model to transform the water transport system into an intelligent system based on IoT. The IPv6-based machine-to-machine (M2M) protocol, 3G telecommunication technology, and the IEEE 802.15.4 network standard play a significant role in this proposed IoT-based system.
Labels: cs.AI, Other; __index_level_0__: 236,069
1209.3419
Tractable Optimization Problems through Hypergraph-Based Structural Restrictions
Several variants of the Constraint Satisfaction Problem have been proposed and investigated in the literature for modelling those scenarios where solutions are associated with some given costs. Within these frameworks computing an optimal solution is an NP-hard problem in general; yet, when restricted over classes of instances whose constraint interactions can be modelled via (nearly-)acyclic graphs, this problem is known to be solvable in polynomial time. In this paper, larger classes of tractable instances are singled out, by discussing solution approaches based on exploiting hypergraph acyclicity and, more generally, structural decomposition methods, such as (hyper)tree decompositions.
Labels: cs.AI; __index_level_0__: 18,579
2110.01140
Structured abbreviation expansion in context
Ad hoc abbreviations are commonly found in informal communication channels that favor shorter messages. We consider the task of reversing these abbreviations in context to recover normalized, expanded versions of abbreviated messages. The problem is related to, but distinct from, spelling correction, in that ad hoc abbreviations are intentional and may involve substantial differences from the original words. Ad hoc abbreviations are productively generated on-the-fly, so they cannot be resolved solely by dictionary lookup. We generate a large, open-source data set of ad hoc abbreviations. This data is used to study abbreviation strategies and to develop two strong baselines for abbreviation expansion.
Labels: cs.CL; __index_level_0__: 258,659
1912.12502
Implicit supervision for fault detection and segmentation of emerging fault types with Deep Variational Autoencoders
Data-driven fault diagnostics of safety-critical systems often faces the challenge of a complete lack of labeled data associated with faulty system conditions (i.e., fault types) at training time. Since an unknown number and nature of fault types can arise during deployment, data-driven fault diagnostics in this scenario is an open-set learning problem. Most of the algorithms for open-set diagnostics are one-class classification and unsupervised algorithms that do not leverage all the available labeled and unlabeled data in the learning algorithm. As a result, their fault detection and segmentation performance (i.e., identifying and separating faults of different types) are sub-optimal. With this work, we propose training a variational autoencoder (VAE) with labeled and unlabeled samples while inducing implicit supervision on the latent representation of the healthy conditions. This, together with a modified sampling process of VAE, creates a compact and informative latent representation that allows good detection and segmentation of unseen fault types using existing one-class and clustering algorithms. We refer to the proposed methodology as "knowledge induced variational autoencoder with adaptive sampling" (KIL-AdaVAE). The fault detection and segmentation capabilities of the proposed methodology are demonstrated in a new simulated case study using the Advanced Geared Turbofan 30000 (AGTF30) dynamical model under real flight conditions. In an extensive comparison, we demonstrate that the proposed method outperforms other learning strategies (supervised learning, supervised learning with embedding and semi-supervised learning) and deep learning algorithms, yielding significant performance improvements on fault detection and fault segmentation.
Labels: cs.LG; __index_level_0__: 158,861
2410.21661
Partial Orders in Rate-Matched Polar Codes
In this paper, we establish partial orders (POs) for both the binary erasure channel (BEC) and the binary memoryless symmetric channel (BMSC) under any block rate-matched polar codes. Firstly, we define the POs for rate-matched polar codes in a sequential block version. Furthermore, we demonstrate the persistence of POs after block rate matching in the BEC. Finally, leveraging the existing POs in the BEC, we obtain more POs in the BMSC under block rate matching. Simulations show that the PW sequence constructed from $\beta$-expansion can be improved by the tool of POs. In fact, any fixed reliable sequence in the mother polar codes can be improved by POs for rate matching.
Labels: cs.IT; __index_level_0__: 503,340
2401.14371
Efficient Optimisation of Physical Reservoir Computers using only a Delayed Input
We present an experimental validation of a recently proposed optimization technique for reservoir computing, using an optoelectronic setup. Reservoir computing is a robust framework for signal processing applications, and the development of efficient optimization approaches remains a key challenge. The technique we address leverages solely a delayed version of the input signal to identify the optimal operational region of the reservoir, simplifying the traditionally time-consuming task of hyperparameter tuning. We verify the effectiveness of this approach on different benchmark tasks and reservoir operating conditions.
Labels: cs.AI, cs.NE, Other; __index_level_0__: 424,060
2403.10572
Discovering Invariant Neighborhood Patterns for Heterophilic Graphs
This paper studies the problem of distribution shifts on non-homophilous graphs. Most existing graph neural network methods rely on the homophilous assumption that nodes from the same class are more likely to be linked. However, such assumptions of homophily do not always hold in real-world graphs, which leads to more complex distribution shifts unaccounted for in previous methods. The distribution shifts of neighborhood patterns are much more diverse on non-homophilous graphs. We propose a novel Invariant Neighborhood Pattern Learning (INPL) framework to alleviate the distribution-shift problem on non-homophilous graphs. Specifically, we propose an Adaptive Neighborhood Propagation (ANP) module to capture adaptive neighborhood information, which could alleviate the neighborhood-pattern distribution shifts on non-homophilous graphs, and an Invariant Non-Homophilous Graph Learning (INHGL) module to constrain ANP and learn invariant graph representations on non-homophilous graphs. Extensive experimental results on real-world non-homophilous graphs show that INPL achieves state-of-the-art performance for learning on large non-homophilous graphs.
Labels: cs.SI, cs.LG; __index_level_0__: 438,254
1907.10302
Fine-Grained Sentence Functions for Short-Text Conversation
Sentence function is an important linguistic feature referring to a user's purpose in uttering a specific sentence. The use of sentence function has shown promising results to improve the performance of conversation models. However, there is no large conversation dataset annotated with sentence functions. In this work, we collect a new Short-Text Conversation dataset with manually annotated SEntence FUNctions (STC-Sefun). Classification models are trained on this dataset to (i) recognize the sentence function of new data in a large corpus of short-text conversations; (ii) estimate a proper sentence function of the response given a test query. We later train conversation models conditioned on the sentence functions, including information retrieval-based and neural generative models. Experimental results demonstrate that the use of sentence functions can help improve the quality of the returned responses.
Labels: cs.CL; __index_level_0__: 139,586
1712.01690
Size Matters: A Comparative Analysis of Community Detection Algorithms
Understanding the community structure of social media is critical due to its broad applications such as friend recommendations, user modeling and content personalization. Existing research uses structural metrics such as modularity and conductance and functional metrics such as ground truth to measure the quality of the communities discovered by various community detection algorithms, while overlooking a natural and important dimension, community size. Recently, the anthropologist Dunbar suggested that the size of a stable community in social media should be limited to 150, referred to as Dunbar's number. In this study, we propose a systematic way of algorithm comparison by orthogonally integrating community size as a new dimension into existing structural metrics for consistently and holistically evaluating community quality in the social media context. We design a heuristic clique-based algorithm which controls the size and overlap of communities with adjustable parameters and evaluate it along with five state-of-the-art community detection algorithms on both a Twitter network and a DBLP network. Specifically, we divide the discovered communities based on their size into four classes called close friend, casual friend, acquaintance, and just a face, and then calculate the coverage, modularity, triangle participation ratio, conductance, transitivity, and the internal density of communities in each class. We discover that communities in different classes exhibit diverse structural qualities and many existing community detection algorithms tend to output extremely large communities.
Labels: cs.SI; __index_level_0__: 86,154
2401.06637
Adversarial Examples are Misaligned in Diffusion Model Manifolds
In recent years, diffusion models (DMs) have drawn significant attention for their success in approximating data distributions, yielding state-of-the-art generative results. Nevertheless, the versatility of these models extends beyond their generative capabilities to encompass various vision applications, such as image inpainting, segmentation, adversarial robustness, among others. This study is dedicated to the investigation of adversarial attacks through the lens of diffusion models. However, our objective does not involve enhancing the adversarial robustness of image classifiers. Instead, our focus lies in utilizing the diffusion model to detect and analyze the anomalies introduced by these attacks on images. To that end, we systematically examine the alignment of the distributions of adversarial examples when subjected to the process of transformation using diffusion models. The efficacy of this approach is assessed across CIFAR-10 and ImageNet datasets, including varying image sizes in the latter. The results demonstrate a notable capacity to discriminate effectively between benign and attacked images, providing compelling evidence that adversarial instances do not align with the learned manifold of the DMs.
Labels: cs.CV, cs.CR; __index_level_0__: 421,223
1908.05860
Learning Deep Representations by Mutual Information for Person Re-identification
Most existing person re-identification (ReID) methods learn good feature representations to distinguish pedestrians using deep convolutional neural networks (CNNs) and metric learning methods. However, these works concentrate on the similarity between encoder output and ground truth, ignoring the correlation between input and encoder output, which affects the performance of identifying different pedestrians. To address this limitation, we design a Deep InfoMax (DIM) network to maximize the mutual information (MI) between the input image and encoder output, which doesn't need any auxiliary labels. To evaluate the effectiveness of the DIM network, we propose end-to-end Global-DIM and Local-DIM models. Additionally, the DIM network provides a new solution for the cross-dataset unsupervised ReID issue as it needs no extra labels. The experiments prove the superiority of MI theory on the ReID issue, achieving state-of-the-art results.
Labels: cs.CV; __index_level_0__: 141,839
2308.13820
Video and Audio are Images: A Cross-Modal Mixer for Original Data on Video-Audio Retrieval
Cross-modal retrieval has become popular in recent years, particularly with the rise of multimedia. Generally, the information from each modality exhibits distinct representations and semantic information, so features tend to lie in separate latent spaces encoded with a dual-tower architecture, making it difficult to establish semantic relationships between modalities and resulting in poor retrieval performance. To address this issue, we propose a novel framework for cross-modal retrieval which consists of a cross-modal mixer, a masked autoencoder for pre-training, and a cross-modal retriever for downstream tasks. Specifically, we first adopt the cross-modal mixer and mask modeling to fuse the original modalities and eliminate redundancy. Then, an encoder-decoder architecture is applied to achieve a fuse-then-separate task in the pre-training phase. We feed masked fused representations into the encoder and reconstruct them with the decoder, ultimately separating the original data of the two modalities. In downstream tasks, we use the pre-trained encoder to build the cross-modal retrieval method. Extensive experiments on 2 real-world datasets show that our approach outperforms previous state-of-the-art methods in video-audio matching tasks, improving retrieval accuracy by up to 2 times. Furthermore, we demonstrate our model's performance by transferring it to other downstream tasks as a universal model.
Labels: cs.IR; __index_level_0__: 388,071
cmp-lg/9611002
Unsupervised Language Acquisition
This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the ``content'' of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.
Labels: cs.CL; __index_level_0__: 536,674
1805.00290
Python Framework for HP Adaptive Discontinuous Galerkin Method for Two Phase Flow in Porous Media
In this paper we present a framework for solving two-phase flow problems in porous media. The discretization is based on a Discontinuous Galerkin method and includes local grid adaptivity and local choice of polynomial degree. The method is implemented using the new Python frontend Dune-FemPy to the open source framework Dune. The code used for the simulations is made available as a Jupyter notebook and can be used through a Docker container. We present a number of time-stepping approaches, ranging from a classical IMPES method to a fully coupled implicit scheme. The implementation of the discretization is very flexible, allowing different formulations of the two-phase flow model and adaptation strategies to be tested.
Labels: cs.CE, Other; __index_level_0__: 96,400
2111.05149
Ethically aligned Deep Learning: Unbiased Facial Aesthetic Prediction
Facial beauty prediction (FBP) aims to develop a machine that automatically assesses facial attractiveness. In the past, these results were highly correlated with human ratings, and therefore also with the annotators' biases. As artificial intelligence can have racist and discriminatory tendencies, the causes of skews in the data must be identified. Developing training data and AI algorithms that are robust against biased information is a new challenge for scientists. As aesthetic judgement is usually biased, we take this one step further and propose an Unbiased Convolutional Neural Network for FBP. While it is possible to create network models that can rate the attractiveness of faces at a high level, from an ethical point of view it is equally important to make sure the model is unbiased. In this work, we introduce AestheticNet, a state-of-the-art attractiveness prediction network, which significantly outperforms competitors with a Pearson correlation of 0.9601. Additionally, we propose a new approach for generating a bias-free CNN to improve fairness in machine learning.
Labels: cs.LG, cs.CV; __index_level_0__: 265,708
cmp-lg/9808015
Error-Driven Pruning of Treebank Grammars for Base Noun Phrase Identification
Finding simple, non-recursive, base noun phrases is an important subtask for many natural language processing applications. While previous empirical methods for base NP identification have been rather complex, this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the task. In particular, we present a corpus-based approach for finding base NPs by matching part-of-speech tag sequences. The training phase of the algorithm is based on two successful techniques: first the base NP grammar is read from a ``treebank'' corpus; then the grammar is improved by selecting rules with high ``benefit'' scores. Using this simple algorithm with a naive heuristic for matching rules, we achieve surprising accuracy in an evaluation on the Penn Treebank Wall Street Journal.
Labels: cs.CL; __index_level_0__: 536,922
2202.11912
A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions
As deep learning (DL) efficacy grows, concerns for poor model explainability grow also. Attribution methods address the issue of explainability by quantifying the importance of an input feature for a model prediction. Among various methods, Integrated Gradients (IG) sets itself apart by claiming other methods failed to satisfy desirable axioms, while IG and methods like it uniquely satisfy said axioms. This paper comments on fundamental aspects of IG and its applications/extensions: 1) We identify key differences between IG function spaces and the supporting literature's function spaces which problematize previous claims of IG uniqueness. We show that with the introduction of an additional axiom, \textit{non-decreasing positivity}, the uniqueness claims can be established. 2) We address the question of input sensitivity by identifying function classes where IG is/is not Lipschitz in the attributed input. 3) We show that axioms for single-baseline methods have analogous properties for methods with probability distribution baselines. 4) We introduce a computationally efficient method of identifying internal neurons that contribute to specified regions of an IG attribution map. Finally, we present experimental results validating this method.
Labels: cs.LG; __index_level_0__: 282,046
1210.6673
Semantically Secure Lattice Codes for the Gaussian Wiretap Channel
We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the {flatness factor} as a design criterion of such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No \textit{a priori} distribution of the message is assumed, and no dither is used in our proposed schemes.
Labels: cs.IT; __index_level_0__: 19,380
2208.12804
Algebraically Explainable Controllers: Decision Trees and Support Vector Machines Join Forces
Recently, decision trees (DT) have been used as an explainable representation of controllers (a.k.a. strategies, policies, schedulers). Although they are often very efficient and produce small and understandable controllers for discrete systems, complex continuous dynamics still pose a challenge. In particular, when the relationships between variables take more complex forms, such as polynomials, they cannot be obtained using the available DT learning procedures. In contrast, support vector machines provide a more powerful representation, capable of discovering many such relationships, but not in an explainable form. Therefore, we suggest to combine the two frameworks in order to obtain an understandable representation over richer, domain-relevant algebraic predicates. We demonstrate and evaluate the proposed method experimentally on established benchmarks.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
314,853
1204.3818
Throughput Optimal Policies for Energy Harvesting Wireless Transmitters with Non-Ideal Circuit Power
Characterizing the fundamental tradeoffs for maximizing energy efficiency (EE) versus spectrum efficiency (SE) is a key problem in wireless communication. In this paper, we address this problem for a point-to-point additive white Gaussian noise (AWGN) channel with the transmitter powered solely via energy harvesting from the environment. In addition, we assume a practical on-off transmitter model with non-ideal circuit power, i.e., when the transmitter is on, its consumed power is the sum of the transmit power and a constant circuit power. Under this setup, we study the optimal transmit power allocation to maximize the average throughput over a finite horizon, subject to the time-varying energy constraint and the non-ideal circuit power consumption. First, we consider the off-line optimization under the assumption that the energy arrival time and amount are a priori known at the transmitter. Although this problem is non-convex due to the non-ideal circuit power, we show an efficient optimal solution that in general corresponds to a two-phase transmission: the first phase with an EE-maximizing on-off power allocation, and the second phase with a SE-maximizing power allocation that is non-decreasing over time, thus revealing an interesting result that both the EE and SE optimizations are unified in an energy harvesting communication system. We then extend the optimal off-line algorithm to the case with multiple parallel AWGN channels, based on the principle of nested optimization. Finally, inspired by the off-line optimal solution, we propose a new online algorithm under the practical setup with only the past and present energy state information (ESI) known at the transmitter.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
15,540
2204.01850
Robust Portfolio Design and Stock Price Prediction Using an Optimized LSTM Model
Accurate prediction of future prices of stocks is a difficult task to perform. Even more challenging is to design an optimized portfolio with weights allocated to the stocks in a way that optimizes its return and the risk. This paper presents a systematic approach towards building two types of portfolios, optimum risk, and eigen, for four critical economic sectors of India. The prices of the stocks are extracted from the web from Jan 1, 2016, to Dec 31, 2020. Sector-wise portfolios are built based on their ten most significant stocks. An LSTM model is also designed for predicting future stock prices. Six months after the construction of the portfolios, i.e., on Jul 1, 2021, the actual returns and the LSTM-predicted returns for the portfolios are computed. A comparison of the predicted and the actual returns indicates a high accuracy level of the LSTM model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
289,749
1905.12061
On Evaluating the Effectiveness of the HoneyBot: A Case Study
In recent years, cyber-physical system (CPS) security as applied to robotic systems has become a popular research area, mainly because robotic systems have traditionally emphasized the completion of a specific objective and lack security-oriented design. Our previous work, HoneyBot \cite{celine}, presented the concept and prototype of the first software hybrid interaction honeypot specifically designed for networked robotic systems. The intuition behind HoneyBot was that it would be a remotely accessible robotic system that could simulate unsafe actions and physically perform safe actions to fool attackers. Unassuming attackers would think they were connected to an ordinary robotic system, believing their exploits were being successfully executed. All the while, the HoneyBot is logging all communications and exploits sent to be used for attacker attribution and threat model creation. In this paper, we present findings from the result of a user study performed to evaluate the effectiveness of the HoneyBot framework and architecture as it applies to real robotic systems. The user study consisted of 40 participants, was conducted over the course of several weeks, and drew from a wide range of participants aged between 18-60 with varying levels of technical expertise. From the study we found that research subjects could not tell the difference between the simulated sensor values and the real sensor values coming from the HoneyBot, meaning the HoneyBot convincingly spoofed communications.
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
132,633
1812.04204
2.5D Visual Sound
Binaural audio provides a listener with 3D sound sensation, allowing a rich perceptual experience of the scene. However, binaural recordings are scarcely available and require nontrivial expertise and equipment to obtain. We propose to convert common monaural audio into binaural audio by leveraging video. The key idea is that visual frames reveal significant spatial cues that, while explicitly lacking in the accompanying single-channel audio, are strongly linked to it. Our multi-modal approach recovers this link from unlabeled video. We devise a deep convolutional neural network that learns to decode the monaural (single-channel) soundtrack into its binaural counterpart by injecting visual information about object and scene configurations. We call the resulting output 2.5D visual sound---the visual stream helps "lift" the flat single channel audio into spatialized sound. In addition to sound generation, we show the self-supervised representation learned by our network benefits audio-visual source separation. Our video results: http://vision.cs.utexas.edu/projects/2.5D_visual_sound/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
116,165
1806.11096
Recovering Trees with Convex Clustering
Convex clustering refers, for given $\left\{x_1, \dots, x_n\right\} \subset \mathbb{R}^p$, to the minimization of \begin{eqnarray*} u(\gamma) & = & \underset{u_1, \dots, u_n }{\arg\min}\;\sum_{i=1}^{n}{\lVert x_i - u_i \rVert^2} + \gamma \sum_{i,j=1}^{n}{w_{ij} \lVert u_i - u_j\rVert},\\ \end{eqnarray*} where $w_{ij} \geq 0$ is an affinity that quantifies the similarity between $x_i$ and $x_j$. We prove that if the affinities $w_{ij}$ reflect a tree structure in the $\left\{x_1, \dots, x_n\right\}$, then the convex clustering solution path reconstructs the tree exactly. The main technical ingredient implies the following combinatorial byproduct: for every set $\left\{x_1, \dots, x_n \right\} \subset \mathbb{R}^p$ of $n \geq 2$ distinct points, there exist at least $n/6$ points with the property that for any of these points $x$ there is a unit vector $v \in \mathbb{R}^p$ such that, when viewed from $x$, `most' points lie in the direction $v$ \begin{eqnarray*} \frac{1}{n-1}\sum_{i=1 \atop x_i \neq x}^{n}{ \left\langle \frac{x_i - x}{\lVert x_i - x \rVert}, v \right\rangle} & \geq & \frac{1}{4}. \end{eqnarray*}
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
101,647
2404.06150
(Not) Understanding Latin Poetic Style with Deep Learning
This article summarizes some mostly unsuccessful attempts to understand authorial style by examining the attention of various neural networks (LSTMs and CNNs) trained on a corpus of classical Latin verse that has been encoded to include sonic and metrical features. Carefully configured neural networks are shown to be extremely strong authorship classifiers, so it is hoped that they might therefore teach `traditional' readers something about how the authors differ in style. Sadly their reasoning is, so far, inscrutable. While the overall goal has not yet been reached, this work reports some useful findings in terms of effective ways to encode and embed verse, the relative strengths and weaknesses of the neural network families, and useful (and not so useful) techniques for designing and inspecting NN models in this domain. This article suggests that, for poetry, CNNs are better choices than LSTMs -- they train more quickly, have equivalent accuracy, and (potentially) offer better interpretability. Based on a great deal of experimentation, it also suggests that simple, trainable embeddings are more effective than domain-specific schemes, and stresses the importance of techniques to reduce overfitting, like dropout and batch normalization.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
445,344
2409.08116
Value of Communication: Data-Driven Topology Optimization for Distributed Linear Cyber-Physical Systems
Communication topology is a crucial part of a distributed control implementation for cyber-physical systems, yet is typically treated as a constraint within control design problems rather than a design variable. We propose a data-driven method for designing an optimal topology for the purpose of distributed control when a system model is unavailable or unaffordable, via a mixed-integer second-order conic program. The approach demonstrates improved control performance over random topologies in simulations and efficiently drops links which have a small effect on predictor accuracy, which we show correlates well with closed-loop control cost.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
487,778
1905.13658
Ordinal Regression as Structured Classification
This paper extends the class of ordinal regression models with a structured interpretation of the problem by applying a novel treatment of encoded labels. The net effect of this is to transform the underlying problem from an ordinal regression task to a (structured) classification task which we solve with conditional random fields, thereby achieving a coherent and probabilistic model in which all model parameters are jointly learnt. Importantly, we show that although we have cast ordinal regression to classification, our method still falls within the class of decomposition methods in the ordinal regression ontology. This is an important link since our experience is that many applications of machine learning to healthcare ignore completely the important nature of the label ordering, and hence these approaches should be considered naive in this ontology. We also show that our model is flexible both in how it adapts to data manifolds and in terms of the operations that are available for a practitioner to execute. Our empirical evaluation demonstrates that the proposed approach overwhelmingly produces superior and often statistically significant results over baseline approaches on forty popular ordinal regression models, and demonstrates that the proposed model significantly out-performs baselines on synthetic and real datasets. Our implementation, together with scripts to reproduce the results of this work, will be available on a public GitHub repository.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
133,208
1906.11336
A Simple Deep Personalized Recommendation System
Recommender systems are critical tools to match listings and travelers in two-sided vacation rental marketplaces. Such systems require high capacity to extract user preferences for items from implicit signals at scale. To learn those preferences, we propose a Simple Deep Personalized Recommendation System to compute travelers' conditional embeddings. Our method combines listing embeddings in a supervised structure to build short-term historical context to personalize recommendations for travelers. Deployed in the production environment, this approach is computationally efficient and scalable, and allows us to capture non-linear dependencies. Our offline evaluation indicates that traveler embeddings created using a Deep Average Network can improve the precision of a downstream conversion prediction model by seven percent, outperforming more complex benchmark methods for online shopping experience personalization.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
136,642
2002.11433
Efficient Semantic Video Segmentation with Per-frame Inference
For semantic segmentation, most existing real-time deep models trained with each frame independently may produce inconsistent results for a video sequence. Advanced methods take into consideration the correlations in the video sequence, e.g., by propagating the results to the neighboring frames using optical flow, or extracting the frame representations with other frames, which may lead to inaccurate results or unbalanced latency. In this work, we perform efficient semantic video segmentation in a per-frame fashion during the inference process. Different from previous per-frame models, we explicitly consider the temporal consistency among frames as extra constraints during the training process and embed the temporal consistency into the segmentation network. Therefore, in the inference process, we can process each frame independently with no latency, and improve the temporal consistency with no extra computational cost and post-processing. We employ compact models for real-time execution. To narrow the performance gap between compact models and large models, new knowledge distillation methods are designed. Our results outperform previous keyframe based methods with a better trade-off between the accuracy and the inference speed on popular benchmarks, including the Cityscapes and Camvid. The temporal consistency is also improved compared with corresponding baselines which are trained with each frame independently. Code is available at: https://tinyurl.com/segment-video
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
165,710
1810.01021
Large batch size training of neural networks with adversarial training and second-order information
The most straightforward method to accelerate Stochastic Gradient Descent (SGD) computation is to distribute the randomly selected batch of inputs over multiple processors. To keep the distributed processors fully utilized requires commensurately growing the batch size. However, large batch training often leads to poorer generalization. A recently proposed solution for this problem is to use adaptive batch sizes in SGD. In this case, one starts with a small number of processes and scales the processes as training progresses. Two major challenges with this approach are (i) that dynamically resizing the cluster can add non-trivial overhead, in part since it is currently not supported, and (ii) that the overall speed up is limited by the initial phase with smaller batches. In this work, we address both challenges by developing a new adaptive batch size framework, with autoscaling based on the Ray framework. This allows very efficient elastic scaling with negligible resizing overhead (0.32\% of time for ResNet18 ImageNet training). Furthermore, we propose a new adaptive batch size training scheme using second order methods and adversarial training. These enable increasing batch sizes earlier during training, which leads to better training time. We extensively evaluate our method on Cifar-10/100, SVHN, TinyImageNet, and ImageNet datasets, using multiple neural networks, including ResNets and smaller networks such as SqueezeNext. Our method exceeds the performance of existing solutions in terms of both accuracy and the number of SGD iterations (up to 1\% and $5\times$, respectively). Importantly, this is achieved without any additional hyper-parameter tuning to tailor our method in any of these experiments.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
109,312
1710.02659
Interference Model Similarity Index and Its Applications to mmWave Networks: Extended version
In wireless communication networks, interference models are routinely used for tasks such as performance analysis, optimization, and protocol design. These tasks are heavily affected by the accuracy and tractability of the interference models. Yet, quantifying the accuracy of these models remains a major challenge. In this paper, we propose a new index for assessing the accuracy of any interference model under any network scenario. Specifically, it is based on a new index that quantifies the ability of any interference model in correctly predicting harmful interference events, that is, link outages. We consider a specific wireless scenario of both conventional sub-6~GHz and millimeter-wave (mmWave) networks and demonstrate how our index yields insights into the possibility of simplifying the set of dominant interferers, replacing a Nakagami or Rayleigh random fading by an equivalent deterministic channel, and ignoring antenna sidelobes. Our analysis reveals that in highly directional antenna settings with obstructions, even simple interference models (such as the classical protocol model) are accurate, while with omnidirectional antennas, more sophisticated and complex interference models (such as the classical physical model) are necessary. We further use the proposed index to develop a simple interference model for mmWave networks that can significantly simplify design principles of the important procedures for wireless communication, such as beamforming, interference management, scheduling, and topology control. Our new approach makes it possible to adopt the simplest interference model of adequate accuracy for every wireless network.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
82,204
2311.01729
CDGraph: Dual Conditional Social Graph Synthesizing via Diffusion Model
The social graphs synthesized by the generative models are increasingly in demand due to data scarcity and concerns over user privacy. One of the key performance criteria for generating social networks is the fidelity to specified conditionals, such as users with certain membership and financial status. While recent diffusion models have shown remarkable performance in generating images, their effectiveness in synthesizing graphs has not yet been explored in the context of conditional social graphs. In this paper, we propose the first kind of conditional diffusion model for social networks, CDGraph, which trains and synthesizes graphs based on two specified conditions. We propose the co-evolution dependency in the denoising process of CDGraph to capture the mutual dependencies between the dual conditions and further incorporate social homophily and social contagion to preserve the connectivity between nodes while satisfying the specified conditions. Moreover, we introduce a novel classifier loss, which guides the training of the diffusion process through the mutual dependency of dual conditions. We evaluate CDGraph against four existing graph generative methods, i.e., SPECTRE, GSM, EDGE, and DiGress, on four datasets. Our results show that the generated graphs from CDGraph achieve much higher dual-conditional validity and lower discrepancy in various social network metrics than the baselines, thus demonstrating its proficiency in generating dual-conditional social graphs.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
405,152
2406.10130
The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models
Pre-trained Language models (PLMs) have been acknowledged to contain harmful information, such as social biases, which may cause negative social impacts or even bring catastrophic results in application. Previous works on this problem mainly focused on using black-box methods such as probing to detect and quantify social biases in PLMs by observing model outputs. As a result, previous debiasing methods mainly finetune or even pre-train language models on newly constructed anti-stereotypical datasets, which are high-cost. In this work, we try to unveil the mystery of social bias inside language models by introducing the concept of {\sc Social Bias Neurons}. Specifically, we propose {\sc Integrated Gap Gradients (IG$^2$)} to accurately pinpoint units (i.e., neurons) in a language model that can be attributed to undesirable behavior, such as social bias. By formalizing undesirable behavior as a distributional property of language, we employ sentiment-bearing prompts to elicit classes of sensitive words (demographics) correlated with such sentiments. Our IG$^2$ thus attributes the uneven distribution for different demographics to specific Social Bias Neurons, which track the trail of unwanted behavior inside PLM units to achieve interpretability. Moreover, derived from our interpretable technique, {\sc Bias Neuron Suppression (BNS)} is further proposed to mitigate social biases. By studying BERT, RoBERTa, and their attributable differences from debiased FairBERTa, IG$^2$ allows us to locate and suppress identified neurons, and further mitigate undesired behaviors. As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability with low cost.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
464,240
1911.08261
Unsupervised AER Object Recognition Based on Multiscale Spatio-Temporal Features and Spiking Neurons
This paper proposes an unsupervised address event representation (AER) object recognition approach. The proposed approach consists of a novel multiscale spatio-temporal feature (MuST) representation of input AER events and a spiking neural network (SNN) using spike-timing-dependent plasticity (STDP) for object recognition with MuST. MuST extracts the features contained in both the spatial and temporal information of AER event flow, and meanwhile forms an informative and compact feature spike representation. We show not only how MuST exploits spikes to convey information more effectively, but also how it benefits the recognition using SNN. The recognition process is performed in an unsupervised manner, which does not need to specify the desired status of every single neuron of SNN, and thus can be flexibly applied in real-world recognition tasks. The experiments are performed on five AER datasets including a new one named GESTURE-DVS. Extensive experimental results show the effectiveness and advantages of this proposed approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
154,143
2308.01946
Experimental Results regarding multiple Machine Learning via Quaternions
This paper presents an experimental study on the application of quaternions in several machine learning algorithms. A quaternion is a mathematical representation of rotation in three-dimensional space, which can be used to represent complex data transformations. In this study, we explore the use of quaternions to represent and classify rotation data, using randomly generated quaternion data and corresponding labels, converting quaternions to rotation matrices, and using them as input features. Combining quaternions with multiple machine learning algorithms has shown higher accuracy and significantly improved performance in prediction tasks. Overall, this study provides an empirical basis for exploiting quaternions for machine learning tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
383,429
2004.10462
Keyphrase Prediction With Pre-trained Language Model
Recently, generative methods have been widely used in keyphrase prediction, thanks to their capability to produce both present keyphrases that appear in the source text and absent keyphrases that do not match any source text. However, the absent keyphrases are generated at the cost of the performance on present keyphrase prediction, since previous works mainly use generative models that rely on the copying mechanism and select words step by step. Besides, the extractive model that directly extracts a text span is more suitable for predicting the present keyphrase. Considering the different characteristics of extractive and generative methods, we propose to divide the keyphrase prediction into two subtasks, i.e., present keyphrase extraction (PKE) and absent keyphrase generation (AKG), to fully exploit their respective advantages. On this basis, a joint inference framework is proposed to make the most of BERT in two subtasks. For PKE, we tackle this task as a sequence labeling problem with the pre-trained language model BERT. For AKG, we introduce a Transformer-based architecture, which fully integrates the present keyphrase knowledge learned from PKE by the fine-tuned BERT. The experimental results show that our approach can achieve state-of-the-art results on both tasks on benchmark datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
173,641
1907.00542
Cosine similarity-based adversarial process
An adversarial process between two deep neural networks is a promising approach to train a robust model. In this paper, we propose an adversarial process using cosine similarity, whereas conventional adversarial processes are based on inverted categorical cross entropy (CCE). When used for training an identification model, the adversarial process induces the competition of two discriminative models; one for a primary task such as speaker identification or image recognition, the other one for a subsidiary task such as channel identification or domain identification. In particular, the adversarial process degrades the performance of the subsidiary model by eliminating the subsidiary information in the input which, in assumption, may degrade the performance of the primary model. The conventional adversarial processes maximize the CCE of the subsidiary model to degrade the performance. We have studied a framework for training robust discriminative models by eliminating channel or domain information (subsidiary information) by applying such an adversarial process. However, we found through experiments that using the process of maximizing the CCE does not guarantee the performance degradation of the subsidiary model. In the proposed adversarial process using cosine similarity, on the contrary, the performance of the subsidiary model can be degraded more efficiently by searching feature space orthogonal to the subsidiary model. The experiments on speaker identification and image recognition show that we found features that make the outputs of the subsidiary models independent of the input, and the performances of the primary models are improved.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
137,083
1812.03040
Per-Flow Cardinality Estimation Based On Virtual LogLog Sketching
Flow cardinality estimation is the problem of estimating the number of distinct elements in a data flow, often with a stringent memory constraint. It has wide applications in network traffic measurement and in database systems. The virtual LogLog algorithm proposed recently by Xiao, Chen, Chen and Ling estimates the cardinalities of a large number of flows with a compact memory. The purpose of this thesis is to explore two new perspectives on the estimation process of this algorithm. Firstly, we propose and investigate a family of estimators that generalizes the original vHLL estimator and evaluate the performance of the vHLL estimator compared to other estimators in this family. Secondly, we propose an alternative solution to the estimation problem by deriving a maximum-likelihood estimator. Empirical evidence from both perspectives suggests the near-optimality of the vHLL estimator for per-flow estimation, analogous to the near-optimality of the HLL estimator for single-flow estimation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
115,920
1607.05810
You want to survive the data deluge: Be careful, Computational Intelligence will not serve you as a rescue boat
We are at the dawn of a new era, where advances in computer power, broadband communication and digital sensor technologies have led to an unprecedented flood of data inundating our surroundings. It is generally believed that means such as Computational Intelligence will help to outlive these tough times. However, these hopes are improperly high. Computational Intelligence is a surprising composition of two mutually exclusive and contradicting constituents that could be coupled only if you disregard and neglect their controversies: "Computational" implies reliance on data processing and "Intelligence" implies reliance on information processing. Only those who are indifferent to the data-information discrepancy can believe that such a combination can be viable. We do not believe in miracles, so we will try to share with you our reservations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
58,799
1103.5405
Network Estimation and Packet Delivery Prediction for Control over Wireless Mesh Networks
Much of the current theory of networked control systems uses simple point-to-point communication models as an abstraction of the underlying network. As a result, the controller has very limited information on the network conditions and performs suboptimally. This work models the underlying wireless multihop mesh network as a graph of links with transmission success probabilities, and uses a recursive Bayesian estimator to provide packet delivery predictions to the controller. The predictions are a joint probability distribution on future packet delivery sequences, and thus capture correlations between successive packet deliveries. We look at finite horizon LQG control over a lossy actuation channel and a perfect sensing channel, both without delay, to study how the controller can compensate for predicted network outages.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
9,783
2302.09405
Doppler-Resilient Universal Filtered MultiCarrier (DR-UFMC): A Beyond-OTFS Modulation
In the past few years, some alternatives to the Orthogonal Frequency Division Multiplexing (OFDM) modulation have been considered to improve its spectral containment and its performance level in the presence of heavy Doppler shifts. This paper examines a novel modulation, named Doppler-Resilient Universal Filtered MultiCarrier (DR-UFMC), which has the objective of combining the advantages provided by the Universal Filtered MultiCarrier (UFMC) modulation (i.e., better spectral containment), with those of the Orthogonal Time Frequency Space (OTFS) modulation (i.e., better performance in time-varying environments). The paper contains the mathematical model and detailed transceiver block scheme of the newly described modulation, along with a numerical analysis contrasting DR-UFMC against OTFS, OFDM with one-tap frequency domain equalization (FDE), and OFDM with multicarrier multisymbol linear MMSE processing. Results clearly show the superiority, with respect to the cited benchmarks, of the newly proposed modulation in terms of achievable spectral efficiency. Interestingly, it is also seen that OFDM, when considered in conjunction with multicarrier multisymbol linear minimum mean squares error (MMSE) processing, performs slightly better than OTFS in terms of achievable spectral efficiency.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
346,411
1902.04600
Towards Jointly Optimal Placement and Delivery: To Code or Not to Code in Wireless Caching Networks
Coded caching techniques have received significant attention lately due to their provable gains in reducing the cost of data delivery in wireless networks. These gains, however, have only been demonstrated under the assumption of a free placement phase. This unrealistic assumption poses a significant limitation, especially in cases where aggressive placement strategies can lead to a significant transmission cost that may even be higher than the corresponding cost of the delivery phase. In this paper, we relax this assumption and propose a general caching framework that captures the transmission cost of the two phases, and hence, results in minimizing the overall rate of the caching network. We model the dynamic nature of the network through a cost structure that allows for varying the network architecture and cost per transmission, across the placement and delivery phases. We start with the scenario where the individual users have no limit on the available caching memory and characterize the jointly optimal solution as a function of the different parameters in our cost structure. Then, we characterize the effect of memory constraints on the optimal solution in certain special cases. Interestingly, our results identify regions where the uncoded caching scheme outperforms its coded counterpart. Further, coded caching is shown to offer performance gains only when the network architecture during the placement phase is different from that during the delivery phase.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
121,370
1910.06582
Identification of Behavioural Models for Railway Turnouts Monitoring
This study introduces a low-complexity behavioural model to describe the dynamic response of railway turnouts due to the ballast and railpad components. The behavioural model should serve as the basis for the future development of a supervisory system for the continuous monitoring of turnouts. A fourth order linear model is proposed based on spectral analysis of measured rail vertical accelerations gathered during a receptance test, and it is then identified at several sections of the turnout applying the Eigensystem Realization Algorithm. The predictiveness and robustness of the behavioural models have been assessed on a large data set of train passages differing in train type, speed and loading condition. Lastly, the need for a novel modeling method is argued in relation to high-fidelity mechanistic models widely used in the railway engineering community.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
149,384
2409.12306
Measuring Sound Symbolism in Audio-visual Models
Audio-visual pre-trained models have gained substantial attention recently and demonstrated superior performance on various audio-visual tasks. This study investigates whether pre-trained audio-visual models demonstrate non-arbitrary associations between sounds and visual representations$\unicode{x2013}$known as sound symbolism$\unicode{x2013}$which is also observed in humans. We developed a specialized dataset with synthesized images and audio samples and assessed these models using a non-parametric approach in a zero-shot setting. Our findings reveal a significant correlation between the models' outputs and established patterns of sound symbolism, particularly in models trained on speech data. These results suggest that such models can capture sound-meaning connections akin to human language processing, providing insights into both cognitive architectures and machine learning strategies.
false
false
true
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
489,525
2304.10780
Omni-Line-of-Sight Imaging for Holistic Shape Reconstruction
We introduce Omni-LOS, a neural computational imaging method for conducting holistic shape reconstruction (HSR) of complex objects utilizing a Single-Photon Avalanche Diode (SPAD)-based time-of-flight sensor. As illustrated in Fig. 1, our method enables new capabilities to reconstruct near-$360^\circ$ surrounding geometry of an object from a single scan spot. In such a scenario, traditional line-of-sight (LOS) imaging methods only see the front part of the object and typically fail to recover the occluded back regions. Inspired by recent advances of non-line-of-sight (NLOS) imaging techniques which have demonstrated great power to reconstruct occluded objects, Omni-LOS marries LOS and NLOS together, leveraging their complementary advantages to jointly recover the holistic shape of the object from a single scan position. The core of our method is to place the object near diffuse walls and augment the LOS scan in the front view with the NLOS scans from the surrounding walls, which serve as virtual ``mirrors'' to trap light toward the object. Instead of separately recovering the LOS and NLOS signals, we adopt an implicit neural network to represent the object, analogous to NeRF and NeTF. While transients are measured along straight rays in LOS but over the spherical wavefronts in NLOS, we derive differentiable ray propagation models to simultaneously model both types of transient measurements so that the NLOS reconstruction also takes into account the direct LOS measurements and vice versa. We further develop a proof-of-concept Omni-LOS hardware prototype for real-world validation. Comprehensive experiments on various wall settings demonstrate that Omni-LOS successfully resolves shape ambiguities caused by occlusions, achieves high-fidelity 3D scan quality, and manages to recover objects of various scales and complexity.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
359,556
2311.10837
Evaluating the Relationship Between News Source Sharing and Political Beliefs
In an era marked by an abundance of news sources, access to information significantly influences public opinion. Notably, the bias of news sources often serves as an indicator of individuals' political leanings. This study explores this hypothesis by examining the news sharing behavior of politically active social media users, whose political ideologies were identified in a previous study. Using correspondence analysis, we estimate the Media Sharing Index (MSI), a measure that captures bias in media outlets and user preferences within a hidden space. During Argentina's 2019 election on Twitter, we observed a predictable pattern: center-right individuals predominantly shared media from center-right biased outlets. However, it is noteworthy that those with center-left inclinations displayed a more diverse media consumption, which is a significant finding. Despite a noticeable polarization based on political affiliation observed in a retweet network analysis, center-left users showed more diverse media sharing preferences, particularly concerning the MSI. Although these findings are specific to Argentina, the developed methodology can be applied in other countries to assess the correlation between users' political leanings and the media they share.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
408,687
2112.10275
Parallel Multi-Scale Networks with Deep Supervision for Hand Keypoint Detection
Keypoint detection plays an important role in a wide range of applications. However, predicting keypoints of small objects such as human hands is a challenging problem. Recent works fuse feature maps of deep Convolutional Neural Networks (CNNs), either via multi-level feature integration or multi-resolution aggregation. Despite achieving some success, the feature fusion approaches increase the complexity and the opacity of CNNs. To address this issue, we propose a novel CNN model named Parallel Multi-Scale Deep Supervision Network (P-MSDSNet) that learns feature maps at different scales with deep supervision to produce attention maps for adaptive feature propagation from layer to layer. P-MSDSNet has a multi-stage architecture which makes it scalable, while its deep supervision with spatial attention improves transparency of the feature learning at each stage. We show that P-MSDSNet outperforms the state-of-the-art approaches on benchmark datasets while requiring fewer parameters. We also show the application of P-MSDSNet to quantify finger tapping hand movements in a neuroscience study.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
272,392
2502.07811
CrossVideoMAE: Self-Supervised Image-Video Representation Learning with Masked Autoencoders
Current video-based Masked Autoencoders (MAEs) primarily focus on learning effective spatiotemporal representations from a visual perspective, which may lead the model to prioritize general spatial-temporal patterns but often overlook nuanced semantic attributes like specific interactions or sequences that define actions - such as action-specific features that align more closely with human cognition for space-time correspondence. This can limit the model's ability to capture the essence of certain actions that are contextually rich and continuous. Humans are capable of mapping visual concepts, object view invariance, and semantic attributes available in static instances to comprehend natural dynamic scenes or videos. Existing MAEs for videos and static images rely on separate datasets for videos and images, which may lack the rich semantic attributes necessary for fully understanding the learned concepts, especially when compared to using video and corresponding sampled frame images together. To this end, we propose CrossVideoMAE, an end-to-end self-supervised cross-modal contrastive learning MAE that effectively learns both video-level and frame-level rich spatiotemporal representations and semantic attributes. Our method integrates mutual spatiotemporal information from videos with spatial information from sampled frames within a feature-invariant space, while encouraging invariance to augmentations within the video domain. This objective is achieved through jointly embedding features of visible tokens and combining feature correspondence within and across modalities, which is critical for acquiring rich, label-free guiding signals from both video and frame image modalities in a self-supervised manner. Extensive experiments demonstrate that our approach surpasses previous state-of-the-art methods, and ablation studies validate the effectiveness of our approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
532,767
2001.11713
Stable Prediction with Model Misspecification and Agnostic Distribution Shift
For many machine learning algorithms, two main assumptions are required to guarantee performance. One is that the test data are drawn from the same distribution as the training data, and the other is that the model is correctly specified. In real applications, however, we often have little prior knowledge on the test data and on the underlying true model. Under model misspecification, agnostic distribution shift between training and test data leads to inaccuracy of parameter estimation and instability of prediction across unknown test data. To address these problems, we propose a novel Decorrelated Weighting Regression (DWR) algorithm which jointly optimizes a variable decorrelation regularizer and a weighted regression model. The variable decorrelation regularizer estimates a weight for each sample such that variables are decorrelated on the weighted training data. Then, these weights are used in the weighted regression to improve the accuracy of estimation on the effect of each variable, thus helping to improve the stability of prediction across unknown test data. Extensive experiments clearly demonstrate that our DWR algorithm can significantly improve the accuracy of parameter estimation and stability of prediction with model misspecification and agnostic distribution shift.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
162,147
1908.09766
A Hybrid of Adaptation and Dynamic Routing based on SDN for Improving QoE in HTTP Adaptive VBR Video Streaming
Recently, HTTP Adaptive Streaming (HAS) has received significant attention from both industry and academia owing to its ability to enhance media streaming services over the Internet. Recent research solutions that have tried to improve HAS by adaptation at the client side only may not be completely effective without interacting with routing decisions in the upper layers. In this paper, we address the aforementioned issue by proposing a dynamic bandwidth allocation and management architecture for streaming video flows to improve user satisfaction. We also introduce an initial cross-layer hybrid method that combines quality adaptation of variable bitrate video streaming over the HTTP protocol at the client side and SDN-based dynamic routing. This scheme is enabled by the Software Defined Networking architecture, which is now being considered as an emerging paradigm that disassociates the forwarding process from the routing process. SDN brings flexibility and the ability to flexibly change routing solutions, in turn resulting in dynamically improving the services provided in the application layer. Our experimental results show that the proposed solution offers significantly higher overall bitrates as well as a smoother viewing experience than existing methods.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
142,927
1506.04299
Query-Answer Causality in Databases: Abductive Diagnosis and View-Updates
Causality has been recently introduced in databases, to model, characterize and possibly compute causes for query results (answers). Connections between query causality and consistency-based diagnosis and database repairs (w.r.t. integrity constraint violations) have been established in the literature. In this work we establish connections between query causality and abductive diagnosis and the view-update problem. The unveiled relationships allow us to obtain new complexity results for query causality (the main focus of our work) and also for the two other areas.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
44,148
1207.2459
A Study of Bayesian Network Models for Computer-Aided Diagnosis of Brain Tumors
This article describes several models based on Bayesian networks (BN) that capture medical expertise for the diagnosis of brain tumors. Indeed, Bayesian networks are well adapted to representing the uncertainty inherent in the diagnosis of these tumors. In our work, we first tested several Bayesian network structures, some derived from the diagnostic reasoning performed by physicians and others generated automatically. This step aims to find the structure that maximizes diagnostic accuracy. The structure learning algorithms considered are MWST-EM, SEM and SEM+T. To estimate the parameters of the Bayesian network from an incomplete database, we propose an extension of the EM algorithm that incorporates a priori knowledge in the form of thresholds computed in the first phase of the RBE algorithm. The very encouraging results obtained are discussed at the end of the paper.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
17,385
2405.06598
A Lightweight Sparse Focus Transformer for Remote Sensing Image Change Captioning
Remote sensing image change captioning (RSICC) aims to automatically generate sentences that describe content differences in remote sensing bitemporal images. Recently, attention-based transformers have become a prevalent idea for capturing the features of global change. However, existing transformer-based RSICC methods face challenges, e.g., high parameters and high computational complexity caused by the self-attention operation in the transformer encoder component. To alleviate these issues, this paper proposes a Sparse Focus Transformer (SFT) for the RSICC task. Specifically, the SFT network consists of three main components, i.e. a high-level features extractor based on a convolutional neural network (CNN), a sparse focus attention mechanism-based transformer encoder network designed to locate and capture changing regions in dual-temporal images, and a description decoder that embeds images and words to generate sentences for captioning differences. The proposed SFT network can reduce the parameter number and computational complexity by incorporating a sparse attention mechanism within the transformer encoder network. Experimental results on various datasets demonstrate that even with a reduction of over 90\% in parameters and computational complexity for the transformer encoder, our proposed network can still obtain competitive performance compared to other state-of-the-art RSICC methods. The code is available at \href{https://github.com/sundongwei/SFT_chag2cap}{Lite\_Chag2cap}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
453,348
2308.09963
NeutrEx: A 3D Quality Component Measure on Facial Expression Neutrality
Accurate face recognition systems are increasingly important in sensitive applications like border control or migration management. Therefore, it becomes crucial to quantify the quality of facial images to ensure that low-quality images are not affecting recognition accuracy. In this context, the current draft of ISO/IEC 29794-5 introduces the concept of component quality to estimate how single factors of variation affect recognition outcomes. In this study, we propose a quality measure (NeutrEx) based on the accumulated distances of a 3D face reconstruction to a neutral expression anchor. Our evaluations demonstrate the superiority of our proposed method compared to baseline approaches obtained by training Support Vector Machines on face embeddings extracted from a pre-trained Convolutional Neural Network for facial expression classification. Furthermore, we highlight the explainable nature of our NeutrEx measures by computing per-vertex distances to unveil the most impactful face regions and allow operators to give actionable feedback to subjects.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
386,510
1711.00548
Autonomous Electric Race Car Design
Autonomous driving and electric vehicles are nowadays very active research and development areas. In this paper we present the conversion of a standard Kyburz eRod into an autonomous vehicle that can be operated in challenging environments such as Swiss mountain passes. The overall hardware and software architectures are described in detail with a special emphasis on the sensor requirements for autonomous vehicles operating in partially structured environments. Furthermore, the design process itself and the finalized system architecture are presented. The work shows state of the art results in localization and controls for self-driving high-performance electric vehicles. Test results of the overall system are presented, which show the importance of generalizable state estimation algorithms to handle a plethora of conditions.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
83,739
2110.06777
Incremental Ensemble Gaussian Processes
Belonging to the family of Bayesian nonparametrics, Gaussian process (GP) based approaches have well-documented merits not only in learning over a rich class of nonlinear functions, but also in quantifying the associated uncertainty. However, most GP methods rely on a single preselected kernel function, which may fall short in characterizing data samples that arrive sequentially in time-critical applications. To enable {\it online} kernel adaptation, the present work advocates an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an {\it ensemble} of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary. With each GP expert leveraging the random feature-based approximation to perform online prediction and model update with {\it scalability}, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions. Further, the novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner. To benchmark the performance of IE-GP and its dynamic variant in the adversarial setting where the modeling assumptions are violated, rigorous performance analysis has been conducted via the notion of regret, as the norm in online convex optimization. Last but not least, online unsupervised learning for dimensionality reduction is explored under the novel IE-GP framework. Synthetic and real data tests demonstrate the effectiveness of the proposed schemes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
260,740
2105.07644
Collaborative Mapping of Archaeological Sites using multiple UAVs
UAVs have found an important application in archaeological mapping. The majority of existing methods process the data collected from an archaeological site offline, which is time-consuming and computationally expensive. In this paper, we present a multi-UAV approach for faster mapping of archaeological sites. Employing a team of UAVs not only reduces the mapping time by distribution of coverage area, but also improves the map accuracy by exchange of information. Through extensive experiments in a realistic simulation (AirSim), we demonstrate the advantages of using a collaborative mapping approach. We then create the first 3D map of the Sadra Fort, a 15th Century Fort located in Gujarat, India, using our proposed method. Additionally, we present two novel archaeological datasets recorded in both simulation and the real world to facilitate research on collaborative archaeological mapping. For the benefit of the community, we make the AirSim simulation environment, as well as the datasets, publicly available.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
235,508
2204.08377
Strengthening Subcommunities: Towards Sustainable Growth in AI Research
AI's rapid growth has been felt acutely by scholarly venues, leading to growing pains within the peer review process. These challenges largely center on the inability of specific subareas to identify and evaluate work that is appropriate according to criteria relevant to each subcommunity as determined by stakeholders of that subarea. We set forth a proposal that re-focuses efforts within these subcommunities through a decentralization of the reviewing and publication process. Through this re-centering effort, we hope to encourage each subarea to confront the issues specific to their process of academic publication and incentivization. This model has historically been successful for several subcommunities in AI, and we highlight those instances as examples for how the broader field can continue to evolve despite its continually growing size.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
292,059
2007.03819
Expressive Interviewing: A Conversational System for Coping with COVID-19
The ongoing COVID-19 pandemic has raised concerns for many regarding personal and public health implications, financial security and economic stability. Alongside many other unprecedented challenges, there are increasing concerns over social isolation and mental health. We introduce \textit{Expressive Interviewing}--an interview-style conversational system that draws on ideas from motivational interviewing and expressive writing. Expressive Interviewing seeks to encourage users to express their thoughts and feelings through writing by asking them questions about how COVID-19 has impacted their lives. We present relevant aspects of the system's design and implementation as well as quantitative and qualitative analyses of user interactions with the system. In addition, we conduct a comparative evaluation with a general purpose dialogue system for mental health that shows our system's potential in helping users to cope with COVID-19 issues.
true
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
186,173
2405.10098
When Large Language Model Meets Optimization
Optimization algorithms and large language models (LLMs) enhance decision-making in dynamic environments by integrating artificial intelligence with traditional techniques. LLMs, with extensive domain knowledge, facilitate intelligent modeling and strategic decision-making in optimization, while optimization algorithms refine LLM architectures and output quality. This synergy offers novel approaches for advancing general AI, addressing both the computational challenges of complex problems and the application of LLMs in practical scenarios. This review outlines the progress and potential of combining LLMs with optimization algorithms, providing insights for future research directions.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
454,644
2410.06833
Dynamic metastability in the self-attention model
We consider the self-attention model - an interacting particle system on the unit sphere, which serves as a toy model for Transformers, the deep neural network architecture behind the recent successes of large language models. We prove the appearance of dynamic metastability conjectured in [GLPR23] - although particles collapse to a single cluster in infinite time, they remain trapped near a configuration of several clusters for an exponentially long period of time. By leveraging a gradient flow interpretation of the system, we also connect our result to an overarching framework of slow motion of gradient flows proposed by Otto and Reznikoff [OR07] in the context of coarsening and the Allen-Cahn equation. We finally probe the dynamics beyond the exponentially long period of metastability, and illustrate that, under an appropriate time-rescaling, the energy reaches its global maximum in finite time and has a staircase profile, with trajectories manifesting saddle-to-saddle-like behavior, reminiscent of recent works in the analysis of training dynamics via gradient descent for two-layer neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
496,366
1510.02513
A novel mutation operator based on the union of fitness and design spaces information for Differential Evolution
Differential Evolution (DE) is one of the most successful and powerful evolutionary algorithms for global optimization problems. The most important operator in this algorithm is the mutation operator, in which the participating parents are selected randomly. Recently, numerous papers have tried to make this operator more intelligent by selecting the parents for mutation intelligently. The intelligent selection of mutation vectors is performed by applying a design space (also known as decision space) criterion or a fitness space criterion; however, in both cases, half of the valuable information of the problem space is disregarded. In this article, a Universal Differential Evolution (UDE) is proposed which takes advantage of both design and fitness space criteria for intelligent selection of mutation vectors. The experimental analysis of UDE is performed on the CEC2005 benchmarks, and the results show that UDE significantly improved the performance of differential evolution in comparison with other methods that use only one criterion for intelligent selection.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
47,726
1507.04888
Maximum Entropy Deep Inverse Reinforcement Learning
This paper presents a general framework for exploiting the representational capacity of neural networks to approximate complex, nonlinear reward functions in the context of solving the inverse reinforcement learning (IRL) problem. We show in this context that the Maximum Entropy paradigm for IRL lends itself naturally to the efficient training of deep architectures. At test time, the approach leads to a computational complexity independent of the number of demonstrations, which makes it especially well-suited for applications in life-long learning scenarios. Our approach achieves performance commensurate to the state-of-the-art on existing benchmarks while exceeding on an alternative benchmark based on highly varying reward structures. Finally, we extend the basic architecture - which is equivalent to a simplified subclass of Fully Convolutional Neural Networks (FCNNs) with width one - to include larger convolutions in order to eliminate dependency on precomputed spatial features and work on raw input representations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
45,224
2411.15869
Self-Calibrated CLIP for Training-Free Open-Vocabulary Segmentation
Recent advancements in pre-trained vision-language models like CLIP, have enabled the task of open-vocabulary segmentation. CLIP demonstrates impressive zero-shot capabilities in various downstream tasks that require holistic image understanding. However, due to its image-level pre-training, CLIP struggles to capture local details, resulting in poor performance in segmentation tasks. Our analysis reveals that anomaly tokens emerge during the forward pass, drawing excessive attention from normal patch tokens, thereby diminishing spatial awareness. To address this issue, we propose Self-Calibrated CLIP (SC-CLIP), a training-free method that calibrates CLIP to produce finer-grained representations while preserving its original generalization ability, without introducing new parameters or relying on additional backbones. Specifically, we first identify and resolve the anomaly tokens to mitigate their negative impact. Next, we enhance feature discriminability and attention correlation by leveraging the semantic consistency found in CLIP's intermediate features. Furthermore, we employ multi-level feature fusion to enrich details. Collectively, these strategies enhance CLIP's feature representation with greater granularity and coherence. Experimental results demonstrate the effectiveness of SC-CLIP, achieving state-of-the-art results across eight semantic segmentation datasets and surpassing previous methods by 9.5%. Notably, SC-CLIP boosts the performance of vanilla CLIP ViT-L/14 by 6.8 times. Our source code is available at https://github.com/SuleBai/SC-CLIP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
510,806
2302.11306
Human MotionFormer: Transferring Human Motions with Vision Transformers
Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis. An accurate matching between the source person and the target motion in both large and subtle motion changes is vital for improving the transferred motion quality. In this paper, we propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching, respectively. It consists of two ViT encoders to extract input features (i.e., a target motion image and a source human image) and a ViT decoder with several cascaded blocks for feature matching and motion transfer. In each block, we set the target motion feature as Query and the source person as Key and Value, calculating the cross-attention maps to conduct a global feature matching. Further, we introduce a convolutional layer to improve the local perception after the global cross-attention computations. This matching process is implemented in both warping and generation branches to guide the motion transfer. During training, we propose a mutual learning loss to enable the co-supervision between warping and generation branches for better motion representations. Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively. Project page: \url{https://github.com/KumapowerLIU/Human-MotionFormer}
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
347,164
2204.10945
Noncooperative Herding With Control Barrier Functions: Theory and Experiments
In this paper, we consider the problem of protecting a high-value unit from inadvertent attack by a group of agents using defending robots. Specifically, we develop a control strategy for the defending agents that we call "dog robots" to prevent a flock of "sheep agents" from breaching a protected zone. We take recourse to control barrier functions to pose this problem and exploit the interaction dynamics between the sheep and dogs to find dogs' velocities that result in the sheep getting repelled from the zone. We solve a QP reactively that incorporates the defending constraints to compute the desired velocities for all dogs. Owing to this, our proposed framework is composable, \textit{i.e.}, it allows for simultaneous inclusion of multiple protected zones in the constraints on dog robots' velocities. We provide a theoretical proof of feasibility of our strategy for the one dog/one sheep case. Additionally, we provide empirical results of two dogs defending the protected zone from up to ten sheep averaged over a hundred simulations and report high success rates. We also demonstrate this algorithm experimentally on non-holonomic robots. Videos of these results are available at https://tinyurl.com/4dj2kjwx.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
292,971
2412.04853
Budgeted Spatial Data Acquisition: When Coverage and Connectivity Matter
Data is undoubtedly becoming a commodity like oil, land, and labor in the 21st century. Although there have been many successful marketplaces for data trading, the existing data marketplaces lack consideration of the case where buyers want to acquire a collection of datasets (instead of one), and the overall spatial coverage and connectivity matter. In this paper, we make a first attempt to formulate this problem as Budgeted Maximum Coverage with Connectivity Constraint (BMCC), which aims to acquire a dataset collection with the maximum spatial coverage under a limited budget while maintaining spatial connectivity. To solve the problem, we propose two approximate algorithms with detailed theoretical guarantees and time complexity analysis, followed by two acceleration strategies to further improve the efficiency of the algorithm. Experiments are conducted on five real-world spatial dataset collections to verify the efficiency and effectiveness of our algorithms.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
514,597
1904.03264
Attack-Resilient Supervisory Control of Discrete-Event Systems: A Finite-State Transducer Approach
Resilience to sensor and actuator attacks is a major concern in the supervisory control of discrete events in cyber-physical systems (CPS). In this work, we propose a new framework to design supervisors for CPS under attacks using finite-state transducers (FSTs) to model the effects of the discrete events. FSTs can capture a general class of regular-rewriting attacks in which an attacker can nondeterministically rewrite sensing/actuation events according to a given regular relation. These include common insertion, deletion, event-wise replacement, and finite-memory replay attacks. We propose new theorems and algorithms with polynomial complexity to design resilient supervisors against these attacks. We also develop an open-source tool in Python based on the results and illustrate its applicability through a case study.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
126,652
1904.06346
Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation
Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention. As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the "background" usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose Prior-aware Neural Network (PaNN) via explicitly incorporating anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset. As our training objective is difficult to optimize directly using stochastic gradient descent [20], we propose to reformulate it in a min-max form and optimize it via the stochastic primal-dual gradient algorithm. PaNN achieves state-of-the-art performance on the MICCAI2015 challenge "Multi-Atlas Labeling Beyond the Cranial Vault", a competition on organ segmentation in the abdomen. We report an average Dice score of 84.97%, surpassing the prior art by a large margin of 3.27%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
127,534
2006.06069
Robust Spammer Detection by Nash Reinforcement Learning
Online reviews provide product evaluations for customers to make decisions. Unfortunately, the evaluations can be manipulated using fake reviews ("spams") by professional spammers, who have learned increasingly insidious and powerful spamming strategies by adapting to the deployed detectors. Spamming strategies are hard to capture, as they can be varying quickly along time, different across spammers and target products, and more critically, remained unknown in most cases. Furthermore, most existing detectors focus on detection accuracy, which is not well-aligned with the goal of maintaining the trustworthiness of product evaluations. To address the challenges, we formulate a minimax game where the spammers and spam detectors compete with each other on their practical goals that are not solely based on detection accuracy. Nash equilibria of the game lead to stable detectors that are agnostic to any mixed detection strategies. However, the game has no closed-form solution and is not differentiable to admit the typical gradient-based algorithms. We turn the game into two dependent Markov Decision Processes (MDPs) to allow efficient stochastic optimization based on multi-armed bandit and policy gradient. We experiment on three large review datasets using various state-of-the-art spamming and detection strategies and show that the optimization algorithm can reliably find an equilibrial detector that can robustly and effectively prevent spammers with any mixed spamming strategies from attaining their practical goal. Our code is available at https://github.com/YingtongDou/Nash-Detect.
false
false
false
true
false
false
true
false
false
false
false
false
true
false
false
false
false
true
181,297
2409.03142
Causal Temporal Representation Learning with Nonstationary Sparse Transition
Causal Temporal Representation Learning (Ctrl) methods aim to identify the temporal causal dynamics of complex nonstationary temporal sequences. Despite the success of existing Ctrl methods, they require either directly observing the domain variables or assuming a Markov prior on them. Such requirements limit the application of these methods in real-world scenarios when we do not have such prior knowledge of the domain variables. To address this problem, this work adopts a sparse transition assumption, aligned with intuitive human understanding, and presents identifiability results from a theoretical perspective. In particular, we explore under what conditions on the significance of the variability of the transitions we can build a model to identify the distribution shifts. Based on the theoretical result, we introduce a novel framework, Causal Temporal Representation Learning with Nonstationary Sparse Transition (CtrlNS), designed to leverage the constraints on transition sparsity and conditional independence to reliably identify both distribution shifts and latent factors. Our experimental evaluations on synthetic and real-world datasets demonstrate significant improvements over existing baselines, highlighting the effectiveness of our approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
485,930
2004.03555
Entity Linking via Dual and Cross-Attention Encoders
Entity Linking has two main open areas of research: 1) generate candidate entities without using alias tables and 2) generate more contextual representations for both mentions and entities. Recently, a solution has been proposed for the former as a dual-encoder entity retrieval system (Gillick et al., 2019) that learns mention and entity representations in the same space, and performs linking by selecting the nearest entity to the mention in this space. In this work, we use this retrieval system solely for generating candidate entities. We then rerank the entities by using a cross-attention encoder over the target mention and each of the candidate entities. Whereas a dual encoder approach forces all information to be contained in the small, fixed set of vector dimensions used to represent mentions and entities, a cross-attention model allows for the use of detailed information (read: features) from the entirety of each <mention, context, candidate entity> tuple. We experiment with features used in the reranker including different ways of incorporating document-level context. We achieve state-of-the-art results on the TACKBP-2010 dataset, with 92.05% accuracy. Furthermore, we show how the rescoring model generalizes well when trained on the larger CoNLL-2003 dataset and evaluated on TACKBP-2010.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
171,607
1811.01459
Deep Metric Learning by Online Soft Mining and Class-Aware Attention
Deep metric learning aims to learn a deep embedding that can capture the semantic similarity of data points. Given the availability of massive training samples, deep metric learning is known to suffer from slow convergence due to a large fraction of trivial samples. Therefore, most existing methods generally resort to sample mining strategies for selecting nontrivial samples to accelerate convergence and improve performance. In this work, we identify two critical limitations of the sample mining methods, and provide solutions for both of them. First, previous mining methods assign one binary score to each sample, i.e., dropping or keeping it, so they only select a subset of relevant samples in a mini-batch. Therefore, we propose a novel sample mining method, called Online Soft Mining (OSM), which assigns one continuous score to each sample to make use of all samples in the mini-batch. OSM learns extended manifolds that preserve useful intraclass variances by focusing on more similar positives. Second, the existing methods are easily influenced by outliers as they are generally included in the mined subset. To address this, we introduce Class-Aware Attention (CAA) that assigns little attention to abnormal data samples. Furthermore, by combining OSM and CAA, we propose a novel weighted contrastive loss to learn discriminative embeddings. Extensive experiments on two fine-grained visual categorisation datasets and two video-based person re-identification benchmarks show that our method significantly outperforms the state-of-the-art.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
112,372
2209.12101
3D Reconstruction using Structured Light from off-the-shelf components
The coordinate measuring machine (CMM) has been the benchmark of accuracy in measuring solid objects for nearly the past 50 years or more. However, with the advent of 3D scanning technology, the accuracy and density of the generated point clouds have taken over. In this project we not only compare the different algorithms that can be used in a 3D scanning software, but also create our own 3D scanner from off-the-shelf components like a camera and projector. Our objectives have been: 1. To develop a prototype for a 3D scanner to achieve a system that performs at optimal accuracy over a wide typology of objects. 2. To minimise the cost using off-the-shelf components. 3. To reach very close to the accuracy of a CMM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
319,417
1701.03942
Can We Find Documents in Web Archives without Knowing their Contents?
Recent advances of preservation technologies have led to an increasing number of Web archive systems and collections. These collections are valuable to explore the past of the Web, but their value can only be uncovered with effective access and exploration mechanisms. Ideal search and ranking methods must be robust to the high redundancy and the temporal noise of contents, as well as scalable to the huge amount of data archived. Despite several attempts in Web archive search, facilitating access to Web archives still remains a challenging problem. In this work, we conduct a first analysis on different ranking strategies that exploit evidences from metadata instead of the full content of documents. We perform a first study to compare the usefulness of non-content evidences to Web archive search, where the evidences are mined from the metadata of file headers, links and URL strings only. Based on these findings, we propose a simple yet surprisingly effective learning model that combines multiple evidences to distinguish "good" from "bad" search results. We conduct empirical experiments quantitatively as well as qualitatively to confirm the validity of our proposed method, as a first step towards better ranking in Web archives taking metadata into account.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
66,783
1004.4181
Displacement Calculus
The Lambek calculus provides a foundation for categorial grammar in the form of a logic of concatenation. But natural language is characterized by dependencies which may also be discontinuous. In this paper we introduce the displacement calculus, a generalization of Lambek calculus, which preserves its good proof-theoretic properties while embracing discontinuity and subsuming it. We illustrate linguistic applications and prove Cut-elimination, the subformula property, and decidability.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
6,259
1912.10515
Bringing Belief Base Change into Dynamic Epistemic Logic
AGM's belief revision is one of the main paradigms in the study of belief change operations. In this context, belief bases (prioritised bases) have been primarily used to specify the agent's belief state. While the connection of iterated AGM-like operations and their encoding in dynamic epistemic logics have been studied before, few works considered how well-known postulates from iterated belief revision theory can be characterised by means of belief bases and their counterpart in dynamic epistemic logic. Particularly, it has been shown that some postulates can be characterised through transformations in priority graphs, while others may not be represented that way. This work investigates changes in the semantics of Dynamic Preference Logic that give rise to an appropriate syntactic representation for its models that allow us to represent and reason about iterated belief base change in this logic.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
158,339
2209.13041
Efficient Concurrent Design of the Morphology of Unmanned Aerial Systems and their Collective-Search Behavior
The collective operation of robots, such as unmanned aerial vehicles (UAVs) operating as a team or swarm, is affected by their individual capabilities, which in turn is dependent on their physical design, aka morphology. However, with the exception of a few (albeit ad hoc) evolutionary robotics methods, there has been very little work on understanding the interplay of morphology and collective behavior. There is especially a lack of computational frameworks to concurrently search for the robot morphology and the hyper-parameters of their behavior model that jointly optimize the collective (team) performance. To address this gap, this paper proposes a new co-design framework. Here the exploding computational cost of an otherwise nested morphology/behavior co-design is effectively alleviated through the novel concept of ``talent'' metrics; while also allowing significantly better solutions compared to the typically sub-optimal sequential morphology$\to$behavior design approach. This framework comprises four major steps: talent metrics selection, talent Pareto exploration (a multi-objective morphology optimization process), behavior optimization, and morphology finalization. This co-design concept is demonstrated by applying it to design UAVs that operate as a team to localize signal sources, e.g., in victim search and hazard localization. Here, the collective behavior is driven by a recently reported batch Bayesian search algorithm called Bayes-Swarm. Our case studies show that the outcome of co-design provides significantly higher success rates in signal source localization compared to a baseline design, across a variety of signal environments and teams with 6 to 15 UAVs. Moreover, this co-design process provides two orders of magnitude reduction in computing time compared to a projected nested design approach.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
319,748
2309.14022
Hashing Neural Video Decomposition with Multiplicative Residuals in Space-Time
We present a video decomposition method that facilitates layer-based editing of videos with spatiotemporally varying lighting and motion effects. Our neural model decomposes an input video into multiple layered representations, each comprising a 2D texture map, a mask for the original video, and a multiplicative residual characterizing the spatiotemporal variations in lighting conditions. A single edit on the texture maps can be propagated to the corresponding locations in the entire video frames while preserving other contents' consistencies. Our method efficiently learns the layer-based neural representations of a 1080p video in 25s per frame via coordinate hashing and allows real-time rendering of the edited result at 71 fps on a single GPU. Qualitatively, we run our method on various videos to show its effectiveness in generating high-quality editing effects. Quantitatively, we propose to adopt feature-tracking evaluation metrics for objectively assessing the consistency of video editing. Project page: https://lightbulb12294.github.io/hashing-nvd/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
394,440
2304.01996
ANTN: Bridging Autoregressive Neural Networks and Tensor Networks for Quantum Many-Body Simulation
Quantum many-body physics simulation has important impacts on understanding fundamental science and has applications to quantum materials design and quantum technology. However, due to the exponentially growing size of the Hilbert space with respect to the particle number, a direct simulation is intractable. While representing quantum states with tensor networks and neural networks are the two state-of-the-art methods for approximate simulations, each has its own limitations in terms of expressivity and inductive bias. To address these challenges, we develop a novel architecture, Autoregressive Neural TensorNet (ANTN), which bridges tensor networks and autoregressive neural networks. We show that Autoregressive Neural TensorNet parameterizes normalized wavefunctions, allows for exact sampling, generalizes the expressivity of tensor networks and autoregressive neural networks, and inherits a variety of symmetries from autoregressive neural networks. We demonstrate our approach on quantum state learning as well as finding the ground state of the challenging 2D $J_1$-$J_2$ Heisenberg model with different systems sizes and coupling parameters, outperforming both tensor networks and autoregressive neural networks. Our work opens up new opportunities for quantum many-body physics simulation, quantum technology design, and generative modeling in artificial intelligence.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
356,283
1503.03650
Geo-SAGE: A Geographical Sparse Additive Generative Model for Spatial Item Recommendation
With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important means to help people discover attractive and interesting venues and events, especially when users travel out of town. However, this recommendation is very challenging compared to the traditional recommender systems. A user can visit only a limited number of spatial items, leading to a very sparse user-item matrix. Most of the items visited by a user are located within a short distance from where he/she lives, which makes it hard to recommend items when the user travels to a far away place. Moreover, user interests and behavior patterns may vary dramatically across different geographical regions. In light of this, we propose Geo-SAGE, a geographical sparse additive generative model for spatial item recommendation in this paper. Geo-SAGE considers both user personal interests and the preference of the crowd in the target region, by exploiting both the co-occurrence pattern of spatial items and the content of spatial items. To further alleviate the data sparsity issue, Geo-SAGE exploits the geographical correlation by smoothing the crowd's preferences over a well-designed spatial index structure called spatial pyramid. We conduct extensive experiments to evaluate the performance of our Geo-SAGE model on two real large-scale datasets. The experimental results clearly demonstrate our Geo-SAGE model outperforms the state-of-the-art in the two tasks of both out-of-town and home-town recommendations.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
false
41,080
2404.11314
Destructive and constructive RIS beamforming in an ISAC-multi-user MIMO network
Integrated sensing and communication (ISAC) has already established itself as a promising solution to the spectrum scarcity problem, even more so when paired with a reconfigurable intelligent surface (RIS), as RISs can shape the propagation environment by adjusting their phase-shift coefficients. Albeit the potential performance gain, a RIS is also a potential security threat to the system. In this paper, we explore both the positive and negative sides of having a RIS in a multi-user multiple-input multiple-output (MIMO) ISAC network. We first develop an alternating optimization algorithm, obtaining the active and passive beamforming vectors that maximize the sensing signal-to-noise ratio (SNR) under minimum signal-to-interference-plus-noise ratio (SINR) constraints for the communication users and finite power budget. We also investigate the destructive potential of the RIS by devising a RIS phase-shift optimization algorithm that minimizes the sensing SNR while preserving the same minimum communication SINR previously guaranteed by the system. We further investigate the impact of the RIS's individual element failures on the system performance. The simulation results show that the RIS performance-boosting potential is as good as its destructive one and that both of our optimization strategies are hindered by the investigated impairments.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
447,459
1808.04876
Plato: Approximate Analytics over Compressed Time Series with Tight Deterministic Error Guarantees
Plato provides fast approximate analytics on time series, by precomputing and storing compressed time series. Plato's key novelty is the delivery of tight deterministic error guarantees for time series analytics. Plato evaluates any time series expression composed by the linear algebra operators over vectors, along with arithmetic operators. This large scope of possible expressions includes common use cases such as correlation and cross-correlation expressions. Each time series is segmented either by fixed-length segmentation or by (a usually more effective) variable-length segmentation. Each segment is compressed by an estimation/compression function that approximates the actual values and is coming from a user-chosen function family, as taught by many prior works. The novelty is that Plato associates to each segment 1 to 3 (depending on the case) precomputed error measures and, using them, Plato computes tight deterministic error guarantees for analytics over the compressions. Importantly, some compression families lead to much better deterministic error guarantees. This work identifies two broad estimation function family groups (Vector Space (VS) and Linear Scalable Family (LSF)), which lead to theoretically and practically high-quality guarantees, even for expressions (eg correlation) that combine multiple time series that have been independently compressed and may, thus, use misaligned segmentations. The theoretical aspect of "high quality" is crisply captured by the Amplitude Independence (AI) property: An AI guarantee does not depend on the amplitude of the involved time series, even when we combine multiple time series. The experiments on four real-life datasets showed that when the novel AI guarantees were applicable, the approximate query results were certified to be very close (typically 1%) to the true results.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
105,250
2109.09047
Model-Free Safety-Critical Control for Robotic Systems
This paper presents a framework for the safety-critical control of robotic systems, when safety is defined on safe regions in the configuration space. To maintain safety, we synthesize a safe velocity based on control barrier function theory without relying on a -- potentially complicated -- high-fidelity dynamical model of the robot. Then, we track the safe velocity with a tracking controller. This culminates in model-free safety critical control. We prove theoretical safety guarantees for the proposed method. Finally, we demonstrate that this approach is application-agnostic. We execute an obstacle avoidance task with a Segway in high-fidelity simulation, as well as with a Drone and a Quadruped in hardware experiments.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
256,135
1909.13607
MGHRL: Meta Goal-generation for Hierarchical Reinforcement Learning
Most meta reinforcement learning (meta-RL) methods learn to adapt to new tasks by directly optimizing the parameters of policies over the primitive action space. Such algorithms work well in tasks with relatively slight differences. However, when the task distribution becomes wider, it would be quite inefficient to directly learn such a meta-policy. In this paper, we propose a new meta-RL algorithm called Meta Goal-generation for Hierarchical RL (MGHRL). Instead of directly generating policies over the primitive action space for new tasks, MGHRL learns to generate high-level meta strategies over subgoals given past experience and leaves the rest of how to achieve subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and generalized meta-learning from past experience.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
147,476
2301.10926
Evolution of Filter Bubbles and Polarization in News Recommendation
Recent work in news recommendation has demonstrated that recommenders can over-expose users to articles that support their pre-existing opinions. However, most existing work focuses on a static setting or over a short time window, leaving open questions about the long-term and dynamic impacts of news recommendations. In this paper, we explore these dynamic impacts through a systematic study of three research questions: 1) How do the news reading behaviors of users change after repeated long-term interactions with recommenders? 2) How do the inherent preferences of users change over time in such a dynamic recommender system? 3) Can the existing SOTA static method alleviate the problem in the dynamic environment? Concretely, we conduct a comprehensive data-driven study through simulation experiments of political polarization in news recommendations based on 40,000 annotated news articles. We find that users are rapidly exposed to more extreme content as the recommender evolves. We also find that a calibration-based intervention can slow down this polarization, but leaves open significant opportunities for future improvements.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
341,959
2207.12687
A Kendall Shape Space Approach to 3D Shape Estimation from 2D Landmarks
3D shapes provide substantially more information than 2D images. However, the acquisition of 3D shapes is sometimes very difficult or even impossible in comparison with acquiring 2D images, making it necessary to derive the 3D shape from 2D images. Although this is, in general, a mathematically ill-posed problem, it might be solved by constraining the problem formulation using prior information. Here, we present a new approach based on Kendall's shape space to reconstruct 3D shapes from single monocular 2D images. The work is motivated by an application to study the feeding behavior of the basking shark, an endangered species whose massive size and mobility render 3D shape data nearly impossible to obtain, hampering understanding of their feeding behaviors and ecology. 2D images of these animals in feeding position, however, are readily available. We compare our approach with state-of-the-art shape-based approaches, both on human stick models and on shark head skeletons. Using a small set of training shapes, we show that the Kendall shape space approach is substantially more robust than previous methods and results in plausible shapes. This is essential for the motivating application in which specimens are rare and therefore only few training shapes are available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
310,088
1803.05383
Use of recurrent infomax to improve the memory capability of input-driven recurrent neural networks
The inherent transient dynamics of recurrent neural networks (RNNs) have been exploited as a computational resource in input-driven RNNs. However, the information processing capability varies from RNN to RNN, depending on their properties. Many authors have investigated the dynamics of RNNs and their relevance to the information processing capability. In this study, we present a detailed analysis of the information processing capability of an RNN optimized by recurrent infomax (RI), which is an unsupervised learning scheme that maximizes the mutual information of RNNs by adjusting the connection strengths of the network. Thus, we observe that a delay-line structure emerges from the RI and the network optimized by the RI possesses superior short-term memory, which is the ability to store the temporal information of the input stream in its transient dynamics.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
92,625
2008.03694
LiDAR Data Enrichment Using Deep Learning Based on High-Resolution Image: An Approach to Achieve High-Performance LiDAR SLAM Using Low-cost LiDAR
LiDAR-based SLAM algorithms have been extensively studied over the past decades to provide robust and accurate positioning for autonomous driving vehicles (ADV). Satisfactory performance can be obtained using high-grade 3D LiDAR with 64 channels, which can provide dense point clouds. Unfortunately, the high price significantly prevents its extensive commercialization in ADV. The cost-effective 3D LiDAR with 16 channels is a promising replacement. However, only limited and sparse point clouds can be provided by the 16-channel LiDAR, which cannot guarantee sufficient positioning accuracy for ADV in challenging dynamic environments. The high-resolution image from the low-cost camera can provide ample information about the surroundings. However, explicit depth information is not available from the image. Inspired by the complementariness of 3D LiDAR and camera, this paper proposes to make use of the high-resolution images from a camera to enrich the raw 3D point clouds from the low-cost 16-channel LiDAR based on a state-of-the-art deep learning algorithm. An ERFNet is first employed to segment the image with the aid of the raw sparse 3D point clouds. Meanwhile, a sparse convolutional neural network is employed to predict dense point clouds based on the raw sparse 3D point clouds. Then, the predicted dense point clouds are fused with the segmentation outputs from ERFNet using a novel multi-layer convolutional neural network to refine the predicted 3D point clouds. Finally, the enriched point clouds are employed to perform LiDAR SLAM based on the state-of-the-art normal distribution transform (NDT). We tested our approach on the re-edited KITTI datasets: (1) the sparse 3D point clouds are significantly enriched, with a mean square error of 1.1 m; (2) the map generated from the LiDAR SLAM is denser and includes more details without significant accuracy loss.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
190,997
2312.03740
A Survey on Prompting Techniques in LLMs
Autoregressive Large Language Models have transformed the landscape of Natural Language Processing. The pre-train and prompt paradigm has replaced the conventional approach of pre-training and fine-tuning for many downstream NLP tasks. This shift has been possible largely due to LLMs and innovative prompting techniques. LLMs have shown great promise for a variety of downstream tasks owing to their vast parameters and the huge datasets that they are pre-trained on. However, in order to fully realize their potential, their outputs must be guided towards the desired outcomes. Prompting, in which a specific input or instruction is provided to guide the LLMs toward the intended output, has become a tool for achieving this goal. In this paper, we discuss the various prompting techniques that have been applied to fully harness the power of LLMs. We present a taxonomy of existing literature on prompting techniques and provide a concise survey based on this taxonomy. Further, we identify some open problems in the realm of prompting in autoregressive LLMs which could serve as a direction for future research.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
413,394
1703.10898
Thin-Slicing Network: A Deep Structured Model for Pose Estimation in Videos
Deep ConvNets have been shown to be effective for the task of human pose estimation from single images. However, several challenging issues arise in the video-based case such as self-occlusion, motion blur, and uncommon poses with few or no examples in training data sets. Temporal information can provide additional cues about the location of body joints and help to alleviate these issues. In this paper, we propose a deep structured model to estimate a sequence of human poses in unconstrained videos. This model can be efficiently trained in an end-to-end manner and is capable of representing appearance of body joints and their spatio-temporal relationships simultaneously. Domain knowledge about the human body is explicitly incorporated into the network providing effective priors to regularize the skeletal structure and to enforce temporal consistency. The proposed end-to-end architecture is evaluated on two widely used benchmarks (Penn Action dataset and JHMDB dataset) for video-based pose estimation. Our approach significantly outperforms the existing state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
70,986
1903.12480
Dynamic mode decomposition for multiscale nonlinear physics
We present a data-driven method for separating complex, multiscale systems into their constituent time-scale components using a recursive implementation of dynamic mode decomposition (DMD). Local linear models are built from windowed subsets of the data, and dominant time scales are discovered using spectral clustering on their eigenvalues. This approach produces time series data for each identified component, which sum to a faithful reconstruction of the input signal. It differs from most other methods in the field of multiresolution analysis (MRA) in that it 1) accounts for spatial and temporal coherencies simultaneously, making it more robust to scale overlap between components, and 2) yields a closed-form expression for local dynamics at each scale, which can be used for short-term prediction of any or all components. Our technique is an extension of multi-resolution dynamic mode decomposition (mrDMD), generalized to treat a broader variety of multiscale systems and more faithfully reconstruct their isolated components. In this paper we present an overview of our algorithm and its results on two example physical systems, and briefly discuss some advantages and potential forecasting applications for the technique.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
125,739
2010.13386
Video-based Facial Expression Recognition using Graph Convolutional Networks
Facial expression recognition (FER), aiming to classify the expression present in a facial image or video, has attracted a lot of research interest in the fields of artificial intelligence and multimedia. For the video-based FER task, it is sensible to capture the dynamic expression variation among the frames to recognize facial expression. However, existing methods directly utilize CNN-RNN or 3D CNN to extract spatial-temporal features from different facial units, instead of concentrating on a certain region during expression variation capturing, which leads to limited performance in FER. In our paper, we introduce a Graph Convolutional Network (GCN) layer into a common CNN-RNN based model for video-based FER. First, the GCN layer is utilized to learn more significant facial expression features which concentrate on certain regions after sharing information between the extracted CNN features of nodes. Then, an LSTM layer is applied to learn long-term dependencies among the GCN-learned features to model the variation. In addition, a weight assignment mechanism is also designed to weight the outputs of different nodes for final classification by characterizing the expression intensity in each frame. To the best of our knowledge, this is the first time a GCN has been used for the FER task. We evaluate our method on three widely used datasets, CK+, Oulu-CASIA and MMI, and also on the challenging wild dataset AFEW8.0, and the experimental results demonstrate that our method has superior performance to existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
203,111
2007.02410
Charge-voltage relation for a universal capacitor
Most capacitors do not satisfy the conventional assumption of a constant capacitance. They exhibit memory which is often described by a time-varying capacitance. It is shown that the classical relation, $Q\left(t\right)=CV\left(t\right)$, that relates the charge, $Q$, with the capacitance, $C$, and the voltage, $V$, is not applicable for capacitors with a time-varying capacitance. The expression for the current, $dQ/dt$, that is subsequently obtained following the substitution of $C$ by $C\left(t\right)$ in the classical relation corresponds to an inconsistent circuit. In order to address the inconsistency, I propose a charge-voltage relation according to which the charge on a capacitor is expressed by the convolution of its time-varying capacitance with the first-order time-derivative of the applied voltage, i.e., $Q\left(t\right)=C\left(t\right)\ast dV/dt$. This relation corresponds to the universal capacitor which is also known as the fractional capacitor among the fractional calculus community. Since the fractional capacitor has an inherent connection with the universal dielectric response that is expressed by the century old Curie-von Schweidler law, the finding extends to the study of dielectrics as well.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
185,729