| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1902.09192 | 2916446912 | We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs). BVAT addresses a shortcoming of GCNs: they do not consider the smoothness of the model's output distribution under local perturbations around the input. We propose two algorithms, sample-based BVAT and optimization-based BVAT, which promote the smoothness of the model on graph-structured data by either finding virtual adversarial perturbations for a subset of nodes far from each other or generating virtual adversarial perturbations for all nodes with an optimization process. Extensive experiments on three citation network datasets (Cora, Citeseer and Pubmed) and a knowledge graph dataset (NELL) validate the effectiveness of the proposed method, which establishes state-of-the-art results on semi-supervised node classification tasks. | Learning node representations on graphs for semi-supervised and unsupervised learning has drawn increasing attention and has developed mainly along two directions: spectral approaches and non-spectral approaches. On one hand, label propagation @cite_10 , manifold regularization @cite_22 , deep semi-supervised embedding @cite_26 , Chebyshev-expansion-based spatially localized filters @cite_19 and graph convolutional networks @cite_8 inherit ideas from spectral graph theory @cite_4 and demonstrate impressive results on node classification. On the other hand, non-spectral approaches learn graph embeddings directly on spatially local neighborhoods. DeepWalk @cite_1 and its variants @cite_15 @cite_13 learn node representations based on neighborhoods generated by random walks. Planetoid @cite_23 , MoNet @cite_11 and GraphSAGE @cite_7 propose end-to-end frameworks for learning embeddings for semi-supervised or unsupervised learning on graphs. 
Recently, graph attention networks @cite_24 introduce masked self-attentional layers into graph convolutions and establish a strong baseline for transductive and inductive learning on graphs. | {
"cite_N": [
"@cite_13",
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2962756421",
"2407712691",
"2803678876",
"2104290444",
"2962767366",
"2964015378",
"2154851992",
"",
"2964321699",
"2963312446",
"",
"2139823104",
"2558460151"
],
"abstract": [
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"We show how nonlinear embedding algorithms popular for use with \"shallow\" semi-supervised learning techniques such as kernel methods can be easily applied to deep multi-layer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This trick provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.",
"Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to the large body of research on image and text adversarial attack and defense. In this paper, we focus on adversarial attacks that fool the model by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns a generalizable attack policy, while only requiring prediction labels from the target classifier. Also, variants of genetic algorithms and gradient methods are presented for scenarios where prediction confidence or gradients are available. We use both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.",
"We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including support vector machines and regularized least squares can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally we have a brief discussion of unsupervised and fully supervised learning within our general framework.",
"Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real-world applications such as network classification and anomaly detection.",
"",
"In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.",
"We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embeddings and input feature vectors, while in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, we show improved performance over many of the existing models.",
"",
"An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propagation. The resulting learning algorithms have intimate connections with random walks, electric networks, and spectral graph theory. We discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm's ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text classification tasks.",
"Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image, graph, and 3D shape analysis and show that it consistently outperforms previous approaches."
]
} |
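The spectral line of work in the row above culminates in the graph convolutional network (@cite_8), whose layer rule is H' = σ(Â H W) with Â the symmetrically normalized adjacency with self-loops. A minimal NumPy sketch of one propagation step follows; the toy path graph, feature matrix, and identity weights are illustrative, not taken from any cited paper:

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step, H' = ReLU(A_hat @ H @ W),
    with A_hat = D~^{-1/2} (A + I) D~^{-1/2} (the renormalization trick)."""
    a_tilde = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))     # D~^{-1/2} diagonal
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_hat @ features @ weights, 0.0)  # ReLU activation

# Tiny 3-node path graph with 2-dimensional node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)                                           # identity weights for clarity
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Each output row mixes a node's own features with its neighbors' before the linear map, which is exactly the local smoothing that the BVAT regularizer later has to account for: perturbing one node's input propagates to its neighbors' outputs.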
1902.09192 | 2916446912 | We present batch virtual adversarial training (BVAT), a novel regularization method for graph convolutional networks (GCNs). BVAT addresses a shortcoming of GCNs: they do not consider the smoothness of the model's output distribution under local perturbations around the input. We propose two algorithms, sample-based BVAT and optimization-based BVAT, which promote the smoothness of the model on graph-structured data by either finding virtual adversarial perturbations for a subset of nodes far from each other or generating virtual adversarial perturbations for all nodes with an optimization process. Extensive experiments on three citation network datasets (Cora, Citeseer and Pubmed) and a knowledge graph dataset (NELL) validate the effectiveness of the proposed method, which establishes state-of-the-art results on semi-supervised node classification tasks. | There is also interest in applying regularization terms to semi-supervised learning based on the cluster assumption @cite_17 . Various sophisticated solutions have been proposed @cite_27 @cite_2 @cite_14 @cite_18 , which achieve striking results. Among them, virtual adversarial training (VAT) has proved successful in semi-supervised image and text classification tasks @cite_3 @cite_27 . However, VAT is not effective enough when applied straightforwardly to models that deal with graph-structured data, because of the interrelationships between different node instances. For this reason, we propose BVAT in this paper, which introduces a novel regularization term to smooth the output distribution of the models. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_3",
"@cite_27",
"@cite_2",
"@cite_17"
],
"mid": [
"2963558289",
"2592691248",
"2963699875",
"2964159205",
"",
"92894758"
],
"abstract": [
"The recently proposed self-ensembling methods have achieved promising results in deep semi-supervised learning, which penalize inconsistent predictions of unlabeled data under different perturbations. However, they only consider adding perturbations to each single data point, while ignoring the connections between data samples. In this paper, we propose a novel method, called Smooth Neighbors on Teacher Graphs (SNTG). In SNTG, a graph is constructed based on the predictions of the teacher model, i.e., the implicit self-ensemble of models. Then the graph serves as a similarity measure with respect to which the representations of \"similar\" neighboring points are learned to be smooth on the low-dimensional manifold. We achieve state-of-the-art results on semi-supervised learning benchmarks. The error rates are 9.89% and 3.99% for CIFAR-10 with 4000 labels and SVHN with 500 labels, respectively. In particular, the improvements are significant when the labels are fewer. For the non-augmented MNIST with only 20 labels, the error rate is reduced from the previous 4.81% to 1.36%. Our method also shows robustness to noisy labels.",
"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.",
"Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting.",
"We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.",
"",
""
]
} |
1902.09212 | 2949962589 | This is an official PyTorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through superior pose estimation results on two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models are publicly available at this https URL . | Most traditional solutions to single-person pose estimation adopt the probabilistic graphical model or the pictorial structure model @cite_71 @cite_1 , which has recently been improved by exploiting deep learning to better model the unary and pair-wise energies @cite_95 @cite_75 @cite_25 or to imitate the iterative inference process @cite_39 . Nowadays, deep convolutional neural networks provide the dominant solutions @cite_32 @cite_55 @cite_36 @cite_70 @cite_67 @cite_26 @cite_49 @cite_46 . 
There are two mainstream methods: regressing the position of keypoints @cite_29 @cite_34 , and estimating keypoint heatmaps @cite_39 @cite_7 @cite_31 followed by choosing the locations with the highest heat values as the keypoints. | {
"cite_N": [
"@cite_67",
"@cite_26",
"@cite_31",
"@cite_7",
"@cite_36",
"@cite_70",
"@cite_55",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_39",
"@cite_95",
"@cite_71",
"@cite_49",
"@cite_46",
"@cite_34",
"@cite_75",
"@cite_25"
],
"mid": [
"2798922769",
"2798409409",
"2464305746",
"2964105113",
"",
"2894913190",
"",
"2113325037",
"2097151019",
"2375583958",
"2330154883",
"2155394491",
"1994529670",
"2759748774",
"",
"2963474899",
"",
""
],
"abstract": [
"Human pose estimation still faces various difficulties in challenging scenarios. Human parsing, as a closely related task, can provide valuable cues for better pose estimation, which however has not been fully exploited. In this paper, we propose a novel Parsing Induced Learner to exploit parsing information to effectively assist pose estimation by learning to fast adapt the base pose estimation model. The proposed Parsing Induced Learner is composed of a parsing encoder and a pose model parameter adapter, which together learn to predict dynamic parameters of the pose model to extract complementary useful features for more accurate pose estimation. Comprehensive experiments on benchmarks LIP and extended PASCAL-Person-Part show that the proposed Parsing Induced Learner can improve performance of both single- and multi-person pose estimation to new state-of-the-art. Cross-dataset experiments also show that the proposed Parsing Induced Learner from LIP dataset can accelerate learning of a human pose estimation model on MPII benchmark in addition to achieving outperforming performance.",
"Random data augmentation is a critical technique to avoid overfitting in training deep neural network models. However, data augmentation and network training are usually treated as two isolated processes, limiting the effectiveness of network training. Why not jointly optimize the two? We propose adversarial data augmentation to address this limitation. The main idea is to design an augmentation network (generator) that competes against a target network (discriminator) by generating hard' augmentation operations online. The augmentation network explores the weaknesses of the target network, while the latter learns from hard' augmentations to achieve better performance. We also design a reward penalty strategy for effective joint training. We demonstrate our approach on the problem of human pose estimation and carry out a comprehensive experimental analysis, showing that our method can significantly improve state-of-the-art models without additional data efforts.",
"Recently, Deep Convolutional Neural Networks (DCNNs) have been applied to the task of human pose estimation, and have shown its potential of learning better feature representations and capturing contextual relationships. However, it is difficult to incorporate domain prior knowledge such as geometric relationships among body parts into DCNNs. In addition, training DCNN-based body part detectors without consideration of global body joint consistency introduces ambiguities, which increases the complexity of training. In this paper, we propose a novel end-to-end framework for human pose estimation that combines DCNNs with the expressive deformable mixture of parts. We explicitly incorporate domain prior knowledge into the framework, which greatly regularizes the learning process and enables the flexibility of our framework for loopy models or tree-structured models. The effectiveness of jointly learning a DCNN with a deformable mixture of parts model is evaluated through intensive experiments on several widely used benchmarks. The proposed approach significantly improves the performance compared with state-of-the-art approaches, especially on benchmarks with challenging articulations.",
"In this paper, we propose to incorporate convolutional neural networks with a multi-context attention mechanism into an end-to-end framework for human pose estimation. We adopt stacked hourglass networks to generate attention maps from features at multiple resolutions with various semantics. The Conditional Random Field (CRF) is utilized to model the correlations among neighboring regions in the attention map. We further combine the holistic attention model, which focuses on the global consistency of the full human body, and the body part attention model, which focuses on detailed descriptions for different body parts. Hence our model has the ability to focus on different granularity from local salient regions to global semantic consistent spaces. Additionally, we design novel Hourglass Residual Units (HRUs) to increase the receptive field of the network. These units are extensions of residual units with a side branch incorporating filters with larger receptive field, hence features with various scales are learned and combined within the HRUs. The effectiveness of the proposed multi-context attention mechanism and the hourglass residual units is evaluated on two widely used human pose estimation benchmarks. Our approach outperforms all existing methods on both benchmarks over all the body parts. Code has been made publicly available.",
"",
"This paper presents a novel Mutual Learning to Adapt model (MuLA) for joint human parsing and pose estimation. It effectively exploits mutual benefits from both tasks and simultaneously boosts their performance. Different from existing post-processing or multi-task learning based methods, MuLA predicts dynamic task-specific model parameters via recurrently leveraging guidance information from its parallel tasks. Thus MuLA can fast adapt parsing and pose models to provide more powerful representations by incorporating information from their counterparts, giving more robust and accurate results. MuLA is implemented with convolutional neural networks and end-to-end trainable. Comprehensive experiments on benchmarks LIP and extended PASCAL-Person-Part demonstrate the effectiveness of the proposed MuLA model with superior performance to well established baselines.",
"",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-the-art or better performance on four academic benchmarks of diverse real-world images.",
"In this paper we consider the challenging problem of articulated human pose estimation in still images. We observe that despite high variability of the body articulations, human motions and activities often simultaneously constrain the positions of multiple body parts. Modelling such higher order part dependencies seemingly comes at a cost of more expensive inference, which resulted in their limited use in state-of-the-art methods. In this paper we propose a model that incorporates higher order part dependencies while remaining efficient. We achieve this by defining a conditional model in which all body parts are connected a-priori, but which becomes a tractable tree-structured pictorial structures model once the image observations are available. In order to derive a set of conditioning variables we rely on the poselet-based features that have been shown to be effective for people detection but have so far found limited application for articulated human pose estimation. We demonstrate the effectiveness of our approach on three publicly available pose estimation benchmarks improving or being on-par with state of the art in each case.",
"In this work, we present an adaptation of the sequence-to-sequence model for structured vision tasks. In this model, the output variables for a given input are predicted sequentially using neural networks. The prediction for each output variable depends not only on the input but also on the previously predicted output variables. The model is applied to spatial localization tasks and uses convolutional neural networks (CNNs) for processing input images and a multi-scale deconvolutional architecture for making spatial predictions at each step. We explore the impact of weight sharing with a recurrent connection matrix between consecutive predictions, and compare it to a formulation where these weights are not tied. Untied weights are particularly suited for problems with a fixed sized structure, where different classes of output are predicted at different steps. We show that chain models achieve top performing results on human pose estimation from images and videos.",
"In this paper, we propose a structured feature learning framework to reason the correlations among body joints at the feature level in human pose estimation. Different from existing approaches of modeling structures on score maps or predicted labels, feature maps preserve substantially richer descriptions of body joints. The relationships between feature maps of joints are captured with the introduced geometrical transform kernels, which can be easily implemented with a convolution layer. Features and their relationships are jointly learned in an end-to-end learning system. A bi-directional tree structured model is proposed, so that the feature channels at a body joint can well receive information from other joints. The proposed framework improves feature learning substantially. With very simple post processing, it reaches the best mean PCP on the LSP and FLIC datasets. Compared with the baseline of learning features at each joint separately with ConvNet, the mean PCP has been improved by 18% on FLIC. The code is released to the public.",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.",
"We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster.",
"In this paper, we address the problem of estimating the positions of human joints, i.e., articulated pose estimation. Recent state-of-the-art solutions model two key issues, joint detection and spatial configuration refinement, together using convolutional neural networks. Our work mainly focuses on spatial configuration refinement by reducing variations of human poses statistically, which is motivated by the observation that the scattered distribution of the relative locations of joints (e.g., the left wrist is distributed nearly uniformly in a circular area around the left shoulder) makes the learning of convolutional spatial models hard. We present a two-stage normalization scheme, human body normalization and limb normalization, to make the distribution of the relative joint locations compact, resulting in easier learning of convolutional spatial models and more accurate pose estimation. In addition, our empirical results show that incorporating multi-scale supervision and multi-scale fusion into the joint detection network is beneficial. Experiment results demonstrate that our method consistently outperforms state-of-the-art methods on the benchmarks.",
"",
"Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.",
"",
""
]
} |
1902.09212 | 2949962589 | This is an official pytorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through the superior pose estimation results over two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models have been publicly available at this https URL . | The straightforward way is to feed multi-resolution images separately into multiple networks and aggregate the output response maps @cite_77 . Hourglass @cite_92 and its extensions @cite_8 @cite_78 combine low-level features in the high-to-low process into the same-resolution high-level features in the low-to-high process progressively through skip connections. In cascaded pyramid network @cite_41 , a globalnet combines low-to-high level features in the high-to-low process progressively into the low-to-high process, and then a refinenet combines the low-to-high level features that are processed through convolutions.
Our approach repeats multi-scale fusion, which is partially inspired by deep fusion and its extensions @cite_23 @cite_12 @cite_56 @cite_99 @cite_35 . | {
"cite_N": [
"@cite_99",
"@cite_35",
"@cite_78",
"@cite_8",
"@cite_41",
"@cite_92",
"@cite_56",
"@cite_77",
"@cite_23",
"@cite_12"
],
"mid": [
"",
"",
"2795262365",
"2742737904",
"2769331938",
"2307770531",
"",
"1936750108",
"2406474429",
""
],
"abstract": [
"",
"",
"We develop a robust multi-scale structure-aware neural network for human pose estimation. This method improves the recent deep conv-deconv hourglass models with four key improvements: (1) multi-scale supervision to strengthen contextual feature learning in matching body keypoints by combining feature heatmaps across scales, (2) multi-scale regression network at the end to globally optimize the structural matching of the multi-scale features, (3) structure-aware loss used in the intermediate supervision and at the regression to improve the matching of keypoints and respective neighbors to infer a higher-order matching configurations, and (4) a keypoint masking training scheme that can effectively fine-tune our network to robustly localize occluded keypoints via adjacent matches. Our method can effectively improve state-of-the-art pose estimation methods that suffer from difficulties in scale varieties, occlusions, and complex multi-person scenarios. This multi-scale supervision tightly integrates with the regression network to effectively (i) localize keypoints using the ensemble of multi-scale features, and (ii) infer global pose configuration by maximizing structural consistencies across multiple keypoints and scales. The keypoint masking training enhances these advantages to focus learning on hard occlusion samples. Our method achieves the leading position in the MPII challenge leaderboard among the state-of-the-art methods.",
"Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multibranch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at https://github.com/bearpaw/PyraNet.",
"The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19% relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code (this https URL) and the detection results are publicly available for further research.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"",
"Recent state-of-the-art performance on human-body pose estimation has been achieved with Deep Convolutional Networks (ConvNets). Traditional ConvNet architectures include pooling and sub-sampling layers which reduce computational requirements, introduce invariance and prevent over-training. These benefits of pooling come at the cost of reduced localization accuracy. We introduce a novel architecture which includes an efficient ‘position refinement’ model that is trained to estimate the joint offset location within a small region of the image. This refinement model is jointly trained in cascade with a state-of-the-art ConvNet model [21] to achieve improved accuracy in human joint location estimation. We show that the variance of our detector approaches the variance of human annotations on the FLIC [20] dataset and outperforms all existing approaches on the MPII-human-pose dataset [1].",
"In this paper, we present a novel deep learning approach, deeply-fused nets. The central idea of our approach is deep fusion, i.e., combine the intermediate representations of base networks, where the fused output serves as the input of the remaining part of each base network, and perform such combinations deeply over several intermediate representations. The resulting deeply fused net enjoys several benefits. First, it is able to learn multi-scale representations as it enjoys the benefits of more base networks, which could form the same fused network, other than the initial group of base networks. Second, in our suggested fused net formed by one deep and one shallow base networks, the flows of the information from the earlier intermediate layer of the deep base network to the output and from the input to the later intermediate layer of the deep base network are both improved. Last, the deep and shallow base networks are jointly learnt and can benefit from each other. More interestingly, the essential depth of a fused net composed from a deep base network and a shallow base network is reduced because the fused net could be composed from a less deep base network, and thus training the fused net is less difficult than training the initial deep base network. Empirical results demonstrate that our approach achieves superior performance over two closely-related methods, ResNet and Highway, and competitive performance compared to the state-of-the-arts.",
""
]
} |
1902.09321 | 2922032776 | We introduce a new methodology for analyzing serial data by quantile regression assuming that the underlying quantile function consists of constant segments. The procedure does not rely on any distributional assumption besides serial independence. It is based on a multiscale statistic, which allows to control the (finite sample) probability for selecting the correct number of segments S at a given error level, which serves as a tuning parameter. For a proper choice of this parameter, this tends exponentially fast to the true S, as sample size increases. We further show that the location and size of segments are estimated at minimax optimal rate (compared to a Gaussian setting) up to a log-factor. Thereby, our approach leads to (asymptotically) uniform confidence bands for the entire quantile regression function in a fully nonparametric setup. The procedure is efficiently implemented using dynamic programming techniques with double heap structures, and software is provided. Simulations and data examples from genetic sequencing and ion channel recordings confirm the robustness of the proposed procedure, which at the same time reliably detects changes in quantiles from arbitrary distributions with precise statistical guarantees. | The previously mentioned methods that target quantile regression do not come with specific confidence statements as MQS does. Conceptually, this is closely related to confidence bands and intervals in the context of change point regression introduced in @cite_9 for general exponential families. The present methodology extends this to the situation where no parametric model has to be assumed.
"cite_N": [
"@cite_9"
],
"mid": [
"1989016323"
],
"abstract": [
"We introduce a new estimator, the simultaneous multiscale change point estimator SMUCE, for the change point problem in exponential family regression. An unknown step function is estimated by minimizing the number of change points over the acceptance region of a multiscale test at a level α. The probability of overestimating the true number of change points K is controlled by the asymptotic null distribution of the multiscale test statistic. Further, we derive exponential bounds for the probability of underestimating K. By balancing these quantities, α will be chosen such that the probability of correctly estimating K is maximized. All results are even non-asymptotic for the normal case. On the basis of these bounds, we construct (asymptotically) honest confidence sets for the unknown step function and its change points. At the same time, we obtain exponential bounds for estimating the change point locations which for example yield the minimax rate O(n^{-1}) up to a log-term. Finally, the simultaneous multiscale change point estimator achieves the optimal detection rate of vanishing signals as n→∞, even for an unbounded number of change points. We illustrate how dynamic programming techniques can be employed for efficient computation of estimators and confidence regions. The performance of the multiscale approach proposed is illustrated by simulations and in two cutting edge applications from genetic engineering and photoemission spectroscopy."
]
} |
1902.09314 | 2916076862 | Targeted sentiment classification aims at determining the sentimental tendency towards specific targets. Most of the previous approaches model context and target words with RNN and attention. However, RNNs are difficult to parallelize and truncated backpropagation through time brings difficulty in remembering long-term patterns. To address this issue, this paper proposes an Attentional Encoder Network (AEN) which eschews recurrence and employs attention based encoders for the modeling between context and target. We raise the label unreliability issue and introduce label smoothing regularization. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight of our model. | Traditional machine learning methods, including rule-based methods @cite_10 and statistic-based methods @cite_13 , mainly focus on extracting a set of features like sentiment lexicons features and bag-of-words features to train a sentiment classifier @cite_2 . The performance of these methods highly depends on the effectiveness of the feature engineering works, which are labor intensive. | {
"cite_N": [
"@cite_13",
"@cite_10",
"@cite_2"
],
"mid": [
"2113125055",
"1964613733",
""
],
"abstract": [
"Sentiment analysis on Twitter data has attracted much attention recently. In this paper, we focus on target-dependent Twitter sentiment classification; namely, given a query, we classify the sentiments of the tweets as positive, negative or neutral according to whether they contain positive, negative or neutral sentiments about that query. Here the query serves as the target of the sentiments. The state-of-the-art approaches for solving this problem always adopt the target-independent strategy, which may assign irrelevant sentiments to the given target. Moreover, the state-of-the-art approaches only take the tweet to be classified into consideration when classifying the sentiment; they ignore its context (i.e., related tweets). However, because tweets are usually short and more ambiguous, sometimes it is not enough to consider only the current tweet for sentiment classification. In this paper, we propose to improve target-dependent Twitter sentiment classification by 1) incorporating target-dependent features; and 2) taking related tweets into consideration. According to the experimental results, our approach greatly improves the performance of target-dependent sentiment classification.",
"One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
""
]
} |
1902.09359 | 2916652897 | We present a novel anytime heuristic (ALMA), inspired by the human principle of altruism, for solving the assignment problem. ALMA is decentralized, completely uncoupled, and requires no communication between the participants. We prove an upper bound on the convergence speed that is polynomial in the desired number of resources and competing agents per resource; crucially, in the realistic case where the aforementioned quantities are bounded independently of the total number of agents/resources, the convergence time remains constant as the total problem size increases. We have evaluated ALMA under three test cases: (i) an anti-coordination scenario where agents with similar preferences compete over the same set of actions, (ii) a resource allocation scenario in an urban environment, under a constant-time constraint, and finally, (iii) an on-line matching scenario using real passenger-taxi data. In all of the cases, ALMA was able to reach high social welfare, while being orders of magnitude faster than the centralized, optimal algorithm. The latter allows our algorithm to scale to realistic scenarios with hundreds of thousands of agents, e.g., vehicle coordination in urban environments. | In reality, a centralized coordinator is not always available, and if so, it has to know the utilities of all the participants, which is often not feasible. In the literature of the assignment problem, there also exist several decentralized algorithms (e.g., @cite_18 @cite_33 @cite_3 @cite_13 which are the decentralized versions of the aforementioned well-known centralized algorithms). However, these algorithms require polynomial computational time and polynomial number of messages (such as cost matrices @cite_33 , pricing information @cite_3 , or a basis of the LP @cite_13 , etc.). Yet, agent interactions often repeat no more than a few hundred times.
To the best of our knowledge, a decentralized algorithm that requires no message exchange (i.e., no communication network) between the participants, and achieves high efficiency, like ALMA does, has not appeared in the literature before. Let us stress the importance of such a heuristic: as autonomous agents proliferate, and their number and diversity continue to rise, differences between the agents in terms of origin, communication protocols, or the existence of sub-optimal, legacy agents will bring forth the need to collaborate without any form of explicit communication @cite_21 . Finally, inter-agent communication creates high overhead as well. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_21",
"@cite_3",
"@cite_13"
],
"mid": [
"1495826018",
"2744920931",
"1606056663",
"2135132740",
"2045294654"
],
"abstract": [
"In this work we address the Multi-Robot Task Allocation Problem (MRTA). We assume that the decision making environment is decentralized with as many decision makers (agents) as the robots in the system. To solve this problem, we developed a distributed version of the Hungarian Method for the assignment problem. The robots autonomously perform different substeps of the Hungarian algorithm on the base of the individual and the information received through the messages from the other robots in the system. It is assumed that each robot agent has an information regarding its distance from the targets in the environment. The inter-robot communication is performed over a connected dynamic communication network and the solution to the assignment problem is reached without any common coordinator or a shared memory of the system. The algorithm comes up with a global optimum solution in O(n^3) cumulative time (O(n^2) for each robot), with O(n^3) number of messages exchanged among the n robots.",
"In this paper, a novel decentralized task allocation algorithm based on the Hungarian approach is proposed. The proposed algorithm guarantees an optimal solution as long as the agent network is connected, i.e., the second smallest eigenvalue of the Laplacian matrix of the agent graph is greater than zero. In order to show the motivation of the proposed algorithm, the original centralized auction and Hungarian algorithms are compared in terms of the converging speed versus the number of agents. The result shows the superiority of the Hungarian algorithm in scalability over the auction algorithm. Then, the performance of the proposed decentralized Hungarian-Based algorithm (DHBA) is compared with the consensus-based auction algorithm (CBAA) under different situations, including different number of agents and different network topologies. The simulation results show that DHBA outperforms CBAA in all cases on the basis of the converging speed, the optimality of assignments, and the computational time.",
"As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such ad hoc team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This paper challenges the AI community to develop theory and to implement prototypes of ad hoc team agents. It defines the concept of ad hoc team agents, specifies an evaluation paradigm, and provides examples of possible theoretical and empirical approaches to challenge. The goal is to encourage progress towards this ambitious, newly realistic, and increasingly important research goal.",
"The assignment problem constitutes one of the fundamental problems in the context of linear programming. Besides its theoretical significance, its frequent appearance in the areas of distributed control and facility allocation, where the problems' size and the cost for global computation and information can be highly prohibitive, gives rise to the need for local solutions that dynamically assign distinct agents to distinct tasks, while maximizing the total assignment benefit. In this paper, we consider the linear assignment problem in the context of networked systems, where the main challenge is dealing with the lack of global information due to the limited communication capabilities of the agents. We address this challenge by means of a distributed auction algorithm, where the agents are able to bid for the task to which they wish to be assigned. The desired assignment relies on an appropriate selection of bids that determine the prices of the tasks and render them more or less attractive for the agents to bid for. Up to date pricing information, necessary for accurate bidding, can be obtained in a multi-hop fashion by means of local communication between adjacent agents. Our algorithm is an extension to the parallel auction algorithm proposed by to the case where only local information is available and it is shown to always converge to an assignment that maximizes the total assignment benefit within a linear approximation of the optimal one.",
"In this paper we propose a novel distributed algorithm to solve degenerate linear programs on asynchronous peer-to-peer networks with distributed information structures. We propose a distributed version of the well-known simplex algorithm for general degenerate linear programs. A network of agents, running our algorithm, will agree on a common optimal solution, even if the optimal solution is not unique, or will determine infeasibility or unboundedness of the problem. We establish how the multi-agent assignment problem can be efficiently solved by means of our distributed simplex algorithm. We provide simulations supporting the conjecture that the completion time scales linearly with the diameter of the communication graph."
]
} |
1902.09240 | 2952693415 | Training a Neural Network (NN) with lots of parameters or intricate architectures creates undesired phenomena that complicate the optimization process. To address this issue we propose a first modular approach to NN design, wherein the NN is decomposed into a control module and several functional modules, implementing primitive operations. We illustrate the modular concept by comparing performances between a monolithic and a modular NN on a list sorting problem and show the benefits in terms of training speed, training stability and maintainability. We also discuss some questions that arise in modular NNs. | NNs have traditionally been regarded as a modular system @cite_34 . At the hardware level, the computation of neurons and layers can be decomposed down to a graph of multiplications and additions. This has been exploited by GPU computation, enabling the execution of non-dependant operations in parallel, and by the development of frameworks for this kind of computation. Regarding computational motivations, the avoidance of coupling among neurons and the quest for generalization and speed of learning have been the main arguments used in favor of modularity @cite_5 . | {
"cite_N": [
"@cite_5",
"@cite_34"
],
"mid": [
"2082004986",
"2045257906"
],
"abstract": [
"This paper presents and evaluates a modular hybrid connectionist system for speaker identification. Modularity has emerged as a powerful technique for reducing the complexity of connectionist systems, and allowing a priori knowledge to be incorporated into their design. Text-independent speaker identification is an inherently complex task where the amount of training data is often limited. It thus provides an ideal domain to test the validity of the modular hybrid connectionist approach. To achieve such identification, we develop, in this paper, an architecture based upon the cooperation of several connectionist modules, and a Hidden Markov Model module. When tested on a population of 102 speakers extracted from the DARPA-TIMIT database, perfect identification was obtained.",
"Modular Neural Networks (MNNs) is a rapidly growing field in artificial Neural Networks (NNs) research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware, and computational. Then, the general stages of MNN design are outlined and surveyed as well, viz., task decomposition techniques, learning schemes and multi-module decision-making strategies. Advantages and disadvantages of the surveyed methods are pointed out, and an assessment with respect to practical potential is provided. Finally, some general recommendations for future designs are presented."
]
} |
1902.09240 | 2952693415 | Training a Neural Network (NN) with lots of parameters or intricate architectures creates undesired phenomena that complicate the optimization process. To address this issue we propose a first modular approach to NN design, wherein the NN is decomposed into a control module and several functional modules, implementing primitive operations. We illustrate the modular concept by comparing performances between a monolithic and a modular NN on a list sorting problem and show the benefits in terms of training speed, training stability and maintainability. We also discuss some questions that arise in modular NNs. | The main application of modularity has been the construction of NN ensembles though, focusing on learning algorithms that automate their formation @cite_34 @cite_20 . A type of ensemble with some similarities to our proposal is the Mixture of Experts @cite_12 @cite_2 , in which a gating network selects the output from multiple expert networks. Constructive Modularization Learning and boosting methods pursue the divide-and-conquer idea as well, although they do it through an automatic partitioning of the space. This automatic treatment of the modularization process what makes difficult to embed any kind of expertise in the system. | {
"cite_N": [
"@cite_34",
"@cite_12",
"@cite_20",
"@cite_2"
],
"mid": [
"2045257906",
"2150884987",
"2130017568",
""
],
"abstract": [
"Modular Neural Networks (MNNs) is a rapidly growing field in artificial Neural Networks (NNs) research. This paper surveys the different motivations for creating MNNs: biological, psychological, hardware, and computational. Then, the general stages of MNN design are outlined and surveyed as well, viz., task decomposition techniques, learning schemes and multi-module decision-making strategies. Advantages and disadvantages of the surveyed methods are pointed out, and an assessment with respect to practical potential is provided. Finally, some general recommendations for future designs are presented.",
"We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network.",
"In this chapter, we focus on two important areas in neural computation, i. e., deep and modular neural networks, given the fact that both deep and modular neural networks are among the most powerful machine learning and pattern recognition techniques for complex AI problem solving. We begin by providing a general overview of deep and modular neural networks to describe the general motivation behind such neural architectures and fundamental requirements imposed by complex AI problems. Next, we describe background and motivation, methodologies, major building blocks, and the state-of-the-art hybrid learning strategy in context of deep neural architectures. Then, we describe background and motivation, taxonomy, and learning algorithms pertaining to various typical modular neural networks in a wide context. Furthermore, we also examine relevant issues and discuss open problems in deep and modular neural network research areas.",
""
]
} |
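The gating mechanism described in the Mixture of Experts row above can be sketched minimally. The following is an illustrative toy (a softmax gate weighting linear experts), assuming nothing beyond the "gating network selects the output from multiple expert networks" description; sizes and initialization are hypothetical, not from the cited papers.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class MixtureOfExperts:
    """Toy mixture of experts: a softmax gating network weighs the
    outputs of several linear expert networks (parameters here are
    illustrative, not from the cited work)."""

    def __init__(self, n_experts, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.gate_w = rng.normal(size=(n_experts, dim))   # gating network weights
        self.experts = rng.normal(size=(n_experts, dim))  # one linear expert per row

    def forward(self, x):
        gate = softmax(self.gate_w @ x)   # responsibility assigned to each expert
        outputs = self.experts @ x        # each expert's scalar prediction
        return float(gate @ outputs)      # gate-weighted combination

moe = MixtureOfExperts(n_experts=3, dim=4)
y = moe.forward(np.ones(4))
```

In the hard-selection variant discussed in the survey, the gate would pick a single expert (argmax) instead of blending them.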
1902.09240 | 2952693415 | Training a Neural Network (NN) with lots of parameters or intricate architectures creates undesired phenomena that complicate the optimization process. To address this issue we propose a first modular approach to NN design, wherein the NN is decomposed into a control module and several functional modules, implementing primitive operations. We illustrate the modular concept by comparing performances between a monolithic and a modular NN on a list sorting problem and show the benefits in terms of training speed, training stability and maintainability. We also discuss some questions that arise in modular NNs. | In @cite_18, a visual question answering problem is solved with a modular NN. Each module is targeted to learn a certain operation, and a module layout is dynamically generated after parsing the input question. The modules then converge to the expected functionality due to their role in that layout. This is an important step towards NN modularity, despite the modules being trained jointly. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2256357568"
],
"abstract": [
"Visual question answering is fundamentally compositional in nature---a question like \"where is the dog?\" shares substructure with questions like \"what color is the dog?\" and \"where is the cat?\" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural \"modules\" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes."
]
} |
1902.09240 | 2952693415 | Training a Neural Network (NN) with lots of parameters or intricate architectures creates undesired phenomena that complicate the optimization process. To address this issue we propose a first modular approach to NN design, wherein the NN is decomposed into a control module and several functional modules, implementing primitive operations. We illustrate the modular concept by comparing performances between a monolithic and a modular NN on a list sorting problem and show the benefits in terms of training speed, training stability and maintainability. We also discuss some questions that arise in modular NNs. | Many of the ideas presented here have already been discussed in @cite_30. They use Reinforcement Learning (RL) for training the policy module and backpropagation for the rest of the modules. However, they predict the sequence of actions in one shot and do not yet consider the possibility of implementing a feedback loop. They implicitly exploit modularity to some extent, as they pretrain the policy from expert traces and use a pretrained VGG-16 network @cite_19, but the modules are trained jointly afterwards. In @cite_13 they extend this concept by integrating a feedback loop, but replace the hard attention mechanism with a soft one in order to enable end-to-end training. Thus, the modular structure is present, but the independent training is not exploited. | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_13"
],
"mid": [
"2613526370",
"1686810756",
"2951891305"
],
"abstract": [
"Natural language questions are inherently compositional, and many are most easily answered by reasoning about their decomposition into modular sub-problems. For example, to answer \"is there an equal number of balls and boxes?\" we can look for balls, look for boxes, count them, and compare the results. The recently proposed Neural Module Network (NMN) architecture implements this approach to question answering by parsing questions into linguistic substructures and assembling question-specific deep networks from smaller modules that each solve one subtask. However, existing NMN implementations rely on brittle off-the-shelf parsers, and are restricted to the module configurations proposed by these parsers rather than learning them from data. In this paper, we propose End-to-End Module Networks (N2NMNs), which learn to reason by directly predicting instance-specific network layouts without the aid of a parser. Our model learns to generate network structures (by imitating expert demonstrations) while simultaneously learning network parameters (using the downstream task loss). Experimental results on the new CLEVR dataset targeted at compositional question answering show that N2NMNs achieve an error reduction of nearly 50 relative to state-of-the-art attentional approaches, while discovering interpretable network architectures specialized for each question.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable to assist users in both development and prediction. Existing models designed to produce interpretable traces of their decision-making process typically require these traces to be supervised at training time. In this paper, we present a novel neural modular approach that performs compositional reasoning by automatically inducing a desired sub-task decomposition without relying on strong supervision. Our model allows linking different reasoning tasks though shared modules that handle common routines across tasks. Experiments show that the model is more interpretable to human evaluators compared to other state-of-the-art models: users can better understand the model's underlying reasoning procedure and predict when it will succeed or fail based on observing its intermediate outputs."
]
} |
1902.09240 | 2952693415 | Training a Neural Network (NN) with lots of parameters or intricate architectures creates undesired phenomena that complicate the optimization process. To address this issue we propose a first modular approach to NN design, wherein the NN is decomposed into a control module and several functional modules, implementing primitive operations. We illustrate the modular concept by comparing performances between a monolithic and a modular NN on a list sorting problem and show the benefits in terms of training speed, training stability and maintainability. We also discuss some questions that arise in modular NNs. | The idea of an NN being an agent interacting with some environment is not new and is in fact the context in which RL is defined @cite_33. RL problems focus on learning a policy that the agent should follow in order to maximize an expected reward. In such cases there is usually no point in training operation modules, as the agent interacts simply by selecting an existing operation. RL methods would therefore be a good option for training the control module. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2121863487"
],
"abstract": [
"Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning."
]
} |
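The row above argues that RL suits the control module because the agent merely selects among existing operation modules. A minimal, hypothetical sketch of that idea: a REINFORCE-style softmax policy learning which of three fixed "operation modules" yields the highest reward. The reward values and hyperparameters are illustrative, not from the cited book or papers.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Deterministic reward of each primitive operation module (illustrative values).
rewards = np.array([0.1, 0.9, 0.3])

rng = np.random.default_rng(0)
theta = np.zeros(3)          # softmax preferences of the control policy
baseline, lr = 0.0, 0.1
for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(3, p=p)               # control module picks an operation
    r = rewards[a]
    baseline += 0.05 * (r - baseline)    # running-average baseline
    grad = -p                            # gradient of log p(a) w.r.t. theta ...
    grad[a] += 1.0                       # ... is one-hot(a) minus p
    theta += lr * (r - baseline) * grad  # REINFORCE update

best = int(np.argmax(softmax(theta)))
```

The operation modules themselves stay fixed during this loop, which is exactly the separation of concerns the survey advocates.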
1902.09323 | 2915350940 | To address the challenges in learning deep generative models (e.g., the blurriness of variational auto-encoders and the instability of training generative adversarial networks), we propose a novel deep generative model, named Wasserstein-Wasserstein auto-encoders (WWAE). We formulate WWAE as minimization of the penalized optimal transport between the target distribution and the generated distribution. By noticing that both the prior @math and the aggregated posterior @math of the latent code Z can be well captured by Gaussians, the proposed WWAE utilizes the closed-form of the squared Wasserstein-2 distance for two Gaussians in the optimization process. As a result, WWAE does not suffer from the sampling burden and it is computationally efficient by leveraging the reparameterization trick. Numerical results evaluated on multiple benchmark datasets including MNIST, fashion-MNIST and CelebA show that WWAE learns better latent structures than VAEs and generates samples of better visual quality and higher FID scores than VAEs and GANs. | The blurriness of VAEs is caused by the combination of the Gaussian decoder and the regularization term in VAEs (see Section 4.1 in @cite_33 for a detailed argument). The Gaussian decoder is induced by the reparameterization trick, which cannot be avoided. The regularization term in VAEs measures the discrepancy between the marginal encoded distribution and the prior distribution. To reduce the blurriness of VAEs, much attention has been paid to finding a better regularization term. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2620025707"
],
"abstract": [
"We study unsupervised generative modeling in terms of the optimal transport (OT) problem between true (but unknown) data distribution @math and the latent variable model distribution @math . We show that the OT problem can be equivalently written in terms of probabilistic encoders, which are constrained to match the posterior and prior distributions over the latent space. When relaxed, this constrained optimization problem leads to a penalized optimal transport (POT) objective, which can be efficiently minimized using stochastic gradient descent by sampling from @math and @math . We show that POT for the 2-Wasserstein distance coincides with the objective heuristically employed in adversarial auto-encoders (AAE) (, 2016), which provides the first theoretical justification for AAEs known to the authors. We also compare POT to other popular techniques like variational auto-encoders (VAE) (Kingma and Welling, 2014). Our theoretical results include (a) a better understanding of the commonly observed blurriness of images generated by VAEs, and (b) establishing duality between Wasserstein GAN (Arjovsky and Bottou, 2017) and POT for the 1-Wasserstein distance."
]
} |
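The WWAE row above relies on the closed-form squared Wasserstein-2 distance between two Gaussians. For diagonal covariances this reduces to the squared distance between means plus the squared distance between per-dimension standard deviations; the sketch below shows that special case (the full-covariance form has the trace term Tr(S1 + S2 − 2(S2^{1/2} S1 S2^{1/2})^{1/2}) instead).

```python
import numpy as np

def w2_squared_diag_gauss(mu1, sigma1, mu2, sigma2):
    """Closed-form squared Wasserstein-2 distance between two Gaussians
    with diagonal covariances:
        W2^2 = ||mu1 - mu2||^2 + sum_i (sigma1_i - sigma2_i)^2
    where sigma holds per-dimension standard deviations."""
    mu1, sigma1 = np.asarray(mu1, float), np.asarray(sigma1, float)
    mu2, sigma2 = np.asarray(mu2, float), np.asarray(sigma2, float)
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))

# E.g. distance of an encoded distribution N([1,0], diag(4,1))
# to a standard normal prior N([0,0], I):
d = w2_squared_diag_gauss([1.0, 0.0], [2.0, 1.0], [0.0, 0.0], [1.0, 1.0])
# (1-0)^2 + (2-1)^2 = 2.0
```

Because every term is differentiable in the means and standard deviations, this penalty can be minimized by gradient descent with the reparameterization trick, which is what makes the approach sampling-free.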
1902.09399 | 2917431811 | The emerging technologies related to mobile data, especially CDR data, have great potential for mobility and transportation applications. However, it presents some challenges due to its spatio-temporal characteristics and sparseness. Therefore, in this article, we introduced a new model to refine the positioning accuracy of mobile devices using only CDR data and coverage areas locations. The adopted method has three steps: first, we discovered which model of movement (Move, Stay) is associated with the coverage areas where the mobile device was connected using a Kalman filter. Then, simultaneously we estimated the location or the position of the device. Finally, we applied map-matching to bring the positioning to the right road segment. The results are very encouraging; nevertheless, there is some enhancement that can be done at the level of movement models and map matching. For example, by introducing a more sophisticated movement model based on data-driven modeling and a map matching that uses the movement model type detected by matching "Stay" location to buildings and "Move" model to roads. | From this perspective, much research has been done using GPS data as a positioning source. For example, in @cite_26 the authors collected GPS data and built an algorithm for extracting meaningful locations. Then, they predicted users' movement based on a mobility model created using the discovered displacement patterns. GPS data is a very good source for building mobility behavior models and for localization; however, not all mobile phone users are willing to share this kind of information at all times, wherever they are. That is why CDR data is a potential source of data for localization and mobility modeling @cite_18. Moreover, telecommunication data has been collected continuously for many years now, which makes this type of data available in massive amounts @cite_4.
Furthermore, by correlating CDR data with the geographic locations of towers, as mentioned before, CDR data can be very useful for discovering and extracting mobility patterns @cite_12. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_12"
],
"mid": [
"",
"1571751884",
"1994273294",
"2172211955"
],
"abstract": [
"",
"As technology to connect people across the world is advancing, there should be corresponding advancement in taking advantage of data that is generated out of such connection. To that end, next place prediction is an important problem for mobility data. In this paper we propose several models using dynamic Bayesian network (DBN). Idea behind development of these models come from typical daily mobility patterns a user have. Three features (location, day of the week (DoW), and time of the day (ToD)) and their combinations are used to develop these models. Knowing that not all models work well for all situations, we developed three combined models using least entropy, highest probability and ensemble. Extensive performance study is conducted to compare these models over two different mobility data sets: a CDR data and Nokia mobile data which is based on GPS. Results show that least entropy and highest probability DBNs perform the best.",
"Continuous personal position information has been attracting attention in a variety of service and research areas. In recent years, many studies have applied the telecommunication histories of mobile phones (CDRs: call detail records) to position acquisition. Although large-scale and long-term data are accumulated from CDRs through everyday use of mobile phones, the spatial resolution of CDRs is lower than that of existing positioning technologies. Therefore, interpolating spatiotemporal positions of such sparse CDRs in accordance with human behavior models will facilitate services and researches. In this paper, we propose a new method to compensate for CDR drawbacks in tracking positions. We generate as many candidate routes as possible in the spatiotemporal domain using trip patterns interpolated using road and railway networks and select the most likely route from them. Trip patterns are feasible combinations between stay places that are detected from individual location histories in CDRs. The most likely route could be estimated through comparing candidate routes to observed CDRs during a target day. We also show the assessment of our method using CDRs and GPS logs obtained in the experimental survey.",
"Models of human mobility have broad applicability in fields such as mobile computing, urban planning, and ecology. This paper proposes and evaluates WHERE, a novel approach to modeling how large populations move within different metropolitan areas. WHERE takes as input spatial and temporal probability distributions drawn from empirical data, such as Call Detail Records (CDRs) from a cellular telephone network, and produces synthetic CDRs for a synthetic population. We have validated WHERE against billions of anonymous location samples for hundreds of thousands of phones in the New York and Los Angeles metropolitan areas. We found that WHERE offers significantly higher fidelity than other modeling approaches. For example, daily range of travel statistics fall within one mile of their true values, an improvement of more than 14 times over a Weighted Random Waypoint model. Our modeling techniques and synthetic CDRs can be applied to a wide range of problems while avoiding many of the privacy concerns surrounding real CDRs."
]
} |
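The abstract of this row estimates device positions with a Kalman filter. As a concrete reference for the predict/update cycle, here is a minimal, generic linear Kalman filter on a 1-D constant-velocity model; the state layout, noise values, and measurement sequence are illustrative assumptions, not the paper's actual configuration (which couples the filter with Move/Stay movement models and cell coverage areas).

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: state covariance, z: new measurement."""
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction with the measurement via the Kalman gain.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# 1-D constant-velocity model: state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # position advances by velocity each step
H = np.array([[1.0, 0.0]])              # only a noisy position is observed
Q = 0.01 * np.eye(2)                    # process noise
R = np.array([[1.0]])                   # measurement noise (e.g. a coarse cell fix)

x, P = np.zeros(2), 10.0 * np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:     # noisy positions of a "Move" trajectory
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

A "Stay" model would simply drop the velocity component (F = I), which is one way to see how model selection between Move and Stay could be framed.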
1902.09399 | 2917431811 | The emerging technologies related to mobile data, especially CDR data, have great potential for mobility and transportation applications. However, it presents some challenges due to its spatio-temporal characteristics and sparseness. Therefore, in this article, we introduced a new model to refine the positioning accuracy of mobile devices using only CDR data and coverage areas locations. The adopted method has three steps: first, we discovered which model of movement (Move, Stay) is associated with the coverage areas where the mobile device was connected using a Kalman filter. Then, simultaneously we estimated the location or the position of the device. Finally, we applied map-matching to bring the positioning to the right road segment. The results are very encouraging; nevertheless, there is some enhancement that can be done at the level of movement models and map matching. For example, by introducing a more sophisticated movement model based on data-driven modeling and a map matching that uses the movement model type detected by matching "Stay" location to buildings and "Move" model to roads. | In the same category, the Sequential Monte Carlo (SMC) method is used in a different manner in @cite_6, where the authors enhance SMC by making the localization estimation robust even when the range measurement error is high and unpredictable. The algorithm was tested in simulation, and the results showed that the approach improves localization by 12% to 49% across a wide range of conditions. Regarding methods based on propagation time and signal strength @cite_24: signal strength approaches focus on received signal strength indicators, which measure the attenuation of the signal under an assumed free-space propagation of radio signals. However, real environments rarely behave as free-space propagation, which degrades triangulation-based localization. | {
"cite_N": [
"@cite_24",
"@cite_6"
],
"mid": [
"1608123259",
"1608899808"
],
"abstract": [
"Being applicable for almost every scenario, mobile localization based on cellular network has gained increasing interest in recent years. Since received signal strength indication (RSSI) information is available in all mobile phones, RSSI-based techniques have become the preferred method for GSM localization. Although the GSM standard allows for a mobile phone to receive signal strength information from up to seven base stations (BSs), most of mobile phones only use the information of the associated cell as its estimated position. Therefore, the accuracy of GSM localization is seriously limited. In this paper, an algorithm for GSM localization is proposed with RSSI and Pearson's correlation coefficient (PCC). The information of seven cells, including the serving cell and six neighboring cells, is used to accurately estimate the mobile location. With redundant information, the proposed algorithm restrains the error of Cell-ID and shows good robustness against environmental change. Without any additional device or prior statistical knowledge, the proposed algorithm is implementable on common mobile devices. Furthermore, in the practical test, its maximum error is below 550 m, which is 100 m better than that of Cell-ID, and the mean error is below 150 m, which is 250 m better than Cell-ID.",
"Localization schemes for wireless sensor networks can be classified as range-based or range-free. They differ in the information used for localization. Range-based methods use range measurements, while range-free techniques only use the content of the messages. None of the existing algorithms evaluate both types of information. Most of the localization schemes do not consider mobility. In this paper, a Sequential Monte Carlo Localization Method is introduced that uses both types of information as well as mobility to obtain accurate position estimations, even when high range measurement errors are present in the network and unpredictable movements of the nodes occur. We test our algorithm in various environmental settings and compare it to other known localization algorithms. The simulations show that our algorithm outperforms these known range-oriented and range-free algorithms for both static and dynamic networks. Localization improvements range from 12 to 49 in a wide range of conditions."
]
} |
1902.09427 | 2953180347 | Early fault detection using instrumented sensor data is one of the promising application areas of machine learning in industrial facilities. However, it is difficult to improve the generalization performance of the trained fault-detection model because of the complex system configuration in the target diagnostic system and insufficient fault data. It is not trivial to apply the trained model to other systems. Here we propose a fault diagnosis method for refrigerant leak detection considering the physical modeling and control mechanism of an air-conditioning system. We derive a useful scaling law related to refrigerant leak. If the control mechanism is the same, the model can be applied to other air-conditioning systems irrespective of the system configuration. Small-scale off-line fault test data obtained in a laboratory are applied to estimate the scaling exponent. We evaluate the proposed scaling law by using real-world data. Based on a statistical hypothesis test of the interaction between two groups, we show that the scaling exponents of different air-conditioning systems are equivalent. In addition, we estimated the time series of the degree of leakage of real process data based on the scaling law and confirmed that the proposed method is promising for early leak detection through comparison with assessment by experts. | Here, we also refer to anomaly detection @cite_6. While it does not require fault data in the training phase, many industrial facilities have multiple failure types, and the magnitudes of the failures are not unique. For a facility with many types of failures, it is difficult to identify the type of failure from the statistical deviation from the normal state alone, so an anomaly detection technique is not appropriate. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2122646361"
],
"abstract": [
"Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with."
]
} |
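The row above dismisses anomaly detection because "statistical deviance from the normal state" says how far a reading is from normal, not which fault caused it. A minimal sketch of such a deviance score (a z-score detector trained only on normal data; the sensor values and threshold are hypothetical) makes that limitation concrete:

```python
import numpy as np

def zscore_anomaly(train_normal, x, threshold=3.0):
    """Flag x as anomalous if it deviates from the normal-state
    distribution by more than `threshold` standard deviations.
    Note: the score says *how far* from normal, not *which* fault."""
    mu = train_normal.mean()
    sigma = train_normal.std()
    score = abs(x - mu) / sigma
    return score > threshold, score

rng = np.random.default_rng(0)
normal = rng.normal(loc=50.0, scale=2.0, size=1000)  # healthy sensor readings

flag_ok, s_ok = zscore_anomaly(normal, 51.0)   # within normal variation
flag_bad, s_bad = zscore_anomaly(normal, 70.0) # far from the normal state
```

Two different faults producing the same deviation magnitude would receive identical scores, which is exactly why the paper pursues a fault-specific scaling law instead.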
1902.09294 | 2951372878 | Multi-label network classification is a well-known task that is being used in a wide variety of web-based and non-web-based domains. It can be formalized as a multi-relational learning task for predicting nodes labels based on their relations within the network. In sparse networks, this prediction task can be very challenging when only implicit feedback information is available such as in predicting user interests in social networks. Current approaches rely on learning per-node latent representations by utilizing the network structure, however, implicit feedback relations are naturally sparse and contain only positive observed feedbacks which means that these approaches will treat all observed relations as equally important. This is not necessarily the case in real-world scenarios as implicit relations might have semantic weights which reflect the strength of those relations. If those weights can be approximated, the models can be trained to differentiate between strong and weak relations. In this paper, we propose a weighted personalized two-stage multi-relational matrix factorization model with Bayesian personalized ranking loss for network classification that utilizes basic transitive node similarity function for weighting implicit feedback relations. Experiments show that the proposed model significantly outperforms the state-of-the-art models on three different real-world web-based datasets and a biology-based dataset. | Current approaches for multi-label node classification automate the process of feature extraction and engineering by directly learning latent features for each node. These latent features are mainly generated based on the global network structure and the connectivity layout of each node.
Earlier approaches such as @cite_17 @cite_18 produce k latent features for each node by utilizing either the first k eigenvectors of a generated modularity matrix for the friendship relation @cite_17 or a sparse k-means clustering of friendship edges @cite_18. These k features are fed into an SVM for label prediction. | {
"cite_N": [
"@cite_18",
"@cite_17"
],
"mid": [
"2105543219",
"2046253692"
],
"abstract": [
"The study of collective behavior is to understand how individuals behave in a social network environment. Oceans of data generated by social media like Facebook, Twitter, Flickr and YouTube present opportunities and challenges to studying collective behavior in a large scale. In this work, we aim to learn to predict collective behavior in social media. In particular, given information about some individuals, how can we infer the behavior of unobserved individuals in the same network? A social-dimension based approach is adopted to address the heterogeneity of connections presented in social media. However, the networks in social media are normally of colossal size, involving hundreds of thousands or even millions of actors. The scale of networks entails scalable learning of models for collective behavior prediction. To address the scalability issue, we propose an edge-centric clustering scheme to extract sparse social dimensions. With sparse social dimensions, the social-dimension based approach can efficiently handle networks of millions of actors while demonstrating comparable prediction performance as other non-scalable methods.",
"Social media such as blogs, Facebook, Flickr, etc., presents data in a network format rather than classical IID distribution. To address the interdependency among data instances, relational learning has been proposed, and collective inference based on network connectivity is adopted for prediction. However, connections in social media are often multi-dimensional. An actor can connect to another actor for different reasons, e.g., alumni, colleagues, living in the same city, sharing similar interests, etc. Collective inference normally does not differentiate these connections. In this work, we propose to extract latent social dimensions based on network information, and then utilize them as features for discriminative learning. These social dimensions describe diverse affiliations of actors hidden in the network, and the discriminative learning can automatically determine which affiliations are better aligned with the class labels. Such a scheme is preferred when multiple diverse relations are associated with the same network. We conduct extensive experiments on social media data (one from a real-world blog site and the other from a popular content sharing site). Our model outperforms representative relational learning methods based on collective inference, especially when few labeled data are available. The sensitivity of this model and its connection to existing methods are also examined."
]
} |
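The eigenvector-based feature extraction described in this row can be made concrete. Below is a minimal sketch, on a toy friendship graph of two triangles joined by one edge, of taking the first k eigenvectors of the modularity matrix as per-node latent features, as in @cite_17; the downstream SVM classifier is omitted.

```python
import numpy as np

def modularity_features(A, k):
    """Top-k eigenvectors of the modularity matrix B = A - d d^T / (2m)
    as k latent features per node; a downstream classifier (e.g. an SVM)
    would consume this (n_nodes, k) feature matrix."""
    d = A.sum(axis=1)               # node degrees
    m = d.sum() / 2.0               # number of edges
    B = A - np.outer(d, d) / (2.0 * m)
    vals, vecs = np.linalg.eigh(B)  # B is symmetric; eigenvalues ascending
    order = np.argsort(vals)[::-1]  # largest eigenvalues first
    return vecs[:, order[:k]]

# Toy friendship network: two triangles {0,1,2} and {3,4,5} bridged by (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
X = modularity_features(A, k=2)
```

On this graph the leading eigenvector separates the two triangles by sign, which is why such features carry community (and hence label) information.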
1902.09294 | 2951372878 | Multi-label network classification is a well-known task that is being used in a wide variety of web-based and non-web-based domains. It can be formalized as a multi-relational learning task for predicting nodes labels based on their relations within the network. In sparse networks, this prediction task can be very challenging when only implicit feedback information is available such as in predicting user interests in social networks. Current approaches rely on learning per-node latent representations by utilizing the network structure, however, implicit feedback relations are naturally sparse and contain only positive observed feedbacks which mean that these approaches will treat all observed relations as equally important. This is not necessarily the case in real-world scenarios as implicit relations might have semantic weights which reflect the strength of those relations. If those weights can be approximated, the models can be trained to differentiate between strong and weak relations. In this paper, we propose a weighted personalized two-stage multi-relational matrix factorization model with Bayesian personalized ranking loss for network classification that utilizes basic transitive node similarity function for weighting implicit feedback relations. Experiments show that the proposed model significantly outperforms the state-of-art models on three different real-world web-based datasets and a biology-based dataset. | Recently, semi-supervised @cite_19 and unsupervised approaches @cite_10 @cite_0 @cite_3 have been proposed to extract latent node representations in networks data. These models are inspired by the novel approaches for learning latent representations of words such as the convolutional neural networks and the Skip-gram models @cite_15 in the domain of natural language processing. 
They formulate the network classification problem as a discrete word classification problem by representing the network as a document and its nodes as a sequence of words. The Skip-gram model can then be used to predict the most likely labels for each node, based on the assumption that similar nodes have the same labels. | {
"cite_N": [
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_10"
],
"mid": [
"2242161203",
"2154851992",
"2519887557",
"1614298861",
"2962756421"
],
"abstract": [
"Representation learning has shown its effectiveness in many tasks such as image classification and text mining. Network representation learning aims at learning distributed vector representation for each vertex in a network, which is also increasingly recognized as an important aspect for network analysis. Most network representation learning methods investigate network structures for learning. In reality, network vertices contain rich information (such as text), which cannot be well applied with algorithmic frameworks of typical representation learning methods. By proving that DeepWalk, a state-of-the-art network representation method, is actually equivalent to matrix factorization (MF), we propose text-associated DeepWalk (TADW). TADW incorporates text features of vertices into network representation learning under the framework of matrix factorization. We evaluate our method and various baseline methods by applying them to the task of multi-class classification of vertices. The experimental results show that, our method outperforms other baselines on all three datasets, especially when networks are noisy and training ratio is small. The source code of this paper can be obtained from https: github.com albertyang33 TADW.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks."
]
} |
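The DeepWalk and node2vec approaches summarized in these records treat truncated random walks over the graph as "sentences" and feed them to a Skip-gram model. A minimal sketch of that first stage (walk generation plus (center, context) pair extraction) is given below; the helper names and the toy graph are illustrative assumptions, not the authors' code:

```python
import random

def random_walks(adj, num_walks, walk_len, seed=0):
    """Generate truncated random walks; each walk plays the role of a
    'sentence' of node ids to be fed to a Skip-gram model (DeepWalk-style)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

def skipgram_pairs(walks, window):
    """Extract (center, context) training pairs, exactly as Skip-gram does
    for words within a sliding window over a sentence."""
    pairs = []
    for walk in walks:
        for i, center in enumerate(walk):
            lo, hi = max(0, i - window), min(len(walk), i + window + 1)
            pairs.extend((center, walk[j]) for j in range(lo, hi) if j != i)
    return pairs

# Toy graph (assumed for illustration): two triangles joined by an edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
walks = random_walks(adj, num_walks=2, walk_len=5)
pairs = skipgram_pairs(walks, window=2)
```

In DeepWalk these pairs would then train Skip-gram embeddings (e.g., with hierarchical softmax); node2vec differs mainly in how the walks are biased to explore neighborhoods.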
1902.09294 | 2951372878 | Multi-label network classification is a well-known task that is being used in a wide variety of web-based and non-web-based domains. It can be formalized as a multi-relational learning task for predicting nodes labels based on their relations within the network. In sparse networks, this prediction task can be very challenging when only implicit feedback information is available such as in predicting user interests in social networks. Current approaches rely on learning per-node latent representations by utilizing the network structure, however, implicit feedback relations are naturally sparse and contain only positive observed feedbacks which mean that these approaches will treat all observed relations as equally important. This is not necessarily the case in real-world scenarios as implicit relations might have semantic weights which reflect the strength of those relations. If those weights can be approximated, the models can be trained to differentiate between strong and weak relations. In this paper, we propose a weighted personalized two-stage multi-relational matrix factorization model with Bayesian personalized ranking loss for network classification that utilizes basic transitive node similarity function for weighting implicit feedback relations. Experiments show that the proposed model significantly outperforms the state-of-art models on three different real-world web-based datasets and a biology-based dataset. | In @cite_5 , MR-BPR was proposed as a learning-to-rank approach for tackling the multi-label classification problem by extending the BPR @cite_13 model to multi-relational settings. This approach expresses the problem as a multi-relational matrix factorization trained to optimize the AUC measure using the BPR loss. Each network relation is represented by a sparse matrix, and the relation between nodes and labels is the target being predicted. 
Because of the BPR loss, this model is considered suitable for sparse networks with implicit feedback relations. However, since implicit feedback connections are only either observed or unobserved, MR-BPR fails to capture the fact that, in real life, some implicit links are stronger than others. To address this drawback of the original single-relation BPR model, @cite_12 proposed BPR++, an extended version of BPR for user-item rating prediction. They used multiple weighting functions, based on interaction frequency and timestamps, to approximate the latent weights between users and items and thus weight each edge. In the training phase, they randomly alternate between learning to distinguish observed from unobserved relations and learning to rank the weighted observed relations. This learning approach extends the capacity of BPR to differentiate between strong and weak connections. | {
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"2064754513",
"2140310134",
"1981844236"
],
"abstract": [
"A key element of the social networks on the internet such as Facebook and Flickr is that they encourage users to create connections between themselves, other users and objects. One important task that has been approached in the literature that deals with such data is to use social graphs to predict user behavior (e.g. joining a group of interest). More specifically, we study the cold-start problem, where users only participate in some relations, which we will call social relations, but not in the relation on which the predictions are made, which we will refer to as target relations. We propose a formalization of the problem and a principled approach to it based on multi-relational factorization techniques. Furthermore, we derive a principled feature extraction scheme from the social data to extract predictors for a classifier on the target relation. Experiments conducted on real world datasets show that our approach outperforms current methods.",
"Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion.",
"In many application domains of recommender systems, explicit rating information is sparse or non-existent. The preferences of the current user have therefore to be approximated by interpreting his or her behavior, i.e., the implicit user feedback. In the literature, a number of algorithm proposals have been made that rely solely on such implicit feedback, among them Bayesian Personalized Ranking (BPR). In the BPR approach, pairwise comparisons between the items are made in the training phase and an item i is considered to be preferred over item j if the user interacted in some form with i but not with j. In real-world applications, however, implicit feedback is not necessarily limited to such binary decisions as there are, e.g., different types of user actions like item views, cart or purchase actions and there can exist several actions for an item over time. In this paper we show how BPR can be extended to deal with such more fine-granular, graded preference relations. An empirical analysis shows that this extension can help to measurably increase the predictive accuracy of BPR on realistic e-commerce datasets."
]
} |
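The BPR criterion discussed in this record is a pairwise ranking objective: maximize ln σ(x_u,pos − x_u,neg) over sampled (user, positive item, negative item) triples, plus L2 regularization. A hedged sketch of one matrix-factorization SGD step follows; the factor dimensions, learning rate, and regularization strength are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(4, 8))   # user (or node) latent factors
V = rng.normal(scale=0.1, size=(6, 8))   # item (or label) latent factors

def bpr_step(U, V, user, pos, neg, lr=0.05, reg=0.01):
    """One stochastic gradient-ascent step on the BPR criterion:
    maximize ln(sigmoid(x_upos - x_uneg)) - L2 penalty, so that the
    observed item `pos` is ranked above the unobserved item `neg`."""
    u = U[user].copy()
    x_uij = u @ (V[pos] - V[neg])
    g = 1.0 / (1.0 + np.exp(x_uij))       # = sigmoid(-x_uij), gradient factor
    U[user] += lr * (g * (V[pos] - V[neg]) - reg * u)
    V[pos]  += lr * (g * u - reg * V[pos])
    V[neg]  += lr * (-g * u - reg * V[neg])

# Repeatedly sampling the same (user, pos, neg) triple pushes pos above neg.
for _ in range(300):
    bpr_step(U, V, user=0, pos=1, neg=2)
```

Note that the loss only sees which pairs are observed, which is exactly the limitation BPR++ targets by weighting the observed relations.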
1902.09357 | 2917947323 | Interpretability has always been a major concern for fuzzy rule-based classifiers. The usage of human-readable models allows them to explain the reasoning behind their predictions and decisions. However, when it comes to Big Data classification problems, fuzzy rule-based classifiers have not been able to maintain the good trade-off between accuracy and interpretability that has characterized these techniques in non-Big Data environments. The most accurate methods build too complex models composed of a large number of rules and fuzzy sets, while those approaches focusing on interpretability do not provide state-of-the-art discrimination capabilities. In this paper, we propose a new distributed learning algorithm to construct accurate and compact fuzzy rule-based classification systems for Big Data named CFM-BD. This method has been specifically designed from scratch for Big Data problems and does not adapt or extend any existing algorithm. The proposed learning process consists of three stages: 1) pre-processing based on the probability integral transform theorem; 2) rule induction inspired by CHI-BD and Apriori algorithms; 3) rule selection by means of a global evolutionary optimization. We conducted a complete empirical study to test the performance of our approach in terms of accuracy, complexity, and runtime. The results obtained were compared and contrasted with four state-of-the-art fuzzy classifiers for Big Data (FBDT, FMDT, Chi-Spark-RS, and CHI-BD). According to this study, CFM-BD is able to provide competitive discrimination capabilities using significantly simpler models composed of a few rules of less than 3 antecedents, employing 5 linguistic labels for all variables. 
| Distributed learning algorithms might, in turn, tackle classification problems either by decomposing the original training data into several local sub-problems @cite_44 @cite_12 @cite_22 , by performing a global distributed learning process @cite_4 @cite_20 @cite_7 @cite_13 @cite_34 , or by combining these two approaches @cite_5 @cite_35 . In the former case, an independent local model is concurrently built on each subset of data, and the final classifier is obtained by aggregating all these local models. In this manner, one could apply a well-known non-distributed algorithm to train each local model. However, similarly to incremental learning, the learning process becomes strongly dependent on how the subsets are distributed and might miss important information that is only available when the training data is treated as a whole. Regarding global distributed learning algorithms, the main drawback is the difficulty of parallelizing the training phase across several computing units. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_44",
"@cite_5",
"@cite_34",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"2735540554",
"2750427532",
"2749578821",
"2135074661",
"2180526715",
"2565966100",
"2756790431",
"2557042702",
"2734283703"
],
"abstract": [
"",
"Abstract The previous Fuzzy Rule-Based Classification Systems (FRBCSs) for Big Data problems consist in concurrently learning multiple FRBCSs whose rule bases are then aggregated. The problem of this approach is that different models are obtained when varying the configuration of the cluster, becoming less accurate as more computing nodes are added. Our aim with this work is to design a new FRBCS for Big Data classification problems (CHI-BD) which is able to provide exactly the same model as the one that would be obtained by the original algorithm if it could be executed with this quantity of data. In order to do so, we take advantage of the suitability of the algorithm for the MapReduce paradigm, solving the problems of the previous approach, which lead us to obtain the same model (i.e., classification accuracy) regardless of the number of computing nodes considered.",
"The significance and benefits of addressing classification tasks in Big Data applications is beyond any doubt. To do so, learning algorithms must be scalable to cope with such a high volume of data. The most suitable option to reach this objective is by using a MapReduce programming scheme, in which algorithms are automatically executed in a distributed and fault tolerant way. Among different available tools that support this framework, Spark has emerged as a “de facto” solution when using iterative approaches. In this work, our goal is to design and implement an Evolutionary Fuzzy Rule Selection algorithm within a Spark environment. To do so, we build different local rule bases within each Map Task that are later optimized by means of a genetic process. With this procedure, we seek to minimize the total number of rules that are gathered by each Reduce task to obtain a compact and accurate Fuzzy Rule Based Classification System. In particular, we set the experimental framework in the scenario of imbalanced classification. Therefore, the final objective will be analyzing the best synergy between the novel Evolutionary Fuzzy Rule Selection algorithm and the solutions applied to cope with skewed class distributions, namely cost-sensitive learning, random under-sampling and random-oversampling.",
"Internet and the new technologies are generating new scenarios with and a significant increase of data volumes. The treatment of this huge quantity of information is impossible with traditional methodologies and we need to design new approaches towards distributed paradigms such as MapReduce. This situation is widely known in the literature as Big Data. This contribution presents a first approach to handle fuzzy emerging patterns in big data environments. This new algorithm is called EvAFP-Spark and is development in Apache Spark based on MapReduce. The use of this paradigm allows us the analysis of huge datasets efficiently. The main idea of EvAEFP-Spark is to modify the methodology of evaluation of the populations in the evolutionary process. In this way, a population is evaluated in the different maps, obtained in the Map phase of the paradigm, and for each one a confusion matrix is obtained. Then, the Reduce function accumulates the confusion matrix for each map in a general matrix in order to evaluate the fitness of the individuals. An experimental study with high dimensional datasets is performed in order to show the advantages of this algorithm in emerging patterns mining.",
"Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation.",
"In this paper, we propose an efficient distributed fuzzy associative classification model based on the MapReduce paradigm. The learning algorithm first mines a set of fuzzy association classification rules by employing a distributed version of a fuzzy extension of the well-known FP-Growth algorithm. Then, it prunes this set by using three purposely adapted types of pruning. We implemented the distributed fuzzy associative classifier using the Hadoop framework. We show the scalability of our approach by carrying out a number of experiments on a real-world big dataset. In particular, we evaluate the achievable speedup on a small computer cluster, highlighting that the proposed approach allows handling big datasets even with modest hardware support.",
"Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.",
"Fuzzy associative classification has not been widely analyzed in the literature, although associative classifiers (ACs) have proved to be very effective in different real domain applications. The main reason is that learning fuzzy ACs is a very heavy task, especially when dealing with large datasets. To overcome this drawback, in this paper, we propose an efficient distributed fuzzy associative classification approach based on the MapReduce paradigm. The approach exploits a novel distributed discretizer based on fuzzy entropy for efficiently generating fuzzy partitions of the attributes. Then, a set of candidate fuzzy association rules is generated by employing a distributed fuzzy extension of the well-known FP-Growth algorithm. Finally, this set is pruned by using three purposely adapted types of pruning. We implemented our approach on the popular Hadoop framework. Hadoop allows distributing storage and processing of very large data sets on computer clusters built from commodity hardware. We have performed an extensive experimentation and a detailed analysis of the results using six very large datasets with up to 11 000 000 instances. We have also experimented different types of reasoning methods. Focusing on accuracy, model complexity, computation time, and scalability, we compare the results achieved by our approach with those obtained by two distributed nonfuzzy ACs recently proposed in the literature. We highlight that, although the accuracies result to be comparable, the complexity, evaluated in terms of number of rules, of the classifiers generated by the fuzzy distributed approach is lower than the one of the nonfuzzy classifiers.",
"The treatment and processing of Big Data problems imply an essential advantage for researchers and corporations. This is due to the huge quantity of knowledge that is hidden within the vast amount of information that is available nowadays. In order to be able to address with such volume of information in an efficient way, the scalability for Big Data applications is achieved by means of the MapReduce programming model. It is designed to divide the data into several chunks or groups that are processed in parallel, and whose result is “assembled” to provide a single solution.",
"Abstract In the last years, multi-objective evolutionary algorithms (MOEAs) have been extensively used to generate sets of fuzzy rule-based classifiers (FRBCs) with different trade-offs between accuracy and interpretability. Since the computation of the accuracy for each chromosome evaluation requires the scan of the overall training set, these approaches have proved to be very expensive in terms of execution time and memory occupation. For this reason, they have not been applied to very large datasets yet. On the other hand, just for these datasets, interpretability of classifiers would be very desirable. In the last years the advent of a number of open source cluster computing frameworks has however opened new interesting perspectives. In this paper, we exploit one of these frameworks, namely Apache Spark, and propose the first distributed multi-objective evolutionary approach to learn concurrently the rule and data bases of FRBCs by maximizing accuracy and minimizing complexity. During the evolutionary process, the computation of the fitness is divided among the cluster nodes, thus allowing the designer to distribute both the computational complexity and the dataset storing. We have performed a number of experiments on ten real-world big datasets, evaluating our distributed approach in terms of both classification rate and scalability, and comparing it with two well-known state-of-art distributed classifiers. Finally, we have evaluated the achievable speedup on a small computer cluster. We present that the distributed version can efficiently extract compact rule bases with high accuracy, preserving the interpretability of the rule base, and can manage big datasets even with modest hardware support."
]
} |
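The "local sub-problems" strategy contrasted in this record — train an independent model per data chunk, then aggregate — can be illustrated with a deliberately simple sketch. Here each chunk trains a nearest-centroid model and the aggregation is a majority vote; both the model choice and the data are assumptions for illustration, not any of the cited algorithms:

```python
import numpy as np

def train_local(X, y):
    """'Local model' built independently on one chunk: per-class centroids
    (a stand-in for whatever non-distributed learner runs on each subset)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(models, x):
    """Aggregation step: every local model votes for the class of its
    nearest centroid, and the final classifier takes the majority vote."""
    votes = [min(m, key=lambda c: np.linalg.norm(x - m[c])) for m in models]
    return max(set(votes), key=votes.count)

# Toy separable data (assumed), shuffled and split into 3 map-side chunks.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
chunks = np.array_split(rng.permutation(60), 3)
models = [train_local(X[c], y[c]) for c in chunks]
```

The drawback noted in the text is visible even in this toy: each centroid is estimated from one chunk only, so a skewed partitioning of the data directly degrades the aggregated classifier.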
1902.09357 | 2917947323 | Interpretability has always been a major concern for fuzzy rule-based classifiers. The usage of human-readable models allows them to explain the reasoning behind their predictions and decisions. However, when it comes to Big Data classification problems, fuzzy rule-based classifiers have not been able to maintain the good trade-off between accuracy and interpretability that has characterized these techniques in non-Big Data environments. The most accurate methods build too complex models composed of a large number of rules and fuzzy sets, while those approaches focusing on interpretability do not provide state-of-the-art discrimination capabilities. In this paper, we propose a new distributed learning algorithm to construct accurate and compact fuzzy rule-based classification systems for Big Data named CFM-BD. This method has been specifically designed from scratch for Big Data problems and does not adapt or extend any existing algorithm. The proposed learning process consists of three stages: 1) pre-processing based on the probability integral transform theorem; 2) rule induction inspired by CHI-BD and Apriori algorithms; 3) rule selection by means of a global evolutionary optimization. We conducted a complete empirical study to test the performance of our approach in terms of accuracy, complexity, and runtime. The results obtained were compared and contrasted with four state-of-the-art fuzzy classifiers for Big Data (FBDT, FMDT, Chi-Spark-RS, and CHI-BD). According to this study, CFM-BD is able to provide competitive discrimination capabilities using significantly simpler models composed of a few rules of less than 3 antecedents, employing 5 linguistic labels for all variables. 
| Different strategies have been applied to obtain human-readable fuzzy models in Big Data classification problems, including fuzzy versions of decision trees (FDTs) @cite_20 @cite_34 , sub-group discovery (SD) @cite_35 , associative classifiers (FACs) @cite_5 @cite_13 , emerging patterns mining (EPM) @cite_7 , and rule-based classifiers (FRBCs) @cite_4 @cite_12 @cite_22 @cite_20 @cite_44 . In @cite_20 , a distributed version of C4.5 is used to extract a candidate rule base that is then optimized by an evolutionary algorithm. Another approach builds a distributed FDT that exploits the classical Decision Tree implementation in Spark MLlib (http://spark.apache.org/mllib), extending the learning scheme with a fuzzy information gain based on fuzzy entropy @cite_34 . A new SD algorithm called MEFASD-BD was presented in @cite_35 , which uses an evolutionary fuzzy system to extract fuzzy rules describing subgroups for each partition, though the quality of each solution is measured on the whole training set. Fuzzy logic was also used for EPM in Big Data by García- @cite_7 , applying a global evolutionary fuzzy system that employs the entire training set. Finally, different distributed versions of both FACs and FRBCs were proposed in @cite_5 @cite_4 @cite_44 @cite_12 @cite_22 @cite_13 . | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_44",
"@cite_5",
"@cite_34",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"2735540554",
"2750427532",
"2749578821",
"2135074661",
"2180526715",
"2565966100",
"2756790431",
"2557042702",
"2734283703"
],
"abstract": [
"",
"Abstract The previous Fuzzy Rule-Based Classification Systems (FRBCSs) for Big Data problems consist in concurrently learning multiple FRBCSs whose rule bases are then aggregated. The problem of this approach is that different models are obtained when varying the configuration of the cluster, becoming less accurate as more computing nodes are added. Our aim with this work is to design a new FRBCS for Big Data classification problems (CHI-BD) which is able to provide exactly the same model as the one that would be obtained by the original algorithm if it could be executed with this quantity of data. In order to do so, we take advantage of the suitability of the algorithm for the MapReduce paradigm, solving the problems of the previous approach, which lead us to obtain the same model (i.e., classification accuracy) regardless of the number of computing nodes considered.",
"The significance and benefits of addressing classification tasks in Big Data applications is beyond any doubt. To do so, learning algorithms must be scalable to cope with such a high volume of data. The most suitable option to reach this objective is by using a MapReduce programming scheme, in which algorithms are automatically executed in a distributed and fault tolerant way. Among different available tools that support this framework, Spark has emerged as a “de facto” solution when using iterative approaches. In this work, our goal is to design and implement an Evolutionary Fuzzy Rule Selection algorithm within a Spark environment. To do so, we build different local rule bases within each Map Task that are later optimized by means of a genetic process. With this procedure, we seek to minimize the total number of rules that are gathered by each Reduce task to obtain a compact and accurate Fuzzy Rule Based Classification System. In particular, we set the experimental framework in the scenario of imbalanced classification. Therefore, the final objective will be analyzing the best synergy between the novel Evolutionary Fuzzy Rule Selection algorithm and the solutions applied to cope with skewed class distributions, namely cost-sensitive learning, random under-sampling and random-oversampling.",
"Internet and the new technologies are generating new scenarios with and a significant increase of data volumes. The treatment of this huge quantity of information is impossible with traditional methodologies and we need to design new approaches towards distributed paradigms such as MapReduce. This situation is widely known in the literature as Big Data. This contribution presents a first approach to handle fuzzy emerging patterns in big data environments. This new algorithm is called EvAFP-Spark and is development in Apache Spark based on MapReduce. The use of this paradigm allows us the analysis of huge datasets efficiently. The main idea of EvAEFP-Spark is to modify the methodology of evaluation of the populations in the evolutionary process. In this way, a population is evaluated in the different maps, obtained in the Map phase of the paradigm, and for each one a confusion matrix is obtained. Then, the Reduce function accumulates the confusion matrix for each map in a general matrix in order to evaluate the fitness of the individuals. An experimental study with high dimensional datasets is performed in order to show the advantages of this algorithm in emerging patterns mining.",
"Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation.",
"In this paper, we propose an efficient distributed fuzzy associative classification model based on the MapReduce paradigm. The learning algorithm first mines a set of fuzzy association classification rules by employing a distributed version of a fuzzy extension of the well-known FP-Growth algorithm. Then, it prunes this set by using three purposely adapted types of pruning. We implemented the distributed fuzzy associative classifier using the Hadoop framework. We show the scalability of our approach by carrying out a number of experiments on a real-world big dataset. In particular, we evaluate the achievable speedup on a small computer cluster, highlighting that the proposed approach allows handling big datasets even with modest hardware support.",
"Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.",
"Fuzzy associative classification has not been widely analyzed in the literature, although associative classifiers (ACs) have proved to be very effective in different real domain applications. The main reason is that learning fuzzy ACs is a very heavy task, especially when dealing with large datasets. To overcome this drawback, in this paper, we propose an efficient distributed fuzzy associative classification approach based on the MapReduce paradigm. The approach exploits a novel distributed discretizer based on fuzzy entropy for efficiently generating fuzzy partitions of the attributes. Then, a set of candidate fuzzy association rules is generated by employing a distributed fuzzy extension of the well-known FP-Growth algorithm. Finally, this set is pruned by using three purposely adapted types of pruning. We implemented our approach on the popular Hadoop framework. Hadoop allows distributing storage and processing of very large data sets on computer clusters built from commodity hardware. We have performed an extensive experimentation and a detailed analysis of the results using six very large datasets with up to 11 000 000 instances. We have also experimented different types of reasoning methods. Focusing on accuracy, model complexity, computation time, and scalability, we compare the results achieved by our approach with those obtained by two distributed nonfuzzy ACs recently proposed in the literature. We highlight that, although the accuracies result to be comparable, the complexity, evaluated in terms of number of rules, of the classifiers generated by the fuzzy distributed approach is lower than the one of the nonfuzzy classifiers.",
"The treatment and processing of Big Data problems imply an essential advantage for researchers and corporations. This is due to the huge quantity of knowledge that is hidden within the vast amount of information that is available nowadays. In order to be able to address with such volume of information in an efficient way, the scalability for Big Data applications is achieved by means of the MapReduce programming model. It is designed to divide the data into several chunks or groups that are processed in parallel, and whose result is “assembled” to provide a single solution.",
"Abstract In the last years, multi-objective evolutionary algorithms (MOEAs) have been extensively used to generate sets of fuzzy rule-based classifiers (FRBCs) with different trade-offs between accuracy and interpretability. Since the computation of the accuracy for each chromosome evaluation requires the scan of the overall training set, these approaches have proved to be very expensive in terms of execution time and memory occupation. For this reason, they have not been applied to very large datasets yet. On the other hand, just for these datasets, interpretability of classifiers would be very desirable. In the last years the advent of a number of open source cluster computing frameworks has however opened new interesting perspectives. In this paper, we exploit one of these frameworks, namely Apache Spark, and propose the first distributed multi-objective evolutionary approach to learn concurrently the rule and data bases of FRBCs by maximizing accuracy and minimizing complexity. During the evolutionary process, the computation of the fitness is divided among the cluster nodes, thus allowing the designer to distribute both the computational complexity and the dataset storing. We have performed a number of experiments on ten real-world big datasets, evaluating our distributed approach in terms of both classification rate and scalability, and comparing it with two well-known state-of-art distributed classifiers. Finally, we have evaluated the achievable speedup on a small computer cluster. We present that the distributed version can efficiently extract compact rule bases with high accuracy, preserving the interpretability of the rule base, and can manage big datasets even with modest hardware support."
]
} |
1902.09357 | 2917947323 | Interpretability has always been a major concern for fuzzy rule-based classifiers. The usage of human-readable models allows them to explain the reasoning behind their predictions and decisions. However, when it comes to Big Data classification problems, fuzzy rule-based classifiers have not been able to maintain the good trade-off between accuracy and interpretability that has characterized these techniques in non-Big Data environments. The most accurate methods build too complex models composed of a large number of rules and fuzzy sets, while those approaches focusing on interpretability do not provide state-of-the-art discrimination capabilities. In this paper, we propose a new distributed learning algorithm to construct accurate and compact fuzzy rule-based classification systems for Big Data named CFM-BD. This method has been specifically designed from scratch for Big Data problems and does not adapt or extend any existing algorithm. The proposed learning process consists of three stages: 1) pre-processing based on the probability integral transform theorem; 2) rule induction inspired by CHI-BD and Apriori algorithms; 3) rule selection by means of a global evolutionary optimization. We conducted a complete empirical study to test the performance of our approach in terms of accuracy, complexity, and runtime. The results obtained were compared and contrasted with four state-of-the-art fuzzy classifiers for Big Data (FBDT, FMDT, Chi-Spark-RS, and CHI-BD). According to this study, CFM-BD is able to provide competitive discrimination capabilities using significantly simpler models composed of a few rules of less than 3 antecedents, employing 5 linguistic labels for all variables. | However, the above-mentioned algorithms sacrifice either interpretability for classification accuracy or vice versa.
Some algorithms focus on the accuracy and tend to generate too complex models having large amounts of rules @cite_4 @cite_12 @cite_44 , excessive rule lengths @cite_4 @cite_12 @cite_22 @cite_44 , or a high number of fuzzy sets (linguistic labels) @cite_13 @cite_34 . On the other hand, those algorithms that optimize the interpretability of the model are not able to achieve state-of-the-art results in terms of accuracy @cite_20 . There are also other contributions that, from our point of view, do not consider enough datasets to assess these aspects in Big Data environments @cite_5 @cite_7 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_20",
"@cite_44",
"@cite_5",
"@cite_34",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2735540554",
"2750427532",
"2749578821",
"2734283703",
"2135074661",
"2180526715",
"2565966100",
"2756790431",
"2557042702"
],
"abstract": [
"",
"Abstract The previous Fuzzy Rule-Based Classification Systems (FRBCSs) for Big Data problems consist in concurrently learning multiple FRBCSs whose rule bases are then aggregated. The problem of this approach is that different models are obtained when varying the configuration of the cluster, becoming less accurate as more computing nodes are added. Our aim with this work is to design a new FRBCS for Big Data classification problems (CHI-BD) which is able to provide exactly the same model as the one that would be obtained by the original algorithm if it could be executed with this quantity of data. In order to do so, we take advantage of the suitability of the algorithm for the MapReduce paradigm, solving the problems of the previous approach, which lead us to obtain the same model (i.e., classification accuracy) regardless of the number of computing nodes considered.",
"The significance and benefits of addressing classification tasks in Big Data applications is beyond any doubt. To do so, learning algorithms must be scalable to cope with such a high volume of data. The most suitable option to reach this objective is by using a MapReduce programming scheme, in which algorithms are automatically executed in a distributed and fault tolerant way. Among different available tools that support this framework, Spark has emerged as a “de facto” solution when using iterative approaches. In this work, our goal is to design and implement an Evolutionary Fuzzy Rule Selection algorithm within a Spark environment. To do so, we build different local rule bases within each Map Task that are later optimized by means of a genetic process. With this procedure, we seek to minimize the total number of rules that are gathered by each Reduce task to obtain a compact and accurate Fuzzy Rule Based Classification System. In particular, we set the experimental framework in the scenario of imbalanced classification. Therefore, the final objective will be analyzing the best synergy between the novel Evolutionary Fuzzy Rule Selection algorithm and the solutions applied to cope with skewed class distributions, namely cost-sensitive learning, random under-sampling and random-oversampling.",
"Internet and the new technologies are generating new scenarios with and a significant increase of data volumes. The treatment of this huge quantity of information is impossible with traditional methodologies and we need to design new approaches towards distributed paradigms such as MapReduce. This situation is widely known in the literature as Big Data. This contribution presents a first approach to handle fuzzy emerging patterns in big data environments. This new algorithm is called EvAFP-Spark and is development in Apache Spark based on MapReduce. The use of this paradigm allows us the analysis of huge datasets efficiently. The main idea of EvAEFP-Spark is to modify the methodology of evaluation of the populations in the evolutionary process. In this way, a population is evaluated in the different maps, obtained in the Map phase of the paradigm, and for each one a confusion matrix is obtained. Then, the Reduce function accumulates the confusion matrix for each map in a general matrix in order to evaluate the fitness of the individuals. An experimental study with high dimensional datasets is performed in order to show the advantages of this algorithm in emerging patterns mining.",
"Abstract In the last years, multi-objective evolutionary algorithms (MOEAs) have been extensively used to generate sets of fuzzy rule-based classifiers (FRBCs) with different trade-offs between accuracy and interpretability. Since the computation of the accuracy for each chromosome evaluation requires the scan of the overall training set, these approaches have proved to be very expensive in terms of execution time and memory occupation. For this reason, they have not been applied to very large datasets yet. On the other hand, just for these datasets, interpretability of classifiers would be very desirable. In the last years the advent of a number of open source cluster computing frameworks has however opened new interesting perspectives. In this paper, we exploit one of these frameworks, namely Apache Spark, and propose the first distributed multi-objective evolutionary approach to learn concurrently the rule and data bases of FRBCs by maximizing accuracy and minimizing complexity. During the evolutionary process, the computation of the fitness is divided among the cluster nodes, thus allowing the designer to distribute both the computational complexity and the dataset storing. We have performed a number of experiments on ten real-world big datasets, evaluating our distributed approach in terms of both classification rate and scalability, and comparing it with two well-known state-of-art distributed classifiers. Finally, we have evaluated the achievable speedup on a small computer cluster. We present that the distributed version can efficiently extract compact rule bases with high accuracy, preserving the interpretability of the rule base, and can manage big datasets even with modest hardware support.",
"Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation.",
"In this paper, we propose an efficient distributed fuzzy associative classification model based on the MapReduce paradigm. The learning algorithm first mines a set of fuzzy association classification rules by employing a distributed version of a fuzzy extension of the well-known FP-Growth algorithm. Then, it prunes this set by using three purposely adapted types of pruning. We implemented the distributed fuzzy associative classifier using the Hadoop framework. We show the scalability of our approach by carrying out a number of experiments on a real-world big dataset. In particular, we evaluate the achievable speedup on a small computer cluster, highlighting that the proposed approach allows handling big datasets even with modest hardware support.",
"Fuzzy decision trees (FDTs) have shown to be an effective solution in the framework of fuzzy classification. The approaches proposed so far to FDT learning, however, have generally neglected time and space requirements. In this paper, we propose a distributed FDT learning scheme shaped according to the MapReduce programming model for generating both binary and multiway FDTs from big data. The scheme relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy. The fuzzy partitions are, therefore, used as an input to the FDT learning algorithm, which employs fuzzy information gain for selecting the attributes at the decision nodes. We have implemented the FDT learning scheme on the Apache Spark framework. We have used ten real-world publicly available big datasets for evaluating the behavior of the scheme along three dimensions: 1) performance in terms of classification accuracy, model complexity, and execution time; 2) scalability varying the number of computing units; and 3) ability to efficiently accommodate an increasing dataset size. We have demonstrated that the proposed scheme turns out to be suitable for managing big datasets even with a modest commodity hardware support. Finally, we have used the distributed decision tree learning algorithm implemented in the MLLib library and the Chi-FRBCS-BigData algorithm, a MapReduce distributed fuzzy rule-based classification system, for comparative analysis.",
"Fuzzy associative classification has not been widely analyzed in the literature, although associative classifiers (ACs) have proved to be very effective in different real domain applications. The main reason is that learning fuzzy ACs is a very heavy task, especially when dealing with large datasets. To overcome this drawback, in this paper, we propose an efficient distributed fuzzy associative classification approach based on the MapReduce paradigm. The approach exploits a novel distributed discretizer based on fuzzy entropy for efficiently generating fuzzy partitions of the attributes. Then, a set of candidate fuzzy association rules is generated by employing a distributed fuzzy extension of the well-known FP-Growth algorithm. Finally, this set is pruned by using three purposely adapted types of pruning. We implemented our approach on the popular Hadoop framework. Hadoop allows distributing storage and processing of very large data sets on computer clusters built from commodity hardware. We have performed an extensive experimentation and a detailed analysis of the results using six very large datasets with up to 11 000 000 instances. We have also experimented different types of reasoning methods. Focusing on accuracy, model complexity, computation time, and scalability, we compare the results achieved by our approach with those obtained by two distributed nonfuzzy ACs recently proposed in the literature. We highlight that, although the accuracies result to be comparable, the complexity, evaluated in terms of number of rules, of the classifiers generated by the fuzzy distributed approach is lower than the one of the nonfuzzy classifiers.",
"The treatment and processing of Big Data problems imply an essential advantage for researchers and corporations. This is due to the huge quantity of knowledge that is hidden within the vast amount of information that is available nowadays. In order to be able to address with such volume of information in an efficient way, the scalability for Big Data applications is achieved by means of the MapReduce programming model. It is designed to divide the data into several chunks or groups that are processed in parallel, and whose result is “assembled” to provide a single solution."
]
} |
1902.08912 | 2953082918 | Lexicalized parsing models are based on the assumptions that (i) constituents are organized around a lexical head (ii) bilexical statistics are crucial to solve ambiguities. In this paper, we introduce an unlexicalized transition-based parser for discontinuous constituency structures, based on a structure-label transition system and a bi-LSTM scoring system. We compare it to lexicalized parsing models in order to address the question of lexicalization in the context of discontinuous constituency parsing. Our experiments show that unlexicalized models systematically achieve higher results than lexicalized models, and provide additional empirical evidence that lexicalization is not necessary to achieve strong parsing results. Our best unlexicalized model sets a new state of the art on English and German discontinuous constituency treebanks. We further provide a per-phenomenon analysis of its errors on discontinuous constituents. | All these proposals use a lexicalized model, as defined in the introduction: they assign heads to new constituents and use them as features to inform parsing decisions. Previous work on unlexicalized transition-based parsing models only focused on projective constituency trees @cite_44 @cite_45 . In particular, one of them introduced a system that does not require explicit binarization. Their system decouples the construction of a tree and the labelling of its nodes by assigning types (structural or label) to each action, and alternating between a structural action for even steps and labelling action for odd steps. This distinction arguably makes each decision simpler. | {
"cite_N": [
"@cite_44",
"@cite_45"
],
"mid": [
"2963073938",
"2963372751"
],
"abstract": [
"Comunicacio presentada a la 2016 Conference of the North American Chapter of the Association for Computational Linguistics, celebrada a San Diego (CA, EUA) els dies 12 a 17 de juny 2016.",
"Both bottom-up and top-down strategies have been used for neural transition-based constituent parsing. The parsing strategies differ in terms of the order in which they recognize productions in the..."
]
} |
1902.09140 | 2950726318 | The technological advancements of recent years have steadily increased the complexity of vehicle-internal software systems, and the ongoing development towards autonomous driving will further aggravate this situation. This is leading to a level of complexity that is pushing the limits of existing vehicle software architectures and system designs. By changing the software structure to a service-based architecture, companies in other domains successfully managed the rising complexity and created a more agile and future-oriented development process. This paper presents a case-study investigating the feasibility and possible effects of changing the software architecture for a complex driver assistance function to a microservice architecture. The complete procedure is described, starting with the description of the software-environment and the corresponding requirements, followed by the implementation, and the final testing. In addition, this paper provides a high-level evaluation of the microservice architecture for the automotive use-case. The results show that microservice architectures can reduce complexity and time-consuming process steps and makes the automotive software systems prepared for upcoming challenges as long as the principles of microservice architectures are carefully followed. | There is work demonstrating how a monolithic application can be transformed into a microservice system. For example, the experience report from the banking sector dealt with the transformation of a currency conversion system from Danske Bank into a system based on microservices @cite_4 . Due to the enormous size of the system, tasks such as fault tolerance mechanisms, concurrency handling, and monitoring gained importance. Also the design of the system and the capability to manage all services was a challenging task during the case-study. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2800410375"
],
"abstract": [
"Microservices have seen their popularity blossoming with an explosion of concrete applications in real-life software. Several companies are currently involved in a major refactoring of their back-end systems in order to improve scalability. This article presents an experience report of a real-world case study, from the banking domain, in order to demonstrate how scalability is positively affected by reimplementing a monolithic architecture into microservices. The case study is based on the FX Core system for converting from one currency to another. FX Core is a mission-critical system of Danske Bank, the largest bank in Denmark and one of the leading financial institutions in Northern Europe."
]
} |
1902.09140 | 2950726318 | The technological advancements of recent years have steadily increased the complexity of vehicle-internal software systems, and the ongoing development towards autonomous driving will further aggravate this situation. This is leading to a level of complexity that is pushing the limits of existing vehicle software architectures and system designs. By changing the software structure to a service-based architecture, companies in other domains successfully managed the rising complexity and created a more agile and future-oriented development process. This paper presents a case-study investigating the feasibility and possible effects of changing the software architecture for a complex driver assistance function to a microservice architecture. The complete procedure is described, starting with the description of the software-environment and the corresponding requirements, followed by the implementation, and the final testing. In addition, this paper provides a high-level evaluation of the microservice architecture for the automotive use-case. The results show that microservice architectures can reduce complexity and time-consuming process steps and makes the automotive software systems prepared for upcoming challenges as long as the principles of microservice architectures are carefully followed. | The software environment and workflow that is used during our case-study is based on the developments and results from Berger @cite_6 . Benderius et al. @cite_7 have shown that OpenDLV in combination with Docker is a well-suited software environment for a successful microservice deployment for vehicles and autonomous driving. | {
"cite_N": [
"@cite_7",
"@cite_6"
],
"mid": [
"2763646004",
"2723047152"
],
"abstract": [
"This paper provides an in-depth description of the best rated human-machine interface that was presented during the 2016 Grand Cooperative Driving Challenge. It was demonstrated by the Chalmers Truck Team as the envisioned interface to their open source software framework OpenDLV, which is used to power Chalmers’ fleet of self-driving vehicles. The design originates from the postulate that the vehicle is fully autonomous to handle even complex traffic scenarios. Thus, by including external and internal interfaces, and introducing a show, don’t tell principle, it aims at fulfilling the needs of the vehicle occupants as well as other participants in the traffic environment. The design also attempts to comply with, and slightly extend, the current traffic rules and legislation for the purpose of being realistic for full-scale implementation.",
"In this paper, experiences and best practices from using containerized software microservices for self-driving vehicles are shared. We applied the containerized software paradigm successfully to both the software development and deployment to turn our software architecture in the vehicles following the idea of microservices. Key enabling elements include onboarding of new developers, both researchers and students, traceable development and packaging, convenient and bare-bone deployment, and traceably archiving binary distributions of our quickly evolving software environment. In this paper, we share our experience from working one year with containerized development and deployment for our self-driving vehicles highlighting our reflections and application-specific shortcomings, our approach uses several components from the widely used Docker ecosystem, but the discussion in this paper generalizes these concepts. We conclude that the growingly complex automotive software systems in combination with their computational platforms should be rather understood as data centers on wheels to design both, (a) the software development and deployment processes, and (b) the software architecture in such a way to enable continuous integration, continuous deployment, and continuous experimentation."
]
} |
1902.09140 | 2950726318 | The technological advancements of recent years have steadily increased the complexity of vehicle-internal software systems, and the ongoing development towards autonomous driving will further aggravate this situation. This is leading to a level of complexity that is pushing the limits of existing vehicle software architectures and system designs. By changing the software structure to a service-based architecture, companies in other domains successfully managed the rising complexity and created a more agile and future-oriented development process. This paper presents a case-study investigating the feasibility and possible effects of changing the software architecture for a complex driver assistance function to a microservice architecture. The complete procedure is described, starting with the description of the software-environment and the corresponding requirements, followed by the implementation, and the final testing. In addition, this paper provides a high-level evaluation of the microservice architecture for the automotive use-case. The results show that microservice architectures can reduce complexity and time-consuming process steps and makes the automotive software systems prepared for upcoming challenges as long as the principles of microservice architectures are carefully followed. | An alternative development environment is described by Kugele et al. @cite_2 . In their work, the throughput of the OpenDDS software middleware is evaluated. Furthermore, a formal mapping of services of the data distribution service (DDS) and a case-study about fail-operational behavior are described. The paper concludes that the used SOA technique in combination with DCPS is suitable for the automotive industry, but some points regarding safety, certification, and security could not be clarified. Kugele et al. point out that a commercial version of DDS could solve these problems.
They also find that DDS in combination with the Docker platform is a suitable development approach in an agile work environment. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2883843233"
],
"abstract": [
"Context: The functional interconnection and data routing in today's automotive electric electronic architectures has reached a level of complexity which is hardly manageable and error-prone. This circumstance severely hinders short times from development to operation. Aim: The purpose of the study is to evaluate the feasibility of Data Distribution Services in accord with containerization technologies in an agile development process for automotive software. Method: We propose to represent services by means of topics in a data-centric publish-subscribe approach. We conduct performance benchmarks to evaluate its aptitude and present a case study illustrating fail-operational behavior in a setup recreated from highly automated driving. Results: Backed by the results and the case study we show that containerized services, along with data-centric messaging, manage to meet most of our proposed requirements. We furthermore reveal limitations of the used technology stack and discuss remedies to their shortcomings."
]
} |
1902.09140 | 2950726318 | The technological advancements of recent years have steadily increased the complexity of vehicle-internal software systems, and the ongoing development towards autonomous driving will further aggravate this situation. This is leading to a level of complexity that is pushing the limits of existing vehicle software architectures and system designs. By changing the software structure to a service-based architecture, companies in other domains successfully managed the rising complexity and created a more agile and future-oriented development process. This paper presents a case-study investigating the feasibility and possible effects of changing the software architecture for a complex driver assistance function to a microservice architecture. The complete procedure is described, starting with the description of the software-environment and the corresponding requirements, followed by the implementation, and the final testing. In addition, this paper provides a high-level evaluation of the microservice architecture for the automotive use-case. The results show that microservice architectures can reduce complexity and time-consuming process steps and makes the automotive software systems prepared for upcoming challenges as long as the principles of microservice architectures are carefully followed. | Another publish-subscribe middleware called Chromosome and a centralized platform architecture for automotive applications, called Race, have similarities with the above approach and with this work @cite_9 . It is a centralized computing architecture that provides support at both the software and network levels. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2084899319"
],
"abstract": [
"In the last couple of years software functionality of modern cars increased dramatically. This growing functionality leads directly to a higher complexity of development and configuration. Current studies show that the amount of software will continue to grow. Additionally, advanced driver assistance systems (ADAS) and autonomous functionality, such as highly and fully automated driving or parking, will be introduced. Many of these new functions require access to different communication domains within the car, which increases system complexity. AUTOSAR, the software architecture established as a standard in the automotive domain, provides no methodologies to reduce this kind of complexity and to master new challenges. One solution for these evolving systems is developed in the RACE project. Here, a centralized platform computer (CPC) is introduced, which is inspired by the well-established approach used in other domains like avionics and automation. The CPC establishes a generic safety-critical execution environment for applications, providing interfaces for test and verification as well as a reliable communication infrastructure to smart sensors and actuators. A centralized platform also significantly reduces the complexity of integration and verification of new applications, and enables the support for Plug&Play."
]
} |
1902.09145 | 2952001916 | We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time. | Optical flow estimation has been a long-standing challenge in computer vision. Early variational approaches @cite_10 @cite_1 formulate it as an energy minimization problem based on brightness constancy and spatial smoothness. Such methods are effective for small motion, but tend to fail when displacements are large. | {
"cite_N": [
"@cite_1",
"@cite_10"
],
"mid": [
"2033959528",
"1578285471"
],
"abstract": [
"The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark.",
"Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantified rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image."
]
} |
1902.09145 | 2952001916 | We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time. | Later, @cite_32 @cite_17 integrate feature matching to tackle this issue. Specifically, they find sparse feature correspondences to initialize flow estimation and further refine it in a pyramidal coarse-to-fine manner. The seminal work EpicFlow @cite_23 interpolates dense flow from sparse matches and has become a widely used post-processing pipeline. Recently, @cite_16 @cite_19 use convolutional neural networks to learn a feature embedding for better matching and have demonstrated superior performance. However, all of these classical methods are often time-consuming, and their modules usually involve special tuning for different datasets. | {
"cite_N": [
"@cite_32",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_17"
],
"mid": [
"2131747574",
"2963210183",
"1951289974",
"2963219046",
"2113221323"
],
"abstract": [
"Optical flow estimation is classically marked by the requirement of dense sampling in time. While coarse-to-fine warping schemes have somehow relaxed this constraint, there is an inherent dependency between the scale of structures and the velocity that can be estimated. This particularly renders the estimation of detailed human motion problematic, as small body parts can move very fast. In this paper, we present a way to approach this problem by integrating rich descriptors into the variational optical flow setting. This way we can estimate a dense optical flow field with almost the same high accuracy as known from variational optical flow, while reaching out to new domains of motion analysis where the requirement of dense sampling in time is no longer satisfied.",
"We present an optical flow estimation approach that operates on the full four-dimensional cost volume. This direct approach shares the structural benefits of leading stereo matching pipelines, which are known to yield high accuracy. To this day, such approaches have been considered impractical due to the size of the cost volume. We show that the full four-dimensional cost volume can be constructed in a fraction of a second due to its regularity. We then exploit this regularity further by adapting semi-global matching to the four-dimensional setting. This yields a pipeline that achieves significantly higher accuracy than state-of-the-art optical flow methods while being faster than most. Our approach outperforms all published general-purpose optical flow methods on both Sintel and KITTI 2015 benchmarks.",
"We propose a novel approach for optical flow estimation, targeted at large displacements with significant occlusions. It consists of two steps: i) dense matching by edge-preserving interpolation from a sparse set of matches; ii) variational energy minimization initialized with the dense matches. The sparse-to-dense interpolation relies on an appropriate choice of the distance, namely an edge-aware geodesic distance. This distance is tailored to handle occlusions and motion boundaries - two common and difficult issues for optical flow computation. We also propose an approximation scheme for the geodesic distance to allow fast computation without loss of performance. Subsequent to the dense interpolation step, standard one-level variational energy minimization is carried out on the dense matches to obtain the final flow estimation. The proposed approach, called Edge-Preserving Interpolation of Correspondences (EpicFlow) is fast and robust to large displacements. It significantly outperforms the state of the art on MPI-Sintel and performs on par on Kitti and Middlebury.",
"Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. We proved the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals where we obtained state-of-the-art results on all three datasets.",
"Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed Deep Flow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. Deep Flow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset."
]
} |
1902.09145 | 2952001916 | We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time. | The success of deep neural networks has motivated the development of optical flow learning methods. The pioneer work is FlowNet @cite_14 , which takes two consecutive images as input and outputs a dense optical flow map. The following FlowNet 2.0 @cite_4 significantly improves accuracy by stacking several basic FlowNet modules together, and iteratively refining them. SpyNet @cite_20 proposes to warp images at multiple scales to handle large displacements, and introduces a compact spatial pyramid network to predict optical flow. Very recently, PWC-Net @cite_11 and LiteFlowNet @cite_8 propose to warp features extracted from CNNs rather than warp images over different scales. They achieve state-of-the-art results while keeping a much smaller model size. Though promising performance has been achieved, these methods require a large amount of labeled training data, which is particularly difficult to obtain for optical flow. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_20",
"@cite_11"
],
"mid": [
"764651262",
"2953296820",
"2964156315",
"2548527721",
""
],
"abstract": [
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a sub-network specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"FlowNet2 [14], the state-of-the-art convolutional neural network (CNN) for optical flow estimation, requires over 160M parameters to achieve accurate flow estimation. In this paper we present an alternative network that attains performance on par with FlowNet2 on the challenging Sintel final pass and KITTI benchmarks, while being 30 times smaller in the model size and 1.36 times faster in the running speed. This is made possible by drilling down to architectural details that might have been missed in the current frameworks: (1) We present a more effective flow inference approach at each pyramid level through a lightweight cascaded network. It not only improves flow estimation accuracy through early correction, but also permits seamless incorporation of descriptor matching in our network. (2) We present a novel flow regularization layer to ameliorate the issue of outliers and vague flow boundaries by using a feature-driven local convolution. (3) Our network owns an effective structure for pyramidal feature extraction and embraces feature warping rather than image warping as practiced in FlowNet2. Our code and trained models are available at github.com twhui LiteFlowNet.",
"We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96 smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
""
]
} |
1902.09145 | 2952001916 | We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time. | As a result, existing end-to-end deep learning based approaches @cite_14 @cite_35 @cite_6 resort to synthetic datasets for pre-training. Unfortunately, there usually exists a large domain gap between the distribution of synthetic datasets and natural scenes @cite_25 . Existing networks @cite_14 @cite_20 trained only on synthetic data tend to overfit, and often perform poorly when directly evaluated on real sequences. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_6",
"@cite_25",
"@cite_20"
],
"mid": [
"2259424905",
"764651262",
"2894983388",
"",
"2548527721"
],
"abstract": [
"Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"Learning optical flow with neural networks is hampered by the need for obtaining training data with associated ground truth. Unsupervised learning is a promising direction, yet the performance of current unsupervised methods is still limited. In particular, the lack of proper occlusion handling in commonly used data terms constitutes a major source of error. While most optical flow methods process pairs of consecutive frames, more advanced occlusion reasoning can be realized when considering multiple frames. In this paper, we propose a framework for unsupervised learning of optical flow and occlusions over multiple frames. More specifically, we exploit the minimal configuration of three frames to strengthen the photometric loss and explicitly reason about occlusions. We demonstrate that our multi-frame, occlusion-sensitive formulation outperforms existing unsupervised two-frame methods and even produces results on par with some fully supervised methods.",
"",
"We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96 smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small ("
]
} |
1902.09145 | 2952001916 | We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time. | Very recently, @cite_33 @cite_21 propose to first reason about the occlusion map and then exclude those occluded pixels when computing the photometric difference. Most recently, @cite_6 introduce an unsupervised framework to estimate optical flow using a multi-frame formulation with temporal consistency. This method utilizes more data with more advanced occlusion reasoning, and hence achieves more accurate results. However, all these unsupervised learning methods rely on hand-crafted energy terms to guide optical flow estimation, lacking the key capability to learn optical flow of occluded pixels. As a consequence, there is still a large performance gap compared with state-of-the-art supervised methods. | {
"cite_N": [
"@cite_21",
"@cite_33",
"@cite_6"
],
"mid": [
"2962864875",
"2963891416",
"2894983388"
],
"abstract": [
"It has been recently shown that a convolutional neural network can learn optical flow estimation with unsupervised learning. However, the performance of the unsupervised methods still has a relatively large gap compared to its supervised counterpart. Occlusion and large motion are some of the major factors that limit the current unsupervised learning of optical flow methods. In this work we introduce a new method which models occlusion explicitly and a new warping way that facilitates the learning of large motion. Our method shows promising results on Flying Chairs, MPI-Sintel and KITTI benchmark datasets. Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.",
"",
"Learning optical flow with neural networks is hampered by the need for obtaining training data with associated ground truth. Unsupervised learning is a promising direction, yet the performance of current unsupervised methods is still limited. In particular, the lack of proper occlusion handling in commonly used data terms constitutes a major source of error. While most optical flow methods process pairs of consecutive frames, more advanced occlusion reasoning can be realized when considering multiple frames. In this paper, we propose a framework for unsupervised learning of optical flow and occlusions over multiple frames. More specifically, we exploit the minimal configuration of three frames to strengthen the photometric loss and explicitly reason about occlusions. We demonstrate that our multi-frame, occlusion-sensitive formulation outperforms existing unsupervised two-frame methods and even produces results on par with some fully supervised methods."
]
} |
1902.09145 | 2952001916 | We present DDFlow, a data distillation approach to learning optical flow estimation from unlabeled data. The approach distills reliable predictions from a teacher network, and uses these predictions as annotations to guide a student network to learn optical flow. Unlike existing work relying on hand-crafted energy terms to handle occlusion, our approach is data-driven, and learns optical flow for occluded pixels. This enables us to train our model with a much simpler loss function, and achieve a much higher accuracy. We conduct a rigorous evaluation on the challenging Flying Chairs, MPI Sintel, KITTI 2012 and 2015 benchmarks, and show that our approach significantly outperforms all existing unsupervised learning methods, while running at real time. | To bridge this gap, we propose to perform knowledge distillation from unlabeled data, inspired by @cite_3 @cite_29 which performed knowledge distillation from multiple models or labeled data. In contrast to previous knowledge distillation methods, we do not use any human annotations. Our idea is to generate annotations on unlabeled data using a model trained with a classical optical flow energy, and then retrain the model using those extra generated annotations. This yields a simple yet effective method to learn optical flow for occluded pixels in a totally unsupervised manner. | {
"cite_N": [
"@cite_29",
"@cite_3"
],
"mid": [
"2963785012",
"1821462560"
],
"abstract": [
"We investigate omni-supervised learning, a special regime of semi-supervised learning in which the learner exploits all available labeled data plus internet-scale sources of unlabeled data. Omni-supervised learning is lower-bounded by performance on existing labeled datasets, offering the potential to surpass state-of-the-art fully supervised methods. To exploit the omni-supervised setting, we propose data distillation, a method that ensembles predictions from multiple transformations of unlabeled data, using a single model, to automatically generate new training annotations. We argue that visual recognition models have recently become accurate enough that it is now possible to apply classic ideas about self-training to challenging real-world data. Our experimental results show that in the cases of human keypoint detection and general object detection, state-of-the-art models trained with data distillation surpass the performance of using labeled data from the COCO dataset alone.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."
]
} |
1902.09087 | 2949197062 | Short text matching often faces the challenges that there are great word mismatch and expression diversity between the two texts, which would be further aggravated in languages like Chinese where there is no natural space to segment words explicitly. In this paper, we propose a novel lattice based CNN model (LCNs) to utilize multi-granularity information inherent in the word lattice while maintaining strong ability to deal with the introduced noisy information for matching based question answering in Chinese. We conduct extensive experiments on both document based question answering and knowledge based question answering tasks, and experimental results show that the LCNs models can significantly outperform the state-of-the-art matching models and strong baselines by taking advantages of better ability to distill rich but discriminative information from the word lattice input. | Deep learning models have been widely adopted in natural language sentence matching. Representation based models @cite_2 @cite_1 @cite_4 @cite_5 encode and compare matching branches in hidden space. Interaction based models @cite_30 @cite_27 @cite_17 incorporate interaction features between all word pairs and adopt 2D-convolution to extract matching features. Our models are built upon the representation based architecture, which is better for short text matching. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_1",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_17"
],
"mid": [
"2949989304",
"2251143283",
"1591825359",
"2963053846",
"2186845332",
"2609569121",
"2952113915"
],
"abstract": [
"Matching two texts is a fundamental problem in many natural language processing tasks. An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score. Inspired by the success of convolutional neural network in image recognition, where neurons can capture many complicated patterns based on the extracted elementary visual patterns such as oriented edges and corners, we propose to model text matching as the problem of image recognition. Firstly, a matching matrix whose entries represent the similarities between words is constructed and viewed as an image. Then a convolutional neural network is utilized to capture rich matching patterns in a layer-by-layer way. We show that by resembling the compositional hierarchies of patterns in image recognition, our model can successfully identify salient signals such as n-gram and n-term matchings. Experimental results demonstrate its superiority against the baselines.",
"We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.",
"Answer sentence selection is the task of identifying sentences that contain the answer to a given question. This is an important problem in its own right as well as in the larger context of open domain question answering. We propose a novel approach to solving this task via means of distributed representations, and learn to match questions with answers by considering their semantic encoding. This contrasts prior work on this task, which typically relies on classifiers with large numbers of hand-crafted syntactic and semantic features and various external resources. Our approach does not require any feature engineering nor does it involve specialist linguistic data, making this model easily applicable to a wide range of domains and languages. Experimental results on a standard benchmark dataset from TREC demonstrate that---despite its simplicity---our model matches state of the art performance on the answer sentence selection task.",
"Matching natural language sentences is central for many applications such as information retrieval and question answering. Existing deep models rely on a single sentence representation or multiple granularity representations for matching. However, such methods cannot well capture the contextualized local information in the matching process. To tackle this problem, we present a new deep architecture to match two sentences with multiple positional sentence representations. Specifically, each positional sentence representation is a sentence representation at this position, generated by a bidirectional long short term memory (Bi-LSTM). The matching score is finally produced by aggregating interactions between these different positional sentence representations, through k-Max pooling and a multi-layer perceptron. Our model has several advantages: (1) By using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) By matching with multiple positional sentence representations, it is flexible to aggregate different important contextualized local information in a sentence to support the matching; (3) Experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.",
"This paper presents a series of new latent semantic models based on a convolutional neural network (CNN) to learn low-dimensional semantic vectors for search queries and Web documents. By using the convolution-max pooling operation, local contextual information at the word n-gram level is modeled first. Then, salient local fea-tures in a word sequence are combined to form a global feature vector. Finally, the high-level semantic information of the word sequence is extracted to form a global vector representation. The proposed models are trained on clickthrough data by maximizing the conditional likelihood of clicked documents given a query, us-ing stochastic gradient ascent. The new models are evaluated on a Web document ranking task using a large-scale, real-world data set. Results show that our model significantly outperforms other se-mantic models, which were state-of-the-art in retrieval performance prior to this work.",
"Relation detection is a core component for many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning that detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different hierarchies of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to enable one enhance another. Experimental results evidence that our approach achieves not only outstanding relation detection performance, but more importantly, it helps our KBQA system to achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks.",
"Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (word-by-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model under the \"matching-aggregation\" framework. Given two sentences @math and @math , our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions @math and @math . In each matching direction, each time step of one sentence is matched against all time-steps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fix-length matching vector. Finally, based on the matching vector, the decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks."
]
} |
1902.09087 | 2949197062 | Short text matching often faces the challenges that there are great word mismatch and expression diversity between the two texts, which would be further aggravated in languages like Chinese where there is no natural space to segment words explicitly. In this paper, we propose a novel lattice based CNN model (LCNs) to utilize multi-granularity information inherent in the word lattice while maintaining strong ability to deal with the introduced noisy information for matching based question answering in Chinese. We conduct extensive experiments on both document based question answering and knowledge based question answering tasks, and experimental results show that the LCNs models can significantly outperform the state-of-the-art matching models and strong baselines by taking advantages of better ability to distill rich but discriminative information from the word lattice input. | In recent years, many researchers have become interested in utilizing all sorts of external or multi-granularity information in matching tasks. @cite_9 exploit hidden units at different depths to realize interaction between substrings of different lengths. @cite_17 join multiple pooling methods in merging sentence level features, and @cite_19 exploit interactions between text spans of different lengths. Among those more similar to our work, @cite_17 also incorporate characters, which are fed into LSTMs, and concatenate the outcomes with word embeddings, and @cite_5 utilize words together with predicate level tokens in the KBRE task. However, none of them exploit the multi-granularity information in a word lattice in languages like Chinese that do not have spaces to segment words naturally. Furthermore, our model has no conflicts with most of them except @cite_17 and could gain further improvement. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_5",
"@cite_17"
],
"mid": [
"2809365612",
"2250889812",
"2609569121",
"2952113915"
],
"abstract": [
"Short Text Matching plays an important role in many natural language processing tasks such as information retrieval, question answering, and conversational system. Conventional text matching methods rely on predefined templates and rules, which are not applicable to short text with limited numebr of words and limit their ability to generalize to unobserved data. Many recent efforts have been made to apply deep neural network models to natural language processing tasks, which reduces the cost of feature engineering. In this paper, we present the design of Multi-Channel Information Crossing , a multi-channel convolutional neural network model for text matching, with additional attention mechanisms from sentence and text semantics. MIX compares text snippets at varied granularities to form a series of multi-channel similarity matrices, which are crossed with another set of carefully designed attention matrices to expose the rich structures of sentences to deep neural networks. We implemented MIX and deployed the system on Tencent's Venus distributed computation platform. Thanks to carefully engineered multi-channel information crossing, evaluation results suggest that MIX outperforms a wide range of state-of-the-art deep neural network models by at least 11.1 in terms of the normalized discounted cumulative gain (NDCG@3), on the English WikiQA dataset. Moreover, we also performed online A B tests with real users on the search service of Tencent QQ Browser. Results suggest that MIX raised the number of clicks on the returned results by 5.7 , due to an increased accuracy in query-document matching, which demonstrates the superior performance of MIX in production environments.",
"We present MultiGranCNN, a general deep learning architecture for matching text chunks. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one chunk can be directly compared to longer sequences in the other chunk. MultiGranCNN also contains a flexible and modularized match feature component that is easily adaptable to different types of chunk matching. We demonstrate stateof-the-art performance of MultiGranCNN on clause coherence and paraphrase identification tasks.",
"Relation detection is a core component for many NLP applications including Knowledge Base Question Answering (KBQA). In this paper, we propose a hierarchical recurrent neural network enhanced by residual learning that detects KB relations given an input question. Our method uses deep residual bidirectional LSTMs to compare questions and relation names via different hierarchies of abstraction. Additionally, we propose a simple KBQA system that integrates entity linking and our proposed relation detector to enable one enhance another. Experimental results evidence that our approach achieves not only outstanding relation detection performance, but more importantly, it helps our KBQA system to achieve state-of-the-art accuracy for both single-relation (SimpleQuestions) and multi-relation (WebQSP) QA benchmarks.",
"Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (word-by-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model under the \"matching-aggregation\" framework. Given two sentences @math and @math , our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions @math and @math . In each matching direction, each time step of one sentence is matched against all time-steps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fix-length matching vector. Finally, based on the matching vector, the decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks."
]
} |
1902.09087 | 2949197062 | Short text matching often faces the challenges that there are great word mismatch and expression diversity between the two texts, which would be further aggravated in languages like Chinese where there is no natural space to segment words explicitly. In this paper, we propose a novel lattice based CNN model (LCNs) to utilize multi-granularity information inherent in the word lattice while maintaining strong ability to deal with the introduced noisy information for matching based question answering in Chinese. We conduct extensive experiments on both document based question answering and knowledge based question answering tasks, and experimental results show that the LCNs models can significantly outperform the state-of-the-art matching models and strong baselines by taking advantages of better ability to distill rich but discriminative information from the word lattice input. | GCNs @cite_3 @cite_15 and graph-RNNs @cite_10 @cite_13 have extended CNNs and RNNs to model graph information, and DGCs generalize GCNs to directed graphs in the fields of semantic-role labeling @cite_24 , document dating @cite_7 , and SQL query embedding @cite_18 . However, DGCs control the information flowing from neighboring vertices via edge types, while we focus on capturing different contexts for each word in the word lattice via convolutional kernels and pooling. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_10",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_13"
],
"mid": [
"",
"2951539692",
"2963020213",
"1662382123",
"2951545716",
"2468907370",
"2798759657"
],
"abstract": [
"",
"Document date is essential for many important tasks, such as document retrieval, summarization, event detection, etc. While existing approaches for these tasks assume accurate knowledge of the document date, this is not always available, especially for arbitrary documents from the Web. Document Dating is a challenging problem which requires inference over the temporal structure of the document. Prior document dating systems have largely relied on handcrafted features while ignoring such document internal structures. In this paper, we propose NeuralDater, a Graph Convolutional Network (GCN) based document dating approach which jointly exploits syntactic and temporal graph structures of document in a principled way. To the best of our knowledge, this is the first application of deep learning for the problem of document dating. Through extensive experiments on real-world datasets, we find that NeuralDater significantly outperforms state-of-the-art baseline by 19 absolute (45 relative) accuracy points.",
"Past work in relation extraction focuses on binary relations in single sentences. Recent NLP inroads in high-valued domains have kindled strong interest in the more general setting of extracting n-ary relations that span multiple sentences. In this paper, we explore a general relation extraction framework based on graph long short-term memory (graph LSTM), which can be easily extended to cross-sentence n-ary relation extraction. The graph formulation provides a unifying way to explore different LSTM approaches and incorporate various intra-sentential and inter-sentential dependencies, such as sequential, syntactic, and discourse relations. A robust contextual representation is learned for the entities, which serves as input to the relation classifier, making it easy for scaling to arbitrary relation arity n, as well as for multi-task learning with related relations. We evaluated this framework in two important domains in precision medicine and demonstrated its effectiveness with both supervised learning and distant supervision. Cross-sentence extraction produced far more knowledge, and multi-task learning significantly improved extraction accuracy. A thorough analysis comparing various LSTM approaches yielded interesting insight on how linguistic analysis impacts the performance.",
"Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
"Semantic role labeling (SRL) is the task of identifying the predicate-argument structure of a sentence. It is typically regarded as an important step in the standard NLP pipeline. As the semantic representations are closely related to syntactic ones, we exploit syntactic information in our model. We propose a version of graph convolutional networks (GCNs), a recent class of neural networks operating on graphs, suited to model syntactic dependency graphs. GCNs over syntactic dependency trees are used as sentence encoders, producing latent feature representations of words in a sentence. We observe that GCN layers are complementary to LSTM ones: when we stack both GCN and LSTM layers, we obtain a substantial improvement over an already state-of-the-art LSTM SRL model, resulting in the best reported scores on the standard benchmark (CoNLL-2009) both for Chinese and English.",
"In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.",
"The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. The current state-of-the-art method uses a sequence-to-sequence model, leveraging LSTM for encoding a linearized AMR structure. Although being able to model non-local semantic information, a sequence LSTM can lose information from the AMR graph structure, and thus faces challenges with large graphs, which result in long sequences. We introduce a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics. On a standard benchmark, our model shows superior results to existing methods in the literature."
]
} |
1902.09087 | 2949197062 | Short text matching often faces the challenges that there are great word mismatch and expression diversity between the two texts, which would be further aggravated in languages like Chinese where there is no natural space to segment words explicitly. In this paper, we propose a novel lattice based CNN model (LCNs) to utilize multi-granularity information inherent in the word lattice while maintaining strong ability to deal with the introduced noisy information for matching based question answering in Chinese. We conduct extensive experiments on both document based question answering and knowledge based question answering tasks, and experimental results show that the LCNs models can significantly outperform the state-of-the-art matching models and strong baselines by taking advantages of better ability to distill rich but discriminative information from the word lattice input. | Previous works have involved Chinese word lattices in RNNs for Chinese-English translation @cite_14 , Chinese named entity recognition @cite_0 , and Chinese word segmentation @cite_11 . To the best of our knowledge, we are the first to apply CNNs to word lattices, and the first to involve word lattices in matching tasks. We are motivated to utilize the multi-granularity information in word lattices to relieve word mismatch and diverse expressions in Chinese question answering, while they mainly focus on error propagation from segmenters. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_11"
],
"mid": [
"2799436012",
"2527133236",
"2899395607"
],
"abstract": [
"We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.",
"Neural machine translation (NMT) heavily relies on word-level modelling to learn semantic representations of input sentences. However, for languages without natural word delimiters (e.g., Chinese) where input sentences have to be tokenized first, conventional NMT is confronted with two issues: 1) it is difficult to find an optimal tokenization granularity for source sentence modelling, and 2) errors in 1-best tokenizations may propagate to the encoder of NMT. To handle these issues, we propose word-lattice based Recurrent Neural Network (RNN) encoders for NMT, which generalize the standard RNN to word lattice topology. The proposed encoders take as input a word lattice that compactly encodes multiple tokenizations, and learn to generate new hidden states from arbitrarily many inputs and hidden states in preceding time steps. As such, the word-lattice based encoders not only alleviate the negative impact of tokenization errors but also are more expressive and flexible to embed input sentences. Experiment results on Chinese-English translation demonstrate the superiorities of the proposed encoders over the conventional encoder.",
"We investigate a lattice LSTM network for Chinese word segmentation (CWS) to utilize words or subwords. It integrates the character sequence features with all subsequences information matched from a lexicon. The matched subsequences serve as information shortcut tunnels which link their start and end characters directly. Gated units are used to control the contribution of multiple input links. Through formula derivation and comparison, we show that the lattice LSTM is an extension of the standard LSTM with the ability to take multiple inputs. Previous lattice LSTM model takes word embeddings as the lexicon input, we prove that subword encoding can give the comparable performance and has the benefit of not relying on any external segmentor. The contribution of lattice LSTM comes from both lexicon and pretrained embeddings information, we find that the lexicon information contributes more than the pretrained embeddings information through controlled experiments. Our experiments show that the lattice structure with subword encoding gives competitive or better results with previous state-of-the-art methods on four segmentation benchmarks. Detailed analyses are conducted to compare the performance of word encoding and subword encoding in lattice LSTM. We also investigate the performance of lattice LSTM structure under different circumstances and when this model works or fails."
]
} |
1902.08956 | 2952186362 | Data is the new oil for the car industry. Cars generate data about how they are used and who is behind the wheel which gives rise to a novel way of profiling individuals. Several prior works have successfully demonstrated the feasibility of driver re-identification using the in-vehicle network data captured on the vehicles CAN (Controller Area Network) bus. However, all of them used signals (e.g., velocity, brake pedal or accelerator position) that have already been extracted from the CAN log which is itself not a straightforward process. Indeed, car manufacturers intentionally do not reveal the exact signal location within CAN logs. Nevertheless, we show that signals can be efficiently extracted from CAN logs using machine learning techniques. We exploit that signals have several distinguishing statistical features which can be learnt and effectively used to identify them across different vehicles, that is, to quasi reverse-engineer the CAN protocol. We also demonstrate that the extracted signals can be successfully used to re-identify individuals in a dataset of 33 drivers. Therefore, not revealing signal locations in CAN logs per se does not prevent them to be regarded as personal data of drivers. | @cite_4 investigated driver characteristics when following another vehicle, and modeled pedal operation patterns using speech recognition methods. Sensor signals were collected in both a driving simulator and a real vehicle. Using car-following patterns and spectral features of pedal operation signals, the authors achieved an identification rate of 89.6%. @cite_11 discovered that driving maneuvers during turning exhibit personal traits that are promising regarding driver re-identification. Using the same dataset from Audi and its affiliates, @cite_8 showed that four behavioral traits, namely braking, turning, speeding and fuel efficiency, could characterize a driver adequately well. They provided a (mostly theoretical) methodology to reduce the vast CAN dataset along these lines. | {
"cite_N": [
"@cite_8",
"@cite_4",
"@cite_11"
],
"mid": [
"2761751533",
"1972441921",
"2963535483"
],
"abstract": [
"People's driving behavior is influenced by different human and environmental factors, and several attempts to characterize it have been proposed. Nowadays, the standardization of the CAN bus and the increase of the electronic components units in modern cars offer a large availability of sensors data that make possible a more reliable and direct characterization of driving styles. In this work, we propose the concept of \"Driving DNA\" as a way of describing the complexity of driving behavior through a set of individual and easy-to-measure quantities. These quantities are responsible for some aspects of the driver's behavior, just as -- in the metaphor -- genes are responsible for the tracts of an individual. The concept has been tested on a dataset collected from the CAN bus consisting of more than 2000 trips performed by 53 people, in a wide scenario of road types and open traffic conditions. The Driving DNAs have been calculated for each person, and a graphical visualization of their comparison is provided.",
"All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity mapped into a two-dimensional space is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method of a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models driver individual differences and achieves an identification rate of 76.8 for a field test with 276 drivers, resulting in a relative error reduction of 55 over driver models that use raw pedal operation signals without spectral analysis",
"As automotive electronics continue to advance, cars are becoming more and more reliant on sensors to perform everyday driving operations. These sensors are omnipresent and help the car navigate, reduce accidents, and provide comfortable rides. However, they can also be used to learn about the drivers themselves. In this paper, we propose a method to predict, from sensor data collected at a single turn, the identity of a driver out of a given set of individuals. We cast the problem in terms of time series classification, where our dataset contains sensor readings at one turn, repeated several times by multiple drivers. We build a classifier to find unique patterns in each individual's driving style, which are visible in the data even on such a short road segment. To test our approach, we analyze a new dataset collected by AUDI AG and Audi Electronics Venture, where a fleet of test vehicles was equipped with automotive data loggers storing all sensor readings on real roads. We show that turns are particularly well-suited for detecting variations across drivers, especially when compared to straightaways. We then focus on the 12 most frequently made turns in the dataset, which include rural, urban, highway on-ramps, and more, obtaining accurate identification results and learning useful insights about driver behavior in a variety of settings."
]
} |
1902.08951 | 2915532496 | This paper presents a vision based robotic system to handle the picking problem involved in automatic express package dispatching. By utilizing two RealSense RGB-D cameras and one UR10 industrial robot, package dispatching task which is usually done by human can be completed automatically. In order to determine grasp point for overlapped deformable objects, we improved the sampling algorithm proposed by the group in Berkeley to directly generate grasp candidate from depth images. For the purpose of package recognition, the deep network framework YOLO is integrated. We also designed a multi-modal robot hand composed of a two-fingered gripper and a vacuum suction cup to deal with different kinds of packages. All the technologies have been integrated in a work cell which simulates the practical conditions of an express package dispatching scenario. The proposed system is verified by experiments conducted for two typical express items. | Our work is inspired mainly by the algorithms proposed by the group in Berkeley, and we have made the following contributions. Firstly, we improved the grasp sampling algorithm, and it demonstrated better performance than the original one when dealing with the picking problems encountered in express package dispatching. Secondly, we designed a dual-function robot hand consisting of a two-fingered gripper and a vacuum suction cup. Finally, by combining the YOLO object detection method @cite_1 @cite_8 with the open-source Robot Operating System (ROS), we integrated the robot system shown in system , with which a typical express package dispatching demonstration is realized. | {
"cite_N": [
"@cite_1",
"@cite_8"
],
"mid": [
"2963037989",
"2570343428"
],
"abstract": [
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that dont have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time."
]
} |
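The YOLO abstracts above frame detection as regressing bounding boxes with class probabilities; the standard measure of how well a predicted box matches a ground-truth box (used throughout detection metrics such as mAP) is intersection-over-union. A minimal sketch — the `iou` helper and the sample boxes are illustrative, not from either paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two unit-overlap squares: intersection 1, union 4 + 4 - 1 = 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # -> 0.142857...
```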
1902.08985 | 2917640178 | Squamous Cell Carcinoma (SCC) is the most common cancer type of the epithelium and is often detected at a late stage. Besides invasive diagnosis of SCC by means of biopsy and histo-pathologic assessment, Confocal Laser Endomicroscopy (CLE) has emerged as a noninvasive method that was successfully used to diagnose SCC in vivo. For interpretation of CLE images, however, extensive training is required, which limits the method's applicability and use in clinical practice. To aid diagnosis of SCC in a broader scope, automatic detection methods have been proposed. This work compares two methods with regard to their applicability in a transfer learning sense, i.e. training on one tissue type (from one clinical team) and applying the learnt classification system to another entity (different anatomy, different clinical team). Besides a previously proposed, patch-based method based on convolutional neural networks, a novel classification method on image level (based on a pre-trained Inception V.3 network with dedicated preprocessing and interpretation of class activation maps) is proposed and evaluated. The newly presented approach improves recognition performance, yielding accuracies of 91.63 on the first data set (oral cavity) and 92.63 on a joint data set. The generalization from oral cavity to the second data set (vocal folds) leads to similar area-under-the-ROC curve values as a direct training on the vocal folds data set, indicating good generalization. | Another way of approaching this issue was proposed by Murthy et al.: a two-stage cascaded network in which the first stage handles the samples it classifies with high confidence, and the second stage is trained only on the data that the first stage considers difficult @cite_27 . For many image recognition tasks, transfer learning from networks pre-trained on large databases (e.g. ImageNet) has proven to be an effective and well-performing approach.
Izadyyazdanabadi et al. have used transfer learning on CLE images and have shown that different fine-tuned models each outperformed the same model trained from scratch @cite_15 . | {
"cite_N": [
"@cite_27",
"@cite_15"
],
"mid": [
"2594334603",
"2593635672"
],
"abstract": [
"Both traditional and wireless capsule endoscopes can generate tens of thousands of images for each patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process or to have automatic indication for highly suspicious areas during an online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature. However, performance on the challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard Deep neural network based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples which would be handled by the subsequent expert shallow networks. We validate CDDN using two different types of endoscopic imaging, which includes a polyp classification dataset and a tumor classification dataset. From both datasets we show that CDDN can outperform other methods by about 10 . In addition, CDDN can also be applied to other image classification problems.",
"Confocal laser endomicroscopy (CLE), although capable of obtaining images at cellular resolution during surgery of brain tumors in real time, creates as many non-diagnostic as diagnostic images. Non-useful images are often distorted due to relative motion between probe and brain or blood artifacts. Many images, however, simply lack diagnostic features immediately informative to the physician. Examining all the hundreds or thousands of images from a single case to discriminate diagnostic images from nondiagnostic ones can be tedious. Providing a real time diagnostic value assessment of images (fast enough to be used during the surgical acquisition process and accurate enough for the pathologist to rely on) to automatically detect diagnostic frames would streamline the analysis of images and filter useful images for the pathologist surgeon. We sought to automatically classify images as diagnostic or non-diagnostic. AlexNet, a deep-learning architecture, was used in a 4-fold cross validation manner. Our dataset includes 16,795 images (8572 nondiagnostic and 8223 diagnostic) from 74 CLE-aided brain tumor surgery patients. The ground truth for all the images is provided by the pathologist. Average model accuracy on test data was 91 overall (90.79 accuracy, 90.94 sensitivity and 90.87 specificity). To evaluate the model reliability we also performed receiver operating characteristic (ROC) analysis yielding 0.958 average for area under ROC curve (AUC). These results demonstrate that a deeply trained AlexNet network can achieve a model that reliably and quickly recognizes diagnostic CLE images."
]
} |
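The CDDN abstract above describes a cascade in which a previously trained network keeps the samples it classifies with high confidence and defers the challenging ones to a subsequent expert network. The routing logic can be sketched in a few lines; the threshold, the toy stand-in classifiers, and all names here are hypothetical, not the paper's implementation:

```python
def cascade_predict(sample, stage1, stage2, threshold=0.9):
    """Two-stage cascade in the spirit of CDDN: stage1 answers confident
    cases; samples it is unsure about are deferred to the expert stage2."""
    label, confidence = stage1(sample)
    if confidence >= threshold:
        return label
    return stage2(sample)[0]  # expert network handles the hard cases

# Toy stand-ins for trained networks (illustrative only): each returns
# a (label, confidence) pair.
stage1 = lambda x: ("diagnostic", 0.95) if x > 0.5 else ("diagnostic", 0.4)
stage2 = lambda x: ("non-diagnostic", 0.8)

print(cascade_predict(0.9, stage1, stage2))  # confident -> stage1's label
print(cascade_predict(0.1, stage1, stage2))  # deferred  -> stage2's label
```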
1902.08832 | 2952612146 | Automatically evaluating the quality of dialogue responses for unstructured domains is a challenging problem. ADEM (2017) formulated the automatic evaluation of dialogue systems as a learning problem and showed that such a model was able to predict responses which correlate significantly with human judgements, both at utterance and system level. Their system was shown to have beaten word-overlap metrics such as BLEU with large margins. We start with the question of whether an adversary can game the ADEM model. We design a battery of targeted attacks at the neural network based ADEM evaluation system and show that automatic evaluation of dialogue systems still has a long way to go. ADEM can get confused with a variation as simple as reversing the word order in the text! We report experiments on several such adversarial scenarios that draw out counterintuitive scores on the dialogue responses. We take a systematic look at the scoring function proposed by ADEM and connect it to linear system theory to predict the shortcomings evident in the system. We also devise an attack that can fool such a system to rate a response generation system as favorable. Finally, we allude to future research directions of using the adversarial attacks to design a truly automated dialogue evaluation system. | Since our work focuses on a critique of automatic evaluation metrics, we first give a quick review of popular metrics used for automatic evaluation and then review works that, like ours, critique these evaluation metrics. Research on dialogue generation models is guided by the dialogue evaluation metrics, which provide the means for comparison. BLEU and METEOR scores, originally used for machine translation, have been adopted for this task by various works @cite_17 @cite_5 @cite_9 @cite_0 @cite_1 @cite_18 .
BLEU analyses the co-occurrences of n-grams, whereas METEOR creates an explicit alignment using exact matching, followed by WordNet synonyms, stemmed tokens, and paraphrases, in that order. Similarly, the ROUGE metric variants, originally used for automatic summarization, work on overlapping units such as n-grams, word sub-sequences, and word pairs. The ROUGE metrics, being recall-oriented, require a sufficient number of references to produce reliable scores. | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_5",
"@cite_17"
],
"mid": [
"2963206148",
"1518951372",
"2140054881",
"1948566616",
"10957333",
"2964268978"
],
"abstract": [
"Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., I don’t know) regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.",
"We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.",
"We introduce Discriminative BLEU (∆BLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs. Reference strings are scored for quality by human raters on a scale of [−1, +1] to weight multi-reference BLEU. In tasks involving generation of conversational responses, ∆BLEU correlates reasonably with human judgments and outperforms sentence-level and IBM BLEU in terms of both Spearman’s ρ and Kendall’s τ .",
"© 2015 Association for Computational Linguistics. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems..",
"We present a data-driven approach to generating responses to Twitter status posts, based on phrase-based Statistical Machine Translation. We find that mapping conversational stimuli onto responses is more difficult than translating between languages, due to the wider range of possible responses, the larger fraction of unaligned words phrases, and the presence of large phrase pairs whose alignment cannot be further decomposed. After addressing these challenges, we compare approaches based on SMT and Information Retrieval in a human evaluation. We show that SMT outperforms IR on this task, and its output is preferred over actual human responses in 15 of cases. As far as we are aware, this is the first work to investigate the use of phrase-based SMT to directly translate a linguistic stimulus into an appropriate response.",
"As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines."
]
} |
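The related-work text notes that BLEU analyses n-gram co-occurrences. The core of that computation is the clipped (modified) n-gram precision, sketched below; this deliberately omits BLEU's brevity penalty and the geometric mean over n-gram orders, and the sample sentences are made up:

```python
from collections import Counter

def clipped_ngram_precision(candidate, reference, n):
    """Modified n-gram precision as used inside BLEU: each candidate
    n-gram's count is clipped by its count in the reference."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    matches = sum(min(c, ref[g]) for g, c in Counter(cand).items())
    return matches / len(cand)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(clipped_ngram_precision(cand, ref, 1))  # 5 of 6 unigrams match
print(clipped_ngram_precision(cand, ref, 2))  # 3 of 5 bigrams match
```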
1902.09096 | 2950010088 | Recommendation systems and computing advertisements have gradually entered the field of academic research from the field of commercial applications. Click-through rate prediction is one of the core research issues because the prediction accuracy affects the user experience and the revenue of merchants and platforms. Feature engineering is very important to improve click-through rate prediction. Traditional feature engineering heavily relies on people's experience, and it is difficult to construct a feature combination that can describe the complex patterns implied in the data. This paper combines traditional feature combination methods and deep neural networks to automate feature combinations to improve the accuracy of click-through rate prediction. We propose a mechanism named 'Field-aware Neural Factorization Machine' (FNFM). This model can have strong second-order feature interaction learning ability like the Field-aware Factorization Machine; on this basis, a deep neural network is used for higher-order feature combination learning. Experiments show that the model has stronger expression ability than current deep learning feature combination models like DeepFM, DCN and NFM. | The FFM (Field-aware Factorization Machine) @cite_5 model is an improvement of the FM model. FFM introduces the concept of a field, that is, it uses different hidden vectors to represent a feature's interactions with different feature groups. When calculating the weight of the interaction term between a pair of features, the traditional FM model represents it as the inner product of the hidden vectors corresponding to the two features. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2509235963"
],
"abstract": [
"Click-through rate (CTR) prediction plays an important role in computational advertising. Models based on degree-2 polynomial mappings and factorization machines (FMs) are widely used for this task. Recently, a variant of FMs, field-aware factorization machines (FFMs), outperforms existing models in some world-wide CTR-prediction competitions. Based on our experiences in winning two of them, in this paper we establish FFMs as an effective method for classifying large sparse data including those from CTR prediction. First, we propose efficient implementations for training FFMs. Then we comprehensively analyze FFMs and compare this approach with competing models. Experiments show that FFMs are very useful for certain classification problems. Finally, we have released a package of FFMs for public use."
]
} |
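The field-aware interaction described above can be contrasted with plain FM in a few lines: FM scores a feature pair with one latent vector per feature, while FFM picks each feature's vector according to the *other* feature's field. In this sketch the dimensions, the `field_of` assignment, and the random vectors are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_fields, k = 6, 3, 4
field_of = [0, 0, 1, 1, 2, 2]          # which field each feature belongs to

v_fm = rng.normal(size=(n_features, k))             # FM: one vector per feature
v_ffm = rng.normal(size=(n_features, n_fields, k))  # FFM: one per (feature, field)

def second_order(active, model="fm"):
    """Sum of pairwise interaction weights over the active features."""
    total = 0.0
    for a, i in enumerate(active):
        for j in active[a + 1:]:
            if model == "fm":
                total += v_fm[i] @ v_fm[j]
            else:  # FFM: i's vector for j's field paired with j's vector for i's field
                total += v_ffm[i, field_of[j]] @ v_ffm[j, field_of[i]]
    return total

active = [0, 2, 5]                      # one active feature per field
print(second_order(active, "fm"), second_order(active, "ffm"))
```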
1902.09096 | 2950010088 | Recommendation systems and computing advertisements have gradually entered the field of academic research from the field of commercial applications. Click-through rate prediction is one of the core research issues because the prediction accuracy affects the user experience and the revenue of merchants and platforms. Feature engineering is very important to improve click-through rate prediction. Traditional feature engineering heavily relies on people's experience, and it is difficult to construct a feature combination that can describe the complex patterns implied in the data. This paper combines traditional feature combination methods and deep neural networks to automate feature combinations to improve the accuracy of click-through rate prediction. We propose a mechanism named 'Field-aware Neural Factorization Machine' (FNFM). This model can have strong second-order feature interaction learning ability like the Field-aware Factorization Machine; on this basis, a deep neural network is used for higher-order feature combination learning. Experiments show that the model has stronger expression ability than current deep learning feature combination models like DeepFM, DCN and NFM. | DeepFM @cite_9 combines FM and a DNN so that it models low-order feature combinations like FM and high-order feature combinations like a DNN. Unlike WDL @cite_0 , DeepFM can perform end-to-end training without any feature engineering because its wide side and deep side share the same input and embedding vectors. The model structure is as follows: | {
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2951581544",
"2951001079"
],
"abstract": [
"Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.",
"Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its \"wide\" and \"deep\" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data."
]
} |
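The shared-embedding idea behind DeepFM can be sketched with plain NumPy: the linear, FM, and deep parts all read the same embedding table, so no separate feature engineering feeds the wide side. This is an illustrative toy under stated assumptions — the layer sizes, random weights, and the fixed three active features are mine, not the paper's architecture details:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, k = 8, 4
embed = rng.normal(size=(n_features, k))   # single table shared by both parts
w_lin = rng.normal(size=n_features)
W1, b1 = rng.normal(size=(3 * k, 5)), np.zeros(5)
w_out, b_out = rng.normal(size=5), 0.0

def deepfm_logit(active):
    """DeepFM-style logit: linear + FM pairwise terms + a small MLP,
    all reading the same embedding vectors (3 active features assumed)."""
    e = embed[active]                                   # (3, k)
    linear = w_lin[active].sum()
    # FM identity: sum_{i<j} e_i.e_j = 0.5 * (||sum e||^2 - sum ||e_i||^2).
    fm = 0.5 * float(((e.sum(0) ** 2) - (e ** 2).sum(0)).sum())
    hidden = np.maximum(0.0, e.reshape(-1) @ W1 + b1)   # deep part on concat
    deep = hidden @ w_out + b_out
    return linear + fm + deep

print(deepfm_logit([0, 3, 7]))
```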
1902.09096 | 2950010088 | Recommendation systems and computing advertisements have gradually entered the field of academic research from the field of commercial applications. Click-through rate prediction is one of the core research issues because the prediction accuracy affects the user experience and the revenue of merchants and platforms. Feature engineering is very important to improve click-through rate prediction. Traditional feature engineering heavily relies on people's experience, and it is difficult to construct a feature combination that can describe the complex patterns implied in the data. This paper combines traditional feature combination methods and deep neural networks to automate feature combinations to improve the accuracy of click-through rate prediction. We propose a mechanism named 'Field-aware Neural Factorization Machine' (FNFM). This model can have strong second-order feature interaction learning ability like the Field-aware Factorization Machine; on this basis, a deep neural network is used for higher-order feature combination learning. Experiments show that the model has stronger expression ability than current deep learning feature combination models like DeepFM, DCN and NFM. | Deep and Cross Network (DCN) @cite_1 adopts a cross-network structure to explicitly compute cross-combinations between features. The cross network consists of stacked cross layers. This special structure makes the order of the interaction features grow as the number of layers increases: the highest order (relative to the original input) that an @math -layer cross network can capture is @math . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2964182926"
],
"abstract": [
"Feature engineering has been the key to the success of many prediction models. However, the process is nontrivial and often requires manual feature engineering or exhaustive searching. DNNs are able to automatically learn feature interactions; however, they generate all the interactions implicitly, and are not necessarily efficient in learning all types of cross features. In this paper, we propose the Deep & Cross Network (DCN) which keeps the benefits of a DNN model, and beyond that, it introduces a novel cross network that is more efficient in learning certain bounded-degree feature interactions. In particular, DCN explicitly applies feature crossing at each layer, requires no manual feature engineering, and adds negligible extra complexity to the DNN model. Our experimental results have demonstrated its superiority over the state-of-art algorithms on the CTR prediction dataset and dense classification dataset, in terms of both model accuracy and memory usage."
]
} |
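The cross-layer recurrence behind DCN, x_{l+1} = x_0 (x_l^T w_l) + b_l + x_l, can be sketched directly: since x_l^T w_l is a scalar, each layer multiplies x_0 in element-wise, raising the highest polynomial degree in x_0 by one, so an l-layer cross network reaches degree l+1. The dimensions and random weights here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_layers = 4, 3
x0 = rng.normal(size=d)

def cross_network(x0, n_layers):
    """DCN cross layers: x_{l+1} = x0 * (x_l . w_l) + b_l + x_l.
    Each layer raises the highest polynomial degree in x0 by one."""
    x = x0
    for _ in range(n_layers):
        w = rng.normal(size=d)
        b = rng.normal(size=d)
        x = x0 * (x @ w) + b + x   # x @ w is a scalar; x0 enters multiplicatively
    return x

out = cross_network(x0, n_layers)  # contains terms up to degree n_layers + 1
print(out)
```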
1902.09103 | 2917413209 | Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, the self-supervised learning framework that jointly optimizes the relative pose and target image depth has attracted the attention of the community. Previous works rely on the photometric error generated from depths and poses between adjacent frames, which contains large systematic error under realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing the matching loss constrained by epipolar geometry in a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms the state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at this https URL. | Current state-of-the-art visual SLAM approaches can be generally characterized into two categories: indirect and direct formulations. Indirect methods conquer the motion estimation problem by first computing stable, intermediate geometric representations such as keypoints @cite_25 , edgelets @cite_4 and optical flow @cite_6 . Reprojection error is then minimized over these reliable geometric representations, either with sliding-window or global bundle adjustment @cite_17 . This is the most widely used formulation for SLAM systems @cite_7 @cite_30 @cite_25 . | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_25",
"@cite_17"
],
"mid": [
"2121013842",
"1661995841",
"2152671441",
"2427448504",
"2535547924",
""
],
"abstract": [
"Many successful indoor mapping techniques employ frame-to-frame matching of laser scans to produce detailed local maps as well as the closing of large loops. In this paper, we propose a framework for applying the same techniques to visual imagery. We match visual frames with large numbers of point features, using classic bundle adjustment techniques from computational vision, but we keep only relative frame pose information (a skeleton). The skeleton is a reduced nonlinear system that is a faithful approximation of the larger system and can be used to solve large loop closures quickly, as well as forming a backbone for data association and local registration. We illustrate the workings of the system with large outdoor datasets (10 km), showing large-scale loop closure and precise localization in real time.",
"The ability to localise a camera moving in a previously unknown environment is desirable for a wide range of applications. In computer vision this problem is studied as monocular SLAM. Recent years have seen improvements to the usability and scalability of monocular SLAM systems to the point that they may soon find uses outside of laboratory conditions. However, the robustness of these systems to rapid camera motions (we refer to this quality as agility) still lags behind that of tracking systems which use known object models. In this paper we attempt to remedy this. We present two approaches to improving the agility of a keyframe-based SLAM system: Firstly, we add edge features to the map and exploit their resilience to motion blur to improve tracking under fast motion. Secondly, we implement a very simple inter-frame rotation estimator to aid tracking when the camera is rapidly panning --- and demonstrate that this method also enables a trivially simple yet effective relocalisation method. Results show that a SLAM system combining points, edge features and motion initialisation allows highly agile tracking at a moderate increase in processing time.",
"We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the \"pure vision\" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera",
"We present an approach to dense depth estimation from a single monocular camera that is moving through a dynamic scene. The approach produces a dense depth map from two consecutive frames. Moving objects are reconstructed along with the surrounding environment. We provide a novel motion segmentation algorithm that segments the optical flow field into a set of motion models, each with its own epipolar geometry. We then show that the scene can be reconstructed based on these motion models by optimizing a convex program. The optimization jointly reasons about the scales of different objects and assembles the scene in a common coordinate frame, determined up to a global scale. Experimental results demonstrate that the presented approach outperforms prior methods for monocular depth estimation in dynamic scenes.",
"We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.",
""
]
} |
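The quantity that sliding-window or global bundle adjustment minimizes over such keypoints is the reprojection error: project a world point through the camera pose and intrinsics, and compare with where the feature was actually detected. A minimal sketch — the intrinsics and the sample point are made-up values:

```python
import numpy as np

K = np.array([[718.0, 0.0, 607.0],    # pinhole intrinsics (assumed values)
              [0.0, 718.0, 185.0],
              [0.0, 0.0, 1.0]])

def reprojection_error(point_w, R, t, observed_px):
    """Residual minimized by indirect methods / bundle adjustment:
    project a world point into the image and compare with the
    detected keypoint location (in pixels)."""
    p_cam = R @ point_w + t               # world -> camera frame
    u = K @ p_cam
    u = u[:2] / u[2]                      # perspective division
    return np.linalg.norm(u - observed_px)

R, t = np.eye(3), np.zeros(3)
point = np.array([0.0, 0.0, 10.0])        # straight ahead, 10 m away
perfect = (K @ point)[:2] / 10.0          # its exact projection
print(reprojection_error(point, R, t, perfect))          # -> 0.0
print(reprojection_error(point, R, t, perfect + 1.0))    # sqrt(2) px offset
```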
1902.09103 | 2917413209 | Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, the self-supervised learning framework that jointly optimizes the relative pose and target image depth has attracted the attention of the community. Previous works rely on the photometric error generated from depths and poses between adjacent frames, which contains large systematic error under realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing the matching loss constrained by epipolar geometry in a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms the state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at this https URL. | For visual odometry or visual SLAM (vSLAM), direct methods optimize the photometric error, which corresponds to the light values received by the actual sensor. Examples include @cite_15 @cite_40 @cite_34 . Given accurate photometric calibration information (such as gamma correction and lens attenuation), this formulation spares the costly sparse geometric computation and can potentially generate finer-grained geometry such as per-pixel depth. However, it is less robust than indirect formulations in the presence of dynamic objects, reflective surfaces and inaccurate photometric calibration. Note that the self-supervised learning framework derives from the direct method. | {
"cite_N": [
"@cite_40",
"@cite_15",
"@cite_34"
],
"mid": [
"612478963",
"2108134361",
"2474281075"
],
"abstract": [
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on sim(3), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.",
"Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry-represented as inverse depth in a reference frame-and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness."
]
} |
1902.09103 | 2917413209 | Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, the self-supervised learning framework that jointly optimizes the relative pose and target image depth has attracted the attention of the community. Previous works rely on the photometric error generated from depths and poses between adjacent frames, which contains large systematic error under realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing the matching loss constrained by epipolar geometry in a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms the state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at this https URL. | Most of pioneering depth estimation works rely on supervision from depth sensors @cite_42 @cite_20 . @cite_2 propose an iterative supervised approach to jointly estimate optical flow, depth and motion. This iterative process allows the use of stereopsis and gives fairly good results given depth and motion supervision. | {
"cite_N": [
"@cite_42",
"@cite_20",
"@cite_2"
],
"mid": [
"2158211626",
"2171740948",
"2561074213"
],
"abstract": [
"We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training."
]
} |
1902.09103 | 2917413209 | Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, the self-supervised learning framework that jointly optimizes the relative pose and target image depth has attracted the attention of the community. Previous works rely on the photometric error generated from depths and poses between adjacent frames, which contains large systematic error under realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing the matching loss constrained by epipolar geometry in a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms the state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at this https URL. | The above view-synthesis-based methods @cite_46 @cite_9 @cite_26 @cite_12 is based on the assumptions that the modeling scene is static and the camera is carefully calibrated to get rid of photometric distortions such as automatic exposure changes and lens attenuation (vignetting) @cite_21 . This problem becomes serious as most of the previous works train models on KITTI @cite_48 or Cityscapes @cite_19 datasets, in which the camera calibration does not consider non-linear response functions (gamma-correction white-balancing) and vignetting. As the input image size is limited by the GPU memory, the pixel value information is further degraded by down-sampling. | {
"cite_N": [
"@cite_26",
"@cite_48",
"@cite_9",
"@cite_21",
"@cite_19",
"@cite_46",
"@cite_12"
],
"mid": [
"2964314455",
"2115579991",
"2608018946",
"2123315723",
"2340897893",
"2609883120",
"2963906250"
],
"abstract": [
"We propose a novel monocular visual odometry (VO) system called UnDeepVO in this paper. UnDeepVO is able to estimate the 6-DoF pose of a monocular camera and the depth of its view by using deep neural networks. There are two salient features of the proposed UnDeepVO: one is the unsupervised deep learning scheme, and the other is the absolute scale recovery. Specifically, we train UnDeepVO by using stereo image pairs to recover the scale but test it by using consecutive monocular images. Thus, UnDeepVO is a monocular system. The loss function defined for training the networks is based on spatial and temporal dense information. A system overview is shown in Fig. 1. The experiments on KITTI dataset show our UnDeepVO achieves good performance in terms of pose accuracy.",
"We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.",
"We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frame-to-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the re-projection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfM-Net extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.",
"In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models where colors look inconsistent and notable boundaries exist. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data, which shows significant improvement compared to existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics that are more representative of the scene than normal mosaics.",
"Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.",
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the whole scene, and enforce consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low quality uncalibrated video dataset and evaluating on KITTI, ranking among top performing prior methods which are trained on KITTI itself."
]
} |
1902.09103 | 2917413209 | Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, the self-supervised learning framework that jointly optimizes the relative pose and target image depth has attracted the attention of the community. Previous works rely on the photometric error generated from depths and poses between adjacent frames, which contains large systematic error under realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing the matching loss constrained by epipolar geometry in a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms the state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at this https URL. | These learning-based methods optimizing corresponds to the direct methods @cite_40 @cite_34 for SLAM. Indirect methods @cite_7 @cite_25 , on the other hand, decompose the structure and motion estimation problem by first generating an intermediate representation and then computing the desired quantities based on . These intermediate representations like keypoints @cite_27 @cite_41 are typically stable and resilient to occlusions and photometric distortions. In this paper, we advocate to import geometric losses into the self-supervised depth and relative pose estimation problem. | {
"cite_N": [
"@cite_7",
"@cite_41",
"@cite_27",
"@cite_40",
"@cite_34",
"@cite_25"
],
"mid": [
"2152671441",
"2117228865",
"2884088147",
"612478963",
"2474281075",
"2535547924"
],
"abstract": [
"We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the \"pure vision\" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera",
"Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.",
"Learned local descriptors based on Convolutional Neural Networks (CNNs) have achieved significant improvements on patch-based benchmarks, whereas not having demonstrated strong generalization ability on recent benchmarks of image-based 3D reconstruction. In this paper, we mitigate this limitation by proposing a novel local descriptor learning approach that integrates geometry constraints from multi-view reconstructions, which benefits the learning process in terms of data generation, data sampling and loss computation. We refer to the proposed descriptor as GeoDesc, and demonstrate its superior performance on various large-scale benchmarks, and in particular show its great success on challenging reconstruction tasks. Moreover, we provide guidelines towards practical integration of learned descriptors in Structure-from-Motion (SfM) pipelines, showing the good trade-off that GeoDesc delivers to 3D reconstruction tasks between accuracy and efficiency.",
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on sim(3), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry-represented as inverse depth in a reference frame-and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.",
"We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields."
]
} |
1902.09093 | 2950801460 | Question Answering (QA), as a research field, has primarily focused on either knowledge bases (KBs) or free text as a source of knowledge. These two sources have historically shaped the kinds of questions that are asked over these sources, and the methods developed to answer them. In this work, we look towards a practical use-case of QA over user-instructed knowledge that uniquely combines elements of both structured QA over knowledge bases, and unstructured QA over narrative, introducing the task of multi-relational QA over personal narrative. As a first step towards this goal, we make three key contributions: (i) we generate and release TextWorldsQA, a set of five diverse datasets, where each dataset contains dynamic narrative that describes entities and relations in a simulated world, paired with variably compositional questions over that knowledge, (ii) we perform a thorough evaluation and analysis of several state-of-the-art QA models and their variants at this task, and (iii) we release a lightweight Python-based framework we call TextWorlds for easily generating arbitrary additional worlds and narrative, with the goal of allowing the community to create and share a growing collection of diverse worlds as a test-bed for this task. | proposes to use synthetic QA tasks (the bAbI dataset) to better understand the limitations of QA systems. bAbI builds on a simulated physical world similar to interactive fiction @cite_30 with simple objects and relations and includes 20 different reasoning tasks. Various types of end-to-end neural networks @cite_29 @cite_39 @cite_31 have demonstrated promising accuracies on this dataset. However, the performance can hardly translate to real-world QA datasets, as bAbI uses a small vocabulary (150 words) and short sentences with limited language variations (e.g., nesting sentences, coreference). 
A more sophisticated QA dataset with a supporting KB is WikiMovies @cite_44 , which contains 100k questions about movies, each of them is answerable by using either a KB or a Wikipedia article. However, WikiMovies is highly domain-specific, and similar to bAbI , the questions are designed to be in simple forms with little compositionality and hence limit the difficulty level of the tasks. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_39",
"@cite_44",
"@cite_31"
],
"mid": [
"1511456413",
"",
"2175874768",
"2409591106",
"2133585753"
],
"abstract": [
"From the Publisher: Interactive fiction--the best-known form of which is the text game or text adventure--has not received as much critical attention as have such other forms of electronic literature as hypertext fiction and the conversational programs known as chatterbots. Twisty Little Passages (the title refers to a maze in Adventure, the first interactive fiction) is the first book-length consideration of this form, examining it from gaming and literary perspectives. Nick Montfort, an interactive fiction author himself, offers both aficionados and first-time users a way to approach interactive fiction that will lead to a more pleasurable and meaningful experience of it. Twisty Little Passages looks at interactive fiction beginning with its most important literary ancestor, the riddle. Montfort then discusses Adventure and its precursors (including the I Ching and Dungeons and Dragons), and follows this with an examination of mainframe text games developed in response, focusing on the most influential work of that era, Zork. He then considers the introduction of commercial interactive fiction for home computers, particularly that produced by Infocom. Commercial works inspired an independent reaction, and Montfort describes the emergence of independent creators and the development of an online interactive fiction community in the 1990s. Finally, he considers the influence of interactive fiction on other literary and gaming forms. With Twisty Little Passages Nick Montfort places interactive fiction in its computational and literary contexts, opening up this still-developing form to new consideration.",
"",
"Question answering tasks have shown remarkable progress with distributed vector representation. In this paper, we investigate the recently proposed Facebook bAbI tasks which consist of twenty different categories of questions that require complex reasoning. Because the previous work on bAbI are all end-to-end models, errors could come from either an imperfect understanding of semantics or in certain steps of the reasoning. For clearer analysis, we propose two vector space models inspired by Tensor Product Representation (TPR) to perform knowledge encoding and logical reasoning based on common-sense inference. They together achieve near-perfect accuracy on all categories including positional reasoning and path finding that have proved difficult for most of the previous approaches. We hypothesize that the difficulties in these categories are due to the multi-relations in contrast to uni-relational characteristic of other categories. Our exploration sheds light on designing more sophisticated dataset and moving one step toward integrating transparent and interpretable formalism of TPR into existing learning paradigms.",
"Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WikiQA benchmark.",
"We propose Neural Reasoner, a framework for neural network-based reasoning over natural language sentences. Given a question, Neural Reasoner can infer over multiple supporting facts and find an answer to the question in specific forms. Neural Reasoner has 1) a specific interaction-pooling mechanism, allowing it to examine multiple facts, and 2) a deep architecture, allowing it to model the complicated logical relations in reasoning tasks. Assuming no particular structure exists in the question and facts, Neural Reasoner is able to accommodate different types of reasoning and different forms of language expressions. Despite the model complexity, Neural Reasoner can still be trained effectively in an end-to-end manner. Our empirical studies show that Neural Reasoner can outperform existing neural reasoning systems with remarkable margins on two difficult artificial tasks (Positional Reasoning and Path Finding) proposed in [8]. For example, it improves the accuracy on Path Finding(10K) from 33.4 [6] to over 98 ."
]
} |
1902.09093 | 2950801460 | Question Answering (QA), as a research field, has primarily focused on either knowledge bases (KBs) or free text as a source of knowledge. These two sources have historically shaped the kinds of questions that are asked over these sources, and the methods developed to answer them. In this work, we look towards a practical use-case of QA over user-instructed knowledge that uniquely combines elements of both structured QA over knowledge bases, and unstructured QA over narrative, introducing the task of multi-relational QA over personal narrative. As a first step towards this goal, we make three key contributions: (i) we generate and release TextWorldsQA, a set of five diverse datasets, where each dataset contains dynamic narrative that describes entities and relations in a simulated world, paired with variably compositional questions over that knowledge, (ii) we perform a thorough evaluation and analysis of several state-of-the-art QA models and their variants at this task, and (iii) we release a lightweight Python-based framework we call TextWorlds for easily generating arbitrary additional worlds and narrative, with the goal of allowing the community to create and share a growing collection of diverse worlds as a test-bed for this task. | Other large-scale QA datasets include Cloze-style datasets such as CNN/Daily Mail @cite_1 , Children's Book Test @cite_42 , and Who Did What @cite_40 ; datasets with answers being spans in the document, such as SQuAD @cite_8 , NewsQA @cite_24 , and TriviaQA @cite_26 ; and datasets with human generated answers, for instance, MS MARCO @cite_12 and SearchQA @cite_4 . One common drawback of these datasets is the difficulty in assessing a system's capability of integrating information across a document context. recently emphasized this issue and proposed NarrativeQA, a dataset of fictional stories with questions that reflect the complexity of narratives: characters, events, and evolving relations. 
Our dataset contains similar narrative elements, but it is created with a supporting KB and hence it is easier to analyze and interpret results in a controlled setting. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_42",
"@cite_1",
"@cite_24",
"@cite_40",
"@cite_12"
],
"mid": [
"2963339397",
"2609826708",
"2963748441",
"2126209950",
"1544827683",
"2949776890",
"2512077205",
"2951534261"
],
"abstract": [
"",
"We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet's URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.",
"",
"We introduce a new test of how well language models capture meaning in children's books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.",
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at this https URL.",
"We have constructed a new \"Who-did-What\" dataset of over 200,000 fill-in-the-gap (cloze) multiple choice reading comprehension problems constructed from the LDC English Gigaword newswire corpus. The WDW dataset has a variety of novel features. First, in contrast with the CNN and Daily Mail datasets (, 2015) we avoid using article summaries for question formation. Instead, each problem is formed from two independent articles --- an article given as the passage to be read and a separate article on the same events used to form the question. Second, we avoid anonymization --- each choice is a person named entity. Third, the problems have been filtered to remove a fraction that are easily solved by simple baselines, while remaining 84% solvable by humans. We report performance benchmarks of standard systems and propose the WDW dataset as a challenge task for the community.",
"We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models."
]
} |
1902.08688 | 2967936895 | Wings of flying animals can not only generate lift and control torques but also can sense their surroundings. Such dual functions of sensing and actuation coupled in one element are particularly useful for small sized bio-inspired robotic flyers, whose weight, size, and power are under stringent constraint. In this work, we present the first flapping-wing robot using its flapping wings for environmental perception and navigation in tight space, without the need for any visual feedback. As the test platform, we introduce the Purdue Hummingbird, a flapping-wing robot with 17cm wingspan and 12 grams weight, with a pair of 30-40Hz flapping wings driven by only two actuators. By interpreting the wing loading feedback and its variations, the vehicle can detect the presence of environmental changes such as grounds, walls, stairs, obstacles and wind gust. The instantaneous wing loading can be obtained through the measurements and interpretation of the current feedback by the motors that actuate the wings. The effectiveness of the proposed approach is experimentally demonstrated on several challenging flight tasks without vision: terrain following, wall following and going through a narrow corridor. To ensure flight stability, a robust controller was designed for handling unforeseen disturbances during the flight. Sensing and navigating one’s environment through actuator loading is a promising method for mobile robots, and it can serve as an alternative or complementary method to visual perception. | The sophisticated sensory system in nature's flyers is critical to their extraordinary environmental adaptability, from reactive control to high-level navigation @cite_17 @cite_12 . Inspired by nature, in order to sense the surroundings and improve flight performance, numerous efforts have been devoted to onboard sensor design and implementation for FWMAVs. Among them, visual sensors are the most widely used.
At insect-scale, Harvard Robobee's ocelli-inspired optical sensor was able to stabilize the robot to its upright orientation @cite_21 . At bird-scale, an onboard stereo vision system was successfully integrated on Delfly for obstacle avoidance @cite_26 . However, visual sensors typically have strict lighting requirements, and at such a small scale, they show additional limitations due to weight, power and computational load constraints. In this work, we propose using actuator loading for sensing, which can serve as an alternative or complementary method besides vision. | {
"cite_N": [
"@cite_21",
"@cite_26",
"@cite_12",
"@cite_17"
],
"mid": [
"2103332553",
"2055034398",
"2126798433",
"2100151633"
],
"abstract": [
"Scaling a flying robot down to the size of a fly or bee requires advances in manufacturing, sensing and control, and will provide insights into mechanisms used by their biological counterparts. Controlled flight at this scale has previously required external cameras to provide the feedback to regulate the continuous corrective manoeuvres necessary to keep the unstable robot from tumbling. One stabilization mechanism used by flying insects may be to sense the horizon or Sun using the ocelli, a set of three light sensors distinct from the compound eyes. Here, we present an ocelli-inspired visual sensor and use it to stabilize a fly-sized robot. We propose a feedback controller that applies torque in proportion to the angular velocity of the source of light estimated by the ocelli. We demonstrate theoretically and empirically that this is sufficient to stabilize the robot's upright orientation. This constitutes the first known use of onboard sensors at this scale. Dipteran flies use halteres to provide gyroscopic velocity feedback, but it is unknown how other insects such as honeybees stabilize flight without these sensory organs. Our results, using a vehicle of similar size and dynamics to the honeybee, suggest how the ocelli could serve this role.",
"Autonomous flight of Flapping Wing Micro Air Vehicles (FWMAVs) is a major challenge in the field of robotics, due to their light weight and the flapping-induced body motions. In this article, we present the first FWMAV with onboard vision processing for autonomous flight in generic environments. In particular, we introduce the DelFly ‘Explorer’, a 20-gram FWMAV equipped with a 0.98-gram autopilot and a 4.0-gram onboard stereo vision system. We explain the design choices that permit carrying the extended payload, while retaining the DelFly’s hover capabilities. In addition, we introduce a novel stereo vision algorithm, LongSeq, designed specifically to cope with the flapping motion and the desire to attain a computational effort tuned to the frame rate. The onboard stereo vision system is illustrated in the context of an obstacle avoidance task in an environment with sparse obstacles.",
"Bats are the only mammals capable of powered flight, and they perform impressive aerial maneuvers like tight turns, hovering, and perching upside down. The bat wing contains five digits, and its specialized membrane is covered with stiff, microscopically small, domed hairs. We provide here unique empirical evidence that the tactile receptors associated with these hairs are involved in sensorimotor flight control by providing aerodynamic feedback. We found that neurons in bat primary somatosensory cortex respond with directional sensitivity to stimulation of the wing hairs with low-speed airflow. Wing hairs mostly preferred reversed airflow, which occurs under flight conditions when the airflow separates and vortices form. This finding suggests that the hairs act as an array of sensors to monitor flight speed and or airflow conditions that indicate stall. Depilation of different functional regions of the bats’ wing membrane altered the flight behavior in obstacle avoidance tasks by reducing aerial maneuverability, as indicated by decreased turning angles and increased flight speed.",
"The complex morphology of an insect campaniform sensillum is responsible for transforming strains of the integument into a displacement of the campaniform dome and subsequently a deformation of the dendritic membrane. In this paper, the first step in this coupling process was investigated in identified campaniform sensilla on the wing of the blowfly by stimulating the sensilla with chord-wise deflections of the wing blade. Campaniform sensilla neurones were sensitive to both dorsal and ventral deflections of the wing, and thus exhibited no strong directional sensitivity to the chord-wise components of wing deformation. These results are consistent with a simplified mechanical model in which the wing veins act as cylinders that undergo bending and torsion during chord-wise wing deformation. By comparing the responses of campaniform neurones to chord-wise deflections of the wing with those evoked by direct punctate stimulation of the dome, it is possible to estimate the dynamic properties of the coupling process that links wing deformation to dome deformation. In the identified campaniform neurone examined, wing-dome coupling attenuates high frequencies and transforms the chord-wise deflections of the wing into dome deformation similar in degree of excitation to that caused by direct punctate indentions that are two or more orders of magnitude smaller in size."
]
} |
1902.08688 | 2967936895 | Wings of flying animals can not only generate lift and control torques but also can sense their surroundings. Such dual functions of sensing and actuation coupled in one element are particularly useful for small sized bio-inspired robotic flyers, whose weight, size, and power are under stringent constraint. In this work, we present the first flapping-wing robot using its flapping wings for environmental perception and navigation in tight space, without the need for any visual feedback. As the test platform, we introduce the Purdue Hummingbird, a flapping-wing robot with 17cm wingspan and 12 grams weight, with a pair of 30-40Hz flapping wings driven by only two actuators. By interpreting the wing loading feedback and its variations, the vehicle can detect the presence of environmental changes such as grounds, walls, stairs, obstacles and wind gust. The instantaneous wing loading can be obtained through the measurements and interpretation of the current feedback by the motors that actuate the wings. The effectiveness of the proposed approach is experimentally demonstrated on several challenging flight tasks without vision: terrain following, wall following and going through a narrow corridor. To ensure flight stability, a robust controller was designed for handling unforeseen disturbances during the flight. Sensing and navigating one’s environment through actuator loading is a promising method for mobile robots, and it can serve as an alternative or complementary method to visual perception. | Besides visual perception, haptic feedback provides another method for environment sensing. Inspired by cockroach antennae, researchers implemented artificial antennas on ground vehicles for tactile sensing @cite_19 @cite_8 . They demonstrated successful wall detection and following without visual cues.
Similarly, as presented in @cite_27 , a legged robot can use the contact responses of its legs to leverage a Bayesian classifier for terrain identification. The touch sensing strategy can be used on FWMAVs as well. In this paper, we achieve the same function without using any specialized sensors, only measuring wing loading to infer changes in the surroundings. By taking advantage of the flexibility and reciprocating motion of the flapping wing, the safety of the FWMAV can be assured even if the wings collide. In comparison, rigid-winged vehicles usually avoid hitting objects to prevent wing wear and tear; e.g., drones need a cage-like shield to ensure passive safety when traveling through tight spaces with obstacles and turns @cite_29 . | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_29",
"@cite_8"
],
"mid": [
"2171014518",
"194233768",
"1954064829",
""
],
"abstract": [
"Through the use of mechanical, actuated antennae a biologically-inspired robot is capable of autonomous decision-making and navigation when faced with an obstacle that can be climbed over or tunneled under. Vertically-sweeping mechanical antennae and interface microcontrollers have been added to the Whegs ™ II [1] sensor platform that allow it to autonomously sense the presence of, and successfully navigate a horizontal shelf placed in its path. The obstacle is sensed when the antennae make contact with it, and navigation is made possible through articulation of the Whegs ™ II body flexion joint.",
"In this paper, we explore the idea of using inertial and actuator information to accurately identify the environment of an amphibious robot. In particular, in our work with a legged robot we use internal sensors to measure the dynamics and interaction forces experienced by the robot. From these measurements we use simple machine learning methods to probabilistically infer properties of the environment, and therefore identify it. The robot’s gait can then be automatically selected in response to environmental changes. Experimental results show that for several environments (sand, water, snow, ice, etc.), the identification process is over 90 per cent accurate. The requisite data can be collected during a half-leg rotation (about 250 ms), making it one of the fastest and most economical environment identifiers for a dynamic robot. For the littoral setting, a gaitchange experiment is done as a proof-of-concept of a robot automatically adapting its gait to suit the environment.",
"On 7 February 2015 in Dubai, United Arab Emirates (UAE), with applause from an international jury of drone experts, UAE ministers, and international dignitaries, His Highness Mohammed bin Rashid Al Maktoum handed us a US$1 million check and the first-place prize of the UAE Drones for Good Award, the “World Cup of drones.”",
""
]
} |
1902.08647 | 2915929245 | We study the stochastic multi-armed bandits problem in the presence of adversarial corruption. We present a new algorithm for this problem whose regret is nearly optimal, substantially improving upon previous work. Our algorithm is agnostic to the level of adversarial contamination and can tolerate a significant amount of corruption with virtually no degradation in performance. | A somewhat different flavor of robustness in multi-armed bandits has been explored by @cite_1 . Their "Robust UCB" algorithm can tolerate heavy-tailed distributions of rewards; namely, it does not require boundedness or sub-Gaussianity of the rewards, and instead only needs them to have bounded variance (or any higher-order moment). These ideas were later adapted to other bandit settings (e.g., yu2018pure, shao2018almost).
"cite_N": [
"@cite_1"
],
"mid": [
"1984332158"
],
"abstract": [
"The stochastic multiarmed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper, we examine the bandit problem under the weaker assumption that the distributions have moments of order 1 + e, for some e ∈ (0,1]. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds that also show that the best achievable regret deteriorates when e <; 1."
]
} |
1902.08740 | 2916963723 | Process Mining is a famous technique which is frequently applied to Software Development Processes, while being neglected in Human-Computer Interaction (HCI) recommendation applications. Organizations usually train employees to interact with required IT systems. Often, employees, or users in general, develop their own strategies for solving repetitive tasks and processes. However, organizations find it hard to detect whether employees interact efficiently with IT systems or not. Hence, we have developed a method which detects inefficient behavior assuming that at least one optimal HCI strategy is known. This method provides recommendations to gradually adapt users' behavior towards the optimal way of interaction considering satisfaction of users. Based on users' behavior logs tracked by a Java application suitable for multi-application and multi-instance environments, we demonstrate the applicability for a specific task in a common Windows environment utilizing realistic simulated behaviors of users. | Formal methods, such as Process Mining techniques, have long been used for usability analysis and user assistance to achieve effective, efficient and satisfying UIs. However, they have not been applied to provide automatic HCI recommendations. For example, a method called AUGUR was proposed to assist users in navigating and entering data in form applications @cite_8 . However, this method applies only to web-based applications and does not consider the interaction between multiple applications. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1987533592"
],
"abstract": [
"As user interfaces become more and more complex and feature laden, usability tends to decrease. One possibility to counter this effect are intelligent support mechanisms. In this paper, we present AUGUR, a system that provides context-aware interaction support for navigating and entering data in arbitrary form-based web applications. We further report the results of an initial user study we performed to evaluate the usability of such context-aware interaction support. AUGUR combines several novel approaches: (i) it considers various context sources for providing interaction support, and (ii) it contains a context store that mimics the user's short-term memory to keep track of the context information that currently influences the user's interactions. AUGUR thereby combines the advantages of the three main approaches for supporting the user's interactions, i.e. knowledge-based systems, learning agents, and end-user programming."
]
} |
1902.08740 | 2916963723 | Process Mining is a famous technique which is frequently applied to Software Development Processes, while being neglected in Human-Computer Interaction (HCI) recommendation applications. Organizations usually train employees to interact with required IT systems. Often, employees, or users in general, develop their own strategies for solving repetitive tasks and processes. However, organizations find it hard to detect whether employees interact efficiently with IT systems or not. Hence, we have developed a method which detects inefficient behavior assuming that at least one optimal HCI strategy is known. This method provides recommendations to gradually adapt users' behavior towards the optimal way of interaction considering satisfaction of users. Based on users' behavior logs tracked by a Java application suitable for multi-application and multi-instance environments, we demonstrate the applicability for a specific task in a common Windows environment utilizing realistic simulated behaviors of users. | A formal method consisting of a set of different techniques was proposed to facilitate the evaluation of usability @cite_40 . The approach is to model applications using high-level Petri nets (PNs) and to evaluate the application using observed user logs. From the user logs and their replay on the application PN, one can observe task failures as well as usability issues. This approach can detect usability issues, but it cannot be used to train users, nor to evaluate users' behavior. Moreover, this method requires modeling the application in a non-automated way. | {
"cite_N": [
"@cite_40"
],
"mid": [
"2046020695"
],
"abstract": [
"This paper offers a contribution for engineering interaction techniques by proposing a model-based approach for supporting usability evaluation. This approach combines different techniques including formal analysis of models, simulation and, in particular, analysis of log data in a model-based environment. This approach is integrated in a process and is supported by a model-based CASE tool for modeling, simulation and evaluation of interactive systems. A case study illustrates the approach and operation of the tool. The results demonstrate that the log data at model level can be used not only to identify usability problems but also to identify where to operate changes to these models in order to fix usability problems. Finally we show how the analysis of log data allows the designer to easily shape up the interaction technique (as the results of log analysis are presented at the same abstraction level of models). Such as an approach offers an alternative to user testing that are very difficult to configure and to interpret especially when advanced interaction techniques are concerned"
]
} |
1902.08740 | 2916963723 | Process Mining is a famous technique which is frequently applied to Software Development Processes, while being neglected in Human-Computer Interaction (HCI) recommendation applications. Organizations usually train employees to interact with required IT systems. Often, employees, or users in general, develop their own strategies for solving repetitive tasks and processes. However, organizations find it hard to detect whether employees interact efficiently with IT systems or not. Hence, we have developed a method which detects inefficient behavior assuming that at least one optimal HCI strategy is known. This method provides recommendations to gradually adapt users' behavior towards the optimal way of interaction considering satisfaction of users. Based on users' behavior logs tracked by a Java application suitable for multi-application and multi-instance environments, we demonstrate the applicability for a specific task in a common Windows environment utilizing realistic simulated behaviors of users. | Social Network Analysis (SNA) theory has also been applied to HCI @cite_37 . That work shows that an application's features can be interrelated using a social network. A user's behavior, represented as a social network graph, can then unveil the user's interactions and used features. Again, this application does not aim to provide automated recommendations for interacting with the application in an efficient way. It rather evaluates the work of the designer and gives hints as to which functions need more or less attention. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2162892381"
],
"abstract": [
"What methods can we use to help understand why users adopt certain use strategies, and how can we evaluate designs to anticipate and perhaps positively modify how users are likely to behave? This paper proposes taking advantage of social network analysis (SNA) to identify features of interaction. There are plausible reasons why SNA should be relevant to interaction programming and design, but we also show that SNA has promise, identifies and explains interesting use phenomena, and can be used effectively on conventionally-programmed interactive devices. Social network analysis is a very rich field, practically and theoretically, and many further forms of application and analysis beyond the promising examples explored in this paper are possible."
]
} |
1902.08740 | 2916963723 | Process Mining is a famous technique which is frequently applied to Software Development Processes, while being neglected in Human-Computer Interaction (HCI) recommendation applications. Organizations usually train employees to interact with required IT systems. Often, employees, or users in general, develop their own strategies for solving repetitive tasks and processes. However, organizations find it hard to detect whether employees interact efficiently with IT systems or not. Hence, we have developed a method which detects inefficient behavior assuming that at least one optimal HCI strategy is known. This method provides recommendations to gradually adapt users' behavior towards the optimal way of interaction considering satisfaction of users. Based on users' behavior logs tracked by a Java application suitable for multi-application and multi-instance environments, we demonstrate the applicability for a specific task in a common Windows environment utilizing realistic simulated behaviors of users. | Presentation interaction models, which are state transition systems, have been used to describe graphical UIs and to model system manuals @cite_4 . The objective is to align graphical UIs with their corresponding manuals and to unveil inconsistencies between them. Though this work relates to user training by providing correct training materials, it does not consider users' behavior logs and does not provide automated, step-wise training recommendations for users. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2067100278"
],
"abstract": [
"Ensuring that users can successfully interact with software and hardware devices is a critical part of software engineering. There are many approaches taken to ensure successful interaction, e.g. the use of user-centred design, usability studies, training and education etc. In this paper we consider how the users of modal medical devices, such as syringe pumps, are supported (or not) post-training by documentation such as user manuals. Our intention is to show that modelling such documents is a useful component in the software engineering process, allowing us to discover inconsistencies between devices and manuals as well as uncovering potentially undesirable properties of the devices being modelled."
]
} |
1902.08740 | 2916963723 | Process Mining is a famous technique which is frequently applied to Software Development Processes, while being neglected in Human-Computer Interaction (HCI) recommendation applications. Organizations usually train employees to interact with required IT systems. Often, employees, or users in general, develop their own strategies for solving repetitive tasks and processes. However, organizations find it hard to detect whether employees interact efficiently with IT systems or not. Hence, we have developed a method which detects inefficient behavior assuming that at least one optimal HCI strategy is known. This method provides recommendations to gradually adapt users' behavior towards the optimal way of interaction considering satisfaction of users. Based on users' behavior logs tracked by a Java application suitable for multi-application and multi-instance environments, we demonstrate the applicability for a specific task in a common Windows environment utilizing realistic simulated behaviors of users. | A framework for the OpenOffice.org Suite has been developed which enables the logging of HCIs while interacting with OpenOffice @cite_1 . This tool opens up many opportunities to analyze user behavior in OpenOffice applications. However, it is limited to OpenOffice and does not track HCI between applications and instances outside the OpenOffice.org Suite. Unfortunately, no research has been conducted on the user behavior logs observed by OpenOffice in order to recommend user interactions. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2071889901"
],
"abstract": [
"We introduce a research framework which enables to use the OpenOffice.org as a platform for HCI research, particularly for performing user studies or prototyping and evaluating intelligent user interfaces. We make two contributions: (1) we introduce an innovative hybrid logging technique which provides high-level, rich and accurate information about issued user commands, command parameters and used interaction styles. Our logging technique also avoids an unwanted requirement for further complex processing of logged user interface events to infer user commands, which must be performed on most loggers these days. (2) Our logging tool acts as a component object in OpenOffice.org with an easy-to-use Application Program Interface (API) which enables, along with deep OpenOffice.org programmability, to use OpenOffice.org as a research framework for developing and evaluating intelligent user interfaces."
]
} |
1902.08628 | 2914676493 | Community norm violations can impair constructive communication and collaboration online. As a defense mechanism, community moderators often address such transgressions by temporarily blocking the perpetrator. Such actions, however, come with the cost of potentially alienating community members. Given this tradeoff, it is essential to understand to what extent, and in which situations, this common moderation practice is effective in reinforcing community rules. In this work, we introduce a computational framework for studying the future behavior of blocked users on Wikipedia. After their block expires, they can take several distinct paths: they can reform and adhere to the rules, but they can also recidivate, or straight-out abandon the community. We reveal that these trajectories are tied to factors rooted both in the characteristics of the blocked individual and in whether they perceived the block to be fair and justified. Based on these insights, we formulate a series of prediction tasks aiming to determine which of these paths a user is likely to take after being blocked for their first offense, and demonstrate the feasibility of these new tasks. Overall, this work builds towards a more nuanced approach to moderation by highlighting the tradeoffs that are in play. | Antisocial behavior Online moderation largely addresses the problem of antisocial behavior, which occurs in the form of harassment @cite_35 , cyberbullying @cite_46 , and general aggression @cite_16 . Approaches to moderating such content include decentralized, community-driven methods @cite_17 , as well as top-down methods relying on designated community managers or moderators @cite_37 . Prior research in this area ranges from understanding the actors involved in antisocial behavior @cite_47 @cite_5 @cite_27 @cite_6 @cite_55 to analyzing its effects @cite_41 to tools for identifying such behavior @cite_22 @cite_1 , and even forecasting future instances @cite_36 @cite_0 . 
While inspired by this line of work, our present study extends it by focusing on what happens after a user is blocked for violating community rules. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_47",
"@cite_22",
"@cite_41",
"@cite_36",
"@cite_55",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_5",
"@cite_46",
"@cite_16",
"@cite_17"
],
"mid": [
"2588107045",
"2560267416",
"2621008736",
"2540646130",
"2963336390",
"2963955897",
"2788708648",
"",
"2555691318",
"",
"",
"",
"2587856214",
"1973306475",
"2334489881"
],
"abstract": [
"The popularity, availability, and ubiquity of information and communication technologies create new opportunities for online harassment. The present study evaluates factors associated with young adult women's online harassment experiences through a multi-factor measure accounting for the frequency and severity of negative events. Findings from a survey of 659 undergraduate and graduate students highlight the relationship between harassment, well-being, and engagement in strategies to manage one's online identity. We further identify differences in harassment experiences across three popular social media platforms: Facebook, Twitter, and Instagram. We conclude by discussing this study's contribution to feminist theory and describing five potential design interventions derived from our data that may minimize these negative experiences, mitigate the psychological harm they cause, and provide women with more proactive ways to regain agency when using communication technologies.",
"A wide range of behavior may be seen as destructive to online communities. Yet behavior that is ‘bad’ in one community may be celebrated in another. The work of community maintenance is therefore strongly contextual, involving complex choices due to differing norms, community cross-membership, and the need to invoke fairness. The experienced, “lived in” work of moderators; how they enact norms and make choices about social maintenance, remains poorly understood. Our study addresses this gap, using a negotiated order lens. We employed netnographic techniques, analyzing online interviews with moderators of sub-communities in Reddit, and records of critical incidents. We find that moderators are intuitive prosecutors who draw on a variety of logics to accomplish their work. Controlling bad behavior, articulating and enforcing norms is, therefore, a collective accomplishment through which moderators make choices, create a jurisprudence record, and reconcile nested community norms in the maintenance of social order.",
"",
"The damage personal attacks cause to online discourse motivates many platforms to try to curb the phenomenon. However, understanding the prevalence and impact of personal attacks in online platforms at scale remains surprisingly difficult. The contribution of this paper is to develop and illustrate a method that combines crowdsourcing and machine learning to analyze personal attacks at scale. We show an evaluation method for a classifier in terms of the aggregated number of crowd-workers it can approximate. We apply our methodology to English Wikipedia, generating a corpus of over 100k high quality human-labeled comments and 63M machine-labeled ones from a classifier that is as good as the aggregate of 3 crowd-workers, as measured by the area under the ROC curve and Spearman correlation. Using this corpus of machine-labeled scores, our methodology allows us to explore some of the open questions about the nature of online personal attacks. This reveals that the majority of personal attacks on Wikipedia are not the result of a few malicious users, nor primarily the consequence of allowing anonymous contributions from unregistered users.",
"User-generated content online is shaped by many factors, including endogenous elements such as platform affordances and norms, as well as exogenous elements, in particular significant events. These impact what users say, how they say it, and when they say it. In this paper, we focus on quantifying the impact of violent events on various types of hate speech, from offensive and derogatory to intimidation and explicit calls for violence. We anchor this study in a series of attacks involving Arabs and Muslims as perpetrators or victims, occurring in Western countries, that have been covered extensively by news media. These attacks have fueled intense policy debates around immigration in various fora, including online media, which have been marred by racist prejudice and hateful speech. The focus of our research is to model the effect of the attacks on the volume and type of hateful speech on two social media platforms, Twitter and Reddit. Among other findings, we observe that extremist violence tends to lead to an increase in online hate speech, particularly on messages directly advocating violence. Our research has implications for the way in which hate speech online is monitored and suggests ways in which it could be fought.",
"",
"Users organize themselves into communities on web platforms. These communities can interact with one another, often leading to conflicts and toxic interactions. However, little is known about the mechanisms of interactions between communities and how they impact users. Here we study intercommunity interactions across 36,000 communities on Reddit, examining cases where users of one community are mobilized by negative sentiment to comment in another community. We show that such conflicts tend to be initiated by a handful of communities—less than 1% of communities start 74% of conflicts. While conflicts tend to be initiated by highly active community members, they are carried out by significantly less active members. We find that conflicts are marked by formation of echo chambers, where users primarily talk to other users from their own community. In the long-term, conflicts have adverse effects and reduce the overall activity of users in the targeted communities. Our analysis of user interactions also suggests strategies for mitigating the negative impact of conflicts—such as increasing the direct engagement between attackers and defenders. Further, we design classifiers to predict whether conflict will occur by creating an LSTM model which combines graph embeddings, user, community, and text features, and we also use these techniques to predict if a user will participate in a conflict. Altogether, this work presents a data-driven view of community interactions and conflict, and paves the way towards healthier online communities.",
"",
"I conduct an experiment which examines the impact of group norm promotion and social sanctioning on racist online harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur. This paper extends findings from lab experiments to a naturalistic setting using an objective, behavioral outcome measure and a continuous 2-month data collection period. This represents an advance in the study of prejudiced behavior.",
"",
"",
"",
"This exploratory work studies the effects of emerging app features on the cyberbullying practices in high school settings. These include the increasing prevalence of image/video content, perceived ephemerality, anonymity, and hyperlocal communication. Based on qualitative analysis of focus groups and follow-up individual interviews with high school students, these features were found to influence the practice of cyberbullying, as well as creating negative socio-psychological effects. For example, visual data was found to be used in cyberbullying settings as evidence of contentious events, a repeated reminder, and caused a graphic impact on recipients. Similarly, perceived ephemerality of content was found to be associated with \"broken expectations\" with respect to the apps and severe bullying outcomes for those affected. Results shed light on an important technology-mediated social phenomenon of cyberbullying, improve understanding of app use (and abuse) by the teenage user population, and pave the way for future research on countering app-centric cyberbullying.",
"Recent research on uninhibited behavior in computer‐mediated communication (CMC) systems has suggested that flaming is social‐context dependent and not a media characteristic of CMC. This study takes a closer look at the social context in which flaming occurs, which need not necessarily be developed online but, as well, can be the social, religious, and political background and affiliations of the participants. The study analyzed messages posted during 1 week to 4 Usenet social newsgroups that represent different national and cultural groups. The levels of flaming in these groups were found to be higher than any reported in other studies. The findings show that the frequency of flaming differed between the newsgroups, and differed within newsgroups according to the general topic under discussion, confirming that social context and not the medium is the primary determinant of online uninhibited behavior. © 1998 John Wiley & Sons, Inc.",
"This article introduces and discusses bot-based collective blocklists (or blockbots) in Twitter, which have been developed by volunteers to combat harassment in the social networking site. Blockbots support the curation of a shared blocklist of accounts, where subscribers to a blockbot will not receive any notifications or messages from those on the blocklist. Blockbots support counterpublic communities, helping people moderate their own experiences of a site. This article provides an introduction and overview of blockbots and the issues that they raise about networked publics and platform governance, extending an intersecting literature on online harassment, platform governance, and the politics of algorithms. Such projects involve a far more reflective, intentional, transparent, collaborative, and decentralized way of using algorithmic systems to respond to issues like harassment. I argue that blockbots are not just technical solutions but social ones as well, a notable exception to common technologically determinist solutions that often push responsibility for issues like harassment to the individual user. Beyond the case of Twitter, blockbots call our attention to collective, bottom-up modes of computationally assisted moderation that can be deployed by counterpublic groups who want to participate in networked publics where hegemonic and exclusionary practices are increasingly prevalent."
]
} |
1902.08628 | 2914676493 | Community norm violations can impair constructive communication and collaboration online. As a defense mechanism, community moderators often address such transgressions by temporarily blocking the perpetrator. Such actions, however, come with the cost of potentially alienating community members. Given this tradeoff, it is essential to understand to what extent, and in which situations, this common moderation practice is effective in reinforcing community rules. In this work, we introduce a computational framework for studying the future behavior of blocked users on Wikipedia. After their block expires, they can take several distinct paths: they can reform and adhere to the rules, but they can also recidivate, or straight-out abandon the community. We reveal that these trajectories are tied to factors rooted both in the characteristics of the blocked individual and in whether they perceived the block to be fair and justified. Based on these insights, we formulate a series of prediction tasks aiming to determine which of these paths a user is likely to take after being blocked for their first offense, and demonstrate the feasibility of these new tasks. Overall, this work builds towards a more nuanced approach to moderation by highlighting the tradeoffs that are in play. | Effects of moderation Prior work has examined the effects of different kinds of moderation on online platforms. Community-driven moderation can affect overall user participation @cite_8 @cite_42 and comment quality @cite_43 in conversations. Centralized moderation can also have effects on conversation, mainly through the moderator's role as an authority figure @cite_3 . These existing studies of moderation effects have focused on the short term, largely operating at the level of individual conversations. 
By contrast, in this work we intend to study the long-term, user-level effects of moderation: what happens to a user in the days, weeks, and months after a moderator takes action against them? | {
"cite_N": [
"@cite_43",
"@cite_42",
"@cite_3",
"@cite_8"
],
"mid": [
"2962764888",
"",
"2588098783",
"2063248556"
],
"abstract": [
"Social media systems rely on user feedback and rating mechanisms for personalization, ranking, and content filtering. However, when users evaluate content contributed by fellow users (e.g., by liking a post or voting on a comment), these evaluations create complex social feedback effects. This paper investigates how ratings on a piece of content affect its author's future behavior. By studying four large comment-based news communities, we find that negative feedback leads to significant behavioral changes that are detrimental to the community. Not only do authors of negatively-evaluated content contribute more, but also their future posts are of lower quality, and are perceived by the community as such. Moreover, these authors are more likely to subsequently evaluate their fellow users negatively, percolating these effects through the community. In contrast, positive feedback does not carry similar effects, and neither encourages rewarded authors to write more, nor improves the quality of their posts. Interestingly, the authors that receive no feedback are most likely to leave a community. Furthermore, a structural analysis of the voter network reveals that evaluations polarize the community the most when positive and negative votes are equally split.",
"",
"Online communities have the potential to be supportive, cruel, or anywhere in between. The development of positive norms for interaction can help users build bonds, grow, and learn. Using millions of messages sent in Twitch chatrooms, we explore the effectiveness of methods for encouraging and discouraging specific behaviors, including taking advantage of imitation effects through setting positive examples and using moderation tools to discourage antisocial behaviors. Consistent with aspects of imitation theory and deterrence theory, users imitated examples of behavior that they saw, and more so for behaviors from high status users. Proactive moderation tools, such as chat modes which restricted the ability to post certain content, proved effective at discouraging spam behaviors, while reactive bans were able to discourage a wider variety of behaviors. This work considers the intersection of tools, authority, and types of behaviors, offering a new frame through which to consider the development of moderation strategies.",
"Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32 and created accumulating positive herding that increased final ratings by 25 on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future."
]
} |
1902.08628 | 2914676493 | Community norm violations can impair constructive communication and collaboration online. As a defense mechanism, community moderators often address such transgressions by temporarily blocking the perpetrator. Such actions, however, come with the cost of potentially alienating community members. Given this tradeoff, it is essential to understand to what extent, and in which situations, this common moderation practice is effective in reinforcing community rules. In this work, we introduce a computational framework for studying the future behavior of blocked users on Wikipedia. After their block expires, they can take several distinct paths: they can reform and adhere to the rules, but they can also recidivate, or straight-out abandon the community. We reveal that these trajectories are tied to factors rooted both in the characteristics of the blocked individual and in whether they perceived the block to be fair and justified. Based on these insights, we formulate a series of prediction tasks aiming to determine which of these paths a user is likely to take after being blocked for their first offense, and demonstrate the feasibility of these new tasks. Overall, this work builds towards a more nuanced approach to moderation by highlighting the tradeoffs that are in play. | Norms and engagement A major factor governing engagement in online communities is a sense of belonging @cite_54 , which in many communities engenders the emergence of community-specific norms, such as specific patterns of language @cite_51 @cite_28 @cite_48 @cite_14 . Wikipedia is no exception: it relies on group dynamics to promote editing productivity @cite_18 and quality @cite_56 , as well as participation in governance @cite_52 @cite_40 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_28",
"@cite_48",
"@cite_54",
"@cite_52",
"@cite_56",
"@cite_40",
"@cite_51"
],
"mid": [
"2110158471",
"2963492501",
"",
"2962815454",
"2570325132",
"2169495811",
"2890240742",
"2117134415",
"2127411301"
],
"abstract": [
"Peer production systems rely on users to self-select appropriate tasks and \"scratch their personal itch\". However, many such systems require significant maintenance work, which also implies the need for collective action, that is, individuals following goals set by the group and performing good citizenship behaviors. How can this paradox be resolved? Here we examine one potential answer: the influence of social identification with the larger group on contributors' behavior. We examine Wikipedia, a highly successful peer production system, and find a significant and growing influence of group structure, with a prevalent example being the WikiProject. Comparison of editors who join projects with those who do not and comparisons of the joiners' behavior before and after they join a project suggest their identification with the group plays an important role in directing them towards group goals and good citizenship behaviors. Upon joining, Wikipedians are more likely to work on project-related content, to shift their contributions towards coordination rather than production work, and to perform maintenance work such as reverting vandalism. These results suggest that group influence can play an important role in maintaining the health of online communities, even when such communities are putatively self-directed peer production systems.",
"",
"",
"",
"Platforms like Reddit have attracted large and vibrant communities, but the individuals in those communities are free to migrate to other platforms at any time. History has borne this out with the mass migration from Slashdot to Digg. The underlying motivations of individuals who migrate between platforms, and the conditions that favor migration online are not well-understood. We examine Reddit during a period of community unrest affecting millions of users in the summer of 2015, and analyze large-scale changes in user behavior and migration patterns to Reddit-like alternative platforms. Using self-reported statements from user comments, surveys, and a computational analysis of the activity of users with accounts on multiple platforms, we identify the primary motivations driving user migration. While a notable number of Reddit users left for other platforms, we found that an important pull factor that enabled Reddit to retain users was its long tail of niche content. Other platforms may reach critical mass to support popular or “mainstream” topics, but Reddit’s large userbase provides a key advantage in supporting niche topics.",
"This paper presents a model of the behavior of candidates for promotion to administrator status in Wikipedia. It uses a policy capture framework to highlight similarities and differences in the community's stated criteria for promotion decisions to those criteria actually correlated with promotion success. As promotions are determined by the consensus of dozens of voters with conflicting opinions and unwritten expectations, the results highlight the degree to which consensus is truly reached. The model is fast and easily computable on the fly, and thus could be applied as a self-evaluation tool for editors considering becoming administrators, as a dashboard for voters to view a nominee's relevant statistics, or as a tool to automatically search for likely future administrators. Implications for distributed consensus-building in online communities are discussed.",
"Wikipedia has a strong norm of writing in a 'neutral point of view' (NPOV). Articles that violate this norm are tagged, and editors are encouraged to make corrections. But the impact of this tagging system has not been quantitatively measured. Does NPOV tagging help articles to converge to the desired style? Do NPOV corrections encourage editors to adopt this style? We study these questions using a corpus of NPOV-tagged articles and a set of lexicons associated with biased language. An interrupted time series analysis shows that after an article is tagged for NPOV, there is a significant decrease in biased language in the article, as measured by several lexicons. However, for individual editors, NPOV corrections and talk page discussions yield no significant change in the usage of words in most of these lexicons, including Wikipedia's own list of 'words to watch.' This suggests that NPOV tagging and discussion does improve content, but has less success enculturating editors to the site's linguistic norms.",
"Social media sites are often guided by a core group of committed users engaged in various forms of governance. A crucial aspect of this type of governance is deliberation, in which such a group reaches decisions on issues of importance to the site. Despite its crucial — though subtle — role in how a number of prominent social media sites function, there has been relatively little investigation of the deliberative aspects of social media governance. Here we explore this issue, investigating a particular deliberative process that is extensive, public, and recorded: the promotion of Wikipedia admins, which is determined by elections that engage committed members of the Wikipedia community. We find that the group decision-making at the heart of this process exhibits several fundamental forms of relative assessment. First we observe that the chance that a voter will support a candidate is strongly dependent on the relationship between characteristics of the voter and the candidate. Second we investigate how both individual voter decisions and overall election outcomes can be based on models that take into account the sequential, public nature of the voting.",
"Vibrant online communities are in constant flux. As members join and depart, the interactional norms evolve, stimulating further changes to the membership and its social dynamics. Linguistic change --- in the sense of innovation that becomes accepted as the norm --- is essential to this dynamic process: it both facilitates individual expression and fosters the emergence of a collective identity. We propose a framework for tracking linguistic change as it happens and for understanding how specific users react to these evolving norms. By applying this framework to two large online communities we show that users follow a determined two-stage lifecycle with respect to their susceptibility to linguistic change: a linguistically innovative learning phase in which users adopt the language of the community followed by a conservative phase in which users stop changing and the evolving community norms pass them by. Building on this observation, we show how this framework can be used to detect, early in a user's career, how long she will stay active in the community. Thus, this work has practical significance for those who design and maintain online communities. It also yields new theoretical insights into the evolution of linguistic norms and the complex interplay between community-level and individual-level linguistic change."
]
} |
1902.08698 | 2952859539 | We consider approximation algorithms for packing integer programs (PIPs) of the form @math where @math , @math , and @math are nonnegative. We let @math denote the width of @math which is at least @math . Previous work by bansal-sparse obtained an @math -approximation ratio where @math is the maximum number of nonzeroes in any column of @math (in other words the @math -column sparsity of @math ). They raised the question of obtaining approximation ratios based on the @math -column sparsity of @math (denoted by @math ) which can be much smaller than @math . Motivated by recent work on covering integer programs (CIPs) cq,chs-16 we show that simple algorithms based on randomized rounding followed by alteration, similar to those of bansal-sparse (but with a twist), yield approximation ratios for PIPs based on @math . First, following an integrality gap example from bansal-sparse , we observe that the case of @math is as hard as maximum independent set even when @math . In sharp contrast to this negative result, as soon as width is strictly larger than one, we obtain positive results via the natural LP relaxation. For PIPs with width @math where @math , we obtain an @math -approximation. In the large width regime, when @math , we obtain an @math -approximation. We also obtain a @math -approximation when @math . | We note that PIPs are equivalent to the multi-dimensional knapsack problem. When @math we have the classical knapsack problem, which admits a very efficient FPTAS (see @cite_11 ). There is a PTAS for any fixed @math @cite_0 , but unless @math an FPTAS does not exist for @math . | {
"cite_N": [
"@cite_0",
"@cite_11"
],
"mid": [
"2055037276",
"2783992526"
],
"abstract": [
"We describe a polynomial approximation scheme for an m-constraint 0–1 integer programming problem (m fixed) based on the use of the dual simplex algorithm for linear programming. We also analyse the asymptotic properties of a particular random model.",
"We revisit the standard 0-1 knapsack problem. The latest polynomial-time approximation scheme by Rhee (2015) with approximation factor 1+eps has running time near O(n + (1/eps)^{5/2}) (ignoring polylogarithmic factors), and is randomized. We present a simpler algorithm which achieves the same result and is deterministic. With more effort, our ideas can actually lead to an improved time bound near O(n + (1/eps)^{12/5}), and still further improvements for small n."
]
} |
1902.08698 | 2952859539 | We consider approximation algorithms for packing integer programs (PIPs) of the form @math where @math , @math , and @math are nonnegative. We let @math denote the width of @math which is at least @math . Previous work by bansal-sparse obtained an @math -approximation ratio where @math is the maximum number of nonzeroes in any column of @math (in other words the @math -column sparsity of @math ). They raised the question of obtaining approximation ratios based on the @math -column sparsity of @math (denoted by @math ) which can be much smaller than @math . Motivated by recent work on covering integer programs (CIPs) cq,chs-16 we show that simple algorithms based on randomized rounding followed by alteration, similar to those of bansal-sparse (but with a twist), yield approximation ratios for PIPs based on @math . First, following an integrality gap example from bansal-sparse , we observe that the case of @math is as hard as maximum independent set even when @math . In sharp contrast to this negative result, as soon as width is strictly larger than one, we obtain positive results via the natural LP relaxation. For PIPs with width @math where @math , we obtain an @math -approximation. In the large width regime, when @math , we obtain an @math -approximation. We also obtain a @math -approximation when @math . | Approximation algorithms for PIPs in their general form were considered initially by Raghavan and Thompson @cite_8 and refined substantially by Srinivasan @cite_12 . Srinivasan obtained approximation ratios of the form @math when @math had entries from @math , and a ratio of the form @math when @math had entries from @math . Pritchard @cite_2 was the first to obtain a bound for PIPs based solely on the column sparsity parameter @math . He used iterated rounding and his initial bound was improved in @cite_5 to @math . The current state of the art is due to Bansal et al. @cite_17 .
Previously we ignored constant factors when describing the ratio. In fact @cite_17 obtains a ratio of @math by strengthening the basic LP relaxation. | {
"cite_N": [
"@cite_8",
"@cite_2",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"2022191808",
"",
"2062075466",
"1993119087",
"782508147"
],
"abstract": [
"We study the relation between a class of 0–1 integer linear programs and their rational relaxations. We give a randomized algorithm for transforming an optimal solution of a relaxed problem into a provably good solution for the 0–1 problem. Our technique can be a of extended to provide bounds on the disparity between the rational and 0–1 optima for a given problem instance.",
"",
"The main focus of this paper is a pair of new approximation algorithms for certain integer programs. First, for covering integer programs min cx:Ax≥b,0≤x≤d where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A,b,c,d are nonnegative.) For any k≥2 and e>0, if P≠NP this ratio cannot be improved to k−1−e, and under the unique games conjecture this ratio cannot be improved to k−e. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs max cx:Ax≤b,0≤x≤d where A has at most k nonzeroes per column, we give a (2k 2+2)-approximation algorithm. Our approach builds on the iterated LP relaxation framework. In addition, we obtain improved approximations for the second problem when k=2, and for both problems when every A ij is small compared to b i . Finally, we demonstrate a 17 16-inapproximability for covering integer programs with at most two nonzeroes per column.",
"Several important NP-hard combinatorial optimization problems can be posed as packing covering integer programs; the randomized rounding technique of Raghavan and Thompson is a powerful tool with which to approximate them well. We present one elementary unifying property of all these integer linear programs and use the FKG correlation inequality to derive an improved analysis of randomized rounding on them. This yields a pessimistic estimator, thus presenting deterministic polynomial-time algorithms for them with approximation guarantees that are significantly better than those known.",
"We give new approximation algorithms for packing integer programs (PIPs) by employing the method of randomized rounding combined with alterations. Our first result is a simpler approximation algorithm for general PIPs which matches the best known bounds, and which admits an efficient parallel implementation. We also extend these results to a multi-criteria version of PIPs. Our second result is for the class of packing integer programs (PIPs) that are column sparse, i. e., where there is a specified upper bound k on the number of constraints that each variable appears in. We give an (ek+ o(k))-approximation algorithm for k-column sparse PIPs, improving over previously known O(k 2 )-approximation ratios. We also generalize our result to the case of maximizing non-negative monotone submodular functions over k-column sparse packing constraints, and obtain an e 2k"
]
} |
1902.08698 | 2952859539 | We consider approximation algorithms for packing integer programs (PIPs) of the form @math where @math , @math , and @math are nonnegative. We let @math denote the width of @math which is at least @math . Previous work by bansal-sparse obtained an @math -approximation ratio where @math is the maximum number of nonzeroes in any column of @math (in other words the @math -column sparsity of @math ). They raised the question of obtaining approximation ratios based on the @math -column sparsity of @math (denoted by @math ) which can be much smaller than @math . Motivated by recent work on covering integer programs (CIPs) cq,chs-16 we show that simple algorithms based on randomized rounding followed by alteration, similar to those of bansal-sparse (but with a twist), yield approximation ratios for PIPs based on @math . First, following an integrality gap example from bansal-sparse , we observe that the case of @math is as hard as maximum independent set even when @math . In sharp contrast to this negative result, as soon as width is strictly larger than one, we obtain positive results via the natural LP relaxation. For PIPs with width @math where @math , we obtain an @math -approximation. In the large width regime, when @math , we obtain an @math -approximation. We also obtain a @math -approximation when @math . | In terms of hardness of approximation, PIPs generalize MIS and hence one cannot obtain a ratio better than @math unless @math @cite_10 @cite_9 . Building on MIS, @cite_4 shows that PIPs are hard to approximate within a @math factor for any constant width @math . Hardness of MIS in bounded degree graphs @cite_14 and hardness for @math -set-packing @cite_7 imply that PIPs are hard to approximate to within @math and to within @math when @math is a sufficiently large constant. These hardness results are based on @math matrices for which @math and @math coincide. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_10"
],
"mid": [
"2001663593",
"2042276676",
"1966923282",
"2126186592",
"2081254453"
],
"abstract": [
"par>We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degreeror bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . H stad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9] We present a new PCP construction, based on applying parallel repetition to the inner verifier,'' and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B =2. The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a comination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16 .",
"We study the approximability of multidimensional generalizations of three classical packing problems: multiprocessor scheduling, bin packing, and the knapsack problem. Specifically, we study the vector scheduling problem, its dual problem, namely, the vector bin packing problem, and a class of packing integer programs. The vector scheduling problem is to schedule n d-dimensional tasks on m machines such that the maximum load over all dimensions and all machines is minimized. The vector bin packing problem, on the other hand, seeks to minimize the number of bins needed to schedule all n tasks such that the maximum load on any dimension across all bins is bounded by a fixed quantity, say, 1. Such problems naturally arise when scheduling tasks that have multiple resource requirements. Finally, packing integer programs capture a core problem that directly relates to both vector scheduling and vector bin packing, namely, the problem of packing a maximum number of vectors in a single bin of unit height. We obtain a variety of new algorithmic as well as inapproximability results for these three problems.",
"Given a k-uniform hypergraph, the MAXIMUM k-SET PACKING problem is to find the maximum disjoint set of edges. We prove that this problem cannot be efficiently approximated to within a factor of Ω(k ln k) unless P = NP. This improves the previous hardness of approximation factor of k 2O(√lnk) by Trevisan. This result extends to the problem of k-Dimensional-Matching.",
"A randomness extractor is an algorithm which extracts randomness from a low-quality random source, using some additional truly random bits. We construct new extractors which require only log n + O(1) additional random bits for sources with constant entropy rate. We further construct dispersers, which are similar to one-sided extractors, which use an arbitrarily small constant times log n additional random bits for sources with constant entropy rate. Our extractors and dispersers output 1-α fraction of the randomness, for any α>0.We use our dispersers to derandomize results of Hastad [23] and Feige-Kilian [19] and show that for all e>0, approximating MAX CLIQUE and CHROMATIC NUMBER to within n1-e are NP-hard. We also derandomize the results of Khot [29] and show that for some γ > 0, no quasi-polynomial time algorithm approximates MAX CLIQUE or CHROMATIC NUMBER to within n 2(log n)1-γ, unless NP = P.Our constructions rely on recent results in additive number theory and extractors by Bourgain-Katz-Tao [11], Barak-Impagliazzo-Wigderson [5], Barak-Kindler-Shaltiel-Sudakov-Wigderson [6], and Raz [36]. We also simplify and slightly strengthen key theorems in the second and third of these papers, and strengthen a related theorem by Bourgain [10].",
""
]
} |
1902.08698 | 2952859539 | We consider approximation algorithms for packing integer programs (PIPs) of the form @math where @math , @math , and @math are nonnegative. We let @math denote the width of @math which is at least @math . Previous work by bansal-sparse obtained an @math -approximation ratio where @math is the maximum number of nonzeroes in any column of @math (in other words the @math -column sparsity of @math ). They raised the question of obtaining approximation ratios based on the @math -column sparsity of @math (denoted by @math ) which can be much smaller than @math . Motivated by recent work on covering integer programs (CIPs) cq,chs-16 we show that simple algorithms based on randomized rounding followed by alteration, similar to those of bansal-sparse (but with a twist), yield approximation ratios for PIPs based on @math . First, following an integrality gap example from bansal-sparse , we observe that the case of @math is as hard as maximum independent set even when @math . In sharp contrast to this negative result, as soon as width is strictly larger than one, we obtain positive results via the natural LP relaxation. For PIPs with width @math where @math , we obtain an @math -approximation. In the large width regime, when @math , we obtain an @math -approximation. We also obtain a @math -approximation when @math . | There is a large literature on deterministic and randomized rounding algorithms for packing and covering integer programs and connections to several topics and applications including discrepancy theory. @math -sparsity guarantees for covering integer programs were first obtained by Chen, Harris and Srinivasan @cite_13 partly inspired by @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_13"
],
"mid": [
"2078116611",
"2963986064"
],
"abstract": [
"A folklore result uses the Lovasz local lemma to analyze the discrepancy of hypergraphs with bounded degree and edge size. We generalize this result to the context of real matrices with bounded row and column sums.",
"We consider positive covering integer programs, which generalize set cover and which have attracted a long line of research developing (randomized) approximation algorithms. Srinivasan (2006) gave a rounding algorithm based on the FKG inequality for systems which are \"column-sparse.\" This algorithm may return an integer solution in which the variables get assigned large (integral) values; Kolliopoulos & Young (2005) modified this algorithm to limit the solution size, at the cost of a worse approximation ratio. We develop a new rounding scheme based on the Partial Resampling variant of the Lovasz Local Lemma developed by Harris & Srinivasan (2013). This achieves an approximation ratio of 1 + ln([EQUATION]), where amin is the minimum covering constraint and Δ1 is the maximum e1-norm of any column of the covering matrix (whose entries are scaled to lie in [0, 1]); we also show nearly-matching inapproximability and integrality-gap lower bounds. Our approach improves asymptotically, in several different ways, over known results. First, it replaces Δ0, the maximum number of nonzeroes in any column (from the result of Srinivasan) by Δ1 which is always - and can be much - smaller than Δ0; this is the first such result in this context. Second, our algorithm automatically handles multi-criteria programs; we achieve improved approximation ratios compared to the algorithm of Srinivasan, and give, for the first time when the number of objective functions is large, polynomial-time algorithms with good multi-criteria approximations. We also significantly improve upon the upper-bounds of Kolliopoulos & Young when the integer variables are required to be within (1 + e) of some given upper-bounds, and show nearly-matching inapproximability."
]
} |
1902.08231 | 2951731176 | Recent progress in model-free single object tracking (SOT) algorithms has largely inspired applying SOT to multiple object tracking (MOT) to improve robustness as well as to relieve the dependency on an external detector. However, SOT algorithms are generally designed for distinguishing a target from its environment, and hence meet problems when a target is spatially mixed with similar objects, as observed frequently in MOT. To address this issue, in this paper we propose an instance-aware tracker that integrates SOT techniques into MOT by encoding awareness both within and between target models. In particular, we construct each target model by fusing information for distinguishing the target both from the background and from other instances (tracking targets). To preserve the uniqueness of all target models, our instance-aware tracker considers response maps from all target models and assigns spatial locations exclusively to optimize the overall accuracy. Another contribution we make is a dynamic model refreshing strategy learned by a convolutional neural network. This strategy helps to eliminate initialization noise as well as to adapt to variations in target size and appearance. To show the effectiveness of the proposed approach, it is evaluated on the popular MOT15 and MOT16 challenge benchmarks. On both benchmarks, our approach achieves the best overall performance in comparison with published results. | Recent works on MOT primarily focus on the tracking-by-detection principle. Most of these methods can be roughly categorized into two groups. The first group treats MOT as an offline global optimization problem that uses frame observations from both past and future frames to estimate the current status of targets @cite_36 @cite_16 @cite_0 @cite_30 @cite_1 . These methods usually rely on data association techniques such as the Hungarian algorithm @cite_24 @cite_13 , network flow @cite_8 @cite_33 and multiple hypothesis tracking @cite_25 .
Their performance heavily depends on the quality of detections from an external detector. Different from these methods, our approach learns a tracking model for each target to search for and predict its location in the next frame online. Detections in our approach are only used for model uniqueness verification and model refreshing. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_1",
"@cite_0",
"@cite_24",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"",
"",
"1528063097",
"2510977782",
"1988963225",
"2519362791",
"2252355370",
"",
"2122469558",
"2237765446"
],
"abstract": [
"",
"",
"Data association is an essential component of any human tracking system. The majority of current methods, such as bipartite matching, incorporate a limited-temporal-locality of the sequence into the data association problem, which makes them inherently prone to IDswitches and difficulties caused by long-term occlusion, cluttered background, and crowded scenes.We propose an approach to data association which incorporates both motion and appearance in a global manner. Unlike limited-temporal-locality methods which incorporate a few frames into the data association problem, we incorporate the whole temporal span and solve the data association problem for one object at a time, while implicitly incorporating the rest of the objects. In order to achieve this, we utilize Generalized Minimum Clique Graphs to solve the optimization problem of our data association method. Our proposed method yields a better formulated approach to data association which is supported by our superior results. Experiments show the proposed method makes significant improvements in tracking in the diverse sequences of Town Center [1], TUD-crossing [2], TUD-Stadtmitte [2], PETS2009 [3], and a new sequence called Parking Lot compared to the state of the art methods.",
"Object tracking is an ubiquitous problem in computer vision with many applications in human-machine and human-robot interaction, augmented reality, driving assistance, surveillance, etc. Although thoroughly investigated, tracking multiple persons remains a challenging and an open problem. In this paper, an online variational Bayesian model for multiple-person tracking is proposed. This yields a variational expectation-maximization (VEM) algorithm. The computational efficiency of the proposed method is due to closed-form expressions for both the posterior distributions of the latent variables and for the estimation of the model parameters. A stochastic process that handles person birth and person death enables the tracker to handle a varying number of persons over long periods of time. The proposed method is benchmarked using the MOT 2016 dataset.",
"In this paper we extend the minimum-cost network flow approach to multi-target tracking, by incorporating a motion model, allowing the tracker to better cope with long term occlusions and missed detections. In our new method, the tracking problem is solved iteratively: Firstly, an initial tracking solution is found without the help of motion information. Given this initial set of track lets, the motion at each detection is estimated, and used to refine the tracking solution. Finally, special edges are added to the tracking graph, allowing a further revised tracking solution to be found, where distant track lets may be linked based on motion similarity. Our system has been tested on the PETS S2.L1 and Oxford town-center sequences, outperforming the baseline system, and achieving results comparable with the current state of the art.",
"A cooperative detection and model-free tracking algorithm, referred to as CDT, for multiple object tracking is proposed in this work. The proposed CDT algorithm has three components: object detector, forward tracker, and backward tracker. First, the object detector detects targets with high confidence levels only to reduce spurious detection and achieve a high precision rate. Then, each detected target is traced by the forward tracker and then by the backward tracker to restore undetected states. In the tracking processes, the object detector cooperates with the trackers to handle appearing or disappearing targets and to refine inaccurate state estimates. With this detection guidance, the model-free tracking can trace multiple objects reliably and accurately. Experimental results show that the proposed CDT algorithm provides excellent performance on a recent benchmark. Furthermore, an online version of the proposed algorithm also excels in the benchmark.",
"This paper explores a pragmatic approach to multiple object tracking where the main focus is to associate objects efficiently for online and realtime applications. To this end, detection quality is identified as a key factor influencing tracking performance, where changing the detector can improve tracking by up to 18.9 . Despite only using a rudimentary combination of familiar techniques such as the Kalman Filter and Hungarian algorithm for the tracking components, this approach achieves an accuracy comparable to state-of-the-art online trackers. Furthermore, due to the simplicity of our tracking method, the tracker updates at a rate of 260 Hz which is over 20x faster than other state-of-the-art trackers.",
"",
"We present a detection-based three-level hierarchical association approach to robustly track multiple objects in crowded environments from a single camera. At the low level, reliable tracklets (i.e. short tracks for further analysis) are generated by linking detection responses based on conservative affinity constraints. At the middle level, these tracklets are further associated to form longer tracklets based on more complex affinity measures. The association is formulated as a MAP problem and solved by the Hungarian algorithm. At the high level, entries, exits and scene occluders are estimated using the already computed tracklets, which are used to refine the final trajectories. This approach is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results show a great improvement in performance compared to previous methods.",
"This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge."
]
} |
1902.08231 | 2951731176 | Recent progress in model-free single object tracking (SOT) algorithms has largely inspired applying SOT to multiple object tracking (MOT) to improve robustness as well as to relieve the dependency on an external detector. However, SOT algorithms are generally designed for distinguishing a target from its environment, and hence meet problems when a target is spatially mixed with similar objects, as observed frequently in MOT. To address this issue, in this paper we propose an instance-aware tracker that integrates SOT techniques into MOT by encoding awareness both within and between target models. In particular, we construct each target model by fusing information for distinguishing the target both from the background and from other instances (tracking targets). To preserve the uniqueness of all target models, our instance-aware tracker considers response maps from all target models and assigns spatial locations exclusively to optimize the overall accuracy. Another contribution we make is a dynamic model refreshing strategy learned by a convolutional neural network. This strategy helps to eliminate initialization noise as well as to adapt to variations in target size and appearance. To show the effectiveness of the proposed approach, it is evaluated on the popular MOT15 and MOT16 challenge benchmarks. On both benchmarks, our approach achieves the best overall performance in comparison with published results. | The second group needs only observations up to the current frame to estimate target status online @cite_14 @cite_20 @cite_19 @cite_41 @cite_43 @cite_17 @cite_18 . In @cite_19 , MOT is formulated as a Markov decision process with a policy estimated on the labeled training data. @cite_41 extends the work of @cite_19 to use deep CNNs and LSTMs to encode long-term temporal dependencies by fusing cues from motion, interaction, and person re-identification models.
Chu et al. @cite_43 use a dynamic CNN-based framework with a learned spatial-temporal attention map to handle occlusion, where a CNN trained on ImageNet is used for pedestrian feature extraction. @cite_17 gathers target candidates from both a detector and independent SOT trackers and selects the optimal candidates through an ensemble model. Our approach differs from these methods by adding awareness between SOT trackers and by dynamically refreshing the model to eliminate possible noise in model initialization. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_41",
"@cite_19",
"@cite_43",
"@cite_20",
"@cite_17"
],
"mid": [
"2022515186",
"1825108226",
"2579024533",
"2225887246",
"2963481014",
"2293909728",
"2168054893"
],
"abstract": [
"Online multi-object tracking with a single moving camera is a challenging problem as the assumptions of 2D conventional motion models (e.g., first or second order models) in the image coordinate no longer hold because of global camera motion. In this paper, we consider motion context from multiple objects which describes the relative movement between objects and construct a Relative Motion Network (RMN) to factor out the effects of unexpected camera motion for robust tracking. The RMN consists of multiple relative motion models that describe spatial relations between objects, thereby facilitating robust prediction and data association for accurate tracking under arbitrary camera movements. The RMN can be incorporated into various multi-object tracking frameworks and we demonstrate its effectiveness with one tracking framework based on a Bayesian filter. Experiments on benchmark datasets show that online multi-object tracking performance can be better achieved by the proposed method.",
"We introduce an online learning approach to produce discriminative part-based appearance models (DPAMs) for tracking multiple humans in real scenes by incorporating association based and category free tracking methods. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous multi-target tracking approaches which do not explicitly consider occlusions in appearance modeling, we introduce a part based model that explicitly finds unoccluded parts by occlusion reasoning in each frame, so that occluded parts are removed in appearance modeling. Then DPAMs for each tracklet is online learned to distinguish a tracklet with others as well as the background, and is further used in a conservative category free tracking approach to partially overcome the missed detection problem as well as to reduce difficulties in tracklet associations under long gaps. We evaluate our approach on three public data sets, and show significant improvements compared with state-of-art methods.",
"The majority of existing solutions to the Multi-Target Tracking (MTT) problem do not combine cues over a long period of time in a coherent fashion. In this paper, we present an online method that encodes long-term temporal dependencies across multiple cues. One key challenge of tracking methods is to accurately track occluded targets or those which share similar appearance properties with surrounding objects. To address this challenge, we present a structure of Recurrent Neural Networks (RNN) that jointly reasons on multiple cues over a temporal window. Our method allows to correct data association errors and recover observations from occluded states. We demonstrate the robustness of our data-driven approach by tracking multiple targets using their appearance, motion, and even interactions. Our method outperforms previous works on multiple publicly available datasets including the challenging MOT benchmark.",
"Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth death and appearance disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark [24] to verify the effectiveness of our method.",
"In this paper, we propose a CNN-based framework for online MOT. This framework utilizes the merits of single object trackers in adapting appearance models and searching for target in the next frame. Simply applying single object tracker for MOT will encounter the problem in computational efficiency and drifted results caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI-Pooling to obtain individual features for each target. Some online learned target-specific CNN layers are used for adapting the appearance model for each target. In the framework, we introduce spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used for inferring the spatial attention map. The spatial attention map is then applied to weight the features. Besides, the occlusion status can be estimated from the visibility map, which controls the online updating process via weighted loss on training samples with different occlusion statuses in different frames. It can be considered as temporal attention mechanism. The proposed algorithm achieves 34.3 and 46.0 in MOTA on challenging MOT15 and MOT16 benchmark dataset respectively.",
"Multi-person tracking is still a challenging problem due to recurrent occlusion, pose variation and similar appearances between people. Inspired by the success of sparse representations in single object tracking and face recognition, we propose in this paper an online tracking by detection framework based on collaborative sparse representations. We argue that collaborative representations can better differentiate people compared to target-specific models and therefore help to produce a more robust tracking system. We also show that despite the size of the dictionaries involved, these representations can be efficiently computed with large-scale optimization techniques to get a near real-time algorithm. Experiments show that the proposed approach compares well to other recent online tracking systems on various datasets.",
"This paper presents a novel approach for multi-target tracking using an ensemble framework that optimally chooses target tracking results from that of independent trackers and a detector at each time step. The ensemble model is designed to select the best candidate scored by a function integrating detection confidence, appearance affinity, and smoothness constraints imposed using geometry and motion information. Parameters of our association score function are discriminatively trained with a max-margin framework. Optimal selection is achieved through a hierarchical data association step that progressively associates candidates to targets. By introducing a second target classifier and using the ranking score from the pre-trained classifier as the detection confidence measure, we add additional robustness against unreliable detections. The proposed algorithm robustly tracks a large number of moving objects in complex scenes with occlusions. We evaluate our approach on a variety of public datasets and show promising improvements over state-of-the-art methods."
]
} |
1902.08164 | 2917360832 | This paper demonstrates that collision detection-intensive applications such as robotic motion planning may be accelerated by performing collision checks with a machine learning model. We propose Fastron, a learning-based algorithm to model a robot's configuration space to be used as a proxy collision detector in place of standard geometric collision checkers. We demonstrate that leveraging the proxy collision detector results in up to an order of magnitude faster performance in robot simulation and planning than state-of-the-art collision detection libraries. Our results show that Fastron learns a model more than 100 times faster than a competing C-space modeling approach, while also providing theoretical guarantees of learning convergence. Using the OMPL motion planning libraries, we were able to generate initial motion plans across all experiments with varying robot and environment complexities. With Fastron, we can repeatedly perform planning from scratch at a 56 Hz rate, showing its application toward autonomous surgical assistance task in shared environments with human-controlled manipulators. All performance gains were achieved despite using only CPU-based calculations, suggesting further computational gains with a GPU approach that can parallelize tensor algebra. Code is available online. | @cite_25 use incremental support vector machines to represent an accurate collision boundary in C-space for a pair of objects and an active learning strategy to iteratively improve the boundary. This method is suitable for changing environments because moving one body relative to another body's frame is simply represented as translating a point in configuration space. However, a new model is required for each pair of objects, and each model must be trained offline. | {
"cite_N": [
"@cite_25"
],
"mid": [
"1660808159"
],
"abstract": [
"ABSTRACT The configuration space is a fundamental concept that is widely used in algorithmic robotics. Many applications in robotics, computer-aided design, and related areas can be reduced to computational problems in terms of configuration spaces. In this paper, we survey some of our recent work on solving two important challenges related to configuration spaces: how to efficiently compute an approximate representation of high-dimensional configuration spaces; and how to efficiently perform geometric proximity and motion planning queries in high-dimensional configuration spaces. We present new configuration space construction algorithms based on machine learning and geometric approximation techniques. These algorithms perform collision queries on many configuration samples. The collision query results are used to compute an approximate representation for the configuration space, which quickly converges to the exact configuration space. We also present parallel GPU-based algorithms to accelerate the performance of optimization and search computations in configuration spaces. In particular, we design efficient GPU-based parallel k -nearest neighbor and parallel collision detection algorithms and use these algorithms to accelerate motion planning."
]
} |
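The proxy-collision-detector idea in the rows above (train a kernel classifier on labeled configurations, then query it in place of a geometric checker) can be sketched with a toy kernel perceptron. This is a hedged illustration, not the cited algorithms: the disc obstacle, kernel width, and most-misclassified update rule below are all made-up assumptions for a 2-D point robot.

```python
import math
import random

def rbf(a, b, gamma=50.0):
    """Gaussian kernel between two configurations."""
    d = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * d)

def true_collision(q, center=(0.5, 0.5), radius=0.25):
    """Ground-truth geometric check: point robot vs. a disc obstacle (toy setup)."""
    return math.dist(q, center) <= radius

def train_proxy(samples, labels, gamma=50.0, max_iter=2000):
    """Kernel-perceptron training loop: repeatedly nudge the weight of the
    currently most-misclassified sample until all margins are positive."""
    n = len(samples)
    G = [[rbf(samples[i], samples[j], gamma) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    scores = [0.0] * n          # scores[i] = sum_j alpha[j] * G[i][j]
    for _ in range(max_iter):
        worst = min(range(n), key=lambda i: labels[i] * scores[i])
        if labels[worst] * scores[worst] > 0:
            break               # every training sample is classified correctly
        alpha[worst] += labels[worst]
        for i in range(n):      # incremental score update via the Gram matrix
            scores[i] += labels[worst] * G[i][worst]
    return alpha

def proxy_collision(q, samples, alpha, gamma=50.0):
    """Fast proxy collision check: sign of the kernel expansion."""
    return sum(a * rbf(s, q, gamma) for a, s in zip(alpha, samples) if a) > 0

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if true_collision(q) else -1 for q in samples]
alpha = train_proxy(samples, labels)
accuracy = sum(proxy_collision(q, samples, alpha) == (l == 1)
               for q, l in zip(samples, labels)) / len(samples)
```

On this toy problem the learned model classifies nearly all training configurations correctly; a real C-space model would be trained on robot joint configurations and updated as obstacles move.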
1902.08164 | 2917360832 | This paper demonstrates that collision detection-intensive applications such as robotic motion planning may be accelerated by performing collision checks with a machine learning model. We propose Fastron, a learning-based algorithm to model a robot's configuration space to be used as a proxy collision detector in place of standard geometric collision checkers. We demonstrate that leveraging the proxy collision detector results in up to an order of magnitude faster performance in robot simulation and planning than state-of-the-art collision detection libraries. Our results show that Fastron learns a model more than 100 times faster than a competing C-space modeling approach, while also providing theoretical guarantees of learning convergence. Using the OMPL motion planning libraries, we were able to generate initial motion plans across all experiments with varying robot and environment complexities. With Fastron, we can repeatedly perform planning from scratch at a 56 Hz rate, showing its application toward autonomous surgical assistance task in shared environments with human-controlled manipulators. All performance gains were achieved despite using only CPU-based calculations, suggesting further computational gains with a GPU approach that can parallelize tensor algebra. Code is available online. | Neural networks have been applied to perform collision detection for box-shaped objects and have achieved sufficiently low error to calculate collision response in physics simulations @cite_0 . A disadvantage of the neural network approach is that there is typically no formulaic method to determine the optimal set of parameters for neural networks; in this case, thousands of networks had to be trained to find the best-performing one. A significant amount of data was required to train and cross-validate the models. Finally, this method has only been tried on box obstacles, suggesting a new network must be trained for other objects.
| {
"cite_N": [
"@cite_0"
],
"mid": [
"1538281461"
],
"abstract": [
"The objective of the present work has been to develop a collision detection algorithm suitable for real-time applications. It is applicable to box-shaped objects and it is based on the relation between the colliding object positions and the impact point. The most known neural network (multilayer perceptron) trained with the familiar backpropagation learning algorithm has been used for this problem; such algorithm models the collision, then decides the impact point and the direction of the forces. The algorithm results are very good for the case of box-shaped objects. Furthermore, the computational cost is independent from the object positions and the way the surfaces are modeled, so it is also suitable for real-time applications. The model is being used and validated in a real harbor crane simulator developed by the Robotics Institute for Valencia Harbor in Spain."
]
} |
1902.08164 | 2917360832 | This paper demonstrates that collision detection-intensive applications such as robotic motion planning may be accelerated by performing collision checks with a machine learning model. We propose Fastron, a learning-based algorithm to model a robot's configuration space to be used as a proxy collision detector in place of standard geometric collision checkers. We demonstrate that leveraging the proxy collision detector results in up to an order of magnitude faster performance in robot simulation and planning than state-of-the-art collision detection libraries. Our results show that Fastron learns a model more than 100 times faster than a competing C-space modeling approach, while also providing theoretical guarantees of learning convergence. Using the OMPL motion planning libraries, we were able to generate initial motion plans across all experiments with varying robot and environment complexities. With Fastron, we can repeatedly perform planning from scratch at a 56 Hz rate, showing its application toward autonomous surgical assistance task in shared environments with human-controlled manipulators. All performance gains were achieved despite using only CPU-based calculations, suggesting further computational gains with a GPU approach that can parallelize tensor algebra. Code is available online. | A potentially transformative approach to motion planning bypasses collision checking during planning runtime by directly generating waypoints with MPNet, a pair of neural networks that encodes the workspace and generates feasible motion plans @cite_29 . Motion plans may be generated up to 100 times faster with MPNet than with the state-of-the-art BIT* motion planning method. One limitation is the excessive amount of data needed to train MPNet. | {
"cite_N": [
"@cite_29"
],
"mid": [
"2808278079"
],
"abstract": [
"Fast and efficient motion planning algorithms are crucial for many state-of-the-art robotics applications such as self-driving cars. Existing motion planning methods become ineffective as their computational complexity increases exponentially with the dimensionality of the motion planning problem. To address this issue, we present Motion Planning Networks (MPNet), a neural network-based novel planning algorithm. The proposed method encodes the given workspaces directly from a point cloud measurement and generates the end-to-end collision-free paths for the given start and goal configurations. We evaluate MPNet on various 2D and 3D environments including the planning of a 7 DOF Baxter robot manipulator. The results show that MPNet is not only consistently computationally efficient in all environments but also generalizes to completely unseen environments. The results also show that the computation time of MPNet consistently remains less than 1 second in all presented experiments, which is significantly lower than existing state-of-the-art motion planning algorithms."
]
} |
1902.08246 | 2921115195 | Developing knowledge-driven contemporaneous health index (CHI) that can precisely reflect the underlying patient across the course of the condition's progression holds a unique value, like facilitating a range of clinical decision-making opportunities. This is particularly important for monitoring degenerative condition such as Alzheimer's disease (AD), where the condition of the patient will decay over time. Detecting early symptoms and progression sign, and continuous severity evaluation, are all essential for disease management. While a few methods have been developed in the literature, uncertainty quantification of those health index models has been largely neglected. To ensure the continuity of the care, we should be more explicit about the level of confidence in model outputs. Ideally, decision-makers should be provided with recommendations that are robust in the face of substantial uncertainty about future outcomes. In this paper, we aim at filling this gap by developing an uncertainty quantification based contemporaneous longitudinal index, named UQ-CHI, with a particular focus on continuous patient monitoring of degenerative conditions. Our method is to combine convex optimization and Bayesian learning using the maximum entropy learning (MEL) framework, integrating uncertainty on labels as well. Our methodology also provides closed-form solutions in some important decision making tasks, e.g., such as predicting the label of a new sample. Numerical studies demonstrate the effectiveness of the propose UQ-CHI method in prediction accuracy, monitoring efficacy, and unique advantages if uncertainty quantification is enabled practice. | Let @math denote a training set of @math patients. Each measurement @math is the value of the @math th variable for the @math th subject at a given time @math , where @math is the time index.
Our goal is, given a training set, to convert each measurement @math into a health index @math , which requires a mathematical model of @math . For simplicity, a multivariable form of the hypothesis function @math was studied in @cite_13 , i.e., @math , where @math is a vector of weight coefficients that combines the @math variables. The total numbers of positive and negative samples are denoted by @math and @math , respectively, i.e., @math and @math . The formulation of the CHI learning framework is shown below: | {
"cite_N": [
"@cite_13"
],
"mid": [
"2737258108"
],
"abstract": [
"Abstract In this paper, we develop a novel formulation for contemporaneous patient risk monitoring by exploiting the emerging data-rich environment in many healthcare applications, where an abundance of longitudinal data that reflect the degeneration of the health condition can be continuously collected. Our objective, and the developed formulation, is fundamentally different from many existing risk score models for different healthcare applications, which mostly focus on predicting the likelihood of a certain outcome at a pre-specified time. Rather, our formulation translates multivariate longitudinal measurements into a contemporaneous health index (CHI) that captures patient condition changes over the course of progression. Another significant feature of our formulation is that, CHI can be estimated with or without label information, different from other risk score models strictly based on supervised learning. To develop this formulation, we focus on the degenerative disease conditions, for which we could utilize the monotonic progression characteristic (either towards disease or recovery) to learn CHI. Such a domain knowledge leads us to a novel learning formulation, and on top of that, we further generalize this formulation with a capacity to incorporate label information if available. We further develop algorithms to mitigate the challenges associated with the nonsmooth convex optimization problem by first identifying its dual reformulation as a constrained smooth optimization problem, and then, using the block coordinate descent algorithm to iteratively solve the optimization with a derived efficient projection at each iteration. Extensive numerical studies are performed on both synthetic datasets and real-world applications on Alzheimer’s disease and Surgical Site Infection, which demonstrate the utility and efficacy of the proposed method on degenerative conditions that include a wide range of applications."
]
} |
1902.08246 | 2921115195 | Developing knowledge-driven contemporaneous health index (CHI) that can precisely reflect the underlying patient across the course of the condition's progression holds a unique value, like facilitating a range of clinical decision-making opportunities. This is particularly important for monitoring degenerative condition such as Alzheimer's disease (AD), where the condition of the patient will decay over time. Detecting early symptoms and progression sign, and continuous severity evaluation, are all essential for disease management. While a few methods have been developed in the literature, uncertainty quantification of those health index models has been largely neglected. To ensure the continuity of the care, we should be more explicit about the level of confidence in model outputs. Ideally, decision-makers should be provided with recommendations that are robust in the face of substantial uncertainty about future outcomes. In this paper, we aim at filling this gap by developing an uncertainty quantification based contemporaneous longitudinal index, named UQ-CHI, with a particular focus on continuous patient monitoring of degenerative conditions. Our method is to combine convex optimization and Bayesian learning using the maximum entropy learning (MEL) framework, integrating uncertainty on labels as well. Our methodology also provides closed-form solutions in some important decision making tasks, e.g., such as predicting the label of a new sample. Numerical studies demonstrate the effectiveness of the propose UQ-CHI method in prediction accuracy, monitoring efficacy, and unique advantages if uncertainty quantification is enabled practice. | The CHI formulation can be solved by using the block coordinate descent algorithm that is illustrated in @cite_13 . Note, the CHI formulation generalizes many existing models, such as SVM, sparse SVM, LASSO, etc. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2737258108"
],
"abstract": [
"Abstract In this paper, we develop a novel formulation for contemporaneous patient risk monitoring by exploiting the emerging data-rich environment in many healthcare applications, where an abundance of longitudinal data that reflect the degeneration of the health condition can be continuously collected. Our objective, and the developed formulation, is fundamentally different from many existing risk score models for different healthcare applications, which mostly focus on predicting the likelihood of a certain outcome at a pre-specified time. Rather, our formulation translates multivariate longitudinal measurements into a contemporaneous health index (CHI) that captures patient condition changes over the course of progression. Another significant feature of our formulation is that, CHI can be estimated with or without label information, different from other risk score models strictly based on supervised learning. To develop this formulation, we focus on the degenerative disease conditions, for which we could utilize the monotonic progression characteristic (either towards disease or recovery) to learn CHI. Such a domain knowledge leads us to a novel learning formulation, and on top of that, we further generalize this formulation with a capacity to incorporate label information if available. We further develop algorithms to mitigate the challenges associated with the nonsmooth convex optimization problem by first identifying its dual reformulation as a constrained smooth optimization problem, and then, using the block coordinate descent algorithm to iteratively solve the optimization with a derived efficient projection at each iteration. Extensive numerical studies are performed on both synthetic datasets and real-world applications on Alzheimer’s disease and Surgical Site Infection, which demonstrate the utility and efficacy of the proposed method on degenerative conditions that include a wide range of applications."
]
} |
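The CHI rows above describe a linear contemporaneous health index, h(x) = w^T x, whose learned weights make the index monotone along each patient's trajectory for degenerative conditions. A minimal hedged sketch of that scoring step follows; the weight vector and the longitudinal measurements are made-up illustrations, not learned values from the cited work.

```python
def health_index(w, x):
    """Contemporaneous health index: weighted sum of the p variables."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Synthetic trajectory of a degenerative condition: three variables measured
# at four visits, each drifting downward as the condition progresses.
trajectory = [
    [1.0, 0.9, 1.1],
    [0.8, 0.7, 0.9],
    [0.5, 0.6, 0.6],
    [0.2, 0.3, 0.4],
]
w = [0.5, 0.3, 0.2]   # hypothetical learned weight coefficients

indices = [health_index(w, x) for x in trajectory]

# CHI is trained so that, for a degenerative condition, the index is
# monotone along each patient's trajectory.
assert all(a > b for a, b in zip(indices, indices[1:]))
print([round(h, 3) for h in indices])
```

The learning problem itself (choosing w subject to the monotonicity constraints) is a convex optimization solved by block coordinate descent in the cited work; the sketch only shows how a fitted index scores a trajectory.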
1902.08123 | 2917983759 | The massive availability of cameras and personal devices results in a wide variability between imaging conditions, producing large intra-class variations and performance drop if such images are compared for person recognition. However, as biometric solutions are extensively deployed, it will be common to replace acquisition hardware as it is damaged or newer designs appear, or to exchange information between agencies or applications in heterogeneous environments. Furthermore, variations in imaging bands can also occur. For example, faces are typically acquired in the visible (VW) spectrum, while iris images are captured in the near-infrared (NIR) spectrum. However, cross-spectrum comparison may be needed if for example a face from a surveillance camera needs to be compared against a legacy iris database. Here, we propose a multialgorithmic approach to cope with cross-sensor periocular recognition. We integrate different systems using a fusion scheme based on linear logistic regression, in which fused scores tend to be log-likelihood ratios. This allows easy combination by just summing scores of available systems. We evaluate our approach in the context of the 1st Cross-Spectral Iris Periocular Competition, whose aim was to compare person recognition approaches when periocular data from VW and NIR images is matched. The proposed fusion approach achieves reductions in error rates of up to 20-30% in cross-spectral NIR-VW comparison, leading to an EER of 0.22% and a FRR of just 0.62% for FAR=0.01%, representing the best overall approach of the mentioned competition. Experiments are also reported with a database of VW images from two different smartphones, achieving even higher relative improvements in performance. We also discuss our approach from the point of view of template size and computation times, with the most computationally heavy system playing an important role in the results.
| Regarding cross-sensor recognition across different spectra, the work @cite_70 proposes to compare the ocular region cropped from VW face images against NIR iris images, since face images are usually captured in the visible range, while iris images in commercial systems are usually acquired using near-infrared illumination. They employ three different feature descriptors, namely Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Using a self-captured database, they report a cross-spectral performance of EER=23% with the three experts. | {
"cite_N": [
"@cite_70"
],
"mid": [
"2081358760"
],
"abstract": [
"We consider the problem of matching face against iris images using ocular information. In biometrics, face and iris images are typically acquired using sensors operating in visible (VIS) and near-infrared (NIR) spectra, respectively. This presents a challenging problem of matching images corresponding to different biometric modalities, imaging spectra, and spatial resolutions. We propose the usage of ocular traits that are common between face and iris images (viz., iris and ocular region) to perform matching. Iris matching is performed using a commercial software, while ocular regions are matched using three different techniques: Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Experimental results on a database containing 1358 images of 704 subjects indicate that ocular region can provide better performance than iris biometric under a challenging cross-modality matching scenario."
]
} |
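Several of the descriptors named in the rows above and below rely on Local Binary Patterns. A minimal sketch of the basic 3x3 LBP code follows; the toy image and the bit ordering (clockwise from the top-left neighbour) are one common convention, chosen here for illustration.

```python
def lbp_code(img, r, c):
    """8-neighbour LBP: threshold each neighbour against the centre pixel and
    pack the resulting bits, read clockwise from the top-left neighbour."""
    center = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    return sum(1 << i for i, v in enumerate(neighbours) if v >= center)

# Toy 3x3 grayscale patch; a real descriptor histograms these codes over
# the whole periocular region.
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_code(img, 1, 1))
```

In practice the per-pixel codes are accumulated into a histogram per image block, and the concatenated histograms form the feature vector that is compared across images.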
1902.08123 | 2917983759 | The massive availability of cameras and personal devices results in a wide variability between imaging conditions, producing large intra-class variations and performance drop if such images are compared for person recognition. However, as biometric solutions are extensively deployed, it will be common to replace acquisition hardware as it is damaged or newer designs appear, or to exchange information between agencies or applications in heterogeneous environments. Furthermore, variations in imaging bands can also occur. For example, faces are typically acquired in the visible (VW) spectrum, while iris images are captured in the near-infrared (NIR) spectrum. However, cross-spectrum comparison may be needed if for example a face from a surveillance camera needs to be compared against a legacy iris database. Here, we propose a multialgorithmic approach to cope with cross-sensor periocular recognition. We integrate different systems using a fusion scheme based on linear logistic regression, in which fused scores tend to be log-likelihood ratios. This allows easy combination by just summing scores of available systems. We evaluate our approach in the context of the 1st Cross-Spectral Iris Periocular Competition, whose aim was to compare person recognition approaches when periocular data from VW and NIR images is matched. The proposed fusion approach achieves reductions in error rates of up to 20-30% in cross-spectral NIR-VW comparison, leading to an EER of 0.22% and a FRR of just 0.62% for FAR=0.01%, representing the best overall approach of the mentioned competition. Experiments are also reported with a database of VW images from two different smartphones, achieving even higher relative improvements in performance. We also discuss our approach from the point of view of template size and computation times, with the most computationally heavy system playing an important role in the results.
| Latest advancements have resulted in devices with the ability to see through fog, rain, at night, and to operate at long ranges. In the work @cite_19 , the authors carry out experiments with different wavelengths, namely VW, NIR, SWIR (ShortWave Infrared), and MWIR (MiddleWave Infrared). In the mentioned paper, they use images captured at distances of 1.5 m, 50 m, and 105 m. Feature extraction is done with a bank of Gabor filters, with the magnitude and phase responses further encoded with three descriptors: Weber Local Descriptor (WLD) @cite_73 , Local Binary Patterns (LBP) @cite_38 , and Histogram of Oriented Gradients (HOG) @cite_55 . Extensive experiments are done in this work between the different spectra and standoff distances. Recently, the work @cite_6 presented a new multispectral database captured in eight bands across the VW and NIR spectrum (530 to 1000 nm). 52 subjects were acquired using a custom-built sensor which captures ocular images simultaneously in the eight bands. The descriptors evaluated were Histogram of Oriented Gradients (HOG), perceptual descriptors (GIST), Log-Gabor filters (LG), and Binarized Statistical Image Features (BSIF). The cross-band accuracy varied greatly depending on the reference and probe bands, ranging from 8.46%. | {
"cite_N": [
"@cite_38",
"@cite_55",
"@cite_6",
"@cite_19",
"@cite_73"
],
"mid": [
"2163352848",
"2161969291",
"2860651061",
"2172986903",
"2130258210"
],
"abstract": [
"Presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed \"uniform,\" are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the \"uniform\" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"Recent development of sensors has allowed to explore the possibility of biometric authentication beyond visible spectrum. Particularly, multi-spectral imaging has shown a great potential in biometrics to work robustly under unknown varying illumination conditions for face recognition. While face biometrics in traditional settings has also indicated the applicability of ocular regions for improving the recognition performance, there are not many works that have explored recent imaging techniques. In this paper, we present a study that explores the possibility of recognizing ocular biometric features using multi-spectral imaging. While exploring the possibility of recognizing the periocular region in different spectral bands, this work also presents the performance variation of periocular region for cross-spectral recognition. We have captured a new ocular image database in eight narrow spectral bands across Visible (VIS) and Near-Infra-Red (NIR) spectrum (530nm to 1000nm) using our custom built sensor. The database consists of images from 52 subjects with a sample size of 4160 spectral band images captured in two different sessions. The extensive set of experimental evaluation obtained on the state-of-the-art methods indicate highest recognition rate of 96.92% at Rank-1, demonstrating the potential of multi-spectral imaging for robust periocular recognition.",
"A new compound operator is proposed for heterogeneous periocular recognition. The new operator outperforms three basic operators and three state-of-the-art compound operators. NIR, SWIR, MWIR and LWIR spectra at different standoff distances are considered. A metric is introduced to measure image quality and explain its impact on performance. The new operator does not require training and can be applied to various datasets. Cross-spectral matching of active and passive infrared (IR) periocular images to a visible light periocular image gallery is a challenging research problem. This scenario is motivated by a number of surveillance applications such as recognition of subjects at night or in harsh environmental conditions. This problem becomes even more challenging with a varying standoff distance. To address this problem a new compound operator named GWLH that fuses three local descriptors - Histogram of Gradients (HOG), Local Binary Patterns (LBP) and Weber Local Descriptors (WLD) - applied to the outputs of Gabor filters is proposed. The local operators encode both magnitude and phase information. When applied to periocular regions, GWLH outperforms other compound operators that recently appeared in the literature. During performance evaluation LBP, Gabor filters, HOG, and a fusion of HOG and LBP establish a baseline for the performance comparison, while other compound operators such as Gabor followed by HOG and LBP as well as Gabor followed by WLD, LBP and GLBP present the state-of-the-art. The active IR band is presented by short-wave infrared (SWIR) and near-infrared (NIR) and passive IR is presented by mid-wave infrared (MWIR) and long-wave infrared (LWIR). In addition to varying spectrum, we also vary the standoff distance of SWIR and NIR probes. In all but one case of the combination of spectrum and range, GWLH outperforms all the other operators. 
A sharpness metric is introduced to measure the quality of heterogeneous periocular images and to emphasize the need in development of image enhancement approaches for heterogeneous periocular biometrics. Based on the statistics of the sharpness metric, the performance difference between compound and single operators is increasing proportionally with increasing sharpness metric values.",
"Inspired by Weber's Law, this paper proposes a simple, yet very powerful and robust local descriptor, called the Weber Local Descriptor (WLD). It is based on the fact that human perception of a pattern depends not only on the change of a stimulus (such as sound, lighting) but also on the original intensity of the stimulus. Specifically, WLD consists of two components: differential excitation and orientation. The differential excitation component is a function of the ratio between two terms: One is the relative intensity differences of a current pixel against its neighbors, the other is the intensity of the current pixel. The orientation component is the gradient orientation of the current pixel. For a given image, we use the two components to construct a concatenated WLD histogram. Experimental results on the Brodatz and KTH-TIPS2-a texture databases show that WLD impressively outperforms the other widely used descriptors (e.g., Gabor and SIFT). In addition, experimental results on human face detection also show a promising performance comparable to the best known results on the MIT+CMU frontal face test set, the AR face data set, and the CMU profile test set."
]
} |
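The hyperspectral fusion row below discusses separable blur kernels. A minimal hedged sketch: a 2-D Gaussian kernel is the outer product of two 1-D Gaussian vectors, and it satisfies the nonnegative, sum-one conventions stated for blur kernels. The kernel size and sigma here are arbitrary illustrative choices.

```python
import math

def gauss1d(size=5, sigma=1.0):
    """1-D Gaussian weights, normalized to sum to one."""
    c = (size - 1) / 2
    v = [math.exp(-((i - c) ** 2) / (2 * sigma ** 2)) for i in range(size)]
    s = sum(v)
    return [x / s for x in v]

v = gauss1d()
# Outer product of the two 1-D vectors -> separable 2-D blur kernel.
K = [[a * b for b in v] for a in v]

# The resulting kernel is nonnegative and sums to one, as required of a
# blur kernel, and is separable by construction.
total = sum(sum(row) for row in K)
print(round(total, 6))
```

Separability matters computationally: convolving with the 2-D kernel can be replaced by two cheaper 1-D convolutions, which is the structure STEREO assumes and dTV/HySure do not require.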
1902.08224 | 2928310731 | Fusing a low-resolution hyperspectral image (HSI) and a high-resolution multispectral image (MSI) of the same scene leads to a super-resolution image (SRI), which is information rich spatially and spectrally. In this paper, we super-resolve the HSI using the graph Laplacian defined on the MSI. Unlike many existing works, we don't assume prior knowledge about the spatial degradation from SRI to HSI, nor a perfectly aligned HSI and MSI pair. Our algorithm progressively alternates between finding the blur kernel and fusing HSI with MSI, generating accurate estimations of the blur kernel and the SRI at convergence. Experiments on various datasets demonstrate the advantages of the proposed algorithm in the quality of fusion and its capability in dealing with unknown spatial degradation. | To our knowledge, dTV @cite_3 , STEREO @cite_23 and HySure @cite_6 can deal with some unknown spatial degradation. In the literature, spatial degradation from SRI to HSI is usually modeled by the convolution of every band of the SRI using a blur kernel (small matrix, nonnegative, sum one), followed by downsampling. We assume the SRI is always spatially aligned with the MSI. If the blur kernel is not restricted to be centered at origin, it can compensate for some translation error between the HSI and the SRI (and therefore also the MSI). The blur kernel is called separable if it can be decomposed into the outer product of two vectors. Gaussian kernels, centered at origin or not, are separable. STEREO assumes the blur is separable, which may not be the case in practice. dTV and HySure do not make such separability assumptions, thus they can handle broader types of blurs. | {
"cite_N": [
"@cite_23",
"@cite_6",
"@cite_3"
],
"mid": [
"2798016471",
"2021046129",
""
],
"abstract": [
"Hyperspectral super-resolution refers to the problem of fusing a hyperspectral image (HSI) and a multispectral image (MSI) to produce a super-resolution image (SRI) that admits fine spatial and spectral resolutions. State-of-the-art methods approach the problem via low-rank matrix approximations to the matricized HSI and MSI. These methods are effective to some extent, but a number of challenges remain. First, HSIs and MSIs are naturally third-order tensors (data “cubes”) and thus matricization is prone to a loss of structural information, which could degrade performance. Second, it is unclear whether these low-rank matrix-based fusion strategies can guarantee the identifiability of the SRI under realistic assumptions. However, identifiability plays a pivotal role in estimation problems and usually has a significant impact on practical performance. Third, a majority of the existing methods assume known (or easily estimated) degradation operators from the SRI to the corresponding HSI and MSI, which is hardly the case in practice. In this paper, we propose to tackle the super-resolution problem from a tensor perspective. Specifically, we utilize the multidimensional structure of the HSI and MSI to propose a coupled tensor factorization framework that can effectively overcome the aforementioned issues. The proposed approach guarantees the identifiability of the SRI under mild and realistic conditions. Furthermore, it works with little knowledge about the degradation operators, which is clearly a favorable feature in practice. Semi-real scenarios are simulated to showcase the effectiveness of the proposed approach.",
"Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolutions. The problem of inferring images that combine the high spectral and high spatial resolutions of HSIs and MSIs, respectively, is a data fusion problem that has been the focus of recent active research due to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector total variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the nonquadratic and nonsmooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally “live” in a low-dimensional subspace and by tailoring the split augmented Lagrangian shrinkage algorithm (SALSA), which is an instance of the alternating direction method of multipliers (ADMM), to this optimization problem, by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state of the art, as illustrated in a series of experiments with simulated and real-life data.",
""
]
} |
1902.08224 | 2928310731 | Fusing a low-resolution hyperspectral image (HSI) and a high-resolution multispectral image (MSI) of the same scene leads to a super-resolution image (SRI), which is information rich spatially and spectrally. In this paper, we super-resolve the HSI using the graph Laplacian defined on the MSI. Unlike many existing works, we don't assume prior knowledge about the spatial degradation from SRI to HSI, nor a perfectly aligned HSI and MSI pair. Our algorithm progressively alternates between finding the blur kernel and fusing HSI with MSI, generating accurate estimations of the blur kernel and the SRI at convergence. Experiments on various datasets demonstrate the advantages of the proposed algorithm in the quality of fusion and its capability in dealing with unknown spatial degradation. | The spectral degradation from the SRI to the MSI can be modeled by a weighted summation of the hyperspectral bands according to the spectral responses of the multispectral sensor. STEREO assumes the availability of such information. HySure provides a way to estimate the spectral response. However, it still needs the spectral coverage information of the multispectral bands. So does dTV, which uses the directional total variation @cite_10 defined on each of the multispectral bands to super-resolve the bands of the HSI within the spectral coverage. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2281493160"
],
"abstract": [
"Magnetic resonance imaging (MRI) is a versatile imaging technique that allows different contrasts depending on the acquisition parameters. Many clinical imaging studies acquire MRI data for more than one of these contrasts---such as, for instance, @math and @math weighted images---which makes the overall scanning procedure very time consuming. As all of these images show the same underlying anatomy, one can try to omit unnecessary measurements by taking the similarity into account during reconstruction. We will discuss two modifications of total variation---based on (i) location and (ii) direction---that take structural a priori knowledge into account and reduce to total variation in the degenerate case when no structural knowledge is available. We solve the resulting convex minimization problem with the alternating direction method of multipliers which separates the forward operator from the prior. For both priors the corresponding proximal operator can be implemented as an extension of the fast gradient..."
]
} |
1902.07891 | 2916562010 | Effective and real-time eyeblink detection has wide-ranging applications, such as deception detection, driver fatigue detection, face anti-spoofing, etc. Although numerous efforts have already been made, most of them focus on addressing the eyeblink detection problem under constrained indoor conditions with a relatively consistent subject and environment setup. Nevertheless, towards practical applications, eyeblink detection in the wild is more demanded and of greater challenge. However, to our knowledge this has not been well studied before. In this paper, we shed light on this research topic. A labelled eyeblink-in-the-wild dataset (i.e., HUST-LEBW) of 673 eyeblink video samples (i.e., 381 positives and 292 negatives) is first established by us. These samples are captured from unconstrained movies, with dramatic variation in human attributes, human pose, illumination conditions, imaging configuration, etc. Then, we formulate the eyeblink detection task as a spatial-temporal pattern recognition problem. After locating and tracking the human eye using the SeetaFace engine and the KCF tracker respectively, a modified LSTM model able to capture multi-scale temporal information is proposed to execute eyeblink verification. A feature extraction approach that reveals appearance and motion characteristics simultaneously is also proposed. The experiments on HUST-LEBW reveal the superiority and efficiency of our approach. They also verify that the existing eyeblink detection methods cannot achieve satisfactory performance in the wild. | Besides the pattern recognition model, another essential issue for eyeblink verification is feature extraction. Generally speaking, appearance features (e.g., EAR @cite_19 , LBP @cite_40 , Haar @cite_34 , or HOG @cite_37 ) or motion features (e.g., KLT tracker motion @cite_19 or the pixel-wise frame difference between two consecutive frames @cite_30 ) are extracted to this end.
Nevertheless, few approaches take appearance and motion information into consideration simultaneously. To address this, we propose to use uniform LBP as the appearance feature and its difference between two consecutive frames as the motion feature to jointly characterize eyeblinks. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_19",
"@cite_40",
"@cite_34"
],
"mid": [
"1979845503",
"2161969291",
"",
"2111119856",
"2134637340"
],
"abstract": [
"A vision-based human--computer interface is presented in the paper. The interface detects voluntary eye-blinks and interprets them as control commands. The employed image processing methods include Haar-like features for automatic face detection, and template matching based eye tracking and eye-blink detection. Interface performance was tested by 49 users (of which 12 were with physical disabilities). Test results indicate interface usefulness in offering an alternative mean of communication with computers. The users entered English and Polish text (with average time of less than 12s per character) and were able to browse the Internet. The interface is based on a notebook equipped with a typical web camera and requires no extra light sources. The interface application is available on-line as open-source software.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"",
"This paper proposes a robust and efficient eye state detection method based on an improved algorithm called LBP+SVM mode. LBP (local binary pattern) methodology is first used to select the two groups of candidates from a whole face image. Then corresponding SVMs (supporting vector machine) are employed to verify the real eye and its state. The LBP methodology makes it robust against rotation, illumination and occlusion to find the candidates, and the SVM helps to make the final verification correct.",
"In this paper, we present an approach for eye state recognition and closed-eye photo correction. For eye state recognition, AdaBoosted cascade open-eye detectors of different scales are trained. For closed-eye photo correction, a PCA generative model of concatenated corresponding closed-eye and open-eye texture patterns is built, and given a closed-eye texture pattern, an algorithm is proposed to recover its corresponding open-eye one for closed-eye replacement. Experiments on popular consumer images show our open-eye detectors achieved a 94.71% correct recognition rate, and the closed-eye photo correction looks very natural."
]
} |
1902.07891 | 2916562010 | Effective and real-time eyeblink detection has wide-ranging applications, such as deception detection, driver fatigue detection, face anti-spoofing, etc. Although numerous efforts have already been made, most of them focus on addressing the eyeblink detection problem under constrained indoor conditions with a relatively consistent subject and environment setup. Nevertheless, towards practical applications, eyeblink detection in the wild is more demanded and of greater challenge. However, to our knowledge this has not been well studied before. In this paper, we shed light on this research topic. A labelled eyeblink-in-the-wild dataset (i.e., HUST-LEBW) of 673 eyeblink video samples (i.e., 381 positives and 292 negatives) is first established by us. These samples are captured from unconstrained movies, with dramatic variation in human attributes, human pose, illumination conditions, imaging configuration, etc. Then, we formulate the eyeblink detection task as a spatial-temporal pattern recognition problem. After locating and tracking the human eye using the SeetaFace engine and the KCF tracker respectively, a modified LSTM model able to capture multi-scale temporal information is proposed to execute eyeblink verification. A feature extraction approach that reveals appearance and motion characteristics simultaneously is also proposed. The experiments on HUST-LEBW reveal the superiority and efficiency of our approach. They also verify that the existing eyeblink detection methods cannot achieve satisfactory performance in the wild. | Accurate eye localization is the key step for eyeblink detection in the spatial domain. Some existing approaches @cite_14 @cite_2 @cite_21 resort to using color or spectral characteristics to locate the eye. Another way is to use motion information @cite_35 to detect and track the eye. Nevertheless, their performance is not promising.
Most of the state-of-the-art methods @cite_30 @cite_16 @cite_19 @cite_43 @cite_24 resort to detecting facial landmarks to this end, in the manner of face parsing. To achieve a balance between effectiveness and efficiency, we choose to use the SeetaFace engine @cite_26 for eye detection first, and then track the eye using KCF @cite_22 for high efficiency. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_43",
"@cite_2",
"@cite_16"
],
"mid": [
"1979845503",
"2088205261",
"1996633729",
"2523915246",
"2154889144",
"2155217597",
"2341528187",
"",
"2616063741",
"2542495828",
""
],
"abstract": [
"A vision-based human--computer interface is presented in the paper. The interface detects voluntary eye-blinks and interprets them as control commands. The employed image processing methods include Haar-like features for automatic face detection, and template matching based eye tracking and eye-blink detection. Interface performance was tested by 49 users (of which 12 were with physical disabilities). Test results indicate interface usefulness in offering an alternative mean of communication with computers. The users entered English and Polish text (with average time of less than 12s per character) and were able to browse the Internet. The interface is based on a notebook equipped with a typical web camera and requires no extra light sources. The interface application is available on-line as open-source software.",
"The proposed method performs the determination of eye blink states by tracking iris and eyelids. Two novelties of this method are the simultaneous exploitation of intensity and edge information for detecting the eye state as well as the record of the patterns of eyelids before closing for tracking the reopened eyes. Experiments show the efficiency of the proposed method.",
"In the present study, a vehicle driver drowsiness warning system using an image processing technique with fuzzy logic inference is developed and investigated. The principle of the proposed system is based on facial image analysis for warning the driver of drowsiness or inattention to prevent traffic accidents. The facial images of the driver are taken by a CCD camera installed on the dashboard in front of the driver. A fuzzy logic algorithm and an inference are proposed to determine the level of fatigue by measuring the blinking duration and its frequency, and to warn the driver accordingly. Experimental works are carried out to evaluate the effect of the proposed system for drowsiness warning under various operating conditions. The experimental results indicate that the proposed expert system is effective for increasing driving safety. The details of the image processing technique and its characteristics are also presented in this paper.",
"Multi-view face detection in open environment is a challenging task due to diverse variations of face appearances and shapes. Most multi-view face detectors depend on multiple models and organize them in parallel, pyramid or tree structure, which compromise between the accuracy and time-cost. Aiming at a more favorable multi-view face detector, we propose a novel funnel-structured cascade (FuSt) detection framework. In a coarse-to-fine flavor, our FuSt consists of, from top to bottom, (1) multiple view-specific fast LAB cascade for extremely quick face proposal, (2) multiple coarse MLP cascade for further candidate window verification, and (3) a unified fine MLP cascade with shape-indexed features for accurate face detection. Compared with other structures, on the one hand, the proposed one uses multiple computationally efficient distributed classifiers to propose a small number of candidate windows but with a high recall of multi-view faces. On the other hand, by using a unified MLP cascade to examine proposals of all views in a centralized style, it provides a favorable solution for multi-view face detection with high accuracy and low timecost. Besides, the FuSt detector is alignment-aware and performs a coarse facial part prediction which is beneficial for subsequent face alignment. Extensive experiments on two challenging datasets, FDDB and AFW, demonstrate the effectiveness of our FuSt detector in both accuracy and speed.",
"The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies—any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF), that unlike other kernel algorithms has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50 videos benchmark, despite running at hundreds of frames-per-second, and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source.",
"This paper describes a real-time prototype computer vision system for monitoring driver vigilance. The main components of the system consist of a remotely located video CCD camera, a specially designed hardware system for real-time image acquisition and for controlling the illuminator and the alarm system, and various computer vision algorithms for simultaneously, in real time and non-intrusively, monitoring various visual bio-behaviors that typically characterize a driver's level of vigilance. The visual behaviors include eyelid movement, face orientation, and gaze movement (pupil movement). The system was tested in a simulated environment with subjects of different ethnic backgrounds, different genders, ages, with/without glasses, and under different illumination conditions, and it was found very robust, reliable and accurate.",
"Face detection and alignment in unconstrained environment are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost up their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark location in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves the performance in practice. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging face detection dataset and benchmark and WIDER FACE benchmarks for face detection, and annotated facial landmarks in the wild benchmark for face alignment, while keeps real-time performance.",
"",
"",
"Drowsiness detection is vital in preventing traffic accidents. Eye state analysis - detecting whether the eye is open or closed - is critical step for drowsiness detection. In this paper, we propose an easy algorithm for pupil center and iris boundary localization and a new algorithm for eye state analysis, which we incorporate into a four step system for drowsiness detection: face detection, eye detection, eye state analysis, and drowsy decision. This new system requires no training data at any step or special cameras. Our eye detection algorithm uses Eye Map, thus achieving excellent pupil center and iris boundary localization results on the IMM database. Our novel eye state analysis algorithm detects eye state using the saturation (S) channel of the HSV color space. We analyze our eye state analysis algorithm using five video sequences and show superior results compared to the common technique based on distance between eyelids.",
""
]
} |
1902.07792 | 2915145160 | Data theft and tampering are serious concerns as attackers have aggressively begun to exploit weaknesses in current memory systems to advance their nefarious schemes. The storage industry is moving toward emerging non-volatile memories (NVM), including spin-transfer torque magnetoresistive random access memory (STT-MRAM) and phase change memory (PCM), owing to their high density and low-power operation. The advent of novel memory technologies has led to new vulnerabilities, including data sensitivity to magnetic field and temperature fluctuations and data persistence after power down. In this paper, we propose SMART: a Secure Magnetoelectric Antiferromagnet-Based Tamper-Proof memory, which leverages unique properties of antiferromagnetic materials and offers dense, on-chip non-volatile storage. SMART memory is not only resilient against data confidentiality attacks seeking to leak sensitive information but also protects data integrity and prevents Denial of Service (DoS) attacks on the memory. It is impervious to power side-channel attacks, which exploit asymmetric reads/writes for 0 and 1 logic levels, and photonic side-channel attacks, which monitor photo-emission signatures from the chip backside. Further, the ultra-low-power magnetoelectric switching coupled with the terahertz-regime antiferromagnetic dynamics results in 4 orders of magnitude lower energy per bit and 3 orders of magnitude smaller latency for the SMART memory as compared to prior NVMs such as STT-MRAM and PCM. | Prior works on securing NVMs have focused mainly on memory encryption schemes, which are necessary to prevent attackers from exploiting data persistence in the off-state. Chhabra proposed an incremental encryption scheme @cite_8 for NVMs where only inert memory pages, which have not been accessed for a while, are selectively encrypted. The working set of the memory (which is in current use) is left in plaintext and, hence, incurs no encryption overhead on access.
Such selective encryption ensures that the majority of the main memory content (but not all of it) remains encrypted at all times, without overly compromising performance. However, it requires dedicated hardware, inert-page prediction, and scheduling for its implementation. A sneak-path encryption scheme was demonstrated for memristor-based NVMs in @cite_37 , wherein sneak paths in the memristor crossbar array are exploited to apply encryption pulses that change the resistances of the memory cells and thereby encrypt the stored data. | {
"cite_N": [
"@cite_37",
"@cite_8"
],
"mid": [
"2167677691",
"1997933199"
],
"abstract": [
"Non-volatile memory devices such as phase change memories and memristors are promising alternatives to SRAM and DRAM main memories as they provide higher density and improved energy efficiency. However, non-volatile main memories (NVMM) introduce security vulnerabilities. Sensitive data such as passwords and keys residing in the NVMM will persist and can be probed after power down. We propose sneak-path encryption (SPE) for memristor-based NVMM. SPE exploits the physical parameters, multilevel cell (MLC) capability and the sneak paths in crossbar memories to encrypt the data stored in memristor-based NVMM. We investigate three attacks on NVMMs and show the resilience of SPE against them. We use a cycle-accurate simulator to evaluate the security and performance impact of SPE-based NVMM. SPE can secure the NVMM with a latency of 16 cycles and 1.5% performance overhead.",
"Emerging technologies for building non-volatile main memory (NVMM) systems suffer from a security vulnerability where information lingers on long after the system is powered down, enabling an attacker with physical access to the system to extract sensitive information off the memory. The goal of this study is to find a solution for such a security vulnerability. We introduce i-NVMM, a data privacy protection scheme for NVMM, where the main memory is encrypted incrementally, i.e. different data in the main memory is encrypted at different times depending on whether the data is predicted to still be useful to the processor. The motivation behind incremental encryption is the observation that the working set of an application is much smaller than its resident set. By identifying the working set and encrypting the remaining part of the resident set, i-NVMM can keep the majority of the main memory encrypted at all times without penalizing performance by much. Our experiments demonstrate promising results. i-NVMM keeps 78% of the main memory encrypted across SPEC2006 benchmarks, yet only incurs 3.7% execution time overhead, and has a negligible impact on the write endurance of NVMM, all achieved with relatively simple hardware support in the memory module."
]
} |
1902.07995 | 2916609414 | SLAM technology has recently seen many successes and attracted the attention of high-technology companies. However, how to unify the interfaces of existing or emerging algorithms, and how to effectively benchmark their speed, robustness and portability, are still open problems. In this paper, we propose a novel SLAM platform named GSLAM, which not only provides evaluation functionality, but also supplies a useful toolkit for researchers to quickly develop their own SLAM systems. The core contribution of GSLAM is a universal, cross-platform and fully open-source SLAM interface for both research and commercial usage, which is aimed at handling interactions with input datasets, SLAM implementations, visualization and applications in a unified framework. Through this platform, users can implement their own functions for better performance in plugin form and further advance SLAM toward practical usage. | SLAM techniques build a map of an unknown environment and localize the sensor in the map, with a strong focus on real-time operation. Early SLAM systems were mostly based on the extended Kalman filter (EKF) @cite_7 . The @math DOF motion parameters and 3D landmarks are probabilistically represented as a single state vector. The complexity of the classic EKF grows quadratically with the number of landmarks, restricting its scalability. In recent years, SLAM technology has developed rapidly and many monocular visual SLAM systems, including key-point based @cite_7 @cite_52 @cite_8 , direct @cite_67 @cite_55 @cite_40 and semi-direct methods @cite_33 @cite_24 , have been proposed. However, monocular SLAM systems lack scale information and are not able to handle pure-rotation situations; therefore, multi-sensor SLAM systems, including RGBD @cite_60 @cite_14 @cite_46 , stereo @cite_19 @cite_24 @cite_17 and inertial-aided methods @cite_64 @cite_10 @cite_23 , have been studied for higher robustness and precision. | {
"cite_N": [
"@cite_67",
"@cite_14",
"@cite_64",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_60",
"@cite_55",
"@cite_52",
"@cite_24",
"@cite_19",
"@cite_40",
"@cite_23",
"@cite_46",
"@cite_10",
"@cite_17"
],
"mid": [
"2108134361",
"2527142681",
"2091790851",
"",
"2152671441",
"1612997784",
"2064451896",
"612478963",
"2151290401",
"2564632156",
"2218842719",
"2474281075",
"2797929305",
"2336469227",
"2745859992",
"2535547924"
],
"abstract": [
"DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.",
"We present a novel approach to real-time dense visual simultaneous localisation and mapping. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments and beyond explored using an RGB-D camera in an incremental online fashion, without pose graph optimization or any post-processing steps. This is accomplished by using dense frame-to-model camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimizations as often as possible to stay close to the mode of the map distribution, while utilizing global loop closure to recover from arbitrary drift and maintain global consistency. In the spirit of improving map quality as well as tracking accuracy and robustness, we furthermore explore a novel approach to real-time discrete light source detection. This technique is capable of detecting numerous light sources in indoo...",
"Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual-inertial odometry or simultaneous localization and mapping SLAM. While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable and thus ensuring real-time operation by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual-inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.",
"",
"We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the \"pure vision\" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to structure from motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera",
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.",
"In this paper, we propose a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels. In contrast to sparse, feature-based methods, this allows us to better exploit the available information in the image data which leads to higher pose accuracy. Furthermore, we propose an entropy-based similarity measure for keyframe selection and loop closure detection. From all successful matches, we build up a graph that we optimize using the g2o framework. We evaluated our approach extensively on publicly available benchmark datasets, and found that it performs well in scenes with low texture as well as low structure. In direct comparison to several state-of-the-art methods, our approach yields a significantly lower trajectory error. We release our software as open-source.",
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.",
"Direct methods for visual odometry (VO) have gained popularity for their capability to exploit information from all intensity gradients in the image. However, low computational speed as well as missing guarantees for optimality and consistency are limiting factors of direct methods, in which established feature-based methods succeed instead. Based on these considerations, we propose a semidirect VO (SVO) that uses direct methods to track and triangulate pixels that are characterized by high image gradients, but relies on proven feature-based methods for joint optimization of structure and motion. Together with a robust probabilistic depth estimation algorithm, this enables us to efficiently track pixels lying on weak corners and edges in environments with little or high-frequency texture. We further demonstrate that the algorithm can easily be extended to multiple cameras, to track edges, to include motion priors, and to enable the use of very large field of view cameras, such as fisheye and catadioptric ones. Experimental evaluation on benchmark datasets shows that the algorithm is significantly faster than the state of the art while achieving highly competitive accuracy.",
"We propose a novel Large-Scale Direct SLAM algorithm for stereo cameras (Stereo LSD-SLAM) that runs in real-time at high frame rate on standard CPUs. In contrast to sparse interest-point based methods, our approach aligns images directly based on the photoconsistency of all high-contrast pixels, including corners, edges and high texture areas. It concurrently estimates the depth at these pixels from two types of stereo cues: Static stereo through the fixed-baseline stereo camera setup as well as temporal multi-view stereo exploiting the camera motion. By incorporating both disparity sources, our algorithm can even estimate depth of pixels that are under-constrained when only using fixed-baseline stereo. Using a fixed baseline, on the other hand, avoids scale-drift that typically occurs in pure monocular SLAM.We furthermore propose a robust approach to enforce illumination invariance, capable of handling aggressive brightness changes between frames - greatly improving the performance in realistic settings. In experiments, we demonstrate state-of-the-art results on stereo SLAM benchmarks such as Kitti or challenging datasets from the EuRoC Challenge 3 for micro aerial vehicles.",
"Direct Sparse Odometry (DSO) is a visual odometry method based on a novel, highly accurate sparse and direct structure and motion formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry-represented as inverse depth in a reference frame-and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on essentially featureless walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.",
"We present VI-DSO, a novel approach for visual-inertial odometry, which jointly estimates camera poses and sparse scene geometry by minimizing photometric and IMU measurement errors in a combined energy functional. The visual part of the system performs a bundle-adjustment like optimization on a sparse set of points, but unlike key-point based systems it directly minimizes a photometric error. This makes it possible for the system to track not only corners, but any pixels with large enough intensity gradients. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between keyframes. We explicitly include scale and gravity direction into our model and jointly optimize them together with other variables such as poses. As the scale is often not immediately observable using IMU data this allows us to initialize our visual-inertial system with an arbitrary scale instead of having to delay the initialization until everything is observable. We perform partial marginalization of old variables so that updates can be computed in a reasonable time. In order to keep the system consistent we propose a novel strategy which we call \"dynamic marginalization\". This technique allows us to use partial marginalization even in cases where the initial scale estimate is far from the optimum. We evaluate our method on the challenging EuRoC dataset, showing that VI-DSO outperforms the state of the art.",
"In this paper we present a novel semi-direct tracking and mapping (SDTAM) approach for RGB-D cameras which inherits the advantages of both direct and feature based methods, and consequently it achieves high efficiency, accuracy, and robustness. The input RGB-D frames are tracked with a direct method and keyframes are refined by minimizing a proposed measurement residual function which takes both geometric and depth information into account. A local optimization is performed to refine the local map while global optimization detects and corrects loops with the appearance based bag of words and a co-visibility weighted pose graph. Our method has higher accuracy on both trajectory tracking and surface reconstruction compared to state-of-the-art frame-to-frame or frame-model approaches. We test our system in challenging sequences with motion blur, fast pure rotation, and large moving objects, the results demonstrate it can still successfully obtain results with high accuracy. Furthermore, the proposed approach achieves real-time speed which only uses part of the CPU computation power, and it can be applied to embedded devices such as phones, tablets, or micro aerial vehicles (MAVs).",
"One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for the metric six degrees-of-freedom (DOF) state estimation. In this paper, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations. A loop detection module, in combination with our tightly coupled formulation, enables relocalization with minimum computation. We additionally perform 4-DOF pose graph optimization to enforce the global consistency. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current and previous maps can be merged together by the global pose graph optimization. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform an onboard closed-loop autonomous flight on the microaerial-vehicle platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy in localization. We open source our implementations for both PCs ( https: github.com HKUST-Aerial-Robotics VINS-Mono ) and iOS mobile devices ( https: github.com HKUST-Aerial-Robotics VINS-Mobile ).",
"We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields."
]
} |
1902.07995 | 2916609414 | SLAM technology has recently seen many successes and attracted the attention of high-tech companies. However, how to unify the interfaces of existing or emerging algorithms, and how to effectively benchmark their speed, robustness and portability, remain open problems. In this paper, we propose a novel SLAM platform named GSLAM, which not only provides evaluation functionality, but also supplies a useful toolkit for researchers to quickly develop their own SLAM systems. The core contribution of GSLAM is a universal, cross-platform and fully open-source SLAM interface for both research and commercial usage, aimed at handling interactions with input datasets, SLAM implementations, visualization and applications in a unified framework. Through this platform, users can implement their own functions as plugins for better performance and further advance SLAM toward practical applications. | Recently, supervised @cite_42 @cite_20 @cite_25 and unsupervised @cite_61 @cite_11 deep learning based visual odometry (VO) methods have presented novel ideas compared to traditional geometry-based methods, but it is still not easy to further optimize the predicted poses for consistency across multiple keyframes. The tools provided by GSLAM could help them obtain better global consistency. Through our framework, it is easier to visualize and evaluate the results, which can then be applied to various industrial fields. | {
"cite_N": [
"@cite_61",
"@cite_42",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2609883120",
"2598706937",
"2732287207",
"2771230335",
"2963583471"
],
"abstract": [
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"This paper studies monocular visual odometry (VO) problem. Most of existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (videos) without adopting any module in the conventional VO pipeline. Based on the RCNNs, it not only automatically learns effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show competitive performance to state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to the traditional VO systems.",
"Compared to LiDAR-based localization methods, which provide high accuracy but rely on expensive sensors, visual localization approaches only require a camera and thus are more cost-effective while their accuracy and reliability typically is inferior to LiDAR-based methods. In this work, we propose a vision-based localization approach that learns from LiDAR-based localization methods by using their output as training data, thus combining a cheap, passive sensor with an accuracy that is on-par with LiDAR-based localization. The approach consists of two deep networks trained on visual odometry and topological localization, respectively, and a successive optimization to combine the predictions of these two networks. We evaluate the approach on a new challenging pedestrian-based dataset captured over the course of six months in varying weather conditions with a high degree of noise. The experiments demonstrate that the localization errors are up to 10 times smaller than with traditional vision-based localization methods.",
"Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g., 3D landmarks, lines, planes, bags of visual words). We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work.",
"We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and egomotion estimation from videos. The three components are coupled by the nature of 3D scene geometry, jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted over the predictions of individual modules and then combined as an image reconstruction loss, reasoning about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experimentation on the KITTI driving dataset reveals that our scheme achieves state-of-the-art results in all of the three tasks, performing better than previously unsupervised methods and comparably with supervised ones."
]
} |
1902.07995 | 2916609414 | SLAM technology has recently seen many successes and attracted the attention of high-tech companies. However, how to unify the interfaces of existing or emerging algorithms, and how to effectively benchmark their speed, robustness and portability, remain open problems. In this paper, we propose a novel SLAM platform named GSLAM, which not only provides evaluation functionality, but also supplies a useful toolkit for researchers to quickly develop their own SLAM systems. The core contribution of GSLAM is a universal, cross-platform and fully open-source SLAM interface for both research and commercial usage, aimed at handling interactions with input datasets, SLAM implementations, visualization and applications in a unified framework. Through this platform, users can implement their own functions as plugins for better performance and further advance SLAM toward practical applications. | Inspired by the ROS2 @cite_38 messaging architecture, GSLAM implements a similar intra-process communication utility class named Messenger. This provides an alternative option to replace ROS inside the SLAM implementation while maintaining compatibility: all ROS-defined messages are supported, and a ROS wrapper is naturally implemented within our framework. Due to the intra-process design, there is no serialization or data transfer; messages are sent without latency or extra cost. Meanwhile, the payloads are not limited to ROS-defined messages but can be any copyable data structures. Moreover, we not only provide evaluation functionality, but also supply a useful toolkit for researchers to quickly develop and integrate their own SLAM algorithms. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2530771494"
],
"abstract": [
"Middleware for robotics development must meet demanding requirements in real-time distributed embedded systems. The Robot Operating System (ROS), open-source middleware, has been widely used for robotics applications. However, the ROS is not suitable for real-time embedded systems because it does not satisfy real-time requirements and only runs on a few OSs. To address this problem, ROS1 will undergo a significant upgrade to ROS2 by utilizing the Data Distribution Service (DDS). DDS is suitable for real-time distributed embedded systems due to its various transport configurations (e.g., deadline and fault-tolerance) and scalability. ROS2 must convert data for DDS and abstract DDS from its users; however, this incurs additional overhead, which is examined in this study. Transport latencies between ROS2 nodes vary depending on the use cases, data size, configurations, and DDS vendors. We conduct proof of concept for DDS approach to ROS and arrange DDS characteristic and guidelines from various evaluations. By highlighting the DDS capabilities, we explore and evaluate the potential and constraints of DDS and ROS2."
]
} |
1902.07995 | 2916609414 | SLAM technology has recently seen many successes and attracted the attention of high-tech companies. However, how to unify the interfaces of existing or emerging algorithms, and how to effectively benchmark their speed, robustness and portability, remain open problems. In this paper, we propose a novel SLAM platform named GSLAM, which not only provides evaluation functionality, but also supplies a useful toolkit for researchers to quickly develop their own SLAM systems. The core contribution of GSLAM is a universal, cross-platform and fully open-source SLAM interface for both research and commercial usage, aimed at handling interactions with input datasets, SLAM implementations, visualization and applications in a unified framework. Through this platform, users can implement their own functions as plugins for better performance and further advance SLAM toward practical applications. | Currently, there exist several SLAM benchmarks, including the KITTI Benchmark Suite @cite_12 , TUM RGB-D Benchmarking @cite_65 and the ICL-NUIM RGB-D Benchmark Dataset @cite_51 , which only provide evaluation functionality. In addition, SLAMBench2 @cite_5 expanded these benchmarks to cover algorithms and datasets, but it requires users to make released implementations SLAMBench2-compatible for evaluation, and it is difficult to extend to further applications. Different from these systems, the proposed GSLAM platform provides a solution that serves the whole life cycle of a SLAM implementation, from development and evaluation to application. We provide a useful toolkit for researchers to quickly develop their own SLAM systems, and visualization, evaluation and applications are further developed based on a unified interface. | {
"cite_N": [
"@cite_51",
"@cite_5",
"@cite_65",
"@cite_12"
],
"mid": [
"2058535340",
"2789218862",
"2021851106",
""
],
"abstract": [
"We introduce the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset for the evaluation of visual odometry, 3D reconstruction and SLAM algorithms that typically use RGB-D data. We present a collection of handheld RGB-D camera sequences within synthetically generated environments. RGB-D sequences with perfect ground truth poses are provided as well as a ground truth surface model that enables a method of quantitatively evaluating the final map or surface reconstruction accuracy. Care has been taken to simulate typically observed real-world artefacts in the synthetic imagery by modelling sensor noise in both RGB and depth data. While this dataset is useful for the evaluation of visual odometry and SLAM trajectory estimation, our main focus is on providing a method to benchmark the surface reconstruction accuracy which to date has been missing in the RGB-D community despite the plethora of ground truth RGB-D datasets available.",
"SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms, or to perform a holistic comparison of their capabilities. This is a problem since different SLAM applications can have different functional and non-functional requirements. For example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. SLAMBench2 is a benchmarking framework to evaluate existing and future SLAM systems, both open and close source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. A wide variety of existing SLAM algorithms and datasets is supported, e.g. ElasticFusion, InfiniTAM, ORB-SLAM2, OKVIS, and integrating new ones is straightforward and clearly specified by the framework. SLAMBench2 is a publicly-available software framework which represents a starting point for quantitative, comparable and val-idatable experimental research to investigate trade-offs across SLAM systems.",
"In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
""
]
} |
1902.08039 | 2905606790 | In Reinforcement Learning (RL), an agent explores the environment and collects trajectories into the memory buffer for later learning. However, the collected trajectories can easily be imbalanced with respect to the achieved goal states. Learning from imbalanced data is a well-known problem in supervised learning, but it has not yet been thoroughly researched in RL. To address this problem, we propose a novel Curiosity-Driven Prioritization (CDP) framework to encourage the agent to over-sample those trajectories that have rare achieved goal states. The CDP framework mimics the human learning process and focuses more on relatively uncommon events. We evaluate our methods using the robotic environment provided by OpenAI Gym. The environment contains six robot manipulation tasks. In our experiments, we combined CDP with Deep Deterministic Policy Gradient (DDPG) with or without Hindsight Experience Replay (HER). The experimental results show that CDP improves both the performance and the sample efficiency of reinforcement learning agents, compared to state-of-the-art methods. | Curiosity-driven exploration is a well-studied topic in reinforcement learning @cite_7 @cite_0 @cite_51 @cite_34 @cite_17 . Prior work encourages the agent to explore states with high prediction error. Agents are also encouraged to explore "novel" or uncertain states @cite_47 @cite_4 @cite_40 @cite_9 @cite_44 @cite_14 @cite_24 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_0",
"@cite_44",
"@cite_40",
"@cite_24",
"@cite_47",
"@cite_34",
"@cite_51",
"@cite_17"
],
"mid": [
"2139612737",
"2160589914",
"2000514530",
"2963639957",
"2101524054",
"2962730405",
"2116459397",
"779494576",
"2963276097",
"2034806191",
"172298727",
"1550989509"
],
"abstract": [
"Psychologists call behavior intrinsically motivated when it is engaged in for its own sake rather than as a step toward solving a specific problem of clear practical value. But what we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated reinforcement learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy.",
"Formal exploration approaches in model-based reinforcement learning estimate the accuracy of the currently learned model without consideration of the empirical prediction error. For example, PAC-MDP approaches such as R-MAX base their model certainty on the amount of collected data, while Bayesian approaches assume a prior over the transition dynamics. We propose extensions to such approaches which drive exploration solely based on empirical estimates of the learner's accuracy and learning progress. We provide a \"sanity check\" theoretical analysis, discussing the behavior of our extensions in the standard stationary finite state-action case. We then provide experimental studies demonstrating the robustness of these exploration measures in cases of non-stationary environments or where original approaches are misled by wrong domain assumptions.",
"Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.",
"Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.",
"Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology",
"The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents. Most learning algorithms that involve optimisation of the mutual information rely on the Blahut-Arimoto algorithm — an enumerative algorithm with exponential complexity that is not suitable for modern machine learning applications. This paper provides a new approach for scalable optimisation of the mutual information by merging techniques from variational inference and deep learning. We develop our approach by focusing on the problem of intrinsically-motivated learning, where the mutual information forms the definition of a well-known internal drive known as empowerment. Using a variational lower bound on the mutual information, combined with convolutional networks for handling visual input streams, we develop a stochastic optimisation algorithm that allows for scalable information maximisation and empowerment-based reasoning directly from pixels to actions.",
"Reinforcement learning (RL) was originally proposed as a framework to allow agents to learn in an online fashion as they interact with their environment. Existing RL algorithms come short of achieving this goal because the amount of exploration required is often too costly and or too time consuming for online learning. As a result, RL is mostly used for offline learning in simulated environments. We propose a new algorithm, called BEETLE, for effective online learning that is computationally efficient while minimizing the amount of exploration. We take a Bayesian model-based approach, framing RL as a partially observable Markov decision process. Our two main contributions are the analytical derivation that the optimal value function is the upper envelope of a set of multivariate polynomials, and an efficient point-based value iteration algorithm that exploits this simple parameterization.",
"Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzman exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.",
"We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA'S REVENGE.",
"The simple, but general formal theory of fun and intrinsic motivation and creativity (1990-2010) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression. It generalizes the traditional field of active learning, and is related to old, but less formal ideas in aesthetics theory and developmental psychology. It has been argued that the theory explains many essential aspects of intelligence including autonomous development, science, art, music, and humor. This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities. Emphasis is put on the importance of limited computational resources for online prediction and compression. Discrete and continuous time formulations are given. Previous practical, but nonoptimal implementations (1991, 1995, and 1997-2002) are reviewed, as well as several recent variants by others (2005-2010). A simplified typology addresses current confusion concerning the precise nature of intrinsic motivation.",
"",
"To maximize its success, an AGI typically needs to explore its initially unknown world. Is there an optimal way of doing so? Here we derive an affirmative answer for a broad class of environments."
]
} |
1902.08039 | 2905606790 | In Reinforcement Learning (RL), an agent explores the environment and collects trajectories into the memory buffer for later learning. However, the collected trajectories can easily be imbalanced with respect to the achieved goal states. The problem of learning from imbalanced data is a well-known problem in supervised learning, but has not yet been thoroughly researched in RL. To address this problem, we propose a novel Curiosity-Driven Prioritization (CDP) framework to encourage the agent to over-sample those trajectories that have rare achieved goal states. The CDP framework mimics the human learning process and focuses more on relatively uncommon events. We evaluate our methods using the robotic environment provided by OpenAI Gym. The environment contains six robot manipulation tasks. In our experiments, we combined CDP with Deep Deterministic Policy Gradient (DDPG) with or without Hindsight Experience Replay (HER). The experimental results show that CDP improves both performance and sample-efficiency of reinforcement learning agents, compared to state-of-the-art methods. | However, we integrate curiosity into prioritization and tackle the problem of data imbalance @cite_38 in the memory buffer of RL agents. A recent work @cite_6 introduced a form of re-sampling for RL agents based on trajectory energy functions. The idea of our method is complementary and can be combined. The motivation of our method is from the curiosity mechanism in the human brain @cite_46 . The essence of our method is to assign priority to the achieved trajectories with lower density, which are relatively more valuable to learn from. In supervised learning, similar tricks are used to mitigate the class imbalance challenge, such as over-sampling the data in the under-represented class @cite_52 @cite_2 . | {
"cite_N": [
"@cite_38",
"@cite_52",
"@cite_6",
"@cite_2",
"@cite_46"
],
"mid": [
"2099454382",
"1543614656",
"2895626374",
"2118978333",
"2098036797"
],
"abstract": [
"Classifier learning with data-sets that suffer from imbalanced class distributions is a challenging problem in data mining community. This issue occurs when the number of examples that represent one class is much lower than the ones of the other classes. Its presence in many real-world applications has brought along a growth of attention from researchers. In machine learning, the ensemble of classifiers are known to increase the accuracy of single classifiers by combining several of them, but neither of these learning techniques alone solve the class imbalance problem, to deal with this issue the ensemble learning algorithms have to be designed specifically. In this paper, our aim is to review the state of the art on ensemble techniques in the framework of imbalanced data-sets, with focus on two-class problems. We propose a taxonomy for ensemble-based methods to address the class imbalance where each proposal can be categorized depending on the inner ensemble methodology in which it is based. In addition, we develop a thorough empirical comparison by the consideration of the most significant published approaches, within the families of the taxonomy proposed, to show whether any of them makes a difference. This comparison has shown the good behavior of the simplest approaches which combine random undersampling techniques with bagging or boosting ensembles. In addition, the positive synergy between sampling techniques and bagging has stood out. Furthermore, our results show empirically that ensemble-based algorithms are worthwhile since they outperform the mere use of preprocessing techniques before learning the classifier, therefore justifying the increase of complexity by means of a significant enhancement of the results.",
"Abstract The uniformity of the cortical architecture and the ability of functions to move to different areas of cortex following early damage strongly suggest that there is a single basic learning algorithm for extracting underlying structure from richly structured, high-dimensional sensory data. There have been many attempts to design such an algorithm, but until recently they all suffered from serious computational weaknesses. This chapter describes several of the proposed algorithms and shows how they can be combined to produce hybrid methods that work efficiently in networks with many layers and millions of adaptive connections.",
"In Hindsight Experience Replay (HER), a reinforcement learning agent is trained by treating whatever it has achieved as virtual goals. However, in previous work, the experience was replayed at random, without considering which episode might be the most valuable for learning. In this paper, we develop an energy-based framework for prioritizing hindsight experience in robotic manipulation tasks. Our approach is inspired by the work-energy principle in physics. We define a trajectory energy function as the sum of the transition energy of the target object over the trajectory. We hypothesize that replaying episodes that have high trajectory energy is more effective for reinforcement learning in robotics. To verify our hypothesis, we designed a framework for hindsight experience prioritization based on the trajectory energy of goal states. The trajectory energy function takes the potential, kinetic, and rotational energy into consideration. We evaluate our Energy-Based Prioritization (EBP) approach on four challenging robotic manipulation tasks in simulation. Our empirical results show that our proposed method surpasses state-of-the-art approaches in terms of both performance and sample-efficiency on all four tasks, without increasing computational time. A video showing experimental results is available at this https URL",
"With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications, the problem of learning from imbalanced data (the imbalanced learning problem) is a relatively new challenge that has attracted growing attention from both academia and industry. The imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. In this paper, we provide a comprehensive review of the development of research in learning from imbalanced data. Our focus is to provide a critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario. Furthermore, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potential important research directions for learning from imbalanced data.",
"Summary People find it easier to learn about topics that interest them, but little is known about the mechanisms by which intrinsic motivational states affect learning. We used functional magnetic resonance imaging to investigate how curiosity (intrinsic motivation to learn) influences memory. In both immediate and one-day-delayed memory tests, participants showed improved memory for information that they were curious about and for incidental material learned during states of high curiosity. Functional magnetic resonance imaging results revealed that activity in the midbrain and the nucleus accumbens was enhanced during states of high curiosity. Importantly, individual variability in curiosity-driven memory benefits for incidental material was supported by anticipatory activity in the midbrain and hippocampus and by functional connectivity between these regions. These findings suggest a link between the mechanisms supporting extrinsic reward motivation and intrinsic curiosity and highlight the importance of stimulating curiosity to create more effective learning experiences. Video Abstract"
]
} |
1902.07821 | 2950550251 | This paper aims to improve the widely used deep speaker embedding x-vector model. We propose the following improvements: (1) a hybrid neural network structure using both time delay neural network (TDNN) and long short-term memory neural networks (LSTM) to generate complementary speaker information at different levels; (2) a multi-level pooling strategy to collect speaker information from both TDNN and LSTM layers; (3) a regularization scheme on the speaker embedding extraction layer to make the extracted embeddings suitable for the following fusion step. The synergy of these improvements are shown on the NIST SRE 2016 eval test (with a 19 EER reduction) and SRE 2018 dev test (with a 9 EER reduction), as well as more than 10 DCF scores reduction on these two test sets over the x-vector baseline. | Combining CNN or TDNN with LSTM is proved effective in automatic speech recognition (ASR) tasks. @cite_10 proposed to stack CNN, LSTM and DNN sequentially for speech recognition and it shows superior results than using CNN, LSTM or DNN alone. @cite_5 conducted experiments to compare the stack order of TDNN and LSTM and found interleaving of TDNN layers with LSTM layers could be more effective than simple stacking strategy. | {
"cite_N": [
"@cite_5",
"@cite_10"
],
"mid": [
"2729190387",
"1600744878"
],
"abstract": [
"Bidirectional long short-term memory (BLSTM) acoustic models provide a significant word error rate reduction compared to their unidirectional counterpart, as they model both the past and future temporal contexts. However, it is nontrivial to deploy bidirectional acoustic models for online speech recognition due to an increase in latency. In this letter, we propose the use of temporal convolution, in the form of time-delay neural network (TDNN) layers, along with unidirectional LSTM layers to limit the latency to 200 ms. This architecture has been shown to outperform the state-of-the-art low frame rate (LFR) BLSTM models. We further improve these LFR BLSTM acoustic models by operating them at higher frame rates at lower layers and show that the proposed model performs similar to these mixed frame rate BLSTMs. We present results on the Switchboard 300 h LVCSR task and the AMI LVCSR task, in the three microphone conditions.",
"Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4–6 relative improvement in WER over an LSTM, the strongest of the three individual models."
]
} |
1902.08018 | 2915839211 | Extreme Ultraviolet (EUV) photolithography is seen as the key enabler for increasing transistor density in the next decade. In EUV lithography, 13.5 nm EUV light is illuminated through a reticle, holding a pattern to be printed, onto a silicon wafer. This process is performed about 100 times per wafer, at a rate of over a hundred wafers an hour. During this process, a certain percentage of the light energy is converted into heat in the wafer. In turn, this heat causes the wafer to deform which increases the overlay error, and as a result, reduces the manufacturing yield. To alleviate this, we propose a firm real-time control system that uses a wafer heat feed-forward model to compensate for the wafer deformation. The model calculates the expected wafer deformation, and then, compensates for that by adjusting the light projection and or the wafer movement. However, the model computational demands are very high. As a result, it needs to be executed on dedicated HW that can perform computations quickly. To this end, we deploy Graphics Processing Units (GPUs) to accelerate the calculations. In order to fit the computations within the required time budgets, we combine in a novel manner multiple techniques, such as compression and mixed-precision arithmetic, with recent advancements in GPUs to build a GPU-based real-time control system. A proof-of-concept implementation using NVIDIA P100 GPUs is able to deliver decompression throughput of 33 GB s and a sustained 198 GFLOP s per GPU for mixed-precision dense matrix-vector multiplication. | The past decade witnessed an increase in the interest of using General Purpose GPU (GPGPU) computing in embedded, real-time, and industrial systems @cite_2 @cite_0 @cite_9 @cite_10 @cite_16 @cite_5 @cite_3 . @cite_2 , Elliot and Anderson discussed the applications that can benefit from GPUs and the constraints on using them. 
In @cite_0 @cite_10 , the potential of GPUs in embedded and industrial systems and the issues facing their adoption are discussed. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"1973636889",
"",
"2063849431",
"2163923247",
"",
"2578609449",
"2005997810"
],
"abstract": [
"We present the design, analysis, and real-time implementation of a distributed computation particle filter on a graphic processing unit (GPU) architecture that is especially suited for fast real-time control applications. The proposed filter architecture is composed of a number of local subfilters that can share limited information among each other via an arbitrarily chosen abstract connected communication topology. We develop a detailed implementation procedure for GPU architectures focusing on distributed resampling as a crucial step in our approach, and describe alternative methods in the literature. We analyze the role of the most important parameters such as the number of exchanged particles and the effect of the particle exchange topology. The significant speedup and increase in performance obtained with our framework with respect to both available GPU solutions and standard sequential CPU methods enable particle filter implementations in fast real-time feedback control systems. This is illustrated via experimental and simulation results using a real-time visual servoing problem of a robotic arm capable of running in closed loop with an update rate of 100 Hz, while performing particle filter calculations that involve over one million particles.",
"",
"The Graphics Processing Unit (GPU) is becoming a very powerful platform to accelerate graphics and dataparallel compute-intensive applications. It gives a high performance and at the same time it has a low power consumption. This combination is of high performance and low power consumption is useful when it comes to building an embedded system. In this paper we are looking at the possibility to use a combination of CPU and GPU to provide performance metrics that are required in an embedded system. In particular we look at requirements inherent in the process and power industries where we believe that the GPU has the potential to be a useful and natural element in future embedded system architectures.",
"Graphics processing units (GPUs) are becoming increasingly important in today's platforms as their increased generality allows for them to be used as powerful coprocessors. In this paper, we explore possible applications for GPUs in real-time systems, discuss the limitations and constraints imposed by current GPU technology, and present a summary of our research addressing many such constraints.",
"",
"In the present work, fault detection in industrial automation processes is investigated. A fault detection method for observable process variables is extended for application cases, where the observations of process variables are noisy. The principle of this method consists in building a probability distribution model and evaluating the likelihood of observations under that model. The probability distribution model is based on a hybrid automaton which takes into account several system modes, i.e. phases with continuous system behaviour. Transitions between the modes are attributed to discrete control events such as on off signals. The discrete event system composed of system modes and transitions is modeled as finite state machine. Continuous process behaviour in the particular system modes is modeled with stochastic state space models, which incorporate neural networks. Fault detection is accomplished by evaluation of the underlying probability distribution model with a particle filter. In doing so both the hybrid system model and a linear observation model for noisy observations are taken into account. Experimental results show superior fault detection performance compared to the baseline method for observable process variables. The runtime of the proposed fault detection method has been significantly reduced by parallel implementation on a GPU.",
"In this work in progress paper we present parts of our ongoing work on using the Graphical Processing Unit (GPU) in the context of Embedded Systems. As a first step we are investigating the possibility to move functions from a Digital Signal Processor (DSP) to a GPU. If it is possible to make such a migration then it would simplify the hardware designs for some embedded systems by removing external hardware and also remove a potential life cycle issue with obsolete components. We are currently designing a test system to be able to compare performance between a legacy control system used today in industry, based on a CPU DSP combination, to a new design with a CPU GPU combination. In this setting the pre-filtering of sampled data, previously done in the DSP, is moved to the GPU."
]
} |
1902.08018 | 2915839211 | Extreme Ultraviolet (EUV) photolithography is seen as the key enabler for increasing transistor density in the next decade. In EUV lithography, 13.5 nm EUV light is illuminated through a reticle, holding a pattern to be printed, onto a silicon wafer. This process is performed about 100 times per wafer, at a rate of over a hundred wafers an hour. During this process, a certain percentage of the light energy is converted into heat in the wafer. In turn, this heat causes the wafer to deform which increases the overlay error, and as a result, reduces the manufacturing yield. To alleviate this, we propose a firm real-time control system that uses a wafer heat feed-forward model to compensate for the wafer deformation. The model calculates the expected wafer deformation, and then, compensates for that by adjusting the light projection and or the wafer movement. However, the model computational demands are very high. As a result, it needs to be executed on dedicated HW that can perform computations quickly. To this end, we deploy Graphics Processing Units (GPUs) to accelerate the calculations. In order to fit the computations within the required time budgets, we combine in a novel manner multiple techniques, such as compression and mixed-precision arithmetic, with recent advancements in GPUs to build a GPU-based real-time control system. A proof-of-concept implementation using NVIDIA P100 GPUs is able to deliver decompression throughput of 33 GB s and a sustained 198 GFLOP s per GPU for mixed-precision dense matrix-vector multiplication. | In @cite_9 , the first real attempt at accelerating a real-time control loop (involving a particle filter) on GPUs is proposed. The proposed implementation was done on commercial GeForce cards and can be integrated in large industrial systems. Similar to our proposed solution, @cite_9 targets latencies below 50 ms.
However, a key difference between @cite_9 and our solution lies in the model's computational complexity and the amount of data needed to execute it. Particle filters are compute-bound and require small amounts of data compared to the dense matrices involved in solving the 3D mechanical deformation equation described in . | {
"cite_N": [
"@cite_9"
],
"mid": [
"1973636889"
],
"abstract": [
"We present the design, analysis, and real-time implementation of a distributed computation particle filter on a graphic processing unit (GPU) architecture that is especially suited for fast real-time control applications. The proposed filter architecture is composed of a number of local subfilters that can share limited information among each other via an arbitrarily chosen abstract connected communication topology. We develop a detailed implementation procedure for GPU architectures focusing on distributed resampling as a crucial step in our approach, and describe alternative methods in the literature. We analyze the role of the most important parameters such as the number of exchanged particles and the effect of the particle exchange topology. The significant speedup and increase in performance obtained with our framework with respect to both available GPU solutions and standard sequential CPU methods enable particle filter implementations in fast real-time feedback control systems. This is illustrated via experimental and simulation results using a real-time visual servoing problem of a robotic arm capable of running in closed loop with an update rate of 100 Hz, while performing particle filter calculations that involve over one million particles."
]
} |
1902.08018 | 2915839211 | Extreme Ultraviolet (EUV) photolithography is seen as the key enabler for increasing transistor density in the next decade. In EUV lithography, 13.5 nm EUV light is illuminated through a reticle, holding a pattern to be printed, onto a silicon wafer. This process is performed about 100 times per wafer, at a rate of over a hundred wafers an hour. During this process, a certain percentage of the light energy is converted into heat in the wafer. In turn, this heat causes the wafer to deform which increases the overlay error, and as a result, reduces the manufacturing yield. To alleviate this, we propose a firm real-time control system that uses a wafer heat feed-forward model to compensate for the wafer deformation. The model calculates the expected wafer deformation, and then, compensates for that by adjusting the light projection and or the wafer movement. However, the model computational demands are very high. As a result, it needs to be executed on dedicated HW that can perform computations quickly. To this end, we deploy Graphics Processing Units (GPUs) to accelerate the calculations. In order to fit the computations within the required time budgets, we combine in a novel manner multiple techniques, such as compression and mixed-precision arithmetic, with recent advancements in GPUs to build a GPU-based real-time control system. A proof-of-concept implementation using NVIDIA P100 GPUs is able to deliver decompression throughput of 33 GB s and a sustained 198 GFLOP s per GPU for mixed-precision dense matrix-vector multiplication. | In @cite_16 , Windmann and Niggemann presented a method for fault detection in an industrial automation process. The method incorporates a particle filter with switching neural networks for fault detection. The execution time of the method was reduced from 80 s on CPUs to around 6 s on GPUs.
Compared to our proposed solution, @cite_16 differs in two respects: (i) its target latency is 100x ours, and (ii) its model is compute-bound with rather low memory-bandwidth requirements. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2578609449"
],
"abstract": [
"In the present work, fault detection in industrial automation processes is investigated. A fault detection method for observable process variables is extended for application cases, where the observations of process variables are noisy. The principle of this method consists in building a probability distribution model and evaluating the likelihood of observations under that model. The probability distribution model is based on a hybrid automaton which takes into account several system modes, i.e. phases with continuous system behaviour. Transitions between the modes are attributed to discrete control events such as on/off signals. The discrete event system composed of system modes and transitions is modeled as a finite state machine. Continuous process behaviour in the particular system modes is modeled with stochastic state space models, which incorporate neural networks. Fault detection is accomplished by evaluation of the underlying probability distribution model with a particle filter. In doing so, both the hybrid system model and a linear observation model for noisy observations are taken into account. Experimental results show superior fault detection performance compared to the baseline method for observable process variables. The runtime of the proposed fault detection method has been significantly reduced by parallel implementation on a GPU."
]
} |
1902.07846 | 2915873224 | In this paper we consider the problem of finding stable maxima of expensive (to evaluate) functions. We are motivated by the optimisation of physical and industrial processes where, for some input ranges, small and unavoidable variations in inputs lead to unacceptably large variation in outputs. Our approach uses multiple gradient Gaussian Process models to estimate the probability that worst-case output variation for specified input perturbation exceeded the desired maxima, and these probabilities are then used to (a) guide the optimisation process toward solutions satisfying our stability criteria and (b) post-filter results to find the best stable solution. We exhibit our algorithm on synthetic and real-world problems and demonstrate that it is able to effectively find stable maxima. | The works most closely related to the present work are unscented Bayesian optimisation @cite_1 and stable Bayesian optimisation @cite_15 @cite_12 . Both of these works attempt to find stability in terms of input noise by translating it to output (target) noise. @cite_1 does this using the unscented transformation, while @cite_15 @cite_12 construct a new acquisition function combining the effects of epistemic variance ("standard" variance in the output due to limited samples and noisy measurements) and aleatoric variance due to input perturbations translated into output through the objective function. Thus unstable regions of the objective function become regions of high uncertainty, which the algorithm may subsequently avoid. However, there is no guarantee that such approaches will avoid unstable regions, especially those that combine instability with particularly high (relative) return, so variability of results may still be a problem. | {
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_12"
],
"mid": [
"",
"2963481418",
"2796902300"
],
"abstract": [
"",
"Safe and robust grasping of unknown objects is a major challenge in robotics, which has no general solution yet. A promising approach relies on haptic exploration, where active optimization strategies can be employed to reduce the number of exploration trials. One critical problem is that certain optimal grasps discovered by the optimization procedure may be very sensitive to small deviations of the parameters from their nominal values: we call these unsafe grasps because small errors during motor execution may turn optimal grasps into bad grasps. To reduce the risk of grasp failure, safe grasps should be favoured. Therefore, we propose a new algorithm, unscented Bayesian optimization, that performs efficient optimization while considering uncertainty in the input space, leading to the discovery of safe optima. The results highlight how our method outperforms the classical Bayesian optimization both in synthetic problems and in realistic robot grasp simulations, finding robust and safe grasps after a few exploration trials.",
"Tuning hyperparameters of machine learning models is important for their performance. Bayesian optimization has recently emerged as a de-facto method for this task. The hyperparameter tuning is usually performed by looking at model performance on a validation set. Bayesian optimization is used to find the hyperparameter set corresponding to the best model performance. However, in many cases, the function representing the model performance on the validation set contains several spurious sharp peaks due to limited datapoints. The Bayesian optimization, in such cases, has a tendency to converge to sharp peaks instead of other more stable peaks. When a model trained using these hyperparameters is deployed in the real world, its performance suffers dramatically. We address this problem through a novel stable Bayesian optimization framework. We construct two new acquisition functions that help Bayesian optimization to avoid the convergence to the sharp peaks. We conduct a theoretical analysis and guarantee that Bayesian optimization using the proposed acquisition functions prefers stable peaks over unstable ones. Experiments with synthetic function optimization and hyperparameter tuning for support vector machines show the effectiveness of our proposed framework."
]
} |
1902.07987 | 2916639378 | We explore the use of graph networks to deal with irregular-geometry detectors in the context of particle reconstruction. Thanks to their representation-learning capabilities, graph networks can exploit the full detector granularity, while natively managing the event sparsity and arbitrarily complex detector geometries. We introduce two distance-weighted graph network architectures, dubbed GarNet and GravNet layers, and apply them to a typical particle reconstruction task. The performance of the new architectures is evaluated on a data set of simulated particle interactions on a toy model of a highly granular calorimeter, loosely inspired by the endcap calorimeter to be installed in the CMS detector for the High-Luminosity LHC phase. We study the clustering of energy depositions, which is the basis for calorimetric particle reconstruction, and provide a quantitative comparison to alternative approaches. The proposed algorithms provide an interesting alternative to existing methods, offering equally performing or less resource-demanding solutions with less underlying assumptions on the detector geometry and, consequently, the possibility to generalize to other detectors. | In recent years, deep learning models, and in particular CNNs, have become very popular in different areas of HEP. CNNs were successfully applied to calorimeter-oriented tasks, including particle identification @cite_35 @cite_30 @cite_36 @cite_11 @cite_22 , energy regression @cite_35 @cite_36 @cite_11 @cite_22 , hadronic jet identification @cite_14 @cite_37 @cite_10 @cite_15 , fast simulation @cite_17 @cite_23 @cite_35 @cite_4 @cite_13 and pileup subtraction in jets @cite_34 . Many of these works assume a simplified detector description: the detector is represented as a somewhat regular array of sensors expressed as 2D or 3D images, and the problem of overlapping regions at the transition between detector components (e.g. barrel and endcap) is ignored.
Sometimes the fixed-grid pixel shape is intended to reflect the typical angular resolution of the detector, which is implicitly assumed to be a constant, while in reality it depends on the energy of the incoming particle. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_13",
"@cite_36",
"@cite_17",
"@cite_23",
"@cite_15",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"2810437889",
"2895846750",
"2563484019",
"2047792789",
"2891067076",
"",
"2798474886",
"2808572053",
"2581875816",
"2614083378",
"2325907229",
"2741476055",
"2257617748",
""
],
"abstract": [
"Machine learning has played an important role in the analysis of high-energy physics data for decades. The emergence of deep learning in 2012 allowed for machine learning tools which could adeptly ...",
"A key question for machine learning approaches in particle physics is how to best represent and learn from collider events. As an event is intrinsically a variable-length unordered set of particles, we build upon recent machine learning efforts to learn directly from sets of features or \"point clouds\". Adapting and specializing the \"Deep Sets\" framework to particle physics, we introduce Energy Flow Networks, which respect infrared and collinear safety by construction. We also develop Particle Flow Networks, which allow for general energy dependence and the inclusion of additional particle-level information such as charge and flavor. These networks feature a per-particle internal (latent) representation, and summing over all particles yields an overall event-level latent representation. We show how this latent space decomposition unifies existing event representations based on detector images and radiation moments. To demonstrate the power and simplicity of this set-based approach, we apply these networks to the collider task of discriminating quark jets from gluon jets, finding similar or improved performance compared to existing methods. We also show how the learned event representation can be directly visualized, providing insight into the inner workings of the model. These architectures lend themselves to efficiently processing and analyzing events for a wide variety of tasks at the Large Hadron Collider. Implementations and examples of our architectures are available online in our EnergyFlow package.",
"Artificial intelligence offers the potential to automate challenging data-processing tasks in collider physics. To establish its prospects, we explore to what extent deep learning with convolutional neural networks can discriminate quark and gluon jets better than observables designed by physicists. Our approach builds upon the paradigm that a jet can be treated as an image, with intensity given by the local calorimeter deposits. We supplement this construction by adding color to the images, with red, green and blue intensities given by the transverse momentum in charged particles, transverse momentum in neutral particles, and pixel-level charged particle counts. Overall, the deep networks match or outperform traditional jet variables. We also find that, while various simulations produce different quark and gluon jets, the neural networks are surprisingly insensitive to these differences, similar to traditional observables. This suggests that the networks can extract robust physical information from imperfect simulations.",
"We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon- initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.",
"High Energy Physics (HEP) simulations are traditionally based on the Monte Carlo approach and generally rely on time-consuming calculations. The present work investigates the use of Generative Adversarial Networks (GANs) as a fast alternative. Our approach treats the energy deposited by a particle inside a calorimeter detector as a three-dimensional image. True three-dimensional convolutions can be employed to capture the spatio-temporal correlation of shower energy depositions. Three-dimensional images are generated, conditioned on the energy of the incoming particle and validated against Monte Carlo simulation. The results show an agreement with full Monte Carlo simulations well within 10%, thus proving that GANs can be used as a fast alternative for simulation of HEP detector response.",
"",
"Deep generative models parametrised by neural networks have recently started to provide accurate results in modeling natural images. In particular, generative adversarial networks provide an unsupervised solution to this problem. In this work, we apply this kind of technique to the simulation of particle detector response to hadronic jets. We show that deep neural networks can achieve high fidelity in this task, while attaining a speed increase of several orders of magnitude with respect to traditional algorithms.",
"Correctly identifying the nature and properties of outgoing particles from high energy collisions at the Large Hadron Collider is a crucial task for all aspects of data analysis. Classical calorimeter-based classification techniques rely on shower shapes -- observables that summarize the structure of the particle cascade that forms as the original particle propagates through the layers of material. This work compares shower shape-based methods with computer vision techniques that take advantage of lower level detector information. In a simplified calorimeter geometry, our DenseNet-based architecture matches or outperforms other methods on @math - @math and @math - @math classification tasks. In addition, we demonstrate that key kinematic properties can be inferred directly from the shower representation in image format.",
"We provide a bridge between generative modeling in the Machine Learning community and simulated physical processes in high energy particle physics by applying a novel Generative Adversarial Network (GAN) architecture to the production of jet images—2D representations of energy depositions from particles interacting with a calorimeter. We propose a simple architecture, the Location-Aware Generative Adversarial Network, that learns to produce realistic radiation patterns from simulated high energy particle collisions. The pixel intensities of GAN-generated images faithfully span over many orders of magnitude and exhibit the desired low-dimensional physical properties (i.e., jet mass, n-subjettiness, etc.). We shed light on limitations, and provide a novel empirical validation of image quality and validity of GAN-produced simulations of the natural world. This work provides a base for further explorations of GANs for use in faster simulation in high energy particle physics.",
"The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce , a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter, and achieve speedup factors comparable to or better than existing full simulation techniques on CPU ( @math - @math ) and even faster on GPU (up to @math ). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.",
"At the extreme energies of the Large Hadron Collider, massive particles can be produced at such high velocities that their hadronic decays are collimated and the resulting jets overlap. Deducing whether the substructure of an observed jet is due to a low-mass single particle or due to multiple decay objects of a massive particle is an important problem in the analysis of collider data. Traditional approaches have relied on expert features designed to detect energy deposition patterns in the calorimeter, but the complexity of the data make this task an excellent candidate for the application of machine learning tools. The data collected by the detector can be treated as a two-dimensional image, lending itself to the natural application of image classification techniques. In this work, we apply deep neural networks with a mixture of locally connected and fully connected nodes. Our experiments demonstrate that without the aid of expert features, such networks match or modestly outperform the current state-of-the-art approach for discriminating between jets from single hadronic particles and overlapping jets from pairs of collimated hadronic particles, and that such performance gains persist in the presence of pileup interactions.",
"Pileup involves the contamination of the energy distribution arising from the primary collision of interest (leading vertex) by radiation from soft collisions (pileup). We develop a new technique for removing this contamination using machine learning and convolutional neural networks. The network takes as input the energy distribution of charged leading vertex particles, charged pileup particles, and all neutral particles and outputs the energy distribution of particles coming from leading vertex alone. The PUMML algorithm performs remarkably well at eliminating pileup distortion on a wide range of simple and complex jet observables. We test the robustness of the algorithm in a number of ways and discuss how the network can be trained directly on data.",
"Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can outperform standard physically-motivated feature-driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. Finally, this interplay between physically-motivated feature-driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets.",
""
]
} |
1902.07987 | 2916639378 | We explore the use of graph networks to deal with irregular-geometry detectors in the context of particle reconstruction. Thanks to their representation-learning capabilities, graph networks can exploit the full detector granularity, while natively managing the event sparsity and arbitrarily complex detector geometries. We introduce two distance-weighted graph network architectures, dubbed GarNet and GravNet layers, and apply them to a typical particle reconstruction task. The performance of the new architectures is evaluated on a data set of simulated particle interactions on a toy model of a highly granular calorimeter, loosely inspired by the endcap calorimeter to be installed in the CMS detector for the High-Luminosity LHC phase. We study the clustering of energy depositions, which is the basis for calorimetric particle reconstruction, and provide a quantitative comparison to alternative approaches. The proposed algorithms provide an interesting alternative to existing methods, offering equally performing or less resource-demanding solutions with less underlying assumptions on the detector geometry and, consequently, the possibility to generalize to other detectors. | Some of these architectures have already been considered for collider physics, in the context of jet tagging @cite_18 , event topology classification @cite_25 , and for pileup subtraction @cite_28 . | {
"cite_N": [
"@cite_28",
"@cite_18",
"@cite_25"
],
"mid": [
"2959811291",
"2887330313",
"2884896715"
],
"abstract": [
"At the Large Hadron Collider, the high transverse-momentum events studied by experimental collaborations occur in coincidence with parasitic low transverse-momentum collisions, usually referred to as pileup. Pileup mitigation is a key ingredient of the online and offline event reconstruction as pileup affects the reconstruction accuracy of many physics observables. We present a classifier based on Graph Neural Networks, trained to retain particles coming from high-transverse-momentum collisions, while rejecting those coming from pileup collisions. This model is designed as a refinement of the PUPPI algorithm, employed in many LHC data analyses since 2015. Thanks to an extended basis of input information and the learning capabilities of the considered network architecture, we show an improvement in pileup-rejection performances with respect to state-of-the-art solutions.",
"",
"Top-squarks (stops) play a crucial role for the naturalness of supersymmetry (SUSY). However, searching for the stops at the LHC is a tough task especially for some corners of parameter space. To dig the stops out of the huge LHC data, various expert-constructed kinematic variables or cutting-edge analysis techniques have been invented. In this paper, we propose to represent events as graphs and use the message passing neutral network to search for the stops through the process @math at the LHC. We find that the signal and background events can be efficiently discriminated by the patterns of event graphs. Such an approach can thus greatly improve the current LHC sensitivity for the stops."
]
} |
1902.07802 | 2916070897 | In this paper, we propose and study opportunistic contextual bandits - a special case of contextual bandits where the exploration cost varies under different environmental conditions, such as network load or return variation in recommendations. When the exploration cost is low, so is the actual regret of pulling a sub-optimal arm (e.g., trying a suboptimal recommendation). Therefore, intuitively, we could explore more when the exploration cost is relatively low and exploit more when the exploration cost is relatively high. Inspired by this intuition, for opportunistic contextual bandits with Linear payoffs, we propose an Adaptive Upper-Confidence-Bound algorithm (AdaLinUCB) to adaptively balance the exploration-exploitation trade-off for opportunistic learning. We prove that AdaLinUCB achieves O((log T)^2) problem-dependent regret upper bound, which has a smaller coefficient than that of the traditional LinUCB algorithm. Moreover, based on both synthetic and real-world dataset, we show that AdaLinUCB significantly outperforms other contextual bandit algorithms, under large exploration cost fluctuations. | Contextual bandit algorithms have been applied to many real applications, such as display advertising @cite_19 and content recommendation @cite_2 @cite_18 . In contrast to the classic @math -arm bandit problem @cite_20 @cite_9 @cite_8 , side information called context is provided in the contextual bandit problem before arm selection @cite_4 @cite_11 @cite_21 @cite_16 . Contextual bandits with linear payoffs were first introduced in @cite_4 . In @cite_2 , the LinUCB algorithm is introduced based on the "optimism in the face of uncertainty" principle for linear bandits. The LinUCB algorithm and its variants are reported to be effective in real application scenarios @cite_2 @cite_6 @cite_3 @cite_1 . Compared to the classic @math -armed bandits, contextual bandits achieve superior performance in various application scenarios @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"116854235",
"2108114251",
"2000080679",
"2108738385",
"2119738618",
"2604822632",
"2340290367",
"2532022121",
"2138909795",
"2112420033",
"2119850747",
"2160163723",
"2168405694",
"1487320471"
],
"abstract": [
"Most existing approaches in Mobile Context-Aware Recommender Systems focus on recommending relevant items to users taking into account contextual information, such as time, location, or social aspects. However, none of them has considered the problem of user's content evolution. We introduce in this paper an algorithm that tackles this dynamicity. It is based on dynamic exploration/exploitation and can adaptively balance the two aspects by deciding which user's situation is most relevant for exploration or exploitation. Within a deliberately designed offline simulation framework we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms surveyed algorithms.",
"We show how a standard tool from statistics --- namely confidence bounds --- can be used to elegantly deal with situations which exhibit an exploitation-exploration trade-off. Our technique for designing and analyzing algorithms for such situations is general and can be applied when an algorithm has to make exploitation-versus-exploration decisions based on uncertain information provided by a random process. We apply our technique to two models with such an exploitation-exploration trade-off. For the adversarial bandit problem with shifting, our new algorithm suffers only O((ST)^{1/2}) regret with high probability over T trials with S shifts. Such a regret bound was previously known only in expectation. The second model we consider is associative reinforcement learning with linear value functions. For this model our technique improves the regret from O(T^{3/4}) to O(T^{1/2}).",
"We consider a non-Bayesian infinite horizon version of the multi-armed bandit problem with the objective of designing simple policies whose regret increases slowly with time. In their seminal work on this problem, Lai and Robbins had obtained an O(log n) lower bound on the regret with a constant that depends on the Kullback-Leibler number. They also constructed policies for some specific families of probability distributions (including exponential families) that achieved the lower bound. In this paper we construct index policies that depend on the rewards from each arm only through their sample mean. These policies are computationally much simpler and are also applicable much more generally. They achieve an O(log n) regret with a constant that is also based on the Kullback-Leibler number. This constant turns out to be optimal for one-parameter exponential families; however, in general it is derived from the optimal one via a 'contraction' principle. Our results rely entirely on a few key lemmas from the theory of large deviations.",
"Thompson sampling is one of the oldest heuristics to address the exploration/exploitation trade-off, but it is surprisingly unpopular in the literature. We present here some empirical results using Thompson sampling on simulated and real data, and show that it is highly competitive. And since this heuristic is very easy to implement, we argue that it should be part of the standard baselines to compare against.",
"We improve the theoretical analysis and empirical performance of algorithms for the stochastic multi-armed bandit problem and the linear stochastic multi-armed bandit problem. In particular, we show that a simple modification of Auer's UCB algorithm (Auer, 2002) achieves with high probability constant regret. More importantly, we modify and, consequently, improve the analysis of the algorithm for the linear stochastic bandit problem studied by Auer (2002), (2008), Rusmevichientong and Tsitsiklis (2010), (2010). Our modification improves the regret bound by a logarithmic factor, though experiments show a vast improvement. In both cases, the improvement stems from the construction of smaller confidence sets. For their construction we use a novel tail inequality for vector-valued martingales.",
"",
"Contextual bandit algorithms provide principled online learning solutions to find optimal trade-offs between exploration and exploitation with companion side-information. They have been extensively used in many important practical scenarios, such as display advertising and content recommendation. A common practice estimates the unknown bandit parameters pertaining to each user independently. This unfortunately ignores dependency among users and thus leads to suboptimal solutions, especially for the applications that have strong social components. In this paper, we develop a collaborative contextual bandit algorithm, in which the adjacency graph among users is leveraged to share context and payoffs among neighboring users while online updating. We rigorously prove an improved upper regret bound of the proposed collaborative bandit algorithm compared to conventional independent bandit algorithms. Extensive experiments on both synthetic and three large-scale real-world datasets verified the improvement of our proposed algorithm against several state-of-the-art contextual bandit algorithms.",
"Contextual bandit algorithms provide principled online learning solutions to find optimal trade-offs between exploration and exploitation with companion side-information. Most contextual bandit algorithms simply assume the learner would have access to the entire set of features, which govern the generation of payoffs from a user to an item. However, in practice it is challenging to exhaust all relevant features ahead of time, and oftentimes due to privacy or sampling constraints many factors are unobservable to the algorithm. Failing to model such hidden factors leads a system to make constantly suboptimal predictions. In this paper, we propose to learn the hidden features for contextual bandit algorithms. Hidden features are explicitly introduced in our reward generation assumption, in addition to the observable contextual features. A scalable bandit algorithm is achieved via coordinate descent, in which closed form solutions exist at each iteration for both hidden features and bandit parameters. Most importantly, we rigorously prove that the developed contextual bandit algorithm achieves a sublinear upper regret bound with high probability, and a linear regret is inevitable if one fails to model such hidden features. Extensive experimentation on both simulations and large-scale real-world datasets verified the advantages of the proposed algorithm compared with several state-of-the-art contextual bandit algorithms and existing ad-hoc combinations between bandit algorithms and matrix factorization methods.",
"Contextual bandit algorithms have become popular for online recommendation systems such as Digg, Yahoo! Buzz, and news recommendation in general. Offline evaluation of the effectiveness of new algorithms in these applications is critical for protecting online user experiences but very challenging due to their \"partial-label\" nature. Common practice is to create a simulator which simulates the online environment for the problem at hand and then run an algorithm against this simulator. However, creating simulator itself is often difficult and modeling bias is usually unavoidably introduced. In this paper, we introduce a replay methodology for contextual bandit algorithm evaluation. Different from simulator-based approaches, our method is completely data-driven and very easy to adapt to different applications. More importantly, our method can provide provably unbiased evaluations. Our empirical results on a large-scale news article recommendation dataset collected from Yahoo! Front Page conform well with our theoretical results. Furthermore, comparisons between our offline replay and online bucket evaluation of several contextual bandit algorithms show accuracy and effectiveness of our offline evaluation method.",
"Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce.",
"We present Epoch-Greedy, an algorithm for multi-armed bandits with observable side information. Epoch-Greedy has the following properties: No knowledge of a time horizon @math is necessary. The regret incurred by Epoch-Greedy is controlled by a sample complexity bound for a hypothesis class. The regret scales as @math or better (sometimes, much better). Here @math is the complexity term in a sample complexity bound for standard supervised learning.",
"We consider structured multi-armed bandit problems based on the Generalized Linear Model (GLM) framework of statistics. For these bandits, we propose a new algorithm, called GLM-UCB. We derive finite time, high probability bounds on the regret of the algorithm, extending previous analyses developed for the linear bandits to the non-linear case. The analysis highlights a key difficulty in generalizing linear bandit algorithms to the non-linear case, which is solved in GLM-UCB by focusing on the reward space rather than on the parameter space. Moreover, as the actual effectiveness of current parameterized bandit algorithms is often poor in practice, we provide a tuning method based on asymptotic arguments, which leads to significantly better practical performance. We present two numerical experiments on real-world data that illustrate the potential of the GLM-UCB approach.",
"Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.",
"In this paper we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O(√(Td ln³(KT ln(T)/δ))) regret bound that holds with probability 1 − δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω(√(Td)) for this setting, matching the upper bound up to logarithmic factors."
]
} |
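The LinUCB-style selection rule described in the abstracts above — a ridge-regression reward estimate plus an upper-confidence exploration bonus — can be sketched as follows. This is a minimal illustration, not the papers' implementation; the dimension, arm count, and α value are assumed for the toy run.

```python
import numpy as np

def linucb_choose(arm_features, A, b, alpha=1.0):
    """Pick the arm with the highest upper confidence bound.

    A : per-arm d x d ridge-regression Gram matrices (list of arrays)
    b : per-arm d-dimensional response vectors (list of arrays)
    arm_features : list of d-dimensional context vectors, one per arm
    """
    best_arm, best_ucb = None, -np.inf
    for a, x in enumerate(arm_features):
        A_inv = np.linalg.inv(A[a])
        theta = A_inv @ b[a]                               # ridge estimate of the arm's parameters
        ucb = theta @ x + alpha * np.sqrt(x @ A_inv @ x)   # mean + exploration bonus
        if ucb > best_ucb:
            best_arm, best_ucb = a, ucb
    return best_arm

def linucb_update(A, b, arm, x, reward):
    """Rank-one update after observing the chosen arm's reward."""
    A[arm] += np.outer(x, x)
    b[arm] += reward * x

# toy run: 2 arms with 2-dimensional contexts
d, K = 2, 2
A = [np.eye(d) for _ in range(K)]
b = [np.zeros(d) for _ in range(K)]
x = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
arm = linucb_choose(x, A, b, alpha=0.5)
linucb_update(A, b, arm, x[arm], reward=1.0)
```

After the update, the rewarded arm's estimated mean rises while its exploration bonus shrinks — the trade-off the AdaLinUCB record above adapts to the current exploration cost.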
1902.07802 | 2916070897 | In this paper, we propose and study opportunistic contextual bandits - a special case of contextual bandits where the exploration cost varies under different environmental conditions, such as network load or return variation in recommendations. When the exploration cost is low, so is the actual regret of pulling a sub-optimal arm (e.g., trying a suboptimal recommendation). Therefore, intuitively, we could explore more when the exploration cost is relatively low and exploit more when the exploration cost is relatively high. Inspired by this intuition, for opportunistic contextual bandits with Linear payoffs, we propose an Adaptive Upper-Confidence-Bound algorithm (AdaLinUCB) to adaptively balance the exploration-exploitation trade-off for opportunistic learning. We prove that AdaLinUCB achieves O((log T)^2) problem-dependent regret upper bound, which has a smaller coefficient than that of the traditional LinUCB algorithm. Moreover, based on both synthetic and real-world dataset, we show that AdaLinUCB significantly outperforms other contextual bandit algorithms, under large exploration cost fluctuations. | Although LinUCB is effective and widely applied, its analysis is challenging. The initial analysis effort @cite_11 , instead of analyzing LinUCB itself, presents an @math regret bound for a modified version of LinUCB. The modification is needed to satisfy the independence requirement for applying the Azuma–Hoeffding inequality. In another line of analysis, the authors in @cite_21 design a different algorithm for contextual bandits with linear payoffs and provide its regret analysis without the independence requirement. Although the algorithm proposed in @cite_21 is different from LinUCB and suffers from a higher computational complexity, the analysis techniques are helpful. | {
"cite_N": [
"@cite_21",
"@cite_11"
],
"mid": [
"2119738618",
"1487320471"
],
"abstract": [
"We improve the theoretical analysis and empirical performance of algorithms for the stochastic multi-armed bandit problem and the linear stochastic multi-armed bandit problem. In particular, we show that a simple modification of Auer's UCB algorithm (Auer, 2002) achieves with high probability constant regret. More importantly, we modify and, consequently, improve the analysis of the algorithm for the linear stochastic bandit problem studied by Auer (2002), Dani et al. (2008), Rusmevichientong and Tsitsiklis (2010), Li et al. (2010). Our modification improves the regret bound by a logarithmic factor, though experiments show a vast improvement. In both cases, the improvement stems from the construction of smaller confidence sets. For their construction we use a novel tail inequality for vector-valued martingales.",
"In this paper we study the contextual bandit problem (also known as the multi-armed bandit problem with expert advice) for linear payoff functions. For T rounds, K actions, and d dimensional feature vectors, we prove an O(√(Td ln³(KT ln(T)/δ))) regret bound that holds with probability 1 − δ for the simplest known (both conceptually and computationally) efficient upper confidence bound algorithm for this problem. We also prove a lower bound of Ω(√(Td)) for this setting, matching the upper bound up to logarithmic factors."
]
} |
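Both analyses in the record above revolve around the confidence width √(xᵀA⁻¹x) of the ridge-regression estimate — the quantity the smaller confidence sets tighten. A minimal numeric check (the dimension and repetition count are illustrative) shows the width shrinking as one direction is observed repeatedly:

```python
import numpy as np

def confidence_width(A, x):
    """Exploration bonus used by linear-bandit UCB rules: sqrt(x^T A^{-1} x)."""
    return float(np.sqrt(x @ np.linalg.solve(A, x)))

d = 3
A = np.eye(d)                      # ridge regularizer: the Gram matrix starts as the identity
x = np.array([1.0, 0.0, 0.0])

widths = []
for _ in range(4):
    widths.append(confidence_width(A, x))
    A += np.outer(x, x)            # one more observation in direction x (rank-one update)
# widths: 1.0, 1/sqrt(2), 1/sqrt(3), 1/2 — the interval tightens along x
```

Directions never observed keep width 1 here, which is why these algorithms keep exploring under-sampled contexts.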
1902.07802 | 2916070897 | In this paper, we propose and study opportunistic contextual bandits - a special case of contextual bandits where the exploration cost varies under different environmental conditions, such as network load or return variation in recommendations. When the exploration cost is low, so is the actual regret of pulling a sub-optimal arm (e.g., trying a suboptimal recommendation). Therefore, intuitively, we could explore more when the exploration cost is relatively low and exploit more when the exploration cost is relatively high. Inspired by this intuition, for opportunistic contextual bandits with Linear payoffs, we propose an Adaptive Upper-Confidence-Bound algorithm (AdaLinUCB) to adaptively balance the exploration-exploitation trade-off for opportunistic learning. We prove that AdaLinUCB achieves O((log T)^2) problem-dependent regret upper bound, which has a smaller coefficient than that of the traditional LinUCB algorithm. Moreover, based on both synthetic and real-world dataset, we show that AdaLinUCB significantly outperforms other contextual bandit algorithms, under large exploration cost fluctuations. | The opportunistic linear contextual bandits can be regarded as a special case of non-linear contextual bandits. However, general contextual bandit algorithms such as KernelUCB @cite_17 do not take advantage of the opportunistic nature of the problem, and thus can lead to less competitive performance. Moreover, KernelUCB suffers from sensitivity to hyper-parameter tuning and from extremely high computational complexity for even moderately large datasets, which limits its application in real problems. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2950238385"
],
"abstract": [
"We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits."
]
} |
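The KernelUCB score described in the abstract above — a kernel-ridge reward estimate plus an RKHS exploration width — can be sketched roughly as below. The RBF kernel and the lam/eta/gamma values are illustrative assumptions, not the paper's choices; this also makes visible the cost the related_work mentions, since each score inverts a Gram matrix that grows with history.

```python
import numpy as np

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel between two context vectors."""
    return float(np.exp(-gamma * np.sum((u - v) ** 2)))

def kernel_ucb_score(x, X, y, lam=1.0, eta=1.0, gamma=1.0):
    """KernelUCB-style score: kernel-ridge mean plus an exploration width.

    X, y are the past contexts and rewards; lam (regularizer), eta
    (exploration scale), and gamma (RBF bandwidth) are illustrative
    hyper-parameters, not values from the cited paper.
    """
    if len(X) == 0:
        return eta                      # nothing observed yet: pure exploration width
    K = np.array([[rbf(a, b, gamma) for b in X] for a in X])
    k_x = np.array([rbf(x, a, gamma) for a in X])
    G = K + lam * np.eye(len(X))        # regularized Gram matrix (grows with history)
    mean = k_x @ np.linalg.solve(G, y)                       # kernel-ridge estimate
    var = rbf(x, x, gamma) - k_x @ np.linalg.solve(G, k_x)   # posterior-style width
    return float(mean + eta * np.sqrt(max(var, 0.0)))

# toy history: reward grows with the single context feature
X = [np.array([0.0]), np.array([1.0])]
y = np.array([0.0, 1.0])
```

With this history, contexts near the rewarded point score higher, and an empty history falls back to the exploration term alone.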
1902.07494 | 2950570861 | In this paper, we develop a neural attentive interpretable recommendation system, named NAIRS. A self-attention network, as a key component of the system, is designed to assign attention weights to interacted items of a user. This attention mechanism can distinguish the importance of the various interacted items in contributing to a user profile. Based on the user profiles obtained by the self-attention network, NAIRS offers personalized high-quality recommendation. Moreover, it develops visual cues to interpret recommendations. This demo application with the implementation of NAIRS enables users to interact with a recommendation system, and it persistently collects training data to improve the system. The demonstration and experimental results show the effectiveness of NAIRS. | Recommender system is an active research field. The authors of @cite_14 @cite_4 described most of the existing techniques for recommender systems. In this section, we briefly review the following major approaches that are related to our work. | {
"cite_N": [
"@cite_14",
"@cite_4"
],
"mid": [
"2025605741",
"2030808931"
],
"abstract": [
"Recommender systems have developed in parallel with the web. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems are incorporating social information. In the future, they will use implicit, local and personal information from the Internet of things. This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it also explains their evolution, provides an original classification for these systems, identifies areas of future implementation and develops certain areas selected for past, present or future importance.",
"A recommender system aims to provide users with personalized online product or service recommendations to handle the increasing online information overload problem and improve customer relationship management. Various recommender system techniques have been proposed since the mid-1990s, and many sorts of recommender system software have been developed recently for a variety of applications. Researchers and managers recognize that recommender systems offer great opportunities and challenges for business, government, education, and other domains, with more recent successful developments of recommender systems for real-world applications becoming apparent. It is thus vital that a high quality, instructive review of current trends should be conducted, not only of the theoretical research results but more importantly of the practical developments in recommender systems. This paper therefore reviews up-to-date application developments of recommender systems, clusters their applications into eight main categories: e-government, e-business, e-commerce e-shopping, e-library, e-learning, e-tourism, e-resource services and e-group activities, and summarizes the related recommendation techniques used in each category. It systematically examines the reported recommender systems through four dimensions: recommendation methods (such as CF), recommender systems software (such as BizSeeker), real-world application domains (such as e-business) and application platforms (such as mobile-based platforms). Some significant new topics are identified and listed as new directions. By providing a state-of-the-art knowledge, this survey will directly support researchers and practical professionals in their understanding of developments in recommender system applications. 
Research papers on various recommender system applications are summarized. The recommender systems are examined systematically through four dimensions. The recommender system applications are classified into eight categories. Related recommendation techniques in each category are identified. Several new recommendation techniques and application areas are uncovered."
]
} |
1902.07494 | 2950570861 | In this paper, we develop a neural attentive interpretable recommendation system, named NAIRS. A self-attention network, as a key component of the system, is designed to assign attention weights to interacted items of a user. This attention mechanism can distinguish the importance of the various interacted items in contributing to a user profile. Based on the user profiles obtained by the self-attention network, NAIRS offers personalized high-quality recommendation. Moreover, it develops visual cues to interpret recommendations. This demo application with the implementation of NAIRS enables users to interact with a recommendation system, and it persistently collects training data to improve the system. The demonstration and experimental results show the effectiveness of NAIRS. | To address the cold start problem in recommendation, @cite_15 presented a visual and textual recurrent neural network (VT-RNN), which simultaneously learned the sequential latent vectors of users' interest and captured the content-based representations that contributed to addressing the cold-start issues. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2951539240"
],
"abstract": [
"Sequential recommendation is a fundamental task for network applications, and it usually suffers from the item cold start problem due to the insufficiency of user feedbacks. There are currently three kinds of popular approaches which are respectively based on matrix factorization (MF) of collaborative filtering, Markov chain (MC), and recurrent neural network (RNN). Although widely used, they have some limitations. MF based methods could not capture dynamic user's interest. The strong Markov assumption greatly limits the performance of MC based methods. RNN based methods are still in the early stage of incorporating additional information. Based on these basic models, many methods with additional information only validate incorporating one modality in a separate way. In this work, to make the sequential recommendation and deal with the item cold start problem, we propose a Multi-View Recurrent Neural Network (MV-RNN) model. Given the latent feature, MV-RNN can alleviate the item cold start problem by incorporating visual and textual information. First, at the input of MV-RNN, three different combinations of multi-view features are studied, like concatenation, fusion by addition and fusion by reconstructing the original multi-modal data. MV-RNN applies the recurrent structure to dynamically capture the user's interest. Second, we design a separate structure and a united structure on the hidden state of MV-RNN to explore a more effective way to handle multi-view features. Experiments on two real-world datasets show that MV-RNN can effectively generate the personalized ranking list, tackle the missing modalities problem and significantly alleviate the item cold start problem."
]
} |
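The attention mechanism the NAIRS record describes — weighting a user's interacted items to form a profile — can be sketched roughly as below. The dot-product scoring, embedding size, and toy items are assumptions for illustration; the paper's actual network may differ.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attentive_user_profile(item_embeddings, query):
    """Weight interacted items by attention and pool them into a user profile.

    item_embeddings : (n_items, d) embeddings of the user's interacted items
    query           : d-dim vector scoring each item's importance
    Returns the pooled profile and the attention weights; the weights are
    what an interpretable recommender can visualize as explanation cues.
    """
    scores = item_embeddings @ query      # dot-product relevance per item
    weights = softmax(scores)             # normalized attention weights
    profile = weights @ item_embeddings   # attention-weighted pooling
    return profile, weights

items = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
profile, weights = attentive_user_profile(items, query=np.array([1.0, 0.0]))
```

Items aligned with the query receive larger weights, so the resulting profile leans toward them — and the weights themselves can be displayed as the visual cues the demo mentions.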
1902.07599 | 2915429003 | Document listing on string collections is the task of finding all documents where a pattern appears. It is regarded as the most fundamental document retrieval problem, and is useful in various applications. Many of the fastest-growing string collections are composed of very similar documents, such as versioned code and document collections, genome repositories, etc. Plain pattern-matching indexes designed for repetitive text collections achieve orders-of-magnitude reductions in space. Instead, there are not many analogous indexes for document retrieval. In this paper we present a simple document listing index for repetitive string collections of total length @math that lists the @math distinct documents where a pattern of length @math appears in time @math . We exploit the repetitiveness of the document array (i.e., the suffix array coarsened to document identifiers) to grammar-compress it while precomputing the answers to nonterminals, and store them in grammar-compressed form as well. Our experimental results show that our index sharply outperforms existing alternatives in the space time tradeoff map. | Claude and Munro @cite_27 propose the first index for document listing based on grammar compression, which escapes from the problem above. They extend a grammar-based pattern-matching index @cite_23 by storing the list of the documents where each nonterminal appears. Those lists are grammar-compressed as well. The index searches for the minimal nonterminals that contain @math and merges their lists. While it does not offer relevant space or query time guarantees, the index performs well in practice. Navarro @cite_2 extends this index in order to obtain space guarantees and @math time, but the scheme is difficult to implement. | {
"cite_N": [
"@cite_27",
"@cite_23",
"@cite_2"
],
"mid": [
"179872536",
"",
"2737269238"
],
"abstract": [
"Representing versioned documents, such as Wikipedia history, web archives, genome databases, backups, is challenging when we want to support searching for an exact substring and retrieve the documents that contain the substring. This problem is called document listing. We present an index for the document listing problem on versioned documents. Our index is the first one based on grammar-compression. This allows for good results on repetitive collections, whereas standard techniques cannot achieve competitive space for solving the same problem. Our index can also be adapted to work in a more standard way, allowing users to search for word-based phrase queries and conjunctive queries at the same time. Finally, we discuss extensions that may be possible in the future, for example, supporting ranking capabilities within the index itself.",
"",
"We consider document listing on string collections, that is, finding in which strings a given pattern appears. In particular, we focus on repetitive collections: a collection of size @math over alphabet @math is composed of @math copies of a string of size @math , and @math edits are applied on ranges of copies. We introduce the first document listing index with size @math , precisely @math bits, and with useful worst-case time guarantees: Given a pattern of length @math , the index reports the @math strings where it appears in time @math , for any constant @math (and tells in time @math if @math ). Our technique is to augment a range data structure that is commonly used on grammar-based indexes, so that instead of retrieving all the pattern occurrences, it computes useful summaries on them. We show that the idea has independent interest: we introduce the first grammar-based index that, on a text @math with a grammar of size @math , uses @math bits and counts the number of occurrences of a pattern @math in time @math , for any constant @math . We also give the first index using @math bits, where @math is parsed by Lempel-Ziv into @math phrases, counting occurrences in time @math ."
]
} |
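The query these abstracts all answer — report the distinct documents where a pattern occurs — can be stated with a tiny uncompressed baseline; the grammar-compressed indexes in the cited work replace this linear scan with precomputed, compressed document lists. The toy collection below is an assumption for illustration.

```python
def document_listing(docs, pattern):
    """Return the sorted ids of the distinct documents containing `pattern`.

    Naive O(total collection length) scan per query; the grammar-compressed
    indexes in the cited work answer the same query from a compressed
    representation of the document array instead.
    """
    return sorted(i for i, d in enumerate(docs) if pattern in d)

# toy collection of short documents
docs = ["abracadabra", "banana", "cabbage", "bandana"]
```

For example, listing the documents containing "an" returns ids 1 and 3, each reported once no matter how many occurrences it has inside a document.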
1902.07456 | 2925799800 | Aggregate location statistics are used in a number of mobility analytics to express how many people are in a certain location at a given time (but not who). However, prior work has shown that an adversary with some prior knowledge of a victim's mobility patterns can mount membership inference attacks to determine whether or not that user contributed to the aggregates. In this paper, we set to understand why such inferences are successful and what can be done to mitigate them. We conduct an in-depth feature analysis, finding that the volume of data contributed and the regularity and particularity of mobility patterns play a crucial role in the attack. We then use these insights to adapt defenses proposed in the location privacy literature to the aggregate setting, and evaluate their privacy-utility trade-offs for common mobility analytics. We show that, while there is no silver bullet that enables arbitrary analysis, there are defenses that provide reasonable utility for particular tasks while reducing the extent of the inference. | Location Privacy. Golle and Partridge @cite_28 demonstrate the feasibility of re-identifying users by leveraging the uniqueness of their home/work places. @cite_8 show that k-anonymity in the context of location traces is mostly ineffective, while Zang and Bolot @cite_3 show that anonymization of location data is, in general, extremely difficult. De @cite_19 measure the uniqueness of human mobility in a Call Detail Records (CDR) dataset, finding that four spatio-temporal points are enough to uniquely identify 95% of the users. They also show that coarsening the data, both spatially and temporally, does not add significant anonymity. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_3",
"@cite_8"
],
"mid": [
"1536564267",
"2115240023",
"2045686369",
"2126729912"
],
"abstract": [
"Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census track and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.",
"We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.",
"We examine a very large-scale data set of more than 30 billion call records made by 25 million cell phone users across all 50 states of the US and attempt to determine to what extent anonymized location data can reveal private user information. Our approach is to infer, from the call records, the \"top N\" locations for each user and correlate this information with publicly-available side information such as census data. For example, the measured \"top 2\" locations likely correspond to home and work locations, the \"top 3\" to home, work, and shopping school commute path locations. We consider the cases where those \"top N\" locations are measured with different levels of granularity, ranging from a cell sector to whole cell, zip code, city, county and state. We then compute the anonymity set, namely the number of users uniquely identified by a given set of \"top N\" locations at different granularity levels. We find that the \"top 1\" location does not typically yield small anonymity sets. However, the top 2 and top 3 locations do, certainly at the sector or cell-level granularity. We consider a variety of different factors that might impact the size of the anonymity set, for example the distance between the \"top N\" locations or the geographic environment (rural vs urban). We also examine to what extent specific side information, in particular the size of the user's social network, decrease the anonymity set and therefore increase risks to privacy. Our study shows that sharing anonymized location data will likely lead to privacy risks and that, at a minimum, the data needs to be coarse in either the time domain (meaning the data is collected over short periods of time, in which case inferring the top N locations reliably is difficult) or the space domain (meaning the data granularity is strictly higher than the cell level). In both cases, the utility of the anonymized location data will be decreased, potentially by a significant amount.",
"There is a rich collection of literature that aims at protecting the privacy of users querying location-based services. One of the most popular location privacy techniques consists in cloaking users' locations such that k users appear as potential senders of a query, thus achieving k-anonymity. This paper analyzes the effectiveness of k-anonymity approaches for protecting location privacy in the presence of various types of adversaries. The unraveling of the scheme unfolds the inconsistency between its components, mainly the cloaking mechanism and the k-anonymity metric. We show that constructing cloaking regions based on the users' locations does not reliably relate to location privacy, and argue that this technique may even be detrimental to users' location privacy. The uncovered flaws imply that existing k-anonymity scheme is a tattered cloak for protecting location privacy."
]
} |
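The uniqueness measurement in the abstracts above — how often a handful of random spatio-temporal points singles a user out of the whole dataset — can be sketched on toy traces. The traces, point counts, and trial count below are illustrative assumptions, not the papers' data.

```python
import random

def is_unique(traces, user, points):
    """True if `user` is the only one whose trace contains all sampled points."""
    return [u for u, t in traces.items() if points <= t] == [user]

def uniqueness(traces, k, trials=200, seed=0):
    """Fraction of sampled users uniquely identified by k random trace points."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        user = rng.choice(list(traces))
        points = set(rng.sample(sorted(traces[user]), k))
        hits += is_unique(traces, user, points)
    return hits / trials

# toy traces: sets of (antenna cell, hour) spatio-temporal points
traces = {
    "u1": {("a", 1), ("b", 2), ("c", 3)},
    "u2": {("a", 1), ("b", 2), ("d", 4)},
    "u3": {("e", 5), ("f", 6), ("g", 7)},
}
```

A point shared by several users (e.g., a popular cell at rush hour) identifies nobody, while one distinctive point suffices for an unusual trace — the intuition behind both the uniqueness results and the membership inference features above.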